Srikanth V. Krishnamurthy

University of California, Riverside, Riverside, California, United States


Publications (177) · 73.22 Total Impact Points

  • ABSTRACT: Video transfers using smartphones are becoming increasingly popular. To prevent the interception of content by eavesdroppers, video flows must be encrypted. However, encryption incurs a cost in terms of processing delay and energy consumed on the user's device. We argue that encrypting only certain parts of the flow can create sufficiently high distortion at an eavesdropper, thereby preserving content confidentiality. Selective encryption thus reduces delay and battery consumption on the mobile device. We develop a mathematical framework that captures the impact of the encryption process on the delay experienced by a flow and the distortion seen by an eavesdropper. This provides a quick and efficient way of determining which parts of a video flow must be encrypted to preserve confidentiality while limiting performance penalties. In practice, it can aid a user in choosing the right level of encryption. We validate our model via extensive experiments with different encryption policies on Android smartphones. We observe that by selectively encrypting parts of a video flow, one can preserve confidentiality while reducing delay by as much as 75% and energy consumption by as much as 92%.
    Proceedings of the ninth ACM conference on Emerging networking experiments and technologies; 12/2013
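    The core idea above lends itself to a toy model. The sketch below (invented frame sizes and GOP structure, not the paper's framework) encrypts only the reference I-frames, since predicted frames are largely undecodable without them, and reports how little of the flow must pass through the cipher:

    ```python
    # Toy illustration of selective video encryption: protect only I-frames,
    # since P/B frames are largely undecodable without their reference
    # frames. Frame sizes and GOP structure are invented placeholders.
    from dataclasses import dataclass

    @dataclass
    class Frame:
        ftype: str   # 'I', 'P', or 'B'
        size: int    # bytes

    def selective_encrypt(gop, policy=("I",)):
        """Return (encrypted bytes, total bytes) under a frame-type policy."""
        enc = sum(f.size for f in gop if f.ftype in policy)
        return enc, sum(f.size for f in gop)

    gop = [Frame("I", 40_000)] + [Frame("P", 12_000), Frame("B", 6_000)] * 4
    enc, tot = selective_encrypt(gop)
    print(f"cipher sees {enc}/{tot} bytes ({100 * enc / tot:.0f}% of the flow)")
    ```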
  • ABSTRACT: Cloud-based radio access networks (C-RAN) have been proposed as a cost-efficient way of deploying small cells. Unlike conventional RANs, a C-RAN decouples the baseband processing unit (BBU) from the remote radio head (RRH), allowing for centralized operation of BBUs and scalable deployment of light-weight RRHs as small cells. In this work, we argue that the intelligent configuration of the front-haul network between the BBUs and RRHs is essential in delivering the performance and energy benefits to the RAN and the BBU pool, respectively. We then propose FluidNet, a scalable, light-weight framework for realizing the full potential of C-RAN. FluidNet deploys a logically re-configurable front-haul to apply appropriate transmission strategies in different parts of the network and hence caters effectively to both heterogeneous user profiles and dynamic traffic load patterns. FluidNet's algorithms determine configurations that maximize the traffic demand satisfied on the RAN while simultaneously optimizing the compute resource usage in the BBU pool. We prototype FluidNet on a six-BBU, six-RRH WiMAX C-RAN testbed. Prototype evaluations and large-scale simulations reveal that FluidNet's ability to re-configure its front-haul and tailor transmission strategies provides a 50% improvement in satisfying traffic demands, while reducing the compute resource usage in the BBU pool by 50% compared to baseline transmission schemes.
    Proceedings of the 19th annual international conference on Mobile computing & networking; 09/2013
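    As a loose illustration of the kind of decision a re-configurable front-haul enables, the sketch below picks, per sector, between one logical cell (all RRHs transmit together, less BBU compute) and independent small cells (more capacity, more compute). The thresholds, field names, and rule are assumptions for illustration, not FluidNet's algorithm:

    ```python
    # Hypothetical per-sector front-haul configuration chooser, inspired by
    # (but not taken from) FluidNet: combine RRHs into one logical cell when
    # load is light or users are mobile; split into independent small cells
    # when traffic is heavy and static. Thresholds are arbitrary placeholders.
    def choose_configuration(sector):
        if sector["demand_mbps"] < 50 or sector["mobile_fraction"] > 0.5:
            return "DAS"          # all RRHs send one signal, one BBU suffices
        return "small-cells"      # each RRH is its own cell, more BBU compute

    sectors = [
        {"id": 1, "demand_mbps": 30,  "mobile_fraction": 0.7},
        {"id": 2, "demand_mbps": 120, "mobile_fraction": 0.1},
    ]
    for s in sectors:
        print(s["id"], choose_configuration(s))
    ```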
  • ABSTRACT: To meet the capacity demands from ever-increasing mobile data usage, mobile network operators are moving toward smaller cell structures. These small cells, called femtocells, use sophisticated air interface technologies such as orthogonal frequency division multiple access (OFDMA). While femtocells are expected to provide numerous benefits such as energy efficiency and better throughput, the interference resulting from their dense deployments prevents such benefits from being harnessed in practice. Thus, there is an evident need for a resource management solution to mitigate the interference that occurs between collocated femtocells. In this paper, we design and implement one of the first resource management systems, FERMI, for OFDMA-based femtocell networks. As part of its design, FERMI: 1) provides resource isolation in the frequency domain (as opposed to time) to leverage power pooling across cells to improve capacity; 2) uses measurement-driven triggers to intelligently distinguish clients that require just link adaptation from those that require resource isolation; 3) incorporates mechanisms that enable the joint scheduling of both types of clients in the same frame; and 4) employs efficient, scalable algorithms to determine a fair resource allocation across the entire network with high utilization and low overhead. We implement FERMI on a prototype four-cell WiMAX femtocell testbed and show that it yields significant gains over conventional approaches.
    IEEE/ACM Transactions on Networking 01/2013; 21(5):1447-1460. · 2.01 Impact Factor
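    To illustrate frequency-domain isolation, here is a toy allocator that hands mutually interfering cells disjoint sub-channel zones (clients needing only link adaptation would instead reuse the whole band); the even split is a placeholder for FERMI's fair-allocation algorithms:

    ```python
    # Toy frequency-domain isolation in the spirit of FERMI: mutually
    # interfering cells get disjoint sub-channel zones for their "isolation"
    # clients. The even split below stands in for the paper's fair,
    # utilization-aware allocation.
    NUM_SUBCHANNELS = 30

    def allocate(interfering_cells):
        """Evenly split the band among mutually interfering cells."""
        share = NUM_SUBCHANNELS // len(interfering_cells)
        return {cell: range(i * share, (i + 1) * share)
                for i, cell in enumerate(interfering_cells)}

    zones = allocate(["femto-A", "femto-B", "femto-C"])
    for cell, subch in zones.items():
        print(cell, list(subch))   # each cell gets a disjoint zone
    ```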
  • ABSTRACT: The wide channels feature combines two adjacent channels to form a new, wider channel to facilitate high-data-rate transmissions in multiple-input-multiple-output (MIMO)-based IEEE 802.11n networks. Using a wider channel can exacerbate interference effects. Furthermore, contrary to what has been reported by prior studies, we find that wide channels do not always provide benefits in isolation (i.e., one link without interference) and can even degrade performance. We conduct an in-depth, experimental study to understand the implications of wide channels on throughput performance. Based on our measurements, we design an auto-configuration framework called ACORN for enterprise 802.11n WLANs. ACORN integrates the functions of user association and channel allocation, since our study reveals that they are tightly coupled when wide channels are used. We show that the channel allocation problem with the constraints of wide channels is NP-complete. Thus, ACORN uses an algorithm that provides a worst-case approximation ratio of O(1/Δ + 1), with Δ being the maximum node degree in the network. We implement ACORN on our 802.11n testbed. Our evaluations show that ACORN: 1) outperforms previous approaches that are agnostic to wide channel constraints, providing per-AP throughput gains ranging from 1.5× to 6×; and 2) in practice, its channel allocation module achieves an approximation ratio much better than the theoretically predicted O(1/Δ + 1).
    IEEE/ACM Transactions on Networking 01/2013; 21(3):896-909. · 2.01 Impact Factor
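    The flavor of wide-channel-aware allocation can be sketched with a greedy assignment that scores each candidate channel by its overlap with already-assigned conflicting neighbors. Channel numbers and the conflict model are illustrative; ACORN's actual algorithm is what carries the approximation guarantee:

    ```python
    # Greedy channel assignment for APs where a "wide" channel occupies two
    # adjacent 20 MHz channels. A sketch of the problem's flavor only.
    CHANNELS = [36, 40, 44, 48]

    def occupied(ch, wide):
        return {ch, ch + 4} if wide else {ch}

    def assign(aps, conflicts):
        """aps: list of (name, wants_wide); conflicts: name -> neighbor list."""
        chosen = {}
        for name, wide in aps:
            def overlap(ch):
                busy = set()
                for nb in conflicts.get(name, []):
                    if nb in chosen:
                        busy |= occupied(*chosen[nb])
                return len(occupied(ch, wide) & busy)
            best = min((c for c in CHANNELS if not wide or c + 4 in CHANNELS),
                       key=overlap)
            chosen[name] = (best, wide)
        return chosen

    print(assign([("AP1", True), ("AP2", True), ("AP3", False)],
                 {"AP2": ["AP1"], "AP3": ["AP1", "AP2"]}))
    ```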
  • ABSTRACT: The end-user experience in viewing a video depends on the distortion; however, also of importance is the delay experienced by the packets of the video flow, since it impacts the timeliness of the information contained and the playback rate at the receiver. Unfortunately, these performance metrics are in conflict with each other in a wireless network. Packet losses can be minimized by perfectly avoiding interference, i.e., by separating transmissions in time or frequency; however, this decreases the rate at which transmissions occur, which in turn increases delay. Relaxing the requirement for interference avoidance can lead to packet losses and thus increase distortion, but can decrease the delay for those packets that are delivered. In this paper, we investigate this trade-off between distortion and delay for video. To understand the trade-off between video quality and packet delay, we develop an analytical framework that accounts for characteristics of the network (e.g., interference, channel variations) and the video content (motion level), assuming as a basis a simple channel access policy that provides flexibility in managing the interference in the network. We validate our model via extensive simulations. Surprisingly, we find that the trade-off depends on the specific features of the video flow: it is better to trade off high delay for low distortion with fast-motion video, but not with slow-motion video. Specifically, for an increase in PSNR (a metric that quantifies reconstruction quality; higher PSNR means lower distortion) from 20 to 25 dB, the penalty in terms of the increase in mean delay with fast-motion video is 91 times that with slow-motion video. Our simulation results further quantify the trade-offs in various scenarios.
    INFOCOM, 2013 Proceedings IEEE; 01/2013
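    For reference, PSNR relates to mean squared error as PSNR = 10 log10(MAX^2 / MSE). A quick helper (assuming 8-bit samples) shows that the 20 to 25 dB step above corresponds to roughly a 3.2x reduction in MSE:

    ```python
    import math

    def psnr(mse, max_pixel=255.0):
        """Peak signal-to-noise ratio in dB for a given mean squared error."""
        if mse == 0:
            return float("inf")
        return 10.0 * math.log10(max_pixel ** 2 / mse)

    # Moving from 20 dB to 25 dB PSNR is roughly a 3.2x drop in MSE:
    for db in (20, 25):
        print(db, "dB  ->  MSE =", round(255.0 ** 2 / 10 ** (db / 10), 1))
    ```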
  • Jianxia Ning, K. Pelechrinis, S.V. Krishnamurthy, R. Govindan
    ABSTRACT: Transmission Evidence (TE for short) refers to a historic trail of the packet transmissions in the network. TE is collected and maintained in a distributed manner by the nodes in the network and can be queried on demand by a network forensics system to trace past events; this can facilitate crucial applications such as identifying malicious or malfunctioning nodes. Recently, we developed an analytical framework for computing the likelihood of TE availability in wireless networks. Our prior efforts [1] brought to light the impact of the network's operational parameters (such as transmission rate and packet length) on the availability of TE. However, provisioning for TE could impact the network performance in terms of throughput and/or delay. Our objective in this work is to capture and quantify the trade-offs between provisioning transmission evidence and achieving high performance in wireless networks. In particular, we investigate the network performance hit under the constraint of TE availability guarantees. Our results indicate that the performance remains unaffected up to a certain TE requirement; beyond this, the throughput could degrade and the delay could increase by as much as 30%. To the best of our knowledge, this is the first study of its kind.
    Communications (ICC), 2013 IEEE International Conference on; 01/2013
  • ABSTRACT: Network coding has been shown to offer significant throughput benefits over store-and-forward routing in certain wireless network topologies. However, the application of network coding may not always improve the network performance. In this paper, we provide a comprehensive analytical study, which helps in assessing when network coding is preferable to a traditional store-and-forward approach. Interestingly, our study reveals that in many topological scenarios, network coding can in fact hurt the throughput performance; in such scenarios, applying the store-and-forward approach leads to higher network throughput. We validate our analytical findings via extensive testbed experiments, and we extract guidelines on when network coding should be applied instead of store-and-forward.
    INFOCOM, 2013 Proceedings IEEE; 01/2013
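    The canonical example of such coding gains is the two-way relay topology, where the relay XORs two packets and broadcasts once instead of forwarding twice. A toy demonstration (not the paper's analytical model):

    ```python
    # Two-way relay network coding in miniature: the relay XORs Alice's and
    # Bob's packets and broadcasts the result once; each endpoint recovers
    # the other's packet using its own as the key. Payloads are placeholders.
    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    pkt_alice = b"hello bob "
    pkt_bob   = b"hi alice!!"

    coded = xor(pkt_alice, pkt_bob)          # one broadcast instead of two
    assert xor(coded, pkt_alice) == pkt_bob  # Alice decodes Bob's packet
    assert xor(coded, pkt_bob) == pkt_alice  # Bob decodes Alice's packet
    print("3 transmissions instead of 4: coding saves one slot")
    ```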
  • Tae-Suk Kim, Yong Yang, J.C. Hou, S.V. Krishnamurthy
    ABSTRACT: Many next-generation applications (such as video flows) are likely to have associated minimum data rate requirements in order to ensure satisfactory quality as perceived by end-users. In this paper, we develop a framework to address the problem of maximizing the aggregate utility of traffic flows in a multi-hop wireless network, with constraints imposed by both self-interference and minimum rate requirements. The parameters that are tuned in order to maximize the utility are (i) the transmission powers of individual nodes and (ii) the channels assigned to the different communication links. Our framework is based on a cross-decomposition technique that takes both inter-flow interference and self-interference into account. The output of our framework is a schedule that dictates which links are to be activated in each slot and the parameters associated with each of those links. If the minimum rate constraint cannot be satisfied for all of the flows, the framework intelligently rejects a subset of the flows and recomputes a schedule for the remaining flows. We also design an admission control module that determines if new flows can be admitted without violating the rate requirements of the existing flows in the network. We provide numerical results to demonstrate the efficacy of our framework.
    IEEE Transactions on Wireless Communications 01/2013; 12(5):2046-2054. · 2.42 Impact Factor
  • Tae-Suk Kim, Gentian Jakllari, Srikanth V. Krishnamurthy, Michalis Faloutsos
    ABSTRACT: In this paper, we propose a new integrated framework for joint routing and rate adaptation in multi-rate multi-hop wireless networks. Unlike many previous efforts, our framework considers several factors that affect end-to-end performance. Among these factors, the framework takes into account the effect of the relative positions of the links on a path when choosing the rates of operation, and the importance of avoiding congested areas. The key element of our framework is a new comprehensive path metric that we call ETM (for expected transmission cost in multi-rate wireless networks). We analytically derive the ETM metric. We show that the ETM metric can be used to determine the best end-to-end path with a greedy routing approach. We also show that the metric can be used to dynamically select the best transmission rate for each link on the path via a dynamic programming approach. We implement the ETM framework on an indoor wireless mesh network and compare its performance with that of frameworks based on the popular ETT and the recently proposed ETOP metrics. Our experiments demonstrate that the ETM framework can yield throughput improvements of up to 253% and 368% as compared with the ETT and ETOP frameworks, respectively.
    Wireless Networks 01/2013; 19(5). · 0.74 Impact Factor
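    A stand-in for the per-link rate choice (not the actual ETM formula, which also couples links through their position on the path): pick the rate minimizing expected airtime per delivered packet, with invented delivery probabilities:

    ```python
    # Stand-in for ETM-style rate selection: per link, choose the bit-rate
    # minimizing packet_time / delivery_probability (expected airtime per
    # delivered packet). Probabilities per rate are invented; the real ETM
    # metric additionally accounts for a link's position on the path.
    PACKET_BITS = 12_000

    def best_rate(delivery_prob_by_rate):
        def cost(rate_mbps):
            p = delivery_prob_by_rate[rate_mbps]
            return (PACKET_BITS / (rate_mbps * 1e6)) / p  # secs per delivery
        return min(delivery_prob_by_rate, key=cost)

    link1 = {6: 0.99, 24: 0.90, 54: 0.50}   # Mbps -> delivery probability
    link2 = {6: 0.99, 24: 0.65, 54: 0.10}
    for i, link in enumerate([link1, link2], 1):
        print(f"link {i}: transmit at {best_rate(link)} Mbps")
    ```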
  • ABSTRACT: Although today's online social networks provide some privacy controls to protect a user's shared content from other users, these controls aren't sufficiently expressive to provide fine-grained protection. Twitsper offers fine-grained control over who sees a Twitter user's messages, enabling private group communication while preserving Twitter's commercial interests.
    IEEE Security and Privacy Magazine 01/2013; 11(3):46-50. · 0.96 Impact Factor
  • ABSTRACT: Every night, a large number of idle smartphones are plugged into a power source for recharging the battery. Given the increasing computing capabilities of smartphones, these idle phones constitute a sizeable computing infrastructure. Therefore, for an enterprise which supplies its employees with smartphones, we argue that a computing infrastructure that leverages idle smartphones being charged overnight is an energy-efficient and cost-effective alternative to running tasks on traditional server infrastructure. While parallel execution and scheduling models exist for servers (e.g., MapReduce), smartphones present a unique set of technical challenges due to the heterogeneity in CPU clock speed, variability in network bandwidth, and lower availability compared to servers. In this paper, we address many of these challenges to develop CWC, a distributed computing infrastructure using smartphones. Specifically, our contributions are: (i) we profile the charging behaviors of real phone owners to show the viability of our approach, (ii) we enable programmers to execute parallelizable tasks on smartphones with little effort, (iii) we develop a simple task migration model to resume interrupted task executions, and (iv) we implement and evaluate a prototype of CWC (with 18 Android smartphones) that employs an underlying novel scheduling algorithm to minimize the makespan of a set of tasks. Our extensive evaluations demonstrate that the performance of our approach makes our vision viable. Further, we explicitly evaluate the performance of CWC's scheduling component to demonstrate its efficacy compared to other possible approaches.
    Proceedings of the 8th international conference on Emerging networking experiments and technologies; 12/2012
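    CWC's own scheduler is described in the paper; as a flavor of makespan minimization on heterogeneous devices, here is a classic largest-task-first greedy with invented task sizes and per-phone speed factors:

    ```python
    # Greedy makespan scheduling on heterogeneous phones, in the spirit of
    # (but not identical to) CWC's scheduler: take tasks largest-first and
    # give each to the phone that would finish it earliest. Task sizes and
    # speed factors are illustrative placeholders.
    def schedule(task_sizes, speeds):
        finish = [0.0] * len(speeds)          # per-phone completion time
        plan = [[] for _ in speeds]
        for size in sorted(task_sizes, reverse=True):
            phone = min(range(len(speeds)),
                        key=lambda i: finish[i] + size / speeds[i])
            finish[phone] += size / speeds[phone]
            plan[phone].append(size)
        return finish, plan

    finish, plan = schedule([9, 7, 5, 4, 3, 2], speeds=[1.0, 1.5, 0.5])
    print("makespan =", round(max(finish), 2), "assignment =", plan)
    ```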
  • ABSTRACT: User privacy has been a growing concern in online social networks (OSNs). While most OSNs today provide some form of privacy controls so that their users can protect their shared content from other users, these controls are typically not sufficiently expressive and/or do not provide fine-grained protection of information. In this paper, we consider the introduction of a new privacy control: group messaging on Twitter, with users having fine-grained control over who can see their messages. Specifically, we demonstrate that such a privacy control can be offered to users of Twitter today without having to wait for Twitter to make changes to its system. We do so by designing and implementing Twitsper, a wrapper around Twitter that enables private group communication among existing Twitter users while preserving Twitter's commercial interests. Our design preserves the privacy of group information (i.e., who communicates with whom) both from the Twitsper server as well as from undesired Twitsper users. Furthermore, our evaluation shows that our implementation of Twitsper imposes minimal server-side bandwidth requirements and incurs low client-side energy consumption. Our Twitsper client for Android-based devices has been downloaded by over 1000 users and its utility has been noted by several media articles.
    Proceedings of the 28th Annual Computer Security Applications Conference; 12/2012
  • ABSTRACT: Visible light communications (VLC) are gaining popularity and may provide an alternative means of communication in indoor settings. However, to date, there is very little research on the deployment or higher-layer protocol design for VLC. In this paper, we first perform channel measurements using a physical layer testbed in the visible light band to understand its physical layer characteristics. Our measurements suggest that in order to increase data rates with VLC, (1) the beamwidth of a communicating link can be shrunk, and (2) the transmission beam can be tuned to point towards the target recipient. We then perform MATLAB simulations to verify that the human eye can accommodate the changes brought by shrinking a beam or by tuning the beam direction appropriately. As our main contribution, we then design a configuration framework for a VLC indoor local area network, which we call VICO; we leverage the above features towards achieving the highest throughput while maintaining fairness. VICO first tunes the beamwidths and pointing angles of the transmitters to configurations that provide the highest throughput for each client. It then tries to schedule transmissions while accounting for conflicts and the VLC PHY characteristics. Finally, it opportunistically tunes the idle LEDs to reinforce existing transmissions and increase throughput to the extent possible. We perform extensive simulations to demonstrate the effectiveness of VICO. We find that VICO provides as much as a 5-fold increase in throughput compared to a simple scheduler that does not exploit the possible variations in beamwidth or beam angle.
    Proceedings of the 2012 IEEE 9th International Conference on Mobile Ad-Hoc and Sensor Systems (MASS); 10/2012
  • ABSTRACT: Nodes that are part of a multihop wireless network, typically deployed in mission-critical settings, are expected to perform specific functions. Establishing a notion of reliability of the nodes with respect to each function (referred to as functional reliability or FR) is essential for efficient operations and management of the network. This is typically assessed based on evidence collected by nodes with regards to other nodes in the network. However, such evidence is often affected by factors such as channel-induced effects and interference. In multihop contexts, unreliable intermediary relays may also influence evidence. We design a framework for collaborative assessment of the FR of nodes with respect to different types of functions; our framework accounts for the above factors that influence evidence collection. Each node (say Chloe) in the network derives the FR of other nodes (say Jack) based on two types of evidence: (i) direct evidence, based on her direct transactions with each such node, and (ii) indirect evidence, based on feedback received regarding Jack from others. Our framework is generic and is applicable in a variety of contexts. We also design a module that drastically reduces the overhead incurred in the propagation of indirect evidence at the expense of slightly increased uncertainty in the assessed FR values. We implement our framework on an indoor/outdoor wireless testbed. We show that with our framework, each node is able to determine the FR for every other node in the network with high accuracy. Our indirect evidence propagation module decreases the overhead by 37% compared to simple flooding-based evidence propagation, while the accuracy of the FR computations decreases by only 8%. Finally, we examine the effect of different routing protocols on the accuracy of the assessed values.
    Proceedings of the 2012 IEEE 9th International Conference on Mobile Ad-Hoc and Sensor Systems (MASS); 10/2012
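    One standard way to fuse direct and second-hand observations into a reliability value is a beta-reputation estimate, shown below purely for illustration; the discount factor is an assumption, and the paper's framework additionally models channel-induced corruption of evidence:

    ```python
    # Beta-reputation style functional-reliability estimate: direct
    # observations count fully, indirect (second-hand) reports are
    # discounted. The 0.5 discount is an arbitrary assumption, not the
    # paper's estimator.
    def functional_reliability(direct_ok, direct_fail,
                               indirect_ok, indirect_fail, discount=0.5):
        ok = direct_ok + discount * indirect_ok
        fail = direct_fail + discount * indirect_fail
        return (ok + 1) / (ok + fail + 2)   # mean of Beta(ok + 1, fail + 1)

    # Chloe's view of Jack: 8/10 direct successes plus mixed gossip.
    print(round(functional_reliability(8, 2, 12, 8), 3))
    ```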
  • ABSTRACT: Network coding has been proposed as a technique that can potentially increase the transport capacity of a wireless network via mixing data packets at intermediate routers. However, most previous studies either assume a fixed transmission rate or do not consider the impact of using diverse rates on the network coding gain. Since network coding in many cases implicitly relies on overhearing, the choice of the transmission rate has a big impact on the achievable gains. The use of higher rates works in favor of increasing the native throughput; however, it may in many cases work against effective overhearing. In other words, there is a tension between the achievable network coding gain and the inherent rate gain possible on a link. In this paper, our goal is to drive the network toward achieving the best tradeoff between these two contradictory effects. We design a distributed framework that (i) facilitates the choice of the best rate on each link while considering the need for overhearing, and (ii) dictates which decoding recipient will acknowledge the reception of an encoded packet. We demonstrate that both of these features contribute significantly toward gains in throughput. We extensively simulate our framework in a variety of topological settings. We also fully implement it on real hardware and demonstrate its applicability and performance gains via proof-of-concept experiments on our wireless testbed. We show that our framework yields throughput gains of up to 390% as compared to what is achieved in a rate-unaware network coding framework.
    IEEE/ACM Transactions on Networking 08/2012; · 2.01 Impact Factor
  • Yihua He, M. Faloutsos, S.V. Krishnamurthy, M. Chrobak
    ABSTRACT: What topologies should be used to evaluate protocols for interdomain routing? Using the most current Internet topology is not practical since its size is prohibitive for detailed, packet-level interdomain simulations. Besides being of moderate size, the topology should be policy-aware, that is, it needs to represent business relationships between adjacent nodes (that represent autonomous systems). In this paper, we address this issue by providing a framework to generate small, realistic, and policy-aware topologies. We propose HBR, a novel sampling method, which exploits the inherent hierarchy of the policy-aware Internet topology. We formally prove that our approach generates connected and legitimate topologies, which are compatible with the policy-based routing conventions and rules. Using simulations, we show that HBR generates topologies that: 1) maintain the graph properties of the real topology; 2) provide reasonably realistic interdomain simulation results while reducing the computational complexity by several orders of magnitude as compared to the initial topology. Our approach provides a permanent solution to the problem of interdomain routing evaluations: Given a more accurate and complete topology, HBR can generate better small topologies in the future.
    IEEE/ACM Transactions on Networking 03/2012; · 2.01 Impact Factor
  • ABSTRACT: In today's OFDMA networks, the transmission power is typically fixed and the same for all the sub-carriers that compose a channel. The sub-carriers, though, experience different degrees of fading, and thus the received power differs across sub-carriers; while some frequencies experience deep fades, others are relatively unaffected. In this paper, we make a case for redistributing the power across the sub-carriers (subject to a fixed power budget constraint) to better cope with this frequency selectivity. Specifically, we design a joint power and rate adaptation scheme (called JPRA for short) wherein power redistribution is combined with sub-carrier level rate adaptation to yield significant throughput benefits. We further consider two variants of JPRA: (a) JPRA-CR, where the power is redistributed across sub-carriers so as to support a maximum common rate (CR) across sub-carriers, and (b) JPRA-MT, where the goal is to redistribute power such that the transmission time of a packet is minimized. While the first variant decreases transceiver complexity and is simpler, the second is geared toward achieving the maximum throughput possible. We implement both variants of JPRA on our WARP radio testbed. Our extensive experiments demonstrate that our scheme provides a 35% improvement in total network throughput compared to FARA, a scheme where only sub-carrier level rate adaptation is used. We also perform simulations to demonstrate the efficacy of JPRA in larger scale networks.
    01/2012;
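    The textbook benchmark for redistributing a fixed power budget across sub-carriers with unequal gains is water-filling, sketched below for background; JPRA's variants optimize different objectives (a common rate, or minimum transmission time) rather than this sum-rate objective:

    ```python
    # Classical water-filling across sub-carriers: pour a fixed power budget
    # so that (power_i + 1/gain_i) is level wherever power_i > 0. Gains are
    # illustrative placeholders.
    def water_fill(gains, budget, iters=60):
        lo, hi = 0.0, budget + max(1.0 / g for g in gains)
        for _ in range(iters):              # bisect on the water level
            level = (lo + hi) / 2
            used = sum(max(level - 1.0 / g, 0.0) for g in gains)
            if used > budget:
                hi = level
            else:
                lo = level
        return [max(lo - 1.0 / g, 0.0) for g in gains]

    powers = water_fill(gains=[2.0, 1.0, 0.25, 0.05], budget=4.0)
    print([round(p, 2) for p in powers])  # deep-faded sub-carriers get little
    ```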
  • ABSTRACT: Recently, tuning the clear channel assessment (CCA) threshold in conjunction with power control has been considered for improving the performance of WLANs. However, we show that CCA tuning can be exploited by selfish nodes to obtain an unfair share of the available bandwidth. Specifically, a selfish entity can manipulate the CCA threshold to ignore ongoing transmissions; this increases its probability of accessing the medium and provides the entity a higher, unfair share of the bandwidth. We experiment on our 802.11 testbed to characterize the effects of CCA tuning on both isolated links and 802.11 WLAN configurations. We focus on AP-client(s) configurations and propose a novel approach to detect this misbehavior. A misbehaving client is unlikely to recognize low-power receptions as legitimate packets; by intelligently sending low-power probe messages, an AP can efficiently detect a misbehaving node. Our key contributions are: 1) we are the first to quantify the impact of selfish CCA tuning via extensive experimentation on various 802.11 configurations; 2) we propose a lightweight scheme for detecting selfish nodes that inappropriately increase their CCA thresholds; and 3) we extensively evaluate our system on our testbed; its accuracy is 95 percent while the false positive rate is less than 5 percent.
    IEEE Transactions on Mobile Computing 01/2012; 11:1086-1101. · 2.40 Impact Factor
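    The detection logic lends itself to a short sketch; probe counts, response probabilities, and the flagging threshold below are invented for illustration:

    ```python
    import random

    # Sketch of probe-based detection: an AP sends low-power probes; a
    # client whose CCA threshold was selfishly raised fails to decode most
    # of them, so a low response ratio flags it. All numbers are invented.
    def probe_client(respond_prob, n_probes=50):
        return sum(random.random() < respond_prob for _ in range(n_probes))

    def is_misbehaving(responses, n_probes=50, min_ratio=0.5):
        return responses / n_probes < min_ratio

    random.seed(1)
    honest = probe_client(0.9)    # decodes low-power probes
    selfish = probe_client(0.1)   # raised CCA: probes fall below threshold
    print("honest flagged: ", is_misbehaving(honest))
    print("selfish flagged:", is_misbehaving(selfish))
    ```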
  • ABSTRACT: The increase in mobile data usage is pushing broadband operators towards deploying smaller cells (femtocells) and sophisticated access technologies such as OFDMA. The expected high density of deployment and uncoordinated operation of femtocells, however, make interference management both critical and extremely challenging. Femtocells have to use the same access technology as traditional macrocells. Given this, understanding the impact of system design choices (originally tailored to well-planned macrocells) on interference management forms an essential first step towards designing efficient solutions for next-generation femtocells. This, in turn, is the focus of our work. With extensive measurements from our WiMAX OFDMA femtocell testbed, we characterize the impact of various system design choices on interference. Based on the insights from our measurements, we discuss several implications for how to efficiently operate a femtocell network.
    Proceedings - IEEE INFOCOM 01/2012;
  • ABSTRACT: Traditional routing metrics designed for wireless networks are application agnostic. In this paper, we consider a wireless network where the application flows consist of video traffic. From a user perspective, reducing the level of video distortion is critical. We ask the question "Should routing policies change if end-to-end video distortion is to be minimized?" Popular link-quality-based routing metrics (such as ETX) do not account for dependence (in terms of congestion) across the links of a path; as a result, they can cause video flows to converge onto a few paths and thus cause high video distortion. To account for the evolution of the video frame loss process, we construct an analytical framework to first understand and then assess the impact of the wireless network on video distortion. The framework allows us to formulate a routing policy for minimizing distortion, based on which we design a protocol for routing video traffic. We find via simulations and testbed experiments that our protocol is efficient in reducing video distortion and minimizing the degradation of user experience. Specifically, our protocol reduces distortion by 20% over traditional methods, which significantly improves the video quality perceived by a user.
    Network Protocols (ICNP), 2012 20th IEEE International Conference on; 01/2012
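    Why congestion-aware distortion costs spread flows out can be seen in a toy path chooser, where each link's hypothetical distortion contribution inflates with the number of flows already using it; the numbers and the linear congestion model are ours, not the protocol's:

    ```python
    # Toy distortion-aware path choice: a link's base distortion contribution
    # grows with the flows already on it, so later flows avoid converging
    # onto the few "best" links. All values are illustrative assumptions.
    def path_cost(path, base, load):
        return sum(base[l] * (1 + load.get(l, 0)) for l in path)

    base = {"a": 1.0, "b": 1.0, "c": 1.4, "d": 1.4}   # per-link distortion
    load = {}
    for flow in range(3):
        paths = [("a", "b"), ("c", "d")]
        best = min(paths, key=lambda p: path_cost(p, base, load))
        for l in best:
            load[l] = load.get(l, 0) + 1
        print(f"flow {flow} -> path {best}")
    ```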

Publication Stats

2k Citations
73.22 Total Impact Points

Institutions

  • 2002–2013
    • University of California, Riverside
      • Department of Computer Science and Engineering
      Riverside, California, United States
    • Loyola University Maryland
      Baltimore, Maryland, United States
  • 2010
    • University of Thessaly
      Volos, Thessaly, Greece
    • University of Pittsburgh
      Pittsburgh, Pennsylvania, United States
    • Pennsylvania State University
      • Department of Computer Science and Engineering
      University Park, Pennsylvania, United States
  • 2008
    • University of Ottawa
      Ottawa, Ontario, Canada
  • 2004–2007
    • CSU Mentor
      Long Beach, California, United States
  • 1997–2001
    • University of California, San Diego
      • Department of Electrical and Computer Engineering
      San Diego, California, United States
  • 2000
    • University of Maryland, College Park
      • Department of Electrical & Computer Engineering
      College Park, Maryland, United States
    • University of Illinois, Urbana-Champaign
      • Coordinated Science Laboratory
      Urbana, Illinois, United States
  • 1998–2000
    • HRL Laboratories, LLC
      Malibu, California, United States