Arogyaswami Paulraj’s research while affiliated with Stanford University and other places


Publications (297)


An approach to physical layer security in MIMO wireless via vector perturbation precoding
  • Article

January 2020 · 26 Reads · Communications in Information and Systems

Arogyaswami J. Paulraj

Fig. 1: An illustrative example of a distributed wireless network supporting fog computing.
Distributed Online Optimization of Fog Computing for Selfish Devices With Out-of-Date Information
  • Article
  • Full-text available

September 2018 · 218 Reads · 46 Citations · IEEE Transactions on Wireless Communications

Hui Tian · [...] · Arogyaswami Paulraj

By performing fog computing, a device can offload delay-tolerant, computationally demanding tasks to its peers for processing, and the results can be returned and aggregated. In distributed wireless networks, the challenges of fog computing include the lack of central coordination, the selfish behavior of devices, and multi-hop signaling delays, which can result in outdated network knowledge and prevent effective cooperation beyond one hop. This paper presents a new approach to enabling cooperation among N selfish devices over multiple hops, where selfish behavior is discouraged by a tit-for-tat mechanism. The tit-for-tat incentive of a device is designed to be the gap between the help (in terms of energy) the device has received and the help it has offered, and it indicates how much help the device can offer in the next time slot. The tit-for-tat incentives can be evaluated at every device by having all devices broadcast how much help they offered in the past time slot, and they are used by all devices to schedule task offloading and processing. The approach achieves asymptotic optimality in a fully distributed fashion with a time complexity of less than $O(N^2)$. The optimality loss resulting from multi-hop signaling delays, and consequently outdated tit-for-tat incentives, is proved to diminish asymptotically. Simulations show that our approach reduces the time-average energy consumption of the state of the art by 50% and accommodates more tasks by engaging devices several hops away under multi-hop delays.
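The tit-for-tat bookkeeping described in the abstract is simple enough to illustrate. The Python sketch below is an assumed illustration only (the class layout, the broadcast helper, and the initial-credit rule are inventions for exposition, not the paper's implementation): each device tracks the gap between the energy-help it has received and offered, and that gap caps what it may offer in the next slot.

```python
# Hypothetical sketch of the tit-for-tat incentive bookkeeping described
# above; names and the initial-credit rule are assumptions.

class Device:
    def __init__(self, name):
        self.name = name
        self.received = 0.0  # cumulative energy-help received from peers
        self.offered = 0.0   # cumulative energy-help offered to peers

    @property
    def incentive(self):
        # Tit-for-tat incentive: gap between help received and offered.
        return self.received - self.offered

    def offerable(self, initial_credit=1.0):
        # Cap on the help this device may offer next slot; a small initial
        # credit bootstraps cooperation (assumed, not from the paper).
        return max(0.0, self.incentive + initial_credit)

def end_of_slot_broadcast(devices):
    """Each device broadcasts how much help it offered this slot, so every
    device can recompute all incentives locally, without a coordinator."""
    return {d.name: d.offered for d in devices}

a, b = Device("a"), Device("b")
a.offered += 0.5   # a processed a task for b, spending 0.5 J
b.received += 0.5
print(end_of_slot_broadcast([a, b]), a.offerable(), b.offerable())
```

Without some initial credit no device could ever offer help first, which is why the sketch includes one; how the paper bootstraps cooperation is not stated in the abstract.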




Cache-Assisted Broadcast-Relay Wireless Networks: A Delivery-Time Cache-Memory Tradeoff

March 2018 · 134 Reads · 7 Citations · IEEE Access

An emerging trend of next-generation communication systems is to provide network edges with additional capabilities, such as storage resources in the form of caches, to reduce file delivery latency. To investigate this aspect, we study the fundamental limits of a cache-aided broadcast-relay wireless network consisting of one central base station, M cache-equipped transceivers, and K receivers from a latency-centric perspective. We use the normalized delivery time (NDT) to capture the per-bit latency for the worst-case file request pattern, normalized with respect to a reference interference-free system with unlimited transceiver cache capabilities. The objective is to design schemes for cache placement and file delivery that minimize the NDT. To this end, we establish a novel converse and two types of achievability schemes applicable to both time-variant and time-invariant channels. The first scheme is a general one-shot scheme for any M and K that synergistically exploits both multicasting (coded) caching and distributed zero-forcing opportunities. We show that the proposed one-shot scheme (i) attains gains attributed to both individual and collective transceiver caches and (ii) is NDT-optimal for various parameter settings, particularly at higher cache sizes. The second scheme, on the other hand, designs beamformers to facilitate both subspace interference alignment and zero-forcing at lower cache sizes. Exploiting both schemes, we characterize the optimal tradeoff between cache storage and latency for various special cases of M and K satisfying $K+M\leq 4$. The tradeoff illustrates that the NDT is the preferred choice for capturing the latency of a system rather than the commonly used sum degrees-of-freedom (DoF). In fact, our optimal tradeoff refutes the popular belief that increasing cache sizes translates to increasing the achievable sum DoF.
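For readers new to the metric: the NDT is commonly formalized, in this line of work, as the high-SNR per-bit delivery time normalized by an interference-free baseline. In the standard notation (assumed here, not quoted from the paper),

$$\delta(\mu) \;=\; \lim_{P \to \infty} \lim_{L \to \infty} \sup \frac{\mathbb{E}[T(\mu)]}{L/\log P},$$

where $T(\mu)$ is the worst-case delivery time for files of $L$ bits at transmit power $P$ and fractional cache size $\mu$. The denominator $L/\log P$ is the time an interference-free point-to-point link of high-SNR capacity $\log P$ would need, so $\delta = 1$ corresponds to the ideal baseline and larger values quantify the latency overhead.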


Fig. 1: A transceiver cache-aided BRC consisting of one BS, M RNs and K UEs. These nodes are connected through the wireless links $f_i$, $g_k$ and $h_{ij}$, $i = 1,\ldots,M$, $j = 1,\ldots,K$. Each RN is equipped with a finite-size cache.
Fig. 4: 2D $(\mu, M)$-plot of Regions A, B, C, D and E for constant K (K = 2). The labels on the graph indicate the functional relationship at the borders of neighboring regions. The discrete points mark the fractional cache sizes $\mu \in \{0, \tfrac{1}{M}, \ldots, \tfrac{M-1}{M}, 1\}$ for which the achievable one-shot NDT expression $\delta_{\text{OS}}(\mu)$ in (8) actually holds. The annotations specify the main characteristics of each region; the channel limitations indicate which channel (broadcast or interference) dominates the delivery time overhead. The RN standalone frontier, where $\mu M = K$ holds, represents scenarios in which all K UEs can be served by any subset of $\mu M$ RNs without the BS.
Delivery Time Minimization in Cache-Assisted Broadcast-Relay Wireless Networks with Imperfect CSI

March 2018 · 68 Reads · 4 Citations

An emerging trend of next-generation communication systems is to provide network edges with additional capabilities, such as storage resources in the form of caches, to reduce file delivery latency. To investigate the impact of this technique on latency, we study the delivery time of a cache-aided broadcast-relay wireless network consisting of one central base station, M cache-equipped transceivers, and K receivers under finite-precision channel state information (CSI). We use the normalized delivery time (NDT) to capture the worst-case per-bit latency in a file delivery. Lower and upper bounds on the NDT are derived to understand the influence of K, M, cache capacity, and channel quality on the NDT. In particular, regimes of NDT-optimality are identified and discussed.


Information Prediction and Dynamic Programming-Based RAN Slicing for Mobile Edge Computing

February 2018 · 130 Reads · 40 Citations · IEEE Wireless Communications Letters

In 5G networks, Mobile Edge Computing (MEC) and Radio Access Network (RAN) slicing are promising technologies that can efficiently support evolving services and satisfy diverse service requirements. This letter focuses on RAN slicing between MEC services and traditional services. An Information Prediction and Dynamic Programming based RAN Slicing (IP&DP-RS) algorithm is proposed, which guarantees inter-slice isolation and realizes intra-slice customization. Furthermore, it optimizes the network utility with high fairness in polynomial time complexity.
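The abstract does not spell out the algorithm, but the dynamic-programming ingredient can be illustrated generically: distribute a fixed budget of resource blocks across slices so as to maximize a total fairness-oriented (here logarithmic) utility. The sketch below is an assumed generic illustration, not the IP&DP-RS algorithm; the utility shapes and all names are invented.

```python
import math

# Generic dynamic program for splitting B resource blocks across network
# slices to maximize total utility; an assumed illustration only.

def allocate(B, utils):
    """utils[s][b]: utility of slice s when granted b blocks (b = 0..B).
    f[s][b]: best total utility using the first s slices and exactly b blocks."""
    S = len(utils)
    NEG = float("-inf")
    f = [[NEG] * (B + 1) for _ in range(S + 1)]
    f[0][0] = 0.0
    for s in range(1, S + 1):
        for b in range(B + 1):
            for k in range(b + 1):  # blocks given to slice s
                if f[s - 1][b - k] > NEG:
                    f[s][b] = max(f[s][b], f[s - 1][b - k] + utils[s - 1][k])
    return max(f[S])

# Example: an MEC slice and a traditional slice with log (fairness) utilities.
B = 10
mec = [math.log(1 + 2.0 * b) for b in range(B + 1)]
trad = [math.log(1 + 1.2 * b) for b in range(B + 1)]
print(allocate(B, [mec, trad]))
```

The table-filling runs in O(S·B²) time, which is one concrete way a slicing allocation can be solved with "polynomial time complexity" as the abstract claims.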


Fig. 1: A transceiver cache-aided HetNet consisting of one DeNB, M RNs and K UEs.
Fig. 3: NDT lower bound for M = 1 and K ≥ 3. For K = 3, this line is in fact achievable. The dashed line shows the achievable NDT of a suboptimal time-sharing-based unicasting-zero-forcing scheme, which is optimal for M = 1 and K ≤ 2.
Fig. 4: Files requested by K = 3 users and M = 1 RN, with availability illustrated by the symbols transmitted from the DeNB only or from both the DeNB and the RN.
Fig. 6: Interference alignment graph for achievability at the corner point $(\tfrac{4}{5}, \tfrac{8}{5})$ for M = 1 and K = 3. The graph consists of three (subspace) alignment chains.
Delivery Time Minimization in Edge Caching: Synergistic Benefits of Subspace Alignment and Zero Forcing

October 2017 · 68 Reads

An emerging trend of next-generation communication systems is to provide network edges with additional capabilities, such as storage resources in the form of caches, to reduce file delivery latency. To investigate this aspect, we study the fundamental limits of a cache-aided wireless network consisting of one central base station, M transceivers, and K receivers from a latency-centric perspective. We use the normalized delivery time (NDT) to capture the per-bit latency for the worst-case file request pattern at high signal-to-noise ratios (SNR), normalized with respect to a reference interference-free system with unlimited transceiver cache capabilities. For various special cases with $M=\{1,2\}$ and $K=\{1,2,3\}$ that satisfy $M+K\leq 4$, we establish the optimal tradeoff between cache storage and latency. This is facilitated through establishing a novel converse (for arbitrary M and K) and an achievability scheme on the NDT. Our achievability scheme is a synergistic combination of multicasting, zero-forcing beamforming, and interference alignment.
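Of the three ingredients named in the abstract, zero-forcing beamforming is the easiest to show in isolation. The numpy sketch below is a minimal generic illustration (channel shapes and normalization are assumptions, and it is not the paper's cache-aided construction): the precoder inverts the channel so each receiver sees only its own stream.

```python
import numpy as np

# Generic zero-forcing beamforming; a minimal sketch under assumed shapes,
# not the paper's scheme.

rng = np.random.default_rng(1)
K = 3                                       # receivers
H = rng.standard_normal((K, K)) + 1j * rng.standard_normal((K, K))  # channel

W = np.linalg.pinv(H)                       # ZF precoder: right inverse of H
scale = 1 / np.linalg.norm(W)
W *= scale                                  # normalize total transmit power

s = rng.standard_normal(K) + 1j * rng.standard_normal(K)  # data symbols
y = H @ (W @ s)                             # received signals (noise omitted)
print(np.allclose(y, scale * s))            # True: zero inter-user interference
```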


Optimal Schedule of Mobile Edge Computing for Internet of Things Using Partial Information

September 2017 · 618 Reads · 263 Citations · IEEE Journal on Selected Areas in Communications

Mobile edge computing (MEC) is of particular interest to the Internet of Things (IoT), where inexpensive simple devices can have complex tasks offloaded to and processed at powerful infrastructure. Scheduling is challenging due to stochastic task arrivals and wireless channels, a congested air interface, and, more prominently, prohibitive feedback from thousands of devices. In this paper, we generate asymptotically optimal schedules tolerant to out-of-date network knowledge, thereby relieving stringent feedback requirements. A perturbed Lyapunov function is designed to stochastically maximize a network utility balancing throughput and fairness. A knapsack problem is solved per slot for the optimal schedule, provided up-to-date knowledge on the data and energy backlogs of all devices. The knapsack problem is relaxed to accommodate out-of-date network states. Encapsulating the optimal schedule under up-to-date network knowledge, the solution under partially out-of-date knowledge preserves asymptotic optimality and allows devices to self-nominate for feedback. Corroborated by simulations, our approach dramatically reduces feedback at no cost to optimality. The number of devices that need to feed back is reduced to fewer than 60 out of a total of 5000 IoT devices.
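A flavor of the per-slot decision can be given with a small sketch: under a drift-plus-penalty (Lyapunov) design, each device's scheduling weight typically combines its data and energy backlogs, and the slot's schedule solves a knapsack over the air-interface budget. Everything below (the weight rule, names, and the greedy relaxation) is an assumed illustration, not the paper's exact formulation.

```python
# Assumed illustration in the spirit of drift-plus-penalty scheduling:
# backlog-driven weights, then a greedy relaxation of the per-slot knapsack.

def schedule_slot(devices, budget):
    """devices: dicts with 'id', 'data_q' (data backlog), 'energy_q'
    (energy debt), 'cost' (channel uses needed); budget: channel uses."""
    for d in devices:
        # Weight rises with queued data and falls with energy debt; only
        # devices with positive weight are worth scheduling.
        d["weight"] = d["data_q"] - d["energy_q"]
    chosen, used = [], 0
    for d in sorted((d for d in devices if d["weight"] > 0),
                    key=lambda d: d["weight"] / d["cost"], reverse=True):
        if used + d["cost"] <= budget:
            chosen.append(d["id"])
            used += d["cost"]
    return chosen

print(schedule_slot(
    [{"id": 0, "data_q": 9, "energy_q": 2, "cost": 3},
     {"id": 1, "data_q": 4, "energy_q": 5, "cost": 2},
     {"id": 2, "data_q": 6, "energy_q": 1, "cost": 4}],
    budget=6))
```

One way to read the abstract's self-nomination idea in this toy setting: a device whose weight is non-positive can tell locally that it will not be scheduled, so it need not feed back at all.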


Opportunistic Downlink Interference Alignment for Multi-Cell MIMO Networks

January 2017

In this paper, we propose opportunistic downlink interference alignment (ODIA) for the interference-limited cellular downlink, which intelligently combines user scheduling and downlink IA techniques. The proposed ODIA not only efficiently reduces the effect of inter-cell interference from other-cell base stations (BSs) but also eliminates intra-cell interference among spatial streams in the same cell. We show that the minimum number of users required to achieve a target degrees-of-freedom (DoF) can be fundamentally reduced, i.e., the fundamental user scaling law can be improved by the ODIA compared with existing downlink IA schemes. In addition, we adopt a limited feedback strategy in the ODIA framework and analyze the number of feedback bits required for the limited-feedback system to achieve the same user scaling law as the ODIA with perfect CSI. We also modify the original ODIA to further improve the sum-rate; the modified scheme achieves the optimal multiuser diversity gain, i.e., $\log\log N$, per spatial stream even in the presence of downlink inter-cell interference, where N denotes the number of users in a cell. Simulation results show that the ODIA significantly outperforms existing interference management techniques in terms of sum-rate in realistic cellular environments. Note that the ODIA operates in a non-collaborative and decoupled manner, i.e., it requires no information exchange among BSs and no iterative beamformer optimization between BSs and users, thus making implementation easier.
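The opportunistic element can be sketched concisely: each user measures how much inter-cell interference leaks into its receive direction and reports a scalar, and each BS schedules the users with the smallest leakage. The sketch below is an assumed caricature of this selection step only (names, shapes, and the leakage metric are illustrative), not the full ODIA algorithm.

```python
import numpy as np

# Assumed caricature of opportunistic user selection: each user reports a
# scalar inter-cell interference leakage; the BS keeps the S smallest.

def leakage(u, interfering_channels):
    """u: unit-norm receive vector; leakage = total interference power
    projected onto the user's receive direction."""
    return sum(np.linalg.norm(u.conj() @ H) ** 2 for H in interfering_channels)

def select_users(reports, S):
    """reports: list of (user_id, leakage); keep the S least-interfered."""
    return sorted(reports, key=lambda r: r[1])[:S]

rng = np.random.default_rng(0)
Nr, Nt, n_users, n_interferers = 2, 2, 8, 2
reports = []
for uid in range(n_users):
    u = rng.standard_normal(Nr) + 1j * rng.standard_normal(Nr)
    u /= np.linalg.norm(u)
    Hs = [rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))
          for _ in range(n_interferers)]
    reports.append((uid, leakage(u, Hs)))
print(select_users(reports, S=2))
```

Because each user reports only one scalar and each BS decides alone, the sketch also reflects the abstract's point that the scheme needs no information exchange among BSs.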


Citations (78)


... Nevertheless, since the processing capabilities of ESs are quite limited due to scarce onboard resources (e.g., CPU, memory), it could be impossible for a single ES to provide a satisfactory offloading experience to each UE, especially when the task traffic is extremely high in a certain region. Therefore, cooperative task offloading has drawn growing attention as a way to balance spatially uneven task workloads among multiple geographically distributed ESs [3]-[6]. Cooperative task offloading allows ESs to either compute the UE tasks themselves locally or offload them further to other ESs through the connected backhaul links. ...

Reference:

Adaptive Cooperative Task Offloading for Energy-Efficient Small Cell MEC Networks
Distributed Online Optimization of Fog Computing for Selfish Devices With Out-of-Date Information

IEEE Transactions on Wireless Communications

... In [16], the authors derive a closed-form expression for the successful delivery probability (SDP) of a cache-aided FD system, from which a heuristic-based caching design is proposed for SDP maximization. The worst-case normalized delivery time (NDT) in heterogeneous networks is studied in [17] in the presence of FD relaying nodes. However, the results in [17] are based on an optimistic assumption of perfect self-interference cancellation. ...

Delivery Time Minimization in Edge Caching: Synergistic Benefits of Subspace Alignment and Zero Forcing
  • Citing Conference Paper
  • May 2018

... Therefore, the cell-free edge network studied in this paper is different from a conventional edge network, in that MESs can collaborate freely to provide services to users and there are no cell boundaries from the perspective of users. In addition, relay transmission schemes have been proposed to further improve service, via the mobile edge network, for users whose direct transmission links have poor channel quality, in which users can exploit different relay coordinated transmission strategies to improve their data rates [10]-[24]. ...

Cache-Assisted Broadcast-Relay Wireless Networks: A Delivery-Time Cache-Memory Tradeoff

IEEE Access

... Motivated by the extreme popularity of video streaming applications, the seminal work (Shanmugam et al., 2013) has shown that dense small-cell networks can improve the spectral efficiency of the wireless system. The work (Kakar et al., 2019) studies a cache-enabled broadcast-relay wireless network from a latency-centric perspective. Furthermore, the authors in (Shanmugam et al., 2013) proposed to equip the BSs with local memory to cache the most popular content in order to alleviate the bottleneck of fronthaul/backhaul communications and efficiently provide the video content to users. ...

Delivery Time Minimization in Cache-Assisted Broadcast-Relay Wireless Networks with Imperfect CSI

... It allows for establishing multiple independent logical networks, referred to as network slices, on a shared physical infrastructure. The objective is to adapt current networks to each application's requirements and transition from a uniform approach to one based on logical networks, which involves creating separate slices with dedicated resources, isolation, and specific applications [44]. Network slicing may be executed in various domains, including core networks, transport networks, and the RAN, by leveraging cloud computing infrastructure or alternative network resources and functions. ...

Information Prediction and Dynamic Programming-Based RAN Slicing for Mobile Edge Computing
  • Citing Article
  • February 2018

IEEE Wireless Communications Letters

... However, a potential drawback is that it fails to take into account the energy consumption and time expenditure associated with data transmission. Lu et al. [27] introduced a computation offloading algorithm that leverages partial information within the Internet of Things (IoT) environment to optimize the number of feedback transmissions from nodes to the Mobile Edge Computing (MEC) server. Employing Lyapunov optimization, the algorithm ensures energy equilibrium and stabilizes the overall system data throughput. ...

Optimal Schedule of Mobile Edge Computing for Internet of Things Using Partial Information

IEEE Journal on Selected Areas in Communications

... The RUSK sounder is a commercial channel sounder developed by MEDAV GmbH [156]. It has been used for channel measurements at different universities [157]-[159]. Each channel sounder is developed with its own characteristics, matched to the channel measurement objectives. ...

Stanford July 2008 Radio Channel Measurement Campaign

... As summarized in the literature [18], IA techniques have been vigorously investigated across dimensions, network topologies, applications, and fundamental aspects: feasibility conditions, performance metrics, and channel state information (CSI). Opportunistic IA (OIA) has also been extensively investigated as one of the most practical realizations of IA techniques for cellular networks [21]-[26]. More recently, an iterative IA scheme for a cognitive radio (CR) network has been investigated [27]. ...

Codebook-Based Opportunistic Interference Alignment
  • Citing Article
  • June 2014

IEEE Transactions on Signal Processing

... In [15], an optimal TAS algorithm is proposed which minimizes the error rate by exhaustively searching through all antenna configurations. In [16], a TAS scheme is proposed that considers transmit beamforming with a flexible number of transmit antennas under individual power constraints. ...

Antenna selection and power combining for transmit beamforming in MIMO systems
  • Citing Conference Paper
  • December 2012

... Furthermore, opportunistic interference alignment (OIA) has been proposed for both downlink [13]-[16] and uplink [17]-[23] transmissions in realistic multi-cell or multi-cluster systems. Compared with the traditional IA schemes [1]-[4], OIA enjoys many benefits. ...

A feasibility study on opportunistic interference alignment: Limited feedback and sum-rate enhancement
  • Citing Conference Paper
  • November 2012

Conference Record of the 1977 11th Asilomar Conference on Circuits, Systems and Computers