January 2020 · Communications in Information and Systems
September 2018 · IEEE Transactions on Wireless Communications
By performing fog computing, a device can offload delay-tolerant, computationally demanding tasks to its peers for processing, and the results can be returned and aggregated. In distributed wireless networks, the challenges of fog computing include the lack of central coordination, selfish behaviors of devices, and multi-hop signaling delays, which can result in outdated network knowledge and prevent effective cooperation beyond one hop. This paper presents a new approach to enabling cooperation among N selfish devices over multiple hops, where selfish behaviors are discouraged by a tit-for-tat mechanism. The tit-for-tat incentive of a device is designed to be the gap between the help (in terms of energy) the device has received and offered, and indicates how much help the device can offer in the next time slot. The tit-for-tat incentives can be evaluated at every device by having all devices broadcast how much help they offered in the past time slot, and are used by all devices to schedule task offloading and processing. The approach achieves asymptotic optimality in a fully distributed fashion with a time complexity of less than O(N^2). The optimality loss resulting from multi-hop signaling delays, and consequently outdated tit-for-tat incentives, is proved to diminish asymptotically. Simulations show that our approach reduces the time-average energy consumption of the state of the art by 50% and accommodates more tasks by engaging devices hops away under multi-hop delays.
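As an illustration of the bookkeeping described in the abstract, the following sketch maintains each device's tit-for-tat incentive as the gap between the energy help it has received and the energy help it has offered. All names and values here are hypothetical, not taken from the paper:

```python
# Hypothetical sketch of tit-for-tat incentive bookkeeping.
# Incentive = help received - help offered (in energy units);
# it indicates how much help a device can offer in the next slot.

def update_incentives(help_offered, help_received):
    """Compute per-device tit-for-tat incentives from cumulative
    energy totals broadcast by all devices in the past slot."""
    return {d: help_received[d] - help_offered[d] for d in help_offered}

# Example: device 'a' has received 5 J of help but offered only 2 J,
# so it owes the network and can offer up to 3 J next slot.
incentives = update_incentives(
    help_offered={"a": 2.0, "b": 4.0},
    help_received={"a": 5.0, "b": 1.0},
)
```

In the scheme described above, each device would update these totals from the per-slot broadcasts and cap the help it offers in the next slot at its incentive value.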
June 2018
May 2018
March 2018 · IEEE Access
An emerging trend in next-generation communication systems is to provide network edges with additional capabilities, such as storage resources in the form of caches, to reduce file delivery latency. To investigate this aspect, we study the fundamental limits of a cache-aided broadcast-relay wireless network consisting of one central base station, M cache-equipped transceivers and K receivers from a latency-centric perspective. We use the normalized delivery time (NDT) to capture the per-bit latency for the worst-case file request pattern, normalized with respect to a reference interference-free system with unlimited transceiver cache capabilities. The objective is to design schemes for cache placement and file delivery that minimize the NDT. To this end, we establish a novel converse and two types of achievability schemes applicable to both time-variant and time-invariant channels. The first scheme is a general one-shot scheme for any M and K that synergistically exploits both multicasting (coded) caching and distributed zero-forcing opportunities. We show that the proposed one-shot scheme (i) attains gains attributed to both individual and collective transceiver caches and (ii) is NDT-optimal for various parameter settings, particularly at higher cache sizes. The second scheme, on the other hand, designs beamformers to facilitate both subspace interference alignment and zero-forcing at lower cache sizes. Exploiting both schemes, we characterize the optimal tradeoff between cache storage and latency for various special cases of M and K. The tradeoff illustrates that the NDT is the preferred choice for capturing the latency of a system, rather than the commonly used sum degrees-of-freedom (DoF). In fact, our optimal tradeoff refutes the popular belief that increasing cache sizes translates into increasing the achievable sum DoF.
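For readers unfamiliar with the metric, the NDT is commonly defined in the coded-caching literature along the following lines (generic notation, not taken verbatim from this paper):

```latex
% Delivery time T(L, P) for L bits per requested file at SNR P,
% normalized by the time L/\log P that a reference interference-free
% system with unlimited transceiver caches would need:
\mathrm{NDT} \;=\; \lim_{P \to \infty} \, \limsup_{L \to \infty} \; \frac{T(L, P)}{L / \log P}
```

An NDT of 1 thus matches the interference-free reference, and larger values quantify the per-bit latency penalty of limited caches and interference.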
March 2018
An emerging trend in next-generation communication systems is to provide network edges with additional capabilities, such as storage resources in the form of caches, to reduce file delivery latency. To investigate the impact of this technique on latency, we study the delivery time of a cache-aided broadcast-relay wireless network consisting of one central base station, M cache-equipped transceivers and K receivers under finite-precision channel state information (CSI). We use the normalized delivery time (NDT) to capture the worst-case per-bit latency in a file delivery. Lower and upper bounds on the NDT are derived to understand the influence of K, M, cache capacity and channel quality on the NDT. In particular, regimes of NDT-optimality are identified and discussed.
February 2018 · IEEE Wireless Communications Letters
In 5G networks, Mobile Edge Computing (MEC) and Radio Access Network (RAN) slicing are promising technologies that can efficiently support evolving services and satisfy diverse service requirements. This letter focuses on RAN slicing between MEC services and traditional services. An Information Prediction and Dynamic Programming based RAN Slicing (IP&DP-RS) algorithm is proposed. It guarantees inter-slice isolation and realizes intra-slice customization. Furthermore, it can optimize the network utility with high fairness in polynomial time complexity.
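As a minimal sketch of what a resource-splitting step between two slices could look like (all names and utility values are illustrative assumptions, not the IP&DP-RS algorithm itself), one can enumerate allocations of the radio resources between the MEC slice and the traditional slice and pick the utility-maximizing split:

```python
# Hypothetical sketch: split total_rb resource blocks between a MEC
# slice and a traditional slice to maximize summed utility.
# util_mec[k] / util_trad[k] give the (illustrative) utility of
# allocating k resource blocks to that slice.

def best_split(total_rb, util_mec, util_trad):
    """Return the (mec_rb, trad_rb) allocation maximizing total utility."""
    best = max(range(total_rb + 1),
               key=lambda k: util_mec[k] + util_trad[total_rb - k])
    return best, total_rb - best

# Example with concave (diminishing-returns) utilities:
split = best_split(4, [0, 3, 5, 6, 6], [0, 2, 3, 3, 3])
```

Enumerating splits keeps inter-slice isolation explicit (each slice's utility depends only on its own allocation), which is the property the letter's algorithm guarantees.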
October 2017
An emerging trend in next-generation communication systems is to provide network edges with additional capabilities, such as storage resources in the form of caches, to reduce file delivery latency. To investigate this aspect, we study the fundamental limits of a cache-aided wireless network consisting of one central base station, M transceivers and K receivers from a latency-centric perspective. We use the normalized delivery time (NDT) to capture the per-bit latency for the worst-case file request pattern at high signal-to-noise ratios (SNR), normalized with respect to a reference interference-free system with unlimited transceiver cache capabilities. For various special cases of M and K, we establish the optimal tradeoff between cache storage and latency. This is facilitated by establishing a novel converse (for arbitrary M and K) and an achievability scheme on the NDT. Our achievability scheme is a synergistic combination of multicasting, zero-forcing beamforming and interference alignment.
September 2017 · IEEE Journal on Selected Areas in Communications
Mobile edge computing (MEC) is of particular interest to the Internet of Things (IoT), where inexpensive, simple devices can have complex tasks offloaded to and processed at powerful infrastructure. Scheduling is challenging due to stochastic task arrivals and wireless channels, a congested air interface, and, more prominently, prohibitive feedback from thousands of devices. In this paper, we generate asymptotically optimal schedules tolerant to out-of-date network knowledge, thereby relieving stringent feedback requirements. A perturbed Lyapunov function is designed to stochastically maximize a network utility balancing throughput and fairness. A knapsack problem is solved per slot for the optimal schedule, provided up-to-date knowledge of the data and energy backlogs of all devices. The knapsack problem is then relaxed to accommodate out-of-date network states. Encapsulating the optimal schedule under up-to-date network knowledge, the solution under partially out-of-date knowledge preserves asymptotic optimality and allows devices to self-nominate for feedback. Corroborated by simulations, our approach dramatically reduces feedback at no cost to optimality: the number of devices that need to feed back is reduced to fewer than 60 out of a total of 5000 IoT devices.
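The per-slot knapsack step described above can be sketched as a standard 0/1 knapsack over devices. This is a hypothetical simplification: the device names, integer airtime costs, and backlog-derived priorities below are illustrative, not the paper's exact formulation:

```python
# Hypothetical sketch: per-slot scheduling as a 0/1 knapsack.
# Each device has an airtime cost and a priority that, in the paper's
# framework, would be derived from its data/energy backlogs under the
# Lyapunov drift; capacity is the airtime available this slot.

def knapsack_schedule(devices, capacity):
    """devices: list of (name, airtime_cost, priority).
    Returns the set of devices maximizing total priority
    within the airtime capacity (classic 0/1 knapsack DP)."""
    best = [(0.0, frozenset())] * (capacity + 1)
    for name, cost, prio in devices:
        # iterate capacities downward so each device is used at most once
        for c in range(capacity, cost - 1, -1):
            cand_val = best[c - cost][0] + prio
            if cand_val > best[c][0]:
                best[c] = (cand_val, best[c - cost][1] | {name})
    return best[capacity][1]

# Example: devices 'a' and 'b' together (cost 5, priority 7.0)
# beat 'c' alone (cost 4, priority 5.0) under a capacity of 5.
chosen = knapsack_schedule([("a", 2, 3.0), ("b", 3, 4.0), ("c", 4, 5.0)], 5)
```

The paper's relaxation for out-of-date backlogs would change how the priorities are computed, not the combinatorial structure of this per-slot solve.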
January 2017
In this paper, we propose an opportunistic downlink interference alignment (ODIA) scheme for interference-limited cellular downlink, which intelligently combines user scheduling and downlink IA techniques. The proposed ODIA not only efficiently reduces the effect of inter-cell interference from other-cell base stations (BSs) but also eliminates intra-cell interference among spatial streams in the same cell. We show that the minimum number of users required to achieve a target degrees-of-freedom (DoF) can be fundamentally reduced, i.e., the fundamental user scaling law can be improved by the ODIA compared with existing downlink IA schemes. In addition, we adopt a limited feedback strategy in the ODIA framework and analyze the number of feedback bits required for the system with limited feedback to achieve the same user scaling law as with perfect CSI. We also modify the original ODIA to further improve the sum-rate; the modified scheme achieves the optimal multiuser diversity gain per spatial stream even in the presence of downlink inter-cell interference, where N denotes the number of users in a cell. Simulation results show that the ODIA significantly outperforms existing interference management techniques in terms of sum-rate in realistic cellular environments. Note that the ODIA operates in a non-collaborative and decoupled manner, i.e., it requires no information exchange among BSs and no iterative beamformer optimization between BSs and users, thus leading to an easier implementation.
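The opportunistic user-scheduling step underlying OIA/ODIA-style schemes can be sketched as follows. This is a hypothetical simplification: real schemes derive the per-user leakage metric from channel matrices and beamformers, whereas here it is an abstract scalar:

```python
# Hypothetical sketch of opportunistic scheduling: serve the users
# whose generating-interference (leakage) metric is smallest, so the
# scheduled users cause the least interference to other cells.

def schedule_users(leakage, num_sched):
    """leakage: list of per-user scalar metrics (illustrative values).
    Returns the indices of the num_sched users with smallest leakage."""
    order = sorted(range(len(leakage)), key=lambda u: leakage[u])
    return sorted(order[:num_sched])

# Example: among four users, users 1 and 3 leak the least interference.
selected = schedule_users([0.5, 0.1, 0.9, 0.2], 2)
```

The user scaling laws discussed in the abstract concern how many candidate users are needed so that, with high probability, such a selection yields near-aligned (low-leakage) users.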
... Nevertheless, since the processing capabilities of ESs are quite limited due to scarce onboard resources (e.g., CPU, memory), it can be impossible for a single ES to provide a satisfactory offloading experience to each UE, especially when the task traffic is extremely high in a certain region. Therefore, cooperative task offloading has drawn growing attention as a way to balance spatially uneven task workloads among multiple geographically distributed ESs [3]-[6]. Cooperative task offloading allows ESs to either compute the UE tasks themselves locally or offload them further to other ESs through the connected backhaul links. ...
September 2018
IEEE Transactions on Wireless Communications
... In [16], the authors derive a closed-form expression for the successful delivery probability (SDP) of a cache-aided FD system, from which a heuristic-based caching design is proposed for SDP maximization. The worst-case normalized delivery time (NDT) in heterogeneous networks is studied in [17] in the presence of FD relaying nodes. However, the results in [17] are based on an optimistic assumption of perfect self-interference cancellation. ...
May 2018
... Therefore, the cell-free edge network studied in this paper differs from a conventional edge network: MESs can collaborate freely to provide services to users, and there are no cell boundaries from the users' perspective. In addition, relay transmission schemes have been proposed to further improve service for users whose direct transmission links have poor channel quality, allowing users to exploit different relay-coordinated transmission strategies to increase their data rates [10]-[24]. ...
March 2018
IEEE Access
... Motivated by the extreme popularity of video streaming applications, the seminal work (Shanmugam et al., 2013) has shown that small cell dense networks can improve the spectral efficiency of the wireless system. The work (Kakar et al., 2019) studies a cache-enabled broadcast-relay wireless network from a latency-centric perspective. Furthermore, the authors in (Shanmugam et al., 2013) proposed to equip the BSs with local memory to cache the most popular content in order to alleviate the bottleneck of fronthaul/backhaul communications and efficiently provide the video content to users. ...
March 2018
... It allows for establishing multiple independent logical networks, referred to as network slices, using a shared physical infrastructure. The objective is to adapt the current networks based on the application's requirements and transition from a uniform approach to a more logical solution, which involves creating separate slices with allocated resources, isolation, and specific applications [44]. Network slicing may be executed in various domains, including core networks, transport networks, and RAN, by leveraging cloud computing infrastructure or alternative network resources and functions. ...
February 2018
IEEE Wireless Communications Letters
... However, a potential drawback is that it fails to take into account the energy consumption and time expenditure associated with data transmission. Lu et al. [27] introduced a computation offloading algorithm that leverages specific information within the Internet of Things (IoT) environment to optimize the number of feedback transmissions from nodes to the Mobile Edge Computing (MEC) server. Employing Lyapunov optimization, the algorithm ensures energy equilibrium and stability in the overall system data throughput. ...
September 2017
IEEE Journal on Selected Areas in Communications
... The RUSK sounder is a commercial channel sounder developed by MEDAV GmbH [156]. It has been used for channel measurements at different universities [157]-[159]. Each channel sounder is developed with its own characteristics relative to the channel measurement objectives. ...
January 2008
... As summarized in the literature [18], IA techniques have been vigorously investigated, considering dimensions, network topologies, applications, and fundamental aspects: feasibility condition, performance metrics, and channel state information (CSI). Opportunistic IA (OIA) has also been extensively investigated as one of the most practical realizations of IA techniques for cellular networks [21][22][23][24][25][26]. More recently, an iterative IA scheme for a cognitive radio (CR) network has been investigated [27]. ...
June 2014
IEEE Transactions on Signal Processing
... In [15], an optimal TAS algorithm is proposed which minimizes the error rate by exhaustively searching through all antenna configurations. In [16], a TAS scheme is proposed that considers transmit beamforming based on flexible numbers of transmit antennas with individual power constraints. ...
December 2012
... Furthermore, opportunistic interference alignment (OIA) has been proposed for both downlink [13]- [16] and uplink [17]- [23] transmissions in realistic multi-cell or multi-cluster systems. Compared with the traditional IA schemes [1]- [4], OIA enjoys many benefits. ...
November 2012
Conference Record of the 11th Asilomar Conference on Circuits, Systems and Computers, 1977