Conference Paper

Utility-optimal scheduling in time-varying wireless networks with delay constraints

DOI: 10.1145/1860093.1860099
Conference: Proceedings of the 11th ACM International Symposium on Mobile Ad Hoc Networking and Computing, MobiHoc 2010, Chicago, IL, USA, September 20-24, 2010
Source: DBLP


Clients in wireless networks may have per-packet delay constraints on their traffic. Further, in contrast to wireline networks, the wireless medium is subject to fading. In such a time-varying environment, we consider the problem of maximizing the total utility of clients, where each client's utility is determined by its long-term average rate of being served within its delay constraint. We also allow for the additional fairness requirement that each client may require a certain minimum service rate. This model applies to a wide range of settings, including delay-constrained networks, mobile cellular networks, and dynamic spectrum allocation. We address the problem through convex programming. We propose an on-line scheduling policy and prove that it is utility-optimal; surprisingly, this policy does not need to know the probability distribution of system states. We also design an auction mechanism in which clients are scheduled and charged according to their bids, and prove that it prevents any selfish client from improving its utility by misreporting its utility function. We further show that the auction mechanism schedules clients in the same way as the on-line scheduling policy, so the auction mechanism is both truthful and utility-optimal. Finally, we design specific algorithms that implement the auction mechanism for a variety of applications.
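The policy itself and its optimality proof are in the full text; purely as an illustration of the kind of on-line rule the abstract describes (a priority index built from utility gradients at the empirical timely-throughputs plus deficit counters for the minimum-rate requirements, using only the currently observed channel state and no knowledge of its distribution), here is a minimal Python sketch. The client names, utility functions, and channel probabilities are hypothetical, and the code is not the authors' exact algorithm.

```python
import random

class Client:
    """A client with a concave utility over its long-run timely-throughput (toy model)."""
    def __init__(self, name, utility_grad, min_rate):
        self.name = name
        self.utility_grad = utility_grad   # U'(r), derivative of the utility function
        self.min_rate = min_rate           # required minimum timely-throughput
        self.delivered = 0                 # packets delivered within their deadlines
        self.deficit = 0.0                 # virtual queue enforcing the minimum rate

def schedule_frame(clients, frame, success_prob, weight=0.05):
    """Serve one client in this frame.

    Priority index = (utility gradient at the empirical timely-throughput
    + weight * min-rate deficit) * current per-frame delivery probability.
    """
    def index(c):
        rate = c.delivered / frame
        return (c.utility_grad(rate) + weight * c.deficit) * success_prob[c.name]

    chosen = max(clients, key=index)
    success = random.random() < success_prob[chosen.name]  # packet meets its deadline?
    if success:
        chosen.delivered += 1

    # Deficit update: grows by the required rate, shrinks when service is received.
    for c in clients:
        served = 1.0 if (c is chosen and success) else 0.0
        c.deficit = max(c.deficit + c.min_rate - served, 0.0)
    return chosen

if __name__ == "__main__":
    random.seed(1)
    clients = [Client("A", lambda r: 1.0 / (r + 1e-3), min_rate=0.2),
               Client("B", lambda r: 1.0 / (r + 1e-3), min_rate=0.1)]
    T = 20000
    for t in range(1, T + 1):
        # Fading: the per-frame delivery probability of each client varies over time.
        probs = {"A": random.choice([0.9, 0.5]), "B": random.choice([0.8, 0.3])}
        schedule_frame(clients, t, probs)
    for c in clients:
        print(c.name, "timely-throughput =", round(c.delivered / T, 3))
```

In this toy run the deficit term pushes each client toward its minimum rate while the gradient term allocates the remaining service by marginal utility; no optimality claim is intended for the sketch itself.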

Cited by:

  • "For example, [11] makes a non-causal assumption that the scheduler knows the channel states in the future, which is unrealistic in practice. [12] requires that the arrivals and deadlines follow a periodic structure. For more general systems with causal multi-state channels and without a periodic structure, however, we are not aware of a tractable methodology to find optimal scheduling policies subject to deadline constraints."
    ABSTRACT: Opportunistic scheduling of delay-tolerant traffic has been shown to substantially improve spectrum efficiency. To encourage users to adopt delay-tolerant scheduling for capacity improvement, it is critical to provide guarantees in terms of completion time. In this paper, we study application-level scheduling with deadline constraints, where the deadline is pre-specified by users/applications and is associated with a deadline violation probability. To address the exponentially high complexity due to temporally varying channel conditions and deadline constraints, we develop a novel asymptotic approach that exploits the largeness of the network to our advantage. Specifically, we identify a lower bound on the deadline violation probability, and propose simple policies that achieve the lower bound in the large-system regime. The results in this paper thus provide a rigorous analytical framework to develop and analyze policies for application-level scheduling under very general settings of channel models and deadline requirements. Further, based on the asymptotic approach, we propose the notion of the Application-Level Effective Capacity region, i.e., the throughput region that can be supported subject to deadline constraints, which allows us to quantify the potential gain of application-level scheduling. Simulation results show that application-level scheduling can improve the system capacity significantly while guaranteeing the deadline constraints.
    IEEE/ACM Transactions on Networking, 24(3), June 2016. DOI: 10.1109/TNET.2015.2416256. (A baseline deadline-violation simulation appears at the end of this page.)
  • "Existing works on stochastic system control either focus on systems with perfect a priori information, e.g., [9], [10], or rely on stochastic approximation techniques that do not require such information, e.g., [11], [12]. While the proposed solutions are effective, they do not capture how information affects algorithm design and performance, and do not provide interfaces for integrating the fast-developing "data science" tools, e.g., data collecting methods and machine learning algorithms [13], [14], into system control."
    ABSTRACT: We consider the problem of optimal matching with queues in dynamic systems and investigate the value of information. In such systems, the operators match tasks and resources stored in queues, with the objective of maximizing the system utility of the matching reward profile, minus the average matching cost. This problem appears in many practical systems, and the main challenges are the no-underflow constraints and the lack of matching-reward information and system dynamics statistics. We develop two online matching algorithms, Learning-aided Reward optimAl Matching (LRAM) and Dual-LRAM (DRAM), to effectively resolve both challenges. Both algorithms are equipped with a learning module for estimating the matching-reward information, while DRAM incorporates an additional module for learning the system dynamics. We show that both algorithms achieve an O(ε + δ_r) close-to-optimal utility performance for any ε > 0, while DRAM achieves a faster convergence speed and a better delay compared to LRAM, i.e., O(δ_z/ε + log(1/ε)^2) delay and O(δ_z/ε) convergence under DRAM, compared to O(1/ε) delay and convergence under LRAM (δ_r and δ_z are the maximum estimation errors for the reward and the system dynamics). Our results reveal that information about different system components can play very different roles in algorithm performance, and they provide a systematic way for designing joint learning-control algorithms for dynamic systems.
  • "Some efforts have been made to improve different aspects of QoS. For example, many scheduling policies [11]-[16] are proposed to handle the transmission of packets with deadlines. These policies differ in their definitions of delay constraints."
    ABSTRACT: Cyber-physical systems collect large amounts of information from the physical world. Because the physical world changes rapidly, this information must be delivered to the base station promptly so that control decisions can be made, which calls for real-time transmission support in CPS. Real-time traffic often carries additional QoS requirements; for example, regulating the interservice time, the time between two consecutive transmissions of a link, is essential for real-time traffic in wireless networks. Guaranteeing the interservice time of a single user is a precondition for the normal operation of a system, yet, to the best of our knowledge, no previous work can guarantee interservice time performance. Motivated by this, we design a framework for interservice-time-guaranteed scheduling. We first define a new capacity region for networks with a strict interservice time guarantee, extending the well-accepted definition of the basic capacity region. We then propose a novel scheduling policy that is both throughput-optimal and interservice-time guaranteed. Simulation results show that the policy performs well in both interservice time and throughput.
    International Journal of Distributed Sensor Networks, vol. 2015, pp. 1-10, January 2015. DOI: 10.1155/2015/280109
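
The scheduling policy and capacity-region definition of the last citing work above are specified in that paper; as a rough, self-contained illustration of what an interservice-time target means, the following hypothetical sketch serves, in each slot, the link whose elapsed time since its last successful service is largest relative to its bound. The link names, bounds, and the 0.9 success probability are made up, and no throughput-optimality is claimed.

```python
import random

def schedule_with_interservice_bounds(bounds, horizon=10000, seed=0):
    """Toy scheduler: in each slot, serve the link whose elapsed time since its
    last successful service is largest relative to its interservice-time bound.

    `bounds` maps link name -> maximum tolerated gap (in slots) between services.
    Returns the largest service gap recorded for each link.
    """
    rng = random.Random(seed)
    last_served = {link: 0 for link in bounds}
    worst_gap = {link: 0 for link in bounds}
    for t in range(1, horizon + 1):
        # Urgency = elapsed time since last successful service, normalized by the bound.
        urgency = {link: (t - last_served[link]) / bounds[link] for link in bounds}
        chosen = max(urgency, key=urgency.get)
        if rng.random() < 0.9:  # hypothetical channel: transmission succeeds w.p. 0.9
            worst_gap[chosen] = max(worst_gap[chosen], t - last_served[chosen])
            last_served[chosen] = t
    return worst_gap

if __name__ == "__main__":
    print(schedule_with_interservice_bounds({"link1": 5, "link2": 10, "link3": 20}))
```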
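The first citing work above measures policies by their deadline violation probability in the large-system regime; its lower-bound-achieving policies are defined in that paper. As a self-contained baseline only, the sketch below estimates the deadline-violation probability of plain earliest-deadline-first (EDF) service on a single hypothetical ON/OFF channel by Monte Carlo simulation; every parameter is illustrative.

```python
import heapq
import random

def deadline_violation_probability(num_jobs=300, horizon=2000, trials=200,
                                   lead_time=(20, 120), on_prob=0.6, seed=0):
    """Monte Carlo estimate of the deadline-violation probability when a single
    ON/OFF channel serves one-slot jobs in earliest-deadline-first order.

    Each job needs one successful slot no later than its deadline; in every slot
    the channel is ON (the served job completes) with probability `on_prob`.
    Jobs that expire unserved, or are still pending at the horizon, count as misses.
    """
    rng = random.Random(seed)
    misses = 0
    for _ in range(trials):
        # Jobs arrive at random slots with random relative deadlines (lead times).
        arrivals = sorted((rng.randrange(horizon), rng.randint(*lead_time))
                          for _ in range(num_jobs))
        pending = []   # min-heap of absolute deadlines of jobs awaiting service
        i = 0
        for t in range(horizon):
            while i < num_jobs and arrivals[i][0] <= t:
                heapq.heappush(pending, arrivals[i][0] + arrivals[i][1])
                i += 1
            while pending and pending[0] < t:        # deadline already passed
                heapq.heappop(pending)
                misses += 1
            if pending and rng.random() < on_prob:   # channel ON: serve earliest deadline
                heapq.heappop(pending)
        misses += len(pending)                        # unserved at the end of the horizon
    return misses / (trials * num_jobs)

if __name__ == "__main__":
    print("estimated deadline-violation probability:",
          round(deadline_violation_probability(), 4))
```

Sweeping `on_prob` or the lead-time range in this toy shows how quickly the violation probability degrades, which is the kind of trade-off the citing work quantifies analytically.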