On Combining Shortest-Path and Back-Pressure Routing Over Multihop Wireless Networks

Dept. of Electr. & Comput. Eng., Iowa State Univ., Ames, IA, USA
IEEE/ACM Transactions on Networking (Impact Factor: 2.01). 07/2011; DOI: 10.1109/TNET.2010.2094204
Source: DBLP

ABSTRACT Back-pressure-type algorithms based on the algorithm by Tassiulas and Ephremides have recently received much attention for jointly routing and scheduling over multihop wireless networks. However, this approach has a significant weakness in routing because the traditional back-pressure algorithm explores and exploits all feasible paths between each source and destination. While this extensive exploration is essential to maintain stability when the network is heavily loaded, under light or moderate loads packets may be sent over unnecessarily long routes, and the algorithm can be very inefficient in terms of end-to-end delay and routing convergence times. This paper proposes a new routing/scheduling back-pressure algorithm that not only guarantees network stability (throughput optimality), but also adaptively selects a set of optimal routes based on shortest-path information in order to minimize average path lengths between each source and destination pair. Our results indicate that under the traditional back-pressure algorithm, the end-to-end packet delay first decreases and then increases as a function of the network load (arrival rate). This surprising low-load behavior is explained by the fact that the traditional back-pressure algorithm exploits all paths (including very long ones) even when the traffic load is light. On the other hand, the proposed algorithm adaptively selects a set of routes according to the traffic load so that long paths are used only when necessary, resulting in much smaller end-to-end packet delays compared to the traditional back-pressure algorithm.
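The contrast the abstract draws can be sketched in a toy example. The code below is an illustrative sketch, not the paper's actual algorithm: it compares the classic back-pressure link weight (pure backlog differential) with a hypothetical shortest-path-biased weight that penalizes links leading away from the destination. The bias form and the parameter `M` are assumptions made here for illustration.

```python
# Illustrative sketch (not the paper's exact algorithm). Q[n][d] is node n's
# queue backlog for destination d; hops[n][d] is the hop-count distance from
# n to d (assumed precomputed, e.g. by BFS); M is a hypothetical bias strength.

def backpressure_weight(Q, link, dest):
    """Classic back-pressure: weight = backlog differential over the link."""
    u, v = link
    return Q[u][dest] - Q[v][dest]

def biased_weight(Q, hops, link, dest, M=2.0):
    """Shortest-path-biased variant: penalize links that move a packet away
    from the destination, so long detours are taken only when the backlog
    differential is large enough to justify them."""
    u, v = link
    return Q[u][dest] - Q[v][dest] - M * (hops[v][dest] - hops[u][dest] + 1)

# Toy 3-node line network 0 - 1 - 2, destination 2.
Q = {0: {2: 10}, 1: {2: 4}, 2: {2: 0}}
hops = {0: {2: 2}, 1: {2: 1}, 2: {2: 0}}
print(backpressure_weight(Q, (0, 1), 2))  # 6
print(biased_weight(Q, hops, (0, 1), 2))  # 6.0 (link toward destination: no penalty)
print(biased_weight(Q, hops, (1, 0), 2))  # -10.0 (backward link heavily penalized)
```

Under light load the backlog differentials are small, so the bias term dominates and traffic stays on short paths; under heavy load large differentials can override the bias, preserving the full path diversity that stability requires.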

  • Source
    ABSTRACT: The control of large queueing networks is a notoriously difficult problem. Recently, an interesting new policy design framework for this control problem, called h-MaxWeight, has been proposed: h-MaxWeight is a natural generalization of the famous MaxWeight policy in which the quadratic value function is replaced by an arbitrary surrogate value function. Stability of the policy is then achieved through a perturbation technique; however, stability crucially depends on a parameter choice that has to be adapted in simulations. In this paper we use a different technique in which the required perturbations can be implemented directly in the weight domain, which we then call a scheduling field. Specifically, we derive the theoretical arsenal that guarantees universal stability while still operating close to the underlying cost criterion. Simulation examples suggest that the new approach to policy synthesis can provide significantly higher gains irrespective of any further assumptions on the network model or parameter choice.
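The generalization from MaxWeight to h-MaxWeight can be sketched minimally; this is an illustration under assumed definitions, not the paper's scheduling-field construction. MaxWeight picks the feasible service-rate vector that maximizes the gradient-weighted service, and the quadratic value function recovers the classic policy.

```python
import math

# Sketch of the MaxWeight / h-MaxWeight idea (assumed toy setting). The policy
# serves the feasible service-rate vector mu maximizing sum_i h'(q_i) * mu_i.
# The quadratic h(q) = q^2 / 2 gives h'(q) = q, i.e. classic MaxWeight;
# h-MaxWeight swaps in the gradient of another surrogate value function.

def maxweight(queues, actions, grad):
    """Pick the service-rate vector maximizing sum_i grad(q_i) * mu_i."""
    return max(actions, key=lambda mu: sum(grad(q) * m for q, m in zip(queues, mu)))

quadratic_grad = lambda q: q                   # classic MaxWeight
surrogate_grad = lambda q: math.log(1.0 + q)   # an assumed concave surrogate's gradient

queues = [3, 7]
actions = [(1, 0), (0, 1)]                     # serve queue 0 or queue 1
print(maxweight(queues, actions, quadratic_grad))  # (0, 1)
print(maxweight(queues, actions, surrogate_grad))  # (0, 1)
```

The framework's appeal is that the surrogate `h` can encode a cost criterion beyond backlog, while the perturbation (here, wherever the gradient is taken) is what keeps the policy stable.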
  • Source
    ABSTRACT: The shortest-path problem is among the classical problems of computer science. It has been solved by hundreds of algorithms, in silicon computing architectures, and on novel, unconventional computing substrates. The acellular slime mould P. polycephalum originally gained its fame as a biological computing substrate due to its alleged ability to approximate the shortest path from its inoculation site to a source of nutrients. Several algorithms have been designed based on properties of the slime mould. Many of these Physarum-inspired algorithms suffer from a low convergence speed. To accelerate the search for a solution and reduce the number of iterations, we combined an original Physarum-inspired path-solver model with a new parameter, called energy. We undertook a series of computational experiments on approximating shortest paths in networks with different topologies and numbers of nodes varying from 15 to 2000. We found that the improved Physarum algorithm matches existing Physarum-inspired approaches well, yet outperforms them in the number of iterations executed and total running time. We also compare our algorithm with other existing algorithms, including the ant colony optimization algorithm and Dijkstra's algorithm.
    The Scientific World Journal, 03/2014 · 1.73 Impact Factor
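For reference, the classical baseline named in the comparison, Dijkstra's algorithm, can be stated in a few lines. This is a standard textbook sketch, unrelated to the internals of the Physarum-inspired model:

```python
import heapq

def dijkstra(adj, src):
    """Dijkstra's algorithm: shortest-path distances from src over a graph
    given as an adjacency dict {node: [(neighbor, edge_weight), ...]}."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue  # stale heap entry, already settled with a shorter path
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

adj = {'a': [('b', 1), ('c', 4)], 'b': [('c', 2)], 'c': []}
print(dijkstra(adj, 'a'))  # {'a': 0, 'b': 1, 'c': 3}
```

Dijkstra converges in a fixed number of node settlements, which is why iteration count and running time are the natural axes along which the iterative nature-inspired solvers are compared against it.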
  • Source
    ABSTRACT: Motivated by the regular service requirements of video applications for improving Quality-of-Experience (QoE) of users, we consider the design of scheduling strategies in multi-hop wireless networks that not only maximize system throughput but also provide regular inter-service times for all links. Since the service regularity of links is related to the higher-order statistics of the arrival process and the policy operation, it is highly challenging to characterize and analyze directly. We overcome this obstacle by introducing a new quantity, namely the time-since-last-service (TSLS), which tracks the time since the last service. By combining it with the queue-length in the weight, we propose a novel maximum-weight type scheduling policy, called Regular Service Guarantee (RSG) Algorithm. The unique evolution of the TSLS counter poses significant challenges for the analysis of the RSG Algorithm. To tackle these challenges, we first propose a novel Lyapunov function to show the throughput optimality of the RSG Algorithm. Then, we prove that the RSG Algorithm can provide service regularity guarantees by using the Lyapunov-drift based analysis of the steady-state behavior of the stochastic processes. In particular, our algorithm can achieve a degree of service regularity within a factor of a fundamental lower bound we derive. This factor is a function of the system statistics and design parameters and can be as low as two in some special networks. Our results, both analytical and numerical, exhibit significant service regularity improvements over the traditional throughput-optimal policies, which reveals the importance of incorporating the metric of time-since-last-service into the scheduling policy for providing regulated service.
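The idea of folding the TSLS counter into a max-weight rule can be sketched as follows. This is an illustrative guess at the structure only: the linear combination w = q + alpha * tsls, the parameter alpha, and the toy activation sets are assumptions, since the abstract does not give the RSG Algorithm's exact weight.

```python
# Illustrative max-weight policy whose link weights combine queue length with a
# time-since-last-service (TSLS) counter, in the spirit of the RSG Algorithm
# described above. The weight form q + alpha * tsls is an assumption.

def rsg_like_schedule(queues, tsls, feasible_sets, alpha=1.0):
    """Among feasible activation sets (interference-free link sets), pick the
    one maximizing the summed combined weight."""
    weight = lambda l: queues[l] + alpha * tsls[l]
    return max(feasible_sets, key=lambda s: sum(weight(l) for l in s))

def advance(tsls, served):
    """TSLS evolution: reset served links to 0, increment all others."""
    return [0 if l in served else t + 1 for l, t in enumerate(tsls)]

queues = [5, 1, 4]
tsls   = [0, 9, 2]
sets   = [{0, 2}, {1}]   # e.g. links 0 and 2 can be active simultaneously
chosen = rsg_like_schedule(queues, tsls, sets, alpha=1.0)
print(chosen)                  # {0, 2}: weight (5+0) + (4+2) = 11 vs 1+9 = 10
print(advance(tsls, chosen))   # [0, 10, 0]
```

With alpha = 0 this reduces to pure queue-length max-weight; a larger alpha lets a long-starved link (link 1 here, TSLS of 9) eventually win the schedule, which is exactly the regularity trade-off the abstract describes.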