Conference Paper

A Call Admission Control Scheme Using NeuroEvolution Algorithm in Cellular Networks.

Conference: IJCAI 2007, Proceedings of the 20th International Joint Conference on Artificial Intelligence, Hyderabad, India, January 6-12, 2007
Source: DBLP

ABSTRACT This paper proposes an approach for learning call admission control (CAC) policies in a cellular network that handles several classes of traffic with different resource requirements. The performance measures in cellular networks are long-term revenue, utility, call blocking rate (CBR) and handoff dropping rate (CDR). Reinforcement Learning (RL) can be used to provide the optimal solution; however, such methods fail when the state space and action space are huge. We apply a form of NeuroEvolution (NE) algorithm to inductively learn the CAC policies, and call the resulting scheme CN (Call Admission Control scheme using NE). A comparison with a Q-Learning based CAC scheme under constant traffic load shows that CN not only approximates the optimal solution very well but also optimizes the CBR and CDR in a more flexible way. Additionally, the simulation results demonstrate that the proposed scheme is capable of keeping the handoff dropping rate below a pre-specified value while still maintaining an acceptable CBR in the presence of smoothly varying traffic arrival rates, a setting in which the state space is too large for practical deployment of the other learning scheme.
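To make the setting concrete, the following is a minimal, illustrative Python sketch of the kind of evaluation loop a NeuroEvolution-based CAC scheme relies on: a small policy network (the sort of structure NE would evolve) decides whether to admit each arriving call, and a toy single-cell simulator scores a candidate by revenue, CBR and CDR. The function names, cell capacity, bandwidth demands, rewards and holding times are assumptions chosen for illustration, not the paper's actual parameters or implementation.

    import math
    import random

    # Toy single-cell CAC simulator. The capacity, bandwidth demands, rewards
    # and holding times below are illustrative assumptions, not the settings
    # used in the paper.
    CAPACITY = 30
    CLASSES = {
        # name: (bandwidth units, revenue per accepted call)
        "new_call": (1, 1.0),
        "handoff":  (1, 5.0),   # rejecting a handoff is penalised more heavily
    }

    def admit(weights, features, n_hidden=3):
        """One-hidden-layer policy network of the kind NE would evolve.
        Returns True to admit the arriving call, False to reject it."""
        n_in = len(features)
        w1 = weights[: n_in * n_hidden]
        w2 = weights[n_in * n_hidden : n_in * n_hidden + n_hidden]
        hidden = [math.tanh(sum(features[i] * w1[j * n_in + i] for i in range(n_in)))
                  for j in range(n_hidden)]
        return sum(h * w for h, w in zip(hidden, w2)) > 0.0

    def evaluate(weights, n_events=2000, seed=0):
        """Score one candidate policy: total revenue, new-call blocking rate
        (CBR) and handoff dropping rate (CDR) over a synthetic traffic trace."""
        rng = random.Random(seed)
        active = []                                  # (remaining time, bandwidth)
        revenue = 0.0
        arrivals = {k: 0 for k in CLASSES}
        rejected = {k: 0 for k in CLASSES}
        for _ in range(n_events):
            active = [(t - 1, b) for t, b in active if t > 1]   # call departures
            used = sum(b for _, b in active)
            kind = rng.choice(list(CLASSES))         # next arrival, either class
            bw, reward = CLASSES[kind]
            arrivals[kind] += 1
            feats = [used / CAPACITY, bw / CAPACITY, float(kind == "handoff")]
            if used + bw <= CAPACITY and admit(weights, feats):
                active.append((rng.randint(5, 20), bw))
                revenue += reward
            else:
                rejected[kind] += 1
        cbr = rejected["new_call"] / max(arrivals["new_call"], 1)
        cdr = rejected["handoff"] / max(arrivals["handoff"], 1)
        return revenue, cbr, cdr

    # A NE algorithm such as NEAT would search over `weights` (and, in NEAT's
    # case, the topology as well); here we simply score one random candidate.
    rng = random.Random(1)
    candidate = [rng.uniform(-1, 1) for _ in range(3 * 3 + 3)]
    print(evaluate(candidate))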

Related publications:

  • ABSTRACT: This paper defines a reinforcement learning (RL) approach to call control algorithms in links with variable capacity supporting multiple classes of service. The novelties of the paper are the following: i) the problem is modeled as a constrained Markov decision process (MDP); ii) the constrained MDP is solved via an RL algorithm by using the Lagrangian approach and state aggregation. The proposed approach is capable of controlling class-level quality of service in terms of both blocking and dropping probabilities. Numerical simulations show the effectiveness of the approach. (A minimal sketch of the Lagrangian reward-shaping and state-aggregation idea appears after this list.)
    European Journal of Control 01/2011; 17(1):89–103.
  • ABSTRACT: The objective of call admission control (CAC) is to accept or reject call requests so as to maximize the expected revenue over an infinite time horizon while maintaining the predefined QoS constraints. This is a non-linear constrained optimization problem. This paper analyses the difficulties of handling QoS constraints in the CAC domain, and implements two constraint-handling methods that cooperate with a NeuroEvolution algorithm called NEAT to learn CAC policies. The two methods are superiority of feasible points and static penalty functions. The simulation results are compared based on two evolution parameters: the ratio of feasible policies, and the ratio of "all accept" policies. Some researchers argue that superiority of feasible points may fail when the feasible region is quite small compared with the whole search space; however, the speciation and complexification features of NEAT make it a very competitive method even in such cases. (A sketch of the two constraint-handling rules appears after this list.)
    Proceedings of the 3rd International Conference on Bio-Inspired Models of Network, Information and Computing Systems; 11/2008
  • ABSTRACT: This paper defines a Reinforcement Learning (RL) approach to call control algorithms in links with variable capacity supporting multiple classes of service. The novelties of the paper are the following: i) the problem is modeled as a constrained Markov Decision Process (MDP); ii) the constrained MDP is solved via an RL algorithm by using the Lagrangian approach and state aggregation. The proposed approach is capable of controlling class-level quality of service in terms of both blocking and dropping probabilities. Numerical simulations show the effectiveness of the approach.
    Control & Automation (MED), 2010 18th Mediterranean Conference on; 07/2010
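The constrained-MDP approach described in the first and third entries above can be pictured with a few small helpers: the RL learner is trained on a Lagrangian-shaped reward, the multiplier is adapted so that the long-run dropping rate stays below its target, and state aggregation keeps the value table small. The following Python sketch shows the general technique only; the function names, step size and binning are assumptions for illustration, not the authors' implementation.

    def shaped_reward(reward, drop_cost, lam):
        """Lagrangian relaxation: the constrained objective is replaced by the
        unconstrained reward r - lambda * c that is fed to the RL learner."""
        return reward - lam * drop_cost

    def update_multiplier(lam, observed_drop_rate, target_drop_rate, step=0.01):
        """Subgradient ascent on the multiplier: increase lambda while the
        dropping constraint is violated, relax it (never below zero) otherwise."""
        return max(0.0, lam + step * (observed_drop_rate - target_drop_rate))

    def aggregate(occupied_bandwidth, capacity, n_bins=10):
        """State aggregation: map raw link occupancy onto a handful of bins so
        the learner's state space stays tractable."""
        return min(n_bins - 1, int(n_bins * occupied_bandwidth / capacity))

    # In an episodic loop one would train, e.g., Q-learning on shaped_reward
    # and call update_multiplier once per episode with the measured drop rate.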
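The two constraint-handling rules mentioned in the second entry above can likewise be sketched as fitness and comparison helpers that an evolutionary loop (NEAT or otherwise) could call. The penalty weight and the dictionary layout of a candidate are illustrative assumptions, not details taken from that paper.

    def static_penalty_fitness(revenue, violations, penalty_weight=100.0):
        """Static penalty method: a fixed-weight penalty is subtracted for each
        unit of QoS-constraint violation (e.g. how far CDR exceeds its bound)."""
        return revenue - penalty_weight * sum(violations)

    def superior(a, b):
        """Superiority of feasible points: any feasible candidate beats any
        infeasible one; feasible candidates are ranked by revenue, infeasible
        ones by total violation. Each argument is a dict with 'revenue' and
        'violations' entries."""
        a_viol, b_viol = sum(a["violations"]), sum(b["violations"])
        if (a_viol == 0) != (b_viol == 0):
            return a_viol == 0                   # the feasible one wins
        if a_viol == 0:                          # both feasible: higher revenue
            return a["revenue"] > b["revenue"]
        return a_viol < b_viol                   # both infeasible: less violation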
