Preprint

Resource allocation in dynamic multiagent systems

Abstract

Resource allocation and task prioritisation are key problem domains in autonomous vehicles, networking, and cloud computing. The challenge in developing efficient and robust algorithms comes from the dynamic nature of these systems, with many components communicating and interacting in complex ways. The multi-group resource allocation optimisation (MG-RAO) algorithm we present uses multiple function approximations of resource demand over time, alongside reinforcement learning techniques, to provide a novel method of optimising resource allocation in these multi-agent systems. The method is applicable where there are competing demands for shared resources, or in task prioritisation problems. Evaluation is carried out in a simulated environment containing multiple competing agents. We compare the new algorithm to an approach in which child agents distribute their resources uniformly across all the tasks they can be allocated, and we contrast the performance of the algorithm when resource allocation is modelled separately for groups of agents against when it is modelled jointly over all agents. The MG-RAO algorithm shows a 23–28% improvement over fixed resource allocation in the simulated environments. Results also show that, in a volatile system, the MG-RAO algorithm configured so that child agents model resource allocation for all agents as a whole achieves only 46.5% of the performance of the configuration that models multiple groups of agents. These results demonstrate the ability of the algorithm to solve resource allocation problems in multi-agent systems and to perform well in dynamic environments.
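The abstract does not spell out the MG-RAO update rules, but the core idea of per-group demand models driving a proportional split of a child agent's resources can be sketched as follows. All class and parameter names here are illustrative assumptions, not the paper's actual specification:

```python
# Hypothetical sketch: a child agent keeps one running demand estimate per
# group of parent agents and splits its fixed resource budget in
# proportion to those estimates.  The update rule and parameters are
# illustrative, not the MG-RAO specification.
class GroupDemandAllocator:
    def __init__(self, n_groups, budget=1.0, alpha=0.1):
        self.estimates = [1.0] * n_groups   # initial demand estimates
        self.budget = budget
        self.alpha = alpha                  # learning rate

    def observe(self, group, demand):
        # Exponentially weighted update of one group's demand estimate.
        self.estimates[group] += self.alpha * (demand - self.estimates[group])

    def allocation(self):
        # Split the budget in proportion to the estimated demands.
        total = sum(self.estimates)
        return [self.budget * e / total for e in self.estimates]

alloc = GroupDemandAllocator(n_groups=3)
for _ in range(200):
    alloc.observe(0, 4.0)   # group 0 consistently demands the most
    alloc.observe(1, 1.0)
    alloc.observe(2, 1.0)
shares = alloc.allocation()  # group 0 ends up with the largest share
```

Keeping one estimate per group, rather than a single joint estimate over all agents, is the distinction the abstract's 46.5% comparison refers to.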

Article
Full-text available
Scheduling is the assignment of shared resources over time to complete tasks efficiently within a given period. The term is applied separately to tasks and to resources in task scheduling and resource allocation, respectively. Scheduling is a popular topic in operations management and computer science. Effective schedules ensure system efficiency and effective decision making, minimize resource wastage and cost, and enhance overall productivity. Choosing the most suitable resources for performing work items and schedules is generally tedious, in both computing and business process execution, and it is especially challenging in real-world dynamic systems where multiple agents are involved in scheduling various dynamic tasks. Reinforcement learning is an emerging technology that has been able to solve the problem of optimal task and resource scheduling dynamically. This review paper reports a research study focused on reinforcement learning techniques that have been used for dynamic task scheduling. It presents the results of the study as a state-of-the-art overview of reinforcement learning techniques used in dynamic task scheduling, together with a comparative review of those techniques.
Article
Full-text available
In this paper, we consider the second-order consensus problem of hybrid multi-agent systems with unknown disturbances, using sliding mode control under a leader-follower network. First, the hybrid multi-agent system model with disturbances and a nonlinear term is proposed, composed of continuous-time dynamic agents and discrete-time dynamic agents. Second, the definition of second-order consensus of a hybrid multi-agent system is given. Then, we assume that interaction among all agents happens at sampling times and that each continuous-time dynamic agent can observe its own states in real time. Based on the equivalent approaching law and the state information shared among agents, sliding mode control protocols are designed to achieve second-order consensus of the hybrid multi-agent system. Some sufficient conditions are given for solving second-order consensus under the sliding mode control protocols. Finally, simulations are given to illustrate the validity of the proposed method.
Article
Full-text available
Deep reinforcement learning (RL) has achieved outstanding results in recent years. This has led to a dramatic increase in the number of applications and methods. Recent works have explored learning beyond single-agent scenarios and have considered multiagent learning (MAL) scenarios. Initial results report successes in complex multiagent domains, although there are several challenges to be addressed. The primary goal of this article is to provide a clear overview of current multiagent deep reinforcement learning (MDRL) literature. Additionally, we complement the overview with a broader analysis: (i) we revisit previous key components, originally presented in MAL and RL, and highlight how they have been adapted to multiagent deep reinforcement learning settings. (ii) We provide general guidelines to new practitioners in the area: describing lessons learned from MDRL works, pointing to recent benchmarks, and outlining open avenues of research. (iii) We take a more critical tone raising practical challenges of MDRL (e.g., implementation and computational demands). We expect this article will help unify and motivate future research to take advantage of the abundant literature that exists (e.g., RL and MAL) in a joint effort to promote fruitful research in the multiagent community.
Article
Full-text available
Unmanned aerial vehicles (UAVs) are capable of serving as aerial base stations (BSs), providing both cost-effective and on-demand wireless communications. This article investigates dynamic resource allocation in multi-UAV-enabled communication networks with the goal of maximizing long-term rewards. In particular, each UAV communicates with a ground user by automatically selecting its communicating user, power level, and subchannel without any information exchange among UAVs. To model the dynamics and uncertainty in the environment, we formulate the long-term resource allocation problem as a stochastic game for maximizing the expected rewards, where each UAV becomes a learning agent and each resource allocation solution corresponds to an action taken by the UAVs. We then develop a multi-agent reinforcement learning (MARL) framework in which each agent discovers its best strategy according to its local observations using learning. More specifically, we propose an agent-independent method, in which all agents conduct a decision algorithm independently but share a common structure based on Q-learning. Finally, simulation results reveal that: 1) appropriate parameters for exploitation and exploration can enhance the performance of the proposed MARL-based resource allocation algorithm; and 2) the proposed MARL algorithm provides acceptable performance compared to the case with complete information exchange among UAVs, striking a good tradeoff between performance gains and information exchange overheads.
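The agent-independent method described above can be illustrated with a toy version: two learners share one Q-learning structure but update their own tables from purely local rewards. The two-channel game, gains, and step count are assumptions for illustration:

```python
import random

# Toy version of the agent-independent method: each agent runs the same
# stateless Q-learning rule on its own action values, with no information
# exchange.  The two-subchannel "network" rewards both agents only when
# they pick different subchannels; all numbers are illustrative.
random.seed(0)
N_CHANNELS = 2
q = [[0.0] * N_CHANNELS for _ in range(2)]   # one local Q-table per agent
alpha, eps = 0.1, 0.1

def act(agent):
    # Epsilon-greedy selection over the agent's own Q-values.
    if random.random() < eps:
        return random.randrange(N_CHANNELS)
    row = q[agent]
    return row.index(max(row))

for _ in range(2000):
    a0, a1 = act(0), act(1)
    r = 1.0 if a0 != a1 else 0.0             # collision yields zero reward
    q[0][a0] += alpha * (r - q[0][a0])       # independent updates
    q[1][a1] += alpha * (r - q[1][a1])

best = (q[0].index(max(q[0])), q[1].index(max(q[1])))
```

After a few hundred steps the agents settle on distinct subchannels without ever exchanging information, which is the behaviour the stochastic-game formulation aims for.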
Article
Full-text available
Vehicle-to-everything (V2X) is a new generation of information and communication technology that connects vehicles to everything. It not only creates a more comfortable and safer transportation environment, but is also significant for improving traffic efficiency and reducing pollution and accident rates. At present, the technology is still in the exploratory stage, and the traffic safety and information security problems brought about by V2X applications have not yet been fully evaluated. Prior to marketization, we must ensure the reliability and maturity of the technology, which must be rigorously tested and verified. Testing is therefore an important part of V2X technology. This article focuses on V2X application requirements, their challenges, and the need for testing. We then investigate and summarize testing methods for V2X in the communication process and describe them in detail from an architectural perspective. In addition, we propose an end-to-end testing system, combining virtual and real environments, that can undertake test tasks for the full protocol stack.
Article
Full-text available
With the emergence of in-vehicle applications, providing the required computational capabilities is becoming a crucial problem. This paper proposes a framework named Autonomous Vehicular Edge (AVE) for edge computing on the road, with the aim of increasing the computational capabilities of vehicles in a decentralized manner. By managing the idle computational resources on vehicles and using them efficiently, the proposed AVE framework can provide computation services in dynamic vehicular environments without requiring particular infrastructures to be deployed. Specifically, this paper introduces a workflow to support the autonomous organization of vehicular edges. Efficient job caching is proposed to better schedule jobs based on the information collected on neighboring vehicles, including GPS information. A scheduling algorithm based on ant colony optimization (ACO) is designed to solve this job assignment problem. Extensive simulations are conducted, and the simulation results demonstrate the superiority of this approach over competing schemes in typical urban and highway scenarios.
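The ACO-based job assignment step can be sketched in miniature. The cost matrix, pheromone rule, and parameters below are illustrative, not those of the AVE framework:

```python
import random

# Miniature ant-colony sketch of job-to-vehicle assignment: pheromone on
# (job, vehicle) pairs biases ants toward low-cost assignments.  All
# numbers are illustrative.
random.seed(1)
cost = [[4, 1, 3],   # cost[j][v]: cost of running job j on vehicle v
        [2, 5, 1],
        [3, 2, 6]]
n_jobs, n_veh = 3, 3
tau = [[1.0] * n_veh for _ in range(n_jobs)]   # pheromone trails

def build_assignment():
    # Each job picks a vehicle with probability proportional to
    # pheromone times heuristic desirability (1 / cost).
    assign = []
    for j in range(n_jobs):
        weights = [tau[j][v] / cost[j][v] for v in range(n_veh)]
        r, acc = random.random() * sum(weights), 0.0
        for v, w in enumerate(weights):
            acc += w
            if r <= acc:
                assign.append(v)
                break
    return assign

best, best_cost = None, float("inf")
for _ in range(200):                # iterations
    for _ in range(10):             # ants per iteration
        a = build_assignment()
        c = sum(cost[j][a[j]] for j in range(n_jobs))
        if c < best_cost:
            best, best_cost = a, c
    for j in range(n_jobs):         # evaporate, then reinforce best-so-far
        for v in range(n_veh):
            tau[j][v] *= 0.9
        tau[j][best[j]] += 1.0 / best_cost
```

Reinforcing only the best-so-far assignment is one common ACO variant; the paper's scheduler additionally folds in neighbour and GPS information when building candidate assignments.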
Article
Full-text available
Intelligent Transportation Systems (ITS) require Vehicle-to-Everything (V2X) communication. In dense traffic, the communication channel may become congested, impairing the reliability of ITS safety applications. Therefore, the European Telecommunications Standards Institute (ETSI) mandates Decentralized Congestion Control (DCC) to control the channel load. Our objective is to investigate whether message-rate or data-rate congestion control provides better application reliability. We compare LIMERIC and PDR-DCC as representatives of the two principles, analyzing the application reliability of LIMERIC with different data rates and PDR-DCC with different message rates for varying traffic densities and application requirements. We observed that, for applications with demanding requirements and across a large variety of vehicular densities, PDR-DCC (data-rate) provides more reliable communication support than LIMERIC (message-rate). Furthermore, the study hints that combined message-rate and data-rate congestion control can improve reliability further.
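LIMERIC's core is a linear control law in which each vehicle nudges its own message rate toward a shared channel-load goal using only the measured aggregate. A minimal sketch, with illustrative gains and goal value (not the configuration from the paper or the ETSI standard):

```python
# Minimal sketch of a LIMERIC-style linear message-rate controller.  The
# number of vehicles, the channel-load goal, and the gains are
# illustrative.
N = 10                      # vehicles sharing the channel
goal = 100.0                # target aggregate message rate (msgs/s)
alpha, beta = 0.1, 0.067    # decay and drive gains
rates = [1.0] * N           # each vehicle's own message rate

for _ in range(500):
    aggregate = sum(rates)  # every vehicle measures the same channel load
    for j in range(N):
        # Linear update: decay own rate, push toward the remaining headroom.
        rates[j] = (1 - alpha) * rates[j] + beta * (goal - aggregate)

aggregate = sum(rates)      # converges just below the goal, shared fairly
```

The fixed point has all vehicles at the same rate with an aggregate of beta\*N\*goal/(alpha + beta\*N), slightly below the goal, which is the fairness property message-rate DCC relies on.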
Article
Full-text available
A wide range of services has been developed for Vehicular Ad hoc Networks (VANETs), ranging from safety to infotainment applications. An essential requirement for such services is that they are offered with Quality of Service (QoS) guarantees in terms of service reliability and availability. Searching for feasible routes subject to multiple QoS constraints is in general an NP-hard problem. Moreover, routing reliability needs special attention, as communication links frequently break in VANETs. In this paper, we propose employing the Situational Awareness (SA) concept and an Ant Colony System (ACS) based algorithm to develop a Situation-Aware Multi-constrained QoS (SAMQ) routing algorithm for VANETs. SAMQ aims to compute feasible routes between the communicating vehicles subject to multiple QoS constraints and pick the best computed route, if such a route exists. To mitigate the risks inherent in selecting the best computed route, which may turn out to fail at any moment, SAMQ utilises the SA levels and ACS mechanisms to prepare countermeasures aimed at assuring reliable data transmission. Simulation results demonstrate that SAMQ is capable of achieving reliable data transmission compared to existing QoS routing algorithms, even when the network topology is highly dynamic.
Article
Full-text available
Dynamic and appropriate resource dimensioning is a crucial issue in cloud computing. As applications increasingly run 24/7, online policies must be sought to balance performance with the cost of allocated virtual machines. Most industrial approaches to date use ad hoc manual policies, such as threshold-based ones, but providing good thresholds has proved tricky and hard to automate for every application requirement. Research is being done on automatic decision-making approaches such as reinforcement learning. Yet these face several problems in the field: having good policies in the early phases of learning, the time for learning to converge to an optimal policy, and coping with changes in application performance behavior over time. In this paper, we propose to deal with these problems using appropriate initialization for the early stages, as well as convergence speedups applied throughout the learning phases, and we present our first experimental results. We also introduce a performance-model change detection mechanism, on which we are currently working, to complete the learning process management. Even though some of these proposals were known in the reinforcement learning field, the key contribution of this paper is to integrate them into a real cloud controller and to program them as an automated workflow.
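The "appropriate initialization" idea — start Q-learning from a policy that already behaves sensibly — can be sketched by seeding a Q-table from a threshold rule. The states, actions, and bonus value are illustrative assumptions, not the paper's controller:

```python
# Sketch of good-policy-from-the-start: seed the Q-table so the initial
# greedy policy reproduces a simple threshold rule (scale up when load is
# high, down when low), then let Q-learning refine it.  States, actions,
# and the bonus value are illustrative.
LOADS = range(0, 101, 10)          # discretised CPU load states (%)
ACTIONS = ["scale_down", "hold", "scale_up"]

def threshold_policy(load, low=30, high=70):
    if load > high:
        return "scale_up"
    if load < low:
        return "scale_down"
    return "hold"

# Initialise every Q(s, a) to zero except a small bonus on the action the
# threshold rule would take, so early greedy decisions match it.
q = {(s, a): 0.0 for s in LOADS for a in ACTIONS}
for s in LOADS:
    q[(s, threshold_policy(s))] = 0.1

greedy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in LOADS}
```

Until learning overwrites the seeded values, the greedy policy is exactly the threshold rule, avoiding the poor decisions of a cold-started learner.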
Conference Paper
Full-text available
With the growing usage of world-wide ICT networks, agent technologies and multiagent systems are attracting more and more attention, as they perform well in environments that are not necessarily well-structured and benevolent. Looking at the problem-solving capacity of multiagent systems, emergent system behaviour is one of the most interesting phenomena; however, there is more to multiagent system design than the interaction between a number of agents: for effective system behaviour we need structure and organisation. But the organisation of a multiagent system is difficult to specify at design time in the face of a changing environment. This paper presents basic concepts for a theory of holonic multiagent systems, to both provide a methodology for the recursive modelling of agent groups and allow for dynamic reorganisation at runtime.
Conference Paper
Full-text available
Multi-agent systems (MAS) are a field of study of growing interest in a variety of domains such as robotics and distributed control. This article focuses on decentralized reinforcement learning (RL) in cooperative MAS, where a team of independent learning robots (ILs) try to coordinate their individual behavior to reach a coherent joint behavior. We assume that each robot has no information about its teammates' actions. To date, RL approaches for such ILs have not guaranteed convergence to the optimal joint policy in scenarios where coordination is difficult. We report an investigation of existing algorithms for learning coordination in cooperative MAS, and suggest a Q-learning extension for ILs called hysteretic Q-learning. This algorithm does not require any additional communication between robots. Its advantages are demonstrated and compared with other methods in various applications: bi-matrix games, a collaborative ball-balancing task, and the pursuit domain.
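Hysteretic Q-learning itself is compact: the only change to standard Q-learning is using a large learning rate for positive TD errors and a small one for negative errors. A stateless sketch (the reward mix and the rate values are illustrative):

```python
import random

# Stateless sketch of hysteretic Q-learning: a large rate alpha for
# positive TD errors, a small rate beta for negative ones.  The reward mix
# (80% coordination payoff, 20% penalty from a teammate's exploration)
# and all numbers are illustrative.
def hysteretic_update(q, reward, alpha=0.5, beta=0.05):
    delta = reward - q                       # stateless TD error
    return q + (alpha if delta >= 0 else beta) * delta

random.seed(3)
q = 0.0
for _ in range(300):
    reward = 1.0 if random.random() < 0.8 else -1.0
    q = hysteretic_update(q, reward)
# q stays close to the coordination payoff despite the penalties
```

Because beta is much smaller than alpha, the occasional penalty caused by a teammate's exploration only dents the learned value instead of erasing it — the optimism that lets independent learners keep a good joint action.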
Conference Paper
Full-text available
The Stadtpilot project develops vehicular technology targeting autonomous driving in metropolitan areas. Currently, demonstrations focus on the German city of Braunschweig. These activities are further supported by the AIM test bed, which provides communication infrastructure along Braunschweig's inner ring road. Besides communication capabilities, an autonomous vehicle needs a complete, consistent, and correct understanding of its surroundings to enable safe driving. For this purpose, this paper presents a novel approach for information aggregation and exchange through a context model tailored to urban environments. Furthermore, the integration of C2X hardware into the existing traffic infrastructure is shown.
Conference Paper
Full-text available
This paper describes five systems that exploit negotiation strategies to solve multiagent resource allocation problems. A detailed comparison is drawn among them according to several criteria: general features of the systems; adherence to widely accepted agent definitions; domain, purpose, and approach; and the analysis, design, and implementation of the negotiation protocol. The conclusions also provide considerations on how to extend one of the analyzed systems as a concrete step towards an integrated platform for developing negotiation protocols.
Conference Paper
Full-text available
In this paper, an on-line change detection algorithm for resource allocation in service-oriented systems is presented. Change detection is based on a dissimilarity measure between two estimated probability distributions. Our approach takes advantage of the fact that streams of requests in service-oriented systems can be modeled by non-homogeneous Poisson processes, so analytical expressions can be given for the Bhattacharyya distance and the Kullback-Leibler divergence. At the end of the paper a simulation study is presented, demonstrating the effect of applying an adaptive approach to the resource allocation problem.
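For Poisson request streams, both dissimilarity measures mentioned above have closed forms, which makes the detector cheap to evaluate. The detection threshold below is an illustrative choice, not the paper's:

```python
import math

# Closed forms for Poisson(lam1) versus Poisson(lam2):
#   KL(P1 || P2)            = lam1*ln(lam1/lam2) + lam2 - lam1
#   Bhattacharyya distance  = (sqrt(lam1) - sqrt(lam2))**2 / 2
# The detection threshold is an illustrative choice.
def kl_poisson(lam1, lam2):
    return lam1 * math.log(lam1 / lam2) + lam2 - lam1

def bhattacharyya_poisson(lam1, lam2):
    return (math.sqrt(lam1) - math.sqrt(lam2)) ** 2 / 2

def change_detected(lam_before, lam_after, threshold=0.5):
    # Flag a change when the estimated request rate diverges enough.
    return kl_poisson(lam_before, lam_after) > threshold

# A doubled request rate is flagged; a small drift is not.
```

Both divergences vanish when the two estimated rates coincide and grow with the gap between them, which is all the detector needs.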
Conference Paper
Full-text available
In this paper, we present an adaptive organizational policy for multi-agent systems called TRACE. TRACE allows a collection of multi-agent organizations to dynamically allocate tasks and resources between themselves in order to efficiently process an incoming stream of task requests. TRACE is intended to cope with environments in which tasks have time constraints, and environments that are subject to load variations. TRACE is made up of two key elements: the task allocation protocol (TAP) and the resource allocation protocol (RAP). The TAP allows agents to cooperatively allocate their tasks to other agents with the capability and opportunity to successfully carry them out. As requests arrive arbitrarily, at any instant some organizations could have surplus resources while others could become overloaded. In order to minimize the number of lost requests caused by an overload, the allocation of resources to organizations is changed dynamically by the RAP, which uses ideas from computational market systems to allocate resources (in the form of problem-solving agents) to organizations. We begin by formally defining the task allocation problem and show that it is NP-complete, and hence that centralized solutions to the problem are unlikely to be feasible. We then introduce the task and resource allocation protocols, focussing on the way in which resources are allocated by the RAP. We then present some experimental results, which show that TRACE exhibits high performance despite unanticipated changes in the environment.
Article
Full-text available
The allocation of resources within a system of autonomous agents, that not only have preferences over alternative allocations of resources but also actively participate in computing an allocation, is an exciting area of research at the interface of Computer Science and Economics. This paper is a survey of some of the most salient issues in Multiagent Resource Allocation. In particular, we review various languages to represent the preferences of agents over alternative allocations of resources as well as different measures of social welfare to assess the overall quality of an allocation. We also discuss pertinent issues regarding allocation procedures and present important complexity results. Our presentation of theoretical issues is complemented by a discussion of software packages for the simulation of agent-based market places. We also introduce four major application areas for Multiagent Resource Allocation, namely industrial procurement, sharing of satellite resources, manufacturing control, and grid computing.
Article
Full-text available
Multiagent systems are rapidly finding applications in a variety of domains, including robotics, distributed control, telecommunications, and economics. The complexity of many tasks arising in these domains makes them difficult to solve with preprogrammed agent behaviors. The agents must, instead, discover a solution on their own, using learning. A significant part of the research on multiagent learning concerns reinforcement learning techniques. This paper provides a comprehensive survey of multiagent reinforcement learning (MARL). A central issue in the field is the formal statement of the multiagent learning goal. Different viewpoints on this issue have led to the proposal of many different goals, among which two focal points can be distinguished: stability of the agents' learning dynamics, and adaptation to the changing behavior of the other agents. The MARL algorithms described in the literature aim---either explicitly or implicitly---at one of these two goals or at a combination of both, in a fully cooperative, fully competitive, or more general setting. A representative selection of these algorithms is discussed in detail in this paper, together with the specific issues that arise in each category. Additionally, the benefits and challenges of MARL are described along with some of the problem domains where the MARL techniques have been applied. Finally, an outlook for the field is provided.
Conference Paper
Full-text available
Reinforcement learning can provide a robust and natural means for agents to learn how to coordinate their action choices in multiagent systems. We examine some of the factors that can influence the dynamics of the learning process in such a setting. We first distinguish reinforcement learners that are unaware of (or ignore) the presence of other agents from those that explicitly attempt to learn the value of joint actions and the strategies of their counterparts. We study Q-learning in cooperative multiagent systems under these two perspectives, focusing on the influence of partial action observability, game structure, and exploration strategies on convergence to (optimal and suboptimal) Nash equilibria and on learned Q-values.
Article
Full-text available
The eligibility trace is one of the basic mechanisms used in reinforcement learning to handle delayed reward. In this paper we introduce a new kind of eligibility trace, the replacing trace, analyze it theoretically, and show that it results in faster, more reliable learning than the conventional trace. Both kinds of trace assign credit to prior events according to how recently they occurred, but only the conventional trace gives greater credit to repeated events. Our analysis is for conventional and replace-trace versions of the offline TD(1) algorithm applied to undiscounted absorbing Markov chains. First, we show that these methods converge, under repeated presentations of the training set, to the same predictions as two well-known Monte Carlo methods. We then analyze the relative efficiency of the two Monte Carlo methods, showing that the method corresponding to conventional TD is biased, whereas the method corresponding to replace-trace TD is unbiased.
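The difference between the two traces is a one-line change in the update. A sketch for a tabular learner (the state set, gamma, and lambda values are illustrative):

```python
# One-line difference between the two traces for a tabular TD(lambda)
# learner; the state set, gamma, and lambda values are illustrative.
def update_traces(traces, state, gamma, lam, replacing):
    for s in traces:
        traces[s] *= gamma * lam      # decay every trace
    if replacing:
        traces[state] = 1.0           # replacing trace: reset to 1
    else:
        traces[state] += 1.0          # conventional (accumulating) trace
    return traces

acc = {0: 0.0}
rep = {0: 0.0}
for _ in range(3):                    # revisit state 0 three times
    update_traces(acc, 0, gamma=1.0, lam=0.9, replacing=False)
    update_traces(rep, 0, gamma=1.0, lam=0.9, replacing=True)
```

Three visits to the same state leave an accumulating trace of about 2.71 but a replacing trace of exactly 1, which is the repeated-event credit difference the paper analyzes.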
Article
Recently, advancements in communications, Intelligent Transportation Systems (ITS), and computational systems have opened up new opportunities for intelligent traffic safety, comfort, and efficiency solutions. Artificial Intelligence (AI) has been widely used to optimize traditional data-driven approaches in different areas of scientific research. A Vehicle-to-Everything (V2X) system together with AI can acquire information from diverse sources, expand the driver's perception, and predict potential accidents in advance, thus enhancing the comfort, safety, and efficiency of driving. This article presents a comprehensive survey of research works that have utilized AI to address various research challenges in V2X systems. We summarize the contributions of these research works and categorize them according to their application domains. Finally, we present open problems and research challenges that need to be addressed for realizing the full potential of AI to advance V2X systems.
Article
In a connected vehicle environment, vehicles are able to communicate and exchange detailed information such as speed, acceleration, and position in real time. Such information exchange is important for improving traffic safety and mobility. This allows vehicles to collaborate with each other, which can significantly improve traffic operations particularly at intersections and freeway ramps. To assess the potential safety and mobility benefits of collaborative driving enabled by connected vehicle technologies, this study develops an optimization-based ramp control strategy and a simulation evaluation platform using VISSIM, MATLAB, and the Car2X module in VISSIM. The ramp control strategy is formulated as a constrained nonlinear optimization problem and solved by the MATLAB optimization toolbox. The optimization model provides individual vehicles with step-by-step control instructions in the ramp merging area. In addition to the optimization-based ramp control strategy, an empirical gradual speed limit control strategy is also formulated. These strategies are evaluated using the developed simulation platform in terms of average speed, average delay time, and throughput and are compared with a benchmark case with no control. The study results indicate that the proposed optimal control strategy can effectively coordinate merging vehicles at freeway on-ramps and substantially improve safety and mobility, especially when the freeway traffic is not oversaturated. The ramp control strategy can be further extended to improve traffic operations at bottlenecks caused by incidents, which cause approximately 25% of traffic congestion in the United States.
Article
Nowadays, a major class of wireless sensor network (WSN) applications requires minimum quality-of-service (QoS) parameters to be satisfied while the wireless sensor nodes are mobile. Most standard WSN routing protocols greedily choose the neighbor node with the best QoS parameter(s) as the next hop. However, a data packet might be routable through other neighbors, as it might have lower QoS requirements; the energy of the neighbor with the best QoS therefore depletes earlier than that of other nodes, reducing network lifetime. It is thus important that QoS routing protocols for WSNs be capable of efficiently balancing energy and other resource consumption throughout the network. In this paper, we propose EQR-RL, an energy-aware QoS routing protocol for WSNs using reinforcement learning. We compare the network performance of our proposed protocol with two other protocols (QoS-AODV and RL-QRP), investigating packet delivery ratio, average end-to-end delay, and the impact of different traffic loads on average end-to-end delay. Simulation results indicate the superiority of our proposed protocol over the other two, considering different network traffic loads and node mobility, in terms of average end-to-end delay and packet delivery ratio.
Article
A challenging application of artificial intelligence systems involves the scheduling of traffic signals in multi-intersection vehicular networks. This paper introduces a novel use of a multi-agent system and reinforcement learning (RL) framework to obtain an efficient traffic signal control policy. The latter is aimed at minimising the average delay, congestion and likelihood of intersection cross-blocking. A five-intersection traffic network has been studied in which each intersection is governed by an autonomous intelligent agent. Two types of agents, a central agent and an outbound agent, were employed. The outbound agents schedule traffic signals by following the longest-queue-first (LQF) algorithm, which has been proved to guarantee stability and fairness, and collaborate with the central agent by providing it local traffic statistics. The central agent learns a value function driven by its local and neighbours' traffic conditions. The novel methodology proposed here utilises the Q-Learning algorithm with a feedforward neural network for value function approximation. Experimental results clearly demonstrate the advantages of multi-agent RL-based control over LQF governed isolated single-intersection control, thus paving the way for efficient distributed traffic signal control in complex settings.
Conference Paper
We present a distributed variant of Q-learning that allows agents to learn the optimal cost-to-go function in stochastic cooperative multi-agent domains without communication between the agents.
Article
The contract net protocol has been developed to specify problem-solving communication and control for nodes in a distributed problem solver. Task distribution is effected by a negotiation process: a discussion carried on between nodes with tasks to be executed and nodes that may be able to execute those tasks. The specification of the protocol is presented, and its use in the solution of a problem in distributed sensing is demonstrated. The utility of negotiation as an interaction mechanism is discussed. It can be used to achieve different goals, such as distributing control and data to avoid bottlenecks, and enabling a finer degree of control in making resource allocation and focus decisions than is possible with traditional mechanisms.
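The announce-bid-award cycle of the contract net can be sketched in a few lines. The bid rule (current load plus task effort) and all names are illustrative, not the protocol's actual message formats:

```python
# Toy announce-bid-award cycle of a contract-net-style negotiation; the
# bid rule and all names are illustrative.
class Contractor:
    def __init__(self, name, load):
        self.name, self.load = name, load

    def bid(self, task):
        # A node's bid reflects how busy it already is.
        return self.load + task["effort"]

def negotiate(task, contractors):
    # Manager announces the task, collects bids, awards to the lowest.
    bids = {c.name: c.bid(task) for c in contractors}
    winner = min(bids, key=bids.get)
    return winner, bids

task = {"name": "sense-area-7", "effort": 3}
nodes = [Contractor("A", 5), Contractor("B", 1), Contractor("C", 9)]
winner, bids = negotiate(task, nodes)   # the least-loaded node wins
```

In the full protocol the award is followed by a contract and possibly further sub-negotiation; the sketch keeps only the selection step.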
Article
Ph.D. thesis, University of Massachusetts at Amherst, 1984. Includes bibliographical references (leaves 200-210).
Article
In this paper, we formulate and solve a problem of resource allocation over a given time horizon with uncertain demands and uncertain capacities of the available resources. In particular, we consider a number of data sources with uncertain bit rates, sharing a set of parallel channels with time-varying and possibly uncertain transmission capacities. We present a method for allocating the channels so as to maximize the expected system throughput. The framework encompasses quality-of-service (QoS) requirements, e.g., minimum-rate constraints, as well as priorities represented by a user-specific cost per transmitted bit. We assume only limited statistical knowledge of the source rates and channel capacities. Optimal solutions are found by using the maximum entropy principle and elementary probability theory. The suggested framework explains how to make use of multiuser diversity in various settings, a field of recently growing interest in communication theory. It admits scheduling over multiple base stations and includes transmission buffers to obtain a method for optimal resource allocation in rather general multiuser communication systems. For additional information and references, please see http://www.signal.uu.se/Research/PCCwirelessIP.html and the PhD thesis by Mathias Johansson, 2004: http://www.signal.uu.se/Publications/abstracts/a041.html
Article
This paper provides a definition of automated negotiation within electronic commerce. It outlines two barriers to automated negotiation: the ontology issue and the strategy problem. State-of-the-art overviews are given of automated negotiation, specifically Negotiation Support Systems, intelligent agents, the auction mechanism, and online marketspaces. Both academic research and currently functional systems are covered, and several World Wide Web addresses are given for readers who wish to investigate further on their own. (While every attempt is made to provide current URL locations, the Web changes more quickly than print media can ever capture; hence, some of the URLs may not be current or correct by the time this article appears. We will try to keep our Negotiation Project web site, http://haas.berkeley.edu/~citm/nego-proj.html, current with respect to these addresses.)
Althamary, I., Huang, C. W., and Lin, P. A survey on multi-agent reinforcement learning methods for vehicular networks. 2019 15th International Wireless Communications and Mobile Computing Conference, IWCMC 2019 (2019), 1154-1159.
Chevaleyre, Y., Dunne, P. E., Endriss, U., Lang, J., Maudet, N., and Rodríguez-Aguilar, J. Multiagent resource allocation. Knowledge Engineering Review 20, 2 (2005), 143-149.
Rizzo, G., Palattella, M. R., Braun, T., and Engel, T. Content and context aware strategies for QoS support in VANETs. Proceedings - International Conference on Advanced Information Networking and Applications, AINA 2016-May (2016), 717-723.
Rodriguez, S., Hilaire, V., Gaud, N., Galland, S., and Koukam, A. Holonic multi-agent systems. Natural Computing Series (2011).
Rygielski, P., and Tomczak, J. M. Context change detection for resource allocation in service-oriented systems. Lecture Notes in Computer Science 6882 LNAI, Part 2 (2011), 591-600.
Schünemann, B., Wedel, J. W., and Radusch, I. V2X-based traffic congestion recognition and avoidance. Tamkang Journal of Science and Engineering 13, 1 (2010), 63-70.
de Vries, S., and Vohra, R. V. Combinatorial auctions: A survey. INFORMS Journal on Computing 15, 3 (2003), 284-309.