Conference Paper

All Bids for One and One Does for All: Market-Driven Multi-agent Collaboration in Robot Soccer Domain

DOI: 10.1007/978-3-540-39737-3_66
Conference: Computer and Information Sciences - ISCIS 2003, 18th International Symposium, Antalya, Turkey, November 3-5, 2003, Proceedings
Source: DBLP


In this paper, a novel market-driven collaborative task allocation algorithm called “Collaboration by competition / cooperation” is proposed and implemented for the robot soccer domain. In robot soccer, two teams of robots compete with each other to win the match. For the benefit of the team, the robots should work collaboratively whenever possible. The market-driven approach applies the basic properties of a free market economy to a team of robots in order to maximize the team's overall profit. The experimental results show that the approach is robust and flexible and that the developed team is more successful than its …

Keywords: market-driven, multi-agent, collaboration, robot soccer
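The market-driven idea in the abstract can be illustrated with a small auction loop: each robot bids on every open role with a cost function, the cheapest bid wins, and the auction repeats until all roles are filled, so the team's total cost (its negative profit) stays low. The sketch below is only illustrative and is not the algorithm from the paper; the role set and the distance-based cost functions are assumptions made for the example.

```python
import math

# Hypothetical cost functions: lower cost = better fit for the role.
# These distance-based costs are illustrative assumptions, not the
# cost functions used in the original paper.
def attacker_cost(robot, ball):
    return math.dist(robot, ball)

def defender_cost(robot, own_goal):
    return math.dist(robot, own_goal)

def auction_roles(robots, ball, own_goal):
    """Greedy market-style allocation: every unassigned robot bids for
    every open role; the lowest bid wins, then the auction repeats."""
    roles = {
        "attacker": lambda r: attacker_cost(r, ball),
        "defender": lambda r: defender_cost(r, own_goal),
    }
    assignment = {}
    unassigned = dict(robots)  # robot id -> (x, y)
    for role, cost in roles.items():
        if not unassigned:
            break
        winner = min(unassigned, key=lambda rid: cost(unassigned[rid]))
        assignment[winner] = role
        del unassigned[winner]
    # Remaining robots take supporting positions behind the attacker.
    for rid in unassigned:
        assignment[rid] = "supporter"
    return assignment

if __name__ == "__main__":
    team = {"r1": (0.5, 0.0), "r2": (-1.5, 0.3), "r3": (-2.4, -0.1), "r4": (1.0, 0.8)}
    print(auction_roles(team, ball=(1.2, 0.6), own_goal=(-2.7, 0.0)))
```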

Cited by:
    • "uses a market-driven role allocation algorithm and a similar potential field approach in which the coefficients of the field forces are trained by using Genetic Algorithms (GA) [19], [20]. Being a strong and offensive team, MarketTeam forced our team to learn some defensive behavior. "
    Abstract: Robot soccer is an excellent testbed for exploring innovative ideas and testing algorithms in multi-agent systems (MAS) research. A soccer team should play in an organized manner in order to score more goals than the opponent, which requires well-developed individual and collaborative skills, such as dribbling the ball, positioning, and passing. However, none of these skills needs to be perfect, and they do not require highly complicated models to give satisfactory results. This paper proposes an approach inspired by ants, which are modeled as Braitenberg vehicles, for implementing those skills as combinations of very primitive behaviors without using explicit communication and role assignment mechanisms, and applying reinforcement learning to construct the optimal state-action mapping. Experiments demonstrate that a team of robots can indeed learn to play soccer reasonably well without using complex environment models and state representations. After very short training sessions, the team started scoring more than its opponents that use complex behavior codes, and as a result of having a very simple state representation, the team could adapt to the strategies of the opponent teams during the games.
    IEEE International Conference on Robotics and Biomimetics (ROBIO 2008); 03/2009
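The citing work above replaces explicit roles with combinations of very primitive, Braitenberg-style behaviors. A rough sketch of such a behavior is a direct sensor-to-wheel mapping in which each wheel speed is a weighted sum of the sensed stimuli; the weights below are hand-picked placeholders, whereas the cited paper shapes the behavior with reinforcement learning rather than hand tuning.

```python
# Minimal Braitenberg-style controller sketch: wheel speeds are weighted
# sums of stimuli (e.g. perceived ball intensity on the left/right side).
# The weights here are illustrative placeholders, not values from the paper.
def braitenberg_step(left_stimulus, right_stimulus,
                     cross_weight=1.0, direct_weight=0.2, base_speed=0.1):
    # Cross-connected excitation turns the robot toward the stimulus
    # (an "aggression"-type vehicle that drives at the ball).
    left_wheel = base_speed + cross_weight * right_stimulus + direct_weight * left_stimulus
    right_wheel = base_speed + cross_weight * left_stimulus + direct_weight * right_stimulus
    return left_wheel, right_wheel

# Example: the ball is sensed more strongly on the right, so the left
# wheel spins faster and the robot turns right, toward the ball.
print(braitenberg_step(left_stimulus=0.2, right_stimulus=0.8))
```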
    • "Next, the agent, which is closest to its own goal among the unassigned agents, becomes the defender. Finally, the remaining two agents help the attacker by holding strategic positions behind the attacker (Kaplan, 2003). This method has also some drawbacks. "
    Abstract: In this work, the target is the coordination problem among the members of a robot soccer team. To solve this problem, several methods that are extensions of a market-driven approach are implemented, studied, and compared in detail. The first method developed used static role assignment. Since it has many drawbacks, a novel market-driven approach was implemented to increase team success by using the full benefits of collaboration. In this first version, roles are fixed, and the agents are assigned suitable roles according to the available cost functions to increase success in the current situation. This strategy was quite successful and achieved good results in matches against other teams, but different teams have different game strategies, as in real life, so the game strategy (e.g., playing offensively or defensively) needs to change according to the opponent team's strategy. The original MarketTeam is therefore extended with a reinforcement-learning method, which allows the team to learn new strategies as it plays matches against other teams and to use a dynamic strategy for choosing the players' roles. Later, this strategy, which uses market-based cost values and other domain-specific values in its state vector, is further extended to eliminate the drawbacks and increase success. The results show that reinforcement learning is a good solution for the role assignment problem in the robot soccer domain. However, encoding the problem for the learner is an important issue. When the configuration space is quite large, the policy may not cover all possible states; as a result, the agent is forced to select random actions and system performance decreases. The communication problem is not addressed in this work. It is assumed that each agent can broadcast a limited amount of data. The controller simply collects available data from any other agent. The data may be noisy; since the communication data is refreshed at each frame, the error is not cumulative. The solution can also be used in other highly dynamic environments where it is possible to introduce some reinforcement measures for the team. In the robot soccer domain, the reinforcement measures are the goals scored by either our team or the opponent team.
    Cutting Edge Robotics, 07/2005; ISBN: 3-86611-038-3
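The passage quoted in this entry describes a static role-assignment heuristic (Kaplan, 2003): once the attacker is fixed, the unassigned robot closest to its own goal becomes the defender and the remaining two support the attacker. A minimal sketch of that heuristic follows; picking the attacker as the robot closest to the ball is an assumption, since the quote does not say how the attacker is chosen.

```python
import math

def assign_static_roles(robots, ball, own_goal):
    """Static heuristic sketch: attacker first (assumed: closest to ball),
    then the unassigned robot closest to its own goal becomes the defender,
    and the rest support the attacker."""
    unassigned = dict(robots)  # robot id -> (x, y)

    attacker = min(unassigned, key=lambda rid: math.dist(unassigned[rid], ball))
    del unassigned[attacker]

    defender = min(unassigned, key=lambda rid: math.dist(unassigned[rid], own_goal))
    del unassigned[defender]

    roles = {attacker: "attacker", defender: "defender"}
    for rid in unassigned:  # remaining robots support the attacker
        roles[rid] = "supporter"
    return roles

print(assign_static_roles(
    {"r1": (0.5, 0.0), "r2": (-1.5, 0.3), "r3": (-2.4, -0.1), "r4": (1.0, 0.8)},
    ball=(1.2, 0.6), own_goal=(-2.7, 0.0)))
```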
