Questions related to Multi-Agent Simulation
Put differently, how does the reductionism explicit in agent-based modelling (noting that ABM is gaining popularity among social scientists) square with the seeming inclination of social theorists to describe and explain social phenomena only at the social level?
I have come across the terms multi-agent systems, multi-agent models, and agent-based models in the literature. Some authors seem to use these terms interchangeably, while others prefer one over another. But do they mean the same thing, or do they refer to different things? After thinking about this, I have drafted the following differentiation. I would appreciate hearing whether you can reasonably agree with it or whether you hold a completely different view.
multi-agent system - a complex, real-life system where many independent and inter-dependent agents simultaneously interact to reach a system-wide outcome within a set of pre-defined constraints.
multi-agent (or agent-based) model - a computer-based (often simplified) simulation model of a complex, real-life system where many independent and inter-dependent agents simultaneously interact to reach a system-wide outcome within a set of pre-defined constraints.
The question concerns applications where these two concepts coexist. If you have comparative studies, review articles, or other supporting material, that would be very helpful.
I am interested in creating a multi-layer mechanical network, so I would like to find software that can visualise nodes and links moving around in 2D and 3D space.
Please provide reference models and frameworks for agent-based modeling being used in social dynamics during disaster emergencies.
There are a lot of agents in my model, but they interact only through the environment. I am using a Q-learning algorithm to solve this model, and all the agents share a static Q-table in Java (the agents are homogeneous). The environment is dynamic, and its time step is much smaller than the time step of agent state changes, so an agent's state does not change until the environment has been updated over many steps. Furthermore, the agents and the environment interact with and affect each other. On the one hand, I need to know the new state of the agents at the next time step (i.e., to find max_a Q(s(t+1), a) in the Q-learning algorithm). On the other hand, I cannot postpone updating the Q-table until the next step because it is shared among the agents. Do you have any suggestion for handling this problem?
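One possible workaround (my own sketch, not an established recipe): buffer each agent's (state, action, reward) when it acts, let the environment step as many times as it needs, and apply the shared Q-table update only once that agent's next state becomes observable. In Python for brevity (the same structure carries over to Java):

```python
import random
from collections import defaultdict

class SharedQLearner:
    """One Q-table shared by homogeneous agents; updates are deferred
    until an agent's next state is actually observed."""

    def __init__(self, actions, alpha=0.1, gamma=0.9):
        self.Q = defaultdict(float)   # (state, action) -> value
        self.actions = actions
        self.alpha, self.gamma = alpha, gamma
        self.pending = {}             # agent_id -> (state, action, reward)

    def choose(self, state, epsilon=0.1):
        # Epsilon-greedy action selection over the shared table.
        if random.random() < epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.Q[(state, a)])

    def observe(self, agent_id, state, action, reward):
        # Record the transition start; the environment may step many
        # times before this agent's next state is known.
        self.pending[agent_id] = (state, action, reward)

    def resolve(self, agent_id, next_state):
        # Apply the deferred Q-learning update once next_state is known.
        s, a, r = self.pending.pop(agent_id)
        best_next = max(self.Q[(next_state, b)] for b in self.actions)
        self.Q[(s, a)] += self.alpha * (r + self.gamma * best_next - self.Q[(s, a)])
```

Other agents keep reading and writing the shared table in between; each deferred update simply uses the table's current contents when it finally fires, which is the usual asynchronous-update assumption in shared-table Q-learning.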
I'm new to reinforcement learning, and I don't know the difference between value iteration and policy iteration methods.
I am also very confused about the categories of reinforcement learning methods. Some studies classify them into two groups: model-based and model-free. Other studies classify them as value iteration and policy iteration.
I was wondering if anybody could help me understand the relation between these classifications as well.
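For what it's worth, the two methods are small enough to contrast directly. In this toy sketch (a hypothetical 2-state, 2-action MDP of my own), value iteration repeatedly applies the Bellman optimality backup to the value function, while policy iteration alternates full policy evaluation with greedy policy improvement; both converge to the same optimal values:

```python
# P[s][a] = list of (probability, next_state, reward); numbers are illustrative.
P = {
    0: {0: [(1.0, 0, 0.0)], 1: [(1.0, 1, 5.0)]},
    1: {0: [(1.0, 0, 1.0)], 1: [(1.0, 1, 2.0)]},
}
GAMMA = 0.9

def value_iteration(tol=1e-8):
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s in P:
            # Bellman optimality backup: max over actions.
            best = max(sum(p * (r + GAMMA * V[s2]) for p, s2, r in P[s][a])
                       for a in P[s])
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

def policy_iteration(tol=1e-8):
    policy = {s: 0 for s in P}
    V = {s: 0.0 for s in P}
    while True:
        # Policy evaluation: compute V for the fixed policy.
        while True:
            delta = 0.0
            for s in P:
                v = sum(p * (r + GAMMA * V[s2]) for p, s2, r in P[s][policy[s]])
                delta = max(delta, abs(v - V[s]))
                V[s] = v
            if delta < tol:
                break
        # Policy improvement: act greedily w.r.t. the evaluated V.
        stable = True
        for s in P:
            best_a = max(P[s], key=lambda a: sum(p * (r + GAMMA * V[s2])
                                                 for p, s2, r in P[s][a]))
            if best_a != policy[s]:
                policy[s] = best_a
                stable = False
        if stable:
            return V, policy
```

Both are model-based (they need the transition model P), which is why the model-based/model-free split and the value/policy iteration split are orthogonal classifications.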
I am trying to add properties to agent objects, to be able to differentiate between different types of agents in MASON (multi-agent simulation software).
I'm trying to write software for a multi-agent system. My first choice was PyQt4, but it seems to have a lot of drawbacks when it comes to multithreading. The software should control and guide real robots to complete a task (e.g., forming a shape with some cubes).
I am looking for new research directions in cooperative control of multi-agent systems. What are the latest trends in this field of study? Any comments are much appreciated.
I am trying to implement the ESFM introduced in the paper "An Integrated Pedestrian Behavior Model Based on Extended Decision Field Theory and Social Force Model".
Assume the direction an agent faces is the direction of its current acceleration (the total force it receives). When there is an obstacle between the agent and its destination, the agent first sees the obstacle and turns back. But after turning back it can no longer see the obstacle, so it immediately turns around again. This becomes an infinite loop: the agent is stuck in one place, turning around until the environment changes.
So is there a better definition of agent heading?
Thanks in advance,
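One alternative heading definition that avoids this oscillation (a sketch of my own, not taken from the ESFM paper): derive heading from the agent's velocity rather than the instantaneous force, smooth it over time, and update it only when the agent is actually moving. The heading then cannot flip on a single force reversal:

```python
import math

class Agent:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.vx, self.vy = 0.0, 0.0
        self.hx, self.hy = 1.0, 0.0   # heading as a unit vector

    def step(self, fx, fy, dt=0.1, smooth=0.3, min_speed=1e-3):
        # Integrate the total force as acceleration (unit mass assumed).
        self.vx += fx * dt
        self.vy += fy * dt
        self.x += self.vx * dt
        self.y += self.vy * dt
        speed = math.hypot(self.vx, self.vy)
        if speed > min_speed:
            # Blend the current heading toward the velocity direction
            # instead of snapping to the instantaneous force direction.
            tx, ty = self.vx / speed, self.vy / speed
            hx = (1 - smooth) * self.hx + smooth * tx
            hy = (1 - smooth) * self.hy + smooth * ty
            norm = math.hypot(hx, hy)
            if norm > 1e-9:
                self.hx, self.hy = hx / norm, hy / norm
```

The `smooth` and `min_speed` values are arbitrary illustration choices; the key design point is that heading becomes a low-pass-filtered state variable, so the see-obstacle/turn-back loop is damped out instead of repeating every step.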
I know that in a Markov decision process (MDP), the probability of transitioning to a new state depends only on the agent's current state and chosen action. However, in my model, the new state of an agent also depends on the previous actions of the agent itself and of its neighbors. Can I solve the problem with a trick: considering the previous actions of an agent and its neighbors to be part of its current state? I would appreciate it if you could let me know whether there is a better solution.
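For what it's worth, that trick is the standard way to restore the Markov property: fold the relevant history into the state. A minimal sketch (all names my own):

```python
def augmented_state(base_state, own_last_action, neighbor_last_actions):
    """Fold the previous actions of the agent and its neighbors into the
    state, so that transitions again depend only on (state, action).
    Neighbor actions are sorted so the representation does not depend on
    neighbor ordering (reasonable when neighbors are homogeneous)."""
    return (base_state, own_last_action, tuple(sorted(neighbor_last_actions)))
```

The cost is a multiplicatively larger state space (base states x own actions x neighbor action combinations), so this stays tractable only when the action set and neighbourhood are small; otherwise function approximation over the augmented state is the usual fallback.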
I am searching for an efficient simulation tool that can simulate multiple robots in a distributed environment, with an underlying framework based on ROS. If you know of one or have any ideas, please let me know; otherwise, please share this question to increase the chances of getting the right answer.
I mean, are MDPs and reinforcement learning as powerful as evolutionary game theory for modeling the evolutionary dynamics of populations?
I know that there are two main approaches to reinforcement learning: model-based and model-free. Does anybody know whether this classification also holds for reinforcement learning in continuous state and action spaces? If not, what are the main approaches for the continuous case?
As far as I have read, most work on multi-agent systems, and thereby on the design of an agent, has used JADE (or similar platforms such as JANUS, GAMA, etc.) extensively to model both single agents and the entire agent-based framework.
My question is:
Is it acceptable/standard/suitable to model an agent as a user-defined function or class (taking some input arguments and yielding some outputs), where some of its inputs may be outputs of other agents (also modeled as functions or classes) and its outputs may be inputs to other agents, without using JADE or a similar platform?
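In principle there is nothing wrong with that; platforms like JADE mainly add asynchronous messaging, agent lifecycle management, and FIPA-compliant communication on top. The wiring described above can be sketched as follows (all names hypothetical):

```python
class Agent:
    """An agent as a plain object wrapping a decision rule."""

    def __init__(self, name, rule):
        self.name = name
        self.rule = rule              # function: dict of inputs -> output

    def act(self, inputs):
        return self.rule(inputs)

def run_round(agents, outputs):
    """One synchronous round: each agent's inputs are the previous
    outputs of all the other agents."""
    new_outputs = {}
    for agent in agents:
        others = {n: v for n, v in outputs.items() if n != agent.name}
        new_outputs[agent.name] = agent.act(others)
    return new_outputs
```

This gives a synchronous, round-based update scheme; what a dedicated platform buys you beyond this sketch is asynchronous message passing, distribution across machines, and standardized interaction protocols.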
I want to use "tile coding" to discretise my state space in reinforcement learning. But I don't know exactly how tile coding works or how to implement it, so I was wondering if you could explain it further or suggest some source code implementing tile coding in Matlab, R, Java, C, or similar.
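The basic idea can be sketched in a few lines (my own illustration, in Python rather than the languages listed): several overlapping uniform grids ("tilings"), each offset by a fraction of a tile width, each map a continuous point to exactly one active tile; the state's feature vector is the set of active tile indices, and learning updates only the weights at those indices:

```python
def tile_indices(x, y, n_tilings=4, tiles_per_dim=8, low=0.0, high=1.0):
    """Return one active tile index per tiling for a 2-D point in
    [low, high]^2. Each tiling is a uniform grid shifted diagonally
    by a fraction of a tile width."""
    indices = []
    tile_w = (high - low) / tiles_per_dim
    for t in range(n_tilings):
        offset = t * tile_w / n_tilings        # diagonal offset per tiling
        ix = int((x - low + offset) / tile_w)
        iy = int((y - low + offset) / tile_w)
        ix = min(ix, tiles_per_dim)            # clamp points pushed past the edge
        iy = min(iy, tiles_per_dim)
        # Make the index unique across tilings.
        per_tiling = (tiles_per_dim + 1) ** 2
        indices.append(t * per_tiling + iy * (tiles_per_dim + 1) + ix)
    return indices
```

A value estimate is then the sum of a weight vector at the active indices; because nearby points share many tiles but not all, updates generalize locally while remaining cheap (one weight per tiling per update).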
I mean, how can we know whether a model-based or a model-free reinforcement learning algorithm is suitable for our case? Furthermore, there are many algorithms to choose from in each category; how can we find the most suitable one? For example, how can we choose between Q-learning, SARSA, or TD-learning?
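On the last point, the practical difference between Q-learning and SARSA comes down to a single term in the update rule: Q-learning bootstraps off-policy from the best next action, while SARSA bootstraps on-policy from the action actually taken. A sketch:

```python
def q_learning_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    # Off-policy: bootstrap from the best action available in s_next.
    target = r + gamma * max(Q[(s_next, b)] for b in actions)
    Q[(s, a)] += alpha * (target - Q[(s, a)])

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.9):
    # On-policy: bootstrap from the action the behaviour policy chose.
    target = r + gamma * Q[(s_next, a_next)]
    Q[(s, a)] += alpha * (target - Q[(s, a)])
```

A common rule of thumb (a heuristic, not a theorem): Q-learning converges toward the greedy-optimal policy regardless of exploration, while SARSA accounts for exploration during learning and therefore tends to learn more conservative policies when exploratory mistakes are costly.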
Intelligent agents are one of the most promising emerging fields. The more intelligent they become, the more useful they are! However, intelligent agents without ethical behaviour may lead to dramatic consequences. How can we define (informally and formally) an "unethical intelligent agent" in a cooperative multi-agent environment?
I have just begun using the Janus platform for the development of MAS, and it seems interesting. Can users of this platform share their feedback on it?
I am writing a computer program that implements an abstract social network of inter-communicating individuals (so a multi-agent system), and I want to be able to compute, for each agent in the network of computational agents, its individual POWER. I mean actual power, not e.g. power attributed by reputation or constitution. Thus, does the mayor of the city of Metropolis have more or less power than the person about to detonate a bomb that will collapse a dam and flood the city? In the UK, does the Prime Minister David Cameron have more or less power than Queen Elizabeth, or Ian Hislop, editor of the famous satirical magazine Private Eye? By how much?
TO CLARIFY: although the suggestive examples I have given involve human beings [OK, maybe there is some slight doubt about Ian Hislop...], I am looking for (and not yet finding) an ALGORITHMIC means of calculating the "size" of some dynamic attribute reasonably called "power" for a COMPUTATIONAL agent that is a member of a dynamic network of COMPUTATIONAL agents within a computer. All help much appreciated and duly acknowledged in any consequent publication(s)!
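There may be no agreed-upon algorithm for "actual power", but one candidate building block (offered purely as an assumption, not an answer) is a PageRank-style recursion over a weighted, directed influence network, where an agent's power grows with its ability to affect already-powerful agents:

```python
def influence_power(weights, n, damping=0.85, iters=200):
    """weights[(i, j)] = how strongly agent i can change agent j's state.
    Power of i grows with the power of the agents it can influence
    (a PageRank-style proxy for 'actual power', not a definition of it)."""
    power = [1.0 / n] * n
    for _ in range(iters):
        power = [(1 - damping) / n +
                 damping * sum(weights.get((i, j), 0.0) * power[j]
                               for j in range(n))
                 for i in range(n)]
    return power
```

The hard modelling question, which this sketch deliberately dodges, is how to estimate the influence weights themselves, e.g. the bomber's huge one-shot weight versus the mayor's many small ones; the recursion only aggregates whatever weights the modeller supplies.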
I am trying to find the optimal size of a multi-agent coalition whose goal is to share renewable power among its members. Does anyone know of a generic method or technique for finding the optimal coalition size?
I am looking into available power market simulators (preferably to be used in the context of multi-agent systems).
I am familiar with the AMES Wholesale Power Market Testbed (http://www2.econ.iastate.edu/tesfatsi/AMESMarketHome.htm) as well as with PowerWeb (http://www.pserc.cornell.edu/powerweb/) and MASCEM (from the university of Porto).
I wish to design a method for modeling Ebola Virus Disease (EVD) infection using multi-agent simulation and to apply it in practice. Can anyone suggest a proper way to do this, with references?
For example, consider a multi-agent system M that, when run, displays a recurring pattern: an exponentially increasing number of inter-agent messages abruptly followed by an almost total communication collapse. This pattern recurs indefinitely. The algorithm I am seeking would find a simplification of M, call it M~, (or several alternative such simplifications) that has essentially the same communication properties through time as M. M~ would itself be a multi-agent system.
The algorithm should be applicable to ANY multi-agent system for ANY large-scale property.
Clearly computationally precise definitions will be needed for a multi-agent system, a simplification of a MAS, large-scale behaviour, etc.
One method of precisely defining a MAS is in terms of agents that are production systems as these are defined in computer science. But, of course, there are others.
Part of the motivation for this line of investigation is to find a means to examine the possibility that the large-scale behaviour of the human race (Homo sapiens sapiens) over the past 100,000 years or so entails that human individuals have certain cognitive characteristics which might include some or all of learning, imagination, plan creation and execution, a tendency towards cooperation, aggression, and a preference for risk taking.
Stability and sensitivity of consensus in multi-agent systems.
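For context, the standard first-order consensus protocol drives each agent's state toward its neighbours' states; stability is governed by the eigenvalues of the graph Laplacian, and a common sufficient condition for the discrete-time version is a step size below the reciprocal of the maximum degree. A minimal sketch:

```python
def consensus_step(x, neighbors, eps=0.1):
    """One discrete-time consensus update:
    x_i <- x_i + eps * sum_{j in N(i)} (x_j - x_i).
    For an undirected connected graph this converges to the average
    of the initial states when eps < 1 / max_degree."""
    return [xi + eps * sum(x[j] - xi for j in neighbors[i])
            for i, xi in enumerate(x)]
```

Sensitivity questions (to link failures, noise, delays) then reduce to how the Laplacian spectrum, especially the second-smallest eigenvalue, changes under those perturbations.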
I am trying to develop my own agent that extends the Agent class of JADE. But when I open the Agent class, it shows so many errors that I cannot remove them! What can I do? I have copied all the needed classes into my package too, but the errors still remain.
A well-known method for modeling and simulating the dynamics of multi-agent systems is Petri nets. Does it provide the best results? Can it be used for large systems? Do you know of a better approach?
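For reference, the core Petri-net semantics is small enough to sketch: a transition is enabled when every input place holds enough tokens, and firing it moves tokens from input places to output places. A minimal interpreter (my own illustration):

```python
def enabled(marking, transition):
    """transition = (inputs, outputs), each a dict place -> token count."""
    inputs, _ = transition
    return all(marking.get(p, 0) >= k for p, k in inputs.items())

def fire(marking, transition):
    """Return the new marking after firing an enabled transition."""
    inputs, outputs = transition
    new = dict(marking)
    for p, k in inputs.items():
        new[p] -= k
    for p, k in outputs.items():
        new[p] = new.get(p, 0) + k
    return new
```

The scalability concern in the question is real: explicit state-space (reachability) analysis grows combinatorially with net size, which is why large systems usually rely on structural analysis or simulation rather than exhaustive exploration.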
I found a description of this in several books about artificial intelligence for game developers [e.g. Buckland (2006) Programming Game AI by Example, Chapter 2; Bourg and Seemann (2004) AI for Game Developers, Chapter 9] and was wondering whether anyone has applied this approach to program social simulations or management science simulations (without using secondary software like Swarm or Repast)? Would you be willing to share your code?
We are attempting to develop a novel agent-based simulation modelling framework based on principles adopted from software engineering (object-oriented analysis and design) to help study the behaviour of elephants (or other animals) in captivity.
Unlike current models that use an agent-based approach for defining the agents and their interactions, we want to use UML (use case diagrams, sequence diagrams, class diagrams, and state machine diagrams) to define our agents and their interactions. But we also want to embed some theoretical knowledge about animal behaviour in our agent definitions. So in the end it will be a mixture of software agents and the social-simulation agents usually used in the field.
To give you an idea what I am talking about here is a link to a presentation I recently gave to some of my colleagues from the economics department. Although it does not feature animal behaviour the problem we are approaching is similar to that described above – trying out a novel approach to defining agents in a field where UML is relatively unknown.
Do you have any tips for us? Any references we should look at? Any similar projects you know of?
In recent years, I have developed many models to tackle real problems, but so far none of them has been used in real situations by decision-makers.
I am curious to know whether any of you have heard of examples of agent-based models that are actually used by decision-makers (city planners, environmental health and safety managers, etc.) and not only by researchers/modelers.
I only know of the GAMA multi-agent simulator. I would like some comparisons with other simulators, and feedback from researchers who have used it.
I would be interested in references to papers dealing with "agile modeling" for complex systems simulation (especially agent-based simulation). Although I am aware of methodological proposals that more or less import or mimic this concept from software engineering, I have actually found very few papers about real applications, or about software environments that enable a continuous loop between modeling and simulation and allow interactive design of models through the modelers' interaction with a simulation.