Questions related to Experimental Economics
"People often prefer (ask for) more information but might ignore their difficulty of understanding, using, and acting (appropriately) on this information."
Does anyone have literature or experimental evidence on this?
I know the literature on information avoidance. I am looking more for evidence that people request (costly) information they deem helpful, but that they would not request if they were aware that they would not use it, e.g., because they would misinterpret it.
Thanks in advance
In economics and biology, the terms "conditional cooperation" and "indirect reciprocity" are used to describe behavior in which subjects condition their behavior in stage t of a repeated game on the opponent's reputation (see Bolton, Katok, Ockenfels / J Pub Econ 2006), or in which subjects play a one-shot game and condition their behavior on the opponent's expected behavior (see Fischbacher, Gächter, Fehr / Economics Letters 2001). I'm wondering whether there is a difference between "conditional cooperation" and "indirect reciprocity," or whether these terms are interchangeable?
By definition, a public good (PG) and a common-pool resource (CPR) are both non-excludable. The main difference is rivalry: a PG can be consumed without reducing its availability for others, while consuming a CPR decreases the resources available to others. A PG suffers from a free-rider problem (under-contribution); a CPR suffers from the "tragedy of the commons" (overuse).
I have 3 questions:
1. So in experimental economics, how do you set up an experiment that distinguishes between the two games? For example, in a CPR game, would you tell participants that "the resource is limited and you cannot play anymore once it is depleted"?
2. In a paper by Pahl-Wostl and Ebenhöh (2004), the authors developed a CPR simulation using data from a PG experiment by Fehr and Gächter (2002). How can data from a PGG be used in a CPR setting? Is any modification required?
3. One difference between the setups of the two papers above is the payoff function. For Fehr and Gächter, return = total_investment × 0.4; for Pahl-Wostl and Ebenhöh, return = total_investment × 0.6 / 4 = total_investment × 0.15. Is this the difference between the experimental descriptions of a CPR and a PGG?
- Pahl-Wostl and Ebenhöh (2004) - http://jasss.soc.surrey.ac.uk/7/1/3.html
- Fehr and Gächter (2002) - http://www.nature.com/nature/journal/v415/n6868/full/415137a.html
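The two payoff rules in question 3 can be compared directly as marginal per-capita returns (MPCR) in a linear public goods game. A minimal sketch (the 20-token endowment and the contribution levels here are illustrative, not taken from either paper):

```python
def pgg_payoff(endowment, own_contribution, total_contribution, mpcr):
    """Linear public goods game payoff: tokens kept plus the
    marginal per-capita return (MPCR) times the group total."""
    return endowment - own_contribution + mpcr * total_contribution

# Fehr and Gaechter: each token invested returns 0.4 to every group member.
p_fg = pgg_payoff(endowment=20, own_contribution=10,
                  total_contribution=40, mpcr=0.4)        # 20 - 10 + 16 = 26
# Pahl-Wostl and Ebenhoeh: a group return of 0.6 per token, split 4 ways.
p_pe = pgg_payoff(endowment=20, own_contribution=10,
                  total_contribution=40, mpcr=0.6 / 4)    # 20 - 10 + 6 = 16
```

One thing worth double-checking against the papers: with MPCR = 0.4 and n = 4, contributing is individually costly but socially efficient (0.4 < 1 < 1.6), the usual dilemma; if the 0.15 figure is right, n × MPCR = 0.6 < 1, so contributing would be socially inefficient as well, which is itself a substantive difference between the two designs.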
I want to compare two populations, but we can measure at most 6 participants at a time (the total sample is larger, of course). Running the task in the classic interactive way is therefore difficult.
A possible solution is having participants play against an algorithm (tit-for-tat, or adaptive Pavlov). However, I can't find any literature on humans playing against an algorithm in the prisoner's dilemma.
Am I missing something?
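For piloting such a design, the algorithmic opponent itself is simple to implement. A minimal sketch scoring a participant's move sequence against tit-for-tat (the payoff values are the usual illustrative T=5, R=3, P=1, S=0, not taken from any particular study):

```python
# Row player's payoffs in a standard prisoner's dilemma (T > R > P > S).
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def tit_for_tat(opponent_history):
    """Cooperate on the first move, then copy the opponent's last move."""
    return 'C' if not opponent_history else opponent_history[-1]

def play_vs_tft(human_moves):
    """Score a fixed sequence of human moves against tit-for-tat."""
    human_history, score = [], 0
    for move in human_moves:
        bot = tit_for_tat(human_history)   # TFT reacts to the human's past moves
        score += PAYOFF[(move, bot)]
        human_history.append(move)
    return score

# Always defecting exploits TFT once, then locks in mutual defection:
play_vs_tft('DDDD')  # 5 + 1 + 1 + 1 = 8, versus 12 for mutual cooperation
```

In a real session the human's move would come from the interface each round instead of a pre-set string, but the bot logic is unchanged.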
I'm trying to compute the point of indifference (as in Kubota et al. 2014, "The Price of Racial Bias: Intergroup Negotiations in the Ultimatum Game") in the third-party ultimatum game, but I'm facing some problems due to the specific design of the third-party version of the task.
The computation is pretty easy in the standard version of the ultimatum game, but in the third-party version it is not so straightforward, given the symmetry of the decision function.
We can empirically find two points of indifference, one below and one above the equal split (50%, 50%). Both points lie on a probability function with high values around the equal split and low values on both tails. Such a function can never be fitted by a logistic or binomial model, so I tried splitting the data into two datasets, containing splits above or below equality, and computing the points of indifference separately. However, I often find subjects who accept or reject all the offers, so the function fits the data very poorly. This produces aberrant values, such as extreme points of indifference (e.g., 1,000,000 dollars in a task with splits of 10 dollars).
Does anyone know how to obtain reliable points of indifference?
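One common fix for the all-accept/all-reject problem (complete separation, which sends ordinary logistic estimates to infinity) is a penalized (ridge or Firth-type) logistic fit, which keeps the coefficients finite; the indifference point is then -a/b where P(accept) = 1/(1+exp(-(a + b·offer))). A rough sketch with plain gradient descent on synthetic data (the $5 threshold is hypothetical; this is not Kubota et al.'s estimation procedure):

```python
import math

def fit_penalized_logistic(offers, accepts, lam=0.1, lr=0.05, iters=50000):
    """Fit P(accept) = 1/(1+exp(-(a + b*x))) by gradient descent on the
    mean negative log-likelihood plus an L2 (ridge) penalty, so that
    fully separated data still yield finite coefficients."""
    a, b, n = 0.0, 0.0, len(offers)
    for _ in range(iters):
        ga = gb = 0.0
        for x, y in zip(offers, accepts):
            p = 1.0 / (1.0 + math.exp(-(a + b * x)))
            ga += (p - y)            # gradient w.r.t. intercept
            gb += (p - y) * x        # gradient w.r.t. slope
        a -= lr * (ga / n + lam * a)
        b -= lr * (gb / n + lam * b)
    return a, b

# Synthetic subject who accepts any offer of $5 or more (fully separated).
offers = list(range(11)) * 2
accepts = [1 if x >= 5 else 0 for x in offers]
a, b = fit_penalized_logistic(offers, accepts)
indifference = -a / b   # the offer at which P(accept) = 0.5
```

For subjects who accept (or reject) everything, even the penalized fit only says the indifference point lies beyond the tested range; it is common to report it as censored at the range boundary rather than extrapolate.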
I am designing an experiment and considering whether I should use a control group. There are about 200 participants and 11 treatments to be tested. The participants will all be given the same situation (sitting in a car in a traffic jam) and asked what they would choose if alternatives such as bus, tram, bike, and walking were available. I have finished the literature review and concluded that there are 11 factors that affect travelers' mode choice, and all of those factors should be tested.
Which kind of experimental design would you recommend? A problem I'm having is that I don't have more than 200 participants, and even a fractional factorial design with 2-level treatments would need far more than 200 runs.
Another question: how many runs, or how many participants, should be given the same set of treatments for the result to be acceptable?
Thank you for your help!
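On the run count: a main-effects screening design for 11 two-level factors can be far smaller than a full factorial (2^11 = 2048 runs). A 12-run Plackett-Burman design estimates all 11 main effects, at the cost of confounding them with interactions (resolution III). A sketch building the standard 12-run design from its published generator row:

```python
def plackett_burman_12():
    """12-run Plackett-Burman design for up to 11 two-level factors.
    Rows 1-11 are cyclic shifts of the standard generator row; row 12
    is all -1. The 11 columns are balanced and pairwise orthogonal."""
    gen = [+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1]
    rows = [gen[i:] + gen[:i] for i in range(11)]
    rows.append([-1] * 11)
    return rows

design = plackett_burman_12()
# Each row is one treatment combination: the 11 factors at high (+1)
# or low (-1) levels; replicate rows across participants as budget allows.
```

With 200 participants you could replicate each of the 12 runs roughly 16 times, which also answers the replication question: power then depends on replicates per run, not on adding runs.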
I am designing an experiment in economics that will have subjects do a real effort task. The task will be to find pairs of identical letters in a text.
This type of task has already been used in other experiments in economics, yet I am unable to find what text is usually given to subjects - I suppose there must be a "standard text" to permit greater comparability between studies, but I can't find anything.
Do you know what material is generally used for tasks where subjects have to find pairs of identical letters in a text?
I am a PhD student working on the determinants of firm performance. Among the independent variables, I included investor sentiment. I use annual reports to build my data set. Could anyone explain how I can compute an investor sentiment indicator using information from the balance sheet and income statement?
Has anyone ever conducted choice experiments on alternative contracting arrangements in farming or anything similar? For example, providing processors/traders with alternative contract choices with farmers. I would like to analyze which types of contracts are most attractive for farmers versus millers in order to stimulate better linkages through improved coordination. Any suggestions?
This is a very general question. I have seen many repeated public goods games with different numbers of rounds: some have more than 20 rounds, some fewer than 10. To test public goods contributions under different mechanisms, do experimental economists have a rule of thumb for how many rounds are enough in one treatment? Thanks in advance.
The trust game (Berg et al. 1995) is quite well known in experimental economics. Is there a game-theoretic analysis of behavior in the repeated version of this game?
The paper of Berg:
 Berg, Joyce, John Dickhaut, and Kevin McCabe. "Trust, reciprocity, and social history." Games and economic behavior 10, no. 1 (1995): 122-142.
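As a baseline for any repeated-game analysis, the one-shot game's subgame-perfect equilibrium can be computed by brute-force backward induction. A sketch assuming the standard structure ($10 endowment, tripled transfer, integer amounts): with purely self-interested players, the trustee returns nothing, so the investor sends nothing, which is exactly what makes observed trust and reciprocity interesting.

```python
def spe_trust_game(endowment=10, multiplier=3):
    """Backward induction on the one-shot trust game with purely
    self-interested players and integer transfers."""
    def trustee_return(received):
        # The trustee keeps received*multiplier - r, so r = 0 is optimal.
        return max(range(received * multiplier + 1),
                   key=lambda r: received * multiplier - r)

    # The investor anticipates the trustee's best response.
    best_send = max(range(endowment + 1),
                    key=lambda s: endowment - s + trustee_return(s))
    return best_send, trustee_return(best_send)

spe_trust_game()  # -> (0, 0): no trust and no reciprocity in equilibrium
```

For the finitely repeated game, the same logic unravels from the last round, so the standard prediction stays (0, 0) in every round; with an infinite horizon or reputation, folk-theorem constructions can sustain trust.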
The topic of my thesis is fairness among students in financial decision-making. We ran an experiment in which we measured fairness using the dictator game. I need advice on what to put in the practical part: how should I handle the data, and what statistical methods could I use?
I am currently trying to design a variation of the original experiment, but have not been able to find a standard trust game file.
The paradox-of-thrift theory claims that if we spend our money we help the country achieve high GDP, but if we spend less we harm the country's GDP, and therefore there will be crises.
So what if we overspend our money? Will this harm the economy?
The topic of my thesis is students' perception of economic risk. We ran an experiment in which we investigated risk perception using lottery games. I need advice on what to put in the practical part: how should I handle the data, and what statistical methods could I use to measure risk perception?
According to Jeffrey M. Wooldridge (Introductory Econometrics), experimental data are often collected in laboratory environments in the natural sciences, but they are more difficult to obtain in the social sciences; he adds that "although some social experiments can be devised, it is often impossible, prohibitively expensive, or morally repugnant to conduct the kinds of controlled experiments that would be needed to address economic issues". I would like to know whether there are means by which Wooldridge's claim can be refuted.
I am conducting an experiment on a public goods dilemma with groups of 4. My design has two treatments: an experimental treatment and a control treatment. However, I have very limited financial support, so I am wondering how many participants I should invite. So far I have collected 8 groups (32 participants) for the experimental treatment and 7 groups (35 participants) for the control treatment, so the total number is 68 persons. Is that enough?
I have checked many related papers; many of them have more than 100 participants, and some have only 64 subjects, for example "Climate change in a public goods game: Investment decision in mitigation versus adaptation".
By the way, the results from the current data look good and support my hypothesis.
Any idea will be most helpful.
With so many excellent answers, I have learned a lot, thank you!
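One way to answer the "is it enough?" question yourself: since contributions within a group are not independent, the group is usually the unit of analysis, so the comparison is effectively 8 versus 7 observations. A rough normal-approximation power calculation (a sketch, assuming a two-sided test at alpha = 0.05 and a large effect of Cohen's d = 0.8; not a substitute for a proper power analysis):

```python
import math
from statistics import NormalDist

def approx_power(n_per_group, effect_size, alpha=0.05):
    """Approximate power of a two-sided two-sample comparison via the
    normal approximation: Phi(d * sqrt(n/2) - z_crit)."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    return NormalDist().cdf(effect_size * math.sqrt(n_per_group / 2) - z_crit)

# Treating groups (the independent unit) as observations, ~8 per condition:
p_groups = approx_power(8, 0.8)    # roughly 0.36, far below the usual 0.80
# Doubling the number of groups helps considerably:
p_double = approx_power(16, 0.8)
```

So a significant result with this few clusters is possible but the design is underpowered by conventional standards; reviewers may ask for more groups or for a mixed model that uses the individual data while accounting for group clustering.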
This article in The Economist should be addressed by those of us who conduct research in the field and the lab in the behavioral & environmental sciences. Except for simulations, I see less than ideal replications of key studies. If anyone can share replications, we could respond better to The Economist's challenge.
What is the essence of the ceteris paribus assumption in the dynamic world we live in? Must we continue to make unrealistic assumptions knowing the environment we reside in is never static?
I would like to know the mechanics behind such an observation/situation.
Friedman and Sunder defined experimental data as "data deliberately created for scientific (or other) purposes under controlled conditions", and laboratory data as "data gathered in an artificial environment designed for scientific (or other purposes)." Based on these definitions, I would like to know if experimental data are in any way different from laboratory data. Where can the boundary be drawn if they actually differ?
I'm looking for data from prisoner's dilemma experiments in which participants played only one round of the game. A closely related experiment, which I found, is Goeree, Holt and Laury (J Pub Econ 2002) where participants play ten one-shot games without feedback between games (hence, no learning effects).
Controlled laboratory experiments are used (i) to explore individual behavior and (ii) to test theories about individual behavior. Many anomalies (the endowment effect, context dependence, the influence of irrelevant alternatives, framing) have been found, not to mention non-material incentives and social preferences. In most agent-based macro-models, the focus is on fluctuations at the macro level (cyclical behavior of prices, wage-profit cycles, wealth distributions in econophysics models) which emerge from individual interactions governed by simple rules of thumb. I would like to know more about how the empirical insights from behavioral economics can be taken into account when modeling the behavior of individual agents in agent-based macro-models. Given the variety and complexity of individual behavior found in controlled laboratory experiments, how should the individual behavior of agents be modeled?
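As one concrete example of importing a lab finding into an ABM: the conditional-cooperation result (Fischbacher, Gächter, Fehr 2001) can be used directly as the agents' decision rule in a public goods game. Each agent contributes a fraction of what the group contributed on average last round; a slope below 1 (the "self-serving bias" found in the lab) then reproduces the familiar decay of contributions. A minimal sketch with illustrative parameter values:

```python
def simulate_pgg(n_agents=4, rounds=10, endowment=20, slope=0.8):
    """Agents as imperfect conditional cooperators: each round every
    agent contributes `slope` times the group's average contribution
    in the previous round (slope < 1 = self-serving bias)."""
    contributions = [endowment / 2] * n_agents   # initial guess: half
    averages = []
    for _ in range(rounds):
        avg = sum(contributions) / n_agents
        averages.append(avg)
        contributions = [slope * avg for _ in range(n_agents)]
    return averages

avgs = simulate_pgg()
# Average contributions decay geometrically toward full free-riding,
# matching the typical pattern in repeated public goods experiments.
```

Heterogeneity is easy to add (mix in a share of free-riders with slope 0, as the experimental type distributions suggest), which is usually what makes the macro dynamics interesting.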
I am trying to understand the different motivations of Game Theorists, Experimental Economists, Agent Based Computational Economists, and other agent based modellers (e.g. Social Scientist, Psychologists) for using the "Public Goods" games during their investigations.
I have two categories of questions for the different groups:
(1) What are you trying to learn from it? What kind of question are you trying to answer? Are your answers case-based or generic?
(2) How do you collect evidence for accepting or rejecting your hypotheses? Which metrics do you use? Do you normally focus on providing average outputs (result) or are you also interested in collecting information about the evolution of the system over time (time plots)?
In your response please do not forget to state which group you belong to ;-).
Many thanks for your help!
I am working on a project estimating the economic values of some health activities related to the environment. We have already designed the choice cards (sets) in 3 blocks based on the optimal-efficiency approach. However, we are still looking for the sample size and sampling strategy appropriate for this study (the population is effectively infinite). Any ideas?
Opinion dynamics seeks to model both exchange and processing of information in a population of individuals. However, I haven't found evidence that validates the predictions of these models in real populations.
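For concreteness, here is the kind of prediction that would need empirical validation: in the Deffuant bounded-confidence model, randomly paired agents compromise only when their opinions are already within a confidence threshold, and the model predicts convergence to consensus or to a small number of stable clusters. A minimal sketch:

```python
import random

def deffuant(n=100, steps=20000, eps=0.3, mu=0.5, seed=1):
    """Bounded-confidence opinion dynamics: a random pair of agents
    each moves a fraction mu toward the other, but only when their
    opinion gap is below the confidence threshold eps."""
    rng = random.Random(seed)
    opinions = [rng.random() for _ in range(n)]
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if i != j and abs(opinions[i] - opinions[j]) < eps:
            shift = mu * (opinions[j] - opinions[i])
            opinions[i] += shift
            opinions[j] -= shift
    return opinions

opinions = deffuant()
# With eps = 0.3 the population typically collapses into a few clusters.
```

Testing such predictions against real populations would mean matching the predicted cluster structure (or consensus speed) to longitudinal survey or panel data, which is precisely the validation step that seems to be missing.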
Among Vernon Smith's precepts for valid microeconomic experiments is 'dominance', whereby the payoff function needs to be sufficiently peaked so as to more than offset the psychological costs of supplying the null-hypothesis response.
Yet, dominance is not a consistent feature of current behavioral economics and experimental economics. Experiments designed specifically to satisfy dominance are the exception, rather than the rule. At first sight, this appears to be cause for concern, at least from a conceptual standpoint.
In your experience, why has the profession (editors and referees) allowed the dominance precept to fall into the "not required" category, and why is it a superfluous methodological requirement -- or is it?
A continuous action space means choosing any value in a range (e.g., any amount between $0 and $10, as opposed to only $0 or $10).
Together with some colleagues, I am designing a decision experiment in three waves (with intervals of approximately 6-8 months between waves). In each session we would like to gather answers from the same group of respondents. At the end of the process we expect a sample of 40 people. There will probably be some attrition, so the initial sample should be larger than 40.
Has any of you faced a similar problem? Do you have any suggestions, based on your expertise or the literature, on how large the initial sample should be?
I really appreciate any help you can provide.
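A common back-of-the-envelope approach: divide the target sample by the expected retention rate compounded over the follow-up waves. A sketch (the 85% per-wave retention figure is only an assumption; plug in rates from comparable panel or longitudinal lab studies):

```python
import math

def initial_sample(target, retention_per_wave, followup_waves):
    """Initial N needed so that `target` respondents remain after
    `followup_waves` further waves, each retaining `retention_per_wave`."""
    return math.ceil(target / retention_per_wave ** followup_waves)

# Target of 40 after two follow-up waves, assuming 85% retention per wave:
initial_sample(40, 0.85, 2)   # -> 56
```

Recruiting a small buffer above this number is cheap insurance, since attrition in multi-wave decision experiments is often selective rather than random.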
Experimental auctions can be used to measure the value of quality characteristics of agricultural goods. Trust games can be used to assess the impact of institutional innovations (e.g. contracting) between stakeholders. Any other ideas?
I am developing a questionnaire to assess the QALY gain/loss attached to a (temporary, short-term) procedure. We are considering TTO (both standard and waiting-time trade-off), but I cannot find much in the literature about how to ensure the questions posed are valid and will return useful values. It was suggested that I look into experimental economics, but I have so far failed to find much of use. Can anyone offer any advice, evidence, or publications?
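For reference, the standard TTO scoring itself is simple, which helps when checking whether a candidate question format can return usable values. A sketch of the conventional formula for states considered better than dead:

```python
def tto_utility(years_full_health, years_in_state):
    """Standard time trade-off: if a respondent is indifferent between
    x years in full health and t years in the health state, the state's
    utility is x / t (for states rated better than dead)."""
    return years_full_health / years_in_state

# Indifferent between 7 years healthy and 10 years in the health state:
tto_utility(7, 10)   # -> 0.7
```

The validity problems tend to come from the elicitation side (framing, iteration procedure, discounting over the time horizon) rather than from the scoring, so piloting the iteration sequence is usually where the effort goes.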
I am using the game to evaluate trust in Iran. I was wondering whether it is a good tool, and whether there are alternatives to this game.