Science topic

Experimental Economics - Science topic

Experimental economics is the application of experimental methods to study economic questions.
Questions related to Experimental Economics
  • asked a question related to Experimental Economics
Question
6 answers
"People often prefer (ask for) more information but might ignore their difficulty of understanding, using, and acting (appropriately) on this information."
Does anyone have literature/ experimental evidence on this? 
I know about the literature on information avoidance. I am looking more for evidence that people request (costly) information that they deem helpful but they would not request if they were aware that they would not use it because they would, e.g., misinterpret it.
Thanks in advance
Relevant answer
Answer
Dear Ligen Yu, thanks for the comment. I love Tao.
Information without understanding will cause mental constipation.
  • asked a question related to Experimental Economics
Question
3 answers
In economics and biology, the terms "conditional cooperation" and "indirect reciprocity" are used to describe behavior where subjects condition their behavior in stage t of a repeated game on the opponent's reputation (see Bolton, Katok, Ockenfels / J Pub Econ 2006), or where subjects play a one-shot game and condition their behavior on the opponent's expected behavior (see Fischbacher, Gächter, Fehr / Economics Letters 2001). I’m wondering whether there is a difference between "conditional cooperation" and "indirect reciprocity," or whether these terms are interchangeable?
Relevant answer
Answer
Indirect reciprocity is one type of conditional cooperation. Direct reciprocity is also one type of conditional cooperation. Thus, conditional cooperation is broader than indirect reciprocity.
Indirect reciprocity is to cooperate under the condition that the opponent's reputation is good, roughly speaking. Direct reciprocity is to cooperate under the condition that the opponent cooperated, roughly speaking. Both indirect reciprocity and direct reciprocity are conditional cooperation.
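Roughly, the two conditioning rules can be written down as a small sketch (my own illustration; the reputation encoding and the 0.5 threshold are assumptions, not from the cited papers):

```python
# Two special cases of conditional cooperation, for a cooperate/defect choice.

def direct_reciprocity(opponent_last_action):
    """Cooperate iff the opponent cooperated with *me* in the previous round."""
    return opponent_last_action == "C"

def indirect_reciprocity(opponent_reputation):
    """Cooperate iff the opponent's reputation (e.g., share of past
    cooperative acts toward *third parties*) is good enough."""
    return opponent_reputation >= 0.5

# Both condition behavior on information about the opponent rather than
# cooperating unconditionally.
print(direct_reciprocity("C"), indirect_reciprocity(0.8))   # True True
print(direct_reciprocity("D"), indirect_reciprocity(0.2))   # False False
```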
  • asked a question related to Experimental Economics
Question
8 answers
By definition, a Public Good (PG) and a Common Pool Resource (CPR) are both non-excludable. The main difference is their rivalry property: a PG can be consumed without reducing its availability for others, while consuming a CPR decreases the resources available to others. PGs suffer from a free-rider problem (lack of contributions); CPRs suffer from a "tragedy of the commons" problem (overuse).
I have 3 questions:
1. So in experimental economics, how do you set up an experiment that distinguishes between the two games? For example, in a CPR game, would you tell participants that 'the resource is limited and you cannot play anymore once it is depleted'?
2. In a paper by Pahl-Wostl and Ebenhöh (2004), they developed a CPR simulation whose data come from a PG experiment of Fehr and Gächter (2002). How can data from a PGG be used in a CPR setting? Is any modification required?
3. The difference between the experimental setups in the two works above lies in their payoff functions. For Fehr and Gächter, the_return = total_investment x 0.4. For Pahl-Wostl and Ebenhöh, the_return = total_investment x 0.6 / 4 = total_investment x 0.15. Is this what distinguishes the experimental implementation of a CPR from that of a PGG?
References:
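Setting the two payoff rules from question 3 side by side, the standard linear public-goods payoff can be sketched as follows (the endowment of 20 tokens and group size of 4 are my own illustrative assumptions):

```python
# Standard linear public-goods payoff, used to compare the two marginal
# per-capita returns (MPCRs) mentioned above.

def pgg_payoff(my_contribution, all_contributions, endowment=20, mpcr=0.4):
    """Payoff = what I keep + my share of the scaled public account."""
    return endowment - my_contribution + mpcr * sum(all_contributions)

group = [20, 20, 20, 20]                               # all contribute fully
full = pgg_payoff(20, group, mpcr=0.4)                 # 0 + 0.4 * 80 = 32
free_ride = pgg_payoff(0, [0, 20, 20, 20], mpcr=0.4)   # 20 + 0.4 * 60 = 44

# The dilemma with MPCR = 0.4: each own token returns only 0.4 to me (so
# free-riding pays), while the group as a whole earns 4 * 0.4 = 1.6 per
# token. With MPCR = 0.15, group earnings are 4 * 0.15 = 0.6 < 1 per
# token, so full contribution is not even group-efficient: a structurally
# different game.
print(full, free_ride)   # 32.0 44.0
```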
Relevant answer
Answer
Davor Mance I really like the differentiation by Elinor Ostrom that you stated in the end. Do you, by any chance, have the reference for this distinction?
  • asked a question related to Experimental Economics
Question
5 answers
I want to compare two populations, but we can only measure at most 6 participants at a time (the total sample is larger, of course), so running the task classically is difficult.
A possible solution is having participants play against an algorithm (tit-for-tat, or adaptive Pavlov). However, I can't find any literature on humans playing against algorithms in the prisoner's dilemma.
Am I missing something?
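For what it's worth, the two opponent algorithms mentioned are short enough to implement directly. A sketch ("adaptive Pavlov" is rendered here in its simplest win-stay-lose-shift form; that simplification is an assumption, as published variants differ):

```python
# Two standard opponent algorithms for an iterated prisoner's dilemma,
# written as functions of the history ("C" = cooperate, "D" = defect).

def tit_for_tat(my_history, opp_history):
    """Cooperate first, then copy the opponent's last move."""
    return "C" if not opp_history else opp_history[-1]

def win_stay_lose_shift(my_history, opp_history):
    """Repeat my last move after a 'win' (opponent cooperated),
    switch moves after a 'loss' (opponent defected)."""
    if not my_history:
        return "C"
    if opp_history[-1] == "C":                      # win: stay
        return my_history[-1]
    return "D" if my_history[-1] == "C" else "C"    # lose: shift

# A human participant (or a test script) can then play round by round:
me, bot = [], []
for human_move in ["D", "D", "C", "C"]:
    bot.append(tit_for_tat(bot, me))
    me.append(human_move)
print("".join(bot))   # CDDC: starts nice, then mirrors the human
```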
Relevant answer
Answer
Hi David,
Thanks for your input!
I looked at some of his research, yes.
I followed your suggestion and zeroed in on Axelrod's work. However, it's all simulations as well. Always algorithms vs. algorithms.
Oh well, thanks nevertheless!
  • asked a question related to Experimental Economics
Question
3 answers
I'm trying to compute the point of indifference (as in Kubota et al. 2014, "The Price of Racial Bias: Intergroup Negotiations in the Ultimatum Game") in the third-party ultimatum game, but I'm facing some problems due to the specific design of the third-party version of the task.
The computation is pretty easy in the standard version of the ultimatum game, but in the third-party version it is not so straightforward given the symmetry of the decision function.
We can empirically find 2 points of indifference, one below and one above the equal split (50%, 50%). Both points lie on a probability function with high values around the equal split and low values on both tails. Such a function will never be fitted by any logistic or binomial function, so I tried to split the data into two datasets containing the above-equality or below-equality splits and compute the points of indifference separately. However, I often find subjects that accept or reject all the offers, so the function fits the data very poorly. This produces aberrant values such as very extreme points of indifference (e.g., 1,000,000 dollars in a task with splits of 10 dollars).
Does anyone know how to obtain reliable points of indifference?
Thanks,
marco
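One pragmatic fallback (my own sketch, not the procedure of Kubota et al.) is to interpolate the empirical acceptance curve and censor the indifference point at the boundary of the offer range whenever a subject accepts or rejects everything:

```python
# Indifference point without a logistic fit: linearly interpolate where the
# acceptance decisions cross 0.5, and censor at the offer range for
# all-accept / all-reject subjects instead of letting estimates diverge.

def indifference_point(offers, accepted):
    """offers: offer levels; accepted: 0/1 decisions (one per offer).
    Returns the offer where acceptance crosses 0.5, or a censored bound."""
    pairs = sorted(zip(offers, accepted))
    for (x0, y0), (x1, y1) in zip(pairs, pairs[1:]):
        if (y0 - 0.5) * (y1 - 0.5) < 0:     # crossing between x0 and x1
            return x0 + (0.5 - y0) * (x1 - x0) / (y1 - y0)
    # no crossing: subject accepted (or rejected) everything
    return pairs[0][0] if pairs[0][1] > 0.5 else pairs[-1][0]

print(indifference_point([1, 2, 3, 4, 5], [0, 0, 1, 1, 1]))  # 2.5
print(indifference_point([1, 2, 3, 4, 5], [1, 1, 1, 1, 1]))  # 1 (censored)
```

Reporting the censored values as interval estimates ("at or below the smallest offer") keeps all-accept and all-reject subjects in the analysis without the million-dollar artifacts.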
Relevant answer
Answer
Please let me know if these references/sites are helpful to you:
1. Theories of Fairness and Reciprocity, by E. Fehr. Points out open questions and suggests avenues for future research; discusses the Ultimatum Game invented by Güth, Schmittberger and Schwarze (1982), and evidence from the Third Party Punishment Game and the PGG with punishment.
2. Minimally Acceptable Altruism and the Ultimatum Game, by J. J. Rotemberg (2004). Addresses experimental results of ultimatum and dictator games, including the point at which the proposer is indifferent to the allocation between the two parties, and low offers in a dictator game played with third parties.
Dennis
Dennis Mazur
  • asked a question related to Experimental Economics
Question
3 answers
I am trying to design an experiment and am considering whether I should use a control group in my design. There are about 200 participants in the experiment and 11 treatments to be tested. The participants will be given the same situation (sitting in a car in a traffic jam) and will be asked what they would choose if options such as bus, tram, bike and walking were available. I've finished the literature review and concluded that there are 11 factors that affect travelers' mode choice, and those factors should all be tested.
Which kind of experimental design would you recommend? A problem that I'm having now, is that I don't have more than 200 participants, and even a fractional factorial design with 2-level-treatments would need much more than 200 runs.
Another question is, how many runs/how many participants should be given the same set of treatment so that we can have a result which is acceptable?
Thank you for your help!
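For orientation, a back-of-the-envelope calculation of run counts for 11 two-level factors (the Plackett-Burman and fractional-factorial designs named are standard textbook objects; applying them here is my suggestion, and note they estimate main effects only, with interactions confounded):

```python
# Run counts for screening designs with 11 two-level factors.

k = 11
full_factorial = 2 ** k              # 2048 profiles: far too many to field
plackett_burman = 12                 # smallest PB design for 11 factors
fractional_2_11_7 = 2 ** (k - 7)     # 16-run resolution-III fraction

participants = 200
per_profile_pb = participants // plackett_burman   # responses per profile

print(full_factorial, plackett_burman, fractional_2_11_7, per_profile_pb)
# 2048 12 16 16
```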
Relevant answer
Answer
What, precisely, should your experiment reveal that is not already known?
You wrote that 11 factors are known to affect such a choice, and that you will test all of these. This "effect" must be measured relative to something, and this something is a control (situation/condition). Thus, to give the effect of any of these factors you have to relate the response to some control. Otherwise you can only compare the responses between different experimental conditions, which will become very messy with 11 factors, some of which might be continuous or have several different levels. And apart from this, some factors may interact (there are already 55 possible two-way interactions, not to mention higher-order interactions).
I think you should be much much more specific in your research question and then try to find an experimental design that addresses this specific question robustly and with high precision and gives an interpretable and practically useful answer.
  • asked a question related to Experimental Economics
Question
4 answers
Hello,
I am designing an experiment in economics that will have subjects do a real effort task. The task will be to find pairs of identical letters in a text.
This type of task has already been used in other experiments in economics, yet I am unable to find what text is usually given to subjects - I suppose there must be a "standard text" to permit greater comparability between studies, but I can't find anything.
Do you know what material is generally used for tasks where subjects have to find pairs of identical letters in a text?
Thank you.
Relevant answer
Answer
Ha, actually I read about the slider task a while ago but I completely forgot about it. I'm not sure about this one because I read an article which said that women perform worse than men in this task, which might induce a gender bias.
Anyway, thanks for the suggestion, I will definitely consider it.
  • asked a question related to Experimental Economics
Question
3 answers
I am a PhD student working on the determinants of firm performance. Among the independent variables I included investor sentiment. I use annual reports to build my data set. Could anyone explain how I can compute an investor-sentiment indicator using information from the balance sheet and the income statement?
Relevant answer
Answer
If you have data for firms in different countries, you could also add investors' perception of the risk of investing in each country, together with measures of corruption and of a good environment for doing business. You can find measures for all these variables in the International Country Risk Guide.
  • asked a question related to Experimental Economics
Question
3 answers
Has anyone ever conducted choice experiments on alternative contracting arrangements in farming or anything similar? For example, providing processors/traders with alternative contract choices with farmers. I would like to analyze which types of contracts are most attractive for farmers versus millers in order to stimulate better linkages through improved coordination. Any suggestions?
Relevant answer
Answer
Good day, Matty.
I have successfully designed and analyzed hierarchical choice modelling experiments for structuring and bundling telco service contracts for small-to-medium and large business clients. The hierarchical choice modelling approach might suit this problem type, as market contract choice might involve top-down hierarchical decisions, e.g. "I see you chose this <broad type> alternative; which <options/features> from this <pop-up menu> would you then choose?". A nice side-benefit is the generation of inclusive value estimates: an estimate of the value of including a sub-menu of options as a feature of a higher-level set of alternative choice attributes.
A few nice features of the nested choice designs or sub-menus are that they often need no additional experimental design and add little in terms of participant cognitive load, and therefore overall completion time. They can often be implemented as a 'tick one or more' list, optionally displayed if a participant chooses a higher-level alternative, often with explicit additional option prices and, for online surveys, a dynamic price or cost-implication summary of the boxes ticked. Lastly, they can model real choice contexts with better fit. Ever tried to buy a Subway (tm) sandwich? Ha!
I have been thinking about, but have not had success in gaining the funding to research, a similar problem of how best to structure self-funded irrigation schemes to optimize scheme success, and I thought the above approach could work. What do you think?
Cheers.
Damien
  • asked a question related to Experimental Economics
Question
10 answers
Most of the experiments are multi-player economic games.
Relevant answer
Answer
Well, there are a lot of options available, such as C++, Matlab, and Python, but the most suitable language depends upon the scenario and the requirements of the economic analysis. C++ has been suggested as a preferred language for economic analysis. Nevertheless, you may want to consider what your requirements are to assess the suitability of a programming language. Please refer to the following links for some references.
Hope this helps!
  • asked a question related to Experimental Economics
Question
2 answers
This is a very general question. I have seen a lot of repeated public goods games with different numbers of rounds: some have more than 20 rounds, some fewer than 10. To test public goods contributions under different mechanisms, do experimental economists have a rule of thumb to determine how many rounds is enough in one treatment? Thanks in advance.
Relevant answer
Answer
In my opinion, it is never a good idea to have a fixed number of rounds. If you do, you will have two different kinds of effects: learning effects in the starting rounds of the game and end effects in the final rounds of the game. In the final rounds, participants tend to behave more selfishly. Refraining from being too selfish makes sense if you do not want to destroy the prospects of long-term future cooperation. If the future is very brief, this incentive disappears. You can prevent end effects (to a certain extent) by not specifying the number of rounds N beforehand, but by terminating the game with a certain probability alpha per round. If you choose alpha=1/N, the expected number of rounds is N, but at every stage of the game the expected time horizon of the game is still the same.
I would make the (expected) number of rounds dependent on the complexity of the setting. In complex settings, it may take a long time before the players understand the situation and before they can make sense of the behaviour of the others. In other words, there will be a long learning phase, that is, it takes a long time before a group of players arrives at a stable pattern of behaviour. You can reduce this phase by changing the description of the game or by giving the players the opportunity of gaining experience (e.g. in separate training sessions). But be careful that this does not interfere with the intentions you have with your experiment. In very simple settings, it does not take more than 3 to 5 rounds before most groups settle at a stable behavioural pattern. If you are mainly interested in such equilibrium behaviour (studying the learning phase might actually be more interesting...), then it suffices to run, say, 10 rounds on average, giving you about 5 rounds for studying this equilibrium behaviour. Increasing the number of rounds often does more harm than good: participants get bored by the situation and start doing "strange" things just in order to create novel situations.
Franjo Weissing
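The random-termination rule described above can be checked in a short simulation (purely illustrative; N = 20 and the seed are arbitrary choices):

```python
# End the game after each round with probability alpha = 1/N, so the
# expected length is N while the continuation probability (and hence the
# expected time horizon) stays the same at every stage.
import random

def game_length(alpha, rng):
    rounds = 1
    while rng.random() > alpha:     # continue with probability 1 - alpha
        rounds += 1
    return rounds

rng = random.Random(42)             # fixed seed for reproducibility
N = 20
lengths = [game_length(1 / N, rng) for _ in range(100_000)]
print(round(sum(lengths) / len(lengths), 1))   # close to 20
```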
  • asked a question related to Experimental Economics
Question
4 answers
Hello everyone
The trust game (Berg 1995) is quite well known in experimental economics. Is there a game-theoretic analysis of player behavior in the repeated version of this game?
The paper of Berg:
[1] Berg, Joyce, John Dickhaut, and Kevin McCabe. "Trust, reciprocity, and social history." Games and economic behavior 10, no. 1 (1995): 122-142.
You can get the PDF from:
Relevant answer
Answer
You may try to check this meta-analysis from 2011:
Johnson, N. D., & Mislin, A. A. (2011). Trust games: A meta-analysis. Journal of Economic Psychology, 32(5), 865-889, http://noeldjohnson.net/www.noeldjohnson.net/Research_files/Trust%20Games.pdf
  • asked a question related to Experimental Economics
Question
4 answers
I am currently doing a literature review on this topic.
Relevant answer
Answer
Thanks Diego. I will check these links
  • asked a question related to Experimental Economics
Question
3 answers
The topic of my thesis is fairness among students in financial decision-making. We ran an experiment in which we surveyed fairness using the dictator game. I need advice on what to put in the practical part: how to handle the data, and what statistical methods I could use to analyze the results.
Relevant answer
Answer
Hi Stefan,
as Sacha said, I would simply propose to employ an analysis which was already done in similar investigations. So simply check the references your theory stems from and do the same. It is of course reasonable to expect the occurrence of an effect based on the assumptions, methods and analyses of previous studies.
However, if you want detailed advice beyond that, you should lay out your design, method and procedure. And... usually the choice of a specific statistical analysis should be made while designing the experiment. ;) This is a prerequisite for determining statistical power/sample size, and is a matter of scientific reliability. Furthermore, you could otherwise end up with a design to which no statistical test applies (it happened to me in my first-ever experiment :)).
Best, René
  • asked a question related to Experimental Economics
Question
3 answers
I am currently trying to design a variation of the original experiment, but have not been able to find a standard trust game file.
Relevant answer
Answer
Check this page. You can find a version of the trust game coded in z-Tree.
  • asked a question related to Experimental Economics
Question
7 answers
The paradox-of-thrift theory claims that if we spend our money we help the country achieve a high GDP, but if we spend less we harm the country's GDP, and therefore there will be crises.
So what if we overspend our money: will this harm the economy?
Relevant answer
Answer
Dear Aisha
Rob has answered your query in a very elaborate way, clarifying the pros and cons of the proposition.
I would, however, like to add to, as well as differ on, your hypothesis. The Keynesian 'paradox of thrift' is a generalization suggesting that, on a macroeconomic scale, the income generated in the process of production needs to be spent on the output so as to keep the equilibrium level of income and employment going. However, it would be heroic and naive to assume that overspending can accelerate income levels, because what also matters is the nature of the spending, such as whether it is on capital goods or on consumer goods. Equilibrium occurs with aggregate income being equal to output and aggregate spending. Rather than the quantum of spending, the nature of spending is very important from a long-term perspective.
There is one more thing. Overspending can lead to wastage and a false consumer self, like a narcissistic self. Overspending is not a way to maintain long-term equilibrium; rather, a blend of saving and spending is. If one wants to help the economy, she should save for her future and for her future spending by investing those savings in productive streams.
  • asked a question related to Experimental Economics
Question
3 answers
The topic of my thesis is students' perception of economic risk. We ran an experiment in which we investigated the perception of risk using lottery games. I need advice on what to put in the practical part: how to handle the data, and what statistical methods I could use to measure the perception of risk.
Relevant answer
Answer
Hello Otilia,
As I understand it, you are looking to use experimental game theory. For this you can refer to the studies of Blomfield (1995); Zimbelman and Waller (1999); King (2002); Fischbacher and Stefani (2007); and Bowlin et al. (2009). Those authors use experimental game theory in the audit field.
Attached some references:
Bowlin, K. 2011. Risk-Based Auditing, Strategic Prompts, And Auditor Sensitivity To The Strategic Risk Of Fraud. The Accounting Review, 86, 1231-1253.
Blomfield 1995. Strategic Dependence And Inherent Risk Assessment. The Accounting Review, 70, 71-90.
Zimbelman, M. F. & Waller, W. S. 1999. An Experimental Investigation Of Auditor-Auditee Interaction Under Ambiguities. Journal Of Accounting Research, 135-155.
For example, Zimbelman and Waller (1999), after collecting data in an experiment, used ordinary least squares (OLS) and logistic regressions to analyze the results.
I hope that I have answered your questions. I am sorry, I am not an economist; I am an accounting researcher interested in using game theory for accounting and tax purposes.
All the best
  • asked a question related to Experimental Economics
Question
4 answers
According to Jeffrey M. Wooldridge (Introductory Econometrics), experimental data are often collected in laboratory environments in the natural sciences, but they are more difficult to obtain in the social sciences; he adds that "although some social experiments can be devised, it is often impossible, prohibitively expensive, or morally repugnant to conduct the kinds of controlled experiments that would be needed to address economic issues". I would like to know whether Wooldridge's suggestion can be refuted.
Relevant answer
Answer
Very interesting question. There is no reason at all that social experiments are always disputable on ethical grounds. First of all, there are always unintended experiments, simply because there are policy variations across jurisdictions, etc. These are quasi-experiments (see e.g. how we studied reforms of electricity and other sectors in Florio M., 'Network Industries and Social Welfare: The Experiment that Reshuffled European Utilities', OUP, 2013). Second, in some cases the policy maker or the analyst doesn't attach any ethical value to one variant over the other, and is truly uncertain about outcomes; you can think of a lot of examples. And think of experiments with placebos in medicine, which are much more widespread, despite potential ethical issues. It is true that social experiments are costly, but they would be much more informative than laboratory experiments in economics, which are sometimes cheap and silly. In my opinion the true reason why there aren't many social experiments is legal, not ethical: social planners simply do not know how to write legislation in a form that allows social experiments, and do not want to take risks. Thus, this is often much more a political-economy issue than an ethical one.
  • asked a question related to Experimental Economics
Question
3 answers
The EMH(s) are statements about prices of stocks etc. My question is not about that.
Relevant answer
Answer
I think that current markets aren't very optimal. Beyond the frictions and costs that you hinted at, there are other problems with markets. Michael J. Sandel from Harvard has a good book named What Money Can’t Buy, in which he critiques markets from a moral point of view, and all of his claims are worth thinking about.
But we must recognize that markets are already a good venue for exchange and that we have no better alternative. I think that change in the market concept will be evolutionary rather than revolutionary.
  • asked a question related to Experimental Economics
Question
5 answers
My answer is presented in the book "S-economics", which describes the model of normalized economic mechanism based on e-services.
Relevant answer
Answer
Dear Alexander,
Today bankers are not interested in implementing PEBs, because they will lose the possibility of unauthorized use of clients' money. 
  • asked a question related to Experimental Economics
Question
22 answers
I am conducting an experiment on a public goods dilemma with groups of 4. My design has two kinds of treatment: an experimental treatment and a control treatment. However, I have very limited financial support, so I am wondering how many participants I should invite to my experiment. So far I have collected 8 groups (32 participants) for the experimental treatment and 7 groups (35 participants) for the control treatment, so the total number is 68 persons. Is that enough?
I have checked many related papers; many of them have more than 100 participants, while some have only 64 subjects, for example "Climate change in a public goods game: Investment decision in mitigation versus adaptation".
By the way, the result of the experiment with the current data is pretty good and it has verified my hypothesis.
Any idea will be most helpful.
Yours sincerely
Joanna Zhang
____________________________________
With so many excellent answers, I have learned a lot, thank you!
Relevant answer
Calculate the sample size you need; sample size calculators are available.
Good luck
Béatrice
  • asked a question related to Experimental Economics
Question
2 answers
This article in The Economist should be addressed by those of us who conduct research in the field and the lab in behavioral & environmental sciences. Except for simulations, I see less than ideal replications of key studies. If anyone can share replications we could respond better to The Economist´s challenge.
Relevant answer
Answer
See also this article from The Guardian on the problem of replicability in psychology. It presents a nice summary of how psychology is changing.
  • asked a question related to Experimental Economics
Question
19 answers
What is the essence of the ceteris paribus assumption in the dynamic world we live in? Must we continue to make unrealistic assumptions knowing the environment we reside in is never static?
Relevant answer
Answer
Denis makes sense to me.
In economic models, there are endogenous variables and exogenous variables. The model is in essence a theory about how the endogenous variables behave. The theory is based on the view that the exogenous variables determine the endogenous variables. The consumption behaviour of a group of people, for example, is determined by their incomes, the prices of the consumption goods they buy, and environmental factors like the weather that influence their consumption behaviour. A natural question to ask in a model of this sort is this: how would a change in one exogenous variable affect the endogenous variables? The natural way to answer such questions is to change the exogenous variable of interest, holding the other exogenous variables constant, and then observe what the model predicts with respect to the endogenous variables. In other words, the ceteris paribus assumption is what one needs to interrogate or explore a theory in which changes in certain variables are thought to be caused by changes in other variables.
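The ceteris paribus logic can be put into a tiny comparative-statics sketch (the linear consumption function and its coefficients are illustrative assumptions, not from any particular model):

```python
# Endogenous variable (consumption) as a function of exogenous variables.

def consumption(income, price, weather_index):
    return 10 + 0.6 * income - 2.0 * price + 0.5 * weather_index

# Change ONE exogenous variable (income), holding price and weather fixed:
base = consumption(income=100, price=5, weather_index=1)
bumped = consumption(income=101, price=5, weather_index=1)
print(bumped - base)   # about 0.6: the ceteris paribus effect of income
```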
  • asked a question related to Experimental Economics
Question
16 answers
I would like to know the mechanics behind such an observation/situation.
Relevant answer
Answer
The F-statistic in a multiple linear regression weighs the reduction in the residual sum of squares (RSS) achieved by adding a number of covariates to the regression against the additional parameters needed to estimate their effects (the overall F-statistic compares the full model against the null model, which includes only an intercept and no covariates). If the overall reduction in RSS is moderate and the number of parameters large, the F-statistic may not be significant even if one or more of the betas are. With a single variable, the t-test of the hypothesis that beta=0 and the F-test for the simple linear regression are equivalent, and a significant beta implies a significant F-statistic. The more "insignificant" or random explanatory variables you add to the model (i.e. the larger p, the number of parameters, becomes), the less likely it is that the reduction in RSS achieved by the one significant variable justifies the addition of many (p) variables.
For example, if the null model (intercept only) has RSS_null = 20 and a simple linear regression with variable x1 has RSS = 15, the F-statistic becomes:
F = [(RSS_null - RSS) / (p - 1)] / [RSS / (n - p)]
With p = 2 and, for example, n=20:
F = 6
which is significant with a p-value of 0.0248
If adding a single random variable x2 to the model does not result in an appreciable reduction in RSS (say RSS=14.5), the overall model with p=3 is no longer significant:
F = 3.22
p = 0.0650
Thus the addition of random explanatory variables essentially "dilutes" the significant effect of a single important variable to the point that the overall model is no longer significant.
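The arithmetic in this example can be reproduced directly (computing the p-values would additionally require the F distribution's tail probability, which is omitted here):

```python
# F = [(RSS_null - RSS) / (p - 1)] / [RSS / (n - p)]

def f_statistic(rss_null, rss, n, p):
    return ((rss_null - rss) / (p - 1)) / (rss / (n - p))

n, rss_null = 20, 20.0
f1 = f_statistic(rss_null, 15.0, n, p=2)   # one covariate
f2 = f_statistic(rss_null, 14.5, n, p=3)   # plus one noise covariate
print(round(f1, 2), round(f2, 2))   # 6.0 3.22
```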
  • asked a question related to Experimental Economics
Question
10 answers
Friedman and Sunder defined experimental data as "data deliberately created for scientific (or other) purposes under controlled conditions", and laboratory data as "data gathered in an artificial environment designed for scientific (or other purposes)." Based on these definitions, I would like to know if experimental data are in anyway different from laboratory data. Where can the boundary be identified if they actually differ?
Relevant answer
Answer
I guess some hints can also come from the use of data generated by computer software (e.g. the atomic coordinates of an atomic assembly).
Astonishingly, those can be considered "experimental data". Can they be considered laboratory data? It's a matter of the meaning of the words: a computer program is in any case an "artificial environment" (even if not a physical one), and we "gather" the data from a computer, even if that is different from "recording" them.
  • asked a question related to Experimental Economics
Question
3 answers
I'm looking for data from prisoner's dilemma experiments in which participants played only one round of the game. A closely related experiment, which I found, is Goeree, Holt and Laury (J Pub Econ 2002) where participants play ten one-shot games without feedback between games (hence, no learning effects).
Relevant answer
Answer
Correct
  • asked a question related to Experimental Economics
Question
2 answers
Relevant answer
Answer
Thanks Richard, an update on this. We have tried to use a GEE approach as you advocate to deal with the autocorrelation. Our model is as follows
geeglm(Nash_decision~fishery+treatment+state_of_resource+period+Experience+Cumulative, family=binomial(link="logit"), data=data1, id=playerno., corstr = "ar1")
We recoded player and session (playerno.) to indicate a different player in each session (i.e. 1a-8a for players 1-8 in session 1, and then 1b-8b for players 1-8 in session 2) and included period (i.e. round) in the model to account for the time effect. We assume that playerno identifies the individual sample units in our dataset.
What we have found through testing however is that the QIC score is the same no matter what correlation matrix we specify (ar1 or exchangeable or independence).
geeglm(Nash_decision~fishery+treat+period+state_of_resource+Experience+Cumulative, family=binomial(link="logit"),
+ data=data1, id=playerno, corstr = "ar1")
> gee2 = update(gee1, corstr = "independence")
> gee3 = update(gee1, corstr = "exchangeable")
> sapply(list(gee1, gee2, gee3), QIC)
[,1] [,2] [,3]
QIC 871.02 871.02 871.02
Quasi Lik -434.02 -434.02 -434.02
Trace 1.49 1.49 1.49
px 1296.00 1296.00 1296.00
Also, there is no autocorrelation (checked with the ACF) in any of these cases, but if we remove period, autocorrelation arises at some lags. Does this suggest that the correlation between rounds is not very strong, so that including period as a factor is enough to deal with it? Is it a problem that the QIC is the same across the same model with different correlation structures?
Also, do we need to consider the nesting factors (i.e. rounds nested within sessions)? We have read that you can ignore a nested design by specifying the correlation structure correctly in GEE. Is this correct?
Thanks!
  • asked a question related to Experimental Economics
Question
3 answers
Studies have shown that people prefer positively skewed gambles but does that make them more risk loving too?
Relevant answer
Answer
You can find the best answers to your question in the following:
1) Keeney, R., & Raiffa, H. (1976). Decisions with Multiple Objectives: Preferences and Value Tradeoffs. John Wiley & Sons.
2) Kahneman, D., & Tversky, A. (1979). Prospect Theory: An Analysis of Decision under Risk. Econometrica, 47(2), 263-291. I think you can find both of these on the Web.
  • asked a question related to Experimental Economics
Question
5 answers
Controlled laboratory experiments are used (i) to explore individual behavior and (ii) to test theories about individual behavior. A lot of anomalies (endowment effect, context dependence, influence of irrelevant alternatives or framing) are found, not to mention non-material incentives and social preferences. In most agent-based macro-model, the focus is on fluctuations at the macro-level (cyclical behavior of prices, wage-profit-cycles, wealth distribution in econophysics models) which emerge from individual interactions governed by simple rules of thumb. I would like to know more about how the empirical insights from behavioral economics can be taken into account for modeling the behavior of individual agents in agent-based macro-models. Given the variety and complexity of individual behavior found in controlled laboratory experiments, how should the individual behavior of agents be modeled?
Relevant answer
Answer
For a few years now we have been pursuing an endeavor based on the interplay between experimental economics and agent-based economics, to study how to model artificial agents using human behavior. You will find the paper on:
  • asked a question related to Experimental Economics
Question
4 answers
I am trying to understand the different motivations of Game Theorists, Experimental Economists, Agent Based Computational Economists, and other agent based modellers (e.g. Social Scientist, Psychologists) for using the "Public Goods" games during their investigations.
I have two categories of questions for the different groups:
(1) What are you trying to learn from it? What kind of question are you trying to answer? Are your answers case-based or generic?
(2) How do you collect evidence for accepting or rejecting your hypotheses? Which metrics do you use? Do you normally focus on providing average outputs (result) or are you also interested in collecting information about the evolution of the system over time (time plots)?
In your response please do not forget to state which group you belong to ;-).
Many thanks for your help!
Relevant answer
Answer
The specifics of an experiment depend on what you want to observe. If you want to look at intuitive reactions, you would do best to look at quick decisions and compare them with slow ones [1]. If you are interested in seeing how a group evolves, then you want to run an iterated public goods game. And so on.
As for the general motivation why people keep running experiments on the public goods game as well as other social dilemmas, the reason is that social dilemmas formalise many fundamental situations in our society, from the depletion of natural resources to climate protection, from security of energy supply to price competition in markets, workplace collaboration, political participation, and international relations. Consequently, understanding how and why cooperation can evolve in a social dilemma is a topic of primary importance.
The standard model (Nash equilibrium) predicts that humans contribute nothing to the public good. Fortunately, however, this is not what happens. A large body of experimental studies has by now confirmed that humans do contribute something, and the amount of the contribution typically depends on the parameters of the game (marginal return and size of the group). These data are so regular and robust that people are becoming more and more convinced that cooperation also has a strategic component that can be captured by suitable models [2],[3].
References:
[1] Rand DG, Green JD, Nowak MA (2012) Spontaneous giving and calculated greed. Nature 489, 427-430.
[2] Charness G, Rabin M (2002) Understanding social preferences with simple tests. The Quarterly Journal of Economics 117 (3), 817-869.
[3] Capraro V (2013) A Model of Human Cooperation in Social Dilemmas. PLoS ONE 8(8): e72427.
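The strategic tension described above can be made concrete with the standard linear public goods payoff. This is a generic textbook specification, not code from any of the cited papers, and the endowment and marginal per-capita return (MPCR) values are illustrative.

```python
# Linear public goods game: each of n players receives endowment e and
# contributes c_i; payoff_i = e - c_i + m * sum(c), where m is the
# marginal per-capita return (MPCR). Parameter values are illustrative.
def payoff(contributions, i, endowment=20.0, mpcr=0.4):
    return endowment - contributions[i] + mpcr * sum(contributions)

# With 0 < mpcr < 1, raising your own contribution by 1 costs 1 but
# returns only mpcr, so free-riding is a dominant strategy...
others = [10.0, 10.0, 10.0]
print(payoff([0.0] + others, 0))   # free-riding pays 32.0
print(payoff([20.0] + others, 0))  # full contribution pays 20.0

# ...yet universal contribution beats universal free-riding whenever
# mpcr * n > 1 (here 0.4 * 4 = 1.6). That gap is the social dilemma.
print(payoff([0.0] * 4, 0), payoff([20.0] * 4, 0))
```

The Nash prediction of zero contribution follows directly from the first comparison; the experimental regularity mentioned above is that observed contributions rise with the MPCR and group size despite it.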
  • asked a question related to Experimental Economics
Question
5 answers
I am in a project on estimating economic values of some health activities related to the environment. We have already designed the choice cards (sets) within 3 blocks based on the optimal efficiency approach. However, we are still looking for the size of sample and the sampling strategy that is appropriate for this study (the population size is infinite). Any idea?
Relevant answer
Answer
MINIMUM SAMPLE SIZE
There are two scenarios: (i) known population size N, and (ii) unknown population size. In scenario (i), population proportion is the most common approach, i.e. Cochran's equation, Yamane's equation, or Scheaffer et al. However, the question concerns an "infinite" population, i.e. unknown population size. The explanation for scenario (ii) follows:
MINIMUM SAMPLE SIZE WITH UNKNOWN POPULATION SIZE
For an infinite population, i.e. an unknown or constantly changing population, the minimum sample size may be determined by: n = [Z squared times sigma squared] divided by the sampling error squared. Formally, it is given by:
(1) n = [(Z^2)s^2] / E^2
where ...
Z = standardized score at half-alpha;
s = population standard deviation
E = sampling error
STANDARDIZED SCORE AT HALF ALPHA: Z(a/2)
Use the Z-equation to determine the observed value of Z(a/2) and compare this observed value to the standard value. For a 0.95 confidence interval, the value of Z(a/2) is 1.96; see the Unit Normal Distribution Table. The Z-equation is given by:
(2) Z(a/2) = (x^ - mu) / (s / sqrt(n))
where ...
Z(a/2) = 1.96;
n = initial sample size, i.e. 20, 30, 50, etc.
x^ = initial sample mean
s = population standard deviation: sigma
POPULATION STANDARD DEVIATION: SIGMA
The population standard deviation may be determined via the t-equation:
(3) t = (x^ - mu) / (S / sqrt(n))
where ...
mu = estimate population mean
S = initial sample standard deviation
n = initial sample size
Rearranging the equation to solve for the assumed population mean mu ...
(4) mu = x^ - t(S / sqrt(n))
where ... t(a/2) = 1.83
NOW, solve for s (population standard deviation: sigma) in the Z-equation. By rearranging the Z-equation to express it in terms of sigma:
(5) s = [(x^ - mu) / Z] sqrt(n)
With known Z(a/2) from the observation in the initial sample(s) and "sigma": s, now put it into equation (1).
SAMPLING ERROR: E^2
Sampling error is defined as the error between the measurements of the sample and those of the assumed population. Its exact value is not known since N is unknown; we can only work with an estimate. The sampling error is given by:
(6) E = S / sqrt(n)
where ...
S = Sample standard deviation
n = sample size
SAMPLING METHOD
Statements (1) through (6) may be used to determine the minimum sample size where N is infinite. In so doing, one asks: "What is the proper sampling method?" The sampling method employed may take the form of a bootstrap process: resampling via a series of small samples in order to obtain a fair characterization of the population. The following may provide some guidance:
Assume that in a sample space there exists a population N whose size is not known or infinite, and that a series of samples in a set T: {t1, t2, ..., tn} of various sizes is taken from the unknown pool N. Each sample from the resampling set {t1, t2, ..., tn} may produce the following statistics:
t1: {mean t1, var, std}
t2: {mean t2, var, std}
...
tn: {mean tn, var, std}
NOW, do a d-bar (paired-difference) analysis to test whether the population is homogeneous. If d-bar shows that the differences in means {(mean t1 - mean t2), (mean t2 - mean tn), ..., (mean tn-1 - mean tn)} are not significant, then the population is homogeneous. If the d-bar test shows a significant difference, continue with the resampling to double-check whether the population is indeed non-homogeneous. If subsequent bootstrap sampling confirms the characterization (homogeneity or otherwise), then collect the means from the bootstrap and use them as a single set of means, i.e. treating each mean as a single observation: mean tn = x*. Set x* = x^ and calculate the minimum sample size using statements (1) through (6), following the steps outlined above. Note that x^ is the mean of a single sample and x* is the mean of a set of bootstrap means.
ASSUMPTION
The above explanation assumes that the population is normally distributed. However, if the population is not normally distributed, the researcher is advised to use a chi-square test for a single initial sample, or an F-test for k samples, i.e. if a bootstrap process is used. To confirm whether the population may be normally distributed, see the Anderson-Darling test (A-squared test).
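Equation (1) is straightforward to compute directly. The sketch below implements it in Python; the Z, sigma, and E values are illustrative assumptions, with sigma imagined as coming from a pilot sample as described above.

```python
import math

def min_sample_size(z, sigma, error):
    """Minimum n for estimating a mean when N is unknown/infinite:
    n = (Z^2 * sigma^2) / E^2, rounded up to a whole respondent."""
    return math.ceil((z * sigma / error) ** 2)

# Illustrative values: 95% confidence (Z = 1.96), a pilot-based
# sigma estimate of 12, and a tolerated sampling error of 2.
print(min_sample_size(1.96, 12.0, 2.0))  # → 139
```

Note how sensitive the result is to the tolerated error: halving E quadruples the required sample, since E enters the formula squared.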
  • asked a question related to Experimental Economics
Question
15 answers
Opinion dynamics seeks to model both exchange and processing of information in a population of individuals. However, I haven't found evidence that validates the predictions of these models in real populations.
Relevant answer
Answer
Using one of my models I successfully predicted the very unexpected rejection of the 2005 French referendum on the project of a European Constitution. I had even warned beforehand, a few years earlier, of the danger of holding referendums about the European construction. A piece from the conclusion of a 2004 paper:
"Applying our results to the European Union leads to the conclusion that it would be rather misleading to initiate large public debates in most of the involved countries. Indeed, even starting from a huge initial majority of people in favor of the European Union, an open and free debate would lead to the creation of a huge majority hostile to the European Union. This provides a strong ground to legitimize the on-going reluctance of most European governments to hold referendum on associated issues."
From: S. Galam, « The dynamics of minority opinion in democratic debate », Physica A 336 (2004) 56-62
Using another model I predicted in 2004 the repetitive occurrence of hung elections at fifty/fifty, and indeed it did occur. From the 2004 conclusion:
"Accordingly the associated “hanging chad elections” syndrome could become of a common occurrence in the near future."
From: S. Galam, « Contrarian deterministic effect: the hung elections scenario », Physica A 333 (2004) 453-460
But of course those predictions do not validate the models of opinion dynamics as proved models but instead validate the approach behind them. I refer to my book "Sociophysics, A Physicist's Modeling of Psycho-political Phenomena" published by Springer in 2012 in which I discuss lengthy those questions.
One paper in French is attached. Unfortunately the system does not allow more attachments
  • asked a question related to Experimental Economics
Question
5 answers
Among Vernon Smith's precepts for valid microeconomic experiments is 'dominance', whereby the payoff function needs to be sufficiently peaked so as to more-than offset the psychological costs of supplying the null-hypothesis response.
Yet, dominance is not a consistent feature of current behavioral economics and experimental economics. Experiments designed specifically to satisfy dominance are the exception, rather than the rule. At first sight, this appears to be cause for concern, at least from a conceptual standpoint.
In your experience, why has the profession (editors and referees) allowed the dominance precept to fall into the "not required" category, and why is it a superfluous methodological requirement -- or is it?
Relevant answer
Answer
Eric,
So the dominance requirement -- which is a methodological, experimental design condition on the 'peakedness' of the payoff function when testing null hypotheses derived from economic theories -- has tumbled down the priority list because of more general questions about whether incentives matter at all.
This seems eminently plausible.
I would also conjecture that it has fallen by the wayside because no recognised general-purpose procedure exists with which dominance may be verifiably implemented in our experimental designs. And many of our experimental design favourites -- such as the Random Lottery Incentive scheme -- create separate impediments. The closest approximation seems to be the inclusion of a "multiply payoffs by the factor X" treatment condition, where X is set sufficiently high (e.g. 10, or 100) so as to pre-empt any criticisms citing insufficient peakedness of the payoff function.
Incidentally #1: I particularly like the Hardnose paper/result. At first glance, there would seem to be some scope for trying this sort of approach in the decision making under risk and uncertainty context as well.
Incidentally #2: the Hardnose result reminds me of Winking & Mizer (2013) Natural-field dictator game shows no altruistic giving, Evolution and Human Behavior. These authors erase altruistic giving with a design that involves an experimenter with a bag of casino chips, a confederate, and an unaware member of the public standing at a Las Vegas bus stop.
  • asked a question related to Experimental Economics
Question
5 answers
A continuous action space means choosing any value in a range (e.g. anywhere in the range $0-$10, as opposed to choosing only between $0 and $10).
Relevant answer
Answer
Bikhchandani, Hirshleifer, & Welch (1992) cascades/herds are predicated on a binary action space. Todorova & Vogt (2012) show that this effect disappears when the action space granularity is increased from 2 to 1001. So, with regard to information aggregation in an investment setting, the relevant transition in observable behavior occurs well before the infinite limit of a continuous action space is reached.
  • asked a question related to Experimental Economics
Question
3 answers
Together with some of my colleagues, we are designing a decision experiment in three waves (with intervals of approximately 6-8 months between waves). In each experimental session we would like to gather answers from the same group of respondents. At the end of the process we expect a sample of 40 people. There will probably be some attrition, so the initial sample should be bigger than 40.
Did any of you face similar problem? Do you have any suggestions – based on your expertise or literature – on what size the initial sample should be?
I really appreciate any help you can provide.
Relevant answer
Answer
re: Pauline's answer: Yes, of course, I didn't mention it but you should always start w/ a power analysis to determine what your ideal final sample size would be. Also, if there are other groups that have done longitudinal studies on a similar population you can ask them for estimates of attrition, but bear in mind that specific social factors in one sample or another (e.g., what city you're in; how difficult it is to travel to your campus; how you're reimbursing participants) can greatly influence attrition in any particular study. So don't rely too much on generalizations from previous studies.
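One common back-of-the-envelope step after the power analysis is to inflate the target final sample for expected attrition. The sketch below assumes a constant per-wave dropout rate, which is a simplification, and the 15% rate is purely illustrative; as the answer above notes, actual attrition depends heavily on local factors.

```python
import math

def initial_sample(final_n, attrition_per_wave, n_followup_waves):
    """Initial recruitment needed so that, after losing a fixed
    fraction of participants at each follow-up wave, at least
    final_n remain. Rates are planning assumptions, not estimates."""
    retention = (1.0 - attrition_per_wave) ** n_followup_waves
    return math.ceil(final_n / retention)

# Three-wave design (two follow-ups), target of 40 completers,
# assumed 15% dropout per wave.
print(initial_sample(40, 0.15, 2))  # → 56
```

So under these assumptions one would recruit roughly 56 people in wave one to end with 40 in wave three.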
  • asked a question related to Experimental Economics
Question
10 answers
Experimental auctions can be used to measure the value of quality characteristics of agricultural goods. Trust games can be used to assess the impact of institutional innovations (e.g. contracting) between stakeholders. Any other ideas?
Relevant answer
Answer
In the context of global value chains, I know the OECD has released some research on the role of SMEs. That might lend some ideas to your work, Matty. If you've come across any studies on upstream & downstream activities in agricultural value chains could you let me know. Thanks.
  • asked a question related to Experimental Economics
Question
7 answers
I am developing a questionnaire to assess the QALY gain/loss attached to a (temporary, short-term) procedure. We are considering TTO (both standard and waiting-time trade-off), but I cannot find much in the literature about how to ensure the questions posed are valid and will return useful values. It was suggested that I look to the field of experimental economics, but I have so far failed to find much of use. Can anyone offer any advice/evidence/publications?
Thanks
Relevant answer
Answer
Thanks so much, I really appreciate your help. Caz
  • asked a question related to Experimental Economics
Question
4 answers
I am using the game to evaluate trust in Iran. Is it a good tool for this? And are there alternatives to this game?
Relevant answer
Answer
What the trust game measures is an ongoing controversy among many economists. Early results showed that the trust game does not measure trust at all, but rather trustworthiness (Glaeser et al. (2000), "Measuring Trust"). More recent research suggests that the trust game measures the beliefs component of trust (how likely it is that an unknown other will cheat you). See, e.g., Sapienza, Toldra and Zingales (2012) "Understanding Trust"; or Butler, Giuliano and Guiso (2012) "Trust and Cheating."
  • asked a question related to Experimental Economics
Question
6 answers
Laboratory economic experiments
Relevant answer
Answer
Overconfidence is actually tricky to measure ex ante, and how to measure it will depend on which definition of overconfidence you want to use. E.g., if you are interested in overconfidence defined as believing your information is more precise than it is, then you could simply elicit each participant's belief distribution for an objectively known random event (e.g., a state lottery outcome) and compare the elicited distribution to the actual distribution using some measure of dispersion (e.g., variance) for statistical testing. If you are interested in a more intuitive measure of overconfidence, then you could match pairs of participants to compete on a task where performance is orthogonal to ability, so that objectively there is a 50/50 chance (or whatever probability you want to design) of winning. Then you could elicit participants' beliefs about their chances of winning. Labeling individuals who report values sufficiently higher than 50/50 as overconfident would be warranted here, I think, where the definition of "sufficiently" is up to the experimenter.
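Both measurement strategies sketched above reduce to simple comparisons once the beliefs are elicited. The following is a hypothetical illustration: the function names, the 0.1 margin, and the toy data are my own choices, not part of any standard instrument.

```python
import statistics

# Measure 1 (assumed setup): compare the variance of a participant's
# elicited belief distribution to the variance of the objective one.
# Beliefs tighter than the truth suggest overprecision.
def overprecise(elicited_values, objective_values):
    return statistics.pvariance(elicited_values) < statistics.pvariance(objective_values)

# Measure 2 (assumed setup): in a matched-pair task with an objective
# 50% chance of winning, label a participant overconfident if the
# reported win probability exceeds 0.5 by more than a chosen margin.
def overconfident(reported_p_win, margin=0.1):
    return reported_p_win > 0.5 + margin

# Toy example: tightly clustered beliefs vs. a dispersed true outcome.
print(overprecise([4.9, 5.0, 5.1], [3.0, 5.0, 7.0]))  # → True
print(overconfident(0.75))                            # → True
```

In practice the margin (or the dispersion test) would be replaced by a proper statistical test at the experimenter's chosen significance level.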
  • asked a question related to Experimental Economics
Question
3 answers
Can experiments be used to study long term group decisions of organizations?
Relevant answer
Answer
It would be great if Paul Ojeaga could actually give some examples of the many studies in this area that he is aware of.