Science topic

Decision Theory - Science topic

A theoretical technique utilizing a group of related constructs to describe or prescribe how individuals or groups of people choose a course of action when faced with several alternatives and a variable amount of knowledge about the determinants of the outcomes of those alternatives.
Questions related to Decision Theory
  • asked a question related to Decision Theory
Question
1 answer
For example, I have Y ~ N(mean, variance), and then an expected loss for each parameter: E[L(mean, d*)] for the mean and E[L(variance, d*)] for the variance. I want a measure that integrates both expected losses.
Relevant answer
Answer
I got it. We can do it via the expected generalised loss function, weighted or unweighted. One of the earliest works is here: https://www.jstor.org/stable/2958163
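To make the idea concrete, here is a minimal Python sketch (not taken from the cited paper) of a weighted generalised expected loss under squared-error loss. The posterior distributions, the weights, and the function names are all illustrative assumptions.

```python
import numpy as np

# Hypothetical posterior draws for the two parameters (the distributions
# and all names here are illustrative assumptions, not from the cited paper).
rng = np.random.default_rng(42)
mean_draws = rng.normal(2.0, 0.1, size=5000)   # posterior draws of the mean
var_draws = rng.gamma(50.0, 0.03, size=5000)   # posterior draws of the variance

def expected_loss(draws, d):
    # Squared-error loss L(theta, d) = (theta - d)^2, averaged over the posterior.
    return np.mean((draws - d) ** 2)

def generalised_loss(d_mean, d_var, w_mean=0.5, w_var=0.5):
    # Weighted combination of the two expected losses.
    return (w_mean * expected_loss(mean_draws, d_mean)
            + w_var * expected_loss(var_draws, d_var))

# Under squared-error loss each term is minimised at the posterior mean,
# so the combined loss is minimised at (mean_draws.mean(), var_draws.mean()).
best = generalised_loss(mean_draws.mean(), var_draws.mean())
worse = generalised_loss(mean_draws.mean() + 1.0, var_draws.mean())
print(best < worse)  # True
```

The weights w_mean and w_var encode how much you care about each parameter; choosing them is the substantive modelling decision, not a mechanical one.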
  • asked a question related to Decision Theory
Question
22 answers
Such a system can be a Business Intelligence analytical platform connected to a Big Data database system, where information from the Internet is collected, processed and analyzed, including comments entered by Internet users into social media portals.
On the basis of this data, analytical reports are created in the Business Intelligence system describing changes in interest in, and consumer preferences for, specific products and services, as well as changes in the assessment of the brand of the company that offers a specific product or service to the market.
These reports can be very valuable in the business management process; for example, they can support decision-making in production planning, in the distribution process, and in the organization of sales over the Internet in the form of e-commerce.
Do you agree with me on the above matter?
In the context of the above issues, the following question is valid:
How to build a decision support system in the field of selling on the Internet, online store, e-commerce?
Please reply
I invite you to the discussion
Thank you very much
The issues of using information contained in Big Data database systems for the purposes of conducting Business Intelligence analyses are described in the publications:
I invite you to discussion and cooperation.
Best wishes
Relevant answer
Answer
Dear Gioacchino de Candia,
Yes, that's right. The issue of collecting and processing large sets of information on Big Data Analytics platforms is particularly crucial in the context of the discussed issues.
Thank you, Regards,
Dariusz Prokopowicz
  • asked a question related to Decision Theory
Question
1 answer
For this paper: A linguistic distribution behavioral multi-criteria group decision making model integrating extended generalized TODIM and quantum decision theory
How can I access the MATLAB code of this article?
How can I replace my data in this article?
Relevant answer
Answer
Dear Fahime Tlbl,
The best way is to contact the first author of the underlying article and request the code.
Good Luck
  • asked a question related to Decision Theory
Question
6 answers
It seems that there is, more or less, some sort of consensus on academic standards. Who is responsible for drawing up the guidelines that shape the way academia functions? Who do you think sets the standards for research publishing in influential journals?
Recommendations by ordinary researchers? Decisions by elite researchers? Do policy makers have a say in this? What connects these academic decision makers, whether individuals or institutions, and governs them?
I would appreciate your views. Thanks!
Relevant answer
Answer
"Informality in Metagovernance"! So that's what it's called. This is so intriguing, Remi. Thanks!
  • asked a question related to Decision Theory
Question
3 answers
Hello,
I am studying how WOM (word of mouth) can affect customers' purchase decisions in a specific industry. Purchase decision theory already exists; I am studying whether WOM can affect the purchase decision, and I have created a hypothesis. For data collection I am only using a questionnaire; my research is quantitative.
So, should the research approach be deductive or abductive? Should I remove the hypothesis to avoid any confusion? In addition, which kind of research philosophy would be most suitable?
Relevant answer
Answer
You will have to use the hypothesis and make it a deductive approach. You could further test your model with the CFA (confirmatory factor analysis) approach.
  • asked a question related to Decision Theory
Question
3 answers
What enhancements have been made to MCDM methods for air pollution problems? Can we adapt decision theory to support work on air pollution?
Relevant answer
Answer
Thank you Dear professors Nolberto Munier and Theo K. Dijkstra
  • asked a question related to Decision Theory
Question
4 answers
Greetings, researchers. Given that there are several policy models in science, technology and innovation, I need literature that measures the impacts of these policies on the decision perspective of the researcher, since influences from the media, politicians, society and academia define what and how to research, which ultimately determines the technological trajectory.
Relevant answer
Greetings, researcher.
In view of the framework of national systems of innovation, efforts should be made to identify, stratify and qualify the research against the culture/rationality of the respective researchers.
  • asked a question related to Decision Theory
Question
25 answers
Is Shannon entropy a good technique for weighting in multi-criteria decision-making?
As you know, we use Shannon entropy to weight criteria in multi-criteria decision-making.
I think it is not a good technique for weighting in the real world because:
It uses only the decision matrix data.
If we add new alternatives, the weights change.
If we change the period of time, the weights change.
For example, suppose we have 3 criteria: price, speed, safety.
Over several periods of time the weights of the criteria vary.
For example, if our period of time is one month:
This month price may get 0.7 (speed = 0.2, safety = 0.1).
Next month price may get 0.1 (speed = 0.6, safety = 0.3).
This is against reality! What is your opinion?
Relevant answer
Answer
Once I was working with several variables and I wanted to weight them. People usually say that we had better administer a questionnaire and then define the weights for the variables through AHP, ANP or other related methods. That is quite common, but what about the bias of those who fill in the questionnaire? I therefore looked for other methods to weight variables based on reality, and I came across entropy. In fact, I weighted the variables using each of these methods and then compared the results. The entropy results were much closer to what goes on in the real world.
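For readers who want to see the instability concretely, here is a small Python sketch of the Shannon entropy weighting method. The decision matrix values and the function name are illustrative; note how adding a fourth alternative changes the criterion weights even though the criteria themselves have not changed.

```python
import numpy as np

def entropy_weights(X):
    """Shannon-entropy criterion weights from a decision matrix
    X (m alternatives x n criteria, all values > 0)."""
    X = np.asarray(X, dtype=float)
    m = X.shape[0]
    P = X / X.sum(axis=0)                          # column-normalise each criterion
    e = -(P * np.log(P)).sum(axis=0) / np.log(m)   # entropy per criterion
    d = 1.0 - e                                    # degree of diversification
    return d / d.sum()                             # normalised weights

# Illustrative data: 3 alternatives, criteria price / speed / safety.
X = [[200, 8, 0.9],
     [250, 6, 0.7],
     [300, 9, 0.8]]
w3 = entropy_weights(X)
# Adding a fourth alternative changes the weights -- the instability noted above.
w4 = entropy_weights(X + [[180, 7, 0.95]])
print(np.round(w3, 3), np.round(w4, 3))
```

This is exactly the objection in the question: the weights are a function of the alternatives in the matrix, not of any stable judgment about the criteria.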
  • asked a question related to Decision Theory
Question
7 answers
Can numbers (the Look then Leap Rule OR the Gittins Index) be used to help a person decide when to stop looking for the most suitable career path and LEAP into it instead or is the career situation too complicated for that?
Details:
Are there mathematical answers to the question of optimal stopping in general (when you should stop looking and leap)?
The Gittins Index, Feynman's restaurant problem (not discussed in detail).
The Look-then-Leap Rule (secretary problem, fiancé problem): (√n, n/e, 37%).
How do we apply this rule to career choice?
1- Potential ways of application:
A- n is Time .
Like what Michael Trick did: https://goo.gl/9hSJT1. Michael Trick is a CMU Operations Research professor who applied this rule to decide the best time for his marriage proposal, though he seems to think that it was a failed approach.
In our case, should we do it by age 20-70= 50 years --- 38 years old is where you stop looking for example? Or Should we multiply 37% by 80,000 hours to get a total of 29600 hours of career "looking"?
B- n is the number of available options. Like the secretary problem.
If we have 100 viable job options, do we just look into the first 37? If we have 10, do we just look into the first 4? And what if we are still at a stage of our lives where we have thousands of career paths?
2- Why the situation is more complicated in the career choice situation:
A- You can want a career and pursue it and then fail at it.
B- You can mix career paths. If you take option C, it can help you later on with option G. For example, if I go in as an IRS (Internet Research Specialist), that will help me later on if I decide to become a writer, so there is overlap between the options and a more dynamic relationship. Also, the option you choose in selection #1 will influence the likelihood of choosing other options in selection #2 (for example, if in 2018 I choose to work at an NGO, that will influence my options if I want to make a career transition in 2023, since it will limit my possibility of entering the corporate world in 2023).
C- You need to be making money so "looking" that does not generate money is seriously costly.
D- The choice is neither strictly sequential nor strictly simultaneous.
E- Looking and leaping alternates over a lifetime not like the example where you keep looking then leap once.
Is there a practical way to measure how the probability of switching back and forth between our career options affects the optimal exploration percentage?
F- There is something between looking and leaping, which is testing the waters. Let me explain. "Looking" here doesn't just mean "thinking" or "self-reflection" without action. It could also mean trying out a field to see if you're suited for it. So we can divide looking into "experimentation looking" and "thinking looking". And what separates looking from leaping is commitment and being settled. There's a trial period.
How does this affect our job/career options example since we can theoretically "look" at all 100 viable job positions without having to formally reject the position? Or does this rule apply to scenarios where looking entails commitment?
G- You can return to a career that you rejected in the past. Once you leap, you can look again.
"But if you have the option to go back, say by apologizing to the first applicant and begging them to come work with you, and you have a 50% chance of your apology being accepted, then the optimal explore percentage rises all the way to 61%." https://80000hours.org/podcast/episodes/brian-christian-algorithms-to-live-by/
3- A Real-life Example:
Here are some of my major potential career paths:
1- Behavioural Change Communications Company 2- Soft-Skills Training Company, 3- Consulting Company, 4-Blogger 5- Internet Research Specialist 6- Academic 7- Writer (Malcolm Gladwell Style; Popularization of psychology) 8- NGOs
As you can see the options here overlap to a great degree. So with these options, should I just say "ok the root of 8 is about 3" so pick 3 of those and try them for a year each and then stick with whatever comes next and is better?!!
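As a sanity check on the 37% figure discussed above, here is a small Python simulation of the classic look-then-leap rule under its textbook assumptions (a known number of candidates, strict rankings, sequential irrevocable decisions, no recall). The function name and parameters are illustrative.

```python
import random

def secretary_success_rate(n=100, look_fraction=0.37, trials=20000, seed=1):
    """Look-then-leap: reject the first k candidates, then accept the
    first candidate better than everyone seen so far."""
    rng = random.Random(seed)
    k = int(n * look_fraction)
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))            # rank 0 is the single best candidate
        rng.shuffle(ranks)
        best_seen = min(ranks[:k])        # benchmark from the look phase
        chosen = None
        for r in ranks[k:]:
            if r < best_seen:             # first candidate beating the benchmark
                chosen = r
                break
        if chosen == 0:                   # success = we picked the overall best
            wins += 1
    return wins / trials

rate = secretary_success_rate()
print(round(rate, 2))   # close to the theoretical 1/e (about 0.37)
```

The complications listed in the question (recall, testing the waters, overlapping options) are precisely the assumptions this simulation bakes in, which is why the 37% rule should be treated as a baseline rather than career advice.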
Relevant answer
Answer
Hey Kenneth Carling, I got this number from page 29 of their book (Always Be Stopping, Chapter 1). They quote research results from Seale & Rapoport (1997), who found that on average their subjects leapt at 31% when given the secretary problem; they say that most people leapt too soon. They also say that there are about a dozen more studies with the same result, which makes it more credible in my view.
  • asked a question related to Decision Theory
Question
13 answers
I'm a teacher and struggling a lot to complete my MS. I need to write an MS-level research thesis. I can work in decision making (preference-relation-related research), artificial intelligence, semigroups or Γ-semigroups, computing, soft computing, soft sets, a MATLAB-related project, etc. Kindly help me; I would be very grateful. Thanks.
Relevant answer
Answer
The answers to the question for this thread are excellent. There is a bit more to add.
Before starting either an M.Sc. or Ph.D. thesis, it is very important to read published theses by others. Here are examples:
Source of M.Sc. and Ph.D. theses:
Another source of theses:
  • asked a question related to Decision Theory
Question
1 answer
I am looking for a book or another reference that has different examples of decision theory in building construction; for example, examples that show different actions and their outcomes relating to an uncertain problem (e.g. the existence of hazardous materials in an existing building).
I would highly appreciate it if anyone could help me.
Thanks,
Mansour
Relevant answer
Answer
Interesting..
  • asked a question related to Decision Theory
Question
2 answers
Is the canonical 2-dimensional probability simplex, i.e. the convex hull of the equilateral triangle in three-dimensional Cartesian space with vertices (1,0,0), (0,1,0) and (0,0,1), closed under all and only all convex combinations of probability vectors? That is, is it exactly the set of all triples of non-negative real numbers that sum to 1?
Do any unit probability vectors go missing? For example, could <p1=0.3, p2=0.2, p3=0.5> fail to be an element of the domain if the simplex is constructed in barycentric/probability coordinates as a function of p1, p2, p3 (where y denotes p2 and z denotes p3) and the construction is not done appropriately?
Here each vector <p1, p2, p3> satisfies p1 + p2 + p3 = 1 with pi >= 0; for example, the plane x = m = 1/3 picks out the set of probability vectors whose first entry is 1/3, i.e. <p1=1/3, p2, p3> with p2 + p3 = 2/3 and p1, p2, p3 >= 0.
Does using absolute barycentric coordinates rule out the possibility of such a vector going missing, where <p1=0.3, p2=0.2, p3=0.5> is the vector located at (p1, p2, p3) in absolute barycentric coordinates?
Given that the convex hull is the smallest set inscribed in the equilateral triangle that is closed under all convex combinations of the vertices, I presume this means that all and only triples of non-negative pi summing to 1 are included (any proper subset fails to be closed under convex combinations of the vertices, e.g. by missing the vertices themselves), so that there are no vectors with negative entries. Can it be guaranteed that no vector goes missing when the simplex is traditionally described as the convex hull of the three standard unit vectors (1,0,0), (0,1,0) and (0,0,1) in Cartesian coordinates, or can this only be guaranteed by representing it in barycentric fashion?
Relevant answer
Answer
Of course it is; that is what it is by definition. I probably should have thought more about this back when I originally posted. If it isn't, then nothing is. It is the closed convex hull of its vertices: all points in [0,1]^3 that can be expressed as convex combinations of (1,0,0), (0,1,0) and (0,0,1).
Any point (x,y,z) in [0,1]^3 with x+y+z = 1 and x >= 0, y >= 0, z >= 0 can be expressed as x*(1,0,0) + y*(0,1,0) + z*(0,0,1) = (x,y,z). Closure under convex combinations of the vertices means the set contains all points c1*(1,0,0) + c2*(0,1,0) + c3*(0,0,1) with non-negative c1, c2, c3 summing to 1; this is exactly the set of triples of non-negative coordinates that sum to 1. Setting c1 = x, c2 = y, c3 = z shows the simplex contains all, and only, probability triples (x,y,z) with x+y+z = 1.
I should have thought this through.
The main remaining questions are (1) and (2):
(1): Does the canonical 2-probability-simplex realise every triple of pairwise sums? At each point (x,y,z) of the simplex the sums l = x+y, g = x+z, h = y+z lie in [0,1] and satisfy l+g+h = 2(x+y+z) = 2. Conversely, given any (l,g,h) with l, g, h in [0,1] and l+g+h = 2, is there a point (x,y,z) in the simplex with l = x+y, g = x+z, h = y+z?
I'd say yes. Solving the system gives x = (l+g-h)/2, y = (l+h-g)/2, z = (g+h-l)/2, each uniquely determined, and x+y+z = (l+g+h)/2 = 1. The only way such a point could fail to lie in the simplex is if one coordinate were negative, say x < 0, i.e. l+g < h. But that is impossible: l+g+h = 2 and l+g < h would give 2 = (l+g)+h < 2h, so h > 1, contradicting h <= 1. The same argument applies to y and z.
(2): For every (l,g,h) with l, g, h >= 0 and l+g+h = 1, does the simplex contain three distinct points p1 = (x1,y1,z1), p2 = (x2,y2,z2), p3 = (x3,y3,z3) with l = x1+y1, g = x2+z2, h = y3+z3, preferably with none of the nine coordinates equal to 0?
Note first that since each point's coordinates sum to 1, we have (x1+y1) + (x2+z2) + (y3+z3) = 3 - (z1+y2+x3), so the constraint l+g+h = 1 is equivalent to z1+y2+x3 = 2. Moreover, for any m in [0,1] there is a point of the simplex whose first coordinate is 1-m, and at that point the sum of the other two coordinates is m; so each pairwise sum can be prescribed independently at its own point. If l, g, h are all strictly between 0 and 1, the three points can be chosen with all coordinates positive: split l as x1+y1 with x1, y1 > 0 and set z1 = 1-l > 0, and similarly for the other two points. In the boundary cases one can take edge points: y1 = 0, x1 = l; x2 = 0, z2 = g; z3 = 0, y3 = h. Since the edges of the canonical 2-probability-simplex are copies of the canonical 1-probability-simplex (all pairs of non-negative reals summing to 1, i.e. all convex combinations of two non-negatives that sum to 1), such edge points always exist.
Along the edges, each triple with exactly one zero entry identifies a double in the canonical 1-probability-simplex, formed by its two positive entries: set the first positive coordinate (x or y) as the x-coordinate of the double and the second positive entry (z or y) as the y-coordinate, e.g. (x=1/6, y=0, z=5/6) maps to (x=1/6, y=5/6). This is unique except at the vertices, where one can use either (1,0) or (0,1).
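The reconstruction of (x, y, z) from the pairwise sums l = x + y, g = x + z, h = y + z used in (1) is easy to verify numerically. A quick Python sketch (the function name is illustrative):

```python
def point_from_pair_sums(l, g, h):
    """Recover (x, y, z) in the simplex from the pairwise sums
    l = x + y, g = x + z, h = y + z (which forces l + g + h = 2)."""
    assert abs(l + g + h - 2.0) < 1e-9
    x = 0.5 * (l + g - h)
    y = 0.5 * (l + h - g)
    z = 0.5 * (g + h - l)
    return x, y, z

x, y, z = point_from_pair_sums(0.9, 0.7, 0.4)
print(x, y, z)   # non-negative coordinates summing to 1
```

Since each of l, g, h is at most 1 while the three sum to 2, each recovered coordinate is automatically non-negative, matching the argument above.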
  • asked a question related to Decision Theory
Question
4 answers
This refers to the measurement of subjective utility or its neuroeconomic counterpart, subjective value, ideally in isolated laboratory settings, i.e. with no situational factors involved.
I would expect different heuristics or biases to be observed in such DM tasks due to the more abstract nature of public goods (PG), the issue of value appropriation, and perhaps a stronger influence of emotions (or other factors).
I'm looking for reactions to the nature of the choice object that are much more pronounced for, or unique to, PG.
I think scope insensitivity is one such thing but there should be more.
However, I'm having a hard time locating good articles for this question. Do you know any?
Relevant answer
Answer
Hello Christian,
One possible approach to the question you posed might have to do with the fact that a decision maker has to carry out additional value computation for others' welfare in the PG context (either in the classical public good game paradigm, or in more general experimental contexts where one has to choose between reward for self vs. others). This adds at least two distinct elements to decision making process involving self alone : 1) forming the representation of other-benefiting values, 2) comparing subjective utility of self vs. other-regarding behaviors. The second component can also involve consideration of potential risk or reputational consequence. Accordingly, decisions involving PG may recruit some extra cognitive processes such as ToM (e.g. modeling how much other regarding behaviors could bring actual benefit to others), risk processing (e.g. calculating potential risk/ambiguity associated with public investment) , self-control (e.g. normative suppress of egoistic motives that intuitively favor self-serving behaviors), and impression management (e.g. modeling reputational consequences of other-regarding behaviors) as well as some uniquely social emotional experience such as inequity aversion or guilt (e.g., for not investing to public account). Some of these processes are particularly important when private vs. public good are in conflict. Yet, I think that we can infer some general element specific to judgments involving private vs. public good.
Recent findings in cognitive/decision neuroscience show that judgments involving others vs. self tend to be more "objective," "abstract" and in a way "cognitive" although the language might not exactly match the terms that have been used in psychology or behavioral economics. Here are some of the studies that might serve as useful reference.
1) Ruff, C. C., & Fehr, E. (2014). The neurobiology of rewards and values in social decision making. Nature Reviews Neuroscience, 15(8), 549.
-> This is one of the most recent, and best reviews that provides a conceptual framework for value-based decision making paradigm in cognitive neuroscience. It has a section showing how decisions involving others would recruit either common or distinct neural circuitries (see page 5) in the brain. This may point to some of the key differences between the two modes of decisions you are interested in.
3) Knoch, D., Schneider, F., Schunk, D., Hohmann, M., & Fehr, E. (2009). Disrupting the prefrontal cortex diminishes the human ability to build a good reputation. Proceedings of the National Academy of Sciences, pnas-0911619106.
-> This study used TMS showing that other-regarding behaviors could involve extra-cognitive control housed in the dlpfc: this could reflect keeping track of one's reputational gain, or/and also suppressing egoistic motives to pursue self-serving outcomes.
4) Strang, S., Gross, J., Schuhmann, T., Riedl, A., Weber, B., & Sack, A. T. (2014). Be nice if you have to—the neurobiological roots of strategic fairness. Social cognitive and affective neuroscience, 10(6), 790-796.
-> This is more recent finding that is in line with Knoch, but used fMRI studies to reveal more a detailed picture of neural circuitries.
5) Yu, H., Shen, B., Yin, Y., Blue, P. R., & Chang, L. J. (2015). Dissociating guilt-and inequity-aversion in cooperation and norm compliance. Journal of Neuroscience, 35(24), 8973-8975.
-> This paper provides evidence for dissociable consequence of "guilt" and "inequity aversion," which in my opinion could be considered as a product of social considerations that are missing in a private-decision making context.
6) Telzer, E. H., Masten, C. L., Berkman, E. T., Lieberman, M. D., & Fuligni, A. J. (2011). Neural regions associated with self control and mentalizing are recruited during prosocial behaviors towards the family. Neuroimage, 58(1), 242-249.
-> This work shows a potential involvement of self-control and mentalizing activities in the brain when one engages in prosocial behaviors towards family.
7) Emonds, G., Declerck, C. H., Boone, C., Vandervliet, E. J., & Parizel, P. M. (2012). The cognitive demands on cooperation in social dilemmas: an fMRI study. Social Neuroscience, 7(5), 494-509.
-> Similar to Telzer et al. (2011), this study used two different economic decision-making games (the PD and stag-hunt games) in an fMRI setting to investigate how decisions involving others could impose cognitive burdens on the human brain.
8) Vives, M. L., & FeldmanHall, O. (2018). Tolerance to ambiguous uncertainty predicts prosocial behavior. Nature communications, 9(1), 2156.
-> This recent article shows that people who are more ambiguity-tolerant are more likely to engage in prosocial behavior, at least in some experimental contexts. These results suggest that public good processing could also involve considerations of risk and ambiguity.
------
I'd also like to note that the nature of public good can be vastly different depending on 1) decision maker's own (pro)social orientation, 2) how "public" is defined in relation to self, and 3) how social institution shapes individuals' primary mode of decisions. These also mean that specific cognitive or affective mechanisms subserving public vs. private decision making may differ across studies. Here are several studies that might help you address these issues:
1) Sul, S., Tobler, P. N., Hein, G., Leiberg, S., Jung, D., Fehr, E., & Kim, H. (2015). Spatial gradient in value representation along the medial prefrontal cortex reflects individual differences in prosociality. Proceedings of the National Academy of Sciences, 201423895.
-> This paper shows that neural representations of other-regarding behaviors differ between participants with prosocial vs. pro-self orientations. Note that pro-self individuals tend to be more "cognitive" when they choose prosocial actions, while prosocial individuals remain relatively more intuitive when making the same other-regarding choices.
2) Kuss, K., Falk, A., Trautner, P., Elger, C. E., Weber, B., & Fliessbach, K. (2011). A reward prediction error for charitable donations reveals outcome orientation of donators. Social cognitive and affective neuroscience, 8(2), 216-223.
-> This paper also reveals how individual difference could modulate neural responses to choices that benefit self vs. public.
3) Strombach, T., Weber, B., Hangebrauk, Z., Kenning, P., Karipidis, I. I., Tobler, P. N., & Kalenscher, T. (2015). Social discounting involves modulation of neural value signals by temporoparietal junction. Proceedings of the National Academy of Sciences, 112(5), 1619-1624.
-> This is an interesting study showing that the relationship between self and other critically modulates value computation, which, in turn, leads to differential degree of generous/prosocial behaviors in a simple economic decision game.
4) Rand, D. G., Greene, J. D., & Nowak, M. A. (2012). Spontaneous giving and calculated greed. Nature, 489(7416), 427.
-> This is a very important study suggesting that people engage in prosocial behaviors (in the public good game) rather intuitively. This may seem to run counter to the argument that other-regarding judgments involve extra cognitive processes, which would typically delay response time in cognitive experiments. But, in fact, it is possible that the specific facets of decision mechanisms involving private vs. public goods are largely contingent on individuals' value orientations (e.g. how they value public good over private good or vice versa) and on how social institutions have shaped individuals' dominant mode of decision processes (e.g. how accustomed they are to making such a decision). I elaborate on this below and also list some relevant evidence. If you are interested in this line of work, look for the "Social heuristics hypothesis" by David Rand at Yale. (For example, Rand, D. G. (2016). Cooperation, fast and slow: Meta-analytic evidence for a theory of social heuristics and self-interested deliberation. Psychological Science, 27(9), 1192-1206.)
----------
The answer has become somewhat wordy. But in short, I think that decisions involving public goods may typically involve additional cognitive mechanisms required for: estimating how much others' welfare is valuable to oneself, comparing the values of rewards to others vs. self, accurately estimating others' preferences (toward outcomes that you are about to deliver), overcoming risk and ambiguity, estimating how decisions promoting or undermining the public good could affect one's own reputation, and other- and self-directed emotional processes associated with self- vs. other-regarding choices. Some of these processes would make decisions involving public goods more abstract and cognitively taxing than those involving private goods. However, the specific patterns of differences between representations of public and private goods may differ according to individuals' orientation toward pro-self vs. prosocial decisions, and to the dominant decision modes promoted in a given socio-cultural context (which will determine individuals' degree of familiarity and "decision habits").
My answers are mostly grounded in decision neuroscience and might not use the exactly same terms you are looking for. However, I believe that lots of these works reveal how decision mechanisms involving private good vs. public good are different in various value/affective/cognitive domains. I hope this helps you.
Please feel free to let me know if you have any questions or other need other resources. I'd also be happy to discuss here.
Best,
Minwoo
  • asked a question related to Decision Theory
Question
5 answers
How is decision theory used to solve complex business problems?
  • asked a question related to Decision Theory
Question
6 answers
Is there any discrete analogue of star convexity at 0, in the same way that midpoint convexity can be seen to be a discrete analogue of convexity (and entails it under certain regularity assumptions, continuity etc.)?
For example, where F:[0,1] \to [0,1], star convexity at 0 is:
\forall x \in dom(F)=[0,1], \forall t \in [0,1]: F(t*x_0 + (1-t)*x) <= t*F(x_0) + (1-t)*F(x), where x_0 = 0.
Generally, with F(0)=0, it becomes:
\forall x \in dom(F)=[0,1], \forall t \in [0,1]: F(t*x) <= t*F(x)
(it is convexity restricted so that one argument is some specific minimum value, usually x_0=0).
The candidate discrete analogue, 'mid-star convexity at x_0=0', for F:[0,1] \to [0,1] with F(0)=0, would be:
\forall x \in dom(F)=[0,1]: F(x/2) <= F(x)/2.
The question is whether, under certain regularity assumptions, mid-star convexity ensures star convexity of the function, or at least that F is a retract, i.e. \forall x \in dom(F)=[0,1]: F(x) <= x (which, given F(1)=1, is generally entailed by star convexity). Suppose, for instance, that F is a strictly monotone increasing bijection F:[0,1] \to [0,1] which is absolutely continuous, or once/twice/three times continuously differentiable, with F(0)=0, F(1/2)=1/2 and F(1)=1.
I presume not, as not all star convex functions are continuous to begin with?
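To probe the implication numerically before looking for a proof or counterexample, here is a small grid checker (my own sketch; the two test functions are merely illustrations, not part of the question):

```python
# Numerical probe (illustration only): check star convexity at 0,
#   F(t*x) <= t*F(x)  for all t, x in [0,1],
# and the candidate discrete analogue, mid-star convexity,
#   F(x/2) <= F(x)/2,
# on a finite grid. Grid size and tolerance are arbitrary choices.

def star_convex_at_0(F, n=50, tol=1e-12):
    pts = [i / n for i in range(n + 1)]
    return all(F(t * x) <= t * F(x) + tol for x in pts for t in pts)

def mid_star_convex_at_0(F, n=50, tol=1e-12):
    pts = [i / n for i in range(n + 1)]
    return all(F(x / 2) <= F(x) / 2 + tol for x in pts)

square = lambda x: x * x    # star convex at 0, hence also mid-star convex
root = lambda x: x ** 0.5   # fails both: F(x/2) = F(x)/sqrt(2) > F(x)/2
```

Any counterexample to "mid-star plus regularity entails star convexity at 0" would be a monotone F passing the second check while failing the first.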
Relevant answer
Answer
Great.
  • asked a question related to Decision Theory
Question
2 answers
State-dependent additivity versus state-independent additivity?
Is this akin to the distinction between Cauchy additivity and local Kolmogorov additivity/normalization of subjective credence/utility, in a simplex representation of subjective probability (or utility ranked by objective probability)? I.e., in the unit simplex of dimension two or more (at least three atomic outcomes on each unit probability vector, a finitely additive space), where every event is ranked globally, within vectors and between distinct vectors, by <, > and especially '='. I presume that one is mere representability and the other uniqueness.
The distinction is between the trivial local properties:
(1) F(x)+F(y)+F(z)=1, for x, y, z mutually exclusive and exhaustive;
(2) F(x u y)=F(x)+F(y), i.e. F(A v B)=F(A)+F(B), for A, B disjoint on the same vector;
(3) F(A)+F(A^c)=1, for A and its complement, disjoint and mutually exclusive on the same unit vector;
and the following uniqueness properties (3a)-(3d), which apply to arbitrary events x, y in the simplex, whether or not they lie on the same vector (i.e. the same probability state):
(3a) F(x+y)=F(x)+F(y): Cauchy additivity (same vector or not). This needs no explaining; it holds arbitrarily in the simplex of interest.
(3b) If x+y=z+m then F(x)+F(y)=F(z)+F(m): any two (or more) events with the same objective sum must have the same credence sum, same vector or not, disjoint or not (almost Jensen's equality).
(3c) F(1-x-y)+F(x)+F(y)=1: any three events in the simplex, same vector or not, must sum to one in credence if they sum to one in objective chance.
(3d) F(1-x)+F(x)=1: any two events whose objective chances sum to one must sum to one in credence, same probability space/state/vector or not.
(3d) is a global symmetry (distinct from complement additivity): it applies to non-disjoint events on distinct vectors via the equalities in the rank. Rank equalities plus complement additivity give rise to it in a two-outcome system. It seems to be entailed by a global modal cross-world rank, so long as there are at least three outcomes, without use of mixtures, unions or trade-offs, iff one's domain is the entire simplex. That is, one adds function values of sums of events on distinct vectors to the value of some other event on some (arguably) non-commuting probability vector: F(x+y)=F(x)+F(y).
The context is certain probabilistic and/or utility uniqueness theorems, where one takes one objective probability function and tries to show that any other probability function, given one's constraints, must be the same function.
Relevant answer
Answer
What exactly is meant by state-dependent additivity? Does it mean that, instead of F(x u y)=F(x)+F(y) for x, y disjoint and lying on the same vector (the same finite probability triple), or instead of F(x)+F(y)+F(z)=1 iff x, y, z are elements of the very same vector (the same triple), one literally has F(x+y)=F(x)+F(y) over the entire domain of the function in the simplex - i.e. adding up (arguably non-commuting) elements of distinct vectors - or F(x)+F(y)+F(1-x-y)=1 arbitrarily over the simplex or domain of interest, where the only restriction is that one can only add up elements as many times as they are present in the domain?
By contrast, Cauchy additivity bites even if one's domain is merely a single vector, say <1/3, 1/6, 1/2, unit event = 1>. So long as 1/6 is in the domain (supposing the entire probability vector space just is that vector, dom(F)={1/3, 1/6, 1/2, 1}) and F(1)=1, one can add up F(1/6) six times: 1=F(1)=F(1/6+...+1/6)=6F(1/6), so F(1/6)=1/6.
I presume, however, that if one's domain is the entire simplex there would not be any relevant difference between outright Cauchy additivity and state-independent additivity - and thus to presume so would be outright presumptuous. Or is this a name for cross-world global rank, which entails it so long as there are at least three atomic events on each vector (so long as the simplex is well constructed and the rank is global and modal), even if only finite local standard additivity is presumed - since one can transfer the values of equiprobable events onto other vectors where they are disjoint?
Or does state-independent additivity mean that one can arbitrarily add up the function value F(1/6) six times, reaching F(1)=1 and hence F(1/6)=1/6, so long as those events are ranked equal and are present at least six times somewhere or other, even in distinct states or vectors (of the same system)? Or does it apply to local additivity, where one has a globally, modally transitive rank over the simplex of dimension n>=2 (i.e. at least three elements in each vector), because a cross-world rank with equalities will entail this in any case, if justified? So if one can derive that cross-world additivity must hold, given finite additivity and a global modal rank including cross-world equalities - on pain of either local additivity (probabilism) failing, or one's justified global and local total rank, including its equalities, being violated - does this count as presumptuous?
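The single-vector example can be made concrete with exact arithmetic. This is my own sketch, not from the literature: under Cauchy additivity with F(1)=1, every grid point k/q is forced to the identity value.

```python
# Under Cauchy additivity F(a+b) = F(a)+F(b), F(1) = F(q * 1/q) = q*F(1/q),
# so F(1/q) = 1/q, and every grid point k/q is forced to equal k/q.
from fractions import Fraction

def additive_solution(q):
    """F on the grid {k/q} forced by F(a+b) = F(a)+F(b) and F(1) = 1."""
    f_unit = Fraction(1, q)  # F(1) = q * F(1/q) = 1  =>  F(1/q) = 1/q
    return {Fraction(k, q): k * f_unit for k in range(q + 1)}

F = additive_solution(6)
# F(1/6) = 1/6, F(1/3) = 1/3, F(1/2) = 1/2, ... : the identity on the grid
```

So on a single vector with a fine enough rational grid, Cauchy additivity already pins F down to the identity, before any cross-vector (state-independent) assumptions are added.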
  • asked a question related to Decision Theory
Question
3 answers
In order to get a homogeneous population, I inspect two conditions and filter the entire population (all possible members) according to these two conditions, then use all the remaining filtered members in the research. Is it still a population, or is it a sample (and what is it called)?
Also: if we work on a mathematical equation by adding another part to it, then find the solution and apply it to the real world, can we generalize its result to other real-world settings?
Relevant answer
Answer
Rula -
I am not sure I understand your process, but if your 'sample' is really just a census of a special part of your population, then you can get descriptive statistics on it, but you cannot do inference to the entire population from it. 
You might find the following instructive and entertaining.  I think it is quite good.
Ken Brewer's Waksberg Award article: 
Brewer, K.R.W. (2014), "Three controversies in the history of survey sampling," Survey Methodology, Vol. 39, No. 2 (December 2013/January 2014), pp. 249-262. Statistics Canada, Catalogue No. 12-001-X.
Cheers - Jim
  • asked a question related to Decision Theory
Question
33 answers
What is the name in the functional analysis literature for the identities (2) and (3) below (F-1 is the inverse function)?
(1) F:[0,1] \to [0,1], F strictly monotone increasing, with F(0)=0, F(1/2)=1/2 and F(1)=1;
(2) \forall x \in dom(F): F(1-x)+F(x)=1;
(3) \forall p \in codom(F): F-1(1-p)+F-1(p)=1.
These are the equality cases of the biconditionals below, expressed for both F and its inverse:
\forall x, y \in dom(F)=[0,1]: [x+y=1] iff [F(x)+F(y)=1];
\forall p, p1 \in Im(F) \subseteq [0,1]: [p+p1=1] iff [F-1(p1)+F-1(p)=1], where F-1(p) and F-1(1-p) are elements of dom(F)=[0,1].
In short: x+y=1 iff F(x)+F(y)=1.
See the attached paper, 'Order indifference and rank dependent probabilities', around page 392; it is the biconditional form of what Segal calls a symmetric probability transformation function.
I presume that if in addition F satisfies
(4) \forall x \in [0,1]=dom(F): F(x/2)=F(x)/2,
then F is the identity function, as F(x)=x for all dyadic rationals (and some rationals), F is strictly monotone increasing, and F agrees with the identity over a dense set.
I also presume that, given midpoint convexity at 0 and 1, i.e.
@0: \forall x \in [0,1]=dom(F): F(x/2) <= F(x)/2,
@1: \forall x \in [0,1]=dom(F): F(1/2+x/2) <= 1/2+F(x)/2,
these inequalities collapse into equalities:
F(x/2) = F(x)/2,
F(1/2+x/2) = 1/2+F(x)/2,
given the symmetry equation (2) F(1-x)+F(x)=1 and (1) F:[0,1] \to [0,1] with F(0)=0 (from which, together with (2), F(1/2)=1/2 and F(1)=1 follow). It then follows that F(x)=x for all dyadic rationals in [0,1] (with F(1)=1, F(0)=0, and F strictly monotone increasing as above), and for some rationals, and F becomes odd (about 1/2) at all dyadic points in [0,1].
I am not sure whether (3) is required, but given that F is strictly monotone increasing it should follow by injectivity and (2). In any case I presume F would collapse into F(x)=x.
What is the general form of a function that merely satisfies F(0)=0, F(1/2)=1/2, F(1)=1, is strictly monotone increasing and continuous, and satisfies the inequalities
@0: \forall x \in [0,1]=dom(F): F(x/2) <= F(x)/2,
@1: \forall x \in [0,1]=dom(F): F(1/2+x/2) <= 1/2+F(x)/2,
together with
(4) \forall x, y \in dom(F): x+y>1 iff F(x)+F(y)>1;
\forall x, y \in dom(F): x+y<1 iff F(x)+F(y)<1;
\forall p, p2 \in codom(F): p+p2>1 iff F-1(p)+F-1(p2)>1;
\forall p, p2 \in codom(F): p+p2<1 iff F-1(p)+F-1(p2)<1?
Relevant answer
Answer
If you introduce a new variable s = t - 1/2 and a new function q(s) = F(s + 1/2) - 1/2, then the symmetry equation F(1-t)+F(t)=1 is rewritten as q(-s) = -q(s), i.e. it is just the condition that q is odd (antisymmetric about the centre).
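The presumed collapse to the identity on dyadic rationals can also be checked mechanically. This sketch (my own, using exact rational arithmetic) closes the set {F(0)=0, F(1)=1} under the halving rule F(x/2)=F(x)/2 and the symmetry rule F(1-x)=1-F(x):

```python
# Propagate F(0)=0, F(1)=1 under the two rules of the question:
#   halving:  F(x/2)  = F(x)/2
#   symmetry: F(1-x)  = 1 - F(x)
# Every value the rules generate lands on the identity map over dyadics.
from fractions import Fraction

def dyadic_values(depth):
    known = {Fraction(0): Fraction(0), Fraction(1): Fraction(1)}
    for _ in range(depth):
        for x, v in list(known.items()):
            known[x / 2] = v / 2      # halving rule
            known[1 - x] = 1 - v      # symmetry rule
    return known

vals = dyadic_values(6)
# e.g. vals[Fraction(3, 8)] == Fraction(3, 8); every generated point is fixed
```

This only covers the dyadic rationals; extending F(x)=x to all of [0,1] then needs the density argument plus strict monotonicity, as the question notes.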
  • asked a question related to Decision Theory
Question
5 answers
Is there a distinction between two kinds of strong or complete qualitative probability orders - both considered to give a strong representation, or to be total probability relations, neither involving incomparable events, and both using the stronger form of Scott's axiom (not cases of weak, partial or intermediate agreement)?
Type (1): A >= B iff P(A) >= P(B).
Type (2): A = B iff P(A) = P(B); A > B iff P(A) > P(B); A < B iff P(A) < P(B).
The link below speaks to my worry about total orders that only use totality (A <= B or B <= A) without a trichotomy: https://antimeta.wordpress.com/category/probability/page/3/ , where they write:
"However, as it stands, this dominance principle leaves some preference relations among actions underspecified. That is, if f and g are actions such that f strictly dominates g in some states, but they have the same (or equipreferable) outcomes in the others, then we know that f >= g, but we don't know whether f > g or f ~ g. So the axioms for a partial ordering on the outcomes, together with the dominance principle, don't suffice to uniquely specify an induced partial ordering on the actions."
Both kinds use a total order satisfying:
totality: A <= B or B <= A;
the definition of equality via antisymmetry: A = B iff A <= B and A >= B;
A <= B iff [A < B or A = B] iff not A > B;
A >= B iff [A > B or A = B] iff not A < B;
A > B iff B < A, and A >= B iff B <= A iff not (A < B);
where = is an equivalence relation (symmetric, transitive, reflexive); <= and >= are reflexive, transitive, negatively transitive, complementary and total; < and > are irreflexive, asymmetric and transitive:
A < B, B < C implies A < C;
A < B, B = C implies A < C;
A = B, B < C implies A < C;
and negatively transitive, and complementary (A > B iff ~A < ~B); and <, =, > are mutually exclusive. Here = is an equivalence class denoting not identity or incomparability, but equality in rank (in probability).
The first kind uses negatively transitive, weakly connected strict weak orders for <, =, >:
weak connectedness: if not (A = B), then A < B or A > B;
whilst the second kind uses trichotomous, strongly connected strict total orders for <, =, >:
(2) trichotomy: A < B or A = B or A > B, with the relations mutually exclusive and exhaustive;
(3) strong connectedness: not (A = B) iff A < B or A > B;
and both satisfy the axioms A >= emptyset, Omega > emptyset, Omega >= A, Scott's conditions, the separability and Archimedean axioms, and monotone continuity if required.
In the first kind <= / >= is primitive, which makes me suspicious; in the second, <, =, > are primitive. Please see the attached document. The issue is whether trichotomy fails in the first type, which appears a bit fuzzier, even though totality (A >= B or A <= B) holds in both cases.
What is unclear is whether there is any canonical meaning to 'weak order' (as opposed to total pre-order, or strict weak order). In the context of qualitative probability this is sometimes seen as synonymous with a complete or total order, as opposed to a partial order, which allows for incomparables; a partial order still allows comparable equalities between non-identical events, usually put in the same equivalence class (i.e. A is as probable as B when A = B - as opposed to their being one and the same event, or 'who knows' for incomparability). Fishburn hints at a second distinction, where it is possible that not A > B, not A < B, and yet not A = B, whilst A >= B or A <= B must still hold. This appears to say that one can quasi-compare the events: one can say that A is less than or equal in probability to B, or greater than or equal, but not which of the two specific relations (A < B or A = B) it stands in.
The difference between the two kinds thus seems to be whether 'A >= B and A <= B' is equivalent to A = B; or whether, in the first kind, the order counts as strongly representing the structure even when, given A >= B, one cannot specify whether A > B or A = B - some weakening of antisymmetry, in effect. One can still compare the events, in the sense that under <= one can say the probability is either less than or equal, or greater than or equal, but not precisely which of the two it is.
The less ambiguous, trichotomous orders use: not (A = B) iff A < B or A > B. Generally, trichotomy is not considered when it comes to satisfying Scott's axiom in its strongest sense, for strict agreement, and I am wondering whether the trichotomous forms - which appear to be required for real-valued or strictly increasing probability functions - are slightly stronger when it comes to dense orders, but require a stronger form of Scott's axiom that involves < and >, not just <=.
In (1), then, the <= / >= relation is primitive and neither trichotomy nor strong connectedness is explicit; whilst in (2), <, =, > are primitive, A neq B iff A > B or A < B, and both totality (A <= B or B <= A) and trichotomy (A < B or A = B or A > B) are made explicit, the relations being mutually exclusive and exhaustive - strict total trichotomous orders, as opposed to weakly connected strict weak orders with an associated total pre-order. I get the impression that the first kind, as described by Fishburn (1970), considers a relation that does not involve incomparables and is considered total - A >= B or B <= A always holds - but one cannot always say that A is as likely as B; it is fuzzy in the sense that one can say B is less than or equal in probability to A (or conversely), but if B <= A one cannot, or need not, say whether A = B or B < A.
Relevant answer
Answer
You are way too dispersed.
Try to understand the more basic material well, one topic at a time.
Then move on.
Do not start from Gleason or weak measurements, for example.
  • asked a question related to Decision Theory
Question
2 answers
How can I construct a multi-attribute utility function for attributes that I cannot prove utility independence for? 
Thanks. 
Relevant answer
Answer
Thanks, David. Yes, that seems like a good option. 
  • asked a question related to Decision Theory
Question
8 answers
In the decision-making process for energy retrofit actions, when choosing the best retrofit intervention, which should be used: MODM or MADM?
Relevant answer
Answer
in MODM you have more than one goal (objective), so you want to optimize something AND something else... In MADM you have one goal, but more than one criterion (attribute).
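To make the MADM side concrete, here is a minimal simple additive weighting (SAW) sketch over hypothetical retrofit alternatives. All scores and weights below are invented for illustration; criteria scores are assumed already normalized to [0, 1] with higher = better (so "cost" here means cost performance, i.e. cheaper scores higher).

```python
# Minimal MADM sketch: one goal, several criteria (attributes), scored per
# alternative and aggregated by simple additive weighting. Data is made up.

criteria_weights = {"energy_savings": 0.5, "cost": 0.3, "comfort": 0.2}

alternatives = {
    "insulation":  {"energy_savings": 0.8, "cost": 0.6, "comfort": 0.7},
    "new_windows": {"energy_savings": 0.6, "cost": 0.4, "comfort": 0.9},
    "heat_pump":   {"energy_savings": 0.9, "cost": 0.3, "comfort": 0.6},
}

def saw_score(scores, weights):
    return sum(weights[c] * scores[c] for c in weights)

ranking = sorted(alternatives,
                 key=lambda a: saw_score(alternatives[a], criteria_weights),
                 reverse=True)
# ranking == ["insulation", "heat_pump", "new_windows"]
```

A MODM formulation of the same problem would instead optimize the energy, cost and comfort objectives simultaneously (e.g. searching a Pareto front), rather than scoring a fixed, discrete list of alternatives.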
  • asked a question related to Decision Theory
Question
3 answers
In neutrosophic sets all three measures (truth, falsehood, indeterminacy) are independent. How does one affect another in decision making? (For example, in the case of intuitionistic fuzzy sets, if the membership of an element increases, then the sum of the other two measures, non-membership and hesitation, will certainly decrease.)
Relevant answer
Answer
This is quite interesting, RK Mohanty; I was wondering if there is a particular paper you can mention that concerns this issue. I have noticed that problems arise as soon as one tries to build a probabilistic model with only three outcomes, based on a single parameter c in [0,1] (the unit interval) that maps to a 3-tuple of numbers <x,y,z> with x,y,z in [0,1]: f: c -> <x,y,z>, where P(A)+P(B)+P(C)=1 and A, B, C are mutually exclusive and exhaustive, and where, for any value of c, A occurs if x < P(A), B occurs if y < P(B), and C occurs if z < P(C). One wants a construction such that:
(A) at most one outcome occurs (mutual exclusivity);
(B) some outcome does indeed occur (exhaustiveness);
(C) the mechanism is somewhat uniformly distributed;
(D) all values of x, y, z in [0,1] are attained for some value of c (surjectivity);
(E) there is no swapping of the values x, y, z for any specific c depending on which outcome occurs, and no swapping of the outcomes' geometric locations on the circumference of a circle (where the c value denotes some position on said circumference);
(F) it does not hard-code a whole lot of f(c)=<1,0,0>-type values, with or without swapping;
(G) any outcome is possible for any c value, depending on the probability of that outcome, except perhaps for the very few points <1,0,0>, <0,1,0>, <0,0,1>;
(H) it is defined in a probability-value-independent fashion;
(I) it is otherwise non-ad-hoc: it does not have, for every (or a great number of) <x,y,z>, a 1 or a 0 for one coordinate while the others have neither, with or without swapping of values or locations depending on the outcome and (in some indirect sense) the probability values or inequalities.
In fact, other than a very few ad hoc sets of values - and these generally involve some swapping depending on which outcome occurs, and often at least a single one or zero - I do not think one can find even a single 3-tuple <x,y,z> that works for all probability values, with none of x, y, z equal to 1 or 0, in the same sense as the two-outcome case.
The two-outcome case works perfectly well, except for the somewhat trivial case where P(A)=x, P(B)=y: f(c)=<c,1-c> works regardless of the probability values and satisfies the above; A occurs if P(A) > c, B occurs if P(B) > 1-c, c ranges over the unit interval, and P(A)+P(B)=1. One cannot so much as get close to that with three variables, where issues arise only when P(A)=x, P(B)=y, P(C)=z - which would not be such an issue, insofar as the analogous problem appears for a single set of probability values in the two-outcome case anyway.
I do not think one can have, for three outcomes, even a single 3-tuple of numbers that satisfies the above constraints for all values P(A), P(B), P(C) could take - at least up to the case where P(A)=x, P(B)=y, P(C)=z - where none of x, y, z is 1 or 0, and which satisfies (A) mutual exclusivity, (B) exhaustiveness, (G) that altering the probability values can make each of A, B, C in turn the outcome that would and could occur, (F) non-triviality (no hard-coding of even a single one or zero into x, y, z), (E) no outcome-dependent or value-dependent swapping of functions or of outcome locations on the circle, and (H) probability-value invariance (likewise no probability-value- or inequality-dependent swapping).
Unless one puts in certain zeros or ones, has weird values for disjunctions, or swaps the values depending on which outcome occurs or does not occur (whether or not said swapping involves 1s or 0s - and even then, that makes it probability-value dependent), one cannot ensure that one and only one outcome occurs whilst allowing that it could be any of the three, depending on the probability values. At least not in the sense that it works for two outcomes (i.e. other than the case where P(A)=x, P(B)=1-x).
For three outcomes, one can sometimes get a mechanism that ensures mutual exclusivity but not exhaustiveness, or exhaustiveness but not exclusivity, but generally not both, even with swapping; and to make it work perfectly generally requires at least one 1 or 0 in said swapping - but swapping is in itself a probability-value dependency. And with more than three outcomes, one has to ask which value is held fixed and which one swaps, else there is not even a leg to stand on.
If the x+y+z must, in any particular case or in all cases, sum to some specific number, it is clearly greater than 1 and at most 2 - generally around sqrt(2)/2 - but even that hardly works. And even if they vary.
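For contrast, the standard construction for three or more outcomes partitions [0,1) into consecutive intervals of lengths P(A), P(B), P(C) (the inverse-CDF method). It is mutually exclusive and exhaustive by construction - but the map from c to the outcome depends on the probability values themselves, which is exactly the dependence that constraint (H) above rules out. A minimal sketch with exact arithmetic:

```python
# Standard inverse-CDF / interval-partition sampler for n outcomes:
# outcome k occurs iff c falls in [P_0+...+P_{k-1}, P_0+...+P_k).
# Mutually exclusive and exhaustive by construction, but probability-value
# dependent (the interval boundaries are the probabilities).
from fractions import Fraction

def outcome(c, probs):
    acc = Fraction(0)
    for label, p in probs:
        acc += p
        if c < acc:
            return label
    return probs[-1][0]  # guard for the c == 1 edge case

probs = [("A", Fraction(1, 5)), ("B", Fraction(3, 10)), ("C", Fraction(1, 2))]

# Sweeping c over a uniform grid recovers the probabilities exactly:
counts = {"A": 0, "B": 0, "C": 0}
for i in range(1000):
    counts[outcome(Fraction(i, 1000), probs)] += 1
# counts == {"A": 200, "B": 300, "C": 500}
```

This makes the trade-off in the question explicit: exclusivity and exhaustiveness come for free here precisely because the construction gives up probability-value independence.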
  • asked a question related to Decision Theory
Question
12 answers
In many instances it has been said that cutting a dendrogram at a certain level gives a set of clusters, and cutting at another level gives another set of clusters. How would you pick where to cut the dendrogram?
Is there something we could consider an optimal point? I have also wondered about this problem, but (unfortunately) haven't found any convincing answers yet.
So is it correct to say "there is no definitive answer, since cluster analysis is essentially an exploratory approach; the interpretation of the resulting hierarchical structure is context-dependent, and often several solutions are equally good from a theoretical point of view"? Please help me.
Relevant answer
Answer
Well, I guess it depends on the nature of the data set. I assume you are using hierarchical cluster analysis? The dendrogram is very flexible to interpret, which requires domain knowledge of the data.
If you want more solid evidence about whether a cluster number is good or bad, go for silhouette analysis or the BIC score.
If you don't know the initial number of clusters, I suggest you go for a Variational Bayesian Gaussian Mixture. It will output a cluster number that is proper based on the input data.
Personally, I suggest you go for Latent Tree Model clustering; this type of cluster analysis will give you a tree structure with a set of reasonable and possible clustering strategies (the number of clusters and the weights between them).
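To illustrate the silhouette idea without any libraries, here is a tiny sketch of my own on 1-D toy data: two candidate labelings stand in for cutting the same dendrogram at two different heights, and the mean silhouette prefers the cut that separates the three tight groups.

```python
# Dependency-free mean silhouette for comparing two candidate dendrogram
# cuts. Data and labelings are invented; in practice the labelings come
# from cutting the tree at two heights.

def mean(xs):
    return sum(xs) / len(xs)

def silhouette(data, labels):
    n, total = len(data), 0.0
    for i in range(n):
        x, li = data[i], labels[i]
        # a: mean distance to own cluster; b: mean distance to nearest other
        a = mean([abs(x - data[j]) for j in range(n) if labels[j] == li and j != i])
        b = min(mean([abs(x - data[j]) for j in range(n) if labels[j] == l])
                for l in set(labels) if l != li)
        total += (b - a) / max(a, b)
    return total / n

data = [1.0, 1.1, 1.2, 5.0, 5.1, 5.2, 9.0, 9.1, 9.2]
cut_k3 = [0, 0, 0, 1, 1, 1, 2, 2, 2]   # low cut: three clusters
cut_k2 = [0, 0, 0, 1, 1, 1, 1, 1, 1]   # high cut: two clusters
# silhouette(data, cut_k3) > silhouette(data, cut_k2)
```

The same comparison over all candidate cut heights gives a simple, if heuristic, way to pick where to cut; it complements rather than replaces the domain-knowledge reading of the dendrogram.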
  • asked a question related to Decision Theory
Question
4 answers
For both Bayesian and frequentist expected loss, is the parameter an index of the data on which decisions are made, or a state of nature?
Are there examples where a loss function is mapped using a vector of real observations, to show what the parameter looks like?
Relevant answer
Answer
Trying to understand your question. Since the possible models are infinite in number, a parametrization cannot truly represent a state of nature; it can only help model a certain phenomenon (or part of it), and the loss function (in supervised methods) tests the goodness of fit of that model. The model of nature that one puts together can perform well or poorly, and the parameters of the model need to be viewed accordingly.
The goal of all statistical methods (Bayesian or Frequentist) is to find a model that can adequately represent a phenomenon (represented by a set of available data -- real or simulated) and use that to make logical predictions.  
My sense here is that you're asking if that data can be used for bump hunting.  All unsupervised methods try to answer that very question and there are a vast number of techniques available for that, some better at certain things than others.  In these methods the term loss function is at a loss since the error functions used need to be seen more as frames of judgment for the fit. 
But even for supervised methods, one can use non-parametric methods that use the available data in different ways to establish the probability distribution, which might be the other answer that you're looking for.  
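One concrete way to see the parameter/decision/loss triangle at work (a sketch of my own, with an invented stand-in for posterior draws): the decision minimizing expected loss depends on the loss function chosen - squared error picks the mean of the draws, absolute error a median.

```python
# Empirical expected loss over a toy "posterior sample" of a parameter.
# Squared-error loss is minimized near the mean; absolute-error loss
# anywhere in the median interval. The sample values are invented.

sample = [1.0, 2.0, 3.0, 10.0]   # stand-in for draws of the parameter

def expected_loss(d, loss):
    return sum(loss(theta, d) for theta in sample) / len(sample)

grid = [i / 100 for i in range(1001)]  # candidate decisions 0.00 .. 10.00
d_squared = min(grid, key=lambda d: expected_loss(d, lambda t, u: (t - u) ** 2))
d_absolute = min(grid, key=lambda d: expected_loss(d, lambda t, u: abs(t - u)))

# d_squared == 4.0 (the mean); d_absolute lies in the median interval [2.0, 3.0]
```

The outlier at 10.0 drags the squared-error decision toward it while leaving the absolute-error decision untouched - which is the practical content of "the loss function shapes the optimal decision".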
  • asked a question related to Decision Theory
Question
13 answers
I am putting together a decision matrix - a 3D cube based on three factors. The three observable factors are measured, and the coordinate opens a cell that gives the decision. A trivial example: suppose you had a pet and worked out that there are three main indicators of what it wants: wagging of the tail, excitement, and what it brings to you. You want to leave a simple model so that when you are away, anyone looking after the pet can observe these three and understand what the pet wants. E.g., moderate tail wagging, high excitement and bringing you a leash means it wants to go for a walk. My question: is there any work done on this type of model?
Relevant answer
Answer
Here are some branching points:
1) Do we have uncertainty or probabilities? What is the probability scale: 0%-100%, or {none, low, medium, high}?
2) Can we formulate an attribute like this:
"Is a leader present?" with values {definitely not, there is a chance, 50/50, there is strong evidence of presence, definitely yes}? Stepclass deals with these attributes just fine. It is the expert who processes uncertainty during the knowledge-elicitation phase.
3) Preference-based or knowledge-based decision? They differ quite a lot. The former is solved with (multi-criteria) decision support systems; the latter with expert systems and machine-learning tools. I suppose you have a knowledge-based problem.
Concerning complexity: you can either devise a complex model and implement it in a software module, or simply delegate it to experts during the knowledge-elicitation phase. They consider all possible situations described by the attributes available to a field officer. Stepclass automates this process and guarantees completeness, i.e. all situations are covered.
"Limited observable knowledge": this makes the uncertainty irremovable. But with verbal estimates, uncertainty processing is the responsibility of the experts - and they look for a solution with no haste.
Feel free to contact me with any questions.
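The 3-factor decision cube from the question can be prototyped as a plain lookup table keyed by the coordinate triple. Everything below (factor levels, decisions, the fallback) is hypothetical, purely to show the shape of the model:

```python
# Minimal decision-cube sketch: each observed (tail, excitement, brought)
# coordinate opens one cell containing the decision. Unfilled cells fall
# back to a default, mirroring the "delegate to experts" branch above.

decision_cube = {
    ("moderate", "high", "leash"): "wants a walk",
    ("high", "high", "ball"): "wants to play",
    ("low", "low", "bowl"): "is hungry",
}

def decide(tail, excitement, brought):
    return decision_cube.get((tail, excitement, brought), "unknown - ask an expert")

# decide("moderate", "high", "leash") -> "wants a walk"
```

With, say, 3 levels per factor the cube has 27 cells, so completeness (every cell filled, as Stepclass guarantees) is checkable by simple enumeration.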
  • asked a question related to Decision Theory
Question
1 answer
Suppose I want to give my robot a specific goal state to achieve, but the robot must also maximize some reward function while pursuing the goal. What if the shortest sequence of actions achieving the goal is also the most costly according to the reward function? Or what if the most rewarding sequence of actions is the longest possible? Is there some work which tackles this problem?
Relevant answer
Answer
Perhaps one should model time or plan length into the reward function and then simply maximize reward.
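That suggestion can be sketched in a few lines (plans and numbers invented for illustration): fold plan length into the objective as a per-step penalty, then just maximize.

```python
# Fold plan length into the reward as a per-step cost, then maximize.
# Plans are lists of per-step rewards; all values here are made up.

STEP_COST = 0.5

def objective(step_rewards):
    return sum(step_rewards) - STEP_COST * len(step_rewards)

short_costly = [-1.0, -1.0]             # reaches the goal fast, poor reward
long_rewarding = [2.0, 2.0, 2.0, 2.0]   # slow but high reward

best = max([short_costly, long_rewarding], key=objective)
# With STEP_COST = 0.5 the long plan wins: 8 - 2 = 6 vs -2 - 1 = -3.
```

The single knob STEP_COST encodes the trade-off from the question: raise it enough (here, above 5) and the preference flips back toward the short plan.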
  • asked a question related to Decision Theory
Question
8 answers
Many authors deal with individual differences, but there is inconsistency about which constructs count as individual differences. This is especially true in decision-making style research, where some researchers study individual differences in decision-making style, while others include decision-making style as an individual difference.
Relevant answer
Answer
Stanovich and West have several papers on individual differences, although they are from the late 90s and early 2000s. You can find a lot of their papers on their websites: http://www.keithstanovich.com/Site/Research_on_Reasoning.html and http://www.rfwest.net/Site_2/Welcome.html
  • asked a question related to Decision Theory
Question
2 answers
I'm trying to do the IIA test for an unlabeled choice experiment in Stata. However, it throws an error indicating: "Hausman IIA / Small-Hsiao IIA test requires at least 3 dependent categories".
My dependent variable is <<choice>>, which takes the value of 1 if one of the alternatives A, B or C is chosen and 0 if not.
What can I do?
Relevant answer
Answer
It seems to me you need more categories in the dependent variable, as the test requires - the error message says it needs at least 3, while your choice indicator is binary.
  • asked a question related to Decision Theory
Question
3 answers
Can anyone help with accessible literature or article suggestions on counterpower and sociocracy in English (preferably in the radical pedagogy framework)? So far I've come to Peter Wohlleben (The Secret Lives of Trees), but since I can't read German, it's not very helpful. On the counterpower topic I got Tim Gee (just as a source - I would love to get his book, if anyone has it as a PDF).
Thank you all soooo much in advance,
Maruša :)
P.S.: How does this requesting work? I've requested two articles already, but no one seems to respond. :-/
Relevant answer
Answer
Top!!
These papers will all be very helpful,
thank you both ever so kindly!! :)
All the best,
Maruša
  • asked a question related to Decision Theory
Question
1 answer
When there is a large number of attributes (say 20+), ensuring that they are either independent of each other or only slightly dependent is very difficult. However, dependency among attributes is usually avoided when building the MCDA hierarchy. So, before generating the final ranking, does PROMETHEE automatically take care of this issue through its positive, negative and net outranking flows?
  • asked a question related to Decision Theory
Question
12 answers
I am doing an AHP but have a regular questionnaire, because I have multiple alternatives (up to 40). Using an AHP questionnaire would be cumbersome and perhaps too complicated for the respondents, would be time-consuming, and might cause consistency problems. So I'm wondering if there's any justification for using regular scales in this circumstance.
Relevant answer
Answer
You can separate the evaluation of priorities and the evaluation of alternatives using different methods. 40 alternatives - resulting in 780 (!) pairwise comparisons - cannot be handled by any decision maker. The recommended maximum is the magic number 7 plus or minus 2. This applies to both criteria and alternatives.
One way would be to structure your alternatives hierarchically, i.e. group them into categories, if possible.
The other way would be to use AHP for the prioritization of criteria only, and any other method for the evaluation of alternatives with respect to those prioritized criteria. It could be a simple table with a yes/no or applicable/not applicable scale, or any other scale, e.g. a Likert scale, rating how well each alternative matches the specific criterion.
I did this in some of my projects, using AHP software for criteria prioritization (see e.g. http://bpmsg.com/academic/ahp.php) and an Excel sheet for alternative evaluation.
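The criteria-prioritization step can be sketched in a few lines via the row geometric-mean method, a common approximation to AHP's principal-eigenvector weights. The 3x3 pairwise comparison matrix below is hypothetical (judgments on Saaty's 1-9 scale):

```python
# AHP criteria prioritization via row geometric means (an approximation to
# the principal eigenvector). The comparison matrix is invented.
import math

A = [
    [1.0, 3.0, 5.0],      # criterion 1 judged 3x as important as 2, 5x as 3
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]

def ahp_weights(M):
    gms = [math.prod(row) ** (1 / len(row)) for row in M]  # row geometric means
    total = sum(gms)
    return [g / total for g in gms]                        # normalize to sum 1

w = ahp_weights(A)
# w sums to 1 and preserves the dominance order w[0] > w[1] > w[2]
```

These weights can then feed a plain scoring table for the 40 alternatives, exactly as in the two-stage approach described above.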
  • asked a question related to Decision Theory
Question
8 answers
I try to find the link between normative decision theory (NDT) and decision support system (DSS) domain. I'm confused. Both domains use a magical word "decision".
However, NDT is mostly concentrated on analyzing decisions with respect to their outcomes or consequences, minimizing a loss function or maximizing outcomes.
DSS, on the other side, seems to be limited to classification: predict the decision on the basis of historical data, where the loss function is constructed over good/bad predictions. Often the training data for a DSS are sets labeled by domain experts.
Is it true that DSS use term "decision" in a sense of "inference" or "judgment" in namespace of NDT?
Relevant answer
Answer
Both use the term decision in the same sense, but the normative models consider a very simplified situation in which a model exists with its alternatives fixed and known, together with a (generally probabilistic) model of the future, and they focus on the choice phase (optimizing).
DSSs try to stick to reality without ignoring the different phases of the process "à la Simon", and they try to address phases other than choice. As regards choice, they generally work with open models. This means that part of the model is built by the user via interaction with the system, and that it generally suffices to "satisfice".
  • asked a question related to Decision Theory
Question
5 answers
I begin with a general question: is normative decision theory in its primary form applied to real problems? I can hardly find examples of real payoff matrices, only toy examples.
Back to the main question. I would like to represent the following problem in the form of a payoff matrix. An incident commander arriving at the fire ground has these alternatives: 1. Gather further information; 2. Evacuate the people; 3. Extinguish the fire.
Candidate states of nature: the fire will extinguish itself; the people will evacuate themselves.
How should the payoff matrix for this problem be constructed? Should the states of nature be composed as combinations of the candidates' values:
State 1: won't extinguish itself, won't evacuate themselves;
State 2: will extinguish itself, won't evacuate themselves;
State 3: won't extinguish itself, will evacuate themselves;
State 4: will extinguish itself, will evacuate themselves?
Let us assume that the candidates are not mutually exclusive and are independent.
Relevant answer
Answer
A good book that explains the distinction between normative, descriptive, and prescriptive decision theories is Thinking and Deciding by J. Baron.
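For concreteness, the three actions and four combined states from the question can be laid out as a payoff matrix and evaluated under standard decision rules. All numbers below are hypothetical (negative values read as losses); under the independence assumption in the question, the state probabilities could be formed as products of the two event probabilities:

```python
import numpy as np

# Hypothetical payoffs. Rows: gather info, evacuate people, extinguish fire.
# Columns: state 1 .. state 4 (the four fire/evacuation combinations).
payoff = np.array([[-8, -2, -6, -1],
                   [-3, -3, -2, -2],
                   [-5, -1, -4, -1]])

# Pessimistic (maximin) rule: pick the action with the best worst case.
maximin_action = int(np.argmax(payoff.min(axis=1)))

# Expected-value rule, given (hypothetical) state probabilities.
p = np.array([0.4, 0.2, 0.3, 0.1])
ev_action = int(np.argmax(payoff @ p))
print(maximin_action, ev_action)
```

With these numbers both rules pick "evacuate people"; changing the payoffs or probabilities shows how sensitive the recommendation is to the model.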
  • asked a question related to Decision Theory
Question
17 answers
What are the key aspects that differentiate normative and prescriptive models? Prescriptive models are something between normative and descriptive models; however, they have strong roots in normative theory. How can these two kinds of models be clearly distinguished?
Relevant answer
Answer
This is a good question - thanks for bringing it up! I also think that this distinction is often not very clear. Here are a few thoughts on this that I encountered while studying bits and pieces of decision analysis.
I liked an idea presented by one of the lecturers on a course on decision analysis at the LSE that I attended: descriptive models focus on how people make decisions in their everyday lives; normative models focus on how perfectly rational agents should make decisions (where a perfectly rational agent is, for example, someone whose preferences over the considered options are always complete and continuous, who is a perfect Bayesian calculator, etc.); and prescriptive models focus on how we should make decisions in practice, given that we may not always be perfectly rational agents (we need, for example, to be aware of various biases and intuitions that may lead us astray from the normative prescriptions and take extra steps to avoid them when making important decisions).
There is also a paper related to this that I particularly like, by Larry Phillips in Acta Psychologica: "A Theory of Requisite Decision Models" (1984). It suggests (if I understand it correctly) that a prescriptive decision theorist's job, as an advisor to somebody who makes a decision, is to make sure that the decision-maker, having thought about the decision problem, is left content and at ease with the final decision that he or she makes. A complex decision problem might be daunting at the start due to multiple objectives, risks, uncertainties, etc., leaving the decision-maker unsure about how best to proceed. Decision-theoretic methods help break that problem down into various components (the analysis of the decision-maker's preferences over outcomes, his or her objectives, risks, subjective probability assessments of possible states of the world, etc.), and this process, in the end, may help the decision-maker decide which action he or she prefers the most, which is the goal of a prescriptive model.
Hope this helps and I look forward to other suggestions!
  • asked a question related to Decision Theory
Question
2 answers
I know two types of them: WSM (weighted sum method) and the desirability function method. Are there other methods?
Relevant answer
Answer
Maybe you can consider the non-parametric approach of DEA (data envelopment analysis), which does not need criteria weights (so it is different), and which computes each alternative's efficiency (but maybe we can call it 'value') as a ratio between its actual performance and the best performance (as shown in practice by the other alternatives). Maybe this is not strictly MCDM, but it is in the family and may be interesting to compare with.
You can also consider more complex non-linear utility functions.
Look at the Wikipedia page on "utility". Maybe there is something interesting there.
Good luck.
Fernando.
  • asked a question related to Decision Theory
Question
10 answers
I think about onsite decision support for incident commanders.
In my opinion, currently the descriptive and naturalistic models are exploited. Why not prescriptive or normative ones?
If we consider a human as the decision maker, the factors underlying the use of descriptive or naturalistic models are: the inability to comprehend and process analytically all the information, courses of action, consequences and costs of alternative activities under mental and time pressure.
If we consider a computer system as the decision maker, the factors are:
  • lack of information - we cannot ask firefighters to enter data into a computer system, because they don't have the time;
  • a poor sensory layer for recognizing phenomena or victims - so far there are no sensors in buildings that make it possible to track fire dynamics, people's locations and their physical state;
  • huge uncertainty in modeling and foreseeing the fire and people's behavior, the reaction of the building to fire exposure, changes in ventilation, extinguishing effects and many others.
What do you think about this problem?
Relevant answer
Answer
Hi Adam, I think this is a really interesting question.
I very much agree with answers above. An additional thought on the role of environment...
Klein's work demonstrated that expert firefighters can develop effective heuristics. That is, through experience, firefighters develop mental shortcuts that help them to classify a scenario rapidly and accurately and make an effective decision. A pure normative approach takes time. So, in contexts where time is critical and humans can develop effective heuristics, the descriptive approach will be most appropriate.
We should consider employing normative/prescriptive models in contexts where ineffective heuristics are developed, thus helping to reduce overconfidence and reckless decision making.
A great reference for this: Kahneman, D., & Klein, G. (2009). Conditions for intuitive expertise: a failure to disagree. The American Psychologist, 64(6), 515–526. http://doi.org/10.1037/a0016755
This is a current topic I'm wrangling with in Defence context, so welcome your thoughts.
Simon
  • asked a question related to Decision Theory
Question
13 answers
I am looking for methods like ELECTRE IV or MAXIMIN, and for papers where the problem of criteria incomparability is considered.
Relevant answer
Answer
Maciej,
You don't need to create weights for the criteria. You can use Shannon's entropy method, which gives exact weights based on the information contained in the problem, that is, without subjectivity. Zavadskas has an example of this.
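The entropy weighting mentioned above can be sketched in a few lines (hypothetical decision matrix; criteria whose column values vary more across alternatives receive larger objective weights):

```python
import numpy as np

# Hypothetical decision matrix: 4 alternatives x 3 benefit criteria.
X = np.array([[7.0, 9.0, 9.0],
              [8.0, 7.0, 8.0],
              [9.0, 6.0, 8.0],
              [6.0, 7.0, 8.0]])
m = X.shape[0]

P = X / X.sum(axis=0)                         # column-wise proportions
E = -(P * np.log(P)).sum(axis=0) / np.log(m)  # Shannon entropy per criterion
d = 1.0 - E                                   # degree of divergence
w = d / d.sum()                               # objective entropy weights
print(w)
```

Here the third criterion, whose values are nearly identical across alternatives, gets the smallest weight. Note that this assumes strictly positive performance values (otherwise the logarithm is undefined).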
  • asked a question related to Decision Theory
Question
6 answers
I'm looking for theories or models which try to combine or to unify the theory of Alfred Schütz with common theories of explaining action via decision models like Rational Choice or Bounded Rationality models. The Frame Selection Theory of Hartmut Esser and Clemens Kroneberg is well known to me but I wonder whether there are similar but independent attempts.
Relevant answer
Answer
Hi Rene
There is an alternative framing theory of action which - like the frame selection model - is also rooted in a (wide) rational choice perspective: Lindenberg's theory of goal-framing and social rationality. Esser was - as far as I know - heavily influenced by Lindenberg's framing approach in the beginning of the development of his theory, but after a while he took a different path. Lindenberg's framing theory is all about goals: their association with ideologies/chunks of knowledge, their situational activation, and the interaction between background and foreground goals in decision processes. I attach a paper in which Lindenberg discusses Esser's frame selection model in comparison to his goal-framing approach.
  • asked a question related to Decision Theory
Question
4 answers
I seek your advice on how to use this method. Thank you.
Relevant answer
Answer
Thanks Mr Faris, interesting journal. 
  • asked a question related to Decision Theory
Question
12 answers
I was thinking about the different decision making methods under certain and uncertain conditions. My specific question is that:
As you know, we have many MCDM tools like AHP, ANP, TOPSIS, VIKOR, PROMETHEE, MOORA, SIR and many other methods, and all of them have been extended to fuzzy, type-2 fuzzy, intuitionistic fuzzy and grey environments. Which one is really more applicable under uncertain situations: fuzzy, type-2 fuzzy, intuitionistic fuzzy, or a grey environment for a decision making method? I know each uncertainty logic has its applications, but sometimes a tiny difference in the collected data may cause different results with each method.
All ideas and comments are appreciated. I hope the experts will take action on this question by following it or leaving their valuable comments.
Relevant answer
Answer
Dear Amin,
It totally depends on the availability of data.
You may get some help from my paper attached here.
Basically, in real-world situations, when DMs cannot construct a fuzzy membership function to represent the true situation, they can go for grey systems theory.
  • asked a question related to Decision Theory
Question
5 answers
I need a quick way to get participants to think/act as if they have made their own choice, while actually have their choice correspond to their assigned condition. In other words, I am looking for a way to get them to "choose" their assigned condition. 
I am considering offering multiple choices (out of 4) and telling them that their choice has to match a random selection in order for the task to begin. But wondering if there is a better, more efficient way to do this.
Relevant answer
Answer
Forcing sounds like a very interesting method and one I'm going to have to read more about. On the other hand, many experimentalists are extremely reluctant to directly lie to participants. This shibboleth is particularly strong in experimental economics, where Nick Bardsley offers an alternative method: Bardsley, Nicholas. "Control without deception: Individual behaviour in free-riding experiments revisited." Experimental Economics 3.3 (2000): 215-240.
  • asked a question related to Decision Theory
Question
5 answers
In intertemporal choice paradigms, I would like to build a discount utility function for each participant in my study based on a couple of intertemporal decisions performed by each participant. Is there any software that can easily perform such calculations? 
Is matlab the most appropriate software for doing this?
Relevant answer
Answer
A bit more complicated, but maybe also more elegant, is to try to estimate the function through a logistic regression.
The procedure is presented in this article and they also provide their code in Stata: Wileyto, E. P., Audrain-Mcgovern, J., Epstein, L. H., & Lerman, C. (2004). Using logistic regression to estimate delay-discounting functions. Behavior Research Methods, Instruments, & Computers, 36(1), 41-51.
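As a lighter-weight alternative to the logistic-regression estimation cited above, one can fit Mazur's hyperbolic discount function V = A/(1 + kD) directly to each participant's indifference points, e.g. by a simple grid search over the discount rate k (all data below are hypothetical):

```python
import numpy as np

# Hypothetical indifference points: immediate amount judged equivalent to
# a delayed reward of A = 100 at each delay (in days).
delays = np.array([1.0, 7.0, 30.0, 90.0, 180.0])
indiff = np.array([95.0, 85.0, 60.0, 35.0, 25.0])
A = 100.0

# Grid search: minimize squared error between the hyperbolic model and data.
ks = np.linspace(1e-4, 0.2, 4000)
sse = np.array([((A / (1.0 + k * delays) - indiff) ** 2).sum() for k in ks])
k_hat = ks[int(np.argmin(sse))]
print(k_hat)
```

Repeating this per participant yields one discount parameter per person, which can then be analyzed across conditions. A nonlinear least-squares routine (e.g. `scipy.optimize.curve_fit`) would do the same job more efficiently.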
  • asked a question related to Decision Theory
Question
9 answers
I have a weighted supermatrix and I am trying to convert it into a limit matrix. The weighted supermatrix can be transformed into the limit supermatrix by raising it to successive powers until the matrix converges. How can this be performed?
Relevant answer
Answer
Hi Sowmiyan,
The weighted supermatrix can be easily handled in .xls only if it is primitive: in this case, it is sufficient to raise it to powers of 2 until you find that the resulting matrix has identical columns. One of these columns can be taken as the LAP (Limiting Absolute Priorities) of the problem. All other cases are not easily managed by means of an .xls file: you can use Superdecisions (http://www.superdecisions.com/), a free-to-download software package that implements ANP.
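The repeated-squaring procedure described above takes only a few lines in Python (the column-stochastic, primitive supermatrix below is hypothetical):

```python
import numpy as np

# Hypothetical column-stochastic weighted supermatrix (4 nodes).
W = np.array([[0.0, 0.5, 0.3, 0.2],
              [0.4, 0.0, 0.4, 0.3],
              [0.3, 0.2, 0.0, 0.5],
              [0.3, 0.3, 0.3, 0.0]])

M = W.copy()
for _ in range(100):              # raise to successive powers of 2
    M_next = M @ M
    if np.allclose(M_next, M, atol=1e-12):
        break
    M = M_next

# For a primitive matrix all columns of the limit matrix coincide,
# so any column gives the limiting priorities.
priorities = M[:, 0]
print(priorities)
```

If the supermatrix is not primitive (e.g. it is cyclic or reducible), the simple power sequence may oscillate or split into blocks, which is why dedicated ANP software is recommended for those cases.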
  • asked a question related to Decision Theory
Question
6 answers
The Analytic Hierarchy Process - AHP (Saaty 1980) is a multicriteria tool considered relevant to nearly any ecosystem management application that requires the evaluation of multiple participants or in which complex decision-making processes are involved (Schmoldt & Peterson 1997, Schmoldt et al. 2001, Reynolds & Hessburg 2005).
I need to consult an example of a form to be filled in by experts in a given area of knowledge in order to perform a pairwise comparison between environmental criteria that are useful for defining the soil suitability of a region (e.g., soils, slope, aspect, climate, ...). Two factors are compared using the rating scale which ranges from 1 to 9 with respect to their relative importance. Then we obtain the weights for each criterion, to be used in the map algebra.
Relevant answer
Answer
I've primarily used AHP in multiple groups where the answers of group members are calculated and entered into a multifactorial program. This worked better than having folk simply fill out a questionnaire. See my The Limits of Principle for a description of this work. Also, see my paper on medical decision making and AHP listed on my website.
  • asked a question related to Decision Theory
Question
10 answers
I conducted AHP using 3 pairwise comparisons. Unfortunately the CR comes out as 0.302. A balanced scale using principal eigenvectors also results in a CR of 0.22. Is there any way to move forward with these results?
Relevant answer
Answer
Saaty has stated that a ratio greater than 0.1 does not necessarily mean certain inconsistency. It means that human decision makers and respondents are human.
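For reference, this is how the CR is computed. The 3x3 matrix below is hypothetical and happens to be perfectly consistent, so the CR comes out as 0; substituting your own judgments will show how far they deviate. The RI values are Saaty's published random indices:

```python
import numpy as np

# Hypothetical 3x3 pairwise comparison matrix (here fully consistent: 2 * 3 = 6).
A = np.array([[1.0, 2.0, 6.0],
              [1/2, 1.0, 3.0],
              [1/6, 1/3, 1.0]])
n = A.shape[0]

lam_max = np.max(np.linalg.eigvals(A).real)   # principal eigenvalue
CI = (lam_max - n) / (n - 1)                  # consistency index
RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]           # Saaty's random index
CR = CI / RI                                  # consistency ratio
print(CR)
```

A CR of 0.302 means lambda_max is far above n, i.e. the judgments contain strong circular contradictions; revising the one or two most extreme judgments usually brings it down quickly.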
  • asked a question related to Decision Theory
Question
8 answers
I am looking for some top and mathematical references in Bayesian analysis and Bayesian decision making. Books and Tutorial articles mostly. Thank you
Relevant answer
Answer
1) Berger, J.O. (1985). Statistical Decision Theory and Bayesian Analysis. Springer-Verlag.
2) Bernardo, J.M. and Smith, A.F.M. (1994). Bayesian Theory. Wiley.
  • asked a question related to Decision Theory
Question
11 answers
In hypothesis testing we use linear parameters. I am trying to work on factoring in irrationality by using non-linear modeling. I would like to factor in the effect of the correlation of the cause variable on the result.
Is this possible? Are there any papers on this?
Relevant answer
Answer
If you mean how you can model the "irrationality" (in quotes for reasons that are clear in the linked paper), then it is worth describing this irrationality. A good review is in:
The key thrust of the argument is that humans use procedures that appear non-optimal from some rational decision making perspectives, but are actually well-adapted for the situations in which the decisions are made (not always, but sometimes). If you mean just relaxing a model, follow Bruce's advice.
  • asked a question related to Decision Theory
Question
3 answers
Does anyone know what common group tasks people use in their experiments? Tasks where performance can be easily evaluated objectively? In the literature I found the Michigan State University Distributed Dynamic Decision Making (MSU-DDD) task, but could not find the modified version for research. Does anyone have this game or know of other games that I could use in research? Thanks!
Relevant answer
Answer
I was thinking about doing an experiment on teamwork efficiency, and the only thing that came to my mind is jigsaw puzzle solving: how much faster would a team of 2 solve a given 300-piece puzzle than a team of 3, or something similar. You need something that is easy to parallelize but that requires some interaction. My suspicion is that good teamwork games are also games where individual effort is hard to quantify...
  • asked a question related to Decision Theory
Question
4 answers
I have used AHP, TOPSIS and Fuzzy TOPSIS in my research work. I would like to know the reliability of TOPSIS and its variations.
Relevant answer
In my view, the TOPSIS method is a good method due to its simplicity and its ability to consider an unlimited number of alternatives and criteria in the decision making process.
However, similarly to the AHP method, TOPSIS presents the problem of rank reversal (the final ranking can change when new alternatives are included in the model). On the other hand, the Fuzzy TOPSIS method - proposed by Chen (2000) - does not present this problem.
More detail about the advantages and limitations of the Fuzzy TOPSIS method can be found in my paper "A comparison between Fuzzy AHP and Fuzzy TOPSIS methods to supplier selection" (http://www.sciencedirect.com/science/article/pii/S1568494614001203).
Regards,
Francisco Rodrigues L. Jr.
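A minimal crisp TOPSIS sketch, for readers who want to experiment with rank reversal by adding alternatives to the matrix (hypothetical data, benefit criteria only):

```python
import numpy as np

# Hypothetical decision matrix: 3 alternatives x 3 benefit criteria.
X = np.array([[7.0, 9.0, 8.0],
              [8.0, 7.0, 6.0],
              [9.0, 6.0, 7.0]])
w = np.array([0.5, 0.3, 0.2])          # criteria weights, summing to 1

R = X / np.sqrt((X ** 2).sum(axis=0))  # vector normalization
V = R * w                              # weighted normalized matrix

ideal = V.max(axis=0)                  # positive ideal solution (benefit criteria)
anti = V.min(axis=0)                   # negative ideal solution

d_plus = np.sqrt(((V - ideal) ** 2).sum(axis=1))
d_minus = np.sqrt(((V - anti) ** 2).sum(axis=1))
closeness = d_minus / (d_plus + d_minus)  # higher = closer to the ideal
print(closeness)
```

Appending a new row to `X` and re-running shows the rank-reversal effect: the normalization and the ideal points change, so the ordering of the original alternatives can flip.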
  • asked a question related to Decision Theory
Question
4 answers
The first axiomatic accounts of preference for flexibility and freedom of choice are due to Koopmans (1962) and Kreps (1979), who assumed that a Decision Maker always enjoys having more alternatives available. After that, e.g. Puppe (1996) refined the idea and distinguished the essential alternatives in an opportunity set as those whose exclusion “would reduce an agent’s freedom”.
Most applications I know of consider social choice problems that are relevant to economics theory. What other fields have seen applications of those concepts? I'm particularly interested in corporate decision-making and engineering design.
References:
T. C. Koopmans, “On flexibility of future preference,” Cowles Foundation for Research in Economics, Yale University, Cowles Foundation Discussion Papers 150, 1962.
D. M. Kreps, "A representation theorem for 'preference for flexibility'," Econometrica, vol. 47, no. 3, pp. 565-577, 1979.
C. Puppe, "An Axiomatic Approach to 'Preference for Freedom of Choice'," Journal of Economic Theory, vol. 68, no. 1, pp. 174-199, January 1996.
Relevant answer
Answer
Hi Carlos! Flexibility and adaptability are considered now key elements in the design of engineering systems, since they enable reacting to uncertain futures. You may want to look at these papers:
  • asked a question related to Decision Theory
Question
3 answers
The reason for my question is that so many other terms in the defence literature refer to "Freedom of Action". [Please see for example: ADP 3–0, Unified Land Operations]
Relevant answer
Answer
Dear Carlos,
Thank you for your enlightening answer.
The concept of "freedom of action" is quite close to the "freedom of choice" you mentioned.
In the defence literature it is used as an ability to act as one wants [as I interpret it]. For example in [ADP 3–0, Unified Land Operations] is stated: "The sustainment warfighting function is the related tasks and systems that provide support and services to ensure freedom of action, extend operational reach, and prolong endurance. The endurance of Army forces is primarily a function of their sustainment. Sustainment determines the depth and duration of Army operations. It is essential to retaining and exploiting the initiative."
My main motivation is actually to find a formal definition of the concept so that we may construct a mathematical definition of it and, consequently, apply mathematical analysis. I noted that your research interest is quite similar to mine, only in another field of science.
  • asked a question related to Decision Theory
Question
2 answers
I am researching trust models in WSNs and running simulations of the model. I can't find any MATLAB code for the reputation-based framework for sensor networks (RFSN). It uses a Bayesian formulation and a beta distribution. Could you help me?
Relevant answer
Answer
Excuse me, sir. Could you explain more clearly? I don't understand.
  • asked a question related to Decision Theory
Question
15 answers
I am planning to conduct research on competitive traits and their effect on competitive states. I would appreciate it if someone could recommend an instrument to evaluate pessimistic traits and the consequences of cognitive biases. Thank you in advance.
Relevant answer
Answer
Maybe prospect theory could be a useful input for cognitive bias. In Daniel Kahneman's book (Thinking, Fast and Slow) I found lots of experiments in which cognitive biases are analyzed.
(I am sorry, maybe I have not answered to your question)
  • asked a question related to Decision Theory
Question
5 answers
I have a project on function approximation by fuzzy decision trees and I want to compare my results with some other methods improved by fuzzy logic.
Relevant answer
Answer
You can read my papers about infinite fuzzy logic controllers (Fuzzy Sets and Systems, 1995; InterStat, 2003) if you want to work with fuzzy approximations of Lebesgue functions or fuzzy probability.
  • asked a question related to Decision Theory
Question
12 answers
Please let me know of any free software you know of for the ELECTRE methods.
Relevant answer
These methods can be implemented in an Excel sheet. Maybe you can find examples by searching Google for "electre filetype:xls", "electre filetype:xlsx" or "electre filetype:ods".
  • asked a question related to Decision Theory
Question
14 answers
The Saaty rating scale is rather nonlinear, but the aggregation approach is definitely linear. Is AHP a linear or a nonlinear method? I think it is a linear method (e.g. Zarghami and Szidarovszky).
Zarghami M. and Szidarovszky F. (2011). Multicriteria Analysis, Springer, pp. 33-39.
Relevant answer
Answer
AHP  is nonlinear with several linear steps.
  • asked a question related to Decision Theory
Question
6 answers
Does anybody have any suggestions for what I should read about in connection with case-based decision theory? This is a totally new area to me and any information about the theory would be much appreciated.
Relevant answer
Answer
There is definitely a theory of case-based decisions. However, this book was published some years ago; I do not know if there has been significant research on this topic in the last years.
  • asked a question related to Decision Theory
Question
13 answers
Are MCDM and MADM synonyms? What are the differences?
Relevant answer
Answer
MCDM is a general term and it is divided into two subsets: MADM (multi-attribute decision making) and MODM (multi-objective decision making). Hence there is no difference between MCDM and MADM; all MADM methods like TOPSIS, AHP, PROMETHEE, etc. are known as MCDM methods.
  • asked a question related to Decision Theory
Question
6 answers
I am considering the relation between a player and his agent or agents in the definition of the game.
Relevant answer
Answer
In game theory there is no formal difference between player and agent. Some authors prefer the term "player" and others may prefer the term "agent". In general, if your audience is formed by game theorists, the term "player" is fine, because it makes clear the kind of decision maker you are referring to. If your audience is not formed by game theorists, the term "agent" may be more appropriate in order to avoid misunderstandings (unless your are talking about actual tabletop games).
  • asked a question related to Decision Theory
Question
3 answers
I'm looking for data from prisoner's dilemma experiments in which participants played only one round of the game. A closely related experiment, which I found, is Goeree, Holt and Laury (J Pub Econ 2002) where participants play ten one-shot games without feedback between games (hence, no learning effects).
Relevant answer
Answer
Correct
  • asked a question related to Decision Theory
Question
8 answers
One of the tenets of multiattribute value theory is that each attribute (criterion) must be preferentially independent of the others. There are, however, specific cases where this assumption does not hold. In these cases, one can proceed by building a value function based on the set of attributes that are preference dependent. For instance, the visual quality of a forest depends on attributes such as the size of the trees, the density of the forest stand, the diversity of species, and the diversity of distinct heights. There are preference dependencies among these attributes. How can I assess a value function for the objective "maximize the visual quality of a forest" based on these attributes?
Relevant answer
Answer
When I come across cases like this, what I do is merge the two criteria into one and construct a scale on the meaningful impact levels. An example: if "Tall Trees" should get a high score when the forest is "Dense" but a low score when the forest is "Sparse", then the two criteria (Tree Size and Forest Density) can be merged into a single criterion with meaningful impact levels like: 1) Tall Trees - Dense Forest; 2) Tall Trees - Sparse Forest; 3) Small Trees - Dense Forest; and 4) Small Trees - Sparse Forest. This is the only solution I know for a lack of preferential independence.
  • asked a question related to Decision Theory
Question
26 answers
Alice & Bob enter a game where each have a necktie and they call an independent judge to decide who has the better looking necktie.
The judge takes the better necktie and awards it to the other player. Alice reasons that entering the game is advantageous: although there is a possible maximal loss of one necktie, the potential winning state is two neckties with one that is judged superior. However, the apparent paradox is that Bob can follow the same reasoning, therefore how can the game be simultaneously advantageous to both players?
How can we resolve this dilemma? What are the implications and applications?
[Historical note: I did not invent this question. It was first stated in 1930 by the Belgian mathematician Maurice Kraitchik.]
Relevant answer
Answer
Nice question and nice response Chris.
I'd like to attempt a different way to formulate the problem, if I may, and then attempt a clear illustration of how to resolve it. The interesting thing for me here is that the 'payoff' is actually negative, and so as framed I will attempt a purely economic solution.
Suppose Alice and Bob each have a necktie, and these cost x_a and x_b. It is more likely that the more expensive one will be judged more 'beautiful', and we assign a probability distribution function to this such that the probability that Alice's necktie will be preferred is f_a(x_a - x_b), where f_a is some monotonic function from 0 to 1 with f_a(0) = 0.5, f_a(-\infinity) = 0 and f_a(\infinity) = 1.
The expected payoff for Alice is now
$ = -f x_a + (1 - f) x_b, where f = f_a(x_a - x_b).
Alice's strategy as regards fitting out her necktie then follows from taking derivatives, i.e.
d$/dx_a = -f - (x_a + x_b) df/dx_a
As f, df/dx_a, x_a and x_b are all greater than 0 (unless the necktie was a liability!), the expectation value of the payoff is strictly negative, and becomes more negative with increasing x_a. Hence the only rational strategy for Alice is to minimise the cost of her necktie, which of course increases the probability of losing.
I think that should be right. Note that it is quite an interesting function of the payoff here, and I think that modifying f could give a more interesting more general solution.
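The symmetric resolution can also be checked numerically: under any common prior on necktie values, each player's expected gain from the exchange is zero, so the game cannot be advantageous to both. A quick Monte Carlo sketch (a uniform prior is assumed here purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
a = rng.uniform(0, 1, n)   # value of Alice's necktie
b = rng.uniform(0, 1, n)   # value of Bob's necktie

# The better necktie goes to the OTHER player:
# Alice gains +b when b > a (she receives Bob's better necktie),
# and loses -a when a > b (her own better necktie is taken).
alice_gain = np.where(b > a, b, -a)
print(alice_gain.mean())   # close to 0 by symmetry
```

The fallacy in the verbal argument is that "I might win the better necktie" and "I might lose mine" are not equally likely conditional on the value of the necktie one actually holds.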
  • asked a question related to Decision Theory
Question
1 answer
As described in the papers: "Action Recognition And Prediction For Driver Assistance Systems Using Dynamic Belief Networks" and "Enrichment of Qualitative Beliefs for Reasoning under Uncertainty"
Relevant answer
Answer
DBNs are a way to use factorisation in complex Markov chain models.
See :
Murphy, Kevin (2002). Dynamic Bayesian Networks: Representation, Inference and Learning. UC Berkeley, Computer Science Division.
  • asked a question related to Decision Theory
Question
6 answers
Laboratory economic experiments
Relevant answer
Answer
Overconfidence is actually tricky to measure, ex-ante, and how to measure it will depend on which definition of overconfidence you want to use. E.g., if you are interested in overconfidence defined as believing your information is more precise than it is, then you could simply elicit each participant's belief distribution for an objectively-known random event (e.g., state lottery outcome) and compare the elicited distribution to the actual distribution using some measure of dispersion (e.g., variance) for statistical testing. If you are interested in a more intuitive measure of overconfidence, then you could match pairs of participants to compete on a task where performance is orthogonal to ability so that, objectively there is a 50/50 chance (or, whatever probability you want to design) of winning. Then you could elicit participant's beliefs about the chances of winning. Labeling individuals reporting values sufficiently higher than 50/50 as overconfident would be warranted here, I think, where the definition of "sufficiently" is up to the experimenter.
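The first (overprecision) measure described above can be illustrated with a small sketch: compare the dispersion of a participant's elicited belief distribution with that of the objectively known distribution. All numbers below are hypothetical:

```python
import numpy as np

# Objective distribution for a random draw from {0, ..., 9}: uniform, p = 0.1.
objective = np.full(10, 0.1)

# Hypothetical elicited subjective distribution for the same event.
elicited = np.array([0.01, 0.02, 0.05, 0.12, 0.30,
                     0.30, 0.12, 0.05, 0.02, 0.01])

support = np.arange(10)

def variance(p):
    """Variance of a discrete distribution on the support 0..9."""
    mean = (support * p).sum()
    return ((support - mean) ** 2 * p).sum()

# An elicited distribution narrower than the objective one signals
# overconfidence in the sense of overprecision.
print(variance(elicited) < variance(objective))
```

In a real design one would elicit the distribution per participant (e.g. via probability bins) and test the variance gap statistically across the sample.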