Science topic

# Decision Theory - Science topic

A theoretical technique utilizing a group of related constructs to describe or prescribe how individuals or groups of people choose a course of action when faced with several alternatives and a variable amount of knowledge about the determinants of the outcomes of those alternatives.

Questions related to Decision Theory

For example, suppose I have Y ~ N(mean, variance), and an expected loss for each parameter: E[L(mean, d*)] for the mean and E[L(variance, d*)] for the variance. I want a measure that integrates both expected losses.
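A minimal sketch of one common convention (my own assumption, not a method stated in the question): combine the two expected losses into a single weighted measure, with a trade-off weight `w` expressing the relative importance of errors in the mean versus the variance. All numbers and names below are illustrative.

```python
import numpy as np

# Hedged sketch: a weighted combination of two (sample-estimated) losses.
# d_mean, d_var and w are illustrative assumptions, not values from the question.
rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.5, size=10_000)   # Y ~ N(mean, variance)

d_mean, d_var = 1.8, 2.0                 # candidate decisions for mean and variance
loss_mean = (y.mean() - d_mean) ** 2     # squared-error loss for the mean
loss_var = (y.var() - d_var) ** 2        # squared-error loss for the variance

w = 0.5                                  # assumed trade-off weight in [0, 1]
combined = w * loss_mean + (1 - w) * loss_var
print(combined)
```

Other integrations (e.g. the maximum of the two losses, or a joint loss on the pair) are equally defensible; the weighted sum is simply the most common starting point.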

Such a system can be an analytical platform connected to the **Big Data** database system, where information from the Internet is collected, processed and analyzed, including comments from Internet users entered into **social media** portals. On the basis of this data, analytics reports are created in the **Business Intelligence** system describing changes in interest and consumer preferences for specific products and services, as well as changes in the assessment of the brand of a company that offers a specific **product or service** to the market. These reports can be very helpful in the business management process; for example, they can **support decision-making** in the field of production planning, the distribution process, and the organization of sales conducted via the **Internet**, in the form of **e-commerce**. Do you agree with me on the above matter?

In the context of the above issues, the following question is valid:

**How to build a decision support system in the field of selling on the Internet (online store, e-commerce)?**

Please reply.

I invite you to the discussion

Thank you very much

*The issues of the use of information contained in Big Data database systems for the purposes of conducting Business Intelligence analyses are described in the publications:*

I invite you to discussion and cooperation.

Best wishes

For this paper: A linguistic distribution behavioral multi-criteria group decision making model integrating extended generalized TODIM and quantum decision theory

How can I access the MATLAB code of this article?

How can I replace the article's data with my own?

It seems that there is, more or less, some sort of consensus on academic standards. Who is responsible for drawing up the guidelines that shape the way academia functions? Who do you think sets the standards for research publishing in influential journals?

Recommendations by ordinary researchers? Decisions by elite researchers? Do policy makers have a say in this? What connects these academic decision makers, whether individuals or institutions, and governs them?

I would appreciate your views. Thanks!

Hello,

I am studying how WOM can affect the purchase decision of the customer in a specific industry. The purchase decision theory already exists, and I am studying whether WOM can affect the purchase decision; I have also created a hypothesis. For data collection I am using only a questionnaire; my research is quantitative.

So, should the approach of the research be deductive or abductive? Should I remove the hypothesis to avoid any confusion? In addition, which research philosophy would be more suitable?

What enhancements have MCDM methods brought to air pollution problems? Can we adapt decision theory to support air pollution problems?

Greetings, researchers. Given that there are several policy models in science, technology and innovation, I need literature that measures the impact of these policies on the decision perspective of the researcher, since influences from the media, politicians, society and academia define what and how to research, which ultimately defines the technological trajectory.

Is Shannon entropy a good technique for weighting in multi-criteria decision-making?

As you know, we use Shannon entropy for weighting criteria in multi-criteria decision-making.

I think it is not a good technique for weighting in the real world because:

It uses only the decision matrix data.

If we add some new alternatives, the weights change.

If we change the period of time, the weights change.

For example, we have 3 criteria: price, speed, safety.

Over several periods of time, the weights of the criteria vary.

For example, if our period of time is one month:

This month price may get 0.7 (speed = 0.2, safety = 0.1).

Next month price may get 0.1 (speed = 0.6, safety = 0.3).

This is against reality! What is your opinion?
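The complaint above can be checked directly. Here is a minimal sketch of the standard entropy-weighting computation (illustrative numbers, not real data), showing that adding a single new alternative changes the criteria weights:

```python
import numpy as np

def entropy_weights(X):
    """Shannon-entropy criteria weights from a decision matrix X (m alternatives x n criteria)."""
    m, _ = X.shape
    P = X / X.sum(axis=0)              # column-wise normalization
    k = 1.0 / np.log(m)
    with np.errstate(divide="ignore", invalid="ignore"):
        E = -k * np.where(P > 0, P * np.log(P), 0.0).sum(axis=0)  # entropy per criterion
    d = 1.0 - E                        # degree of divergence
    return d / d.sum()                 # normalized weights

# 3 criteria: price, speed, safety (illustrative numbers)
X = np.array([[200., 80., 0.9],
              [250., 60., 0.7],
              [300., 90., 0.8]])
w1 = entropy_weights(X)

# adding one new alternative changes the weights, as the question complains
X2 = np.vstack([X, [150., 40., 0.95]])
w2 = entropy_weights(X2)
print(w1, w2)
```

Both weight vectors sum to 1, but they differ, which illustrates the dependence on the alternative set that the question criticizes.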

Can **numbers** (the **Look then Leap Rule or the Gittins Index**) be used to help a person decide when to **stop looking** for the most suitable career path and leap into it instead, or is the career situation **too complicated** for that?

Details:

Mathematical answers to the question of optimal stopping in general (when should you stop looking and leap)?

**Gittins Index, Feynman's restaurant problem** (not discussed in detail)

**Look then Leap Rule (secretary problem, fiancé problem):** (√n, n/e, 37%)

How do we apply this rule to career choice?

1- Potential ways of application:

A- n is **Time**, like what Michael Trick did: https://goo.gl/9hSJT1 . Michael Trick is a CMU Operations Research professor who applied this to deciding the best time for his marriage proposal, though he seems to think that this was a failed approach.

In our case, should we do it by age: 20 to 70 gives 50 years, so, for example, age 38 is where you stop looking? Or should we multiply 37% by 80,000 hours to get a total of 29,600 hours of career "looking"?

B- n is the number of **available options**, like in the secretary problem. If we have 100 viable job options, do we just look into the first 37? If we have 10, do we just look into the first 4? What if we are still at a stage of our lives where we have thousands of career paths?

2- Why the situation is more complicated in the case of career choice:

A- You can want a career and pursue it and then fail at it.

B- You can mix career paths. If you take option C, it can help you later on with option G. For example, if I worked as an Internet Research Specialist, that would help me later on if I decide to become a writer, so there is overlap between the options and a more dynamic relationship. Also, the option you choose in selection #1 will influence the likelihood of choosing other options in selection #2 (for example, if in 2018 I choose to work at an NGO, that will influence my options if I want to make a career transition in 2023, since it will limit my possibility of entering the corporate world in 2023).

C- You need to be making money so "looking" that does not generate money is seriously costly.

D- The choice is neither strictly sequential nor strictly simultaneous.

E- Looking and leaping alternate over a lifetime, unlike the example where you keep looking and then leap once.

Is there a practical way to measure how the probability of switching back and forth between our career options affects the optimal exploration percentage?

F- There is something between looking and leaping, which is testing the waters. Let me explain. "Looking" here doesn't just mean "thinking" or "self-reflection" without action. It could also mean trying out a field to see if you're suited for it. So we can divide looking into "experimentation looking" and "thinking looking". And what separates looking from leaping is commitment and being settled. There's a trial period.

How does this affect our job/career options example since we can theoretically "look" at all 100 viable job positions without having to formally reject the position? Or does this rule apply to scenarios where looking entails commitment?

G-

***You can return to a career that you rejected in the past. Once you leap, you can look again.*** "*But if you have the option to go back, say by apologizing to the first applicant and begging them to come work with you, and you have a 50% chance of your apology being accepted, then the optimal explore percentage rises all the way to 61%.*" https://80000hours.org/podcast/episodes/brian-christian-algorithms-to-live-by/

3- A Real-life Example:

Here are some of my major potential career paths:

1- Behavioural Change Communications Company, 2- Soft-Skills Training Company, 3- Consulting Company, 4- Blogger, 5- Internet Research Specialist, 6- Academic, 7- Writer (Malcolm Gladwell style; popularization of psychology), 8- NGOs

As you can see, the options here overlap to a great degree. So with these options, should I just say "OK, the square root of 8 is about 3", pick 3 of those, try them for a year each, and then stick with whatever comes next and is better?!

I'm a teacher and struggling a lot to complete my MS: I need to write an MS-level research thesis. I can work in Decision Making (research related to preference relations), Artificial Intelligence, Semigroups or Γ-semigroups, Computing, Soft Computing, Soft Sets, a MATLAB-related project, etc. Kindly help me; I would be very grateful. Thanks.

I am looking for a book or another reference that has different examples of decision theory in building construction; for example, examples that show different actions and their outcomes with regard to an uncertain problem (e.g. the existence of hazardous materials in an existing building).

I would highly appreciate it if anyone can help me.

Thanks,

Mansour

**Is the canonical unit 2-simplex (the standard probability simplex)**, the convex hull of the equilateral triangle in three-dimensional Euclidean space whose vertices are (1,0,0), (0,1,0) and (0,0,1) in Euclidean coordinates, closed under all and only convex combinations of probability vectors?

That is, is it exactly the set of all triples/vectors of three real numbers that are non-negative and sum to 1?

Do any unit probability vectors (sets of three non-negative numbers at each point, if conceived as a probability vector space) go missing? For example, **<p1 = 0.3, p2 = 0.2, p3 = 0.5>** may not be an element of the domain if the probability simplex in barycentric/probability coordinates, **as a function of p1, p2, p3 (where y denotes p2 and z denotes p3), is not constructed appropriately?**

The pi entries of each vector <p1, p2, p3> satisfy p1 + p2 + p3 = 1 with pi >= 0. In the x, y, z coordinates, the plane x = m = 1/3, for example, denotes the set of probability vectors whose first entry is 1/3, i.e. <p1 = 1/3, p2, p3> with p2 + p3 = 2/3 and p1, p2, p3 >= 0.

**Does using absolute barycentric coordinates rule out this possibility of a vector going missing?** Here <p1 = 0.3, p2 = 0.2, p3 = 0.5> is the vector located at (p1, p2, p3) in absolute barycentric coordinates.

Given that it is a convex hull (the smallest convex set containing the vertices, so that any proper subset is not closed under all convex combinations of the vertices), **I presume that this means that all and only triples of non-negative pi that sum to 1 are included, and that no vectors with negative entries** ever appear, and nothing goes missing from the domain, when it is traditionally described in three coordinates as the **convex hull of the three standard unit vectors** (1,0,0), (0,1,0) and (0,0,1). Or can this only be guaranteed by representing it in this fashion?

This refers to the measurement of subjective utility, or its neuroeconomic brother, subjective value; ideally in isolated laboratory settings, i.e. with no situational factors involved.

I would expect different heuristics or biases observed in such DM tasks due to the more abstract nature of public goods (PG), the issue of value appropriation, and perhaps stronger influence of emotions (or other factors).

I'm looking for reactions to the nature of the choice object that are much more pronounced for, or unique to, PG.

I think scope insensitivity is one such thing but there should be more.

However, I'm having a hard time locating good articles for this question. Do you know any?

How is decision theory used to solve complex business problems?

Is there any discrete analogue of **'star convexity at 0'**? **That is, in the same way that midpoint convexity can be seen to be a discrete analogue of convexity (and entails it under certain regularity assumptions, continuity etc.)?**

For example, where **F : [0,1] → [0,1]**, **star convexity at 0** is:

**for all x in dom(F) = [0,1] and all t in [0,1]: F(t·x0 + (1-t)·x) <= t·F(x0) + (1-t)·F(x), where x0 = 0.**

**Generally, with F(0) = 0, it becomes:**

**for all x in dom(F) = [0,1] and all t in [0,1]: F(tx) <= t·F(x).**

**(It is convexity restricted so that the first argument is some specific minimum value, usually x0 = 0.)**

**The discrete analogue would be 'mid-star convexity at x0 = 0' for F : [0,1] → [0,1] with F(0) = 0:**

**for all x in dom(F) = [0,1]: F(x/2) <= F(x)/2,**

**which under certain regularity assumptions should ensure the star convexity of the function, or at least that for all x in dom(F) = [0,1], F(x) <= x, which, given F(1) = 1, is generally entailed by star convexity.**

**(I call this mid-star convexity with F(0) = 0 and x0 = 0 in dom(F), for F : [0,1] → [0,1].)**

**Assume F is a strictly monotone increasing bijection F : [0,1] → [0,1]:**

**- F is absolutely continuous;**

**- F is 1, 2 or 3 times continuously differentiable;**

**- F(1) = 1, F(0) = 0 and F(1/2) = 1/2.**

**I presume not, as not all star-convex functions are continuous to begin with?**
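As a small numeric aside (my own illustration, not part of the question): the two inequalities above can be checked mechanically for a sample map. F(x) = x² satisfies F(0) = 0 and F(1) = 1 (though not F(1/2) = 1/2) and is star convex at 0, since F(tx) = t²x² <= t·x² = t·F(x) for t in [0,1]:

```python
import numpy as np

# Numerical check of star convexity at x0 = 0 and its midpoint ("discrete")
# version for the sample map F(x) = x**2 on [0, 1].
F = lambda x: x ** 2

xs = np.linspace(0.0, 1.0, 101)
ts = np.linspace(0.0, 1.0, 101)
T, X = np.meshgrid(ts, xs)

# star convexity at 0: F(t*x) <= t*F(x) on a grid of (t, x) pairs
star_convex = np.all(F(T * X) <= T * F(X) + 1e-12)

# mid-star convexity at 0: F(x/2) <= F(x)/2
mid_star = np.all(F(xs / 2) <= F(xs) / 2 + 1e-12)
print(star_convex, mid_star)
```

A grid check like this of course proves nothing about continuity or the general entailment asked about; it only confirms the definitions behave as stated on an example.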

State-dependent additivity versus state-independent additivity?

This is akin to the distinction between **Cauchy additivity** and local **Kolmogorov additivity/normalization of subjective credence/utility** in a simplex representation of subjective probability (or utility) ranked by objective probability. That is, in the 2-or-higher unit simplex (at least three atomic outcomes on each unit probability vector, a finitely additive space), where every event is ranked globally, within vectors and between distinct vectors, by <, > and especially **'='**. **I presume that one is mere representability and the other uniqueness.**

It is the distinction between the trivial properties:

**(1) for x, y, z mutually exclusive and exhaustive: F(x) + F(y) + F(z) = 1;**

**(2) F(x ∪ y) = F(x) + F(y), i.e. F(A ∨ B) = F(A) + F(B) for A, B disjoint on the same vector;**

**(3) F(A) + F(A^c) = 1, A and A^c disjoint and mutually exclusive on the same unit vector;**

and properties more like the following (A, B, C, D), which are uniqueness properties, for all events x, y in the simplex:

**(A) F(x + y) = F(x) + F(y): Cauchy additivity, applied arbitrarily in the simplex of interest (i.e. whether the events are on the same vector, or probability state, or not). This needs no explaining.**

**(B) x + y = z + m implies F(x) + F(y) = F(z) + F(m): any arbitrary two or more events with the same objective sum must have the same credence sum, same vector or not, disjoint or not (almost Jensen's equality).**

**(C) F(1 - x - y) + F(x) + F(y) = 1: any arbitrary three events in the simplex, same vector or not, must sum to one in credence if they sum to one in objective chance.**

**(D) F(1 - x) + F(x) = 1: any arbitrary two events whose chances sum to one must sum to 1 in credence, same probability space/state/vector or not.**

(D) is a global symmetry (distinct from complement additivity): it applies the equalities in the rank to non-disjoint events on distinct vectors. 'Rank equalities plus complement additivity' give rise to this in a two-outcome system.

It seems to be entailed by a global modal cross-world rank, so long as there are at least three outcomes, without use of mixtures, unions or trade-offs, iff one's domain is the entire simplex: that is, adding up the function values of sums of events on distinct vectors to the value of some other event on some (arguably) non-commuting probability vector, F(x + y) = F(x) + F(y).

The context is certain probabilistic and/or utility uniqueness theorems, where one takes one objective probability function and tries to show that any other probability function, given one's constraints, must be the same function.

In order to get a homogeneous population, I inspect two conditions and filter the entire population (all possible members) according to these two conditions, then use all the remaining filtered members in the research. Is it still a population, or is it a sample? What is it called?

If I work on a mathematical equation by adding another part to it, then find the solution and apply it to a real-world case, can I generalize the result to other real-world cases?

What is the name for the identities **(2) and (3)** below in the functional analysis literature (F⁻¹ is the inverse function)?

**(1) F : [0,1] → [0,1]** and F **is strictly monotonically increasing**, with **F(0) = 0**, F(1/2) = 1/2 and F(1) = 1;

**(2) for all x in dom(F): F(1 - x) + F(x) = 1;**

**(3) for all p in codom(F): F⁻¹(1 - p) + F⁻¹(p) = 1.**

These are the equality biconditionals, the equality cases of (4) below; they are biconditional because they apply to the inverse function as well, so they can be expressed as:

**for all x, y in dom(F) = [0,1]: [x + y = 1] iff [F(x) + F(y) = 1];**

**for all p, p1 in Im(F) ⊆ [0,1]: [p + p1 = 1] iff [F⁻¹(p1) + F⁻¹(p) = 1]**, where F⁻¹ is the inverse function, so F⁻¹(p) and F⁻¹(1 - p) are elements of dom(F) = [0,1];

x + y = 1 iff F(x) + F(y) = 1.

See the attached paper 'Order indifference and rank dependent probabilities', around page 392: this is the biconditional form of what Segal calls a symmetric probability transformation function.

I presume that if, in addition, F satisfies (4) for all x in [0,1] = dom(F): F(x/2) = F(x)/2, then such a function will be the identity function, as F(x) = x for all dyadic rationals (and some rationals), F is strictly monotone increasing, and F agrees with the identity over a dense set.

Note that, given midpoint convexity at 1 and 0, I presume that if in addition

**@1: for all x in [0,1] = dom(F): F(x/2) <= F(x)/2**

**@0: for all x in [0,1]: F(1/2 + x/2) <= 1/2 + F(x)/2**

then these inequalities collapse into equalities,

F(x/2) = F(x)/2

F(1/2 + x/2) = 1/2 + F(x)/2,

**given the symmetry equation (2) F(1 - x) + F(x) = 1** and **(1) F : [0,1] → [0,1] with F(0) = 0** (which gives F(1/2) = 1/2, and F(1) = 1 follows). It then follows that F(x) = x for all dyadic rationals in [0,1] (and some rationals), with F(1) = 1, F(0) = 0 and F strictly monotonically increasing as above, and F becomes odd at all dyadic points in [0,1].

**I am not sure whether (3) is** required, but it should be implied by injectivity and (2), given that F is strictly monotone increasing. In any case I presume F would collapse into F(x) = x.

What is the general form of a function that merely satisfies F(1) = 1, F(0) = 0, F(1/2) = 1/2, is strictly monotone increasing and continuous, and satisfies the inequalities

@1: for all x in [0,1] = dom(F): F(x/2) <= F(x)/2

@0: for all x in [0,1] = dom(F): F(1/2 + x/2) <= 1/2 + F(x)/2?

The function also satisfies:

(4) for all x, y in dom(F): x + y > 1 iff F(x) + F(y) > 1;

for all x, y in dom(F): x + y < 1 iff F(x) + F(y) < 1;

for all p, p2 in codom(F): p + p2 > 1 iff F⁻¹(p) + F⁻¹(p2) > 1;

for all p, p2 in codom(F): p + p2 < 1 iff F⁻¹(p) + F⁻¹(p2) < 1.
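The presumption that the symmetry and halving equations pin F down on the dyadic rationals can be checked by direct computation. This sketch (my own, under the stated assumptions F(1) = 1, F(x/2) = F(x)/2 and F(1 - x) = 1 - F(x)) recursively evaluates F on dyadics with exact rational arithmetic and confirms F(x) = x there:

```python
from fractions import Fraction

# Sketch: the two functional equations F(x/2) = F(x)/2 and F(1-x) = 1 - F(x),
# together with F(1) = 1, determine F on every dyadic rational in [0,1].
# We compute F on dyadics by recursion and check that it is the identity there.
def F(x: Fraction) -> Fraction:
    if x == 1:
        return Fraction(1)
    if x == 0:
        return Fraction(0)
    if x <= Fraction(1, 2):
        return F(2 * x) / 2       # from F(x/2) = F(x)/2
    return 1 - F(1 - x)           # from F(1-x) = 1 - F(x)

dyadics = [Fraction(k, 64) for k in range(65)]
assert all(F(x) == x for x in dyadics)
print("F(x) = x on all dyadics k/64")
```

The recursion terminates because the doubling step strictly reduces the power of 2 in the denominator, and the complement step is always followed by a doubling step. This only establishes the identity on a dense set; extending it to all of [0,1] still needs the monotonicity/continuity assumptions in the question.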

Is there a distinction between two kinds of strong (complete) qualitative probability orders, both of which are considered strong representations or total probability relations, neither of which involves incomparable events, both of which use the stronger form of Scott's axiom (not cases of weak, partial or intermediate agreement), and both of whose representations are considered 'strong':

type (1): x >= y iff P(x) >= P(y)

versus type

(2): x = y iff P(x) = P(y);

y > x iff P(y) > P(x); and

y < x iff P(y) < P(x)?

The last link speaks to my worry about some total orders that only use totality (A <= B or B <= A) without a trichotomy: https://antimeta.wordpress.com/category/probability/page/3/

where they refer to 'f ≥ g, but we don't know whether f > g or f ≈ g':

'However, as it stands, this dominance principle leaves some preference relations among actions underspecified. That is, if f and g are actions such that f strictly dominates g in some states, but they have the same (or equipreferable) outcomes in the others, then we know that f ≥ g, but we don't know whether f > g or f ≈ g. So the axioms for a partial ordering on the outcomes, together with the dominance principle, don't suffice to uniquely specify an induced partial ordering on the actions.'

Both use a total order with:

**totality: A <= B or B >= A;**

**the definition of equality via anti-symmetry: A = B iff A <= B and B >= A;**

**A <= B iff [A < B or A = B] iff not A > B;**

**A >= B iff [A > B or A = B] iff not A < B;**

**where A > B is equivalent to B < A, and A >= B is equivalent to B <= A iff not (A < B);**

**where = is an equivalence relation (symmetric, transitive and reflexive);**

**<= and >= are reflexive, transitive, negatively transitive, complementary and total;**

**whilst < and > are irreflexive, asymmetric and transitive:**

**A < B, B < C implies A < C;**

**A < B, B = C implies A < C;**

**A = B, B < C implies A < C;**

**negatively transitive and complementary:**

**A > B iff ~A < ~B;**

**and <, =, > are mutually exclusive.**

Here equality is an equivalence class denoting not identity or incomparability but equality in rank (in probability). The first kind uses negatively transitive, weakly connected strict weak orders for <, =, >:

**weak connectedness: not (A = B) implies A < B or A > B;**

whilst the second kind uses trichotomous, strongly connected strict total orders for <, =, >:

**(2) trichotomy: A < B or A = B or A > B, where the relations are mutually exclusive and exhaustive;**

**(3) strong connectedness: not (A = B) iff A < B or A > B;**

**both satisfying the axioms A >= ∅, Ω > ∅, Ω >= A,**

**Scott's conditions, the separability and Archimedean axioms, and monotone continuity if required.**

In the first kind the relation <= / >= is primitive, which makes me suspicious, whilst in the second <, =, > are primitive.

Please see the attached document. The question is whether trichotomy fails in the first type, which appears a bit fuzzier, even though totality (A >= B or B >= A) holds in both cases.

What is unclear is whether there is any canonical meaning to 'weak orders' (as opposed to total pre-orders or strict weak orders). In the context of qualitative probability this is sometimes seen as synonymous with a complete or total order, as opposed to a partial order, which allows for incomparables. Generally it is a partial order that allows for comparable equalities between non-identical events, usually put in the same equivalence class (i.e. A is as probable as B when A = B, as opposed to their being one and the same event, or 'who knows' for incomparability). Fishburn hints at a second distinction where A may not be as likely as B, and where

not A > B and not A < B, and yet not A = B, is possible, while A >= B or A <= B must still hold.

This appears to say that one can quasi-compare the events (one can say that A is less than or equal in probability to B, or more than or equal, but not which of the two relations, A < B or A = B, it specifically stands in), and yet one cannot say that A > B or A < B.

Both kinds satisfy the definitions:

A <= B iff A < B or A = B, iff B >= A, iff ~A >= ~B, where this is mutually exclusive with A < B (equivalently ~B > ~A);

A >= B iff A > B or A = B;

and both (1) and (2) use a total ordering over >= / <=:

(1) totality: A <= B or B <= A;

(2) equality in rank via the anti-symmetry biconditional: A = B iff A <= B and B >= A, where = is an equivalence relation (symmetric, transitive and reflexive), with A <= B iff A < B or A = B and A >= B iff A > B or A = B;

(3) the criterion that >, <, >=, <= are complementary (A > B iff ~A < ~B), transitive and negatively transitive, where <, =, > are mutually exclusive.

The difference between the two seems to be whether A >= B and A <= B together are equivalent to A = B; or whether, in the first kind, the representation counts as strongly representing the structure even if A >= B comes out as A > B, because one could not specify whether A > B or A = B, yet one could compare the events in the sense that, under <=, one can say that A is either less than or equal in probability, or more than or equal, but not precisely which of the two it is; or whether it is some weakening of anti-symmetry.

The less ambiguous trichotomous orders use: not (A = B) iff A < B or A > B. Generally trichotomy is not considered when it comes to satisfying Scott's axiom in its strongest sense, for strict agreement, and I am wondering whether the trichotomous forms, which appear to be required for real-valued or strictly increasing probability functions, are slightly stronger when it comes to dense orders, but require a stronger form of Scott's axiom that involves < and > and not just <=.

In (1) the <= / >= relation is primitive, and neither trichotomy nor strong connectedness is explicit, whilst in (2) (A ≠ B iff A > B or A < B) the relations >, =, < are primitive, and both

(1) totality: A <= B or B <= A, and

(2) trichotomy: A < B or A = B or A > B, with the relations mutually exclusive and exhaustive,

are made explicit; trichotomy holds, and the orders are modelled as strict total trichotomous orders, as opposed to a weakly connected strict weak order with an associated total pre-order (or what may be a total order).

I get the impression that the first kind, as described by Fishburn (1970), considers a weird relation that does not involve incomparables and is considered total, but where, from A >= B and B <= A, one cannot say that A is as likely as B; it is fuzzy in the sense that one can say that B is either less than or equal in probability to A, or conversely, but if B <= A one cannot, or need not, say whether A = B or B < A. The second kind is strongly connected: not (A = B) iff A < B or A > B.

In both cases A = B iff A <= B and B >= A, where <= is transitive, negatively transitive, complementary, total and reflexive, and A >= B or B <= A; both are considered complete.

How can I construct a multi-attribute utility function for attributes for which I cannot prove utility independence?

Thanks.

In the case of the decision-making process for energy retrofit actions, when choosing the best retrofit intervention, which one should be kept under consideration: MODM or MADM?

In neutrosophic sets, all three measures (truth, falsehood, indeterminacy) are independent; how does one affect another in decision making? For example, in the case of intuitionistic fuzzy sets, if the membership of an element increases, then certainly the sum of the other two measures (non-membership and hesitation) will decrease.

In many instances it has been said that cutting a dendrogram at a certain level gives a set of clusters, and cutting at another level gives another set of clusters. How would you pick where to cut the dendrogram?

Is there something we could consider an optimal point? I have also wondered about this problem, but (unfortunately) haven't found any convincing answers yet.

So is it correct to say: "there is no definitive answer, since cluster analysis is essentially an exploratory approach; the interpretation of the resulting hierarchical structure is context-dependent, and often several solutions are equally good from a theoretical point of view"? Please help me.
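For what it's worth, one common heuristic (an assumption on my part, not a definitive answer to the question) is to cut at the largest gap between successive merge heights in the linkage matrix. A minimal scipy sketch on synthetic data:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Three well-separated synthetic 2-D clusters of 20 points each.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)),
               rng.normal(3, 0.3, (20, 2)),
               rng.normal((0, 4), 0.3, (20, 2))])

Z = linkage(X, method="ward")
heights = Z[:, 2]                     # monotone merge heights for ward linkage
gaps = np.diff(heights)
i = gaps.argmax()                     # biggest jump between successive merges
cut = (heights[i] + heights[i + 1]) / 2
labels = fcluster(Z, t=cut, criterion="distance")
print(len(set(labels)))               # number of clusters implied by the gap
```

This supports, rather than contradicts, the "no definitive answer" view: the largest-gap rule is only one heuristic, and on less separated data it can disagree with silhouette-based or domain-driven choices.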

For both Bayesian and frequentist expected loss, is the parameter an index of the data on which decisions are to be made, or a state of nature?

Are there examples where a loss function is mapped using a vector of real observations to show what the parameter looks like?

I am putting together a decision matrix: a 3D cube based on three factors. The three observable factors are measured, and the coordinate opens a cell that gives the decision. A trivial example: suppose you had a pet and you worked out that there are three main indicators of what it wanted: wagging of the tail, excitement, and what it brings to you. You want to leave a simple model so that when you are away, anyone looking after the pet can observe these three and understand what the pet wants. E.g. moderate tail wagging, high excitement and bringing you a leash means it wants to go for a walk. My question is: has any work been done on this type of model?
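The cube described above can be sketched as a plain lookup table keyed by the three factor coordinates; the levels, keys and decisions below are illustrative assumptions, not a published model:

```python
# A minimal sketch of a 3-D decision cube: each observable factor is
# discretized into levels, and the (factor1, factor2, factor3) coordinate
# indexes a cell holding the decision. All names here are illustrative.
decision_cube = {
    # (tail_wagging, excitement, brings_you): decision
    ("moderate", "high", "leash"): "wants a walk",
    ("high", "moderate", "bowl"): "wants food",
    ("low", "low", "nothing"): "wants to rest",
}

def decide(tail, excitement, brings):
    # coordinates with no filled-in cell fall back to a default decision
    return decision_cube.get((tail, excitement, brings), "observe further")

print(decide("moderate", "high", "leash"))
print(decide("high", "high", "ball"))
```

With discretized factors this is exactly a lookup in a sparse 3-D table; the design question is how to fill the remaining cells, which is where the decision-theoretic work lies.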

Suppose I want to give my robot a specific goal state to achieve, but the robot must also maximize some reward function while pursuing the goal. What if the shortest sequence of actions to achieve the goal is also the most costly according to the reward function? Or what if the most rewarding sequence of actions is the longest possible? Is there some work which tackles this problem?

Many authors deal with individual differences, but there is inconsistency about which constructs count as individual differences. This is especially so in decision-making style research, where some researchers study individual differences in decision-making style, while others include decision-making style as an individual difference.

I'm trying to run the IIA test for an unlabeled choice experiment in Stata. However, it throws an error: "Hausman IIA / Small-Hsiao IIA test requires at least 3 dependent categories".

My dependent variable is <<choice>>, which takes the value of 1 if one of the alternatives A, B or C is chosen and 0 if not.

What can I do?

Can anyone help with accessible literature or article suggestions on counterpower and sociocracy in English (preferably in the radical pedagogy framework)? So far I've come to Peter Wohlleben (The Secret Lives of Trees), but since I can't read German, it's not very helpful. On the counterpower topic I have Tim Gee (just as a source; I would love to get his book, if anyone has it as a PDF).

Thank you all soooo much in advance,

Maruša :)

P.S.: How does this requesting work? I've requested two articles already, but no one seems to respond. :-/

When there is a large number of attributes (say 20+), ensuring that they are either independent of each other or only slightly dependent is very difficult. However, dependency among attributes is supposed to be avoided when building an MCDA hierarchy. So, before generating the final ranking, does PROMETHEE automatically take care of this issue through its positive, negative and net outranking flows?

I am doing an AHP but have a regular questionnaire, because I have multiple alternatives (up to 40). Using an AHP questionnaire would be cumbersome and perhaps too complicated for the respondents; it would also be time-consuming and might cause consistency problems. So I'm wondering if there is any justification for using regular scales in this circumstance.

I try to find the link between normative decision theory (NDT) and decision support system (DSS) domain. I'm confused. Both domains use a magical word "decision".

However, NDT is mostly concentrated on analyzing decisions with respect to their outcomes or consequences, minimizing a loss function or maximizing outcomes.

A DSS, on the other side, often seems to be limited to classification: predicting the decision on the basis of historical data, where the loss function is constructed over good/bad predictions. Many times the training data for a DSS are sets labeled by domain experts.

Is it true that DSS use term "decision" in a sense of "inference" or "judgment" in namespace of NDT?

I begin with a general question. Is normative decision theory in its primary form applied to real problems? I can hardly find examples of real payoff matrices among the toy examples.

Back to main question. I would like to represent in a form of payoff matrix such a problem: incident commander after arriving at the fire ground has such alternatives: 1. Gathering further information; 2. Evacuating of people; 3. Extinguishing the fire.

Candidates to states of nature: fire will extinguish itself; people will evacuate thyself.

How to construct the payoff matrix for this problem. Should be the states of nature composed as a combination of the candidates' values:

State 1: won't extinguish itself, won't evacuate thyself;

State 2: will extinguish itself, won't evacuate thyself;

State 3: won't extinguish itself, will evacuate thyself;

State 4: will extinguish itself, will evacuate thyself;

Let us assume that the candidates are not mutually exclusive and are independent.
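The construction described above can be sketched directly: the four states of nature are the Cartesian product of the two binary candidate events, and independence lets the state probabilities be products of the marginals. All probabilities and payoff numbers below are purely hypothetical placeholders:

```python
from itertools import product

# Hypothetical marginal probabilities for the two candidate events
p_ext = 0.1   # P(fire extinguishes itself)
p_evac = 0.6  # P(people evacuate themselves)

# States of nature = all combinations of the two candidate values,
# ordered (extinguishes, evacuates): (F,F), (F,T), (T,F), (T,T)
states = list(product([False, True], repeat=2))

# Independence gives each state probability as a product of marginals
def p_state(ext, evac):
    return (p_ext if ext else 1 - p_ext) * (p_evac if evac else 1 - p_evac)

probs = [p_state(e, v) for e, v in states]
assert abs(sum(probs) - 1.0) < 1e-12

alternatives = ["gather information", "evacuate people", "extinguish fire"]

# Hypothetical payoff matrix; rows = alternatives, columns = the 4 states
# in the order listed above (the numbers are illustrative only)
payoffs = [
    [-9, -2, -5, 1],   # gather information
    [-4,  2,  0, 3],   # evacuate people
    [-1,  4, -3, 2],   # extinguish fire
]

# Expected payoff of each alternative under the assumed probabilities
expected = [sum(p * u for p, u in zip(probs, row)) for row in payoffs]
best = alternatives[expected.index(max(expected))]
print(expected, best)
```

The combinatorial construction generalizes: with k binary candidate events there are 2^k states, which is one practical reason real payoff matrices get large quickly.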

What are the key aspects that differentiate normative and prescriptive models? Prescriptive models are something between normative and descriptive models; however, they have strong roots in normative theory. How can these two kinds of models be clearly distinguished?

I know two types of them:

**WSM** (weighted sum method) and the **desirability function** method. Are there other methods? I am thinking about on-site decision support for incident commanders.

In my opinion, currently the descriptive and naturalistic models are the ones exploited. Why not prescriptive or normative ones?

If we consider a human as the decision maker, the factor underlying the use of descriptive or naturalistic models is the inability to comprehend and process analytically all the information, courses of action, consequences, and costs of alternative activities under mental and time pressure.

If we consider a computer system as the decision maker, the factors are:

- lack of information: we cannot ask firefighters to enter data into a computer system because they do not have the time;
- a poor sensory layer for recognizing phenomena or victims: so far there are no sensors in buildings that make it possible to track fire dynamics, people's locations, and their physical state;
- huge uncertainty in modeling and predicting fire and people's behavior, the building's reaction to fire exposure, changes in ventilation, extinguishing effects, and many other factors.

What do you think about this problem?

I am looking for methods like ELECTRE IV or MAXIMIN, and for papers where the problem of criteria incomparability is considered.

I'm looking for theories or models which try to combine or to unify the theory of Alfred Schütz with common theories of explaining action via decision models like Rational Choice or Bounded Rationality models. The Frame Selection Theory of Hartmut Esser and Clemens Kroneberg is well known to me but I wonder whether there are similar but independent attempts.

I seek your advice on how to use this method. Thank you.

I was thinking about the different decision-making methods under certain and uncertain conditions. My specific question is:

As you know, we have many MCDM tools like AHP, ANP, TOPSIS, VIKOR, PROMETHEE, MOORA, SIR, and many other methods, and all of them have been extended to fuzzy, type-2 fuzzy, intuitionistic fuzzy, and grey environments. Which one is really more applicable under uncertain situations: fuzzy, type-2 fuzzy, intuitionistic fuzzy, or grey? I know each uncertainty logic has its applications, but sometimes a tiny difference in the collected data may cause different results with each method.

All ideas and comments are appreciated. I hope the experts will engage with this question by following it or leaving their valuable comments.

I need a quick way to get participants to think/act as if they have made their own choice, while actually having their choice correspond to their assigned condition. In other words, I am looking for a way to get them to "choose" their assigned condition.

I am considering offering multiple choices (out of 4) and telling them that their choice has to match a random selection in order for the task to begin, but I wonder if there is a better, more efficient way to do this.

In intertemporal choice paradigms, I would like to build a discounted utility function for each participant in my study based on a couple of intertemporal decisions performed by each participant. Is there any software that can easily perform such calculations?

Is MATLAB the most appropriate software for doing this?
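MATLAB works, but this kind of per-participant fit is also easy in a general-purpose language. A minimal sketch, assuming a hyperbolic discounting model V = A / (1 + kD) with a softmax choice rule and invented trial data; the trial values and the fixed temperature `beta` are placeholders, not from any real dataset:

```python
import numpy as np

# Hypothetical intertemporal-choice trials for one participant:
# (immediate amount, delayed amount, delay in days, chose_delayed)
trials = [
    (20, 50, 30, 1),
    (40, 50, 30, 0),
    (20, 50, 90, 0),
    (10, 50, 90, 1),
    (30, 50, 10, 1),
]

def neg_log_lik(k, beta=0.2):
    """Hyperbolic model V = A / (1 + k*D) with a softmax choice rule."""
    nll = 0.0
    for a_now, a_later, delay, chose_later in trials:
        v_later = a_later / (1.0 + k * delay)
        p_later = 1.0 / (1.0 + np.exp(-beta * (v_later - a_now)))
        p = p_later if chose_later else 1.0 - p_later
        nll -= np.log(max(p, 1e-12))
    return nll

# Simple grid search over the discount rate k (robust for one parameter)
ks = np.logspace(-4, 1, 500)
k_hat = min(ks, key=neg_log_lik)
print(f"estimated k = {k_hat:.4f}")
```

With only a couple of decisions per participant the likelihood is flat, so estimates should be treated as rough; `scipy.optimize` or dedicated toolboxes can replace the grid search when fitting k and the temperature jointly.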

I have a weighted supermatrix and I am trying to convert it into a limit matrix.

The weighted supermatrix can be transformed into the limit supermatrix by raising it to successive powers until the matrix converges.

How can this be performed?
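A minimal sketch of the power-iteration step, assuming the weighted supermatrix is column-stochastic (the small 3x3 matrix here is an invented example):

```python
import numpy as np

# Hypothetical column-stochastic weighted supermatrix (3 x 3)
W = np.array([[0.2, 0.5, 0.3],
              [0.5, 0.1, 0.4],
              [0.3, 0.4, 0.3]])

def limit_matrix(W, tol=1e-9, max_iter=10_000):
    """Raise W to successive powers until the entries stop changing."""
    M = W.copy()
    for _ in range(max_iter):
        M_next = M @ W
        if np.max(np.abs(M_next - M)) < tol:
            return M_next
        M = M_next
    raise RuntimeError("supermatrix did not converge (it may be cyclic)")

L = limit_matrix(W)
print(L[:, 0])  # each column of the limit matrix holds the global priorities
```

For a primitive (irreducible, aperiodic) stochastic supermatrix this converges to a matrix with identical columns. If the supermatrix is cyclic, plain powers oscillate and one averages successive powers (a Cesàro limit) instead, which is what ANP software such as Super Decisions does internally.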

The Analytic Hierarchy Process - AHP (Saaty 1980) is a multicriteria tool considered relevant to nearly any ecosystem management application in which the evaluation of multiple participants or complex decision-making processes is involved (Schmoldt & Peterson 1997, Schmoldt et al. 2001, Reynolds & Hessburg 2005).

I need to consult an example of a form to be filled in by experts in a given area of knowledge in order to perform pairwise comparisons between environmental criteria that are useful for defining the soil suitability of a region (e.g., soils, slope, aspect, climate, ...). Two factors are compared with respect to their relative importance using a rating scale that ranges from 1 to 9. Then we obtain the weights for each criterion, which will be used in the map algebra.

I conducted an AHP using 3 pairwise comparisons. Unfortunately the CR comes out as 0.302; a balanced scale using principal eigenvectors also results in a CR of 0.22. Is there any way to move forward with these results?
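For anyone checking their own numbers, this is the standard CR computation from a pairwise comparison matrix: priority weights from the principal eigenvector, then CR = CI / RI with Saaty's random index. The 3x3 matrix below is an invented, nearly consistent example:

```python
import numpy as np

# Hypothetical 3x3 reciprocal pairwise comparison matrix (Saaty 1-9 scale)
A = np.array([[1.0,  3.0, 5.0],
              [1/3,  1.0, 2.0],
              [1/5,  1/2, 1.0]])

# Principal eigenvector gives the priority (weight) vector
eigvals, eigvecs = np.linalg.eig(A)
i = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, i].real)
w /= w.sum()

# Consistency index and ratio, using Saaty's random index RI
lambda_max = eigvals[i].real
n = A.shape[0]
CI = (lambda_max - n) / (n - 1)
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]
CR = CI / RI
print(w, CR)  # CR < 0.10 is the conventional acceptability threshold
```

A CR of 0.302 means the judgments contradict each other too much for the weights to be trusted; the usual remedy is to identify the most inconsistent judgment (the entry farthest from what the other two imply) and ask the expert to revise it, rather than to accept the result.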

I am looking for some top mathematical references on Bayesian analysis and Bayesian decision making, mostly books and tutorial articles. Thank you.

In hypothesis testing we use linear parameters. I am trying to work on factoring in irrationality by using nonlinear modeling. I would like to factor in the effect of the correlation of the cause variable on the result.

Is this possible? Are there any papers on this?

Does anyone know which group tasks people commonly use in their experiments, i.e., tasks whose performance can easily be evaluated objectively? I found the Michigan State University Distributed Dynamic Decision Making (MSU-DDD) game in the literature, but could not find the modified version for research. Does anyone have this game, or know of other games that I can use in research? Thanks!

I have used AHP, TOPSIS and Fuzzy TOPSIS in my research work. I would like to know the reliability of TOPSIS and its variations.

The ﬁrst axiomatic accounts of preference for ﬂexibility and freedom of choice are due to Koopmans (1962) and Kreps (1979), who assumed that a Decision Maker always enjoys having more alternatives available. After that, e.g. Puppe (1996) refined the idea and distinguished the essential alternatives in an opportunity set as those whose exclusion “would reduce an agent’s freedom”.

Most applications I know of consider social choice problems that are relevant to economic theory. What other fields have seen applications of those concepts? I'm particularly interested in corporate decision-making and engineering design.

References:

T. C. Koopmans, "On flexibility of future preference," Cowles Foundation Discussion Paper 150, Cowles Foundation for Research in Economics, Yale University, 1962.

D. M. Kreps, "A representation theorem for 'preference for flexibility'," Econometrica, vol. 47, no. 3, pp. 565–577, 1979.

C. Puppe, "An axiomatic approach to 'preference for freedom of choice'," Journal of Economic Theory, vol. 68, no. 1, pp. 174–199, January 1996.

The reason for my question is that so many other terms in defence doctrine refer to "freedom of action". [Please see, for example, ADP 3–0, Unified Land Operations.]

I am researching trust models in WSNs and building a simulation of the model. I cannot find any MATLAB code for the reputation-based framework for sensor networks (RFSN). It uses a Bayesian formulation and a beta distribution. Could you help me?
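While I cannot point to the original MATLAB code, the core Bayesian update in RFSN-style schemes is small enough to sketch from scratch. This is a minimal illustration of a beta-reputation update in the spirit of RFSN, not the authors' implementation; the class name and the uniform Beta(1, 1) prior are my own choices:

```python
# Minimal sketch of a beta-reputation system: each node keeps Beta(alpha, beta)
# counts of another node's cooperative / non-cooperative interactions.

class BetaReputation:
    def __init__(self, alpha=1.0, beta=1.0):  # uniform Beta(1, 1) prior
        self.alpha = alpha
        self.beta = beta

    def update(self, cooperated: bool):
        """Bayesian update: each observation increments one shape parameter."""
        if cooperated:
            self.alpha += 1
        else:
            self.beta += 1

    @property
    def trust(self):
        # Trust = posterior mean of the Beta distribution
        return self.alpha / (self.alpha + self.beta)

rep = BetaReputation()
for outcome in [True, True, False, True]:
    rep.update(outcome)
print(rep.trust)  # 4/6 after 3 cooperative and 1 non-cooperative observation
```

The full RFSN framework adds aging of old observations and weighting of second-hand reports, but both are layered on top of exactly this conjugate Beta update, so porting it to MATLAB is straightforward.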

I am planning to conduct research on competitive traits and their effect on competitive states. I would appreciate it if someone could recommend an instrument to evaluate pessimistic traits and the consequences of cognitive bias. Thank you in advance.

I have a project on function approximation by fuzzy decision trees and I want to compare my results with some other methods improved by fuzzy logic.

Please let me know about any free software you know of for the ELECTRE methods.

The Saaty rating scale is rather nonlinear, but the aggregation approach is definitely linear. Is the AHP a linear or a nonlinear method? I think it is a linear method (e.g., Zarghami and Szidarovszky).

Zarghami M. and Szidarovszky F. (2011). Multicriteria Analysis, Springer, pp. 33-39.

Does anybody have any suggestions for what I should read about in connection with case-based decision theory? This is a totally new area to me and any information about the theory would be much appreciated.

Are MCDM and MADM synonyms? What are the differences?

I am considering the relation between a player and his agent or agents in the definition of the game.

I'm looking for data from prisoner's dilemma experiments in which participants played only one round of the game. A closely related experiment, which I found, is Goeree, Holt, and Laury (J Pub Econ 2002), where participants play ten one-shot games without feedback between games (hence no learning effects).

One of the tenets of multiattribute value theory is that each attribute (criterion) must be preferentially independent of the others. There are, however, specific cases where this assumption does not hold. In such cases, one can proceed by building a value function over the set of attributes that are preference-dependent. For instance, the visual quality of a forest depends on attributes such as the size of the trees, the density of the forest stand, the diversity of species, and the diversity of distinct heights, and there are preference dependencies among these attributes. How can I assess a value function for the objective "maximize the visual quality of a forest" based on these attributes?

Alice and Bob enter a game in which each has a necktie, and they call an independent judge to decide who has the better-looking necktie.

The judge takes the better necktie and awards it to the other player. Alice reasons that entering the game is advantageous: although there is a possible maximal loss of one necktie, the potential winning state is two neckties, one of which was judged superior. However, the apparent paradox is that Bob can follow the same reasoning, so how can the game be simultaneously advantageous to both players?

How can we resolve this dilemma? What are the implications and applications?

[Historical note: I did not invent this question. It was first stated in 1930 by the Belgian mathematician Maurice Kraitchik.]
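A quick way to see the resolution is simulation. If both tie values are drawn from the same distribution (a symmetric prior, which is the assumption doing all the work here), the flaw in Alice's reasoning is that she treats "my loss" as her own tie and "my win" as the better tie, ignoring that she only wins when her tie is the worse one. A small Monte Carlo sketch with a uniform prior:

```python
import random

# Draw both necktie values i.i.d. from the same prior. The judge gives the
# better tie to the other player, so a player gains only when their own tie
# was the worse of the two.
random.seed(0)
n = 200_000
gain_alice = 0.0
for _ in range(n):
    a, b = random.random(), random.random()  # tie values, symmetric prior
    if a > b:        # Alice's tie is better: she loses it to Bob
        gain_alice -= a
    elif b > a:      # Bob's tie is better: Alice receives it
        gain_alice += b
print(gain_alice / n)  # close to 0: the game is fair under symmetric priors
```

By symmetry the expected gain is exactly zero for both players; the game only becomes genuinely advantageous to one player if that player has extra information making their tie more likely to be the cheaper one, which breaks the symmetry the paradox relies on.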

As described in the papers: "Action Recognition And Prediction For Driver Assistance Systems Using Dynamic Belief Networks" and "Enrichment of Qualitative Beliefs for Reasoning under Uncertainty"

Conference Paper Action Recognition and Prediction for Driver Assistance Syst...