Decision Theory - Science topic
A theoretical technique utilizing a group of related constructs to describe or prescribe how individuals or groups of people choose a course of action when faced with several alternatives and a variable amount of knowledge about the determinants of the outcomes of those alternatives.
Questions related to Decision Theory
For example, suppose I have Y ~ N(mean, variance); I then have an expected loss for each parameter: E[L(mean, d*)] for the mean and E[L(variance, d*)] for the variance. I want a measure that integrates both expected losses.
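For what it's worth, here is a minimal sketch of one common way to integrate the two expected losses: a weighted sum in which each loss is first normalized by the scale of its parameter so the components are comparable. The data, the true values, and the 0.5/0.5 weights are all assumptions for illustration.

```python
import numpy as np

# Hedged sketch: combine the two losses via a scale-normalized weighted sum.
rng = np.random.default_rng(0)
true_mean, true_var = 5.0, 4.0
y = rng.normal(true_mean, np.sqrt(true_var), size=10_000)

# Decision d* = (sample mean, sample variance)
d_mean, d_var = y.mean(), y.var(ddof=1)

# Squared-error loss for each parameter
loss_mean = (d_mean - true_mean) ** 2
loss_var = (d_var - true_var) ** 2

# Normalize by the parameter scale before combining: the mean lives on the
# sigma scale, the variance on the sigma^2 scale.
w_mean, w_var = 0.5, 0.5
combined = w_mean * loss_mean / true_var + w_var * loss_var / true_var ** 2
print(combined)
```

The weights (and the normalization) encode how much you care about errors in each parameter, so they are a modeling choice rather than something the data dictates.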
Such a system could be a Business Intelligence analytics platform connected to a Big Data database in which information from the Internet is collected, processed, and analyzed, including comments that users post on social media portals.
On the basis of these data, the Business Intelligence system generates analytical reports describing changes in interest in and consumer preferences for specific products and services, as well as changes in the assessment of the brand of the company that offers them.
These reports can be very useful in the business management process; for example, they can support decision-making in production planning, distribution, and the organization of online sales in the form of e-commerce.
Do you agree with me on the above matter?
In the context of the above issues, the following question arises:
How should one build a decision support system for selling on the Internet, i.e., for an online store or e-commerce business?
I invite you to the discussion
Thank you very much
The use of information contained in Big Data database systems for the purposes of conducting Business Intelligence analyses is described in the following publications:
I invite you to discussion and cooperation.
It seems that there is, more or less, some sort of consensus on academic standards. Who is responsible for drawing up the guidelines that shape the way academia functions? Who do you think sets the standards for research publishing in influential journals?
Recommendations by ordinary researchers? Decisions by elite researchers? Do policy makers have a say in this? What connects these academic decision makers, whether individuals or institutions, and governs them?
I would appreciate your views. Thanks!
I am studying how WOM can affect the purchase decision of the customer in a specific industry. Purchase decision theory already exists, and I am studying whether WOM affects the purchase decision; I have also created a hypothesis. For data collection I am only using a questionnaire, so my research is quantitative.
So, should the research approach be deductive or abductive? Should I remove the hypothesis to avoid any confusion? In addition, which research philosophy would be most suitable?
Greetings, researchers. Given that there are several political models in science, technology, and innovation, I need literature that measures the impact of these politics on the decision perspective of the researcher, since influences from the media, politicians, society, and academia define what and how to research, which ultimately defines the technological trajectory.
Is Shannon entropy a good technique for weighting in multi-criteria decision-making?
As you know, we use Shannon entropy for weighting criteria in multi-criteria decision-making.
I think it is not a good technique for weighting in the real world because:
It just uses the decision matrix data.
If we add some new alternatives, the weights change.
If we change the period of time, the weights change.
For example, suppose we have 3 criteria: price, speed, and safety.
Over several periods of time the weights of the criteria vary.
For example, if our period of time is one month:
this month price may get 0.7 (speed = 0.2, safety = 0.1);
next month price may get 0.1 (speed = 0.6, safety = 0.3).
This is against reality! What is your opinion?
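The objection is easy to reproduce numerically. A minimal sketch (the decision matrix values are made up for illustration) that computes Shannon entropy weights and then shows them shifting when a single alternative is added:

```python
import numpy as np

def entropy_weights(X):
    """Shannon entropy criterion weights from a decision matrix X
    (rows = alternatives, columns = criteria, all entries positive)."""
    m = X.shape[0]
    P = X / X.sum(axis=0)                         # column-wise proportions
    E = -(P * np.log(P)).sum(axis=0) / np.log(m)  # entropy per criterion
    d = 1.0 - E                                   # degree of diversification
    return d / d.sum()

# price, speed, safety for 3 alternatives (illustrative numbers)
X = np.array([[100., 8., 6.],
              [120., 7., 9.],
              [ 90., 9., 7.]])
w1 = entropy_weights(X)

# Add one new alternative: the weights change, which is the objection above
X2 = np.vstack([X, [300., 2., 3.]])
w2 = entropy_weights(X2)
print(w1, w2)
```

Because the weights are computed purely from the dispersion of the matrix columns, any change to the alternative set or the observation period changes them, exactly as described.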
Can numbers (the Look-Then-Leap Rule or the Gittins Index) be used to help a person decide when to stop looking for the most suitable career path and LEAP into it instead, or is the career situation too complicated for that?
Mathematical answers to the question of optimal stopping in general (when you should stop looking and leap)?
Gittins Index, Feynman's restaurant problem (not discussed in detail)
Look-Then-Leap Rule (secretary problem, fiancé problem): (√n, n/e, 37%)
How do we apply this rule to career choice?
1- Potential ways of application:
A- n is time.
Like what Michael Trick did: https://goo.gl/9hSJT1 . Michael Trick is a CMU operations research professor who applied this rule to decide the best time for his marriage proposal, though he seems to think that this was a failed approach.
In our case, should we do it by age: 20-70 = 50 years, so 38 years old is where you stop looking, for example? Or should we multiply 37% by 80,000 hours to get a total of 29,600 hours of career "looking"?
B- n is the number of available options. Like the secretary problem.
If we have 100 viable job options, do we just look into the first 37? If we have 10, do we just look into the first 4? And what if we are still at a stage of our lives where we have thousands of career paths?
2- Why the situation is more complicated in the career-choice case:
A- You can want a career and pursue it and then fail at it.
B- You can mix career paths. If you take option C, it can help you later on with option G. For example, if I went into Internet research, that would help me later on if I decided to become a writer, so there is overlap between the options and a more dynamic relationship. Also, the option you choose in selection #1 will influence the likelihood of choosing other options in selection #2 (for example, if in 2018 I choose to work at an NGO, that will influence my options if I want to do a career transition in 2023, since it will limit my possibility of entering the corporate world in 2023).
C- You need to be making money so "looking" that does not generate money is seriously costly.
D- The choice is neither strictly sequential nor strictly simultaneous.
E- Looking and leaping alternate over a lifetime, unlike the example where you keep looking and then leap once.
Is there a practical way to measure how the probability of switching back and forth between our career options affects the optimal exploration percentage?
F- There is something between looking and leaping, which is testing the waters. Let me explain. "Looking" here doesn't just mean "thinking" or "self-reflection" without action. It could also mean trying out a field to see if you're suited for it. So we can divide looking into "experimentation looking" and "thinking looking". And what separates looking from leaping is commitment and being settled. There's a trial period.
How does this affect our job/career options example since we can theoretically "look" at all 100 viable job positions without having to formally reject the position? Or does this rule apply to scenarios where looking entails commitment?
G- You can return to a career that you rejected in the past. Once you leap, you can look again.
"But if you have the option to go back, say by apologizing to the first applicant and begging them to come work with you, and you have a 50% chance of your apology being accepted, then the optimal explore percentage rises all the way to 61%." https://80000hours.org/podcast/episodes/brian-christian-algorithms-to-live-by/
3- A real-life example:
Here are some of my major potential career paths:
1- Behavioural change communications company, 2- soft-skills training company, 3- consulting company, 4- blogger, 5- Internet research specialist, 6- academic, 7- writer (Malcolm Gladwell style; popularization of psychology), 8- NGOs.
As you can see, the options here overlap to a great degree. So with these options, should I just say "OK, the square root of 8 is about 3," pick 3 of those, try them for a year each, and then stick with whatever comes next and is better?
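As a sanity check on the look-then-leap rule itself, here is a small simulation sketch using the 8 options from the example above: skip the first 3, then leap at the first option better than everything seen so far. Success means ending up with the single best option; the payoff structure of real careers is, as argued above, much richer than this.

```python
import random

def simulate(n=8, k=3, trials=20_000):
    """Look-then-leap simulation for the secretary problem: reject the
    first k of n randomly ordered options, then take the first option
    better than all seen so far (or the last one if none beats them).
    Returns the fraction of trials in which the overall best was chosen."""
    wins = 0
    for _ in range(trials):
        ranks = random.sample(range(n), n)          # 0 = best option
        best_seen = min(ranks[:k]) if k else n
        choice = next((r for r in ranks[k:] if r < best_seen), ranks[-1])
        wins += (choice == 0)
    return wins / trials

random.seed(1)
print(simulate(n=8, k=3))   # with 8 career options, look at ~3 then leap
```

For n = 8 the rule finds the best option roughly 40% of the time, close to the 1/e ≈ 37% asymptote, which is what the "root of 8 is about 3" intuition is appealing to.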
I'm a teacher and struggling a lot to complete my MS. I need to write an MS-level research thesis. I can work in decision making (preference-relation-related research), artificial intelligence, semigroups or Γ-semigroups, computing, soft computing, soft sets, MATLAB-related projects, etc. Kindly help me. I would be very grateful to you for this. Thanks.
I am looking for a book or another reference that has different examples of decision theory in building construction, for example, examples that show different actions and their outcomes for an uncertain problem (e.g., the existence of hazardous materials in an existing building).
I would highly appreciate it if anyone could help me.
This refers to the measurement of subjective utility or its neuroeconomic sibling, subjective value, ideally in isolated laboratory settings, i.e., with no situational factors involved.
I would expect different heuristics or biases observed in such DM tasks due to the more abstract nature of public goods (PG), the issue of value appropriation, and perhaps stronger influence of emotions (or other factors).
I'm looking for reactions to the nature of the choice object that are much more pronounced for, or unique to, PG.
I think scope insensitivity is one such thing but there should be more.
However, I'm having a hard time locating good articles for this question. Do you know any?
In order to get a homogeneous population, I inspect two conditions and filter the entire population (all possible members) according to these two conditions, then use all the remaining filtered members in the research. Is it still a population, or is it a sample (what is it called)?
If we work on a mathematical equation by adding another part to it, then find the solution and apply it to the real world, can we generalize its result to other real-world settings?
In the decision-making process for energy retrofit actions, when choosing the best retrofit intervention, which should be kept under consideration: MODM or MADM?
In many instances it has been said that cutting a dendrogram at a certain level gives a set of clusters. Cutting at another level gives another set of clusters. How would you pick where to cut the dendrogram?
Is there something we could consider an optimal point? I have also wondered about this problem, but (unfortunately) haven't found any convincing answers yet.
So is it correct to say, "there is no definitive answer, since cluster analysis is essentially an exploratory approach; the interpretation of the resulting hierarchical structure is context-dependent and often several solutions are equally good from a theoretical point of view"? Please help me.
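That statement is fair, but pragmatic heuristics do exist. One common one is to cut where the merge distance jumps the most, i.e., at the largest "gap" in the dendrogram heights. A minimal sketch on synthetic data (three artificial Gaussian clusters, so the expected answer is known):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Synthetic data: three well-separated 2-D blobs of 30 points each.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0.0, 0.0], 0.5, (30, 2)),
               rng.normal([5.0, 5.0], 0.5, (30, 2)),
               rng.normal([0.0, 5.0], 0.5, (30, 2))])

Z = linkage(X, method="ward")
heights = Z[:, 2]                       # merge distances, non-decreasing
gaps = np.diff(heights)
# Cutting just before the biggest jump leaves n - (argmax + 1) clusters.
k = len(X) - (np.argmax(gaps) + 1)
labels = fcluster(Z, t=k, criterion="maxclust")
print(k)
```

On real data the biggest gap is often less clear-cut, and internal indices (silhouette, gap statistic) can disagree, which is exactly why the context-dependence caveat in the quote still stands.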
In neutrosophic sets all three measures (truth, falsehood, indeterminacy) are independent; how does one affect another in decision making? For example, in the case of intuitionistic fuzzy sets, if the membership of an element increases, then the sum of the other two measures (non-membership and hesitation) will certainly decrease.
For both Bayesian and frequentist expected loss, is the parameter an index of the data on which decisions are made, or a state of nature?
Are there examples where a loss function is mapped using a vector of real observations to show what the parameter looks like?
I am putting together a decision matrix: a 3D cube based on three factors. The three observable factors are measured, and the coordinate opens a cell that gives the decision. A trivial example: suppose you had a pet and worked out that there are three main indicators of what it wants: wagging of the tail, excitement, and what it brings to you. You want to leave behind a simple model so that when you are away, anyone looking after the pet can observe these three and understand what the pet wants, e.g., moderate tail wagging, high excitement, and bringing you a leash means it wants to go for a walk. My question is: has any work been done on this type of model?
Suppose I want to give my robot a specific goal state to achieve, but the robot must also maximize some reward function while pursuing the goal. What if the shortest sequence of actions that achieves the goal is also the most costly according to the reward function? Or what if the most rewarding sequence of actions is the longest possible? Is there some work that tackles this problem?
Many authors deal with individual differences, but there is inconsistency on what constructs are individual differences. Especially in decision making style research, where some researchers study individual differences in decision making style, while others include decision making style as an individual difference.
I'm trying to do the IIA test for an unlabeled choice experiment in Stata. However, it throws an error indicating: "Hausman IIA / Small-Hsiao IIA test requires at least 3 dependent categories".
My dependent variable is <<choice>>, which takes the value of 1 if one of the alternatives A, B, or C is chosen and 0 otherwise.
What can I do?
Can anyone help with accessible literature or article suggestions on counterpower and sociocracy in English (preferably in the radical pedagogy framework)? So far I've come to Peter Wohlleben (The Secret Lives of Trees), but since I can't read German, it's not very helpful. On the counterpower topic I have Tim Gee (just as a source; I would love to get his book if anyone has it as a PDF).
Thank you all so much in advance.
P.S.: How does this requesting work? I've requested two articles already, but no one seems to respond. :-/
When there is a large number of attributes (say 20+), ensuring that they are either independent of each other or only slightly dependent is very difficult. However, one tries to avoid dependency among attributes when building an MCDA hierarchy. So, before generating the final ranking, does PROMETHEE automatically take care of this issue through its positive, negative, and net outranking flows?
I am doing an AHP but have a regular questionnaire because I have multiple alternatives (up to 40). Using an AHP questionnaire would be cumbersome and perhaps too complicated for the respondents, would be time-consuming, and might cause consistency problems. So I'm wondering whether there's any justification for using regular scales in this circumstance.
I am trying to find the link between normative decision theory (NDT) and the decision support system (DSS) domain. I'm confused. Both domains use the magical word "decision".
However, NDT mostly concentrates on analyzing decisions with respect to their outcomes or consequences, minimizing a loss function or maximizing outcomes.
DSS, on the other side, seems to be limited to classification: predict the decision on the basis of historical data, where the loss function is constructed over good/bad predictions. Many times the training data for a DSS are sets labeled by domain experts.
Is it true that DSS uses the term "decision" in the sense of "inference" or "judgment" in the namespace of NDT?
I begin with a general question. Is normative decision theory in its primary form applied to real problems? I can hardly find examples of real payoff matrices among the toy examples.
Back to the main question. I would like to represent the following problem in the form of a payoff matrix: an incident commander arriving at the fire ground has these alternatives: 1. gathering further information; 2. evacuating people; 3. extinguishing the fire.
Candidates for states of nature: the fire will extinguish itself; the people will evacuate themselves.
How should the payoff matrix for this problem be constructed? Should the states of nature be composed as combinations of the candidates' values:
State 1: won't extinguish itself, won't evacuate themselves;
State 2: will extinguish itself, won't evacuate themselves;
State 3: won't extinguish itself, will evacuate themselves;
State 4: will extinguish itself, will evacuate themselves;
Let us assume that the candidates are not mutually exclusive and are independent.
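Under those assumptions the four states can be enumerated mechanically, and a classical decision rule such as maximin (Wald) can then be applied. A sketch in which the payoff numbers are purely illustrative assumptions, not real fire-ground values:

```python
# States of nature as combinations of the two binary candidates, in the
# order listed above: (self-extinguishes?, self-evacuates?).
states = [(False, False), (True, False), (False, True), (True, True)]

alternatives = ["gather information", "evacuate people", "extinguish fire"]

# payoff[a][s] for state s in the order above; larger is better.
# These numbers are made up solely to show the mechanics.
payoff = {
    "gather information": [-10, -2, -4, 5],
    "evacuate people":    [  2,  4,  0, 1],
    "extinguish fire":    [  5,  1,  6, 2],
}

# Maximin (Wald) rule: choose the alternative whose worst case is best.
best = max(alternatives, key=lambda a: min(payoff[a]))
print(best)
```

The hard part, of course, is eliciting defensible payoffs for each (alternative, state) cell; the enumeration and the decision rule themselves are trivial once those exist.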
What are the key aspects that differentiate normative and prescriptive models? Prescriptive models are something between normative and descriptive models; however, they have strong roots in normative theory. How can these two kinds of models be clearly distinguished?
I know two types of them: WSM (the weighted sum method) and the desirability function method. Are there other methods?
I think about onsite decision support for incident commanders.
In my opinion, currently the descriptive and naturalistic models are exploited. Why not prescriptive or normative ones?
If we consider a human as the decision maker, the factors underlying the use of descriptive or naturalistic models are the inability to comprehend and process analytically all the information, courses of action, consequences, and costs of alternative activities in a mentally demanding, time-pressured environment.
If we consider a computer system as a decision maker, the factors are:
- lack of information - we cannot ask firefighters to insert data into a computer system because they do not have the time;
- a poor sensory layer for recognition of phenomena or victims - so far there are no sensors in buildings that make it possible to track fire dynamics, people's locations, and their physical state;
- huge uncertainty in modeling and foreseeing the fire and people's behavior, the reaction of the building to fire exposure, changes in ventilation, extinguishing effects, and many other factors.
What do you think about this problem?
I am looking for methods like ELECTRE IV or MAXIMIN, and for papers where the problem of criteria incomparability is considered.
I'm looking for theories or models which try to combine or to unify the theory of Alfred Schütz with common theories of explaining action via decision models like Rational Choice or Bounded Rationality models. The Frame Selection Theory of Hartmut Esser and Clemens Kroneberg is well known to me but I wonder whether there are similar but independent attempts.
I was thinking about the different decision making methods under certain and uncertain conditions. My specific question is that:
As you know, we have many MCDM tools like AHP, ANP, TOPSIS, VIKOR, PROMETHEE, MOORA, SIR, and many other methods, and all of them have been extended to fuzzy, type-2 fuzzy, intuitionistic fuzzy, and grey environments. Which one is really more applicable under uncertain situations: fuzzy, type-2 fuzzy, intuitionistic fuzzy, or a grey environment for a decision-making method? I know each uncertainty logic has its applications, but sometimes a tiny difference in the collected data may cause different results with each method.
All ideas and comments are appreciated. I hope all the experts respond to this question by following it or leaving their valuable comments.
I need a quick way to get participants to think/act as if they have made their own choice, while actually having their choice correspond to their assigned condition. In other words, I am looking for a way to get them to "choose" their assigned condition.
I am considering offering multiple choices (out of 4) and telling them that their choice has to match a random selection in order for the task to begin, but I am wondering if there is a better, more efficient way to do this.
In intertemporal choice paradigms, I would like to build a discounted utility function for each participant in my study based on a couple of intertemporal decisions performed by each participant. Is there any software that can easily perform such calculations?
Is MATLAB the most appropriate software for doing this?
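MATLAB works, but any environment with a nonlinear least-squares routine will do. A minimal Python sketch fitting the common hyperbolic form V = A / (1 + kD) to one participant's indifference points; the delays, indifference values, and the A = 100 reward magnitude are all made-up assumptions for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

A = 100.0  # assumed delayed reward magnitude

def hyperbolic(delay, k):
    """Hyperbolic discounting: present value of A delivered after `delay` days."""
    return A / (1.0 + k * delay)

# Hypothetical indifference points for one participant (delay in days).
delays = np.array([0., 7., 30., 90., 180., 365.])
indiff = np.array([100., 80., 55., 35., 25., 15.])

# Fit the single discount-rate parameter k by nonlinear least squares.
(k_hat,), _ = curve_fit(hyperbolic, delays, indiff, p0=[0.01])
print(k_hat)
```

With only a couple of decisions per participant the fit will be noisy, so it is common to report log(k) and to check each participant's data for monotonicity before fitting.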
I have a weighted supermatrix and I am trying to convert it into a limit matrix. The weighted supermatrix can be transformed into the limit supermatrix by raising it to successive powers until the matrix converges. How can this be performed?
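A minimal sketch of that procedure (the 3x3 column-stochastic matrix below is a made-up example):

```python
import numpy as np

# A made-up weighted supermatrix: columns sum to 1 (column-stochastic).
W = np.array([[0.2, 0.5, 0.3],
              [0.5, 0.3, 0.3],
              [0.3, 0.2, 0.4]])

# Raise W to successive powers until the matrix stops changing.
L = W.copy()
for _ in range(1000):
    nxt = L @ W
    if np.allclose(nxt, L, atol=1e-12):
        break
    L = nxt

print(L[:, 0])   # in the limit, every column holds the same priority vector
```

This works when the supermatrix is primitive (some power has all positive entries). If it is cyclic, the raw powers oscillate and one instead averages successive powers (a Cesàro limit), which is what ANP software typically does behind the scenes.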
The Analytic Hierarchy Process - AHP (Saaty 1980) is a multicriteria tool considered relevant to nearly any ecosystem management application that requires the evaluation of multiple participants or in which complex decision-making processes are involved (Schmoldt & Peterson 1997, Schmoldt et al. 2001, Reynolds & Hessburg 2005).
I need to consult an example of a form to be filled in by experts in a given area of knowledge in order to perform a pairwise comparison between environmental criteria that are useful for defining the soil suitability of a region (e.g., soils, slope, aspect, climate, ...). Two factors are compared using a rating scale that ranges from 1 to 9 with respect to their relative importance. We then obtain the weights for each criterion, which will be used in the map algebra.
I conducted an AHP using 3 pairwise comparisons. Unfortunately, the CR comes out as 0.302. A balanced scale using principal eigenvectors also results in a CR of 0.22. Is there any way to move forward with these results?
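Before deciding whether to move forward, it can help to recompute the CR after each revision of the judgments. A minimal sketch of the standard eigenvalue-based calculation (the pairwise matrix shown is a perfectly consistent made-up example, so its CR comes out at essentially zero; Saaty's usual acceptance threshold is 0.10):

```python
import numpy as np

def consistency_ratio(A):
    """Saaty consistency ratio of a reciprocal pairwise comparison matrix A."""
    n = A.shape[0]
    lam_max = max(np.linalg.eigvals(A).real)   # principal eigenvalue
    ci = (lam_max - n) / (n - 1)               # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]  # Saaty's RI table
    return ci / ri

# A perfectly consistent 3x3 matrix (each entry a_ij = w_i / w_j).
A = np.array([[1.0,  2.0, 4.0],
              [0.5,  1.0, 2.0],
              [0.25, 0.5, 1.0]])
print(consistency_ratio(A))
```

With a CR of 0.302 the usual advice is to identify the most inconsistent judgment (the cell furthest from the ratio implied by the eigenvector) and ask the expert to revisit it, rather than to proceed with the weights as they stand.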
I am looking for some of the best mathematical references on Bayesian analysis and Bayesian decision making, mostly books and tutorial articles. Thank you.
In hypothesis testing we use linear parameters. I am trying to work on factoring in irrationality by using nonlinear modeling. I would like to factor in the effect of the correlation of the cause variable on the result.
Is this possible? Are there any papers on this?
Does anyone know the common group tasks that people use in their experiments, i.e., tasks where performance can easily be evaluated objectively? In the literature I found the Michigan State University Distributed Dynamic Decision Making (MSU-DDD) task, but I could not find the modified version for research. Does anyone have this game or know of other games that I can use in research? Thanks!
The ﬁrst axiomatic accounts of preference for ﬂexibility and freedom of choice are due to Koopmans (1962) and Kreps (1979), who assumed that a Decision Maker always enjoys having more alternatives available. After that, e.g. Puppe (1996) refined the idea and distinguished the essential alternatives in an opportunity set as those whose exclusion “would reduce an agent’s freedom”.
Most applications I know of consider social choice problems that are relevant to economics theory. What other fields have seen applications of those concepts? I'm particularly interested in corporate decision-making and engineering design.
T. C. Koopmans, "On flexibility of future preference," Cowles Foundation for Research in Economics, Yale University, Cowles Foundation Discussion Papers 150, 1962.
D. M. Kreps, "A representation theorem for 'preference for flexibility'," Econometrica, vol. 47, no. 3, pp. 565-577, 1979.
C. Puppe, "An Axiomatic Approach to 'Preference for Freedom of Choice'," Journal of Economic Theory, vol. 68, no. 1, pp. 174-199, January 1996.
The reason for my question is that so many other terms in the defence refer to the "Freedom of Action". [Please see for example: ADP 3–0, Unified Land Operations]
I am researching trust models in WSNs and building a simulation of the model. I cannot find MATLAB code for the reputation-based framework for sensor networks (RFSN), which uses a Bayesian formulation and a beta distribution. Could you help me?
I am planning to conduct research on competitive traits and their effect on competitive states. I would appreciate it if someone could recommend an instrument to evaluate pessimistic traits and the consequences of cognitive bias. Thank you in advance.
I have a project on function approximation by fuzzy decision trees and I want to compare my results with some other methods improved by fuzzy logic.
The Saaty rating scale is rather nonlinear, but the aggregation approach is definitely linear. Is the AHP a linear or a nonlinear method? I think it is a linear method (e.g., Zarghami and Szidarovszky).
Zarghami M. and Szidarovszky F. (2011). Multicriteria Analysis, Springer, pp. 33-39.
Does anybody have any suggestions for what I should read about in connection with case-based decision theory? This is a totally new area to me and any information about the theory would be much appreciated.
I'm looking for data from prisoner's dilemma experiments in which participants played only one round of the game. A closely related experiment, which I found, is Goeree, Holt and Laury (J Pub Econ 2002) where participants play ten one-shot games without feedback between games (hence, no learning effects).
One of the tenets of multiattribute value theory is that the attributes (criteria) must be preferentially independent of each other. There are, however, specific cases where this assumption does not hold. In these cases, one can proceed by building a value function based on the set of attributes that are preference-dependent. For instance, the visual quality of a forest depends on attributes such as the size of the trees, the density of the forest stand, the diversity of species, and the diversity of distinct heights. There are preference dependencies among these attributes. How can I assess a value function for the objective "maximize the visual quality of a forest" based on these attributes?
Alice and Bob enter a game in which each has a necktie, and they call an independent judge to decide who has the better-looking necktie.
The judge takes the better necktie and awards it to the other player. Alice reasons that entering the game is advantageous: although there is a possible maximal loss of one necktie, the potential winning state is two neckties, one of which is judged superior. However, the apparent paradox is that Bob can follow the same reasoning, so how can the game be simultaneously advantageous to both players?
How can we resolve this dilemma? What are the implications and applications?
[Historical note: I did not invent this question. It was first stated in 1930 by the Belgian mathematician Maurice Kraitchik.]
As described in the papers: "Action Recognition And Prediction For Driver Assistance Systems Using Dynamic Belief Networks" and "Enrichment of Qualitative Beliefs for Reasoning under Uncertainty"