Forecasting without historical data: Bayesian probability models utilizing expert opinions
... Zadeh (1965) introduced the idea of a fuzzy set. A fuzzy set Ã in a space X ... However, it should be noted that probabilistic approaches can be adopted even when we do not possess any data, either with a non-informative Bayesian prior or with a prior generated by experts; see, e.g., Driver and Alemi (1995). For a discussion of the relationship between fuzzy sets and probability measures see, e.g., Prade (1989, 1993). ...
... The situation is different when the weights of the players can be described by fuzzy variables, e.g. it is not assumed that all the members of a particular party behave in the same way. Assume (using the data regarding the current Polish parliament) that the weights (number of seats) of PiS at the beginning of its term of office and at the end of 2017 are given by the triangular fuzzy numbers A(PiS) = (235, 4, 0) and Ã(PiS) = (237, 6, 0), respectively⁸. This means that the number of voters from PiS: a) immediately after the election is between 231 and 235, such that the most likely value is 235; b) at the end of 2017 is between 231 and 237, such that the most likely value is 237. ...
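Taken at face value, the (m, α, β) notation above denotes a triangular fuzzy number with mode m, left spread α and right spread β, so its support is [m − α, m + β]. A minimal sketch under that reading (the class and method names are illustrative, not from the paper):

```python
# Sketch: a triangular fuzzy number written as (m, alpha, beta), where m is
# the most likely value, alpha the left spread, and beta the right spread.
# Names here (TriangularFuzzy, membership) are illustrative, not from the paper.

from dataclasses import dataclass

@dataclass
class TriangularFuzzy:
    m: float       # modal (most likely) value
    alpha: float   # left spread
    beta: float    # right spread

    def support(self):
        """Interval of possible values, [m - alpha, m + beta]."""
        return (self.m - self.alpha, self.m + self.beta)

    def membership(self, x: float) -> float:
        """Piecewise-linear membership function, 1 at the mode, 0 outside the support."""
        if self.m - self.alpha <= x <= self.m:
            return 1.0 if self.alpha == 0 else 1 - (self.m - x) / self.alpha
        if self.m <= x <= self.m + self.beta:
            return 1.0 if self.beta == 0 else 1 - (x - self.m) / self.beta
        return 0.0

pis_start = TriangularFuzzy(235, 4, 0)  # seats right after the election
pis_2017  = TriangularFuzzy(237, 6, 0)  # seats at the end of 2017
```

With these values, `pis_start.support()` is (231, 235) and `pis_2017.support()` is (231, 237), matching the ranges stated in the snippet.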
... This is a particular case of a two-party system, which, in practice, does not exist anywhere in Europe. ⁸ This is the very conservative assumption that the ruling party never loses its majority. ...
... Being able to predict and forecast without the availability of historical data is crucial in some domains. For instance, Driver and Alemi (1995) apply a Bayesian method, based on expert opinions, to obtain forecasts without historical data in the case of medical malpractice litigation. In this context, no historical data is available on the patients ...
We propose statistical methods combining the Bayesian approach and deep learning for forecasting individual electrical consumption. This work is done in partnership with EDF. Two types of methodologies are developed: one relying on Bayesian neural networks, the other using deep learning for dimensionality reduction prior to clustering; Bayesian (non-deep) models are then applied to the clusters. First, we present a methodology for estimating a high-dimensional multi-target regression model with neural networks; it is applied to the prediction of individual load curves of non-residential customers. Second, we present a Bayesian transfer learning approach adapted to panel data; the methodology is applied to forecasting the individual end-of-month consumption of residential customers with short historical data, for specific clusters of customers. These clusters are built using neural networks.
... According to Driver [31], construction of the Bayes probability model requires four steps, and the first step is to "decide on events to forecast". The first step in the evaluation of any rainfall threshold is to identify the rainfall episodes that triggered the historical landslides, here referred to as "triggering rainfall". ...
In this paper a rainfall threshold and a Bayesian probability model are presented for the occurrence of shallow landslides in Ha Giang city and its surroundings, Vietnam. The model requires data on daily rainfall combined with the actual dates of landslide occurrences. Careful study of the database is a prerequisite for the paper; for this reason, the input data were selected carefully to ensure reliable results. The daily rainfall data, covering a time span of 57 years (from 1957 to 2013), were collected from a single rain gauge station of the National Centre for Hydro-meteorological Forecasting of Vietnam, and a landslide database containing dates of occurrence for some landslides (37 of a total of 245) was prepared from historical records for the period 1989 to 2013. Rainfall thresholds were generated for the study area based on the relationship between the daily and antecedent rainfall of the landslide events. The results show that the 3-day antecedent rainfall (with the established rainfall threshold RT = 40.8 − 0.201·R3ad) gives the best fit for the existing landslides in the landslide database. The Bayesian probability model for the one-dimensional case was established based on 26 landslides for the period 1989 to 2009 and daily rainfall data for the same period; the values of probability vary from 0.03 to 0.44. Next, the Bayesian probability model for the two-dimensional case was generated based on 11 landslides and the rainfall intensity and duration in three months (May, June and July) of 2013; the values of probability range from 0.08 to 0.67, and the computed values of conditional landslide probability P(A|B) from the two-dimensional case of the Bayesian approach are clearly controlled by rainfall intensity > 40 mm combined with rainfall duration > 0.3 day.
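Assuming the garbled formula in the abstract reads RT = 40.8 − 0.201·R3ad, with R3ad the cumulative 3-day antecedent rainfall in mm and RT the daily-rainfall threshold in mm, the threshold can be sketched as follows (function names and the strict-exceedance convention are illustrative assumptions, not taken from the paper):

```python
# Sketch of the reported rainfall threshold, assuming it reads
#   RT = 40.8 - 0.201 * R3ad
# where R3ad is the cumulative 3-day antecedent rainfall (mm) and RT is the
# daily-rainfall threshold (mm). Names and the exceedance rule are illustrative.

def daily_rainfall_threshold(r3ad_mm: float) -> float:
    return 40.8 - 0.201 * r3ad_mm

def exceeds_threshold(daily_mm: float, r3ad_mm: float) -> bool:
    """Flag a day whose rainfall exceeds the antecedent-adjusted threshold."""
    return daily_mm > daily_rainfall_threshold(r3ad_mm)

# The wetter the preceding 3 days, the less daily rain is needed to flag a day:
t_dry = daily_rainfall_threshold(0)    # 40.8 mm with no antecedent rain
t_wet = daily_rainfall_threshold(100)  # roughly 20.7 mm after 100 mm antecedent
```

Note the negative slope: antecedent rain lowers the daily rainfall needed to trigger a landslide, which is the usual form of antecedent-rainfall thresholds.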
... We describe an evidence aggregation model that facilitates risk-based prioritization of surveillance and augments previous models of disease freedom (Audige et al., 2001; Martin et al., 2007a,b). The Bayesian method, using likelihood ratios (LRs) to describe contextual evidence, has foundations in the decision sciences, e.g., to predict program or treatment success or model expert guidance for organizational change (Von Winterfeldt and Edwards, 1986; Gustafson et al., 1992, 1993, 2003; Driver and Alemi, 1995; Bosworth et al., 1999). LRs are also used to represent the accuracy of diagnostic tests (Gallagher, 1998; Fosgate et al., 2006) or risk factors (Gustafson et al., 1998, 2005) in animal health evaluation. ...
The ability to combine evidence streams to establish disease freedom or prioritize surveillance is important for the evaluation of emerging diseases, such as viral hemorrhagic septicemia virus (VHSV) IVb in freshwater systems of the United States and Canada. Waterways provide a relatively unconstrained pathway for the spread of VHSV, and structured surveillance for emerging disease in open systems has many challenges. We introduce a decision framework for estimating VHSV infection probability that draws from multiple evidence streams and addresses challenges associated with the assessment of emerging disease. Using this approach, historical and risk-based evidence, whether empirical or expert-derived, supplements surveillance data to estimate disease probability. Surveillance-based estimates of VHSV prevalence were described using beta distributions. Subjective likelihood ratios (LRs), representing contextual risk, were elicited by asking experts to estimate the predicted occurrence of risk factors among VHSV-affected vs. VHSV-unaffected watersheds. We used the odds form of Bayes' theorem to aggregate expert and surveillance evidence to predict the risk-adjusted posterior probability of VHSV infection for given watersheds. We also used LRs representing contextual risk to quantify the time value of past surveillance data. This evidence aggregation model predicts disease probability from the combined assessment of multiple sources of information. The method also provides a flexible framework for iterative revision of disease freedom status as knowledge and data evolve.
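The odds form of Bayes' theorem mentioned above multiplies prior odds by each likelihood ratio in turn: posterior odds = prior odds × LR₁ × LR₂ × …, assuming the evidence streams are conditionally independent. A minimal sketch (the function name and the numeric values are made up for illustration, not taken from the paper):

```python
# Odds form of Bayes' theorem: posterior_odds = prior_odds * LR1 * LR2 * ...
# assuming conditionally independent evidence streams. Illustrative numbers only.

from math import prod

def posterior_probability(prior_p: float, likelihood_ratios) -> float:
    prior_odds = prior_p / (1 - prior_p)
    post_odds = prior_odds * prod(likelihood_ratios)
    return post_odds / (1 + post_odds)

# A prior prevalence estimate for a watershed, updated with two expert-elicited
# likelihood ratios: one risk-increasing factor (LR > 1), one protective (LR < 1).
p = posterior_probability(0.10, [3.0, 0.5])
```

An LR of 1 leaves the probability unchanged, which is why contextual evidence can be layered onto surveillance-based estimates without restructuring the model.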
Studies of the psychology of hindsight have shown that reporting the outcome of a historical event increases the perceived likelihood of that outcome. Three experiments with a total of 463 paid volunteers show that similar hindsight effects occur when people evaluate the predictability of scientific results—they tend to believe they "knew all along" what the experiments would find. The hindsight effect was reduced, however, by forcing Ss to consider how the research could otherwise have turned out. Implications for the evaluation of scientific research by lay observers are discussed. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Investigated how often people are wrong when they are certain that they know the answer to a question. Five studies with a total of 528 paid volunteers suggest that the answer is "too often." For a variety of general-knowledge questions, Ss first chose the most likely answer and then indicated their degree of certainty that the answer they had selected was, in fact, correct. Across several different question and response formats, Ss were consistently overconfident. They had sufficient faith in their confidence judgments to be willing to stake money on their validity. The psychological bases for unwarranted certainty are discussed in terms of the inferential processes whereby knowledge is constructed from perceptions and memories. (15 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
From the subjectivist point of view (de Finetti, 1937) a probability is a degree of belief in a proposition whose truth has not been ascertained. A probability expresses a purely internal state; there is no “right” or “correct” probability that resides somewhere “in reality” against which it can be compared. However, in many circumstances, it may become possible to verify the truth or falsity of the proposition to which a probability was attached. Today, we assess the probability of the proposition “it will rain tomorrow”. Tomorrow, we go outside and look at the rain gauge to see whether or not it has rained. When verification is possible, we can use it to gauge the adequacy of our probability assessments.
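One common way to gauge probability assessments against verified outcomes is the Brier score, the mean squared error between the stated probability and the 0/1 outcome. The score is an illustrative choice of verification measure, not one named in the excerpt above:

```python
# Brier score: mean squared error between stated probabilities and observed
# 0/1 outcomes. Lower is better; 0 means perfectly confident and always right.

def brier_score(forecasts, outcomes) -> float:
    """forecasts: stated probabilities in [0, 1]; outcomes: 1 if the event occurred."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A forecaster who said 0.9 for rain on days it rained and 0.1 when it stayed dry:
score = brier_score([0.9, 0.1, 0.8], [1, 0, 1])
```

A well-calibrated assessor's events stated at probability p should occur about p of the time, and the Brier score penalizes departures from that in both directions.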
Survival after diagnosis of acquired immunodeficiency syndrome (AIDS) differs greatly depending upon the type of complications the patient experiences. A detailed understanding of this variation in survival is an essential component of health care planning so that demands for hospital beds and other health care resources can be more realistically anticipated. We developed two severity indices for predicting the prognosis of AIDS patients using additive and multiplicative multi-attribute utility models. A panel of physicians described the scoring of each index. On 97 randomly selected patient profiles, we compared both severity scores to the clinical assessments of the expert panel. The multiplicative index was more accurate than the additive index. We describe the multiplicative scoring system for AIDS severity.
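The additive and multiplicative multi-attribute utility forms compared above can be sketched in the standard Keeney–Raiffa shapes. The weights, utilities, and scaling constant below are made up for illustration; the paper's actual attributes and coefficients are not reproduced here:

```python
# Sketch of additive vs. multiplicative multi-attribute utility aggregation
# (Keeney-Raiffa forms). All numbers are illustrative, not from the paper.

def additive_utility(weights, utilities):
    """U = sum_i w_i * u_i, with weights summing to 1."""
    return sum(w * u for w, u in zip(weights, utilities))

def multiplicative_utility(weights, utilities, k):
    """U = (prod_i (1 + k * w_i * u_i) - 1) / k, where k is the master scaling
    constant; properly it must satisfy 1 + k = prod_i (1 + k * w_i), but here
    it is simply assumed to be given."""
    prod_term = 1.0
    for w, u in zip(weights, utilities):
        prod_term *= 1.0 + k * w * u
    return (prod_term - 1.0) / k
```

The multiplicative form lets a very severe attribute dominate the score instead of being averaged away, which is one plausible reason it tracked the physicians' clinical assessments more closely than the additive index.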
Decision analysis was used to study negotiations in the health care context. This paper found that analytical methods could answer several important questions related to complex negotiations, including whether contracts promote the interest of both parties, whether a decision aid could better meet the priorities of both parties, and whether one negotiator is more successful than the other in repeated negotiations. The paper concluded that micro-health care negotiations can be traced and studied with existing mathematical theories of negotiation.
This research evaluated four methods of eliciting subjective likelihood ratio estimates. The methods differed in terms of the amount and structure of interaction permitted between estimators. These processes were individual estimates and three group processes: a Talk-Estimate process approximating an interacting group, an Estimate-Feedback-Estimate process as an approximation of a Delphi group, and an Estimate-Talk-Estimate process as a combination of nominal and interacting groups. In this study the Estimate-Talk-Estimate group process was superior in approaching correct estimates in this judgmental task. This is consistent with the long research tradition which favors group as opposed to individual problem-solving in judgmental situations. The individual Estimate process and the Estimate-Feedback-Estimate technique performed about equally well with respect to both error and variability; if anything, written feedback appeared to lead to a reduction in the quality of estimates. Finally, the relatively poor results from the Talk-Estimate process are consistent with other studies which have pointed out dysfunctions of interacting group processes for judgmental tasks.
The age-old question of the generalizability of the results of experiments that are conducted in artificial laboratory settings to more realistic inferential and decision-making situations is considered in this paper. Conservatism in probability revision provides an example of a result that (1) has received wide attention, including attention in terms of implications for real-world decision making, on the basis of experiments conducted in artificial settings and (2) is now apparently thought by many to be highly situational and not at all a ubiquitous phenomenon, in which case its implications for real-world decision making are not as extensive as originally claimed. In this paper conservatism is considered in some detail within the context of the generalizability question. In a more general vein, we discuss some of the difficulties inherent in experimentation in realistic settings, suggest possible procedures for avoiding or at least alleviating such difficulties, and make a plea for more realistic experiments.
This paper reports the results of a study to develop and pilot test a system for screening potential suicide attemptors. The system includes a computer interview of patients complaining of suicidal thoughts and Bayesian processing (using subjective probability estimation) of the results of that interview. The results suggest that the system may significantly improve the health field's ability to identify suicide attemptors.