Conference Paper

Measuring the Uncertainty of Environmental Good Preferences with Bayesian Deep Learning

Conference Paper
Full-text available
In the last decade, with the availability of large datasets and more computing power, machine learning systems have achieved (super)human performance in a wide variety of tasks. Examples of this rapid development can be seen in image recognition, speech analysis, strategic game planning and many more. The problem with many state-of-the-art models is a lack of transparency and interpretability. The lack thereof is a major drawback in many applications, e.g. healthcare and finance, where a rationale for the model's decision is a requirement for trust. In light of these issues, explainable artificial intelligence (XAI) has become an area of interest in the research community. This paper summarizes recent developments in XAI in supervised learning, starts a discussion on its connection with artificial general intelligence, and gives proposals for further research directions.
Article
Full-text available
Stated preference approaches, such as contingent valuation, focus mainly on the estimation of the mean or median willingness to pay (WTP) for an environmental good. Nevertheless, these two welfare measures may not be appropriate when there are social and political concerns associated with implementing a payment for environmental services (PES) scheme. In this paper the authors used a Bayesian estimation approach to estimate a quantile binary regression and the WTP distribution in the context of a contingent valuation PES application. Our results show that the use of other quantiles framed in the supermajority concept provides a reasonable interpretation of the technical nonmarket valuation studies in the PES area. We found that the values of the mean WTP are 10–37 times higher than the value that would support a supermajority of 70 per cent of the population.
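A minimal sketch of the kind of Bayesian dichotomous-choice WTP estimation discussed above, written with the PyMC library. It fits a simple logit response curve rather than the paper's quantile binary regression, and the data, variable names, and priors are illustrative assumptions; the supermajority bid is read off the posterior as the payment that 70 per cent of respondents would accept.

```python
# Hypothetical sketch: Bayesian dichotomous-choice WTP model in PyMC.
# Data, priors and variable names are illustrative, not the cited study's.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
bid = rng.choice([5.0, 10.0, 20.0, 40.0, 80.0], size=300)        # offered payment
p_true = 1 / (1 + np.exp(-(2.0 - 0.05 * bid)))                   # synthetic response curve
vote = (rng.random(300) < p_true).astype(int)                     # 1 = accepts the bid

with pm.Model() as wtp_model:
    alpha = pm.Normal("alpha", 0.0, 5.0)
    beta = pm.HalfNormal("beta", 1.0)                 # marginal disutility of the payment
    p_yes = pm.math.sigmoid(alpha - beta * bid)
    pm.Bernoulli("vote_obs", p=p_yes, observed=vote)
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

post = idata.posterior
# Median WTP and the bid that a 70% supermajority would accept.
median_wtp = (post["alpha"] / post["beta"]).mean().item()
wtp_70 = ((post["alpha"] - np.log(0.7 / 0.3)) / post["beta"]).mean().item()
print(f"median WTP ≈ {median_wtp:.2f}; 70%-supermajority bid ≈ {wtp_70:.2f}")
```

Because the welfare measures are simple functions of the coefficients, the gap between the median WTP and the supermajority bid can be inspected directly from the posterior draws.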
Article
Full-text available
In this paper, we present a new approach to valuing the willingness to pay to reduce road noise annoyance using an artificial neural network ensemble. The model predicts, with precision and accuracy, a range for willingness to pay from subjective assessments of noise, a modelled noise exposure level, and both demographic and socio-economic conditions. The results were compared with an ordered probit econometric model in terms of mean relative error, and the ensemble achieved 85.7% better accuracy. The results of this study show that the applied methodology allows the model to reach an adequate generalisation level, and that it can be applied as a tool for determining the cost of transportation noise in order to obtain financial resources for action plans.
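A small illustrative sketch of the ensemble idea described above, using scikit-learn multilayer perceptrons on synthetic data. The feature set, network sizes, and data are placeholder assumptions, not the study's survey variables: several networks are trained with different seeds and the spread of their predictions gives a range for WTP.

```python
# Minimal neural-network-ensemble sketch for WTP prediction (synthetic data).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 4))          # e.g. noise level, annoyance, income, age (placeholders)
y = 10 + 3 * X[:, 0] + 2 * X[:, 1] + rng.normal(scale=1.0, size=500)   # stated WTP

ensemble = [
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=s).fit(X, y)
    for s in range(10)
]

x_new = rng.normal(size=(1, 4))
preds = np.array([m.predict(x_new)[0] for m in ensemble])
print(f"WTP point estimate: {preds.mean():.2f}, "
      f"range across ensemble: [{preds.min():.2f}, {preds.max():.2f}]")
```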
Article
Full-text available
We introduce a new, efficient, principled and backpropagation-compatible algorithm for learning a probability distribution on the weights of a neural network, called Bayes by Backprop. It regularises the weights by minimising a compression cost, known as the variational free energy or the expected lower bound on the marginal likelihood. We show that this principled kind of regularisation yields comparable performance to dropout on MNIST classification. We then demonstrate how the learnt uncertainty in the weights can be used to improve generalisation in non-linear regression problems, and how this weight uncertainty can be used to drive the exploration-exploitation trade-off in reinforcement learning.
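A minimal single-layer sketch of the Bayes by Backprop idea in PyTorch, under simplifying assumptions (a diagonal Gaussian posterior, a standard normal prior, fixed observation noise, and a linear model rather than a full network): the variational free energy, i.e. the KL term plus the negative log-likelihood, is minimised by gradient descent using the reparameterisation trick.

```python
# Sketch of Bayes by Backprop for a single linear layer (illustrative only).
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.linspace(-2, 2, 100).unsqueeze(1)
y = 2.0 * x + 0.1 * torch.randn_like(x)              # noisy linear data

# Variational parameters of the diagonal-Gaussian posterior q(w), q(b).
w_mu = torch.zeros(1, 1, requires_grad=True)
w_rho = torch.full((1, 1), -3.0, requires_grad=True)
b_mu = torch.zeros(1, requires_grad=True)
b_rho = torch.full((1,), -3.0, requires_grad=True)
opt = torch.optim.Adam([w_mu, w_rho, b_mu, b_rho], lr=0.05)

def kl_to_std_normal(mu, sigma):
    # KL( N(mu, sigma^2) || N(0, 1) ), summed over parameters.
    return (0.5 * (sigma ** 2 + mu ** 2 - 1.0) - torch.log(sigma)).sum()

for step in range(500):
    w_sigma, b_sigma = F.softplus(w_rho), F.softplus(b_rho)
    # Reparameterisation trick: w = mu + sigma * eps, eps ~ N(0, 1).
    w = w_mu + w_sigma * torch.randn_like(w_sigma)
    b = b_mu + b_sigma * torch.randn_like(b_sigma)
    nll = 0.5 * ((x @ w + b - y) ** 2).sum() / 0.1 ** 2      # Gaussian likelihood, fixed noise
    loss = kl_to_std_normal(w_mu, w_sigma) + kl_to_std_normal(b_mu, b_sigma) + nll
    opt.zero_grad()
    loss.backward()
    opt.step()

print("posterior mean of w:", w_mu.item(), "posterior std:", F.softplus(w_rho).item())
```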
Article
Full-text available
Five ecosystem services that could be restored along a 45-mile section of the Platte River were described to respondents using a building block approach developed by an interdisciplinary team. These ecosystem services were dilution of wastewater, natural purification of water, erosion control, habitat for fish and wildlife, and recreation. Households were asked a dichotomous choice willingness to pay question regarding purchasing the increase in ecosystem services through a higher water bill. Results from nearly 100 in-person interviews indicate that households would pay an average of $21 per month or $252 annually for the additional ecosystem services. Generalizing this to the households living along the river yields a value of $19 million to $70 million depending on whether those refusing to be interviewed have a zero value or not. Even the lower bound benefit estimates exceed the high estimate of water leasing costs ($1.13 million) and conservation reserve program farmland easement costs ($12.3 million) necessary to produce the increase in ecosystem services.
Article
Full-text available
This note examines the effects of climate variability on natural-resources management in East Africa. The bimodal rainfall regime in much of East Africa brings rainy seasons from March to May and October to December with greater interannual variability from October to December. We discuss the impacts of rainfall extremes in 1961 and 1997 and explore three examples of natural-resources management in the context of rainfall variability: inland fisheries in East and southern Africa; fluctuations in the level of Lake Victoria; and lake-shore communities around Lake Kyoga in Uganda. The discussion reflects the complexity of linkages between climate, environment and society in the region and highlights implications for natural-resources management. These range from benefits due to improved seasonal rainfall forecasting to reduce the damage of extremes, to improved understanding of existing climate-society interactions to provide insights into the region's vulnerability and adaptive capacity in relation to future climate change.
Article
Deep learning has recently achieved great success in many visual recognition tasks. However, deep neural networks (DNNs) are often perceived as black boxes, making their decisions less understandable to humans and prohibiting their usage in safety-critical applications. This guest editorial introduces the thirty papers accepted for the Special Issue on Explainable Deep Learning for Efficient and Robust Pattern Recognition. They are grouped into three main categories: explainable deep learning methods, efficient deep learning via model compression and acceleration, and robustness and stability in deep learning. For each of the three topics, a survey of representative works and latest developments is presented, followed by a brief introduction of the accepted papers belonging to that topic. The special issue should be of high relevance to readers interested in explainable deep learning methods for efficient and robust pattern recognition applications, and it helps promote future research directions in this field.
Article
This review covers the core concepts and design decisions of TensorFlow. TensorFlow, originally created by researchers at Google, is the most popular of the many deep learning libraries. In the field of deep learning, neural networks have achieved tremendous success and gained wide popularity in various areas. This family of models also has tremendous potential to promote data analysis and modeling for various problems in the educational and behavioral sciences given its flexibility and scalability. We give the reader an overview of the basics of neural network models such as the multilayer perceptron, the convolutional neural network, and stochastic gradient descent, the most commonly used optimization method for neural network models. However, the implementation of these models and optimization algorithms is time-consuming and error-prone. Fortunately, TensorFlow greatly eases and accelerates the research and application of neural network models. We review several core concepts of TensorFlow such as graph construction functions, graph execution tools, and TensorFlow's visualization tool, TensorBoard. Then, we apply these concepts to build and train a convolutional neural network model to classify handwritten digits. The review concludes with a comparison of low- and high-level application programming interfaces and a discussion of graphical processing unit support, distributed training, and probabilistic modeling with the TensorFlow Probability library.
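A compact sketch of the workflow the review describes, building and training a small convolutional network on MNIST with TensorFlow's high-level Keras API; the architecture and hyperparameters below are illustrative choices, not the review's exact model.

```python
# Small CNN on MNIST with the Keras API (illustrative configuration).
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0      # add channel dimension, scale to [0, 1]
x_test = x_test[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",                          # stochastic gradient-based optimiser
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# Train on a subset to keep the example quick.
model.fit(x_train[:10000], y_train[:10000], epochs=1, batch_size=128,
          validation_data=(x_test, y_test))
```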
Article
The report on global land use and agriculture comes amid accelerating deforestation in the Amazon.
Book
A Primer on Nonmarket Valuation is unique in its clear descriptions of the most commonly used nonmarket valuation techniques and their implementation. Individuals working for government agencies, attorneys involved with natural resource damage assessments, graduate students, and others will appreciate the non-technical and practical tone of this book. The first section of the book provides the context and theoretical foundation of nonmarket valuation, along with practical data issues. The middle two sections of the Primer describe the major stated and revealed nonmarket valuation techniques. For each technique, the steps involved in implementation are laid out and described. Both practitioners of nonmarket valuation and those who are new to the field will come away from these methods chapters with a thorough understanding of how to design, implement, and analyze a nonmarket valuation study.
Book
This is a practical book with clear descriptions of the most commonly used nonmarket methods. The first chapters of the book provide the context and theoretical foundation of nonmarket valuation along with a discussion of data collection procedures. The middle chapters describe the major stated- and revealed-preference valuation methods. For each method, the steps involved in implementation are laid out and carefully explained with supporting references from the published literature. The final chapters of the book examine the relevance of experimentation to economic valuation, the transfer of existing nonmarket values to new settings, and assessments of the reliability and validity of nonmarket values. This book is relevant to individuals in many professions at all career levels. Professionals in government agencies, attorneys involved with natural resource damage assessments, graduate students, and others will appreciate the thorough descriptions of how to design, implement, and analyze a nonmarket valuation study.
Conference Paper
Recurrent neural networks (RNNs) stand at the forefront of many recent developments in deep learning. Yet a major difficulty with these models is their tendency to overfit, with dropout shown to fail when applied to recurrent layers. Recent results at the intersection of Bayesian modelling and deep learning offer a Bayesian interpretation of common deep learning techniques such as dropout. This grounding of dropout in approximate Bayesian inference suggests an extension of the theoretical results, offering insights into the use of dropout with RNN models. We apply this new variational inference based dropout technique in LSTM and GRU models, assessing it on language modelling and sentiment analysis tasks. The new approach outperforms existing techniques, and to the best of our knowledge improves on the single model state-of-the-art in language modelling with the Penn Treebank (73.4 test perplexity). This extends our arsenal of variational tools in deep learning.
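A hedged sketch of how this kind of dropout-based uncertainty can be used with a recurrent model in Keras: the dropout and recurrent_dropout arguments reuse a single mask across time steps (in the spirit of the variational interpretation), and calling the model with training=True keeps dropout active at prediction time so that repeated forward passes approximate a predictive distribution. The data and layer sizes are placeholders.

```python
# Monte Carlo dropout with an LSTM in Keras (illustrative data and sizes).
import numpy as np
import tensorflow as tf

x = np.random.randn(64, 20, 8).astype("float32")     # (batch, time, features)
y = np.random.randn(64, 1).astype("float32")

inputs = tf.keras.Input(shape=(20, 8))
h = tf.keras.layers.LSTM(32, dropout=0.2, recurrent_dropout=0.2)(inputs)
outputs = tf.keras.layers.Dense(1)(h)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=1, verbose=0)

# Monte Carlo predictions: dropout stays on, so each pass samples a different mask.
samples = np.stack([model(x[:4], training=True).numpy() for _ in range(50)])
print("predictive mean:", samples.mean(axis=0).ravel())
print("predictive std:", samples.std(axis=0).ravel())
```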
Article
The aim of this study was to estimate the value of noise pollution generated by transportation using a discrete choice survey. This paper reports the main findings of a contingent valuation of road traffic noise in Quito, Ecuador. To this end, a social survey was conducted in Quito to identify respondents' noise perception and their willingness to pay to reduce the annoyance caused by road traffic noise. The respondents' road noise exposure levels were obtained through an RSL-90 acoustic model. The econometric model correctly predicted 81.43% of the willingness-to-pay responses in the validation dataset. This study contributes toward assessing the environmental costs of transport in an Andean city within a policymaking context.
Article
In this paper we outline the 'choice experiment' approach to environmental valuation. This approach has its roots in Lancaster's characteristics theory of value, in random utility theory and in experimental design. We show how marginal values for the attributes of environmental assets, such as forests and rivers, can be estimated from pair-wise choices, as well as the value of the environmental asset as a whole. These choice pairs are designed so as to allow efficient statistical estimation of the underlying utility function, and to minimise required sample size. Choice experiments have important advantages over other environmental valuation methods, such as contingent valuation and travel cost-type models, although many design issues remain unresolved. Applications to environmental issues have so far been relatively limited. We illustrate the use of choice experiments with reference to a recent UK study on public preferences for alternative forest landscapes. This study allows us to perform a convergent validity test on the choice experiment estimates of willingness to pay.
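A standard identity underlying this estimation step, stated here from textbook treatments of linear-in-cost random utility models rather than quoted from the paper itself: with indirect utility

$$V_{ij} = \beta_c \, \mathrm{cost}_{ij} + \sum_k \beta_k \, x_{ijk} + \varepsilon_{ij},$$

the marginal willingness to pay for attribute $k$ is the ratio

$$\mathrm{MWTP}_k = -\frac{\beta_k}{\beta_c},$$

i.e. the money change that exactly offsets a one-unit change in $x_k$ while holding utility constant.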
Article
Since the work of Bishop and Heberlein, a number of contingent valuation experiments have appeared involving discrete responses which are analyzed by logit or similar techniques. This paper addresses the issues of how the logit models should be formulated to be consistent with the hypothesis of utility maximization and how measures of compensating and equivalent surplus should be derived from the fitted models. Two distinct types of welfare measures are introduced and then estimated from Bishop and Heberlein's data.
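For reference, the widely cited closed forms for the linear-in-bid logit specification (stated here from standard treatments of this model rather than quoted from the paper): if the probability of accepting a bid $A$ is

$$\Pr(\text{yes} \mid A) = \frac{1}{1 + e^{-(\alpha - \beta A)}},$$

then the median WTP is $\alpha/\beta$, and the mean WTP restricted to nonnegative values is

$$E[\mathrm{WTP} \mid \mathrm{WTP} \ge 0] = \frac{1}{\beta}\,\ln\!\left(1 + e^{\alpha}\right).$$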
Article
Variational methods have been previously explored as a tractable approximation to Bayesian inference for neural networks. However, the approaches proposed so far have only been applicable to a few simple network architectures. This paper introduces an easy-to-implement stochastic variational method (or equivalently, minimum description length loss function) that can be applied to most neural networks. Along the way it revisits several common regularisers from a variational perspective. It also provides a simple pruning heuristic that can both drastically reduce the number of network weights and lead to improved generalisation. Experimental results are provided for a hierarchical multidimensional recurrent neural network applied to the TIMIT speech corpus.
Book
Probability Theory and Classical Statistics.- Basics of Bayesian Statistics.- Modern Model Estimation Part 1: Gibbs Sampling.- Modern Model Estimation Part 2: Metropolis-Hastings Sampling.- Evaluating Markov Chain Monte Carlo Algorithms and Model Fit.- The Linear Regression Model.- Generalized Linear Models.- Introduction to Hierarchical Models.- Introduction to Multivariate Regression Models.- Conclusion.
Article
Two features distinguish the Bayesian approach to learning models from data. First, beliefs derived from background knowledge are used to select a prior probability distribution for the model parameters. Second, predictions of future observations are made by integrating the model's predictions with respect to the posterior parameter distribution obtained by updating this prior to take account of the data. For neural network models, both these aspects present difficulties: the prior over network parameters has no obvious relation to our prior knowledge, and integration over the posterior is computationally very demanding. I address the problem by defining classes of prior distributions for network parameters that reach sensible limits as the size of the network goes to infinity. In this limit, the properties of these priors can be elucidated. Some priors converge to Gaussian processes, in which functions computed by the network may be smooth, Brownian, or fractionally Brownian. Other priors converge to non-Gaussian stable processes. Interesting effects are obtained by combining priors of both sorts in networks with more than one hidden layer.
Article
Bayesian methods are increasingly being used in the social sciences, as the problems encountered lend themselves so naturally to the subjective qualities of Bayesian methodology. This book provides an accessible introduction to Bayesian methods, tailored specifically for social science students. It contains lots of real examples from political science, psychology, sociology, and economics, exercises in all chapters, and detailed descriptions of all the key concepts, without assuming any background in statistics beyond a first course. It features examples of how to implement the methods using WinBUGS - the most-widely used Bayesian analysis software in the world - and R - an open-source statistical software. The book is supported by a Website featuring WinBUGS and R code, and data sets.
Article
It is difficult to quantify and value many of the benefits of education. This paper illustrates the use of contingent valuation to obtain more complete estimates of the economic value of difficult-to-measure benefits of preschool education for handicapped children and presents a general approach for the use of contingent valuation in cost-benefit analysis of educational programs. Data for the illustration were obtained by surveying parents of children with handicapping conditions enrolled in preschool special education programs in Iowa. The survey was conducted jointly by the Department of Economics and the Early Intervention Research Institute at Utah State University. Results indicated that the contingent valuation method produces plausible results which are consistent with basic predictions of economic theory. Implications for policymaking and directions for further research are discussed.
Article
After briefly reviewing some aspects of the history of Bayesian Analysis in Econometrics, five basic propositions regarding econometrics are put forward and discussed. Challenges relating to these propositions are issued. Then Bayesian estimation, prediction, control and decision procedures are discussed and a number of canonical econometric problems are described and analyzed to illustrate the power of the Bayesian approach in econometrics and other areas of science.
Article
A combination of a socio-acoustic survey on self-reported noise annoyance and a contingent valuation questionnaire is used to estimate willingness to pay for noise reduction for urban residents living in Copenhagen. It is found that the annoyance level has a significant effect on the stated WTP. Expected WTP per dB reduction is subsequently calculated by combining WTP for each annoyance level with the estimated dose-response function for the relationship between noise exposure and annoyance. It is found that the expected WTP for a one dB noise reduction is increasing with the noise level from e.g. 2 EUR at 55 dB to 10 EUR at 75 dB.
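An illustrative sketch of the combination step described above: the expected WTP for a one-dB reduction at a given exposure level is taken here as the stated per-dB WTP at each annoyance level, weighted by the dose-response probability of being at that level. Both the WTP figures and the dose-response curve below are made-up placeholders, not the study's estimates.

```python
# Expected WTP per dB from annoyance-level WTP and a dose-response function
# (all numbers are placeholders for illustration).
import numpy as np

wtp_per_db = np.array([0.0, 1.0, 4.0, 12.0])        # EUR per dB, by annoyance level
def annoyance_probs(db):                              # hypothetical dose-response function
    logits = np.array([0.0, 0.08, 0.12, 0.16]) * (db - 50)
    expl = np.exp(logits)
    return expl / expl.sum()

for db in (55, 65, 75):
    expected_wtp = wtp_per_db @ annoyance_probs(db)
    print(f"{db} dB: expected WTP per dB reduction ≈ {expected_wtp:.2f} EUR")
```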
Article
The research agendas of psychologists and economists now have several overlaps, with behavioural economics providing theoretical and experimental study of the relationship between behaviour and choice, and hedonic psychology discussing appropriate measures of outcomes of choice in terms of overall utility or life satisfaction. Here we model the relationship between values (understood as principles guiding behaviour), choices and their final outcomes in terms of life satisfaction, and use data from the BHPS to assess whether our ideas on what is important in life (individual values) are broadly connected to what we experience as important in our lives (life satisfaction).
Article
The contingent valuation method (CVM) is a survey-based, hypothetical and direct method for determining monetary valuations of the effects of health technologies. This comprehensive review of CVM in the health care literature points to methodological as well as conceptual issues of CVM, and considers willingness to pay as a measure of benefits compared with other measures used in medical technology assessment. Studies published before 1998 were found by searching computerised databases and earlier review literature. Studies were included when they performed CVM using original data and met qualitative criteria. Theoretical validity of CVM was sufficiently shown and there were several indications of convergent validity. No results on criterion validity and only a few on reliability were found. There was widespread use of different elicitation formats, which makes comparisons of studies problematic. Direct questions were seen as problematic. First bids used in bidding games influenced the monetary valuation significantly (starting-point bias). There were indications that the range of bids on payment cards also affected the valuation (range bias). However, no strategic bias was found. The influence of different states of valuation (ex ante, ex post) and of payment methods, as well as the possible aggregation of the results of decomposed scenarios rather than more complex holistic scenarios, were rarely investigated. Further methodological analysis and testing seem necessary before CVM can be used in health care decision making. Important research topics are the assessment of different elicitation methods, criterion validity, and tests of reliability with respect to methodological issues. Concerning conceptual issues, the influence of different states of evaluation and of the status of respondents as diseased or non-diseased, as well as the aggregation of results of decomposed scenarios, proved to be topics for further research.
  • Xilei Zhao, Xiang Yan, Alan Yu, and Pascal Van Hentenryck. Modeling stated preference for mobility-on-demand transit: A comparison of machine learning and logit models.
  • Nikhil Ketkar and Jojo Moolayil. 2021. Introduction to PyTorch.
  • Kenneth J. Arrow. Valuing environmental preferences: Theory and practice of the contingent valuation method in the US, EU, and developing countries.
  • Ankit Patel, Minh Nguyen, and Richard Baraniuk. 2016. A probabilistic framework for deep learning.
  • Lingappan Venkatachalam. 2004. The contingent valuation method: A review.
  • John Salvatier, Thomas V. Wiecki, and Christopher Fonnesbeck.
  • Esther W. de Bekker-Grob, Mandy Ryan, and Karen Gerard. Discrete choice experiments in health economics: A review of the literature.
  • Roberto Roson, Alvaro Calzadilla, and Francesco Pauli. 2006. Climate change and extreme events: An assessment of economic implications.
  • Kenneth Arrow, Robert Solow, and Paul R. Portney. Report of the NOAA panel on contingent valuation.
  • Dale Whittington and Stefano Pagiola. Using contingent valuation in the design of payments for environmental services mechanisms: A review and assessment.