David J. C. Mackay's research while affiliated with University of Cambridge and other places

Publications (111)

Article
Ticker is a probabilistic stereophonic single-switch text entry method for visually-impaired users with motor disabilities who rely on single-switch scanning systems to communicate. Such scanning systems are sensitive to a variety of noise sources, which are inevitably introduced in practical use of single-switch systems. Ticker uses a novel intera...
Article
Single-switch scanning systems allow nonspeaking individuals with motor disabilities to communicate by triggering a single switch (e.g., raising an eyebrow). A problem with current single-switch scanning systems is that while they result in reasonable performance in noiseless conditions, for instance via simulation or tests with able-bodied users,...
Article
Full-text available
Exposure to ionizing radiation is ubiquitous, and it is well established that moderate and high doses cause ill-health and can be lethal. The health effects of low doses or low dose-rates of ionizing radiation are not so clear. This paper describes a project which sets out to summarize, as a restatement, the natural science evidence base concerning...
Article
To forge a strong climate accord in Paris, nations must agree on a common goal in everyone's self-interest, say David J. C. MacKay and colleagues.
Conference Paper
Speech Dasher is a novel text entry interface in which users first speak their desired text and then use the zooming interface Dasher to confirm and correct the recognition result. After several hours of practice, users wrote using Speech Dasher at 40 (corrected) words per minute. They did this using only speech and the direction of their gaze (obt...
Article
While the main thrust of the Discussion Meeting Issue on 'Material efficiency: providing material services with less material production' was to explore ways in which society's net demand for materials could be reduced, this review examines the possibility of converting industrial energy demand to electricity, and switching to clean electricity sou...
Article
Markov Chain Monte Carlo (MCMC) algorithms are routinely used to draw samples from distributions with intractable normalization constants. However, standard MCMC algorithms do not apply to doubly-intractable distributions in which there are additional parameter-dependent normalization terms; for example, the posterior over parameters of an undirect...
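A minimal sketch of one algorithm in this family, the exchange algorithm of Murray, Ghahramani and MacKay (2006): the parameter-dependent normalizers cancel once an auxiliary variable is drawn exactly from the model at the proposed parameter. Here `exact_sample`, the flat prior and the Gaussian random-walk proposal are assumptions of the sketch, not details taken from the paper.

```python
import numpy as np

def exchange_mcmc(y, log_f, exact_sample, theta0, prop_std, n_iter, rng):
    """Exchange algorithm for p(theta | y) with unnormalized likelihood
    exp(log_f(y, theta)) / Z(theta) and intractable Z(theta).
    exact_sample(theta) must return an exact draw from the model at theta."""
    theta, samples = theta0, []
    for _ in range(n_iter):
        theta_new = theta + prop_std * rng.standard_normal()
        w = exact_sample(theta_new)            # auxiliary exact draw
        # Z(theta) and Z(theta_new) cancel in this acceptance ratio:
        log_a = (log_f(y, theta_new) - log_f(y, theta)
                 + log_f(w, theta) - log_f(w, theta_new))
        if np.log(rng.random()) < log_a:
            theta = theta_new
        samples.append(theta)
    return np.array(samples)
```

The swap structure (score y at the new parameter, score the auxiliary draw w at the old one) is what makes both intractable normalizers drop out of the ratio.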
Conference Paper
Speech Dasher allows writing using a combination of speech and a zooming interface. Users first speak what they want to write and then they navigate through the space of recognition hypotheses to correct any errors. Speech Dasher's model combines information from a speech recognizer, from the user, and from a letter-based language model. This allow...
Article
The principle that the net energy delivered by a tidal pool can be increased by pumping extra water into the pool at high tide or by pumping extra water out of the pool at low tide is well known in the industry. On paper, pumping can potentially enhance the net power delivered by a factor of about four. However, pumping seems generally to be viewed...
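The factor-of-about-four figure can be reproduced with a toy energy balance (my own sketch, not the paper's model): pumping the pool an extra head b above high water costs roughly ρgAb²/(2ε) through pumps of efficiency ε, while draining the enlarged head R + b through turbines of efficiency η returns ηρgA(R + b)²/2. The optimal boost over no pumping then works out to 1/(1 − εη).

```python
import numpy as np

rho, g = 1000.0, 9.81      # water density (kg/m^3) and gravity (m/s^2)
A, R = 1.0, 4.0            # pool area (per m^2) and tidal range (m), assumed
eps = eta = 0.9            # pump and turbine efficiencies, assumed

def net_energy(b):
    """Net energy per tide (J) when pumping an extra head b (m) at high water."""
    pump_cost = rho * g * A * b**2 / (2 * eps)
    generated = eta * rho * g * A * (R + b)**2 / 2
    return generated - pump_cost

b = np.linspace(0, 40, 4001)
best = b[np.argmax(net_energy(b))]
print(f"optimal extra head ~ {best:.1f} m")
print(f"boost factor ~ {net_energy(best) / net_energy(0.0):.2f}")
# eps = eta = 0.9 gives a boost of ~5.3; a round-trip efficiency nearer
# 0.75 gives the 'factor of about four' quoted in the abstract.
```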
Article
The arithmetic-coding-based communication system, Dasher, can be driven by a single switch. In MacKay et al. (2004), we proposed two versions of one-button Dasher, 'static' and 'dynamic', aimed at a theoretical model of a user who can click with timing precision g, and who requires a recovery time D between clicks. While developing and testing th...
Article
DASHER is a human-computer interface for entering text using continuous or discrete gestures. Through its use of an internal language model, DASHER efficiently converts bits received from the user into text, and has been shown to be a competitive alternative to existing text-entry methods in situations where an ordinary keyboard cannot be used. We...
Article
We propose that a significant contribution to the power stroke of myosin and similar conformation changes in other biomolecules is the pressure of a single molecule (e.g. a phosphate ion) expanding a trap, a mechanism we call “ergodic pumping”. We demonstrate the principle with a toy computer model and discuss the mathematics governing the evolutio...
Article
Fountain codes are record-breaking sparse-graph codes for channels with erasures, such as the internet, where files are transmitted in multiple small packets, each of which is either received without error or not received. Standard file transfer protocols simply chop a file up into K packet-sized pieces, then repeatedly transmit each packet until i...
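The encoding idea fits in a few lines: each transmitted packet is the XOR of a random subset of the K source blocks, and a peeling decoder recovers the file from any sufficiently large collection of packets. This sketch uses a made-up degree distribution in place of the robust soliton distribution of practical fountain codes.

```python
import random

def lt_encode(blocks, rng):
    """Yield an endless stream of (source_indices, xor_value) packets."""
    k = len(blocks)
    while True:
        d = min(rng.choice([1, 2, 2, 3, 4]), k)   # toy degree distribution
        idx = set(rng.sample(range(k), d))
        val = 0
        for i in idx:
            val ^= blocks[i]
        yield idx, val

def lt_decode(k, packets):
    """Peeling decoder: substitute known blocks, resolve degree-1 packets."""
    decoded = [None] * k
    changed = True
    while changed and any(b is None for b in decoded):
        changed = False
        for idx, val in packets:
            for i in (j for j in idx if decoded[j] is not None):
                val ^= decoded[i]
            rest = {j for j in idx if decoded[j] is None}
            if len(rest) == 1:
                decoded[rest.pop()] = val
                changed = True
    return decoded

rng = random.Random(0)
blocks = [rng.randrange(256) for _ in range(8)]       # 8 one-byte blocks
stream = lt_encode(blocks, rng)
received = [next(stream) for _ in range(30)]          # any ~30 packets will do
print(lt_decode(8, received) == blocks)               # True with high probability
```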
Article
We discuss how the arithmetic-coding-based communication system, Dasher, could be driven by discrete button presses. We describe several prototypes and predict their information rates.
Article
Sparse-graph codes appropriate for use in quantum error-correction are presented. Quantum error-correcting codes based on sparse graphs are of interest for three reasons. First, the best codes currently known for classical channels are based on sparse graphs. Second, sparse-graph codes keep the number of quantum interactions associated with the qua...
Conference Paper
The arithmetic-coding-based communication system, Dasher, can be driven by a one-dimensional continuous signal. A belt-mounted breath-mouse, delivering a signal related to lung volume, enables a user to communicate by breath alone. With practice, an expert user can write English at 15 words per minute. Dasher is a communication system based on a be...
Conference Paper
Go is an ancient oriental game whose complexity has defeated attempts to automate it. We suggest using probability in a Bayesian sense to model the uncertainty arising from the vast complexity of the game tree. We present a simple conditional Markov random field model for predicting the pointwise territory outcome of a game. The topology of the m...
Conference Paper
What if eye trackers could be downloaded and used immediately with standard cameras connected to a computer, without the need for an expert to set up the system? This is already the case for head trackers, so why not for eye trackers? Using components off-the-shelf (COTS) for camera-based eye tracking tasks has many advantages, but it certainly intr...
Article
This note explains the method used by Davey and MacKay to set the non-zero entries in low-density parity-check codes over GF(q), and gives explicit prescriptions.
Article
If a finite number of rectangles, every one of which has at least one integer side, perfectly tile a big rectangle, then the big rectangle also has at least one integer side.
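The classical proof integrates f(x, y) = e^{2πi(x+y)} over the tiling: the integral over [a,b]×[c,d] factors into two one-dimensional integrals, each of which vanishes exactly when the corresponding side length is an integer, so every tile contributes zero and the big rectangle's integral must also vanish. A numeric illustration on a made-up tiling:

```python
import cmath

def rect_integral(a, b, c, d):
    """Integral of exp(2*pi*i*(x+y)) over [a,b] x [c,d]. Each 1-D factor
    is zero exactly when that side length is an integer."""
    f = lambda u, v: (cmath.exp(2j * cmath.pi * v)
                      - cmath.exp(2j * cmath.pi * u)) / (2j * cmath.pi)
    return f(a, b) * f(c, d)

# A tiling of the 3 x 2 rectangle; every tile has at least one integer side.
tiles = [(0, 1, 0, 2), (1, 3, 0, 0.5), (1, 3, 0.5, 1.5), (1, 3, 1.5, 2)]
total = sum(rect_integral(*t) for t in tiles)
print(abs(total))   # ~0, as it must be when the big rectangle has an integer side
```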
Article
In this paper, we develop a method for closely estimating noise threshold values for ensembles of binary linear codes on the binary symmetric channel. Our method, based on the "typical pairs" decoding algorithm pioneered by Shannon, completely decouples the channel from the code ensemble. In this, it resembles the classical union bound, but unlike...
Article
We present sparse graph codes appropriate for use in quantum error-correction. Quantum error-correcting codes based on sparse graphs are of interest for three reasons. First, the best codes currently known for classical channels are based on sparse graphs. Second, sparse graph codes keep the number of quantum interactions associated with the quantu...
Article
Models have been developed and used as tools to design a new 'made to measure' nickel base superalloy for power plant applications. In Part 1, Gaussian processes are used to model the tensile and creep rupture properties of superalloys as a function of their composition and processing parameters, making use of large databases on existing alloys. Th...
Book
Best known in our circles for his key role in the renaissance of low-density parity-check (LDPC) codes, David MacKay has written an ambitious and original textbook. Almost every area within the purview of these TRANSACTIONS can be found in this book: data compression algorithms, error-correcting codes, Shannon theory, statistical inference, co...
Article
Full-text available
The nonnegative Boltzmann machine (NNBM) is a recurrent neural network model that can describe multimodal nonnegative data. Application of maximum likelihood estimation to this model gives a learning rule that is analogous to the binary Boltzmann machine. We examine the utility of the mean field approximation for the NNBM, and describe how Monte Ca...
Article
...substitutions. This possibility motivates the use of runlength-limiting codes. 3. If many transitions occur close together (for example, 0101010), then substitution errors are more likely to occur. Pairs of transitions can be lost, so that 0101 is received as 0111 or 0011. For this reason, additional constraints may be included, forbidding the transmi...
Article
In 1948, Claude Shannon posed and solved one of the fundamental problems of information theory. The question was whether it is possible to communicate reliably over noisy channels, and, if so, at what rate. He defined a theoretical limit, now known as the Shannon limit, up to which communication is possible, and beyond which communication is not po...
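For the binary symmetric channel the limit is explicit: C = 1 − H₂(p) bits per channel use, where H₂ is the binary entropy function. A quick computation:

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

for p in (0.01, 0.1, 0.5):
    print(f"flip prob {p}: capacity {1 - h2(p):.3f} bits/use")
# p = 0.1 gives ~0.531 bits/use: the limit that the sparse-graph codes
# elsewhere in this list approach.
```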
Article
Process modelling represents an important area of research and application in the aluminium industry. Many physically-based models have been proposed and validated. However, advanced statistical methods based on neural networks represent an alternative when more conventional models are non-existent or unsatisfactory. Two of these methods, Gaussian Proc...
Article
The creep rupture life and rupture strength of austenitic stainless steels have been expressed as functions of chemical composition, test conditions, stabilisation ratio, and solution treatment temperature. The method involved a neural network analysis of a vast and general database assembled from published data. The outputs of the model have been...
Conference Paper
Standard runlength-limiting codes — nonlinear codes defined by trellises — have the disadvantage that they disconnect the outer error-correcting code from the bit-by-bit likelihoods that come out of the channel. I present two methods for creating transmissions that, with probability extremely close to 1, both are runlength-limited and are codewords...
Article
Infrared magnitude–redshift relations for the 3CR and 6C samples of radio galaxies are presented for a wide range of plausible cosmological models, including those with non-zero cosmological constant ΩΛ. Variations in the galaxy formation redshift, metallicity and star formation history are also considered. The results of the modelling a...
Article
...an observable space x to an energy function E(x; w), parameterized by parameters w. The energy defines a probability P(x|w) = exp(−E(x; w))/Z(w).
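For a small discrete model, Z(w) can be evaluated by brute-force enumeration, which makes the definition concrete; the pairwise quadratic energy below is my illustration, not the paper's model.

```python
import itertools
import numpy as np

def energy(x, W):
    """A hypothetical pairwise energy E(x; W) = -x^T W x on binary x."""
    return -x @ W @ x

W = np.array([[0.0, 0.5],
              [0.5, 0.0]])
states = [np.array(s) for s in itertools.product([0, 1], repeat=2)]
Z = sum(np.exp(-energy(s, W)) for s in states)       # the normalizer Z(W)
for s in states:
    print(s, np.exp(-energy(s, W)) / Z)              # P(x|W); sums to 1
```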
Article
Many of the properties of austempered ductile iron depend on the austenite which is retained following the bainite reaction. A neural network model within a Bayesian framework has been created using published data to model the retained austenite content. The model allows the quantity of retained austenite to be estimated as a function of the chemic...
Article
When recording intracellularly, the resistance across the cell membrane must be monitored. However the resistance seen by the recording amplifier consists of the access resistance (between the electrode and the cell) in series with the cell resistance. When the stray capacitance is low, the time constant of the cell and that associated with the rec...
Article
Prospective authors are requested to submit new, unpublished manuscripts for inclusion in the upcoming event described in this call for papers.
Conference Paper
In this paper, we develop a method for closely estimating noise threshold values for ensembles of binary linear codes on the binary symmetric channel. Our method, based on the “typical pairs” decoding algorithm pioneered by Shannon, completely decouples the channel from the code ensemble. In this, it resembles the classical union bound, but unlike...
Article
Approximate inference by variational free energy minimization (also known as variational Bayes, or ensemble learning) has maximum likelihood and maximum a posteriori methods as special cases, so we might hope that it can only work better than these standard methods. However, cases have been found in which degrees of freedom are 'pruned', perhaps...
Article
Gaussian processes are a promising nonlinear regression tool, but it is not straightforward to solve classification problems with them. In this paper, the variational methods of Jaakkola and Jordan (2000) are applied to Gaussian processes to produce an efficient Bayesian binary classifier.
Conference Paper
Full-text available
Estimation of mechanical properties of C-Mn weld metals. Neural network prediction.
Article
Full-text available
Published experimental data on the tendency for 2.25Cr–1Mo to undergo impurity induced temper embrittlement have been analysed quantitatively. The results indicate strong effects owing to phosphorus, silicon, manganese, and molybdenum concentrations, but the influence of tin, antimony, and arsenic cannot be perceived because of the overwhelming eff...
Article
Propp and Wilson's method of coupling from the past allows one to efficiently generate exact samples from attractive statistical distributions (e.g., the ferromagnetic Ising model). This method may be generalized to non-attractive distributions by the use of summary states, as first described by Huber. Using this method, we present exact samples fr...
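Coupling from the past is easiest to see on a small monotone chain. In this sketch (my toy example, far simpler than the models in the paper), a reflecting random walk on {0, ..., K} is run from the past with a top chain and a bottom chain sharing the same randomness; the common value they coalesce to is an exact stationary sample.

```python
import random

K = 10

def step(x, u):
    """Monotone update: every copy of the chain uses the same uniform u."""
    x = x + 1 if u < 0.5 else x - 1
    return min(max(x, 0), K)

def cftp(rng):
    us = []                 # us[i] drives the step at time -(len(us) - i)
    T = 1
    while True:
        # extend further into the past, REUSING randomness for recent times
        us = [rng.random() for _ in range(T - len(us))] + us
        lo, hi = 0, K
        for u in us:
            lo, hi = step(lo, u), step(hi, u)
        if lo == hi:
            return lo       # exact draw from the stationary distribution
        T *= 2

rng = random.Random(0)
print([cftp(rng) for _ in range(10)])   # ~uniform on {0, ..., K} for this walk
```

Reusing the old randomness when the start time is pushed back is the detail that makes the output exact rather than merely approximate.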
Article
Previous work presented models which can be used to estimate the yield and ultimate tensile strengths of ferritic steel welds. The present paper deals with properties that are much more difficult to predict: the elongation and Charpy impact toughness. While the models are found to be useful and emulate expectations from current physical metallurgy...
Article
The yield strength and ultimate tensile strength of ferritic steel weld metal have been expressed as functions of chemical composition, the heat input during welding, and the heat treatment given after welding is completed. The method involved a neural network analysis of a vast and fairly general database assembled from publications on weld metal...
Article
Recent changes in the design of steam turbine power plant have necessitated the replacement of bolted flanges with welded joints. The design process therefore requires a knowledge of the creep rupture strength of the weld metal consumed in the welding process. This paper presents a method which can be used to estimate the creep rupture strength of...
Article
A Gaussian processes computer program has been used to model the mechanical properties of polycrystalline nickel-base superalloys as a function of their chemical composition and heat treatment. The models are able to reproduce well-known metallurgical trends, and to estimate the behavior of new alloys. On this basis, several new creep-resistant all...
Article
This paper reviews the Bayesian approach to learning in neural networks, then introduces a new adaptive model, the density network. This is a neural network for which target outputs are provided, but the inputs are unspecified. When a probability distribution is placed on the unknown inputs, a latent variable model is defined that is capable of discove...
Article
The creep rupture life of nickel-base superalloys has been predicted using a neural network model within a Bayesian framework. The rupture life was modelled as a function of some 42 variables, including temperature, chemical composition: Cr, Co, C, Si, Mn, P, S, Mo, Cu, Ti, Al, B, N, Nb, Ta, Zr, Fe, W, V, Hf, Re, Mg, ThO2, La, four steps of heat tr...
Article
I examine two approximate methods for computational implementation of Bayesian hierarchical models, that is, models that include unknown hyperparameters such as regularization constants and noise levels. In the evidence framework, the model parameters are integrated over, and the resulting evidence is maximized over the hyperparameters. The optimiz...
Article
Full-text available
A neural network model has been developed on the basis of published experimental data. This allows the creep rupture strength of bainitic and martensitic electric power plant steels with compositions based on Fe–2·25Cr–1Mo and Fe–(9–12)Cr to be estimated as a function of chemical composition, heat treatment and time at temperature. This model, toge...
Article
At what rate, in bits per generation, can the blind watchmaker cram information into a species by natural selection? And what is the maximum mutation rate that a species can withstand? We study a simple model of a reproducing population of N individuals with a genome of size G bits: variation is produced by mutation or by recombination and natural...
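The model invites direct simulation. In this toy version (parameter choices mine), fitness is simply the number of 1-bits, the fitter half of the population reproduces, and runs with and without recombination can be compared.

```python
import numpy as np

rng = np.random.default_rng(0)
N, G, T = 100, 256, 60          # population, genome bits, generations

def evolve(recombine, mut_rate):
    pop = rng.integers(0, 2, (N, G))
    for _ in range(T):
        fitness = pop.sum(axis=1)                       # count of 1-bits
        parents = pop[np.argsort(fitness)[N // 2:]]     # keep the fitter half
        if recombine:
            a = parents[rng.integers(0, N // 2, N)]
            b = parents[rng.integers(0, N // 2, N)]
            mask = rng.integers(0, 2, (N, G)).astype(bool)
            pop = np.where(mask, a, b)                  # uniform crossover
        else:
            pop = parents[rng.integers(0, N // 2, N)].copy()
        flips = rng.random((N, G)) < mut_rate
        pop = np.where(flips, 1 - pop, pop)
    return pop.sum(axis=1).mean()

print("with recombination:", evolve(True, 1 / G))
print("mutation only:     ", evolve(False, 1 / G))
# recombination accumulates fitness (information) much faster, in line
# with the paper's argument for the advantage of sex.
```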
Article
Bell and Sejnowski (1995) have derived a blind signal processing algorithm for a non-linear feedforward network from an information maximization viewpoint. This paper first shows that the same algorithm can be viewed as a maximum likelihood algorithm for the optimization of a linear generative model. Second, a covariant version of the algorithm is...
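The covariant (natural-gradient) form of the maximum-likelihood update is ΔW ∝ (I − E[tanh(u)uᵀ])W when the assumed source prior is p(s) ∝ 1/cosh(s), appropriate for super-Gaussian sources. A small unmixing demo under those assumptions; the mixing matrix, learning rate and iteration count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
S = rng.laplace(size=(2, n))             # two super-Gaussian sources
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])               # unknown mixing matrix
X = A @ S                                # observed mixtures

W, lr = np.eye(2), 0.02
for _ in range(500):
    U = W @ X
    # covariant ML update: gradient post-multiplied by W (no matrix inverse)
    W += lr * (np.eye(2) - np.tanh(U) @ U.T / n) @ W

print(np.round(W @ A, 2))   # ~ scaled permutation matrix if unmixing worked
```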
Article
We study two families of error-correcting codes defined in terms of very sparse matrices. “MN” (MacKay-Neal (1995)) codes were recently invented, and “Gallager codes” were first investigated in 1962, but appear to have been largely forgotten, in spite of their excellent properties. The decoding of both codes can be tackled with a practical sum-produ...
Article
The present paper introduces the Gaussian process model for the empirical modelling of the formation of austenite during the continuous heating of steels. A previous paper has examined the application of neural networks to this problem, but the Gaussian process model is a more general probabilistic model which avoids some of the arbitrariness of ne...
Article
A traditional interpolation model is characterized by the choice of regularizer applied to the interpolant, and the choice of noise model. Typically, the regularizer has a single regularization constant α, and the noise model has a single parameter β. The ratio α/β alone is responsible for determining globally all these attributes of the interp...
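In the familiar linear-in-the-parameters special case, the MAP interpolant is w = (αI + βΦᵀΦ)⁻¹βΦᵀy, so multiplying α and β by a common factor changes nothing; a quick numerical check (the polynomial basis and the constants are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(20)
Phi = np.vander(x, 6)                     # a polynomial basis, for illustration

def interpolant(alpha, beta):
    A = alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi
    return np.linalg.solve(A, beta * Phi.T @ y)

w1 = interpolant(0.1, 100.0)
w2 = interpolant(1.0, 1000.0)             # same ratio alpha / beta
print(np.allclose(w1, w2))                # True: only the ratio matters
```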
Article
The hot torsion stress-strain curves of steels have been modelled using a neural network, within a Bayesian framework. The analysis is based on an extensive database consisting of detailed chemical composition, temperature and strain rate from new hot torsion experiments. Non-linear functions are obtained, describing the variation of stress-strain...
Article
The abilities of artificial neural networks and Gaussian processes to model the yield strength of nickel-base superalloys as a function of composition and temperature have been compared on the basis of simple well-known metallurgical trends (influence of Ti, Al, Co, Mo, W, Ta, of the Ti/Al ratio, γ′ volume fraction and the testing temperature)....
Article
Full-text available
We introduce a recurrent network architecture for modelling a general class of dynamical systems. The network is intended for modelling real-world processes in which empirical measurements of the external and state variables are obtained at discrete time points. The model can learn from multiple temporal patterns, which may evolve on different time...
Article
The process of rolling is very complicated and the number of parameters which determines the final properties can be quite large. It is extremely difficult therefore to develop a physical model for predicting various properties like yield and tensile strengths. In the present work, a neural network technique which can recognise complex relationship...
Article
Full-text available
A neural network technique trained within a Bayesian framework has been applied to the analysis of the yield strength, ultimate tensile strength, and percentage elongation of mechanically alloyed oxide dispersion strengthened ferritic steels. The database used was compiled using information from the published literature, consisting of variables kno...
Conference Paper
Binary low density parity check (LDPC) codes have been shown to have near Shannon limit performance when decoded using a probabilistic decoding algorithm. The analogous codes defined over finite fields GF(q) of order q>2 show significantly improved performance. We present the results of Monte Carlo simulations of the decoding of infinite LDPC codes...
Conference Paper
This paper describes a sequence of Monte Carlo methods: importance sampling, rejection sampling, the Metropolis method, and Gibbs sampling. For each method, we discuss whether the method is expected to be useful for high-dimensional problems such as arise in inference with graphical models. After the methods have been described, the terminology of...
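Of the four, the Metropolis method is the quickest to show in code: propose a random-walk move and accept with probability min(1, p(x')/p(x)), so only an unnormalized density is needed. A minimal one-dimensional sketch (target and step size are arbitrary):

```python
import math
import random

def metropolis(log_p, x0, step, n, rng):
    """Random-walk Metropolis targeting the density proportional to exp(log_p)."""
    x, samples = x0, []
    for _ in range(n):
        x_new = x + step * rng.gauss(0.0, 1.0)
        if rng.random() < math.exp(min(0.0, log_p(x_new) - log_p(x))):
            x = x_new                      # accept; otherwise keep current x
        samples.append(x)
    return samples

# unnormalized bimodal target
log_p = lambda x: math.log(math.exp(-(x - 2) ** 2) + math.exp(-(x + 2) ** 2))
chain = metropolis(log_p, 0.0, 1.0, 10_000, random.Random(0))
print(sum(chain) / len(chain))             # ~0 by the symmetry of the target
```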
Article
We describe the close connection between the now celebrated iterative turbo decoding algorithm of Berrou et al. (1993) and an algorithm that has been well known in the artificial intelligence community for a decade, but which is relatively unknown to information theorists: Pearl's (1982) belief propagation algorithm. We see that if Pearl's algorith...
Article
This paper will discuss how a Gaussian process, which describes a probability distribution over an infinite dimensional vector space, can be implemented with finite computational resources. It will discuss how the hyperparameters controlling a Gaussian process can be adapted to data. We will then study a variety of different ways in which Gaussian...
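With finite data, the infinite-dimensional prior collapses to a few matrix identities: for training inputs X with kernel matrix K and test inputs X*, the posterior mean is K*K⁻¹y and the posterior covariance is K** − K*K⁻¹K*ᵀ. A sketch with a squared-exponential kernel and hand-picked hyperparameters (adapting them to data, as the paper discusses, is omitted):

```python
import numpy as np

def rbf(a, b, ell=0.3, sf=1.0):
    """Squared-exponential covariance; ell, sf are assumed hyperparameters."""
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 15)
y = np.sin(2 * np.pi * X) + 0.1 * rng.standard_normal(15)
Xs = np.linspace(0, 1, 5)

K = rbf(X, X) + 0.1**2 * np.eye(len(X))            # add the noise variance
Ks = rbf(Xs, X)
mean = Ks @ np.linalg.solve(K, y)                  # posterior mean at Xs
cov = rbf(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T)  # posterior covariance
print(np.round(mean, 2))
print(np.round(np.sqrt(np.diag(cov)), 2))          # pointwise error bars
```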
Article
Maximum a posteriori optimization of parameters and the Laplace approximation for the marginal likelihood are both basis-dependent methods. This note compares two choices of basis for models parameterized by probabilities, showing that it is possible to improve on the traditional choice, the probability simplex, by transforming to the 'softmax' b...
Article
The lattice constants of the γ and γ′ phases of nickel base superalloys have been modelled using a neural network within a Bayesian framework. The analysis is based on datasets compiled from new experiments and the published literature, the parameters being expressed as a non-linear function of some eighteen variables which include the chemical com...
Article
Full-text available
Models of unsupervised, correlation-based (Hebbian) synaptic plasticity are typically unstable: either all synapses grow until each reaches the maximum allowed strength, or all synapses decay to zero strength. A common method of avoiding these outcomes is to use a constraint that conserves or limits the total synaptic strength over a cell. We study...
Article
Neural networks and Bayesian inference provide a useful framework within which to solve regression problems. However their parameterization means that the Bayesian analysis of neural networks can be difficult. In this paper, we investigate a method for regression using Gaussian process priors which allows exact Bayesian analysis using matrix manipu...
Article
The standard method for training Hidden Markov Models optimizes a point estimate of the model parameters. This estimate, which can be viewed as the maximum of a posterior probability density over the model parameters, may be susceptible to overfitting, and contains no indication of parameter uncertainty. Also, this maximum may be unrepresentative of...
Article
The authors report the empirical performance of Gallager's low density parity check codes on Gaussian channels. They show that performance substantially better than that of standard convolutional and concatenated codes can be achieved; indeed the performance is almost as close to the Shannon limit as that of turbo codes
Article
The design of welding alloys to match the ever advancing properties of newly developed steels is not an easy task. It is traditionally attained by experimental trial and error, modifying compositions and welding conditions until a satisfactory result is discovered. Savings in cost and time might be achieved if the trial process could be minimised....
Article
The authors report the empirical performance of Gallager's low density parity check codes on Gaussian channels. It is shown that performance substantially better than that of standard convolutional and concatenated codes can be achieved, indeed the performance is almost as close to the Shannon limit as that of Turbo codes
Article
Learning can be made more efficient if we can actively select particularly salient data points. Within a Bayesian learning framework, objective functions are discussed which measure the expected informativeness of candidate measurements. Three alternative specifications of what we want to gain information about lead to three different criteria for...
Article
...this paper is illustrated in figure 6e. If we give a probabilistic interpretation to the model, then we can evaluate the `evidence' for alternative values of the control parameters. Over-complex models turn out to be less probable, and the quantity...
Article
Probabilistic models for images are analysed quantitatively using Bayesian hypothesis comparison on a set of image data sets. One motivation for this study is to produce models which can be used as better priors in image reconstruction problems. The types of model vary from the simplest, where spatial correlations within the image are irrelevant, to m...
Article
A number of methods have been developed to analyze the response of the linear phased array radar. These perform remarkably well when the number of sources is known, but in cases where a determination of this number is required, problems are often encountered. These problems can be resolved by a Bayesian approach. Here, a linear phased-array consist...
Article
Three Bayesian ideas are presented for supervised adaptive classifiers. First, it is argued that the output of a classifier should be obtained by marginalising over the posterior distribution of the parameters; a simple approximation to this integral is proposed and demonstrated. This involves a `moderation' of the most probable classifier's outpu...
Article
Ensemble learning by variational free energy minimization is a tool introduced to neural networks by Hinton and van Camp in which learning is described in terms of the optimization of an ensemble of parameter vectors. The optimized ensemble is an approximation to the posterior probability distribution of the parameters. This tool has now been appli...
Article
The formation of austenite during the continuous heating of steels was investigated using neural network analysis with a Bayesian framework. An extensive database consisting of the detailed chemical composition, Ac1 and Ac3 temperatures, and the heating rate was compiled for this purpose, using data from the published literature. This was assessed...
Article
Ensemble learning by variational free energy minimization is a framework for statistical inference in which an ensemble of parameter vectors is optimized rather than a single parameter vector. The ensemble approximates the posterior probability distribution of the parameters. In this paper I give a review of ensemble learning using a simple example...
Article
Data on the occurrence of solidification cracking in low alloy steel welds have been analysed using a classification neural network based on a Bayesian framework. It has thereby been possible to express quantitatively the effect of variables such as the chemical composition, welding conditions, and weld geometry, on the tendency for solidification...
Article
We discuss a hierarchical probabilistic model whose predictions are similar to those of the popular language modelling procedure known as `smoothing'. A number of interesting differences from smoothing emerge. The insights gained from a probabilistic view of this problem point towards new directions for language modelling. The ideas of this paper a...
Article
The fatigue crack growth rate of nickel base superalloys has been modelled using a neural network model within a Bayesian framework. A 'committee' model was also introduced to increase the accuracy of the predictions. The rate was modelled as a function of some 51 variables, including stress intensity range ΔK, log ΔK, chemical composition, tempera...
Article
Bayesian probability theory provides a unifying framework for data modeling. In this framework, the overall aims are to find models that are well matched to the data, and to use these models to make optimal predictions. Neural network learning is interpreted as an inference of the most probable parameters for the model, given the training data. The...
Article
I examine two approximate methods for computational implementation of Bayesian hierarchical models, that is, models which include unknown hyperparameters such as regularization constants. In the ‘evidence framework’ the model parameters are integrated over, and the resulting evidence is maximized over the hyperparameters. The optimized hyperparamet...
Article
Charpy impact toughness data for manual metal arc and submerged arc weld metal samples have been analysed using a neural network technique within a Bayesian framework. In this framework, the toughness can be represented as a general empirical function of variables that are commonly acknowledged to be important in influencing the properties of steel...
Article
Bayesian probability theory provides a unifying framework for data modelling. In this framework the overall aims are to find models that are well-matched to the data, and to use these models to make optimal predictions. Neural network learning is interpreted as an inference of the most probable parameters for the model, given the training data. The...
Article
In this paper I describe the implementation of a probabilistic regression model in BUGS. BUGS is a program that carries out Bayesian inference on statistical problems using a simulation technique known as Gibbs sampling. It is possible to implement surprisingly complex regression models in this environment. I demonstrate the simultaneous inference...
Article
Several authors have studied the relationship between hidden Markov models and `Boltzmann chains' with a linear or `time-sliced' architecture. Boltzmann chains model sequences of states by defining state-state transition energies instead of probabilities. In this note I demonstrate that, under the simple condition that the state sequence has a manda...
Article
This paper studies the task of inferring a binary vector s given noisy observations of the binary vector t=As modulo 2, where A is an M×N binary matrix. This task arises in correlation attack on a class of stream ciphers and in the decoding of error correcting codes. The unknown binary vector is replaced by a real vector of probabilities that are o...
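For small N, the posterior that such an algorithm approximates can be computed exactly by enumeration, which makes a useful reference point. A brute-force sketch, assuming each observed bit of t = As mod 2 is flipped independently with probability f:

```python
import itertools
import numpy as np

def posterior_marginals(A, t, f):
    """Exact P(s_j = 1 | t) by enumerating all 2^N binary vectors s."""
    M, N = A.shape
    post, total = np.zeros(N), 0.0
    for bits in itertools.product([0, 1], repeat=N):
        s = np.array(bits)
        flips = np.sum((A @ s) % 2 != t)          # bits that must have flipped
        p = f**flips * (1 - f)**(M - flips)       # uniform prior over s
        post += p * s
        total += p
    return post / total

rng = np.random.default_rng(0)
A = rng.integers(0, 2, (12, 6))
s_true = rng.integers(0, 2, 6)
t = ((A @ s_true) % 2) ^ (rng.random(12) < 0.1)   # noisy observation
print(np.round(posterior_marginals(A, t, 0.1), 2))
print(s_true)                                     # marginals should lean this way
```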
Article
An algorithm is derived for inferring a binary vector s given noisy observations of As modulo 2, where A is a binary matrix. The binary vector is replaced by a vector of probabilities, optimised by free energy minimisation. Experiments on the inference of the state of a linear feedback shift register indicate that this algorithm supersedes the Meie...
Article
I define a latent variable model in the form of a neural network for which only target outputs are specified; the inputs are unspecified. Although the inputs are missing, it is still possible to train this model by placing a simple probability distribution on the unknown inputs and maximizing the probability of the data given the parameters. The...
Article
Protein alignments are commonly characterized by the probability vectors over amino acids in each column of the alignment. This paper develops various models for the probability distribution of these probability vectors. First a simple Dirichlet distribution is used, then a mixture of Dirichlets. Finally a componential model employing a `density ne...
Article
The 1993 energy prediction competition involved the prediction of a series of building energy loads from a series of environmental input variables. Non-linear regression using ‘neural networks’ is a popular technique for such modeling tasks. Since it is not obvious how large a time-window of inputs is appropriate, or what preprocessing of inputs is...

Citations

... However, this neither implies Theorem 3.8 nor is it implied by Theorem 3.8. For example, an extended RS code is an MDS code but not a cyclic code while a (7,4) ... We now prove here that if a linear code has property M then its dual code also has property M. By a straightforward manipulation of the MacWilliams identities [74, Chapter 5, (52)] one can show the following relationship between the PWEs of a code and its dual code [26] (cf. Theorem 6.5): ...
... A game for children with motor impairments [33,34,36] demonstrates Nomon's applicability in real life. Nel et al. [43] extended Nomon's noise model to develop a communication method for single-switch users who are also visually-impaired. ...
... Ionizing radiation consists of high-energy electromagnetic waves that stem from natural sources like the sun and manmade sources like nuclear reactors [33]. Ionizing radiation is ubiquitous in the environment and can generate DNA and tissue damage at high levels. ...
... At this point, the current population of samples needs to be re-sampled according to Equation (15). In order to determine the number of each sample that should be kept, the following calculation is used: ...
... The success or failure of these negotiations depends on how they are designed (1). Particularly, in the context of international climate change policy, it has been hypothesized that negotiating a uniform common commitment would be more successful in achieving cooperation than negotiating individual or complex common commitments (2)(3)(4)(5). Yet, a proof-of-concept for this important claim is lacking. Using a laboratory experiment with human subjects and a game theoretical analysis, we fill the gap and provide strong support for the conjecture. ...
... There are many variants of the NN approach but an approach called Bayesian neural network, formulated by MacKay [7,8] and Neal [9,10], has great merits in terms of precision and confidence of prediction though it often involves complex algorithms and calculations. Bayesian NN was applied to property modelling in steel and superalloys [11][12][13][14][15][16]; most of the works are based on the Gaussian approximation framework proposed by MacKay [7,8]. ...
... In this study, the proposed model is based on Gaussian processes regression (GPR), a Bayesian algorithm that has been successfully used to solve nonlinear prediction problems. Bailer-Jones et al. [55] were among the first to utilize this method in the domain of metallurgy, where they presented a Gaussian process model for the empirical modeling of austenite formation during the continuous heating of steels. More recently, this method has been used by several authors to predict material properties [45,56,57]. ...