Chapter

# A conditional weak law of large numbers

DOI: 10.1007/BFb0084177


**ABSTRACT:** Let X_1, X_2, ... be i.i.d. random elements (the states of the particles 1, 2, ...). Let f be an ℝ^d-valued, measurable function (an observable) and let B ⊆ ℝ^d be a convex Borel set. Denote S_n = f(X_1) + f(X_2) + ... + f(X_n). Using large-deviation theory, it may be shown that, under certain regularity conditions, there exists a point b* ∈ B̄ (the dominating point of B) such that, given S_n/n ∈ B, actually S_n/n → b* in probability as n → ∞. Having this conditional weak law of large numbers as our starting point, we consider physical systems of independent particles, especially the ideal gas. Given an observed energy level, we derive convergence results for empirical means, empirical distributions, and microcanonical distributions. Results are obtained for a closed system with a fixed number of particles as well as for an open particle system in space (a Poisson random field). Our approach is elementary in the sense that we need not refer to the abstract level II theory of large deviations. However, the treatment is not restricted to the so-called discrete ideal gas; we also cover the continuous ideal gas.

International Journal of Theoretical Physics 05/1990; 29(6):621-635.
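The dominating-point phenomenon can be checked numerically in a toy case not taken from the paper: a minimal sketch assuming i.i.d. Bernoulli(1/2) observations and the convex set B = [0.7, 1]. Since the unconditional mean 0.5 lies outside B, the dominating point is the boundary value 0.7, and the conditional mean E[S_n/n | S_n/n ∈ B] should drift toward 0.7 as n grows. The function name `conditional_mean` and the parameter choices are illustrative.

```python
from math import ceil, comb

def conditional_mean(n, p=0.5, level=0.7):
    """Exact E[S_n/n | S_n/n >= level] for S_n = X_1 + ... + X_n,
    X_i i.i.d. Bernoulli(p).

    For p < level, the convex set B = [level, 1] has dominating point
    `level`, so this value should approach `level` as n grows.
    """
    k0 = ceil(level * n)
    # Binomial tail weights P(S_n = k) for k in [k0, n]
    tail = [comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(k0, n + 1)]
    z = sum(tail)  # P(S_n/n >= level)
    return sum(k * w for k, w in zip(range(k0, n + 1), tail)) / (n * z)

# The conditional mean decreases toward the dominating point 0.7:
for n in (20, 60, 200):
    print(n, round(conditional_mean(n), 4))
```

The computation is exact (no simulation), so the monotone approach to 0.7 is visible already at moderate n.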

**ABSTRACT:** The procedure of maximizing entropy (or equivalently, of minimizing information) has been construed as a rule for changing one's degrees of belief in the light of new evidence: given a prior probability distribution and new evidence in the form of constraints on the posterior probability distribution, choose your posterior from among those that satisfy the constraints, so as to minimize the information relative to the given prior. This rule of maximum entropy inference can be thought of as a generalization of conditionalization. If the prior probability of A is nonzero, then the posterior obtained from it by conditionalization on A is the one, among the set of possible posteriors satisfying the constraint pr(A)=1, which has the minimum information relative to the prior. Some have taken the view that the maximum entropy/minimum information procedure constitutes a generally valid rule for updating subjective probability, from which the special case derives its license.

Synthese 01/1985; 63(1):55-74.
