# Statistical Physics - Science topic

Explore the latest questions and answers in Statistical Physics, and find Statistical Physics experts.

Questions related to Statistical Physics

Dear all,

I have a technical question regarding the self-diffusion coefficient of water in an equilibrium state using the Einstein relation in molecular dynamics simulation. If we consider an equilibrated water/polymer medium, water molecules undergo Brownian motion as a result of thermal fluctuations. So their self-diffusive movements, related by the Einstein relation between the diffusion coefficient and the mobility, are fully accounted for. But in addition to thermal fluctuations, an equilibrium fluid system has pressure fluctuations. At any instant, the pressure on one side of a volume element is not the same as the pressure on the opposite surface of the volume element, and the volume element will move as a whole in the direction of lower pressure. These pressure fluctuations are not included in the simulations. In macroscopic (but linear, i.e., small forces and flows) flow conditions, they would give rise to a flow described by the linearized Navier-Stokes equation. Isn't this correct? How does the Einstein relation account for them? Is it logical to use the Einstein relation in this situation? Can you discuss it briefly?

Thanks a lot
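Whatever the resolution of the conceptual point, in practice D is extracted from the slope of the mean-squared displacement of unwrapped coordinates. A minimal NumPy sketch, validated on synthetic Brownian motion (all names and parameters are illustrative, not from any specific MD package):

```python
import numpy as np

def self_diffusion_coefficient(positions, dt, dim=3):
    """Estimate D from the Einstein relation MSD(t) ~ 2*dim*D*t.

    positions: unwrapped coordinates, shape (n_frames, n_atoms, dim).
    dt: time between stored frames.
    """
    n_frames = positions.shape[0]
    lags = np.arange(1, n_frames // 5)   # fit only the early, well-sampled part
    msd = np.array([np.mean(np.sum((positions[lag:] - positions[:-lag])**2,
                                   axis=-1)) for lag in lags])
    slope = np.polyfit(lags * dt, msd, 1)[0]   # MSD = slope * t
    return slope / (2 * dim)

# synthetic check: free Brownian motion with known D = 0.5
rng = np.random.default_rng(0)
D_true, dt, n_frames, n_atoms = 0.5, 0.01, 1000, 50
steps = rng.normal(scale=np.sqrt(2 * D_true * dt), size=(n_frames, n_atoms, 3))
traj = np.cumsum(steps, axis=0)
D_est = self_diffusion_coefficient(traj, dt)
print(D_est)   # close to 0.5
```

Averaging over time origins and over all molecules, as above, is what makes the MSD estimator usable on a single equilibrium trajectory.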

Dear All,

Coagulating (aggregating, coalescing) systems surround us. Gravitational accretion of matter, blood coagulation, traffic jams, food processing, cloud formation - these are all examples of coagulation, and we use the effects of these processes every day.

From a statistical physics point of view, to have full information on an aggregating system, we need to know its cluster-size distribution (the number of clusters of a given size) at any moment in time. However, surprisingly, obtaining such information for most (real) aggregating systems is very hard.

An example of an aggregating system for which observing (counting) the cluster size distribution is feasible is the so-called electrorheological fluid (see https://www.youtube.com/watch?v=ybyeMw1b0L4 ). Here, we can simply observe clusters under the microscope and count the statistics for subsequent points in time.

However, simply observing and counting fails for other real systems, for instance:

- Milk curdling into cream - the system is dense and not transparent; maybe infra-red observation could be effective?
- Blood coagulation - the same problem; moreover, there are difficulties with accessing living tissue. Maybe X-rays could be used, but I suppose the resolution could be low; also the observation should be (at least semi-) continuous;
- Water vapor condensation and formation of clouds - this looks like an easy laboratory problem, but I suppose that is not really the case. Spectroscopic methods allow one to observe particles (and so estimate their number) of a given size, but I do not know of a spectroscopic system that could observe particles of different (namely, very different: 1, 10, 10^2, ..., 10^5, ...) sizes at the same time (?);
- There are other difficulties for giant systems, like cars aggregating into jams on a motorway (maybe data from Google Maps or another navigation system, but not all drivers use them) or matter aggregating to form discs or planets (can we observe such matter at high enough resolution to really observe clustering?).

I am curious what you think of the above issues.

Do you know any other systems where cluster size distributions are easily observed?

Best regards,

Michal

Once I obtain the Riccati equations to solve the moment equations, I can't find the values of the constants. How can I obtain the values of these constants? Have these values already been reported for titanium dioxide?

Imagine there is a surface, with points randomly spread all over it. We know the surface area S, and the number of points N, therefore we also know the point density "p".

If I blindly draw a square/rectangle (area A) over such surface, what is the probability it'll encompass at least one of those points?

P.s.: I need to solve this "puzzle" as part of a random-walk problem, where a "searcher" looks for targets in a 2D space. I'll use it to calculate the probability the searcher has of finding a target at each one of his steps.

Thank you!
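If the N points are placed independently and uniformly, this has a closed form: a given region of area A misses one point with probability 1 - A/S, so P(at least one inside) = 1 - (1 - A/S)^N ≈ 1 - e^(-pA) for large N (a Poisson approximation, ignoring boundary effects when the rectangle must lie wholly inside S). A sketch with a Monte Carlo cross-check:

```python
import numpy as np

def p_hit(A, S, N):
    """P(at least one of N uniform points falls inside a region of area A)."""
    return 1.0 - (1.0 - A / S) ** N

# Monte Carlo check: unit square, axis-aligned rectangle of sides a x b
rng = np.random.default_rng(1)
S, N, trials = 1.0, 200, 20000
a, b = 0.1, 0.05                        # rectangle sides, A = 0.005
pts = rng.uniform(size=(trials, N, 2))
inside = (pts[..., 0] < a) & (pts[..., 1] < b)
frac = inside.any(axis=1).mean()
print(p_hit(a * b, S, N), frac)   # both ~ 0.63
```

For the random-walk search problem, with density p = N/S, the per-step detection probability is then approximately 1 - e^(-p*A), where A is the area scanned in one step.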

Hello dear colleagues:

It seems to me this could be an interesting thread for discussion:

I would like to center the discussion around the concept of entropy, and in particular around the explanation-description-exemplification side of the concept.

I.e., what do you think is a good, helpful explanation of the concept of entropy (at a technical level, of course)?

A way (or ways) of explaining it that settles the concept as clearly as possible. Maybe first in a more general scenario, and next (if required) in a more specific one...

Kind regards !

I'm writing my dissertation about the economic dynamics of inequality, and I'm going to use econophysics as an empirical method.

Tragically, in 1906, Boltzmann committed suicide, and many believe that statistical mechanics was the cause. He provided the current definition of entropy, interpreted as a measure of the statistical disorder of a system. His student, Paul Ehrenfest, carrying on Boltzmann's work, died similarly in 1933. William James was found dead in his room in 1909, probably due to suicide. Bridgman, the statistical physics pioneer, committed suicide in 1961. Gilbert Lewis took cyanide in 1946 after not getting a Nobel prize.

The occupation number of bosons can be any number from zero to infinity, leading to Bose-Einstein statistics. On the other hand, a classical wave, for example, can be considered a superposition of any number of sine or cosine waves. Isn't it similar to saying that the occupation number of a classical wave can be any number from zero to infinity, and to utilizing Bose-Einstein statistics for classical waves in particular and classical fields in general?

In information theory, the entropy of a variable is the amount of information contained in the variable. One way to understand the concept of the amount of information is to tie it to how difficult or easy it is to guess the value. The easier it is to guess the value of the variable, the less “surprise” in the variable and so the less information the variable has.

Rényi entropy of order q is defined for q ≥ 0, q ≠ 1, by the equation

S_q = (1/(1-q)) log (Σ_i p_i^q)

As the order q increases, the entropy is non-increasing.

Why are we concerned about higher orders? What is the physical significance of the order when calculating the entropy?
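The monotonic dependence on q can be seen numerically: higher orders weight the most probable events more heavily, and as q → ∞ the entropy tends to the min-entropy -log(max p). A small sketch (the distribution is arbitrary):

```python
import numpy as np

def renyi_entropy(p, q):
    """Renyi entropy of order q (q >= 0, q != 1); q -> 1 recovers Shannon."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(q, 1.0):
        return -np.sum(p * np.log(p))          # Shannon limit
    return np.log(np.sum(p ** q)) / (1.0 - q)

p = np.array([0.5, 0.25, 0.15, 0.10])
for q in (0.5, 1.0, 2.0, 5.0, 50.0):
    print(q, renyi_entropy(p, q))   # non-increasing in q
```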

Please don't just answer that U(T,V) doesn't have the entropy S as an argument!

May I ask a question on thermodynamics? We know that U(V,T) (the caloric equation of state) and S(P,V) (the thermodynamic equation of state) can both be derived from the thermodynamic potentials (U, F, G, H) and the fundamental relations. However, U(V,T) doesn't hold the full thermodynamic information of the system as U(S,V) does, yet S(P,V) does hold the full thermodynamic information of the system.

In which step of the derivation of U(T,V) from U(S,V) is the thermodynamic information lost? (Briefly, the derivation is: 1. differentiate dU = T dS - P dV with respect to V; 2. replace the derivative using a Maxwell relation; 3. finally substitute the ideal gas or van der Waals equation.)

Why does the similar derivation of S(P,V) retain the full thermodynamic information?

Even if we only have U(T,V), can't we get P using the ideal gas equation and then calculate S by designing reversible processes from (P0,V0,T0) to (P',V',T')? If we can still get S, why doesn't U(T,V) have the full thermodynamic information?
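For reference, the derivation step in question, written out with the fundamental relation dU = T dS - P dV and a Maxwell relation, is:

$$
\left(\frac{\partial U}{\partial V}\right)_T
= T\left(\frac{\partial S}{\partial V}\right)_T - P
= T\left(\frac{\partial P}{\partial T}\right)_V - P .
$$

One plausible way to see the information loss: integrating this relation at fixed T determines U(T,V) only up to an additive function of T, i.e. the explicit dependence on S has been integrated out.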

The Maxwell-Boltzmann distribution is n_i/g_i = e^{-(ε_i - µ)/kT}. In the quantum mechanical case, a ±1 is added in the denominator: n_i/g_i = 1/(e^{(ε_i - µ)/kT} ± 1). The (+) sign gives the Fermi-Dirac distribution and the (-) sign the Bose-Einstein distribution. I want to know the physical significance of these signs and how we can relate this to the classical (Boltzmann) distribution.

Dear all:

I hope this question seems interesting to many. I believe I'm not the only one who is confused with many aspects of the so called physical property 'Entropy'.

This time I want to speak about

**Thermodynamic Entropy**; hopefully a few of us can gain more understanding by thinking a little more deeply about questions like these. The thermodynamic entropy is defined as:

**Delta(S) >= Delta(Q)/(T_2 - T_1)**

This property is only properly **defined for (macroscopic) systems which are in thermodynamic equilibrium** (i.e., thermal + chemical + mechanical equilibrium). So **my question is**, in terms of numerical values of S (or, better said, values of Delta(S), since only changes in entropy can be computed, not an absolute entropy of a system, with the exception of one at the absolute zero (0 K) of temperature):

It is easy and straightforward to compute the change in entropy of, let's say, a chair, a table, or your car, since all these objects can be considered macroscopic systems in thermodynamic equilibrium. Just use the **classical definition of entropy** (the formula above) **and the Second Law of Thermodynamics**, and that's it.

But what about macroscopic objects (or systems) which are **not in thermal equilibrium**? We are often tempted to apply the classical definition to macroscopic systems which seem, from a macroscopic point of view, to be in thermodynamic equilibrium, but which in reality still have **ongoing physical processes that keep them out of complete thermal equilibrium**.

What I want to ask is: **what are the limits of the classical thermodynamic definition of entropy** when used in calculations for systems that **seem to be in thermodynamic equilibrium but really are not**? Perhaps this question can also be extended to the so-called regime of near-equilibrium thermodynamics.

Kind regards all!

I had my BSc in Physics and my MSc and PhD in Energy System Engineering. I worked as an assistant lecturer and a lecturer for three years, teaching different maths courses, statistics and physics. I had to move from where I was lecturing before because of my family, and now I am in search of a lecturing or post-doc opportunity. Most of the job adverts I see are specific to a particular field. I am beginning to wonder if my diversification is a disadvantage.

Also, if there is a post-doc or lecturing opportunity at your university, I won't mind applying.

Considering that mean field theory approaches have been used for neuronal dynamics, and that renormalization group theory has been used in other networks to describe their properties, I wanted to know whether it is useful or interesting to describe the behavior of a neuronal system based on its critical exponents. Thank you in advance.

Let's just say we're looking at the classical continuous canonical ensemble of a harmonic oscillator, where:

H = p^2 / 2m + 1/2 * m * omega^2 * x^2

and the partition function (omitting the integrals over phase space here) is defined as

Z = Exp[-H / (kb * T)]

and the average energy can be calculated as proportional to the derivative of ln[Z].

The equipartition theorem says that each independent quadratic coordinate must contribute R/2 to the system's (molar) energy, so in a 3D system we should get 3R. My question is: does equipartition break down if the frequency is temperature dependent?

Let's say omega = omega[T]; then when you take the derivative of Z to calculate the average energy, if omega'[T] is not zero, it will either add to or subtract from the average energy and therefore disagree with equipartition. Is this correct?
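This worry can be checked symbolically: if omega depends on T (hence on beta), the average energy picks up an extra term beyond the equipartition value. A SymPy sketch, assuming a hypothetical linear dependence omega(T) = omega_0 (1 + aT):

```python
import sympy as sp

beta, kB, w0, a = sp.symbols('beta k_B omega_0 a', positive=True)
T = 1 / (kB * beta)
omega = w0 * (1 + a * T)        # hypothetical linear temperature dependence

# beta-dependent part of ln Z for a classical 1D oscillator: Z ~ 1/(beta*omega)
lnZ = -sp.log(beta) - sp.log(omega)
U = sp.simplify(-sp.diff(lnZ, beta))   # average energy

print(sp.simplify(U - 1/beta))          # extra term: nonzero unless a = 0
print(sp.simplify(U.subs(a, 0) - 1/beta))   # 0: equipartition restored
```

So yes, under this assumption the a-dependent term shifts the average energy away from kT per oscillator, consistent with the question's intuition.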

These 'entropies' depend upon a parameter, which can be varied between two limits. In those limits they reduce to the Shannon-Gibbs and Hartley-Boltzmann entropies. If such entropies did exist, they could be derived from the maximum-entropy formalism, where the Lagrange multiplier would be identified as the parameter. Then, like all the other Lagrange multipliers, the parameter would have to be given a thermodynamic interpretation as an intensive variable which would be uniform and common to all systems, like the temperature and chemical potential. The Rényi and Havrda-Charvát entropies cannot be derived from the maximum-entropy formalism. Thus, there can be no entropy that is parameter dependent and whose parameter would be different for different systems.

Statistical physics uses the thermostat idea to describe small energy variations in a big system. Can the thermostat be a set of real oscillators with a linear interaction with the statistical system?

One of the central themes in Dynamical Systems and Ergodic Theory is that of recurrence, which is a circle of results concerning how points in measurable dynamical systems return close to themselves under iteration. There are several types of recurrent behavior (exact recurrence, Poincaré recurrence, coherent recurrence, ...) for some classes of measurability-preserving discrete-time dynamical systems. P. Johnson and A. Sklar in [Recurrence and dispersion under iteration of Čebyšev polynomials. J. Math. Anal. Appl. 54 (1976), no. 3, 752-771] regard the third type ("coherent recurrence" for measurability-preserving transformations) as being of at least equal physical significance, and this type of recurrence fails for Čebyšev polynomials. They also found considerable evidence to support a conjecture that no (strongly) mixing transformation can exhibit coherent recurrence. (This conjecture was proved by R. E. Rice in [On mixing transformations. Aequationes Math. 17 (1978), no. 1, 104-108].)

Suggest the model and methodology to estimate the diffusion coefficient of Fission Products in nuclear fuel in scenario like breach of clad etc.

I see lots of papers dealing with the application of statistical physics to financial systems. But what are the basic models? There is little point in defining a model, solving it, and finding an answer. Can anybody give me a good starting point?

We know the definition of ergodicity and we know ergodic mappings. But what is an ergodic process?

There are similar resummations in statistical physics: see

Hi all,

To calculate the residence time from a potential of mean force (PMF), we use the stable-states picture. Here a reactant state and a product state are defined; this is done from the radial distribution function. The time taken to move from the reactant state to the product state is designated t, and the residence time is given by

1 - P(t) = e^{-t/tau}, where tau is the residence time,

P(t) is the probability that the system has moved from the reactant state to the product state, and

t = the time taken to move from the reactant state to the product state. How do I calculate P(t)?
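In practice P(t) is often estimated from an ensemble of trajectories as the fraction that have reached the product state by time t, after which tau follows from an exponential fit to the survival probability 1 - P(t). A minimal sketch, assuming the first-passage times have already been extracted from the trajectories:

```python
import numpy as np

def residence_time(first_passage_times, t_grid):
    """Estimate tau from 1 - P(t) = exp(-t/tau).

    first_passage_times: time at which each trajectory first reaches the
    product state (one entry per trajectory).
    """
    fpt = np.asarray(first_passage_times, dtype=float)
    # P(t): fraction of trajectories that have crossed by time t
    P = np.array([(fpt <= t).mean() for t in t_grid])
    surv = 1.0 - P
    mask = surv > 0
    # linear fit: log(1 - P(t)) = -t/tau
    slope = np.polyfit(t_grid[mask], np.log(surv[mask]), 1)[0]
    return -1.0 / slope

# synthetic check: exponential first-passage times with tau = 5
rng = np.random.default_rng(2)
fpt = rng.exponential(5.0, size=100000)
t_grid = np.linspace(0.0, 20.0, 200)
tau_est = residence_time(fpt, t_grid)
print(tau_est)   # close to 5
```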

Dear Research-Gaters,

It might be a very trivial question to you: what does the term 'wrong dynamics' actually mean? I have heard that term often when somebody presented his or her results. As it seems to me, 'wrong dynamics' is an argument often brought up to suggest that a simulation result might not be very useful. But what does that argument mean in terms of physical quantities? Is it related to measures such as correlation functions, e.g. the velocity autocorrelation, H-bond autocorrelation or radial distribution functions? Can 'wrong dynamics' be visualized as a too-fast decay in any of those correlation functions in comparison with other equilibrium simulations, or can it simply be measured by deviations of the potential energies, kinetic energies and/or the root-mean-square deviation from the starting structure? At the same time, thermodynamic quantities such as free energies might not be affected by 'wrong dynamics'. Finally, I would like to ask what the term 'wrong dynamics' means if I use non-equilibrium simulations which are actually completely non-Markovian, i.e. history-dependent and out of equilibrium (metadynamics, hyperdynamics). Thank you for your answers. Emanuel

How can I find open-source models of landslide-related disasters? I want to learn something about the process of developing such models, including statistical or physically based models.

Dear All

What is the best/simplest sampling method in Monte Carlo simulation (MCS)? Do different sampling methods significantly differ in the computational time of MCS? What is the best stopping criterion for MCS?

Kind Regards

Ahmad
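For illustration, a common stopping criterion is to halt when the running standard error of the estimator falls below a tolerance, and sampling schemes can differ substantially in the variance they achieve at fixed n. A sketch comparing plain and stratified sampling for a 1D integral (the integrand and tolerance are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
f = lambda x: np.exp(-x**2)          # integrand on [0, 1]; exact ~ 0.746824

def plain_mc(n):
    y = f(rng.uniform(size=n))
    return y.mean(), y.std(ddof=1) / np.sqrt(n)   # estimate, standard error

def stratified_mc(n, strata=50):
    m = n // strata
    edges = np.linspace(0.0, 1.0, strata + 1)
    x = rng.uniform(edges[:-1, None], edges[1:, None], size=(strata, m))
    y = f(x)
    est = y.mean(axis=1).mean()
    se = np.sqrt((y.var(axis=1, ddof=1) / m).sum()) / strata
    return est, se

# stopping criterion: double n until the standard error is below tolerance
tol, n = 1e-3, 1000
while True:
    est, se = plain_mc(n)
    if se < tol:
        break
    n *= 2
est_s, se_s = stratified_mc(n)
print(n, est, se)
print(est_s, se_s)   # same n, smaller standard error
```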

For magnetic systems, the Rushbrooke inequality is a direct consequence of the thermodynamic relation between C_H, C_V and the isothermal susceptibility, their positivity, and the definition of the critical exponent alpha as controlling the behavior of C_H as a function of the reduced distance from the critical temperature.

In the case of a fluid system, the usual definition of alpha refers to the constant-volume specific heat (C_V). However, the role played by C_V in the thermodynamic relation between C_P, C_V and the isothermal compressibility is not the same as that of C_H. Does some additional hypothesis have to be made in order to derive the Rushbrooke inequality for fluid systems, or am I missing something trivial?

Brain research utilizes diverse measurement techniques which probe diverse spatial scales of neural activity. The majority of human brain research occurs at macroscopic scales, using techniques like functional magnetic resonance imaging (fMRI) and electroencephalography (EEG), while microscopic electrophysiology and imaging studies in animals probe scales down to single neurons. A major challenge in brain research is to reconcile observations at these different scales of measurement. Can we identify principles of neural network dynamics that are consistent across different observational length scales?

In recent experimental studies at different scales of observations, power-law distributed observables and other evidence suggest that the cerebral cortex operates in a dynamical regime near a critical point. Scale-invariance - a fundamental feature of critical phenomena - implies that dynamical properties of the system are independent of the scale of observation (with appropriate scaling). Thus, if the cortex operates at criticality, then we expect self-similar dynamical structure across a wide-range of spatial scales. Renormalization group is a mathematical tool that is used to study the scale invariance in equilibrium systems and recently, in dynamical systems with non-equilibrium critical steady-state. In the context of neural dynamics, renormalization group ideas suggest that the dynamical rules governing the large-scale cortical dynamics may be the same as dynamics at smaller spatial scales (with appropriate coarse graining procedures).

I came up with this question because I see a difference when simulating the same A+B <-> C type reaction using Copasi and a particle-based stochastic simulator. Copasi is for well-mixed systems: it does not consider the diffusion rate of reactants, and it solves ODEs to get the steady state. I used reaction rate constants from the literature and noticed a difference between the steady states output by the two simulators. I wonder whether, theoretically, such a discrepancy should exist; in other words, whether a well-mixed steady state should be affected by the reactants' diffusion speed? Thanks.
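For reference, the well-mixed steady state is fixed entirely by kf·[A][B] = kr·[C] plus the conservation laws; diffusion does not enter. A SciPy sketch (rate constants and concentrations are placeholders, not from the question):

```python
import numpy as np
from scipy.integrate import solve_ivp

kf, kr = 1.0, 0.5            # assumed rate constants
a0, b0, c0 = 1.0, 0.8, 0.0   # assumed initial concentrations

def rhs(t, y):
    a, b, c = y
    v = kf * a * b - kr * c   # net forward rate of A + B <-> C
    return [-v, -v, v]

sol = solve_ivp(rhs, (0.0, 200.0), [a0, b0, c0], rtol=1e-10, atol=1e-12)
a, b, c = sol.y[:, -1]
print(a, b, c, kf * a * b - kr * c)   # net rate ~ 0 at steady state
```

A particle-based simulator reduces to this only in the reaction-limited (fast diffusion) regime; with slow diffusion the effective rate constants acquire diffusion-influenced (Smoluchowski-type) corrections, which is one plausible source of the discrepancy you observe.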

I am looking to combine Monte Carlo and molecular dynamics in a simulation. How can they be combined? In general, how does one keep the time evolution of the system correct?

Santo

I am interested in the determination of the atomic pressure in solids.

I read some papers about the virial stress, which was introduced by Lutsko (J. Appl. Phys. 64 (3), 1988) using the local momentum flux:

d**p**(**r**)/dt = -div **s**(**r**)

where **p**(**r**) is the momentum and **s**(**r**) the stress. Following the calculus, it is not clear to me whether Lutsko uses the Lagrange or the Euler description, but I assumed a Lagrangian description. In that case, I am not sure of the physical meaning of **s**(**r**). This point has been discussed by Zhou (Proc. R. Soc. Lond. A (2003) 459, 2347-2392) and on this website:

In the absence of volume (body) forces, in continuum mechanics, Newton's law is:

\rho d^2**u**(**r**)/dt^2 = -div **s**(**r**)

with **u**(**r**) and **s**(**r**) the displacement and the Cauchy stress. This equation is valid in the Euler description. I am confused about the right way to get the atomic stress.

Does someone know about that point ? How can I determine the atomic stress properly ?

Thank you for your answers.
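For what it's worth, here is a minimal sketch of the commonly used global virial-stress formula, sigma = -(1/V)[sum_i m v_i (x) v_i + sum_{i<j} r_ij (x) f_ij], for a pairwise Lennard-Jones potential. Sign conventions and the choice of per-atom volume differ between authors (Lutsko's and Zhou's definitions differ precisely on such points), so treat this as illustrative only:

```python
import numpy as np

def virial_stress(pos, vel, mass, V, eps=1.0, sigma=1.0, rcut=2.5):
    """Global virial stress tensor for a Lennard-Jones system (no PBC, sketch)."""
    n, dim = pos.shape
    s = np.zeros((dim, dim))
    # kinetic part
    for i in range(n):
        s -= mass * np.outer(vel[i], vel[i])
    # configurational (pair) part
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            r = np.linalg.norm(rij)
            if r < rcut:
                # f = -dU/dr for U = 4*eps*((sigma/r)^12 - (sigma/r)^6)
                fr = 24 * eps * (2 * (sigma / r)**12 - (sigma / r)**6) / r
                fij = fr * rij / r           # force on i from j
                s -= np.outer(rij, fij)
    return s / V

# two atoms at rest at the LJ minimum separation: zero force -> zero stress
pos = np.array([[0.0, 0.0, 0.0], [2**(1/6), 0.0, 0.0]])
vel = np.zeros((2, 3))
print(virial_stress(pos, vel, mass=1.0, V=10.0))
```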

The Feynman-Kac formula points out that an SDE corresponds to a Fokker-Planck equation (FPE). Given an SDE, I can simulate it numerically by following the equation and then plot the histogram for the simulated single particle. However, the FPE provides a probabilistic evolution based on a large ensemble of particles. Do these two results match? Does ergodicity guarantee the matching?
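For an ergodic SDE they do match: the long-time histogram of a single trajectory converges to the stationary FPE solution. A sketch for the Ornstein-Uhlenbeck process dX = -theta X dt + sqrt(2D) dW, whose stationary FPE density is Gaussian with variance D/theta:

```python
import numpy as np

rng = np.random.default_rng(4)
theta, D, dt, nsteps = 1.0, 0.5, 1e-3, 1_000_000

# Euler-Maruyama for dX = -theta*X dt + sqrt(2*D) dW
x = np.empty(nsteps)
x[0] = 0.0
noise = rng.normal(scale=np.sqrt(2 * D * dt), size=nsteps - 1)
for i in range(nsteps - 1):
    x[i + 1] = x[i] * (1.0 - theta * dt) + noise[i]

var_x = x.var()
print(var_x, D / theta)   # time-average variance vs stationary FPE variance
```

For a non-ergodic SDE (e.g. with disconnected basins), a single trajectory samples only one component, and the match with the ensemble FPE solution fails.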

I am trying to symbolically evaluate the expectation of a complicated multivariate random-variable expression. The multivariate distribution is Gaussian, which is easily (but abstractly) specified.

E.g., for the function f(x,y,z) = xyz - xy - xz - yz, where x, y, z all belong to (0,1): when f takes its extremum, we get x = y = z. Does this property hold for every symmetric polynomial function?

When a system is described in terms of transfer functions, it is widely known that the closed-loop transfer function is Y(s)/X(s) = G(s)/(1 + G(s)H(s)). I'm looking for the equivalent relation, but in the state-space representation.
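A sketch of the standard negative-feedback interconnection in state space, assuming strictly proper G and H (both D matrices zero); with plant (Ag, Bg, Cg), feedback (Ah, Bh, Ch), input u = r - y_h and stacked state [x_g; x_h], the closed-loop matrices are built blockwise (all matrix names here are illustrative):

```python
import numpy as np

def closed_loop_ss(Ag, Bg, Cg, Ah, Bh, Ch):
    """Closed loop of plant G with feedback H, u = r - y_h (D terms zero)."""
    A = np.block([[Ag, -Bg @ Ch],
                  [Bh @ Cg, Ah]])
    B = np.vstack([Bg, np.zeros((Ah.shape[0], Bg.shape[1]))])
    C = np.hstack([Cg, np.zeros((Cg.shape[0], Ah.shape[0]))])
    return A, B, C

# example: G(s) = 1/(s+1), H(s) = 1/(s+2)
Ag, Bg, Cg = np.array([[-1.]]), np.array([[1.]]), np.array([[1.]])
Ah, Bh, Ch = np.array([[-2.]]), np.array([[1.]]), np.array([[1.]])
A, B, C = closed_loop_ss(Ag, Bg, Cg, Ah, Bh, Ch)
dc = (-C @ np.linalg.inv(A) @ B)[0, 0]   # DC gain of the closed loop
print(dc)   # G(0)/(1+G(0)H(0)) = 1/(1+0.5) = 2/3
```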

A noise term, eta, is added to the Cahn-Hilliard equation as follows.

d(phi)/dt=a1 (nabla^2) (phi) -a2 (phi) -a4 (phi^3) + eta

and eta is usually defined as

<eta(x,t) eta(x',t')> = A diracdelta(x-x') diracdelta(t-t')

which suggests that the noise terms are not correlated in time and space.

However, I am confused on how to implement this in the original equation. Do we just use a random number generator?

Many thanks.
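On the implementation question: yes, a Gaussian random number generator is the usual route, but the variance must be scaled so that the discrete noise approximates the delta-correlated continuum noise. Representing delta(x-x') by 1/dx and delta(t-t') by 1/dt on the grid, the Gaussian increment added per explicit Euler step has standard deviation sqrt(A*dt/dx) in 1D. A minimal sketch (all coefficients are placeholders):

```python
import numpy as np

rng = np.random.default_rng(5)
N, dx, dt = 256, 1.0, 0.01
a1, a2, a4, A = 1.0, -1.0, 1.0, 0.1   # assumed coefficients (a2 < 0: double well)
phi = 0.01 * rng.standard_normal(N)

def laplacian(f):
    # periodic second difference
    return (np.roll(f, 1) - 2 * f + np.roll(f, -1)) / dx**2

for _ in range(1000):
    # variance A/(dx*dt) per grid point per unit time; the Euler step carries
    # a factor dt, so the added increment has std sqrt(A*dt/dx)
    eta = rng.normal(scale=np.sqrt(A * dt / dx), size=N)
    phi = phi + dt * (a1 * laplacian(phi) - a2 * phi - a4 * phi**3) + eta
print(phi.std())
```

In d dimensions the scale generalizes to sqrt(A*dt/dx**d); for the conserved (true Cahn-Hilliard) form, the noise must additionally be conserved, e.g. added as the divergence of a random flux.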

Why can't we use KAM theory to obtain the recurrence time?

Why do we need to calculate the inverse participation ratio (IPR) for our system? What does it tell us physically about the system, and why should we calculate it? Is there a specific physical reason to calculate this quantity?
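As a concrete illustration (not from the question itself): the IPR, sum_n |psi_n|^4, measures how many sites an eigenstate occupies, so it distinguishes localized states (IPR of order 1) from extended ones (IPR ~ 1/N). A minimal sketch for the 1D Anderson model:

```python
import numpy as np

rng = np.random.default_rng(6)
N, W = 500, 3.0                      # chain length, disorder strength

# 1D Anderson model: nearest-neighbour hopping + diagonal disorder
H = np.diag(rng.uniform(-W / 2, W / 2, N)) \
  + np.diag(-np.ones(N - 1), 1) + np.diag(-np.ones(N - 1), -1)
vals, vecs = np.linalg.eigh(H)

# IPR per eigenstate: bounded between 1/N (uniform) and 1 (single site)
ipr = (np.abs(vecs)**4).sum(axis=0)
print(ipr.min(), ipr.max())
```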

Some items use a 5-point Likert scale, some a 7-point Likert scale, etc., and some items are on a semantic differential scale.

Does the Thouless formula work only in the quasi-one-dimensional case, or in any dimension?

I am trying the quasi-1D case, and the localization length is not coming out exactly.

The thermal rate coefficient can be obtained from the reactive cross section σ(E_coll):

k(T) = c(T) × ∫ P(T,E_coll) E_coll σ(E_coll) dE_coll

where E_coll is the relative collision energy, c(T) is a constant at a given temperature, and P(T,E_coll) is the statistical weight.

In the normal case, Boltzmann statistics is used for the calculation of the statistical weights. But Boltzmann statistics is valid when the temperature is high and the particles are distinguishable. At ultralow temperatures (T < 10 K) we should use the appropriate quantum statistics (Fermi or Bose).

What kind of quantum statistics should be used in the collision of a

radical [spin = 1/2] + closed-shell molecule (spin = 0)

at ultralow temperatures?

What is the form of P(T,E_coll) in this case?

In Galit Shmueli's "To Explain or Predict," https://www.researchgate.net/publication/48178170_To_Explain_or_to_Predict, on pages 5 and 6, there is a reference to Hastie, Tibshirani, and Friedman (2009), for statistical learning, which breaks the expected square of the prediction error into two parts: the variance of the prediction error and the square of the bias due to model misspecification. (The variance-bias tradeoff is discussed in Hastie et al. and other sources.)

An example of another kind of variance-bias tradeoff that comes to mind would be the use of cutoff or quasi-cutoff sampling for highly skewed establishment surveys using model-based estimation (i.e., prediction from regression in such a cross-sectional survey of a finite population). The much smaller variance obtained is partially traded for a higher bias applied to small members of the population, which should not account for very much of the population totals (as may be studied by cross-validation and other means). Thus some model misspecification will often not be crucial, especially if applied to carefully grouped (stratified) data.

[Note that if a BLUE (best linear unbiased estimator) is considered desirable, it is the estimator with the best variance, so bias must be considered under control, or you have to do something about it.]

Other means to trade off variance and bias seem apparent: general examples include various small area estimation (SAE) methods.

Shrinkage estimators trade off increased bias for lower variance.

Are there other general categories of applications that come to mind?

Do you have any specific applications that you might share?

Perhaps you may have a paper on ResearchGate that relates to this.

Any example of any kind of bias variance tradeoff would be of possible interest.

Thank you.

Article To Explain or to Predict?
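As a toy illustration of the shrinkage case mentioned above, the sketch below simulates the mean of a normal sample shrunk toward zero; the MSE = bias² + variance is minimized at a nonzero shrinkage factor (all numbers are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
mu, sigma, n, reps = 2.0, 4.0, 10, 200000

x = rng.normal(mu, sigma, size=(reps, n))
xbar = x.mean(axis=1)                 # unbiased, variance sigma^2/n = 1.6

results = {}
for lam in (0.0, 0.2, 0.4, 0.6):
    est = (1 - lam) * xbar            # shrink toward 0: bias -lam*mu
    results[lam] = (est.mean() - mu)**2 + est.var()   # MSE = bias^2 + var
print(results)   # lam = 0.2 beats the unbiased lam = 0 here
```

With these numbers the theoretical MSE is lam²·mu² + (1-lam)²·sigma²/n, minimized at lam* = (sigma²/n)/(mu² + sigma²/n) ≈ 0.29, so a modest amount of bias is worth buying.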

Hello, I am trying to carry out a spatio-temporal analysis of some data recorded at monthly intervals at different locations using the gstat package of Edzer Pebesma. The data are recorded once a month for a few years. The problem is that, despite following all the instructions in many documents, e.g. st.pdf by Edzer Pebesma and many variants such as Benedikt's, I am still unable to create the required spatio-temporal object. Can anybody help me in this regard? I will upload the data if somebody turns up to help. The data are simple readings of SO2 at 33 places, with 12 readings per year for 4 years. What I want is an analysis like this link: http://www.r-bloggers.com/spatio-temporal-kriging-in-r/.

I want to know how to calculate the maximal Lyapunov exponent (MLE) of a Hamiltonian system numerically; specifically, the MLE of particle trajectories.

I'm using this paper as a reference: http://chaos.utexas.edu/manuscripts/1085774778.pdf. The MLE is obtained from the logarithm of the phase-space separation, so for a chaotic system we should get a straight line when plotting the log-separation vs time.

I found something like a log curve saturating at MLE ~0.2, with a little 'noise' (figure). I investigated the behavior of this system with Poincaré sections, which show chaos.

Is the behavior I found in the figure due to some computational problem, like finite-size effects or the time scale? How can I interpret this behavior?

P.S. I'm limiting the separation of the particles to 10^-4 and starting the test system (a copy, except for one particle with its position shifted by 10^-2) after the reference system has relaxed.
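One way to rule out an implementation problem is to validate the procedure on a system with a known exponent; saturation at late times is expected once the separation reaches the attractor size, which is why Benettin-style schemes periodically renormalize the separation back to a small value. A sketch for the logistic map at r = 4, whose MLE is ln 2 ≈ 0.693 (all parameters are illustrative):

```python
import numpy as np

def mle_logistic(r=4.0, n=200_000, d0=1e-9, x0=0.123):
    """Benettin-style estimate of the maximal Lyapunov exponent."""
    x, y = x0, x0 + d0
    s = 0.0
    for _ in range(n):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        d = abs(y - x)
        if d == 0.0:                  # guard against exact coincidence
            y, d = x + d0, d0
        s += np.log(d / d0)           # accumulate log of the growth factor
        y = x + d0 * (y - x) / d      # renormalize separation back to d0
    return s / n

val = mle_logistic()
print(val)   # ~ ln 2 = 0.693 for r = 4
```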

Correlated sources of noise live in between regularity and randomness. How does one define a bath in this context?

Thanks in advance

In the 2D case, we cannot neglect high-order terms in the expansion, because they have the same canonical dimensions, therefore all terms are essential in the renormalization procedure (they are relevant). Thus, from that point of view, the phi^4-approximation breaks down in 2D.

I am studying the kappa-statistics; however, I am having difficulty expanding the kappa-exponential and writing it in a more compact form (in terms of sums and/or products).
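A possibly useful identity is exp_κ(x) = (sqrt(1 + κ²x²) + κx)^(1/κ) = exp(arcsinh(κx)/κ), which makes series manipulation easier. A SymPy sketch of the low-order expansion (the truncation order is arbitrary):

```python
import sympy as sp

x, k = sp.symbols('x kappa')
# kappa-exponential via the arcsinh identity
exp_kappa = sp.exp(sp.asinh(k * x) / k)

ser = sp.expand(sp.series(exp_kappa, x, 0, 4).removeO())
print(ser)   # 1 + x + x**2/2 + (1 - kappa**2)*x**3/6, up to term ordering
```

The first deviation from the ordinary exponential appears at third order, with coefficient (1 - κ²)/6, consistent with exp_κ → exp as κ → 0.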

The basis of phase-field modeling is minimization of a functional representing some thermodynamic potential (e.g., free energy), which is conserved in the system considered. Therefore, the time evolution of the system described by the phase field is not the real kinetics, but just some pathway along which the system proceeds to the equilibrium state.

It is like using Metropolis Monte Carlo to minimize the energy of a system. The final state might correspond to some local or global minimum, but the way the system relaxes to it is not the real kinetics. The real kinetic pathway should be described by a kinetic Monte Carlo approach.

Therefore, the question of applicability of the phase field methods to non-equilibrium problems arises.

**Are these methods applicable for microstructure evolution under irradiation?**

I am aware of the large number of publications on void and gas bubble growth under irradiation. However, **I am interested in a justification of this approach from first principles**, not just *"use it because the others do so"*.

I would enjoy a discussion on the topic; many thanks for your replies.

How do I plot a graph of the hydrodynamic radius distribution function vs the hydrodynamic radius? As I have read in the literature, there are two ways: do a cumulant analysis or do a Laplace inversion. If I do a cumulant analysis, I get the mean size, polydispersity and skewness. So I have the x-axis of the graph (hydrodynamic radius); how do I find the y-axis (the hydrodynamic radius distribution) from this cumulant-fitted data?

Thanks for the help and time in advance.

Molecules of a usual gas move stochastically. This stochasticity is a result of the interaction between gas molecules (collisions of molecules). Let us imagine that the molecules of the gas interact by means of some force field. In this case the character of the stochasticity of the molecular motion changes. Let us take into account that the wave function is a method of describing any ideal fluid motion (see "Spin and wave function as attributes of ideal fluid", *J. Math. Phys.* **40**, 256-278 (1999); electronic version http://gasdyn-ipm.ipmnet.ru/~rylov/swfaif4.pdf ). Is it possible to choose such a force field that a gas with such interaction between molecules is described by the Klein-Gordon (or Schroedinger) equation? In other words, is a classical description of quantum particles possible?

I know about the standard error, but I do not understand the asymptotic standard error and how it is related to the standard error.

It is an old Adler-Weiss result that the Boole mapping R∋x→x-1/x∈R is Lebesgue-measure-preserving and ergodic. What can one state about the mappings R²∋(x,y)→(x-1/y,y+1/x)∈R² and R²∋(x,y)→(y-1/x,x+1/y)∈R²?

The Conjecture was recently claimed in the note attached.

My experience with regression is primarily with regression through the origin, not necessarily with one regressor, but let us consider that. This is generally the case when examining establishment survey data, but do not limit yourself to such applications. As the size variable, the predictor x, becomes larger, one expects that the variance of y will become larger. Thus a coefficient of heteroscedasticity, gamma, may be used in regression weights of the form w = 1/x^(2·gamma), so that, for example, for the classical ratio estimator where gamma = 0.5, we have w = 1/x. As Ken Brewer (formerly, Australian National University) has theorized, and I and perhaps others have found experimentally, for an establishment survey such gamma should generally be estimated to be between 0.5 and 1.0. (There are multiple methods for estimating gamma; see the papers at the links below.) This becomes an important part of the error structure.

But what of cases where data are not so limited to the first quadrant; cases where y may often be negative and so might the regressor, x? OLS is often used, and discussion may instead be on influential data points. I would like to hear some example applications of the natural occurrence, or non-occurrence, of heteroscedasticity under such circumstances. I suppose that various subject matter applications may influence whether or not there will be heteroscedasticity.

Note that just because heteroscedasticity is not generally considered in a given area of application, does not mean that it does not exist, and perhaps should be considered in the error structure.

What are your experiences; thoughts? Thank you.

Conference Paper Alternative to the Iterated Reweighted Least Squares Method ...
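As a small numerical illustration of one common approach (not necessarily Brewer's), gamma can be estimated from the regression of the log absolute residuals on log x, after which WLS weights w = 1/x^(2·gamma) can be formed; all numbers below are simulated:

```python
import numpy as np

rng = np.random.default_rng(8)
n, beta, gamma = 5000, 2.0, 0.75

x = rng.uniform(1.0, 10.0, n)
# heteroscedastic errors: sd proportional to x^gamma
y = beta * x + x**gamma * rng.standard_normal(n)

# OLS through the origin, then estimate gamma from the residuals
b_ols = (x * y).sum() / (x * x).sum()
resid = y - b_ols * x
g_hat = np.polyfit(np.log(x), np.log(np.abs(resid)), 1)[0]
print(b_ols, g_hat)   # g_hat roughly recovers gamma

# WLS with w = 1/x^(2*gamma): b = sum(w x y) / sum(w x^2)
w = x ** (-2 * g_hat)
b_wls = (w * x * y).sum() / (w * x * x).sum()
print(b_wls)
```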

Especially in the Bayesian method or the generalized least squares method.

I know there is a technique to measure the second virial coefficient for the interaction of dilute colloidal particles in a mixed solvent, as described by pioneers like M. L. Kurnaz, J. V. Maher, etc.; however, I need someone to explain the technique in its simplest form, because to some extent I am unfamiliar with the expressions and methods in this area. I would appreciate it if someone could help me with this problem.

What changes in the transfer laws (transfer of energy, mass, and momentum) when dealing with short-time phenomena (about a few ms)?

I have observed that Fourier's law, Fick's law, and Laplace's law for bubbles are not enough to describe reality when I try to simulate brief phenomena. It seems that all transport laws derive from the Boltzmann equation.

My question is how to adapt these laws when I face a short-time phenomenon.
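One standard short-time adaptation of this kind is the Maxwell-Cattaneo law, which adds a flux relaxation time tau to Fourier's law: tau * dq/dt + q = -k * grad T, so the flux needs a time of order tau to follow a change in the gradient. A minimal sketch for a suddenly applied, constant gradient (all parameter values are illustrative, not for any specific material):

```python
import numpy as np

# Maxwell-Cattaneo relaxation of the heat flux: tau * dq/dt + q = -k * grad_T.
# At t << tau the flux has not yet reached its Fourier value; for t >> tau it
# relaxes to q_F = -k * grad_T and Fourier's law is recovered.
tau = 1e-3          # flux relaxation time (s), assumed
k = 0.6             # thermal conductivity (W/m/K), assumed
grad_T = -100.0     # suddenly applied temperature gradient (K/m), assumed

q_fourier = -k * grad_T                      # steady Fourier flux
t = np.linspace(0.0, 5.0 * tau, 1000)
q = q_fourier * (1.0 - np.exp(-t / tau))     # exact solution of the ODE

print(q[-1] / q_fourier)  # ≈ 0.993: Fourier behavior only after a few tau
```

For a phenomenon lasting a few ms, whether this correction matters depends on whether tau is comparable to the process duration; the same relaxation-time structure applies to Fick's law (leading to telegraph-type equations).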

I wanted to look at the analytical solutions to some quantum mechanical systems and follow along with the math step by step.

I was able to find (at least I think) such a solution for the H2+ ion, but that is a single-electron system, i.e. there is no electron correlation. Has this been solved for H2 or some other multi-electron system, and if so, could I be pointed to the relevant solutions?

Here `water' means a collection of molecules that is actually able to `boil', i.e. to feature a liquid-gas phase transition. I believe the answer should be a finite number rather than infinity; cf. the definite answer for an ideal gas in the publication below.

Can anybody recommend a statistical mechanics book that:

1) is suited for self-teaching in an undergrad level

2) includes solved problems

3) includes the math needed for each topic

What I have in mind is something similar to what "Quantum Chemistry" by Ira Levine does to QM: each of the chapters includes an intro explaining all the math you'll need for the chapter subject, considering the reader has only a good calculus background. I'm looking for something similar for statistical mechanics, aimed at those interested in molecular dynamics.

I'm going to perform a classical MD simulation of a graphene oxide nanosheet. Everything is fine when I run the simulation without Coulomb interactions, but when I include them, the volume diverges (it increases steadily toward infinity). What is going on here?

I heard this idea and wanted to check it, but I could not find a good source on the internet. A good reference would be appreciated.

Recent (November 2013) public debate videos are up on youtube; for description, see: http://www.ece.tamu.edu/~noise/HotPI_2013/HotPI_2013.html

I am working with a network composed of polymeric Gaussian chains. I would like to use the replica formalism to study the deformation properties of the network. My network is deformed affinely.

I mean that most potentials (Lennard-Jones, DLVO, etc.) have spherical symmetry and thus are not appropriate close to a surface/interface. Indeed, the physics of a surface is not the same as that of the bulk.

The physical properties (for example, of a material) at the nanoscale are not the same as they are at the macroscopic scale. Thus, can we use thermodynamics and statistical physics to explain nanophysics properties? An example: the demonstration of the fluctuation-dissipation theorem uses temperature, but is it possible to define a temperature at the nanoscale? Another example: what about the ergodic assumption?

Suppose you have two non-stationary, interacting systems with weak coupling (or one spatially extended system, e.g. the brain, with signals measured at different locations). Since the systems are coupled, you obtain some degree of synchronization. Since the systems are non-stationary, the synchronization is time dependent. What parameters can be used to quantify this time dependence of synchronization? What insight do they give regarding the structure and dynamics of the system?
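One common choice is the phase-locking value (PLV) computed in sliding windows from instantaneous phases: values near 1 indicate phase synchronization, values near 0 its absence, and the window sequence tracks how it changes in time. A self-contained numpy sketch (the analytic signal is built via FFT, as scipy.signal.hilbert does; the test signals are invented for illustration):

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT, for extracting the instantaneous phase
    (same construction as scipy.signal.hilbert; len(x) assumed even)."""
    n = x.size
    h = np.zeros(n)
    h[0] = h[n // 2] = 1.0
    h[1:n // 2] = 2.0
    return np.fft.ifft(np.fft.fft(x) * h)

def sliding_plv(x, y, win):
    """Phase-locking value |<exp(i*(phi_x - phi_y))>| over consecutive
    windows: ~1 means phase synchronization, ~0 means none."""
    dphi = np.angle(analytic_signal(x)) - np.angle(analytic_signal(y))
    z = np.exp(1j * dphi)
    return np.array([np.abs(z[i:i + win].mean())
                     for i in range(0, z.size - win + 1, win)])

# toy pair: y is phase-locked to x in the first half, pure noise afterwards
fs = 500.0
t = np.arange(0.0, 4.0, 1.0 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
y = np.where(t < 2.0, np.sin(2 * np.pi * 10 * t + 0.5),
             rng.standard_normal(t.size))
plv = sliding_plv(x, y, win=250)   # one PLV value per half-second window
print(plv)
```

The PLV says nothing about directionality; for that, measures such as transfer entropy or Granger causality are the usual complements.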

One could do that via the virial theorem and the radial correlation function, but I was wondering if there is something more efficient available.
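If the pair potential is known, the mean potential energy can also be obtained by direct pair summation, which avoids accumulating g(r) first. A minimal numpy sketch for a truncated Lennard-Jones fluid in reduced units (function name and parameter values are my own illustration):

```python
import numpy as np

def potential_energy_direct(pos, box, eps=1.0, sigma=1.0, r_cut=2.5):
    """Mean potential energy per particle by direct pair summation of a
    truncated Lennard-Jones potential with the minimum-image convention."""
    n = len(pos)
    u = 0.0
    for i in range(n - 1):
        d = pos[i + 1:] - pos[i]
        d -= box * np.round(d / box)          # minimum-image convention
        r2 = np.sum(d * d, axis=1)
        r2 = r2[r2 < r_cut ** 2]              # apply cutoff
        inv6 = (sigma ** 2 / r2) ** 3         # (sigma/r)^6
        u += np.sum(4.0 * eps * (inv6 ** 2 - inv6))
    return u / n

# sanity check: a dimer at the LJ minimum distance 2^(1/6)*sigma
pos = np.array([[0.0, 0.0, 0.0], [2.0 ** (1.0 / 6.0), 0.0, 0.0]])
box = np.array([20.0, 20.0, 20.0])
print(potential_energy_direct(pos, box))  # -0.5: one pair at u = -eps, per particle
```

This is O(N^2); with a cutoff, cell lists or neighbor lists bring it to O(N), which most MD codes already maintain for the force loop anyway.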

Out-of-equilibrium thermodynamics often studies changes in a structure over long time scales. Could first-order transitions give information about out-of-equilibrium evolving structures? I mean that the time scale of a first-order transition tends to zero (for example, the liquid-solid transition), but the systems involved are often simpler. I would suggest that scientists study first-order transitions as if they were out of equilibrium (during the duration of the transition).

I believe nearly all books on statistical mechanics cover Hamiltonian systems, and then Liouville's theorem and the Boltzmann-Gibbs distribution are naturally discussed. I don't see the connection between a Hamiltonian system and the Boltzmann-Gibbs distribution: the former describes the deterministic dynamics of one particle, while the latter expresses a statistical law for a large number of particles, which seems to be random.

Is there a relation between the convergence rate of a Monte Carlo simulation and the entropy of the simulated system?

Sounds like a simple, standard thing; I checked a couple of books and papers but just couldn't find much:

Where can I find a plot of magnetization vs. temperature (or vs. normalized interaction strength) at a finite, constant external field for the 2D or 3D Ising model? I have heard there is a theory for infinitesimal external field strength, but has there been progress on larger fields?
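Failing a published plot, a curve at any fixed field is cheap to generate yourself with a Metropolis simulation; there is no exact closed form for the 2D Ising magnetization at finite h (only the h = 0 case is solved), so numerics is the usual route. A minimal sketch (lattice size and sweep counts are deliberately small):

```python
import numpy as np

def ising_magnetization(L, T, h, sweeps=150, seed=0):
    """Metropolis estimate of the magnetization per spin m(T, h) for the
    2D Ising model (J = 1, external field h) on an L x L periodic lattice."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(L, L))
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.integers(0, L, size=2)
            nb = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                  + s[i, (j + 1) % L] + s[i, (j - 1) % L])
            dE = 2.0 * s[i, j] * (nb + h)     # energy cost of flipping s[i, j]
            if dE <= 0.0 or rng.random() < np.exp(-dE / T):
                s[i, j] = -s[i, j]
    return s.mean()

# magnetization vs. temperature at fixed external field h = 0.1
for T in (1.5, 2.27, 3.5):
    print(T, ising_magnetization(12, T, h=0.1))
```

At finite h the sharp transition is smeared into a crossover, so the curve stays smooth through the zero-field critical temperature T_c ≈ 2.27.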

I recently began a project on MCMC, and I think it would help to have some simple cases on which to run simulations and analyses. Could someone introduce a few simple, classical examples or models, such as the Ising model? And how should I compare my results with published ones? By re-implementing others' algorithms and simulating again?
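A classic first exercise, even simpler than the Ising model, is random-walk Metropolis sampling of a distribution whose moments are known exactly, so the comparison with "others" is just the analytic answer. A minimal sketch targeting the standard normal:

```python
import numpy as np

def metropolis_gaussian(n_samples, step=1.0, seed=0):
    """Random-walk Metropolis chain targeting the standard normal N(0, 1)."""
    rng = np.random.default_rng(seed)
    x = 0.0
    out = np.empty(n_samples)
    for i in range(n_samples):
        prop = x + step * rng.standard_normal()
        # accept with probability min(1, pi(prop)/pi(x)), pi(x) ~ exp(-x^2/2)
        if np.log(rng.random()) < 0.5 * (x * x - prop * prop):
            x = prop
        out[i] = x       # on rejection the chain repeats the current state
    return out

samples = metropolis_gaussian(50000)
print(samples.mean(), samples.var())  # should be close to 0 and 1
```

The same skeleton (propose, compute an acceptance ratio, accept or repeat) carries over directly to the Ising model, where the proposal is a single spin flip and the ratio is exp(-dE/T).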