Teaching the Principles of Statistical Dynamics

American Journal of Physics 74(2):123-133, February 2006. DOI: 10.1119/1.2142789
arXiv:cond-mat/0507388v1 [cond-mat.stat-mech] 16 Jul 2005
Teaching the Principles of Statistical Dynamics
Kingshuk Ghosh and Ken A. Dill
Department of Biophysics, University of California, San Francisco, CA 94143

Mandar M. Inamdar, Effrosyni Seitaridou, and Rob Phillips
Division of Engineering and Applied Science and Kavli Nanoscience Institute,
California Institute of Technology, Pasadena, CA 91125
We describe a simple framework for teaching the principles that underlie the dynamical laws of transport: Fick's law of diffusion, Fourier's law of heat flow, the Newtonian viscosity law, and mass-action laws of chemical kinetics. In analogy with the way that the maximization of entropy over microstates leads to the Boltzmann law and predictions about equilibria, maximizing a quantity that E. T. Jaynes called "Caliber" over all the possible microtrajectories leads to these dynamical laws. The principle of Maximum Caliber also leads to dynamical distribution functions which characterize the relative probabilities of different microtrajectories. A great source of recent interest in statistical dynamics has resulted from a new generation of single-particle and single-molecule experiments which make it possible to observe dynamics one trajectory at a time.
PACS numbers: 51.10.+d 05.40.-a 05.70.Ln
I. INTRODUCTION
We describe an approach for teaching the principles that underlie the dynamical laws of transport: of particles (Fick's law of diffusion), energy (Fourier's law of heat flow), momentum (the Newtonian law for viscosity),[1] and mass-action laws of chemical kinetics.[2] Recent experimental advances now allow for studies of forces and flows at the single-molecule and nanoscale level, representative examples of which may be found in the references.[3-10] For example, single-molecule methods have explored the packing of DNA inside viruses,[7] and the stretching of DNA and RNA molecules.[9,10] Similarly, video microscopy now allows for the analysis of trajectories of individual submicron-size colloidal particles,[11] and the measurement of single-channel currents has enabled the kinetic studies of DNA translocation through nanopores.[3,6]
One of the next frontiers in biology is to understand the "small numbers" problem: how does a biological cell function, given that most of its proteins and nucleotide polymers are present in numbers much smaller than Avogadro's number?[12] For example, one of the most important molecules, a cell's DNA, occurs in only a single copy. Also, it is the flow of matter and energy through cells that makes it possible for organisms to maintain a relatively stable form.[13] Hence, in order to function, cells always have to be in this state far from equilibrium. Thus, many problems of current interest involve small systems that are out of equilibrium. Our interest here is two-fold: to teach our students a physical foundation for the phenomenological macroscopic laws, which describe the properties of averaged forces and flows, and to teach them about the dynamical fluctuations, away from those average values, for systems containing small numbers of particles.
In this article, we describe a very simple way to teach these principles. We start from the "principle of Maximum Caliber", first described by E. T. Jaynes.[14] It aims to provide the same type of foundation for the dynamics of many-degree-of-freedom systems that the second law of thermodynamics provides for equilibria of such systems. To illustrate the principle, we use a slight variant of one of the oldest and simplest models in statistical mechanics, the Dog-Flea Model, or Two-Urn Model.[15,16] Courses in dynamics often introduce Fick's law, Fourier's law, and the Newtonian-fluid model as phenomenological laws, rather than deriving them from some deeper foundation. Here, instead, we describe a simple unified perspective that we have found useful for teaching these laws from a foundation in statistical dynamics. In analogy with the role of microstates as a basis for the properties of equilibria, we focus on microtrajectories as the basis for predicting dynamics. One argument that might be leveled against this kind of framework is that, in the cases considered here, it is not clear that it leads to anything different from what one obtains using conventional nonequilibrium thinking. On the other hand, restating the same physical result in different language can often provide a better starting point for subsequent reasoning. This point was well articulated by Feynman in his Nobel lecture,[17] in which he noted: "Theories of the known, which are described by different physical ideas may be equivalent in all their predictions and are hence scientifically indistinguishable. However, they are not psychologically identical when trying to move from that base into the unknown. For different views suggest different kinds of modifications which might be made and hence are not equivalent in the hypotheses one generates from them in one's attempt to understand what is not yet understood."
We begin with the main principle embodied in Fick's Law. Why do particles and molecules in solution flow from regions of high concentration toward regions of low concentration? To keep it simple, we consider one-dimensional diffusion along a coordinate x. This is described by Fick's first law of particle transport,[1,2] which says that the average flux, ⟨J⟩, is given in terms of the gradient of the average concentration, ∂⟨c⟩/∂x, by

    ⟨J⟩ = −D ∂⟨c⟩/∂x,    (1)

where D is the diffusion coefficient. In order to clearly distinguish quantities that are dynamical averages from those that are not, we indicate the averaged quantities explicitly by brackets, ⟨...⟩. We describe the nature of this averaging below, and the nature of the dynamical distribution functions over which the averages are taken. But first, we briefly review the standard derivation of the diffusion equation. Combining Fick's first law with particle conservation,

    ∂⟨c⟩/∂t = −∂⟨J⟩/∂x,    (2)

gives Fick's second law, also known as the diffusion equation:

    ∂⟨c⟩/∂t = D ∂²⟨c⟩/∂x².    (3)

Solving Eq. (3) subject to two boundary conditions and one initial condition gives both ⟨c(x, t)⟩, the average concentration in time and space, and the average flux ⟨J(x, t)⟩, when no other forces are present. The generalization to situations involving additional applied forces is the Smoluchowski equation.[2]
A simple experiment shows the distinction between averaged quantities and individual microscopic realizations. Using a microfluidics chip like that shown in Fig. (1a), it is possible to create a small fluid chamber divided into two regions by control valves. The chamber is filled on one side with a solution containing a small concentration of micron-scale colloidal particles. The other region contains just water. The three control valves on top of the microfluidic chamber serve two purposes: the two outer ones are used for isolation, so that no particles can diffuse in and out of the chamber, while the middle control valve provides the partition between the two regions. The time evolution of the system is then monitored after the removal of the partition (see Fig. (1b)). The time-dependent particle density is determined by dividing the chamber into a number of equal-sized boxes along the long direction and by computing histograms of the numbers of particles in each slice as a function of time. This is a colloidal-solution analog of the gas diffusion experiments of classical thermodynamics. The corresponding theoretical model usually used is the diffusion equation. Fig. (1c) shows the solution to the diffusion equation, as a function of time, for the geometry of the microfluidics chip. The initial condition is a step function in concentration at x = 200 µm at time t = 0.
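The smooth theoretical curves of this kind are easy to generate numerically. The sketch below integrates Eq. (3) by explicit finite differences for a closed chamber with a step-function initial condition; the grid size, chamber length, and value of D are illustrative assumptions, not the experimental parameters.

```python
# Illustrative finite-difference integration of Eq. (3) for a closed chamber
# with a step-function initial condition. Grid size, chamber length, and the
# value of D are assumed stand-ins, not the experimental values.
L = 400.0                     # chamber length (micrometers, assumed)
nx = 80                       # number of grid boxes
dx = L / nx
D = 1.0                       # diffusion coefficient (um^2/s, assumed)
dt = 0.2 * dx * dx / D        # explicit-scheme stability requires dt <= dx^2/(2D)

# step function: concentration 1 for x < 200 um, 0 beyond
c = [1.0 if (i + 0.5) * dx < 200.0 else 0.0 for i in range(nx)]

def step(c):
    """One explicit Euler update with no-flux (closed-chamber) walls."""
    new = c[:]
    for i in range(nx):
        left = c[i - 1] if i > 0 else c[i]          # reflecting wall
        right = c[i + 1] if i < nx - 1 else c[i]    # reflecting wall
        new[i] = c[i] + D * dt / dx ** 2 * (left - 2.0 * c[i] + right)
    return new

mass0 = sum(c) * dx           # total particle number is conserved
for _ in range(20000):
    c = step(c)
mass1 = sum(c) * dx
# the profile relaxes toward the uniform equilibrium value 0.5
```

The no-flux walls mimic the sealed chamber: mass is conserved exactly and the step profile relaxes toward a uniform concentration.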
We use this simple experiment to illustrate one main point. The key distinction is that the theoretical curves are very smooth, while there are very large fluctuations in the experimentally observed dynamics of the particle densities. The fluctuations are large because the number of colloidal particles is small, tens to hundreds. The experimental data show that the particle concentration c(x, t) is a highly fluctuating quantity. It is not well described by the standard smoothed curves that are calculated from the diffusion equation. Of course, when averaged over many trajectories, or when particles are at high concentrations, the experimental data should approach the smoothed curves predicted by the classical diffusion equation. Our aim here is to derive Fick's Law and other phenomenological transport relations at a microscopic level, so that we can consider both the behavior of average properties and the fluctuations, i.e., the dynamical distribution functions, and to illustrate the Maximum Caliber approach.
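To see the fluctuations themselves, one can simulate individual microtrajectories rather than the averaged equation. A minimal lattice random-walk sketch (all parameter values illustrative): a few dozen walkers, as in the colloid experiment, hop left or right at each tick, and the box counts play the role of the measured histograms.

```python
import random

random.seed(2)

# A minimal random-walk sketch of the chip experiment, one microtrajectory at
# a time: a small number of walkers hop left or right each tick between
# reflecting walls. All numbers are illustrative.
nboxes = 20
nwalkers = 60
positions = [random.randrange(nboxes // 2) for _ in range(nwalkers)]  # left half

def tick(positions):
    moved = []
    for x in positions:
        x += random.choice((-1, 1))
        moved.append(min(max(x, 0), nboxes - 1))   # reflecting walls
    return moved

for _ in range(500):
    positions = tick(positions)

counts = [0] * nboxes
for x in positions:
    counts[x] += 1
left = sum(counts[: nboxes // 2])
# roughly half the walkers end up in each half, but any single run fluctuates
```

A single run produces ragged histograms like the experimental ones; averaging many seeded runs recovers the smooth diffusion-equation profile.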
II. THE EQUILIBRIUM PRINCIPLE OF MAXIMUM ENTROPY

We are interested here in dynamics, not statics. However, our strategy follows so closely the Jaynes derivation of the Boltzmann distribution law of equilibrium statistical mechanics[2,18] that we first show the equilibrium treatment. To derive the Boltzmann law, we start with a set of equilibrium microstates i = 1, 2, 3, . . . , N that are relevant to the problem at hand. We aim to compute the probabilities p_i of those microstates in equilibrium. We define the entropy, S, of the system as

    S({p_i}) = −k_B Σ_{i=1}^{N} p_i ln p_i,    (4)
where k_B is the Boltzmann constant. The equilibrium probabilities, p_i = p_i*, are those values of p_i that cause the entropy to be maximal, subject to two constraints:

    Σ_{i=1}^{N} p_i = 1,    (5)

which is a normalization condition that ensures that the probabilities p_i sum to one, and

    ⟨E⟩ = Σ_i p_i E_i,    (6)

which says that the energies, when averaged over all the microstates, sum to the macroscopically observable average energy.

Using Lagrange multipliers λ and β to enforce the first and second constraints, respectively, leads to an expression for the values p_i* that maximize the entropy:

    Σ_i [−1 − ln p_i − λ − βE_i] = 0.    (7)

The result is that

    p_i* = e^{−βE_i}/Z,    (8)

where Z is the partition function, defined by

    Z = Σ_i e^{−βE_i}.    (9)

After a few thermodynamic arguments, the Lagrange multiplier β can be shown to be equal to 1/k_B T.[18] This derivation, first given in this simple form by Jaynes,[18] identifies the probabilities that are both consistent with the observable average energy and that otherwise maximize the entropy. Jaynes justified this strategy on the grounds that it would be the best prediction that an observer could make, given the observable, if the observer is ignorant of all else. In this case, the observable is the average energy. While this derivation of the Boltzmann law is now quite popular, its interpretation as a method of prediction, rather than as a method of physics, is controversial. Nevertheless, for our purposes here, it does not matter whether we regard this as a description of physical systems or as a strategy for making predictions.
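The constrained maximization above can be made concrete numerically. The sketch below, for a made-up three-level system (the energies and the target ⟨E⟩ are arbitrary illustrative numbers), finds β by bisection so that the Boltzmann distribution reproduces ⟨E⟩, then checks that moving along the one remaining direction in probability space that preserves both constraints only lowers the entropy.

```python
import math

# Numerical sketch of the constrained maximization, for a made-up three-level
# system (energies and target <E> are arbitrary): find beta by bisection so
# that the Boltzmann distribution reproduces <E>, then check that moving along
# the one direction that preserves both constraints only lowers the entropy.
E = [0.0, 1.0, 2.0]
E_avg = 0.8                   # target average energy (assumed)

def boltzmann(beta):
    w = [math.exp(-beta * e) for e in E]
    Z = sum(w)                # partition function, Eq. (9)
    return [x / Z for x in w]

def mean_energy(p):
    return sum(pi * e for pi, e in zip(p, E))

def entropy(p):               # S/kB, Eq. (4)
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

lo, hi = -50.0, 50.0          # <E> decreases monotonically with beta
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mean_energy(boltzmann(mid)) > E_avg:
        lo = mid
    else:
        hi = mid
beta = 0.5 * (lo + hi)
p = boltzmann(beta)
S_max = entropy(p)

# the direction (1, -2, 1) keeps both the normalization and <E> fixed
t = 0.01
p_alt = [p[0] + t, p[1] - 2 * t, p[2] + t]
```

With three states and two constraints, only one feasible direction remains, so the comparison with p_alt is a complete check that the Boltzmann distribution is the entropy maximum.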
Now, we switch from equilibrium to dynamics, but we use a similar strategy. We switch from the Principle of Maximum Entropy to what Jaynes called the Principle of Maximum Caliber.[14] In particular, rather than focusing on the probability distribution p(E_i) for the various microstates, we seek p[{σ_i(t)}], where σ_i(t) is the i-th microscopic trajectory of the system. Again we maximize an entropy-like quantity, obtained from p[{σ_i(t)}], to obtain the predicted distribution of microtrajectories. If there are no constraints, this maximization results in the prediction that all the possible microtrajectories are equally likely during the dynamical process. In contrast, certain microtrajectories will be favored if there are dynamical constraints, such as may be specified in terms of the average flux.

In the following section, we use the Maximum Caliber strategy to derive Fick's Law using the Dog-Flea model, which is one of the simplest models that contain the physics of interest.
III. FICK’S LAW FROM THE DOG-FLEA MODEL
We want to determine the diffusive time evolution of particles in a one-dimensional system. The key features of this system are revealed by considering two columns of particles separated by a plane, as shown in Fig. (2). The left-hand column (1) has N_1(t) particles at time t and the right-hand column (2) has N_2(t) particles. This is a simple variant of the famous "Dog-Flea" model of the Ehrenfests, introduced in 1907.[15,16] Column (1) corresponds to Dog (1), which has N_1 fleas on its back at time t, and column (2) corresponds to Dog (2), which has N_2 fleas at time t. In any time interval between time t and t + Δt, any flea can either stay on its current dog, or it can jump to the other dog. This model has been used extensively to study the Boltzmann H-theorem and to understand how the time asymmetry of diffusion processes arises from an underlying time symmetry in the laws of motion.[15,16,19] Our model is used for a slightly different purpose. In particular, our aim is to take a well-characterized problem like diffusion and to reveal how the Principle of Maximum Caliber may be used in a concrete way. We follow the conventional definition of flux,
of a number of particles transferred per unit time and per unit area. For simplicity, we take the cross-sectional area to be unity.

First, consider the equilibrium state of our Dog-Flea model. The total number of ways of partitioning the (N_1 + N_2) fleas (particles) is W(N_1, N_2),

    W(N_1, N_2) = (N_1 + N_2)!/(N_1! N_2!).    (10)

The state of equilibrium is that for which the entropy, S = k_B ln W, is maximal. A simple calculation shows that the entropy is maximal when the value N_1 = N_1* is as nearly equal to N_2 = N_2* as possible. In short, at equilibrium, both dogs will have approximately the same number of fleas, in the absence of any bias.

Our focus here is on the dynamics of how the system reaches that state of equilibrium. We discretize time into a series of intervals Δt. We define a dynamical quantity p, which is the probability that a particle (flea) jumps from one column (dog) to the other in any time interval Δt. Thus, the probability that a flea stays on its dog during that time interval is q = 1 − p. We assume that p is independent of time t, and that all the fleas and jumps are independent of each other.
each other.
In equilibrium statistical mechanics, the focus is on the microstates. However, for dynamics we focus on processes, which, at the microscopic level, we call microtrajectories. Characterizing the dynamics requires more than just information about the microstates; we must also consider the processes. Let m_1 represent the number of particles that jump from column (1) to (2), and m_2 the number of particles that jump from column (2) to (1), between time t and t + Δt. There are many possible different values of m_1 and m_2: it is possible that no fleas will jump during the interval Δt, or that all the fleas will jump, or that the number of fleas jumping will be in between those limits. Each one of these different situations corresponds to a distinct microtrajectory of the system in this idealized dynamical model. We need a principle to tell us what number of fleas will jump during the time interval Δt at time t. Because the dynamics of this model is so simple, the implementation of the caliber idea reduces to nothing more than a simple exercise in enumeration and counting using the binomial distribution.
A. The Dynamical Principle of Maximum Caliber
The probability, W_d(m_1, m_2 | N_1, N_2), that m_1 particles jump to the right and that m_2 particles jump to the left in a discrete unit of time Δt, given that there are N_1(t) and N_2(t) fleas on the dogs at time t, is

    W_d(m_1, m_2 | N_1(t), N_2(t)) = [p^{m_1} q^{N_1 − m_1} N_1!/(m_1! (N_1 − m_1)!)] × [p^{m_2} q^{N_2 − m_2} N_2!/(m_2! (N_2 − m_2)!)],    (11)

where the first bracketed factor is W_{d1} and the second is W_{d2}.
W_d is a count of microtrajectories in dynamics problems in the same vein that W counts microstates for equilibrium problems. In the same spirit that the Second Law of Thermodynamics says to maximize W to predict states of equilibrium, now for dynamics, we maximize W_d over all the possible microtrajectories (i.e., over m_1 and m_2) to predict the fluxes of fleas between the dogs. This is the implementation of the Principle of Maximum Caliber within this simple model. Maximizing W_d over all the possible processes (different values of m_1 and m_2) gives our prediction (right flux m_1 = m_1* and left flux m_2 = m_2*) for the macroscopic flux that we should observe in experiments.

Since the jumps of the fleas from each dog are independent, we find our predicted macroscopic dynamics by maximizing W_{d1} and W_{d2} separately, or for convenience their logarithms:

    (∂ ln W_{di}/∂m_i)|_{N, m_i = m_i*} = 0,  i = 1, 2.    (12)
Note that applying Stirling's approximation to Eq. (11) gives:

    ln W_{di} = m_i ln p + (N_i − m_i) ln q + N_i ln N_i − m_i ln m_i − (N_i − m_i) ln(N_i − m_i).    (13)
We call C = ln W_d the caliber. Maximizing C with respect to m_i gives

    ∂ ln W_{di}/∂m_i = ln p − ln q − ln m_i + ln(N_i − m_i) = 0.    (14)
This result may be simplified to yield

    ln[m_i*/(N_i − m_i*)] = ln[p/(1 − p)],    (15)
which implies that the most probable jump number is simply given by

    m_i* = p N_i.    (16)
But since our probability distribution W_d is nearly symmetric about the most probable value of the flux, the average number and the most probable number are approximately the same. Hence, the average net flux to the right will be

    ⟨J(t)⟩ = (m_1* − m_2*)/Δt = p [N_1(t) − N_2(t)]/Δt ≈ −(p Δx²/Δt) ∂c(x, t)/∂x,

which is Fick's law, in this simple model, and where the diffusion constant is given by D = p Δx²/Δt. We have rewritten N_1 − N_2 = −Δc Δx, where Δc ≈ (∂c/∂x) Δx is the concentration difference between adjacent columns of width Δx.
Hence we have a simple explanation for why there is a net flux of particles diffusing across a plane down a concentration gradient: more microscopic trajectories lead downhill than uphill. It shows that the diffusion constant D is a measure of the jump rate p. This simple model does not make any assumption that the system is "near equilibrium", i.e., it does not invoke the Boltzmann distribution law, and thus it indicates that Fick's Law ought also to apply far from equilibrium. For example, we could have imagined that for very steep gradients, Fick's Law might have been only an approximation and that diffusion is more accurately represented as a series expansion of higher derivatives of the gradient. But, at least within the present model, Fick's Law is a general result which emerges from counting up microtrajectories. On the other hand, we would expect Fick's law to break down when the particle density becomes so high that the particles start interacting with each other, spoiling the assumption of independent particle jumps.
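The prediction m_i* = pN_i of Eq. (16) can be checked by brute force, without Stirling's approximation, by evaluating the binomial trajectory count for every possible jump number; the values of N and p below are arbitrary.

```python
import math

# Brute-force check of Eq. (16): the jump number m maximizing the trajectory
# count W_d = C(N, m) p^m q^(N-m) is the most probable value m* = pN.
# N and p are arbitrary illustrative values.
def W_d(N, m, p):
    q = 1.0 - p
    return math.comb(N, m) * p ** m * q ** (N - m)

N, p = 100, 0.3
m_star = max(range(N + 1), key=lambda m: W_d(N, m, p))
total = sum(W_d(N, m, p) for m in range(N + 1))   # probabilities sum to one
```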
B. Fluctuations in Diffusion

Above, we have shown that the most probable number of fleas that jump from dog (1) to dog (2) between time t and t + Δt is m_1* = p N_1(t). The model also tells us that sometimes we will have fewer fleas jumping during that time interval, and sometimes we will have more fleas. These variations are a reflection of the fluctuations resulting from the system following different microscopic pathways.
We focus now on predicting the fluctuations. To illustrate, let us first make a table of W_d, the different numbers of possible microtrajectories, taken over all the values of m_1 and m_2 (Table (I)). To keep this illustration simple, let us consider the following particular case: N_1(t) = 4 and N_2(t) = 2. Let us also assume p = q = 1/2. Here, then, are the multiplicities of all the possible routes of flea flow. A given entry tells how many microtrajectories correspond to the given choice of m_1 and m_2.
Notice first that the table confirms our previous discussion. The dynamical process for which W_d is maximal (12 microtrajectories, in this case) occurs when m_1 = pN_1 = 1/2 × 4 = 2 and m_2 = pN_2 = 1/2 × 2 = 1. You can compute the probability of that particular flux by dividing W_d = 12 by the sum of entries in this table, which is 2^6 = 64, the total number of microtrajectories. The result, which is the fraction of all the possible microtrajectories that have m_1 = 2 and m_2 = 1, is 12/64 ≈ 0.19. We have chosen an example in which the particle numbers are very small, so the fluctuations are large; they account for more than 80 percent of the flow. In systems having large numbers of particles, the relative fluctuations are much smaller than this.
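Table (I) can be reconstructed by direct counting; the entries, their total, and the most probable flux quoted above follow from the binomial coefficients.

```python
import math

# Direct reconstruction of Table (I): with N1 = 4, N2 = 2, and p = q = 1/2,
# every microtrajectory is equally likely, and the entry for (m1, m2) is
# C(4, m1) * C(2, m2).
N1, N2 = 4, 2
table = {(m1, m2): math.comb(N1, m1) * math.comb(N2, m2)
         for m1 in range(N1 + 1) for m2 in range(N2 + 1)}

total = sum(table.values())        # 2^6 = 64 microtrajectories in all
best = max(table, key=table.get)   # most probable jump numbers
frac = table[best] / total         # fraction of trajectories at the maximum
```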
Now look at the top right corner of this table. This entry says that there is a probability of 1/64 that both fleas on dog (2) will jump to the left while no fleas jump to the right, implying that the net flux, for that microtrajectory, is actually backwards, relative to the concentration gradient. We call these "bad actor" microtrajectories. In those cases, particles flow to increase the concentration gradient, not decrease it. Traditionally, "Maxwell's Demon" was an imaginary microscopic being that was invoked in similar situations in heat flow processes, i.e., where heat would flow from a colder object to heat up a hotter one, albeit with low probability.[20] In particular, the Demon was supposed to capture the bad-actor microtrajectories. At time t, there are 4 fleas on the left dog and 2 on the right. At the next instant in time, t + Δt, all 6 fleas are on the left dog, and no fleas are on the right-hand dog. Notice that this is not a violation of the Second Law, which is a tendency towards maximum entropy, because the Second Law is only a statement that the average flow must increase the entropy; it says nothing about the fluctuations.
Similarly, if you look at the bottom left corner of the table, you see a regime of "superflux": a net flux of 4 particles to the right, whereas Fick's Law predicts a net flow of only 2 particles to the right. This table illustrates that Fick's Law is only a description of the average or most probable flow, and it shows that Fick's Law is not always exactly correct at the microscopic level. However, such violations of Fick's Law are of low probability, a point that we will make more quantitative below. Such fluctuations have been experimentally measured in small systems.[21]

We can further elaborate on the fluctuations by defining the "potencies" of the microtrajectories. We define the potency to be the fraction of all the trajectories that lead to a substantial change in the macrostate. The potencies of trajectories depend upon how far the system is from equilibrium. To see this, let us continue with our simple system having 6 particles. The total number of microscopic trajectories available to this system at each instant in our discrete time picture is 2^6 = 64. Suppose that at t = 0 all 6 of these particles are on dog (1). The total number of microscopic trajectories available to the system can be classified once again using m_1 and m_2, where in this case m_2 = 0 since there are no fleas on dog (2) (see Table (II)).
What fraction of all microtrajectories changes the occupancies of both dogs by more than some threshold value, say ΔN_i > 1? In this case, we find that 57 of the 64 microtrajectories cause a change greater than this to the current state. We call these potent trajectories.
Now, let us look at the potencies of the same system of 6 particles in a different situation, N_1 = N_2 = 3, when the system is in macroscopic equilibrium (see Table (III)). Here, only the trajectories with (m_1, m_2) pairs given by (0, 2), (0, 3), (1, 3), (2, 0), (3, 0), and (3, 1) satisfy our criterion. Summing over all of these outcomes shows that just 14 of the 64 trajectories are potent in this case.
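Both potency counts quoted above (57 of 64 far from equilibrium, and 14 of 64 at equilibrium) can be verified by enumerating the (m_1, m_2) pairs with |m_1 − m_2| ≥ 2:

```python
import math

# Counting potent trajectories (|m1 - m2| >= 2, i.e. an occupancy change
# greater than 1) for the two situations discussed in the text.
def potent_count(N1, N2):
    count = 0
    for m1 in range(N1 + 1):
        for m2 in range(N2 + 1):
            if abs(m1 - m2) >= 2:
                count += math.comb(N1, m1) * math.comb(N2, m2)
    return count

far = potent_count(6, 0)    # all 6 fleas on dog (1): far from equilibrium
near = potent_count(3, 3)   # 3 fleas on each dog: equilibrium
```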
There are two key observations conveyed by these arguments. First, for a system far from equilibrium, the vast majority of trajectories at that time t are potent, and move the system significantly away from its current macrostate. Second, when the system is near equilibrium, the vast majority of microtrajectories leave the macrostate unchanged. Let us now generalize from the tables above, to see when fluctuations will be important.
1. Fluctuations and Potencies
A simple way to characterize the magnitude of the fluctuations is to look at the width of the W_d distribution.[2] It is shown in standard statistics texts that for a binomial distribution such as ours, for which the mean and most probable value both equal m_i* = N_i p, the variance is σ_i² = N_i p q. The variance characterizes the width. Moreover, if N_i is sufficiently large, a binomial distribution can be well approximated by a Gaussian distribution,

    P(m_i, N_i) = (1/√(2π N_i p q)) exp[−(m_i − N_i p)²/(2 N_i p q)],    (17)

an approximation we find convenient since it leads to simple analytic results. However, this distribution function is not quite the one we want. We are interested in the distribution of the flux, P(J) = P(m_1 − m_2), not the distribution of right-jumps m_1 or left-jumps m_2 alone, P(m).
However, due to a remarkable property of the Gaussian distribution, it is simple to compute the quantity we want. If you have two Gaussian distributions, one with mean ⟨x_1⟩ and variance σ_1², and the other with mean ⟨x_2⟩ and variance σ_2², then the distribution function P(x_1 − x_2) for the difference will also be a Gaussian distribution, having mean ⟨x_1⟩ − ⟨x_2⟩ and variance σ² = σ_1² + σ_2².
For our binomial distributions, the means are ⟨m_1⟩ = pN_1 and ⟨m_2⟩ = pN_2 and the variances are σ_1² = N_1 p q and σ_2² = N_2 p q, so the distribution of the net flux, J = m_1 − m_2, is

    P(J) = (1/√(2π p q N)) exp[−(J − p(N_1 − N_2))²/(2 p q N)],    (18)

where N = N_1 + N_2.
Figure (3) shows an example of the distributions of fluxes at different times, using p = 0.1 and starting from N_1 = 100, N_2 = 0. We update each time step using an averaging scheme, N_1(t + Δt) = N_1(t) − N_1(t) p + N_2(t) p. The figure shows how the mean flux is large at first and decays toward equilibrium, ⟨J⟩ = 0. This result could also have been predicted from the diffusion equation. However, equally interesting are the wings of the distributions, which show the deviations from the average flux, and these are not predictable from the diffusion equation. One measure of the importance of the fluctuations in a given dynamical problem is the ratio of the standard deviation σ to the mean,

    σ/⟨J⟩ = √(N p q)/[(N_1 − N_2) p].    (19)
In the limit of large N, with the relative concentration difference held fixed, this ratio scales as

    σ/⟨J⟩ = √(N p q)/[(N_1 − N_2) p] ∼ N^{−1/2}.    (20)
In a typical bulk experiment, particle numbers are large, of the order of Avogadro's number, 10^23. In such cases, the width of the flux distribution is exceedingly small and it becomes overwhelmingly probable that the mean flux will be governed by Fick's law. However, within biological cells and in applications involving small numbers of particles, the variance of the flux can become significant. It has been observed that both rotary and translational single motor proteins sometimes transiently step backwards, relative to their main direction of motion.[22]
As a measure of the fluctuations, we now calculate the variance in the flux. It follows from Eq. (18) that ⟨(ΔJ)²⟩ = N p q, where N = N_1 + N_2. Thus, we can represent the magnitude of the fluctuations as δ,

    δ = √(⟨(ΔJ)²⟩/⟨J⟩²) = √(N p q)/(p f N) = (1/f)√(q/(p N)),

where N = N_1 + N_2 is the total number of fleas and f = (N_1 − N_2)/N is the normalized concentration difference. The quantity δ is also a measure of the degree of backflux. In the limit of large N, δ goes to zero. That is, the noise diminishes with system size. However, even when N is large, δ can still be large (indicating the possibility of backflux) if the concentration gradient, N_1 − N_2, is small.
Let us look now at our other measure of fluctuations, the potency. Trajectories that are not potent should have |m_1 − m_2| ≈ 0, which corresponds to a negligible change in the current state of the system as a result of a given microtrajectory. In Fig. (4), the impotent microtrajectories are shown as the shaded band for which m_1 ≈ m_2. To quantify this, we define impotent trajectories as those for which |m_1 − m_2| ≤ h, with h ≪ N. In the Gaussian model, the fraction of impotent trajectories is

    Φ_impotent ≈ ∫_{−h}^{h} dJ (1/√(2π N p q)) exp[−(J − (N_1 − N_2) p)²/(2 N p q)]    (21)
              = (1/2) {erf[(h + (N_1 − N_2) p)/√(2 N p q)] + erf[(h − (N_1 − N_2) p)/√(2 N p q)]},    (22)
and corresponds to summing over the subset of trajectories that have a small flux. To keep it simple, we did a computation taking p = q = 1/2, for which the expression for the probability distribution of the microscopic flux m_1 − m_2 is given by Eq. (18). The choice of h is arbitrary, so let us just choose h to be one standard deviation, √(N/4). Fig. (5) shows potencies for various values of N_1 and N_2. When the concentration gradient is large, most trajectories are potent, leading to a statistically significant change of the macrostate, whereas when the concentration gradient is small, most trajectories have little effect on the macrostate.
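The erf expression of Eq. (22) can be checked against exact binomial enumeration for a small system; here p = q = 1/2 and h is one standard deviation, as in the text, while the particle numbers are illustrative.

```python
import math

# Checking the erf estimate of Eq. (22) against exact enumeration, with
# p = q = 1/2 and h equal to one standard deviation, sqrt(N/4); the particle
# numbers are illustrative.
def exact_impotent(N1, N2, h):
    w = 0.5 ** (N1 + N2)               # weight of each microtrajectory
    tot = 0.0
    for m1 in range(N1 + 1):
        for m2 in range(N2 + 1):
            if abs(m1 - m2) <= h:
                tot += math.comb(N1, m1) * math.comb(N2, m2) * w
    return tot

def erf_impotent(N1, N2, h):
    p = q = 0.5
    N = N1 + N2
    s = math.sqrt(2.0 * N * p * q)
    mu = (N1 - N2) * p
    return 0.5 * (math.erf((h + mu) / s) + math.erf((h - mu) / s))

N1, N2 = 60, 40
h = math.sqrt((N1 + N2) / 4)
exact = exact_impotent(N1, N2, h)
approx = erf_impotent(N1, N2, h)
```

The two estimates differ only by the usual continuity correction of a discrete distribution approximated by a continuous one.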
As another measure of fluctuations, let us now consider the "bad actors" (see Fig. (6)). If the average flux is in the direction from dog (1) to dog (2), what is the probability you will observe flux in the opposite direction (bad actors)? Using Eq. (18) for P(J), we get

    Φ_badactors ≈ ∫_{−∞}^{0} dJ (1/√(2π N p q)) exp[−(J − (N_1 − N_2) p)²/(2 N p q)]    (23)
                = (1/2) {1 − erf[(N_1 − N_2) p/√(2 N p q)]},  N_1 > N_2,    (24)

which amounts to summing up the fraction of trajectories for which J ≤ 0. Figure (7) shows the fraction of bad actors for p = q = 1/2. Bad actors are rare when the concentration gradient is large, and most common when the gradient is small. The discontinuity in the slope of the curve in Fig. (7) at N_1/N = 0.5 reflects the fact that the mean flux abruptly changes sign at that value.
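Eq. (24) can likewise be checked by enumeration (illustrative particle numbers, p = q = 1/2):

```python
import math

# Checking the erf estimate of Eq. (24) against exact enumeration of the
# bad-actor trajectories (net flux J <= 0 despite N1 > N2), with p = q = 1/2
# and illustrative particle numbers.
def exact_bad(N1, N2):
    w = 0.5 ** (N1 + N2)
    tot = 0.0
    for m1 in range(N1 + 1):
        for m2 in range(N2 + 1):
            if m1 - m2 <= 0:
                tot += math.comb(N1, m1) * math.comb(N2, m2) * w
    return tot

def erf_bad(N1, N2):
    p = q = 0.5
    N = N1 + N2
    return 0.5 * (1.0 - math.erf((N1 - N2) * p / math.sqrt(2.0 * N * p * q)))

N1, N2 = 35, 25
exact = exact_bad(N1, N2)
approx = erf_bad(N1, N2)
```

For this gradient, roughly a tenth of all microtrajectories run backwards, in line with the behavior shown in Fig. (7).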
IV. FOURIER’S LAW OF HEAT FLOW
While particle flow is driven by concentration gradients, according to Fick's law, ⟨J⟩ = −D ∂c/∂x, energy flow is driven by temperature gradients, according to Fourier's law[1]:

    ⟨J_q⟩ = −κ ∂T/∂x.

Here, J_q is the energy transferred per unit time and per unit cross-sectional area by heat flow and ∂T/∂x is the temperature gradient that drives it, indicated here for the one-dimensional case. κ, the thermal conductivity,[1] plays the role that the diffusion coefficient plays in Fick's Law.

To explore Fourier's law, we return to the Dog-Flea model as described in Sec. III. Now, columns (1) and (2) can differ not only in their particle numbers, N_1(t) and N_2(t), but also in their temperatures, T_1(t) and T_2(t). To keep
it simple here, we assume that each column is at thermal equilibrium and that each particle that jumps car ries with
it the average energy, hmv
2
/2i = k
B
T/2 from the column it left. Within this simple model, all energy is tra ns ported
by hot or cold molecules switching dogs. Although in general, heat can also flow by other mechanisms mediated by
collisions, for example, our aim here is just the simplest illustration of principle. The average heat flow at time t is
⟨J_q⟩ = (m₁/∆t)(k_B T₁/2) − (m₂/∆t)(k_B T₂/2) = (p k_B / 2∆t)[N₁T₁ − N₂T₂]  (25)

where m₁ and m₂ are, as defined in Sec. III A, the numbers of particles jumping from each column at time t. If the particle numbers are identical, N/2 = N₁ = N₂, then
⟨J_q⟩ = (p k_B N / 4∆t)(T₁ − T₂) = −κ ∆T/∆x,

which is Fourier's law for the average heat flux within this two-column model. The model predicts that the thermal conductivity is κ = (p k_B N ∆x)/(4∆t), which can be expressed in a more canonical form as κ = (1/4) p k_B n v_av ∆x when written in terms of the particle density n = N/∆x and the average velocity, v_av = ∆x/∆t. Our simple model gives the same thermal conductivity as the kinetic theory of gases [1], κ = (1/2) k_B n v_av l, where l is the mean free path, if p∆x in our model corresponds to 2l, twice the mean free path. Hence, this simple model captures the main physical features of heat flow, again by appealing to the idea of summing over the weighted microtrajectories available to the system.
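The two-column heat flux can be checked with a small Monte Carlo sketch (our own, not from the paper), using the assumed units k_B = ∆t = 1: each particle jumps with probability p and carries k_B T/2 from its column, so the sampled mean flux should approach p k_B (N₁T₁ − N₂T₂)/(2∆t):

```python
import random

def mean_heat_flux_sampled(N1, T1, N2, T2, p, steps=20_000, seed=1):
    # Each particle jumps with probability p, carrying energy k_B*T/2
    # (k_B = 1, dt = 1) from the column it leaves.  The columns are held
    # at fixed N and T, so we repeatedly sample the instantaneous flux.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(steps):
        m1 = sum(rng.random() < p for _ in range(N1))
        m2 = sum(rng.random() < p for _ in range(N2))
        total += m1 * T1 / 2.0 - m2 * T2 / 2.0
    return total / steps

def mean_heat_flux_predicted(N1, T1, N2, T2, p):
    # <J_q> = (p k_B / 2 dt) [N1*T1 - N2*T2], with k_B = dt = 1
    return 0.5 * p * (N1 * T1 - N2 * T2)

if __name__ == "__main__":
    args = (50, 2.0, 50, 1.0, 0.1)
    print(f"sampled   {mean_heat_flux_sampled(*args):.3f}")
    print(f"predicted {mean_heat_flux_predicted(*args):.3f}")
```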
V. NEWTONIAN VISCOSITY
Another phenomenological law of gradient-driven transport is that of Newtonian viscosities [1],

τ = −η dv_y/dx,

where τ is the shear stress that is applied to a fluid, dv_y/dx is the resultant shear rate, and the proportionality coefficient η is the viscosity of a Newtonian fluid. Whereas Fick's law describes particle transport and Fourier's law describes energy transport, this Newtonian law describes the transport (in the x-direction, from the top moving plate toward the bottom fixed plate) of the linear momentum that acts in the y-direction (parallel to the plates) (see Fig. (8)).
Returning to the Dog-Flea model of Sec. III, suppose each particle in column (1) carries momentum mv_y1 along the y-axis, and that m₁ particles hop from column (1) to (2) at time t, carrying with them some linear momentum. As before, we consider the simplest model, in which every particle carries the same average momentum from the column it leaves to its destination column.
The flux J_p is the amount of y-axis momentum that is transported from one plane to the next in the x-direction, per unit area:
⟨J_p⟩ = (m₁/∆t)(m v_y1) − (m₂/∆t)(m v_y2) = (p m / ∆t)[N₁ v_y1 − N₂ v_y2].
If the number of particles is the same in both columns, N/2 = N₁ = N₂, this simplifies to

⟨J_p⟩ = (p m N / 2∆t)[v_y1 − v_y2] = −η ∆v_y/∆x,

which is the Newtonian law of viscosity in this two-column model. The viscosity predicted by this model is η = (p m N ∆x)/(2∆t). Converting this to a more canonical form gives η = (1/2) p m n v_av ∆x, where n = N/∆x is the particle density and v_av = ∆x/∆t is the average velocity. This is equivalent to the value given by the kinetic theory of gases [1], η = (1/3) m n l v_av, if p∆x from our model equals (2/3)l, two-thirds of the mean free path. Note that this simple model, based upon molecular motions, will clearly not be applicable to complex fluids whose underlying molecules possess internal structure.
VI. CHEMICAL KINETICS WITHIN THE DOG-FLEA MODEL
Let us now look at chemical reactions using the Dog-Flea model. Chemical kinetics can be modeled with the Dog-Flea model when the fleas have a preference for one dog over the other. Consider the reaction

A ⇌ B,

with forward rate coefficient k_f and reverse rate coefficient k_r.
The time-dependent average concentrations, [A](t) and [B](t), are often described by the chemical rate equations [2]

d[A]/dt = −k_f [A] + k_r [B],
d[B]/dt = k_f [A] − k_r [B],  (26)
where k_f is the average conversion rate of an A to a B, and k_r is the average conversion rate of a B to an A. These rate expressions describe only average rates; they do not tell us the distribution of rates. Some A's will convert to B's faster than the average rate k_f[A] predicts, and some will convert more slowly. Again, we use the Dog-Flea model as a microscopic model for this process. We use it to consider both the average concentrations and the fluctuations in concentrations.
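A minimal forward-Euler integration of the rate equations (26) makes the average behavior concrete. This is our own illustrative sketch; the values k_f = 0.1 and k_r = 0.2 are arbitrary choices, and the concentrations relax to the equilibrium ratio [B]/[A] = k_f/k_r:

```python
def integrate_rates(A0=1.0, B0=0.0, kf=0.1, kr=0.2, dt=0.01, steps=5000):
    # Forward-Euler integration of Eq. (26):
    #   d[A]/dt = -kf*[A] + kr*[B],  d[B]/dt = +kf*[A] - kr*[B]
    A, B = A0, B0
    for _ in range(steps):
        dA = (-kf * A + kr * B) * dt
        A, B = A + dA, B - dA       # total concentration is conserved
    return A, B

if __name__ == "__main__":
    A, B = integrate_rates()
    # equilibrium: [B]/[A] -> kf/kr = 0.5, i.e. A -> 2/3, B -> 1/3
    print(f"[A] = {A:.4f}, [B] = {B:.4f}, [B]/[A] = {B/A:.4f}")
```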
Now, dog (1) represents chemical species A and dog (2) represents chemical species B. The net chemical flux from 1 to 2 is given by J_c = m₁ − m₂. What is different about our model for these chemical processes, compared with the previous situations, is that the intrinsic jump rate from column 1 (species A), p₁, now differs from the jump rate from column 2, p₂. This simply reflects the fact that a forward rate can differ from a backward rate in a chemical reaction. Fleas now have a different escape rate from each dog: fleas escape from dog (1) at rate p₁ and from dog (2) at rate p₂. Maximizing W_d gives m₁ = N₁p₁ and m₂ = N₂p₂, so the average flux (which is almost the same as the most probable flux because of the approximately symmetric nature of the binomial distribution) at time t is
⟨J⟩ = N₁p₁ − N₂p₂ = k_f [A] − k_r [B],

which is just the standard mass-action rate law, expressed in terms of the mean concentrations. The mean values satisfy detailed balance at equilibrium (⟨J⟩ = 0 ⟹ N₂/N₁ = p₁/p₂ = k_f/k_r).
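The approach to detailed balance can also be watched in a direct stochastic simulation of the dog-flea reaction (our own sketch; the parameter values are illustrative). Starting with all fleas on dog (1), N₁/N should settle near p₂/(p₁ + p₂):

```python
import random

def simulate_reaction(N=1000, p1=0.1, p2=0.2, steps=300, seed=2):
    # Dog-flea chemistry: each A converts with probability p1 per time step,
    # and each B converts back with probability p2 per time step.
    rng = random.Random(seed)
    N1 = N                              # start with all molecules as A
    for _ in range(steps):
        m1 = sum(rng.random() < p1 for _ in range(N1))
        m2 = sum(rng.random() < p2 for _ in range(N - N1))
        N1 += m2 - m1
    return N1

if __name__ == "__main__":
    N1 = simulate_reaction()
    # detailed balance: N2/N1 -> p1/p2, i.e. N1/N -> p2/(p1 + p2) = 2/3
    print(f"N1/N = {N1 / 1000:.3f} (predicted 2/3 = 0.667)")
```

The stationary value fluctuates around the detailed-balance prediction, with relative fluctuations that shrink as N grows.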
More interesting than the well-known behavior of the mean chemical reaction rate is the fluctuational behavior. For example, if the number of particles is small, then even when k_f[A] − k_r[B] > 0, indicating an average conversion of A's to B's, the reverse can occasionally happen instead. When will these fluctuations be large? As in Sec. III, we first determine the probability distribution of the flux J. In this case, the probability distribution becomes
P(J) = (1/√(2π(p₁q₁N₁ + p₂q₂N₂))) exp[−(J − (N₁p₁ − N₂p₂))² / (2(p₁q₁N₁ + p₂q₂N₂))].  (27)
Again, let us use this flux distribution function to consider the fluctuations in the chemical reaction. The relative variance in the flux is

⟨(∆J)²⟩ / ⟨J⟩² = (N₁p₁q₁ + N₂p₂q₂) / (N₁p₁ − N₂p₂)².

As before, the main message is that when the system is not yet at equilibrium (i.e., the denominator is nonzero), macroscopically large systems will have negligibly small fluctuations. The relative magnitude of the fluctuations scales approximately as N^(−1/2).
Let us also look at the potencies of the microtrajectories as another window into fluctuations. Using Eq. (22) with p₁ and p₂ gives the fraction of trajectories that are impotent as

Φ_impotent = ∫_{−h}^{h} dJ (1/√(2π(N₁p₁q₁ + N₂p₂q₂))) exp[−(J − (N₁p₁ − N₂p₂))² / (2(N₁p₁q₁ + N₂p₂q₂))]  (28)
           = (1/2) ( erf[(h + (N₁p₁ − N₂p₂)) / √(2(N₁p₁q₁ + N₂p₂q₂))] + erf[(h − (N₁p₁ − N₂p₂)) / √(2(N₁p₁q₁ + N₂p₂q₂))] ).  (29)

Using N₁ + N₂ = N = 100, p₁ = 0.1, and p₂ = 0.2, Φ_potent = 1 − Φ_impotent is shown as a function of N₁/N in Fig. 9.
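Equation (29) is easy to evaluate numerically. The sketch below is our own; the impotence half-width h = 2 is an assumed illustrative value (the h used for the figure is not restated here). It locates the minimum of Φ_potent near N₁/N = p₂/(p₁ + p₂) ≈ 0.67, consistent with Fig. 9:

```python
import math

def phi_impotent(N1, N, p1, p2, h):
    # Eq. (29): Gaussian estimate of the fraction of trajectories with |J| <= h
    N2 = N - N1
    q1, q2 = 1.0 - p1, 1.0 - p2
    mu = N1 * p1 - N2 * p2                       # mean flux <J>
    s = math.sqrt(2.0 * (N1 * p1 * q1 + N2 * p2 * q2))
    return 0.5 * (math.erf((h + mu) / s) + math.erf((h - mu) / s))

if __name__ == "__main__":
    N, p1, p2, h = 100, 0.1, 0.2, 2.0            # h is an illustrative choice
    potency = {N1: 1.0 - phi_impotent(N1, N, p1, p2, h) for N1 in range(1, N)}
    N1_min = min(potency, key=potency.get)
    print(f"potency is minimal at N1/N = {N1_min / N:.2f}")
    print(f"p2/(p1 + p2) = {p2 / (p1 + p2):.2f}")
```

The minimum sits where the mean flux N₁p₁ − N₂p₂ vanishes, i.e., at the equilibrium composition, for any reasonable choice of h.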
VII. DERIVING THE DYNAMICAL DISTRIBUTION FUNCTION FROM MAXIMUM CALIBER
Throughout this paper, we have used the binomial distribution function, W_d, as the basis for our treatment of stochastic dynamics. The Maximum Caliber idea says that if we find the value of W_d that is maximal with respect to the microscopic trajectories, this will give the macroscopically observable flux. Here we restate this in a more general way, in terms of the probabilities of the trajectories.
Let P(i) be the probability of a microtrajectory i during the interval from time t to t + ∆t. A microtrajectory is a specific set of fleas that jump; for example, microtrajectory i = 27 might be the situation in which fleas number 4, 8, and 23 jump from dog (1) to (2). We take as a constraint the average number of jumps, ⟨m⟩, the macroscopic observable. The quantity m_i = 3 in this case indicates that trajectory i involves 3 fleas jumping. We express the caliber C as

C = −Σ_i P(i) ln P(i) − λ Σ_i m_i P(i) − α Σ_i P(i),  (30)
where λ is the Lagrange multiplier that enforces the constraint on the average flux and α is the Lagrange multiplier that enforces the normalization condition that the P(i)'s sum to one. Maximizing the caliber gives the populations of the microtrajectories,

P(i) = exp(−α − λm_i).  (31)

Note that the probability P(i) of the i-th trajectory depends only on the total number m_i of jumping fleas; all trajectories with the same m_i have the same probability. Now, in the same way that it is sometimes useful in equilibrium statistical mechanics to switch from microstates to energy levels, we now express the population P(i) of a given microtrajectory, instead, in terms of ρ(m), the fraction of all the microtrajectories that involve m jumps during this time interval,

ρ(m) = g(m)Q(m),  (32)

where g(m) = N!/[m!(N − m)!] is the "density of trajectories" with flux m (in analogy with the density of states for equilibrium systems), and Q(m) is the probability P(i) of a microtrajectory i with m_i = m. In other words, i denotes a microtrajectory (a specific set of fleas jumping) while m denotes a microprocess (the number of fleas jumping). The total number of i's associated with a given m is precisely g(m). It can also easily be seen that
Σ_i P(i) = Σ_{m=0}^{N} g(m)Q(m) = Σ_{m=0}^{N} ρ(m) = 1,  (33)

⟨m⟩ = Σ_i m_i P(i) = Σ_{m=0}^{N} m g(m)Q(m) = Σ_{m=0}^{N} m ρ(m).  (34)
Thus, the distribution of jump-processes, written in terms of the jump number m, is

ρ(m) = [N!/(m!(N − m)!)] exp(−α) exp(−λm).  (35)

The Lagrange multiplier α can be eliminated by summing over all trajectories and requiring that Σ_{m=0}^{N} ρ(m) = 1, i.e.,

exp(α) = Σ_{m=0}^{N} g(m) e^{−λm} = Σ_{m=0}^{N} [N!/(m!(N − m)!)] e^{−λm} = (1 + e^{−λ})^N.  (36)
Combining Eqs. (35) and (36) gives

ρ(m) = [N!/(m!(N − m)!)] exp(−λm) / (1 + exp(−λ))^N.  (37)

If we now let

p = exp(−λ) / (1 + exp(−λ)),  (38)

then we get

p^m = exp(−λm) / (1 + exp(−λ))^m,  (39)

and

(1 − p)^{N−m} = 1 / (1 + exp(−λ))^{N−m}.  (40)

Combining Eqs. (37), (39), and (40) gives the simple form

ρ(m) = [N!/(m!(N − m)!)] p^m (1 − p)^{N−m},

which we have used throughout this paper in Eq. (11).
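The algebra of Eqs. (37)-(40) can be verified numerically: with p = e^{−λ}/(1 + e^{−λ}), the maximum-caliber distribution coincides with the binomial of Eq. (11), and its mean is Np. A quick sketch, with our own function names and an arbitrary λ:

```python
import math

def rho_caliber(m, N, lam):
    # Eq. (37): rho(m) = C(N, m) e^{-lam*m} / (1 + e^{-lam})^N
    return math.comb(N, m) * math.exp(-lam * m) / (1.0 + math.exp(-lam)) ** N

def rho_binomial(m, N, p):
    # Eq. (11): rho(m) = C(N, m) p^m (1 - p)^(N - m)
    return math.comb(N, m) * p ** m * (1.0 - p) ** (N - m)

if __name__ == "__main__":
    N, lam = 20, 1.5
    p = math.exp(-lam) / (1.0 + math.exp(-lam))      # Eq. (38)
    assert all(math.isclose(rho_caliber(m, N, lam), rho_binomial(m, N, p))
               for m in range(N + 1))
    mean = sum(m * rho_caliber(m, N, lam) for m in range(N + 1))
    print(f"p = {p:.4f}, <m> = {mean:.4f}, N*p = {N * p:.4f}")
```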
VIII. SUMMARY AND COMMENTS
We have shown how to derive the phenomenological laws of nonequilibrium transport, including Fick's law of diffusion, the Fourier law of heat conduction, the Newtonian law of viscosity, and the mass-action laws of chemical kinetics, from a simple physical foundation that can be readily taught in elementary courses. We use the Dog-Flea model, originated by the Ehrenfests, to describe how particles, energy, or momentum can be transported across a plane. We combine that model with the Principle of Maximum Caliber, a dynamical analog of the way the Principle of Maximum Entropy is used to derive the laws of equilibrium. In particular, according to Maximum Entropy, you maximize the entropy S(p₁, p₂, . . . , p_N) with respect to the probabilities p_i of the N microstates, subject to constraints, such as the requirement that the average energy is known. That gives the Boltzmann distribution law. Here, for dynamics, we focus on microtrajectories rather than microstates, and we maximize a dynamical entropy-like quantity, subject to an average flux constraint. In this way, maximizing the caliber is the dynamical equivalent of minimizing a free energy for predicting equilibria. A particular value of this approach is that it also gives us fluctuation information, not just averages. In diffusion, for example, the flux can sometimes be a little higher or lower than the average value expected from Fick's law. These fluctuations can be particularly important for biology and nanotechnology, where the numbers of particles can be very small, and where there can therefore be significant fluctuations in the rates around the average.
Acknowledgments
It is a pleasure to acknowledge helpful comments and discussions with Dave Drabold, Mike Geller, Jané Kondev, Stefan Müller, Hong Qian, Darren Segall, Pierre Sens, Jim Sethna, Ron Siegel, Andrew Spakowitz, Zhen-Gang Wang, and Paul Wiggins. We would also like to thank Sarina Bromberg for help with the figures. KAD and MMI acknowledge support from NIH grant number R01 GM034993. RP acknowledges support from NSF grant number CMS-0301657, the Keck Foundation, NSF NIRT grant number CMS-0404031, and NIH Director's Pioneer Award grant number DP1 OD000217.
Electronic address: dill@maxwell.ucsf.edu
Electronic address: phillips@pboc.caltech.edu
[1] F. Reif, Fundamentals of Statistical and Thermal Physics (McGraw-Hill, New York, 1965).
[2] K. Dill and S. Bromberg, Molecular Driving Forces: Statistical Thermodynamics in Chemistry and Biology (Garland Science, New York, 2003).
[3] J. Kasianowicz, E. Brandin, D. Branton, and D. Deamer, Proc. Natl. Acad. Sci. USA 93, 13770 (1996).
[4] H. P. Lu, L. Xun, and X. S. Xie, Science 282, 1877 (1998).
[5] M. Rief, R. S. Rock, A. D. Mehta, M. S. Mooseker, R. E. Cheney, and J. A. Spudich, Proc. Natl. Acad. Sci. USA 97, 9482 (2000).
[6] A. Meller, L. Nivon, and D. Branton, Phys. Rev. Lett. 86, 3435 (2001).
[7] D. E. Smith, S. J. Tans, S. B. Smith, S. Grimes, D. L. Anderson, and C. Bustamante, Nature 413, 748 (2001).
[8] H. Li, W. A. Linke, A. F. Oberhauser, M. Carrion-Vazquez, J. G. Kerkvliet, H. Lu, P. Marszalek, and J. M. Fernandez, Nature 418, 998 (2002).
[9] J. Liphardt, S. Dumont, S. B. Smith, I. Tinoco Jr., and C. Bustamante, Science 296, 1832 (2002).
[10] C. Bustamante, Z. Bryant, and S. B. Smith, Nature 421, 423 (2003).
[11] E. R. Dufresne, D. Altman, and D. G. Grier, Europhys. Lett. 53, 264 (2001).
[12] B. Alberts, D. Bray, A. Johnson, J. Lewis, M. Raff, K. Roberts, and P. Walter, Essential Cell Biology: An Introduction to the Molecular Biology of the Cell (Garland Publishing, New York, 1997).
[13] D. Kondepudi and I. Prigogine, Modern Thermodynamics: From Heat Engines to Dissipative Structures (John Wiley and Sons, 1998).
[14] E. T. Jaynes, in E. T. Jaynes: Papers on Probability, Statistics and Statistical Physics, edited by R. D. Rosenkrantz (Kluwer Academic Publishers, 1980), chap. 14.
[15] M. Klein, Physica 22, 569 (1956).
[16] G. G. Emch and C. Liu, The Logic of Thermostatistical Physics (Springer-Verlag, New York, 2002).
[17] R. P. Feynman, in Nobel Lectures in Physics: 1901-1995 (World Scientific Publishing Co., 1998).
[18] E. T. Jaynes, in E. T. Jaynes: Papers on Probability, Statistics and Statistical Physics, edited by R. D. Rosenkrantz (Kluwer Academic Publishers, 1957), chap. 1.
[19] M. Kac, Amer. Math. Monthly 54, 369 (1947).
[20] G. Gamow, One Two Three... Infinity (Dover Publications, New York, 1988).
[21] G. M. Wang, E. M. Sevick, E. Mittag, D. J. Searles, and D. J. Evans, Phys. Rev. Lett. 89, 050601 (2002).
[22] J. Howard, Mechanics of Motor Proteins and the Cytoskeleton (Sinauer Associates, 2001).
FIG. 1: Colloidal free expansion setup to illustrate d iffusion involving small numbers of particles. (a) Schematic of experimental
setup (see text for details.) (b) Several snapshots from the experiment. (c) Normalized histogram of particle positions during
the experiment. The solution to the diffusion equation for the microfluidic “free expansion” experiment is superposed for
comparison.
FIG. 2: Schematic of the simple dog-flea model. ( a) State of the system at time t, (b) a particular microtrajectory in which
two-fleas jump from the dog on the left and one jumps from the dog on the right, (c) occupancies of the dogs at time t + t.
FIG. 3: Schematic of the distribution of fluxes at different time points as the system approaches equilibrium.
FIG. 4: Schematic showing which trajectories are potent and which are impotent. The shaded region corresponds to the impotent trajectories, for which m₁ and m₂ are equal or approximately equal and which hence make a relatively small change in the macrostate. The unshaded region corresponds to potent trajectories.
Page 15
16
FIG. 5: Illustration of the potency of the microtrajectories associated with different distributions of N particles on the two dogs. The total number of particles is N₁ + N₂ = N = 100.
FIG. 6: Illustration of the notion of bad actors. Bad actors are essentially the microtrajectories that contribute net particle motion of opposite sign to the macroflux.
Page 16
17
FIG. 7: The fraction of all possible trajectories that go against the direction of the macroflux, for N = 100. The fraction of bad actors is highest at N₁ = N/2 = 50.
FIG. 8: Illustration of Newton’s law of viscosity. The fluid is sheared with a constant stress. The fluid velocity decreases
continuously from its maximum value at the top of the fluid to zero at the bottom. There is thus a gradient in the velocity
which can be related to the shear stress in the fluid.
Page 17
18
FIG. 9: The fraction of potent trajectories, Φ_potent, as a function of N₁/N for N₁ + N₂ = N = 100, when p₁ = 0.1 and p₂ = 0.2 are not equal. The minimum value of the potency does not occur at N₁/N = 0.5, but at N₁/N = 0.66. This value of N₁/N also corresponds to its equilibrium value, given by p₂/(p₁ + p₂).
Page 18
19
m₁\m₂   0    1    2
  0     1    2    1
  1     4    8    4
  2     6   12    6
  3     4    8    4
  4     1    2    1

TABLE I: Trajectory multiplicity table for the specific case N₁(t) = 4 and N₂(t) = 2. Each entry in the table corresponds to the total number of trajectories for the particular values of m₁ and m₂.
m₁\m₂   0
  0     1
  1     6
  2    15
  3    20
  4    15
  5     6
  6     1

TABLE II: Trajectory multiplicity table for the specific case N₁(t) = 6 and N₂(t) = 0, when the system is far from macroscopic equilibrium.
m₁\m₂   0    1    2    3
  0     1    3    3    1
  1     3    9    9    3
  2     3    9    9    3
  3     1    3    3    1

TABLE III: Trajectory multiplicity table for the specific case N₁(t) = 3 and N₂(t) = 3, when the system is at macroscopic equilibrium.