... entering the scene and soon exiting from it: his basic texts Tolman 1920 and Tolman 1927 are barely mentioned in Glasstone et al. 1941. Tolman perceived that the fundamental scientific revolution, the advent of quantum mechanics, had made his work outdated, since it was based on the Bohr-Sommerfeld "old" quantum theory. ...
... Thus, one is facing the challenge posed in one of Hilbert's famous problems regarding the state of mathematics; in particular, the sixth of Hilbert's problems, which concerns the application of mathematics to physics. In a recent update (Ruggeri 2017) it is argued that Tolman (1920, 1927) established the relationship expressing the logarithmic derivative of the rate constant with respect to β as the difference between the average energy of all reacting species and that of all molecules: according to the International Union of Pure and Applied Chemistry, this is now taken as the definition of the activation energy E_a (Laidler 1996). Henry Eyring (1901-1981) was a chemist born in Mexico who worked in the United States. He dealt with physical chemistry, making his major contributions to the study of the rates of chemical reactions and of reaction intermediates. ...
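For reference, the Tolman/IUPAC relation described in the preceding excerpt can be stated compactly, with β = 1/k_BT as in the cited works:

    E_a \equiv -\frac{d\ln k}{d\beta} = \langle E\rangle_{\mathrm{reacting}} - \langle E\rangle_{\mathrm{all}}

i.e., the Arrhenius activation energy is the average energy of the molecules that react minus the average energy of all molecules.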
... The program of applying to chemical kinetics a "new science", statistical mechanics, was developed by Tolman from 1920 onward (Tolman 1920). The quotation given above concludes his voluminous compendium (Tolman 1927), in which he re-elaborates the 1920 article and subsequent ones. We will see later that his optimism was eventually frustrated. ...
A survey on the principles of chemical kinetics (the “science of change”) is presented here. This discipline plays a key role
in the molecular sciences; however, the debate on its foundations has been open for the 130 years since the Arrhenius equation
was formulated on admittedly purely empirical grounds. The great success that this equation has had in the development of
experimental research has motivated the need to clarify its relationships with the foundations of thermodynamics on the
one hand and especially with those of statistical mechanics (the “discipline of chances”) on the other. The advent of quantum
mechanics in the Twenties and the scattering experiments by molecular beams in the second half of last century have
validated collisional mechanisms for reactive processes, probing images of single microscopic events; molecular dynamics
computational techniques have been successfully applied to interpret and predict phenomena occurring in a variety of environments:
the focus here is on a key aspect, the effect of temperature on chemical reaction rates, which in cold environments
show departures from the Arrhenius law, and so arguably from Maxwell–Boltzmann statistics. Modern developments use venerable
mathematical concepts arising from “criteria for choices” dating back to Jacob Bernoulli and Euler. A formulation of recent
progress is detailed in Aquilanti et al. 2017.
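For readers outside the field, the equation at the center of this discussion is the phenomenological Arrhenius law,

    k(T) = A\,e^{-E_a/k_B T}, \qquad \ln k = \ln A - \frac{E_a}{k_B}\,\frac{1}{T},

so an “Arrhenius plot” of ln k against 1/T is a straight line of slope −E_a/k_B; the low-temperature deviations mentioned above appear as curvature of this plot (super- or sub-Arrhenius behavior, depending on the sense of the curvature, as discussed further below).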
... This theorem can be used to show that the principle of equipartition of energy holds for Cartesian molecular models. 1,2 For the kinetic energy, this principle states that each momentum coordinate in the canonical phase space has ½kT of thermal energy on the average (where k is the Boltzmann constant and T is the thermodynamic temperature). This equipartition principle is used for initializing atomic velocities in Cartesian molecular dynamics simulations with a Boltzmann distribution of thermal energy. ...
... Let H(q, p) and L(q, q̇) denote the Hamiltonian and the Lagrangian, respectively, for an n-dimensional dynamical system, with q denoting the generalized coordinates and p the conjugate momenta. The elements of p satisfy p_i = ∂L/∂q̇_i (1). For a temperature T, the canonical ensemble partition function Z is defined as Z = h⁻ⁿ ∫ exp[−H(q, p)/kT] dq dp (2), where h is Planck's constant and the α_i and γ_i integration limits are determined by the geometry of the problem. The ensemble average of a function f(q, p) is ⟨f⟩ = (Z hⁿ)⁻¹ ∫ f(q, p) exp[−H(q, p)/kT] dq dp (3). In particular, with y_i and y_j denoting a pair of phase space coordinates, the equipartition theorem states that the ensemble average of the function y_i ∂H/∂y_j is ⟨y_i ∂H/∂y_j⟩ = kT δ_{i=j} (4), where δ_{i=j} denotes the Kronecker delta, which is unity only when i = j and zero otherwise. A more detailed derivation of this result using integration by parts is included in the supplementary material. ...
The principle of equipartition of (kinetic) energy for all-atom Cartesian molecular dynamics states that each momentum phase space coordinate on the average has ½kT of kinetic energy in a canonical ensemble. This principle is used in molecular dynamics simulations to initialize velocities, and to calculate statistical properties such as entropy. Internal coordinate molecular dynamics (ICMD) models differ from Cartesian models in that the overall kinetic energy depends on the generalized coordinates and includes cross-terms. Due to this coupled structure, no such equipartition principle holds for ICMD models. In this paper we introduce non-canonical modal coordinates to recover some of the structural simplicity of Cartesian models and develop a new equipartition principle for ICMD models. We derive low-order recursive computational algorithms for transforming between the modal and physical coordinates. The equipartition principle in modal coordinates provides a rigorous method for initializing velocities in ICMD simulations thus replacing the ad hoc methods used until now. It also sets the basis for calculating conformational entropy using internal coordinates.
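The Cartesian initialization referred to in the excerpts above (each momentum coordinate carrying ½kT of kinetic energy on average) amounts to drawing every velocity component from a Gaussian of variance kT/m. A minimal Python sketch under that assumption; the function name and parameter values are illustrative, not taken from the cited ICMD work:

    import numpy as np

    KB = 1.380649e-23  # Boltzmann constant, J/K

    def initialize_velocities(masses, temperature, rng=None):
        """Draw Cartesian velocities from the Maxwell-Boltzmann distribution.

        Each momentum component then carries kT/2 of kinetic energy on average,
        as required by equipartition for Cartesian models.
        masses: array of shape (n_atoms,), in kg; returns (n_atoms, 3) in m/s.
        """
        rng = np.random.default_rng() if rng is None else rng
        sigma = np.sqrt(KB * temperature / masses)        # per-atom standard deviation
        return rng.normal(size=(masses.size, 3)) * sigma[:, None]

    # Example: 1000 argon atoms at 300 K; check <kinetic energy> ~ (3/2) kT per atom.
    m_ar = 39.948 * 1.66053906660e-27
    v = initialize_velocities(np.full(1000, m_ar), 300.0)
    ke_per_atom = 0.5 * m_ar * (v**2).sum(axis=1).mean()
    print(ke_per_atom / (1.5 * KB * 300.0))  # should be close to 1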
... (b) Reference [10]; Sect. 71, with definitions and preliminary discussions in Sects. ...
... (e) The results of Refs. [9][10][11] were derived for three-dimensional thermal molecular motions of dense ideal-gas molecules, but they also obtain [to within at most small numerical factors that can be neglected in order-of-magnitude calculations (exactly in Refs. [9][10][11])] for one-dimensional components thereof. These results also obtain for an (N > 1)-molecule Knudsen gas considering only gas-molecule/gas-molecule collisions, not gas-molecule/wall (or gas-molecule/EBR-photon) collisions. ...
We show that the velocity distribution in rarefied (i.e., Knudsen) gases is spontaneously weighted in favor of small speeds,
departing from the Maxwellian distribution corresponding to the temperature of the container walls, despite thermodynamic equilibrium
with the walls. The consequent paradox concerning the second law of thermodynamics is discussed.
... Maxwell-Boltzmann statistics, usually applicable to classical particles, is based on the assumption of distinguishability. Boltzmann-Gibbs (BG) statistics, with its elegant applications, has enjoyed a golden era as one of the strong pillars of modern physics [2,3,4,5,6,7,8,9,10,11]. However, the domain of applicability has remained constrained to short-range interactions and extensive systems only [12]. ...
... The BG relationship between the entropy of a physical system and the probabilities P_i of finding the system in its different accessible states is S_BG = −k Σ_i P_i ln P_i [3,4,5,6,7,8,9,10,11,29]. Preserving the structural similarity with BG entropy [30], and using the basic machinery of q-algebra, a new form of entropy has recently been proposed [1], which seems to open new discussions of non-extensive statistical mechanics. ...
The q-probability based on the definition of q-analogs of neutral elements defined in [1] is used to develop a q-probability theory of the Maxwell-Boltzmann velocity distribution. The most probable speed and the root-mean-square velocity are evaluated using q-probability. The principle of equipartition is also discussed in the light of the new q-probability theory. The q-probability forms of the black-body radiation law, Stefan's law and Wien's law are analysed to see the impact of the q parameter on the different laws. An important finding of the q-probability theory is that the most probable speed and Wien's law are independent of q-probability, and thus are valid in both extensive and non-extensive systems. Thus, indirectly, the q-probability supports the universal nature of Wien's displacement law.
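For orientation, the classical (q → 1) limits of the quantities examined in this abstract are the standard results

    v_p = \sqrt{\frac{2k_BT}{m}}, \qquad v_{\mathrm{rms}} = \sqrt{\frac{3k_BT}{m}}, \qquad \lambda_{\max} T = b \approx 2.898\times 10^{-3}\ \mathrm{m\,K},

to which the q-deformed expressions of the cited work are expected to reduce when the deformation is switched off.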
... If the free energy change, ΔG = RT ln(Q/K) (2.1.6), is zero, the reaction is at equilibrium. If ΔG°_T or ΔG is negative, the reaction may be called exergonic (work-producing), and if either of these quantities is positive, the reaction may be called endergonic (work-consuming). ...
... The activation energy can be very roughly interpreted as the minimum energy (kinetic plus potential, relative to the lowest state of reactants) that reactants must have to form products (the threshold for reaction), and the preexponential factor is a measure of the rate (collision frequency) at which collisions occur. A more precise interpretation of E a was provided by Tolman, 6,7 who showed that the Arrhenius energy of activation is the average total energy (relative translational plus internal) of all reacting pairs of reactants minus the average total energy of all pairs of reactants, including nonreactive pairs. The best way to interpret A is to use transition state theory, which is explained below. ...
A review of the theoretical and computational modeling of bimolecular reactions is given. The review is divided into several sections which are as follows: gas-phase thermal reactions; gas-phase state-selected reactions and product state distributions; and condensed-phase bimolecular reactions. The section on gas-phase thermal reactions covers the enthalpies and free energies of reaction, kinetics, saddle points and potential energy surfaces, rate theory for simple barrier reactions and bimolecular reactions over potential wells. The section on gas-phase state-selected reactions focuses on electronically adiabatic reactions and electronically nonadiabatic reactions. Finally, the section on condensed-phase bimolecular reactions covers reactions in liquids, reactions on surfaces and in solids and tunneling at low temperature.
... Onnes's 1913 Nobel lecture in which he credits use of the law of corresponding states for his success in liquefying helium may have brought attention to the power of using scaling laws and principles of mechanical similarity, since they were involved in Onnes's derivation of that law. That alone would explain Tolman's interest: although he worked in a number of areas, he had a special interest in the foundations of statistical mechanics, and he later authored [textbooks] on it (Tolman 1927, 1938). ...
This excerpt from a book-length work on the history of the methodology of experimental physical models (physically similar systems) interwoven in Ludwig Wittgenstein's life begins in 1913-1914. It also discusses works by physicists around the same time that were thematically related to the philosophical topics he was working on: Ludwig Boltzmann, Wilhelm Ostwald, Edgar Buckingham, James Thomson, D'Arcy Wentworth Thompson, Henry Crew (and his new translation of Galileo's Two New Sciences during this period), Heike Kamerlingh Onnes, van der Waals, and Rayleigh (following up on the work of Gabriel Stokes), and Richard C. Tolman. The landmark work at Britain's National Physical Laboratory in 1914 on Similar Motions by Stanton and Pannell, following up on Osborne Reynolds' work in Manchester, is also described and discussed. Connections between physics and the history of flight are mentioned, too: Pénaud's successes, Boltzmann's relationship with the engineer Otto Lilienthal, and the significance that Hermann von Helmholtz's landmark paper in meteorology, which addressed the problem of steering aircraft, took on during this period.
... In the 7th chapter of his 1938 treatise "Kinetic Theory of Gases", Kennard [31] develops a comprehensive assessment of the macroscopic irregular motion of molecules (including, e.g., Brownian motion) as connected to averaged microscopic fluctuations: the connection between discrete statistical distributions and exponential functions is obtained via Euler's succession, taking the limit of the number of particles to infinity. Earlier, in his investigations on statistical mechanics collected in a 1927 book, Tolman [32] had briefly discussed the role of taking limits in the description of fluctuations for a large number of molecules; in his 1938 treatise [33] the themes of fluctuations and thermodynamic equilibrium are discussed in more detail through a detour involving the Stirling formula for factorials and the maximization of entropy in the Boltzmann-Gibbs approach. Either way, these treatments involved imposing limiting values on specific variables, anticipating the operation that we will discuss in the next section, namely that of taking the thermodynamic limit, see [31,34]: in some cases, as intermediate steps in the course of the derivations, physically insightful expressions were encountered. ...
A variety of current experiments and molecular dynamics computations are expanding our understanding of rate processes occurring in extreme environments, especially at low temperatures, where deviations from linearity of Arrhenius plots are revealed. The thermodynamic behavior of molecular systems is determined at a specific temperature within conditions on large volume and number of particles at a given density (the thermodynamic limit): on the other hand, kinetic features are intuitively perceived as defined in a range between the extreme temperatures, which limit the existence of each specific phase. In this paper, extending the statistical mechanics approach due to Fowler and collaborators, ensembles and partition functions are defined to evaluate initial state averages and activation energies involved in the kinetics of rate processes. A key step is delayed access to the thermodynamic limit when conditions on a large volume and number of particles are not fulfilled: the involved mathematical analysis requires consideration of the role of the succession for the exponential function due to Euler, precursor to the Poisson and Boltzmann classical distributions, recently discussed. Arguments are presented to demonstrate that a universal feature emerges: convex Arrhenius plots (super-Arrhenius behavior) as temperature decreases are amply documented in progressively wider contexts, such as viscosity and glass transitions, biological processes, enzymatic catalysis, plasma catalysis, geochemical fluidity, and chemical reactions involving collective phenomena. The treatment expands the classical Tolman theorem, formulated quantally by Fowler and Guggenheim: the activation energy of processes is related to the averages of microscopic energies. We previously introduced the concept of “transitivity”, a function that compactly accounts for the development of heuristic formulas and suggests the search for universal behavior. The velocity distribution function far from the thermodynamic limit is illustrated; the fraction of molecules with energy in excess of a certain threshold for the description of the kinetics of low-temperature transitions and of non-equilibrium reaction rates is derived. Uniform extension beyond the classical case to include quantum tunneling (leading to the concavity of plots, sub-Arrhenius behavior) and to Fermi and Bose statistics has been considered elsewhere. A companion paper presents a computational code permitting applications to a variety of phenomena and provides further examples.
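The Euler succession invoked in this abstract can be illustrated numerically: for a finite parameter N the Boltzmann factor exp(−E/kT) is replaced by the truncated form (1 − E/(N kT))^N, which converges to the exponential as N → ∞. A brief Python sketch of that convergence (values chosen only for illustration):

    import numpy as np

    def euler_factor(x, n):
        """Euler's succession (1 - x/n)**n for x = E/kT; set to 0 where the base is negative."""
        base = np.clip(1.0 - x / n, 0.0, None)
        return base**n

    x = 2.0  # E/kT
    for n in (5, 50, 500, 5000):
        print(n, euler_factor(x, n))
    print("exp(-x) =", np.exp(-x))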
“Tell me," Wittgenstein once asked a friend, "why do people always say, it was natural for man to assume that the sun went round the earth rather than that the earth was rotating?" His friend replied, "Well, obviously because it just looks as though the Sun is going round the Earth." Wittgenstein replied, "Well, what would it have looked like if it had looked as though the Earth was rotating?”
What would it have looked like if we looked at all sciences from the viewpoint of Wittgenstein’s philosophy? Wittgenstein is undoubtedly one of the most influential philosophers of the twentieth century. His complex body of work has been analysed by numerous scholars, from mathematicians and physicists, to philosophers, linguists, and beyond. This volume brings together some of his central perspectives as applied to the modern sciences and studies the influence they may have on the thought processes underlying science and on the world view it engenders. The contributions stem from leading scholars in philosophy, mathematics, physics, economics, psychology and human sciences; all of them have written in an accessible style that demands little specialist knowledge, whilst clearly portraying and discussing the deep issues at hand.
... The negative slope of the ln(K/T) vs (1/T) plot, by Tolman's interpretation, implied that ΔH*, the energy difference between the sorbate and the sorptive, is equal to the average energy of the dye molecules that are adsorbed minus the average energy of all the dye molecules in the aqueous solution (Tolman, 1927). From Tables 1 and 2, the value lies between 2 and 29 kJ/mol, indicating that the uptake bond was due only to van der Waals interactions characteristic of physisorption. ...
A macroscopic phenomenon like adsorption has a mechanistic tie to thermodynamics and its principles. This work examined the thermodynamic parameters for methylene blue (MB) uptake onto modified Ekowe clay (EC). The purified clay was calcined for 4 h at 750 °C to obtain Natural Ekowe Clay (NEC). The purified clay was also activated (1.6 M H2SO4(aq)) and calcined for 4 h at 750 °C to obtain Activated Ekowe Clay (AEC). The thermodynamic study applied the equilibrium data to determine the activation and heat-of-adsorption parameters. The concave Eyring plot suggests that more than one rate-limiting step coexists in the sorption. For temperatures of 25, 30 and 40 °C, the activation energies for NEC and AEC lie between 2 and 29 kJ/mol, indicating physisorption. Negative activation enthalpy (ΔH*) values confirm exothermic activation. The less negative ΔH* values, together with the significant k2 values varying inversely with temperature, suggest a high sorption rate. The negative activation entropy indicates an associative uptake, and the less negative values are attributable to a physical uptake. The negative free energy of activation, ΔG*, indicated that uptake on the modified EC is spontaneous. The high negativity of the ΔG* values suggests a strong physisorption bond. The negative ΔH*, ΔS* and ΔG* values characterize the physisorption of MB onto modified EC. The values of the isoexcess heats (qisox) obtained, 2.67 kJ/mol (NEC) and 2.47 kJ/mol (AEC), agree with the range of <80 kJ/mol typical of physisorption. This work concludes that the sorption of MB onto modified EC is a spontaneous, exothermic, multilayer phenomenon that progresses heterogeneously with a continuous decrease in sorption potential and a fall in isosteric heat.
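The Eyring analysis invoked above rests on the transition-state expression k = (k_B T/h) exp(ΔS‡/R) exp(−ΔH‡/RT), so a plot of ln(k/T) against 1/T has slope −ΔH‡/R and intercept ln(k_B/h) + ΔS‡/R. A minimal Python sketch of such a fit; the rate data below are placeholders, not values from this paper:

    import numpy as np

    R = 8.314462618                               # gas constant, J/(mol K)
    KB_OVER_H = 1.380649e-23 / 6.62607015e-34     # k_B/h, in 1/(s K)

    def eyring_fit(T, k):
        """Fit ln(k/T) vs 1/T; return activation enthalpy (J/mol) and entropy (J/(mol K))."""
        slope, intercept = np.polyfit(1.0 / T, np.log(k / T), 1)
        dH = -slope * R
        dS = (intercept - np.log(KB_OVER_H)) * R
        return dH, dS

    # Placeholder data: temperatures (K) and first-order rate constants (1/s)
    T = np.array([298.0, 303.0, 313.0])
    k = np.array([1.2e-3, 1.4e-3, 1.9e-3])
    dH, dS = eyring_fit(T, k)
    print(f"dH* = {dH/1000:.1f} kJ/mol, dS* = {dS:.1f} J/(mol K)")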
... The article was translated to English by M. J. Moravcsik in 1959. Tolman actually gave us two textbooks on statistical mechanics, namely Statistical Mechanics with Applications to Physics and Chemistry[5] and The Principles of Statistical Mechanics[6]. The latter is more widely read. ...
J. Willard Gibbs' "Elementary Principles in Statistical Mechanics" was the definitive work of one of America's greatest physicists. Gibbs' book on statistical mechanics establishes the basic principles and fundamental results that have flowered into the modern field of statistical mechanics. However, at a number of points, Gibbs' teachings on statistical mechanics diverge from positions on the canonical ensemble found in more recent works, at points where seemingly there should be agreement. The objective of this paper is to note some of these points, so that Gibbs' actual positions are not misrepresented to future generations of students.
... where ⟨I_a q̇_a²⟩ is twice the kinetic energy associated with internal mode a, and the angle brackets ⟨ ⟩ denote an ensemble average. For a thermally equilibrated system (i.e., one in which equipartition [34,35] is satisfied), the temperature determined from any degree of freedom of the system is the same as the system temperature T. The ratio of T_a to T is thus an indication of the appropriateness of the proposed internal mass. ...
... None of these early mentions of stimulated emission proposed how to actually achieve amplification, suggested that it would be useful, or clearly noted its coherence. Tolman [5] did, however, write in 1927 that, 'We should expect radiation induced by an external field to be coherent with the radiation associated with that field'. I know of no proposal to actually make use of such amplification until those made by microwave spectroscopists in the early 1950s. ...
The history of quantum electronics and its importance are discussed. The invention of the laser was remarkable in that it required combining quantum mechanics and electrical engineering, two fields that had not previously been mixed. Both were essential, since lasers grew out of the study of the microwave spectra of molecules. Around 1950, physicists and engineers did not think that stimulated emission would be a particularly useful way to obtain microwave amplification.
... Negative absorption (induced emission) was also discussed at this time by J. Van Vleck [23], R. Tolman [29,30], and E. Milne [31], who in 1924 derived the formula for the "absorption integral" taking negative absorption into account. Milne found that the "absorption integral" is proportional to 1 − (N2/N1), where N2 and N1 are the numbers per cm3 of atoms in the excited state and of normal atoms, respectively. ...
This paper is devoted to Moscow physicist Valentin A. Fabrikant who is known for his 1939 thesis with suggestion of experiments on light amplification directly proving the existence of negative absorption in gas discharge, his 1951 patent application (jointly with M.M. Vudynsky and F.A. Butaeva) for amplification of electromagnetic radiation (ultraviolet, visible, infrared and radio spectral regions), and his experimental attempts (jointly with F.A. Butaeva) to observe light amplification in gas discharge (paper submission of December 1957).
This is a revision of an article first posted on RG in December 2020 on the individual’s rate of thought, clarified, reorganized, and (hopefully) improved.
The degrees of freedom of a network that distributes energy measure the network's output capacity. The degrees of freedom of a network using that energy measure its use capacity. The ratio of the degrees of freedom --- 4 --- of a system distributing energy to those --- 3 --- of the corresponding system using the energy gives a 4/3 ratio of capacities. A 4/3 ratio of capacities accounts for the 4/3 fractal envelope of Brownian motion, Richardson wind-eddy scaling, Stefan's law, and the expansion of cosmological space, among other phenomena, and implies the existence of dual contemporaneous reference frames. Degrees of freedom supply useful heuristics that give, perhaps, glimpses of physical laws.
This article is another attempt to generalize dimensional capacity.
This chapter describes discussions by scientists in Wittgenstein’s milieu relevant to problems Wittgenstein was pondering after he had decided to devote himself to solving the problems of logic. The chapter opens just after his father has died, and Wittgenstein’s investigations into logic were bringing him to examine notions of mirroring and corresponding. It discusses Ludwig Boltzmann’s views on differential equations, mental models, experimental models, and debates with Ostwald on the use of models in the kinetic theory of gases. Work on similarity by various scientists (James Thomson, D’Arcy Thompson, Helmholtz, van der Waals, Onnes, Reynolds, Rayleigh, Tolman, Stanton and Pannell) developed from insights by Newton and Galileo is surveyed. Questions about equations analogous to those Wittgenstein was pondering about propositions in early 1914 would receive an answer by the end of 1914—by a physicist who had studied thermodynamics with Ostwald (Edgar Buckingham), and formalized the concept of “physically similar systems.”
Using molecular dynamics simulations, binary collision density in a dense non-ideal gas with Lennard-Jones interactions is investigated. It is shown that the functional form of the dependence of collision density on particle density and collision diameter remains the same as that for an ideal gas. The temperature dependence of the collision density, however, has a very different form at low temperatures, where it decreases as temperature increases. But at higher temperatures the functional form becomes the same as that for an ideal gas.
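For comparison, the ideal-gas (hard-sphere) reference expression against which the abstract's findings are framed, showing the n², d², and √T dependences, is the standard kinetic-theory result

    Z_{AA} = 2\,n^2 d^2 \sqrt{\frac{\pi k_B T}{m}},

the number of binary collisions per unit volume and time in a one-component gas of number density n, collision diameter d, and molecular mass m.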
This article surveys the empirical information which originated both from laboratory experiments and from computational simulations, and expands previous understanding of the rates of chemical processes in the low-temperature range, where deviations from linearity of Arrhenius plots were revealed. The phenomenological two-parameter Arrhenius equation requires improvement for applications where interpolation or extrapolation is demanded in various areas of modern science. Based on Tolman's theorem, the dependence of the reciprocal of the apparent activation energy on the reciprocal absolute temperature permits the introduction of a deviation parameter d covering uniformly a variety of rate processes, from those where quantum mechanical tunnelling is significant and d<0, to those where d>0, corresponding to the Pareto-Tsallis statistical weights: these generalize the Boltzmann-Gibbs weight, which is recovered for d=0. It is shown here how the weights arise, relaxing the thermodynamic equilibrium limit, either for a binomial distribution if d>0 or for a negative binomial distribution if d<0, formally corresponding to Fermion-like or Boson-like statistics, respectively. The current status of the phenomenology is illustrated emphasizing case studies; specifically (i) the super-Arrhenius kinetics, where transport phenomena accelerate processes as the temperature increases; (ii) the sub-Arrhenius kinetics, where quantum mechanical tunnelling propitiates low-temperature reactivity; (iii) the anti-Arrhenius kinetics, where processes with no energetic obstacles are rate-limited by molecular reorientation requirements. Particular attention is given for case (i) to the treatment of diffusion and viscosity, for case (ii) to the formulation of a transition rate theory for chemical kinetics including quantum mechanical tunnelling, and for case (iii) to the stereodirectional specificity of the dynamics of reactions strongly hindered by the increase of temperature.
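A compact way to see how the deviation parameter d interpolates among the cases above is the deformed rate law used in this line of work, k(T) = A [1 − d ε/(k_B T)]^{1/d}, which reduces to the Arrhenius form A exp(−ε/k_BT) as d → 0. A minimal numerical sketch (parameter values are purely illustrative):

    import numpy as np

    def k_deformed(T, A, eps_over_kB, d):
        """Deformed Arrhenius law k = A * (1 - d * eps/(kB*T))**(1/d).

        d -> 0 recovers Arrhenius, k = A * exp(-eps/(kB*T)); d > 0 gives
        super-Arrhenius behavior (apparent activation energy grows as T drops),
        d < 0 gives sub-Arrhenius, tunnelling-like behavior.
        """
        x = eps_over_kB / np.asarray(T, dtype=float)
        if abs(d) < 1e-12:
            return A * np.exp(-x)
        base = np.clip(1.0 - d * x, 0.0, None)   # rate treated as 0 where 1 - d*x < 0
        return A * base**(1.0 / d)

    # Illustrative parameters only (A in s^-1, eps/kB in K)
    T = np.array([100.0, 200.0, 300.0, 600.0, 1200.0])
    for d in (-0.3, 0.0, 0.3):
        print(d, k_deformed(T, A=1.0e13, eps_over_kB=300.0, d=d))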
This Lecture has two purposes. The first is to demonstrate that one can obtain useful results by making formal manipulations of the partition function and the ensemble average. The results of these manipulations include the First and Second Generalized Equipartition Theorems and the relation between energy fluctuations and the specific heat. The second purpose of the lecture is to illustrate the utility of studying special cases, as a method for checking one’s theoretical understanding. To show why special cases are helpful, we derive a result that is less general than it at first appears. The apparent result will be tested unsuccessfully against a special case; insight gained from failure will be used to derive a fully general result.
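Of the formal manipulations this lecture refers to, the best-known is the canonical-ensemble relation between energy fluctuations and the specific heat,

    \langle (\Delta E)^2\rangle = \langle E^2\rangle - \langle E\rangle^2 = k_B T^2 C_V, \qquad C_V = \frac{\partial \langle E\rangle}{\partial T},

obtained by differentiating the ensemble average of the energy with respect to temperature; it is quoted here only as standard background for the abstract above.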
This study provides a method for the examination of research efficiency in terms of costs to the subjects and information gain for the researcher. The basic problem of maximizing the differentiation among subjects on a dependent variable is equivalent to the problem of maximizing information gain for a given cost in behavioral science research procedures. The distribution resulting from this maximization process, under certain circumstances, also minimizes two cost quantities of interest in such procedures: the average cost per subject and the average cost per unit of information. Two data sets are examined in order to illustrate the use of this criterion of research efficiency.
The study of bimolecular gas reaction rate coefficients has been one of the primary subjects of kinetics investigations over the last 20 years. Largely as a result of improved reaction systems (static flash photolysis systems, flow reactors, and shock tubes) and sensitive detection methods for atoms and free radicals (atomic and molecular resonance spectrometry, electron paramagnetic resonance and mass spectrometry, laser-induced fluorescence, and laser magnetic resonance), improvements in both the quality and the quantity of kinetic data have been made. Summarizing accounts of our present knowledge of the rate coefficients for reactions important in combustion chemistry are given in Chapters 5 and 6.
We present here the results of quasiclassical trajectory calculations for H + H2 collisions. Our emphasis is to examine the dependence of the energy transfer, dissociation, and atom-exchange processes on the initial internal state of the H2 molecule, including states of high internal energy. For these high-energy states the transition probabilities are large and the concern with zero-point energy is minimized; these conditions help justify our use of quasiclassical trajectories for the dynamical calculations. In the present study we use an accurate potential energy surface [1-3], so that the calculations are more realistic than is possible for other systems, for which the uncertainties in the potential energy surface are much greater.
The spectra of substances may, in a general way, be assigned to one of three categories, namely: the continuous spectra, the bright line spectra and the band spectra. The first of these occur only in emission and are produced by bodies heated to incandescence and are incapable of resolution into lines regardless of the resolving power of the available instruments. The second type are the spectra of atoms. They may be produced as bright line emission spectra by suitable excitation of the atoms, as for example by placing them in the crater of an arc or by passing an electric discharge through their vapors. When radiation from an incandescent source is allowed to pass through an atomic vapor and is examined with a spectroscope the spectra occur in absorption as dark line spectra against a continuous bright background.
The nonlinear thermal vibration behavior of a single-walled carbon nanotube (SWCNT) is investigated by molecular dynamics simulation and a nonlinear, nonplanar beam model. Whirling motion with energy transfer between flexural motions is found in the free vibration of the SWCNT excited by the thermal motion of atoms, where the geometric nonlinearity is significant. A nonlinear, nonplanar beam model considering the coupling in two vertical vibrational directions is presented to explain the whirling motion of the SWCNT. Energy in different vibrational modes is not equal even over a time scale of tens of nanoseconds, which is much larger than the period of the fundamental natural vibration of the SWCNT at the equilibrium state. The energy of different modes becomes equal when the time scale increases to the microsecond range.
Classical trajectory methods for calculating inelastic scattering cross sections are covered in earlier chapters of this book, especially Chapters 10 and 12. This chapter covers the extension of this technique to treat reactive scattering. The first question which must be answered in a classical trajectory study of a reactive system is whether one should be using this method at all. Classical trajectory studies are useful not just because they yield reaction cross sections, angular distributions, reactivity as a function of initial and final energy distribution, and other observable reaction attributes, but also for the insight they may offer into the actual reaction event. One may look at the atomic motions in representative trajectories, and one may calculate such nonobservables as opacity functions (probability of reaction as a function of impact parameter) and dependence on features of the potential energy surface. But one must be careful not to overinterpret the reaction by a trajectory study. Because many reaction attributes depend sensitively on quantitative and qualitative features of the potential energy surfaces which are not quantitatively understood, one must be cautious about believing that the dynamical details of a particular trajectory calculation are in general accord with reality. Trajectory calculations are discussed from this point of view in Chapter 18 of this book.
A numerical test of the reliability of the Slater–Forst procedure for the inversion of the Arrhenius rate law is presented using the theoretical data for the reactions N2O → N2 + O and CO2 → CO + O reported previously: the test results are positive. The sensitivity of the procedure to variations in the Arrhenius parameters is also examined.
It has been shown that the afterglow following a discharge in xenon consists of two parts: (1) the resonance line, whose intensity initially decays exponentially in agreement with Holstein's theory; (2) an accompanying distributed radiation, experimentally a continuum, extending to about 1900 a.u. The intensity of this continuum decays exponentially at the same rate as the population of the lower metastable state at all wavelengths and each pressure studied. All departures from an exponential decay seem to be caused by repopulation of states by ionic recombination and are current dependent. Electron disappearance is due to ambipolar diffusion at low pressures and ion recombination at high pressures. The value of the ambipolar diffusion coefficient indicates that the afterglow positive ion is Xe2+ for pressures greater than 0.3 Torr. No phenomena which could be due to interchange of population between metastable states have been found in xenon.
Rate constants for the disproportionation reaction (1) OH + OH → H2O + O in the range 250–580 K have been measured by flash photolyzing H2O/N2 mixtures and monitoring the decay of OH using quantitative UV resonance spectrometry. The result, k1 = (3.2 ± 0.8) × 10⁻¹² exp(−242/T) cm3/molecule · s, does not correlate with existing high temperature data and confirms previously suggested non-Arrhenius behaviour for this reaction. On the basis of all available data the temperature variation of k1 over the range 250–2000 K is best represented by the empirical expression k1(T) = exp(−27.73 + 1.49 × 10⁻³ T) cm3/molecule · s. The extent of the Arrhenius graph curvature can be reconciled by a “close-collision” model incorporating the energy variation of accessible product states.
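To see the curvature described here, one can simply evaluate the two expressions quoted in this abstract (the low-temperature Arrhenius fit and the empirical full-range form) across 250–2000 K; the snippet below is only a numerical illustration of those quoted formulas:

    import numpy as np

    def k_low_T(T):
        """Low-temperature Arrhenius fit quoted in the abstract (cm^3 molecule^-1 s^-1)."""
        return 3.2e-12 * np.exp(-242.0 / T)

    def k_full_range(T):
        """Empirical 250-2000 K expression quoted in the abstract."""
        return np.exp(-27.73 + 1.49e-3 * T)

    for T in (250.0, 500.0, 1000.0, 2000.0):
        print(T, k_low_T(T), k_full_range(T))
    # The ratio of the two expressions grows with T, reflecting the reported curvature.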
The kinetics of complex chemical reactions is considered. Different time scales exist if one or more of the rate constants of the individual reaction steps is much larger than the others. Examples of specific reactions are given in which the intermediates vary on the fast time scale. They can be eliminated according to a standard scheme, the lowest order of which coincides with the steady-state approximation usually employed in textbooks on chemical kinetics.
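As a one-line illustration of the lowest-order elimination described above (not one of the paper's specific examples), consider the sequential scheme A → I → P with rate constants k₁ and k₂ and k₂ ≫ k₁; setting the fast intermediate to steady state gives

    \frac{d[\mathrm{I}]}{dt} = k_1[\mathrm{A}] - k_2[\mathrm{I}] \approx 0 \;\Rightarrow\; [\mathrm{I}] \approx \frac{k_1}{k_2}[\mathrm{A}], \qquad \frac{d[\mathrm{P}]}{dt} \approx k_1[\mathrm{A}],

so the intermediate varying on the fast time scale is slaved to the slowly varying reactant, which is the content of the steady-state approximation at lowest order.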
The specific heat of hydrogen gas at low temperatures was first measured in 1912 by Arnold Eucken in Walther Nernst’s laboratory
in Berlin, and provided one of the earliest experimental supports for the new quantum theory. Even earlier, Nernst had developed
a quantum theory of rotating diatomic gas molecules that figured in the discussions at the first Solvay conference in late
1911. Between 1913 and 1925, Albert Einstein, Paul Ehrenfest, Max Planck, Fritz Reiche, and Erwin Schrödinger, among many
others, attempted theoretical descriptions of the rotational specific heat of hydrogen, with only limited success. Quantum
theory also was central to the study of molecular spectra, where initially it was more successful. Moreover, the two problems
interacted in sometimes surprising ways. Not until 1927, following Werner Heisenberg’s discovery of the behavior of indistinguishable
particles in modern quantum mechanics, did American theorist David Dennison find a successful theory of the specific heat
of hydrogen.
The concept of description mechanics is developed for the purpose of pointing out a common basis for many of the fields related to information theory. These fields include thermodynamics, photography, language, models, gambling, cryptology, and pattern recognition. Because of the broad scope of the subject and the varying interests of those we hope might read the paper, some effort has been made to present the basic methods in more detail than would be necessary for an audience purely of physical scientists.
It is shown that many descriptive processes can be reduced to mathematical terminology by dividing the domain of description into cells whose size is determined by the resolution, and filling the cells with occupants where the number of kinds of occupants is determined by the accuracy of distinguishing between them. Attention is also given to the functional processes by which one kind of a description or model is transformed into another.
A physical basis for the modeling of autonomous systems is presented. Namely the thesis is unfolded that irreversible thermodynamics
is presently capable of describing nature, man, and society. Four fragments of the theme are presented. The first is a description
of how thermodynamics orders all of nature. The second illustrates how the dynamics of fluid fields (mobile atomisms) is developed.
The third begins the formulation of a social physics. The fourth provides a primitive notion why all autonomous systems are
described by the same formal set of equations of state and change.
It appears to be axiomatic that termolecular and higher order reactions occur relatively rarely. The basis for this judgment
seems to lie in the supposition that successful 3-body collisions of 3 interactive species of molecules cannot occur frequently
enough to account for chemical or biochemical transformation. In order to provide a more complete mathematical framework
than now exists for examining this hypothesis the probability of effective termolecular “δ-collisions” as a function of time
is derived. This amounts to adding to the class of reactions for which stochastic models are now available the termolecular
reaction. In common with the unimolecular and bimolecular cases this process is seen to satisfy the criterion of consistency-in-the-mean
with respect to deterministic formulations. It is planned next to use the termolecular process and the lower order processes
in computer-assisted in numero experimental studies aimed at comparing alternative mechanisms of reaction.
This paper draws attention to selected experiments on enzyme-catalyzed reactions that show convex Arrhenius plots, which are very rare, and points out that Tolman's interpretation of the activation energy places a fundamental model-independent constraint on any detailed explanation of these reactions. The analysis presented here shows that in such systems, the rate coefficient as a function of energy is not just increasing more slowly than expected, it is actually decreasing. This interpretation of the data provides a constraint on proposed microscopic models, i.e., it requires that any successful model of a reaction with a convex Arrhenius plot should be consistent with the microcanonical rate coefficient being a decreasing function of energy. The implications and limitations of this analysis to interpreting enzyme mechanisms are discussed. This model-independent conclusion has broad applicability to all fields of kinetics, and we also draw attention to an analogy with diffusion in metastable fluids and glasses.
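Writing Tolman's relation in terms of the microcanonical rate coefficient k(E) and the reactant density of states ρ(E) makes the constraint explicit; in standard form, with β = 1/k_BT,

    E_a(T) = \frac{\int E\,k(E)\,\rho(E)\,e^{-\beta E}\,dE}{\int k(E)\,\rho(E)\,e^{-\beta E}\,dE} - \frac{\int E\,\rho(E)\,e^{-\beta E}\,dE}{\int \rho(E)\,e^{-\beta E}\,dE},

i.e., the k-weighted average energy minus the Boltzmann average energy. As argued in the paper, reproducing a convex Arrhenius plot within this framework requires k(E) to be a decreasing function of E over the thermally relevant energy range.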