Entropy - Science topic
The measure of that part of the heat or energy of a system which is not available to perform work. Entropy increases in all natural (spontaneous and irreversible) processes. (From Dorland, 28th ed)
Questions related to Entropy
How can we integrate a statistical method (relative Shannon entropy) with remote sensing and GIS to quantify the urban growth patterns of mountain towns?
We know that entropy is a measure of disorder, and we know that it always increases, never decreases. The arrow of time moves forward, not in reverse.
Now, my question: when we are born, we are in a highly ordered state; as children, our skin is tight and shiny. Then we grow up, become young, and then grow old; our faces fade and everything becomes more disordered, as we know. But at the same time we reproduce: we give birth to a baby, which is a highly ordered form. If entropy always increases, going from low to high, then our baby's entropy is again lower than ours, which was high. Doesn't this break the rules of entropy?
And can you reference articles or texts giving answers to this question?
This is briefly considered in https://arxiv.org/abs/0804.1924 which is on RG as https://www.researchgate.net/publication/314079736_Entropy_and_its_relationship_to_allometry_v17.
Reviewing the literature would be helpful before considering whether to update the 2015 ideas.
Entropy is one of those concepts used in textbooks and the literature with a variety of descriptions and interpretations that often confuse students and teachers.
Recently, I published a manuscript in which I tried to simplify our understanding of the concept of entropy, relate it to potential energy and give a general description of entropy in the context of chemical reactions and biological processes.
I would be happy to receive your feedback:
On February 5, 2021, Casper A. Helder posted the question "Can the second law of thermodynamics be abandoned?" I was surprised to read that the second law (together with the first law), the only two laws that have remained unchanged since their formulation by Clausius in 1867, is being boldly doubted. Moreover, entropy, whose propensity to grow is the second law, is not even mentioned, and therefore I wrote a short cynical answer. To my surprise, every few days since I started following this question, more people have added answers, and as of today there are 6618 reads and 386 answers! For example, Henning Struchtrup, a professor at the University of Victoria working in the field, wrote: "No doubt one can criticize Carnot or Clausius but one should not forget that they were at the very beginning". Struchtrup received 13 recommendations for his answers. I wonder what causes a respected scientist in this field to say that, no doubt, Carnot's and Clausius's works are problematic.
From reading part of the answers including Helder's argumentation, I believe that somehow in the last century, the definition and therefore the meaning of the second law was forgotten. Hereafter, I will summarize the definition of the second law and its immediate consequences.
2nd Law Definition: In any irreversible process, the entropy S increases.
Irreversibility: If we have a reservoir at temperature T and we add an amount of energy Q along an irreversible route, its entropy increases by ΔS > Q/T. If the process is reversible, then the entropy increase is ΔS = Q/T. This inequality is called the Clausius inequality.
The amount of energy added to or removed from a reservoir is a measurable quantity. However, we see that along an irreversible path "T" is smaller than the T of a reversible path of the same system. Therefore, we cannot define a temperature, and therefore an entropy, for a system along an irreversible route.
Equilibrium: If a closed system rests for a long time, its entropy will increase to its maximum. The Clausius inequality means that energy flows from hot to cold, and therefore in equilibrium all the subsystems of an ensemble have equal temperature, i.e. all degrees of freedom have the same amount of energy; in an ideal gas, every degree of freedom of every molecule has energy kT/2. Here k is the Boltzmann constant (the gas constant divided by the Avogadro number) and T is the temperature. Moreover, in equilibrium, all the microstates (distinguishable configurations of the ensemble) have an identical amount of energy. Therefore,
Temperature and Entropy are defined only for systems in Equilibrium: to find the temperature of an ensemble in equilibrium, one can take a single molecule, measure its energy, and know the temperature. However, this is seldom the case. Usually there are "hotter" molecules and "colder" ones, and this is the reason why both entropy and temperature are defined only in equilibrium. This ambiguity about "temperatures" out of equilibrium causes all kinds of anomalous behaviors, like supercooling and the Mpemba paradox. This is why the application of the 2nd law runs into difficulties in microscopic systems. These phenomena confirm Clausius's inequality and the second law rather than disproving them.
Maximum Entropy and Quantum theory. Since in equilibrium the entropy of a system is maximal, every ensemble tends to reach equilibrium spontaneously. Therefore, one can calculate many properties of an ensemble by maximizing the statistical expression of the entropy (the maximum-entropy technique). Planck did such a calculation for EM radiation in equilibrium with a material body and found the quantized nature of energy. Does abandoning the 2nd law mean giving up quantum theory? Can physics without entropy exist?
I post this question to find out if there is any concrete scientific evidence or argumentation that may cause scientists to declare that there is serious criticism against Clausius's inequality, Carnot's efficiency, and the second law.
I carried out the same reaction in two reactors (a batch reactor and a microwave reactor); Gibbs free energy is certainly negative in both cases (same reaction). However, with MW heating, the TΔS term becomes larger and ΔG becomes more negative (higher entropy) than with conventional heating.
Moreover, the thermodynamic advantage provided by MW is realized at lower temperatures, where the free energy (ΔG = ΔH - TΔS) of the MW reaction becomes more negative. The fact that the MW-driven reaction has a negative ΔG at lower temperatures than the CH reaction stems from its significantly lower value of ΔH, as ΔH will be less than -TΔS at lower temperatures. Therefore, at a lower temperature, the microwave-driven reaction will become more favourable than the CH reaction, since (-TΔS)CH > (-TΔS)MW.
Can you please help me to explain this in simpler words? or can you please provide references for such justification?
I invite anyone to participate in an open discussion on the latest "findings" in black-hole research. The motive for this thread is a set of articles that appeared in the September 2022 issue (p. 26-51) of Scientific American magazine under the title "Black Hole Mysteries Solved".
I have proposed a new way of thinking about Nature/Reality, the NCS (Natural Coordinate System) (https://www.researchgate.net/publication/324206515_Natural_Coordinate_System_A_new_way_of_seeing_Nature?channel=doi&linkId=5c0e3a7d299bf139c74dbe81&showFulltext=true), and I ask whether you recognize any basic distinction between the above preprint (and the following Appendices) and the Sci. Am. articles. This thread is intended to be an open (in respect to time and subject) discussion forum for the latest results of black-hole research, in order to advance new perspectives based on NCS and to put the proposals of NCS to public assessment.
In order to seed points of arguments, I picked up some phrases from the articles of SciAm in comparison to phrases or references from NCS preprint.
- “Paradox Resolved” by G. Musser. “Space looks three-dimensional but acts as if it were two-dimensional.” (p.30) → NCS (p.11-13, 49-52).
- "It says that one of the spatial dimensions we experience is not fundamental to nature but instead emerges from quantum dynamics" (p.31) → NCS (p.11-13).
- "Meanwhile theorists think that what goes for black holes may go for the universe as a whole" (p.31) → NCS (p.31-38, 46-47).
- “Black Holes, Wormholes and Entanglement” by A. Almheiri- “The island itself becomes nonlocally mapped to the outside” (p.39) → NCS (p.44-47), https://www.researchgate.net/publication/345761430_APPENDIX_18_About_Black_Holes?channel=doi&linkId=5facf0fe299bf18c5b6a0d4d&showFulltext=true .
- “A Tale of Two Horizons” by E. Shaghoulian- The whole article is about BH-Horizon, Holographic Principle, Observer, and Entropy → NCS (p.31-38, 44-47, 54-61, 6-7), https://www.researchgate.net/post/What_is_Entropy_about_Could_the_concept_of_Entropy_or_the_evaluation_of_its_magnitude_lead_us_to_the_equilibrium_state_of_a_system .
- "Portrait of a Black Hole" by S. Fletcher - The article is about the history of the observation of Sagittarius A* (the BH at the center of the Milky Way galaxy). There is no obvious connection with NCS.
PS. This discussion is NOT open for new “pet-theories” apart from NCS.(!!!)
Dear research colleagues,
when I calculate the entropy S° of crystalline materials from heat capacity measurements (Cp) from (almost) 0 K to 298 K by integrating Cp/T, I have "the feeling" that I end up with a quite precise/reliable value of S° for the respective material. However, when I compare such a value with others in the literature (e.g. obtained from vapor pressure measurements), I frequently find a significant discrepancy between them; most of the time the value from the Cp measurement is considerably lower (by >20 J/(mol*K)) than the other values.
Now my question is whether my feeling is "right" that the values obtained from Cp measurements are actually more trustworthy, or whether there are some problems with this way of determining the entropy that I'm not aware of?
Thanks for your help!
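For reference, a minimal sketch of the third-law integration described in the question; the (T, Cp) data points below are purely illustrative, and the 0-5 K tail is neglected:

```python
# Sketch: third-law entropy from heat-capacity data, S(298) = integral of (Cp/T) dT,
# evaluated by trapezoidal integration over measured (T, Cp) points.
# The data points below are purely illustrative, not real measurements.

def entropy_from_cp(temps, cps):
    """temps in K (ascending, starting near 0 K), cps in J/(mol*K)."""
    s = 0.0
    for i in range(1, len(temps)):
        # trapezoid on Cp/T between consecutive temperatures
        f1, f2 = cps[i - 1] / temps[i - 1], cps[i] / temps[i]
        s += 0.5 * (f1 + f2) * (temps[i] - temps[i - 1])
    return s

T  = [5, 50, 100, 150, 200, 250, 298.15]   # K (illustrative)
Cp = [0.1, 10, 25, 35, 42, 47, 50]         # J/(mol*K) (illustrative)
print(round(entropy_from_cp(T, Cp), 1))
```

With dense, accurate Cp data the numerical integration itself introduces little error; the sensitive parts are the low-temperature extrapolation and any missed phase transitions.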
In our theoretical studies, we calculated the entropy and heat capacity of the electron gas in semiconductor nanowires. However, there are not enough experimental results (references) against which to compare our results.
I am trying to calculate the enthalpy of mixing of an HEA alloy, using the dHmix equation from
I am trying to reproduce a known alloy, CoCrFeNiAl, with a known dHmix value (-12.32 kJ/mol), using values from http://www.entall.imim.pl/calculator/ since, by definition, Hmix(ij) is taken for equimolar pairs.
The value I get is -18.94 kJ/mol.
I am attaching a xlsx file with all the numbers.
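For reference, the commonly used formula is ΔHmix = Σ_{i<j} 4 ΔH_ij c_i c_j, with ΔH_ij the equimolar binary mixing enthalpy. A minimal sketch; the H_ij values below are placeholders, not the tabulated CoCrFeNiAl values, so substitute the values from entall.imim.pl:

```python
# Sketch: Miedema-based mixing enthalpy of a multicomponent alloy,
#   Delta_H_mix = sum_{i<j} 4 * H_ij * c_i * c_j
# where H_ij is the equimolar binary mixing enthalpy.
# The H_ij values below are PLACEHOLDERS, not real tabulated data.
from itertools import combinations

def mixing_enthalpy(concentrations, h_pairs):
    """concentrations: dict element -> atomic fraction (sums to 1).
    h_pairs: dict frozenset({a, b}) -> equimolar binary H_ij in kJ/mol."""
    return sum(4.0 * h_pairs[frozenset(p)] * concentrations[p[0]] * concentrations[p[1]]
               for p in combinations(concentrations, 2))

elems = ["Co", "Cr", "Fe", "Ni", "Al"]
c = {e: 0.2 for e in elems}                                    # equimolar alloy
h = {frozenset(p): -10.0 for p in combinations(elems, 2)}      # placeholder values
print(round(mixing_enthalpy(c, h), 2))  # -16.0 with these placeholder values
```

A discrepancy like -18.94 vs. -12.32 kJ/mol often comes down to which tabulation of H_ij is used, since different sources quote noticeably different Miedema values.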
The phrase in the Title line imitates Karl Popper’s All Life is Problem Solving.
Since thermodynamics plays a role in life processes, it was surprising that searching “All life is thermodynamics” on Google on August 16, 2022 gave no results.
Don’t organisms seek to optimize and preserve the entropy of their internal energy distribution? And to optimize their use of energy and outcomes based on energy inputs? Aren’t survival and procreation ways of preserving previous products of energy use?
Is there justification for the statement, All life is thermodynamics? Or is the statement too simple to convey any insight?
Schrödinger, in What is Life?, referred to thermodynamics and statistical mechanics; chapter 6 is "Order, Disorder and Entropy". And more recently there is: Jeremy England, "Statistical physics of self-replication," J. Chem. Phys. 139, 121923 (2013); doi: 10.1063/1.4818538.
If one uses a coordinate transformation, say t -> a t' + b_i x^i, does it change the thermodynamic quantities of a black hole, such as entropy, temperature, and others?
The general question is: does a coordinate transformation change the Smarr relation (the generalized form of the first law of black hole thermodynamics)?
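For context, the Smarr relation for a Kerr-Newman black hole (in units G = c = 1) is usually written as

```latex
M = 2 T_H S + 2 \Omega_H J + \Phi_H Q
```

Here S, J and Q are defined from geometric invariants (horizon area and asymptotic charges), so one would expect a mere relabeling of coordinates to leave them unchanged; rescaling the time coordinate rescales the Killing-vector normalization, and with it the Komar mass, T_H, Ω_H and Φ_H by the same factor, which leaves the relation itself intact.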
I am trying to calculate the change in enthalpy and entropy for a heat transfer fluid (HTF) for modeling purposes. I have one pressure-enthalpy diagram or chart for the HTF and I am also using software 'EES' for the calculation of change in enthalpy and entropy.
For the chart, the reference state at 0 C saturated liquid is: h=43 kJ/kg and s=0.175 kJ/kg.K.
For the software, the reference state at 0 C saturated liquid is: h=200 kJ/kg and s=1 kJ/kg.K.
When I calculate the change in h or s using each method (i.e. chart and software), I get different results. Is it normal that the change in h or s differs when different reference states are used? Because of this I have two sets of power and efficiency calculations that differ from one another under the same conditions.
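A sanity check one can run: with consistent property data, shifting the reference state shifts absolute h and s by a constant, so differences between two states are unchanged. The numbers below are illustrative only, not real HTF property data:

```python
# Sketch: absolute h depends on the reference-state convention,
# but differences between states do not.
# Illustrative numbers only (not real HTF property data).
h_chart_ref = 43.0    # kJ/kg at 0 C saturated liquid (chart convention)
h_ees_ref   = 200.0   # kJ/kg at 0 C saturated liquid (EES convention)
offset = h_ees_ref - h_chart_ref   # constant shift between the two conventions

# two states read from the chart (illustrative):
h1_chart, h2_chart = 250.0, 410.0
# the same states expressed in the EES convention are shifted by the same constant:
h1_ees, h2_ees = h1_chart + offset, h2_chart + offset

assert (h2_ees - h1_ees) == (h2_chart - h1_chart)  # delta-h is identical
print(h2_chart - h1_chart)  # 160.0
```

If the computed Δh or Δs differs between the chart and EES, the likely cause is inconsistent property data (or mixing absolute values from the two sources), not the reference states themselves.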
I want a procedure to simulate the entropy generation in an enclosure with the effects of natural convection and radiation heat transfer using ANSYS FLUENT.
I am conducting latent class analysis with 8 binary indicators and a sample size of 6757 observations. The entropy for the 2-class, 3-class, and 4-class solutions is very low (around 0.54). But when I reduce the sample size and drop some of the observations randomly, I obtain better entropies: for example, the entropy for a sample size of 4500 is 0.68, and for 4280 it is 0.75. (For all of the sample sizes, BIC suggests that the 3-class solution is best.) I am wondering: is it acceptable to reduce the sample size to get a better entropy? Is there any justification for reducing the sample size in this case? Any hint would be of great help. Thanks
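For reference, the entropy reported in LCA output is a normalized classification-certainty index (1 = perfectly separated classes, 0 = maximal uncertainty), not raw Shannon entropy. A minimal sketch, assuming posterior class probabilities p_ik for N cases and K classes:

```python
# Sketch of the normalized entropy reported in Mplus-style LCA output:
#   E = 1 - sum_i sum_k (-p_ik * ln p_ik) / (N * ln K)
# where p_ik is the posterior probability of case i belonging to class k.
import math

def normalized_entropy(posteriors):
    n = len(posteriors)       # number of cases
    k = len(posteriors[0])    # number of classes
    h = sum(-p * math.log(p) for row in posteriors for p in row if p > 0)
    return 1.0 - h / (n * math.log(k))

# perfectly separated cases -> entropy 1; maximally uncertain -> 0
print(normalized_entropy([[1.0, 0.0], [0.0, 1.0]]))   # 1.0
print(normalized_entropy([[0.5, 0.5], [0.5, 0.5]]))   # 0.0
```

This makes clear why dropping observations can "improve" entropy: removing poorly classified cases raises average posterior certainty without changing the underlying model quality.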
Is that right? (From an article, Using graph theory to analyze biological networks.)
Things flow in biological systems: energy, nutrients, blood, air, as examples.
A graph is a set of points with connections.
A graph is like a photograph; biological systems are like movies. If that analogy is valid (well, maybe it is not?), then graphs are not the optimal way to model biological systems; it is necessary to also model flow.
My sample is La0.95Sr0.05MnO3; another sample, La0.67Sr0.33MnO3, has one peak in the entropy change vs. temperature curve. Thanks for explaining the physics of the problem to me.
The general consensus about the brain, and various neuroimaging studies, suggest that brain states exhibit variable entropy levels under different conditions. On the other hand, entropy is an increasing phenomenon in nature from the thermodynamic point of view, and biological systems appear to contradict this law for various reasons. This can also be thought of as the transformation of energy from one form to another. This situation makes me think about the possibility of distinct energy forms existing in the brain. Briefly, I would like to ask;
Could we find a representation for the different forms of energy rather than the classical power spectral approach? For example, useful energy, useless energy, reserved energy, and so on.
If you find my question ridiculous, please don't answer, I am just looking for some philosophical perspective on the nature of the brain.
Thanks in advance.
I am trying to add entropy generation for a hybrid nanofluid to my research article. What should I note before and after? Please suggest some articles to get a clear idea, and a MATLAB program to plot a graph of the entropy.
During denaturation, hydrogen bonds and hydrophobic interactions are broken. This results in an increase in entropy, even at the highest severity of molecular breakdown. Although denaturation can sometimes be reversed by renaturation, it is generally not possible to restore the protein to its original form, so solubility can be reduced and biological activity lost.
My question: if a natural ingredient such as sago caterpillar has the potential for high protein content, it can be used as an innovation in the form of a preparation. Given the effects of denaturation I mentioned above, can this interfere with the process, or even have a negative impact, once the preparation has been distributed into the human body?
1. The gas diffuses into a vacuum: dQ = 0, dS = dQ/T = 0, so the entropy in the diffusion process cannot be calculated: S(T1).
2. If S(T1) has no physical meaning, then S(T0) and S(T1) have no physical meaning.
Could anyone help me? Thanks
I've analysed a dataset including 11 continuous variables and 1 categorical variable for 1031 patients. I tried to classify these patients into two classes by latent class analysis using Mplus 8.3, but the entropy was 1. These are the details of the results:
Akaike (AIC) 39043.715
Bayesian (BIC) 39122.805
Sample-Size Adjusted BIC 39071.987
(n* = (n + 2) / 24)
Class Counts and Proportions
1 597 0.57625
2 439 0.42375
VUONG-LO-MENDELL-RUBIN LIKELIHOOD RATIO TEST P<0.001
LO-MENDELL-RUBIN ADJUSTED LRT TEST P<0.001
- I have done MRF and sliding-mesh analyses of a centrifugal pump. I need to know how to calculate and display the entropy production.
- The entropy of the attributes of the alternatives under each criterion is a measure of the significance of that criterion. It is believed that the lower the entropy of a criterion, the more valuable the information the criterion contains.
- Criteria Importance Through Inter-criteria Correlation (CRITIC). In the CRITIC method, the standard deviation is the measure of a criterion's significance.
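A minimal sketch of the entropy weighting step described above, with an illustrative decision matrix (alternatives in rows, criteria in columns):

```python
# Sketch of the entropy weighting method for MCDM:
# a criterion whose normalized column has lower entropy (more contrast
# between alternatives) receives a higher weight.
import math

def entropy_weights(matrix):
    """matrix[i][j]: positive value of alternative i under criterion j."""
    m = len(matrix)                       # number of alternatives
    weights = []
    for col in zip(*matrix):
        s = sum(col)
        p = [x / s for x in col]
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        weights.append(1 - e)             # degree of divergence
    total = sum(weights)
    return [w / total for w in weights]

# two criteria: the first discriminates between alternatives, the second does not
w = entropy_weights([[1.0, 5.0], [9.0, 5.0]])
print([round(x, 3) for x in w])  # [1.0, 0.0]: all weight goes to criterion 1
```

Note that the method needs a matrix of alternatives to compute the column entropies, which is relevant to the question of applying it without predefined alternatives.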
A very interesting topic, "quantification of randomness". In mathematics it is sometimes referred to as "complexity theory" (although it is more about pseudorandomness than randomness), based on the idea that a more complicated series is more random. There are also tests for randomness in statistics, and perhaps the most intriguing test is related to information theory, "entropy" (also relevant to, and a result of, the second law of thermodynamics). There are also (pseudo)random number generators, and true random number generators using quantum computing.
So what I've been trying to do is make a complete list of all available algorithms, books, or even random number generators that will tell me how random a series is, allowing me to "quantify randomness".
There are 125 unique infinite pseudorandom series that I have discovered and generated based on a rule. Now, how do I test for randomness and quantify it? Whether the series is random or there is probably a pattern: something that will allow me to predict the next number in the series, given that I don't know what the next number is.
Now, does anyone know of any GitHub links for any of the above (anything related to quantifying randomness in general that you think would be helpful)?
A book or books on quantifying randomness would be very helpful too. Actually, anything at all...
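As a starting point, one simple and deliberately limited quantifier is the Shannon entropy of the symbol frequencies of a finite series; it detects frequency bias but not sequential patterns (for those, batteries such as the NIST SP 800-22 statistical test suite are the usual reference). A sketch:

```python
# One simple, limited way to quantify randomness of a finite series:
# the Shannon entropy of its symbol frequencies, in bits per symbol.
# It only detects frequency bias, not sequential structure:
# "abababab" scores the same as a genuinely shuffled "aabbabba".
from collections import Counter
import math

def entropy_bits(seq):
    counts = Counter(seq)
    n = len(seq)
    # sum of p * log2(1/p) over observed symbols
    return sum((c / n) * math.log2(n / c) for c in counts.values())

print(entropy_bits("aaaa"))   # 0.0 (no uncertainty)
print(entropy_bits("abab"))   # 1.0 (two equiprobable symbols)
```

A higher score means the series uses its alphabet more uniformly; the maximum is log2 of the alphabet size.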
I have a dataset containing the concentration of a marker protein for different experiments, for which I wish to calculate the Shannon entropy. To do this, I create a discretized space between the minimal and maximal observed concentrations, assign each concentration to an interval, and calculate the probabilities from there. My problem is that I can choose the number of intervals in this discretized space, and I do not know how to optimize the value of the entropy with respect to the number of intervals.
For example, imagine I have 3 concentration values: 1, 4 and 10. I can choose to discretize the set into the two intervals [1,5] and [5,10], or into the 10 intervals [1,2], [2,3], [3,4] ... [9,10]. The Shannon entropy for the first choice will be -(2/3)ln(2/3) - (1/3)ln(1/3), while the Shannon entropy for the second will be -3*(1/3)ln(1/3) = ln 3. How do I know which set to choose? In other words, I'm looking for a way to discriminate between various discretizations of the concentration space so as to calculate the most accurate Shannon entropy value, if such a thing exists.
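There is no single canonical bin count, but rules of thumb such as Sturges' rule or the Freedman-Diaconis rule give principled choices. A sketch showing how the histogram entropy depends on the binning, using the 3-value example from the question:

```python
# Sketch: the Shannon entropy of a histogram depends on the binning.
# Rules of thumb (Sturges, Freedman-Diaconis) give principled bin counts;
# here entropy (in nats) is computed for two different choices.
import math

def hist_entropy(values, n_bins):
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins
    counts = [0] * n_bins
    for v in values:
        i = min(int((v - lo) / width), n_bins - 1)  # clamp the max value
        counts[i] += 1
    n = len(values)
    return sum((c / n) * math.log(n / c) for c in counts if c)

data = [1, 4, 10]
print(round(hist_entropy(data, 2), 4))   # 0.6365: bins [1, 5.5) and [5.5, 10]
print(round(hist_entropy(data, 10), 4))  # 1.0986 = ln 3: every value in its own bin
```

Note that with enough bins every value lands in its own bin and the entropy saturates at ln(n), which is why the entropy alone cannot select the "right" discretization.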
I am currently considering purchasing a benchtop mini high-energy planetary ball mill. As I have never used one before, I need your perspectives on its suitability for synthesizing the above materials.
I'm using the GSAS software to do Rietveld refinement of an equiatomic FeCoCrNi high-entropy alloy, but I don't know how to get the CIF file. How can I get the CIF file for the FeCoCrNi high-entropy alloy?
I want to calculate the Gibbs free energy and phase transitions for my high-entropy alloy. I have seen that the CALPHAD approach is widely used. How can I do this calculation, and what type of file do I need as input? I have never used this program or performed these calculations. I would be happy if you could help me.
Suppose a nation gathers all the resources its citizens have and then divides the total equally among all citizens. Will this nation progress? It would fail badly due to economic entropy. Economic entropy means that the same resources are there, but they cannot be employed to benefit the people and the nation: megastructures like factories and companies cannot be established, and no new resources can be generated.
Arrow of time (e.g. entropy's arrow of time): Why does time have a direction? Why did the universe have such low entropy in the past, so that time correlates with the universal (but not local) increase in entropy from past to future, according to the second law of thermodynamics? Is this phenomenon justified by Gibbs's law and irreversible processes?
With respect to all the answers, in my opinion, no answer to such questions is completely correct.
In my current thinking/writing I have been exploring ideas behind quantum social theory, for example the potential of an entropic society. Here, such a society exhibits a default (temporal) tendency toward disorder. Entropy increases unless society works to reduce it. Why? Because, from a quantum super-positional perspective on a society of individuals, there is an infinite potential for interference through quantum interdependency: there is an indeterminate potentiality to disorder, with only a limited number of determinable, observable events that may signify order.
Statistically, unless we invest in reducing the range of interdependencies and thus work to reduce the indeterminacy of state changes and/or interferences, by implementing (social) negentropic constraints, we will experience emergent disorder. Such constraints, including our social institutions, laws, ethics and morals, are designed to increase the probability that a given/anticipated/expected/desired state change within society may be observable. This is society’s desire for normativity.
Yet, as I think along these lines, I begin to see the potential of the autistic mind and its consciousness as a radical free agent unbound by the idea of negentropic normativity. This, to my mind, is a positive prospect: autism's value to society. Society needs its free radicals to prevent excessive negentropy. By attending to the radical free agents of society, we can be reminded that social normativity cannot rule out indeterminacy entirely: society must respect its entropic potential. And while all about us seek to normalise our activities, we can look to autism to remind us of our full, unrealised potential.
Thoughts? All opinions, normative and non-normative are welcome.
I have a range of different ion pairs; some of them formed ionic liquids and others didn't. I've read that for ionic liquids to form, the entropy term should outweigh the enthalpy term. I am wondering if there is a certain range of entropy and enthalpy, or of ΔG, within which ionic liquids can form?
The problem of self-interaction effects and errors arises in studies of, for example, anions, electrons, atoms and molecules.
It also arises in developing a theory of network effects in connection with network entropy (for example, https://arxiv.org/abs/0803.1443 ). In the network case, the concept of degrees of freedom leads to an apparent resolution.
Does the network case generalize?
To understand how gravity cannot be an entropic force, please see comment (5) in . Please assume that the “atoms-of-spacetime” defined in  can be heated up to the Davies-Unruh temperature (T), to produce the desired entropic inertial force,
f = T∇S
where (∇S) is some entropy gradient . What are the problems with this entropic definition of inertial force? Could the entropic inertial force be incompatible with the principle of equivalence, etc.?
P.S. Remember what Boltzmann always used to say, “If you can heat it up, then it’s made out of atoms!”
 Thanu Padmanabhan, (PDF) Atoms of Spacetime and the Nature of Gravity (researchgate.net)
Recently, black holes were demonstrated to exert a pressure on their adjacent surrounding space (see "Physicists’ Total Surprise: Discover Black Holes Exert a Pressure on Their Environment", SciTechDaily, https://scitechdaily.com/physicists-total-surprise-discover-black-holes-exert-a-pressure-on-their-environment/amp/).
Reference: “Quantum gravitational corrections to the entropy of a Schwarzschild black hole” by Xavier Calmet and Folkert Kuipers, 9 September 2021, Physical Review D. DOI: 10.1103/PhysRevD.104.066012
In two old preprints of mine (2018 and 2020), I also predicted that all black holes exert a mechanical pressure on their adjacent environment, which I defined as actually being a reaction force produced by a predicted universal black-hole-associated Casimir force.
More precisely, a black hole (bh) limits the spontaneous appearance of virtual pairs (VPs) inside it, and that creates a gradient between the outer and inner VPs (a positive gradient in the number of VPs per unit volume outside vs. inside that bh). This gradient translates into a (bh-associated) Casimir force (bhaCF) exerting an additional out-to-in pressure on the bh; the bhaCF also generates a reaction force acting from inside to outside the bh, manifesting as a pressure exerted by the bh on its adjacent surrounding space. See the two preprints cited next:
 “(eZEH working paper - version 1.0 - 10 pages - 2.08.2018) An extended zero-energy hypothesis: on some possible quantum implications of a zero-energy universe, including the existence of negative-energy spin-1 gravitons (as the main spacetime “creators”) and a (macrocosmic) black-hole (bh) Casimir effect (bhCE) which may explain the accelerated expansion of our universe”:
 “(bhCE-HR-AE-ObU - version 1.0 - 18.11.2020- 2 A4 pages without references) A proposed black-hole-associated Casimir effect (bhCE) possibly inhibiting Hawking radiation (HR) and creating a spatial expansion around any macro/micro black-hole (possibly driving the global accelerated expansion of our observable universe)”:
What do you think of my proposed black-hole-associated Casimir force and its reaction force (manifested as a pressure exerted by black holes on their surrounding space)?
These excitations at low temperatures give rise to disorder beyond the long/short-range ordered magnetic states. In spin ice systems, quantum fluctuations act as a significant perturbation that deviates the system far away from the two-in-two-out spin configuration. This gives rise to the three-in-one-out spin configuration and forms dynamic monopole-antimonopole pairs. Surprisingly, no change in the spin-ice entropy is observed.
Why is a dynamic ground state (or an increase in the entropy) not observed beyond the spin ice state, given the dominance of quantum fluctuations at low temperatures?
I used " Solve/set/expert " command to obtain the temperature gradient and used "Custom field function" to manually enter the entropy equation. However, the result is not plausible and I cannot validate my case with other researches.
I am working on a lid-driven cavity problem in porous media. I have calculated the fluid flow using the finite difference method and the average Nusselt number by Simpson's rule, varying y from 0 to 1. I don't know how to calculate the entropy generation and Bejan number described in the attached figure.
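For orientation (the attached figure's exact expression should take precedence, since forms vary by paper), the commonly used local entropy generation for 2D convection, with T_0 a reference temperature, is

```latex
S_{gen} = \underbrace{\frac{k}{T_0^{2}}\left[\left(\frac{\partial T}{\partial x}\right)^{2}
  + \left(\frac{\partial T}{\partial y}\right)^{2}\right]}_{S_{heat}}
  + \underbrace{\frac{\mu}{T_0}\,\Phi}_{S_{fric}},
\qquad
Be = \frac{S_{heat}}{S_{heat} + S_{fric}}
```

where Φ is the viscous dissipation function; for a porous medium a Darcy dissipation term of the form (μ/(K T_0))(u² + v²) is typically added to the friction part. The Bejan number Be is then the fraction of entropy generation due to heat transfer.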
I carried out DFT calculations using Gaussian. By varying the pressure (0.8 GPa, 5 GPa, 8 GPa, ...), I could observe a change in the values of the following parameters:
- thermal correction to the Gibbs free energy
- thermal energy, specific heat capacity, and entropy due to translation
- but no change in the thermal energy, specific heat capacity, and entropy due to rotational and vibrational motion.
Can anyone give a clear explanation?
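One explanation worth checking: in the rigid-rotor/harmonic-oscillator partition functions used in standard thermochemistry, only the translational part depends on pressure (through the volume per molecule V = kT/P), so only the translational thermal quantities shift with P. A sketch of the Sackur-Tetrode translational entropy, illustrated for N2:

```python
# Sketch: only the translational partition function depends on pressure
# (through V = kT/P per molecule), so only S_trans shifts with P;
# rotational and vibrational contributions are pressure-independent.
import math

k_B = 1.380649e-23    # J/K
h_P = 6.62607015e-34  # J*s (Planck constant)
N_A = 6.02214076e23   # 1/mol
R   = k_B * N_A

def s_trans(mass_amu, T, P):
    """Sackur-Tetrode molar translational entropy, J/(mol*K)."""
    m = mass_amu * 1.66053906660e-27
    lam3 = (h_P**2 / (2 * math.pi * m * k_B * T)) ** 1.5  # thermal wavelength cubed
    return R * (math.log(k_B * T / (P * lam3)) + 2.5)

# N2 at 298.15 K: about 150.4 J/(mol K) at 1 bar; raising P lowers S_trans
print(round(s_trans(28.014, 298.15, 1.0e5), 1))
print(round(s_trans(28.014, 298.15, 8.0e9), 1))   # ~8 GPa
```

The rotational and vibrational partition functions contain no volume or pressure dependence, consistent with those contributions staying constant in the output.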
Considering that ASCII characters range from 32 to 127, so we need 7 bits to represent each character, what is the highest theoretical value of the information entropy of a given text?
For example for grayscale images, since we represent each pixel using 8 bits and there is a maximum of 2^8=256 shades, the highest entropy value is 8.
Does this hold for text? Is the highest possible attainable value 7?
I am trying to configure this for a text encryption design.
Does the entropy command in MATLAB provide the correct result for this?
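One note on the arithmetic: the range 32..127 contains 96 symbols, so for text restricted to those characters the entropy ceiling is log2(96), which is slightly below the 7 bits of the code width (7 bits would require all 128 code points to be used, equiprobably). A quick check:

```python
# Maximum Shannon entropy of a source = log2(alphabet size),
# reached only when all symbols are equiprobable.
import math

alphabet_size = 127 - 32 + 1               # 96 characters in the range 32..127
print(round(math.log2(alphabet_size), 3))  # 6.585 bits per character
print(math.log2(128))                      # 7.0: ceiling if all 128 codes are used
```

So for an encryption design, ciphertext entropy measured over that character range can approach 6.585 bits/character, not 7.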
I am implementing GLCM on MRI images. I have obtained its features and results, but I don't understand the significance and use of all the features, like entropy, contrast, etc. I know the definitions and equations, but I want to know how to use all these features for classification and obtain the result.
Dear experts, colleagues
In MCDM, is it possible to apply the Entropy method to derive weights of criteria without predefined alternatives?
In my case, the criteria data are available as raster layers covering the entire area of interest, and I want to explore the optimal sites on it.
Thanks in advance for your support
I did XRD on my powder alloy samples. I got the major peaks of all samples with different concentrations, but besides the major peaks there are some small distortion peaks. I think the high-entropy alloy powder has not crystallized well after 45 h of mechanical alloying (ball milling). How can I improve the crystallinity of my powder alloy samples?
It is suggested that the zero-point energy (which causes measurable effects like the Casimir force and the Van der Waals force) cannot be a source of energy for energy-harvesting devices, because the ZPE entropy cannot be raised, as it is already maximal in general, and one cannot violate the second law of statistical mechanics. However, I am not aware of a good theoretical or empirical proof that the ZPE entropy is at its highest value always and everywhere. So I assume that the ZPE can be used as a source of energy to power all our technology. Am I wrong or right?
My goal is to discretize a set of attributes using entropy discretization. The plan for this program is the following:
1. Discretize attribute A1 using entropy-based discretization when either:
a. the number of distinct classes within a partition is 1, or
b. the ratio of the minimum to maximum frequencies among the distinct values of the Class attribute in the partition is < 0.5, and the number of distinct values of the Class attribute in the partition is Floor(n/2), where n is the number of distinct values in the original dataset.
I've attached a sample of my Python program to this discussion, along with a sample dataset. The issue that I am running into is minimizing the information gain. My hypothesis is that the number of distinct classes within a partition is 1 when the information gain is 0. I would appreciate any feedback on this.
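As a cross-check for the attached program, here is a minimal, self-contained sketch of one entropy-based split search (an ID3/MDL-style cut on a numeric attribute; the dataset below is illustrative):

```python
# Minimal sketch of an entropy-based split search: choose the cut point
# on a numeric attribute that maximizes information gain over the class labels.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return sum((c / n) * math.log2(n / c) for c in Counter(labels).values())

def best_cut(values, labels):
    """values: numeric attribute; labels: class labels. Returns (cut, gain)."""
    pairs = sorted(zip(values, labels))
    base = entropy(labels)
    best = (None, 0.0)
    for i in range(1, len(pairs)):
        if pairs[i - 1][0] == pairs[i][0]:
            continue  # cannot cut between equal attribute values
        left = [l for _, l in pairs[:i]]
        right = [l for _, l in pairs[i:]]
        gain = base - (len(left) * entropy(left) + len(right) * entropy(right)) / len(pairs)
        if gain > best[1]:
            best = ((pairs[i - 1][0] + pairs[i][0]) / 2, gain)
    return best

cut, gain = best_cut([1, 2, 3, 10, 11, 12], ["a", "a", "a", "b", "b", "b"])
print(cut, round(gain, 3))  # 6.5 1.0: a pure split, so gain equals the base entropy
```

This also illustrates the hypothesis in the question from the opposite direction: when both partitions are pure (one distinct class each), the children's entropy is 0 and the gain equals the parent's entropy; a gain of 0 means the split leaves the class mixture unchanged.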
1. Can we use the equation "delta S = delta q/T" for any process in which T is constant? Can it be used for any isothermal process? Are all isothermal processes reversible? Can an adiabatic process be reversible if it is carried out infinitesimally slowly? Can we use "delta S = delta q/T" for such an adiabatic change to get "delta S = 0"?
2. What is the meaning of non-PV work? How can it be extracted?
I would welcome any comments on these questions.
Can we define entropy for a particle?
In quantum mechanics, when we are dealing with a quantum system, the measurement process leads us to suppose we are dealing with an ensemble of systems. Is it legitimate to define entropy for a single quantum system in this way?
I have done solid-liquid adsorption studies and thermodynamic parameter studies. In my thermodynamic calculations I obtained negative enthalpy and entropy values, and positive Gibbs free energy values. According to the literature, if the Gibbs free energy is negative the adsorption process is spontaneous, and if it is positive the process is non-spontaneous. I would like to know whether, if the process is non-spontaneous, desorption will occur or not. Could anyone please help me explain this study with the above-mentioned values, and help me get some ideas about the thermodynamic studies?
Thank you in advance.
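With negative ΔH and negative ΔS, ΔG = ΔH - TΔS becomes positive above the crossover temperature T = ΔH/ΔS, which is one common way to discuss such data. A sketch with purely illustrative numbers, not the values from the study above:

```python
def delta_G(dH, dS, T):
    """Gibbs free energy change from enthalpy (J/mol), entropy (J/(mol*K))
    and temperature (K): dG = dH - T*dS."""
    return dH - T * dS

# Illustrative values only: exothermic adsorption with an entropy decrease.
dH = -20000.0  # J/mol
dS = -80.0     # J/(mol*K); crossover at dH/dS = 250 K
for T in (200.0, 250.0, 300.0):
    print(T, delta_G(dH, dS, T))  # negative, zero, then positive
```

Below the crossover the process is spontaneous in the forward (adsorption) direction; above it, the reverse (desorption) direction has negative ΔG instead.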
Globally, the phenomenological interpretations of the signs of the standard Gibbs free energy (ΔG°), the standard enthalpy (ΔH°) and the standard entropy (ΔS°) are the same across different publications.
Nevertheless, when the absolute value of one thermodynamic parameter is high or low enough, we read some small differences of interpretation, especially when there is some specific phenomenon, adsorbent, adsorbate, metallic ion, etc.
The goal of this discussion is to help researchers in this field enrich their interpretations of new experimental data by reading our different answers and replies on situations already published.
Another delicate point which can enrich this discussion: if one thermodynamic parameter varies slightly with temperature, how does this affect the usual interpretations (increase or decrease) for positive or negative values?
We know that entropy is associated with every matter. Can it be associated with the expansion of the universe too? Is there any spiritual angle associated with the entropy and universe expansion?
First, I use historical time-series data of 20 ports' throughput to predict future output. Then I calculate the information entropy of these time series. I find that the prediction accuracy within a certain interval of entropy is very good. How can this phenomenon be explained? Why is the prediction accuracy related to the entropy? What is the mechanism in the ARIMA method?
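One generic way to quantify the entropy of such a series (a sketch only; the questioner's exact entropy definition may differ) is to bin the values and apply Shannon's formula. Low entropy means the series is concentrated in a few states, which is one intuition for why it may be easier to forecast:

```python
import math
from collections import Counter

def series_entropy(series, bins=5):
    """Shannon entropy (base 2) of a time series after equal-width binning."""
    lo, hi = min(series), max(series)
    width = (hi - lo) / bins or 1.0          # guard against a constant series
    labels = [min(int((x - lo) / width), bins - 1) for x in series]
    n = len(labels)
    return sum(-(c / n) * math.log2(c / n) for c in Counter(labels).values())

# A nearly constant throughput series has lower entropy than an erratic one:
print(series_entropy([100, 101, 100, 102, 101, 100]))
print(series_entropy([100, 250, 40, 300, 10, 180]))
```

The first series stays in few bins and scores lower; the second jumps across the whole range and scores higher.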
The inner energy is: U= E(el) + E(ZPE) + E(vib) + E(rot) + E(trans)
E(el) - is the total energy from the electronic structure calculation
= E(kin-el) + E(nuc-el) + E(el-el) + E(nuc-nuc)
E(ZPE) - the zero temperature vibrational energy from the frequency calculation
E(vib) - the finite temperature correction to E(ZPE) due to population
of excited vibrational states
E(rot) - is the rotational thermal energy
E(trans)- is the translational thermal energy
Summary of contributions to the inner energy U:
Electronic energy ... -1650.67074291 Eh
Zero point energy ... 0.16891920 Eh 106.00 kcal/mol
Thermal vibrational correction ... 0.00502963 Eh 3.16 kcal/mol
Thermal rotational correction ... 0.00141627 Eh 0.89 kcal/mol
Thermal translational correction ... 0.00141627 Eh 0.89 kcal/mol
Total thermal energy -1650.49396154 Eh
Summary of corrections to the electronic energy:
(perhaps to be used in another calculation)
Total thermal correction 0.00786217 Eh 4.93 kcal/mol
Non-thermal (ZPE) correction 0.16891920 Eh 106.00 kcal/mol
Total correction 0.17678137 Eh 110.93 kcal/mol
The enthalpy is H = U + kB*T
kB is Boltzmann's constant
Total free energy ... -1650.49396154 Eh
Thermal Enthalpy correction ... 0.00094421 Eh 0.59 kcal/mol
Total Enthalpy ... -1650.49301733 Eh
Note: Only C1 symmetry has been detected, increase convergence thresholds
if your molecule has a higher symmetry. Symmetry factor of 1.0 is
used for the rotational entropy correction.
Note: Rotational entropy computed according to Herzberg
Infrared and Raman Spectra, Chapter V,1, Van Nostrand Reinhold, 1945
Point Group: C1, Symmetry Number: 1
Rotational constants in cm-1: 0.073719 0.034947 0.034938
Vibrational entropy computed according to the QRRHO of S. Grimme
Chem.Eur.J. 2012 18 9955
The entropy contributions are T*S = T*(S(el)+S(vib)+S(rot)+S(trans))
S(el) - electronic entropy
S(vib) - vibrational entropy
S(rot) - rotational entropy
S(trans)- translational entropy
The entropies will be listed as multiplied by the temperature to get
units of energy
Electronic entropy ... 0.00000000 Eh 0.00 kcal/mol
Vibrational entropy ... 0.00764430 Eh 4.80 kcal/mol
Rotational entropy ... 0.01390861 Eh 8.73 kcal/mol
Translational entropy ... 0.01975045 Eh 12.39 kcal/mol
Final entropy term ... 0.04130337 Eh 25.92 kcal/mol
In case the symmetry of your molecule has not been determined correctly
or in case you have a reason to use a different symmetry number we print
out the resulting rotational entropy values for sn=1,12 :
| sn= 1 | S(rot)= 0.01390861 Eh 8.73 kcal/mol|
| sn= 2 | S(rot)= 0.01325416 Eh 8.32 kcal/mol|
| sn= 3 | S(rot)= 0.01287133 Eh 8.08 kcal/mol|
| sn= 4 | S(rot)= 0.01259970 Eh 7.91 kcal/mol|
| sn= 5 | S(rot)= 0.01238901 Eh 7.77 kcal/mol|
| sn= 6 | S(rot)= 0.01221687 Eh 7.67 kcal/mol|
| sn= 7 | S(rot)= 0.01207132 Eh 7.57 kcal/mol|
| sn= 8 | S(rot)= 0.01194524 Eh 7.50 kcal/mol|
| sn= 9 | S(rot)= 0.01183404 Eh 7.43 kcal/mol|
| sn=10 | S(rot)= 0.01173456 Eh 7.36 kcal/mol|
| sn=11 | S(rot)= 0.01164457 Eh 7.31 kcal/mol|
| sn=12 | S(rot)= 0.01156241 Eh 7.26 kcal/mol|
GIBBS FREE ENERGY
The Gibbs free energy is G = H - T*S
Total enthalpy ... -1650.49301733 Eh
Total entropy correction ... -0.04130337 Eh -25.92 kcal/mol
Final Gibbs free energy ... -1650.53432070 Eh
For completeness - the Gibbs free energy minus the electronic energy
G-E(el) ... 0.13642221 Eh 85.61 kcal/mol
Could you please help me with this ORCA output? Which value should be considered the thermal correction to the Gibbs free energy?
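Putting the printed values together may help: ORCA reports the quantity "G-E(el)" at the end of the output, which plays the role of what Gaussian calls the thermal correction to the Gibbs free energy, i.e. G minus the electronic energy. A quick arithmetic check with the numbers copied from the output above:

```python
# Values copied from the ORCA output above (all in Hartree, Eh):
E_el = -1650.67074291  # electronic energy
G    = -1650.53432070  # final Gibbs free energy
H    = -1650.49301733  # total enthalpy
T_S  =     0.04130337  # total entropy term T*S

# Gaussian-style thermal correction to G is G - E(el):
corr_G = G - E_el
print(round(corr_G, 8))  # 0.13642221 Eh, matching the printed G-E(el) line

# Consistency check: G = H - T*S reproduces the Final Gibbs free energy line
print(round(H - T_S, 8))
```

So the value to use as the thermal correction to G is the printed G-E(el), 0.13642221 Eh; adding it to the electronic energy of any single-point calculation on the same geometry gives the corresponding free energy.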
Should exergy analysis be taught to undergrad engineering students? I am wondering to what extent I should include this in my undergrad Thermodynamics course for 3rd year chemical engineering students? We do energy balance and the entropy balance already.
I'm trying to determine the Gibbs free energy change for some possible reaction pathways to see whether they are feasible.
No literature data can currently be found for these reactions. Initially, I thought to use bond dissociation energies to find the change in enthalpy, but this does not take into account the entropy of the reaction.
When a system, say a heat engine, draws heat from a hot reservoir (a body at a higher temperature than the system), it does work. Since heat is a low-grade energy, not all of the heat is converted to work; the remaining heat is released into a cold reservoir (say, the environment, which is at a lower temperature than the system). Now, this released heat (which is unavailable to the system) is somehow related to the entropy of the system, according to some literature. My question is: how is this heat related to entropy?
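One way to see the connection: the entropy taken from the hot reservoir, Q_h/T_h, must be carried away by the rejected heat, so Q_c/T_c >= Q_h/T_h. A minimum heat Q_c = Q_h * T_c/T_h must therefore be dumped even by an ideal reversible engine; any extra rejection reflects entropy generation. A minimal sketch with illustrative numbers:

```python
def entropy_balance(Qh, Th, Qc, Tc):
    """Entropy given up by the hot reservoir, absorbed by the cold one,
    and the entropy generated (>= 0 by the second law)."""
    S_hot = Qh / Th    # entropy leaving the hot reservoir
    S_cold = Qc / Tc   # entropy entering the cold reservoir
    return S_hot, S_cold, S_cold - S_hot

# Reversible (Carnot) case: Qc = Qh * Tc/Th, so nothing is generated:
print(entropy_balance(1000.0, 500.0, 600.0, 300.0))  # (2.0, 2.0, 0.0)

# An irreversible engine rejects more heat, generating entropy:
print(entropy_balance(1000.0, 500.0, 700.0, 300.0))
```

The "unavailable" heat is thus exactly the heat needed to export the entropy that entered with Q_h; the larger the entropy generation, the more work capacity is lost.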
The immediate answer, without too much thinking, would probably be: yes, we need them, because not all criteria have the same importance, and this is true.
Fine; now, what are criteria weights? They are subjective transcriptions of verbal ordinal expressions, which are also subjective. Analyzing this step, one realizes that they are totally arbitrary and, in addition, subject to the decision of a DM, which may differ from that of another DM.
Really, a very suspicious procedure, and one with no mathematical support.
We agree that weights are used to rank criteria according to their relative importance. And now comes the crucial question: 'Importance relative to what?'
If you analyze a fact such as for instance, the relative importance of quality and price regarding the purchase of a car, you can legitimately say, that in this respect, and for YOU, quality is more important, without assigning a quantitative value to that preference, because it would be meaningless. Nobody can put a value to a feeling or a preference.
Observe, and this is important, that your preference must be related to something, in this case the car, because in other aspects you may prefer price to quality, according to your tastes and budget, for instance, in purchasing a necktie, and thus, it depends on the object of the comparison.
In case of criteria weights in MCDM, a practitioner may express these two postulates:
1) Weights are necessary to evaluate alternatives,
2) Due to the fact that not all criteria have the same importance.
Is this answer correct? NOT in the first statement, and YES in the second.
The reason for the fallacy of the first statement is that the relative importance between criteria is NOT associated with the evaluation of alternatives, unless the weights are objective weights derived from entropy.
Probably you will say: but these entropy weights also establish a ranking between criteria, as subjective weights do; then why is the evaluation of alternatives valid for entropy weights and not for preference-based ones?
Because Shannon's theorem, the basis of information theory as we know it today, demonstrates that to evaluate something you must have the capacity for evaluation: you need a certain quantity of information, and that depends on the discrimination among the values of a criterion, that is, of an attribute. As an analogy, the current practice is equivalent to asking a five-year-old child to evaluate different cars.
He or she, at that age, does not have the amount of information about cars that an adult possesses.
This amount of information in a criterion can be found by entropy; this is the great discovery of Shannon, and he even developed a formula to compute it.
If, in a criterion, the different values corresponding to the alternatives are very similar or close, the entropy or uncertainty is high, the quantity of information is low, and thus this criterion has low significance for evaluating alternatives.
The best example is a die. The uncertainty about which number will appear when it is cast is 100%, because all outcomes have the same probability (1/6); the entropy will be 1, and the quantity of information 0.
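The die example, and the entropy-weight idea itself, can be sketched numerically. This is a generic illustration of Shannon's entropy-weight method; the decision matrix below is made up:

```python
import math

def shannon_weights(matrix):
    """Entropy-based criteria weights from a decision matrix.
    Rows are alternatives, columns are criteria; values must be positive."""
    m = len(matrix)  # number of alternatives
    raw = []
    for j in range(len(matrix[0])):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [x / total for x in col]
        H = -sum(pi * math.log(pi) for pi in p) / math.log(m)  # normalized entropy
        raw.append(1 - H)  # degree of diversification = quantity of information
    s = sum(raw)
    return [w / s for w in raw]

# A fair die: all outcomes equally likely, normalized entropy 1, information 0.
p = [1 / 6] * 6
H = -sum(pi * math.log(pi) for pi in p) / math.log(6)
print(round(H, 10))  # 1.0

# Criterion 1 barely discriminates between alternatives; criterion 2 does,
# so nearly all the weight goes to criterion 2:
print(shannon_weights([[100, 1], [101, 9], [99, 5]]))
```

This is exactly the point made above: near-identical column values mean high entropy, low information, and a near-zero objective weight, regardless of any subjective preference.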
Since weighting does not have this property, it can’t be used for evaluation.
Therefore, why spend thousands of hours trying to find out how to obtain better subjective weights, when they are useless, at least for MCDM?
I am not claiming that I am right, but at least this is what I conclude based on reasoning and science.
Therefore, if subjective weights are inappropriate to evaluate alternatives, why are we using them?
I would very much like some of my colleagues in RG to add something else, supporting my arguments or not. As a bottom line, I believe that weights other than entropy-based ones should not be used in MCDM methods. I am not saying that we should consider all criteria to have the same importance, because most of the time that is not true; rather, we should use either entropy weights or allow the method to determine by itself the importance of each criterion, based on the input data, as is done in Linear Programming.
I will be more than glad to discuss this issue, but please, with arguments, not with mere words or vague references to what other authors wrote.
I designed a single-effect absorption cooling system working with LiBr/H2O in Aspen Plus. But the exergy flow rates do not seem right. For example, when I try to find the exergy destruction of the pump (and of other components), the result is negative. What is the reason for this? My reference conditions are 25 °C and 101.325 kPa.
I am a beginner in using the REFPROP software by NIST. I am trying to estimate various thermodynamic properties of carbon dioxide using GERG and other equations of state. I am wondering why I am observing a significant difference between the results of the GERG-2008 and AGA8 models, specifically in the enthalpy, entropy and internal energy.
For an example, please see the REFPROP results in the attached file. It can be seen that GERG-2008 yields negative and completely different values compared with the Peng-Robinson and AGA8 models. What is the reason for this variation? How can I get comparable results?
Note: I used the default reference state by REFPROP.