Understanding the 2nd Law of Thermodynamics
Sean Carroll at Cosmic Variance has an interesting set of slides on the 2nd law of thermodynamics and the arrow of time. In this age of simulations, one question that came to my mind was whether we could do a molecular dynamics simulation of a glass of ice cubes, allow them to melt, and then run the simulation in negative time to get the ice cubes back. Probably not. But the fundamental physical laws have no time directionality to them, so what gives? More puzzlingly, the universe appears to have had a low-entropy beginning, yet time-symmetric reasoning of the kind Boltzmann used suggests that the present moment should be an entropy minimum, with entropy increasing toward both the past and the future.
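On the reversibility point: a time-reversible integrator such as velocity Verlet really can be run "in negative time" simply by flipping all the velocities. The toy sketch below (my own illustration in Python/NumPy, with a harmonic trap standing in for real interatomic forces, not an actual simulation of ice) shows the trajectory retracing itself. In a real melting simulation, floating-point round-off amplified by chaotic dynamics would destroy the reversal long before the ice cubes reassembled.

```python
import numpy as np

def velocity_verlet(x, v, force, dt, steps):
    """Integrate Newton's equations with the time-reversible velocity-Verlet scheme."""
    a = force(x)
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt**2
        a_new = force(x)
        v = v + 0.5 * (a + a_new) * dt
        a = a_new
    return x, v

# Toy system: particles in a harmonic trap (a stand-in for real interatomic forces).
force = lambda x: -x

rng = np.random.default_rng(0)
x0 = rng.normal(size=8)
v0 = rng.normal(size=8)

# Run forward, flip the velocities, and run the same number of steps again:
# the system retraces its path back to the initial configuration.
x1, v1 = velocity_verlet(x0, v0, force, dt=0.01, steps=1000)
x2, v2 = velocity_verlet(x1, -v1, force, dt=0.01, steps=1000)

print(np.allclose(x2, x0))  # True: the trajectory retraces itself
```

For this smooth, regular system the reversal works to round-off precision; for a chaotic many-body liquid, the same round-off grows exponentially, which is one practical answer to the ice-cube question.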
Some answers can be found in the comments on Sean's post.

John Baez clarifies some points (including the Boltzmann H-theorem), saying:
So, Sean wrote:

Boltzmann’s H-theorem, while interesting and important, is even worse. It makes an assumption that is not true (molecular chaos) to reach a conclusion that is not true (the entropy is certain, not just likely, to increase toward the future — and also to the past).
whereas when I last thought about this theorem, I claimed otherwise (see Blake Stacey’s comment).
So, is the assumption behind Boltzmann’s H-theorem time-symmetric or not? And what about the theorem’s conclusion: time-symmetric or not? Is the conclusion that entropy increases both in the future and the past… or just towards the future? I’m pretty darn sure the assumptions and conclusions are time-asymmetric. Boltzmann actually called his assumption the “Stosszahlansatz”, or “collision number assumption”.
So: what’s the Stosszahlansatz? It goes like this.
Suppose we have a homogeneous gas of particles and the density of them with momentum p is f(p). Consider only 2-particle interactions and let w(p1, p2; p1′, p2′) be the transition rate at which pairs of particles with momenta p1, p2 bounce off each other and become pairs with momenta p1′, p2′. To keep things simple let me assume symmetry of the basic laws under time and space reversal, which gives:
w(p1, p2; p1′, p2′) = w(p1′, p2′; p1, p2).
In this case the Stosszahlansatz says:
df(p1)/dt = integral w(p1, p2; p1′, p2′) [f(p1′)f(p2′) − f(p1)f(p2)] dp2 dp1′ dp2′
This is very sensible-looking if you think about it. Using this, Boltzmann proves the H-theorem. Namely, the derivative of the following function is less than or equal to zero:
H(t) = integral f(p) ln f(p) dp
This is basically minus the entropy. So, entropy increases, given the Stosszahlansatz! The proof is an easy calculation, and you can find it in section 3.1 of Zeh’s The Physical Basis of the Direction of Time (a good book). Now: since the output of the H-theorem is time-asymmetric, and all the inputs are time-symmetric except the Stosszahlansatz, we should immediately suspect that the Stosszahlansatz is time-asymmetric.
And it is!
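The conclusion of the H-theorem — that H(t) = ∫ f ln f dp never increases — is easy to see numerically. The sketch below is my own illustration, not Boltzmann's actual collision integral: it uses the simpler BGK relaxation toward a Maxwellian with the same density, mean momentum, and temperature, which satisfies an H-theorem of the same form.

```python
import numpy as np

# Discrete momentum grid and a bimodal (far-from-equilibrium) initial distribution.
p = np.linspace(-10, 10, 401)
dp = p[1] - p[0]
f = np.exp(-(p - 2)**2) + 0.5 * np.exp(-(p + 3)**2)
f /= f.sum() * dp                      # normalize to unit density

def H(f):
    """Boltzmann's H functional, integral of f ln f (i.e. minus the entropy)."""
    return np.sum(f * np.log(f)) * dp

def maxwellian(f):
    """Maxwellian with the same density, mean momentum, and temperature as f."""
    n = f.sum() * dp
    u = (p * f).sum() * dp / n
    T = ((p - u)**2 * f).sum() * dp / n
    return n / np.sqrt(2 * np.pi * T) * np.exp(-(p - u)**2 / (2 * T))

# BGK relaxation: a stand-in for the full Stosszahlansatz collision integral.
dt, tau = 0.01, 0.1
hs = [H(f)]
for _ in range(500):
    f = f + dt / tau * (maxwellian(f) - f)
    hs.append(H(f))

print(f"H fell from {hs[0]:.3f} to {hs[-1]:.3f}")
```

Each step replaces f by a convex combination of itself and the same-moment Maxwellian, and since x ln x is convex and the Maxwellian minimizes H at fixed moments, H can only go down — the discrete analogue of the theorem's conclusion.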


Sean's response was:
John’s comment above about the Stosszahlansatz is basically right, but there is a little bit more behind the story. (And it is all there in Zeh’s book, if you bother to unpack it.)
Boltzmann actually did two innocuous-sounding things that enabled him to miraculously derive a time-asymmetric conclusion from time-symmetric microscopic laws. First, he worked with distribution functions defined in the single-particle phase space. That is, if you have two particles, you can either keep track of them as one point in a two-particle phase space (separate dimensions for the position and momentum of each particle) or as two points in the single-particle phase space. Both are completely equivalent. But if you go to a distribution function in that single-particle phase space, f(q,p), then you have thrown away information — implicitly, you’ve coarse-grained. In particular, you can’t keep track of the correlations between different particle momenta. There’s no way, for example, to encode “the momentum of particle two is opposite to that of particle one” in such a distribution function. (You can keep track of it with a distribution function on the full multi-particle phase space, which is what Gibbs ultimately used. But there entropy is strictly conserved, unless you coarse-grain by hand.)
Because you’ve thrown away information, there is no autonomous dynamics for the distribution function. That is, given f(p,q), you can’t just derive an equation for its time derivative from the Hamiltonian equations of motion. You need to make another assumption, which for Boltzmann was the Stosszahlansatz referred to above. You can justify it by saying that “at the initial moment, there truly are no momentum correlations” (plus some truly innocent technical assumptions, like neglecting collisions with more than two particles). But of course the real Hamiltonian dynamics then instantly creates momentum correlations. So that innocent-sounding assumption is equivalent to “there are no momentum correlations before the collisions (even though there will be afterwards).” Which begins to sound a bit explicitly time-asymmetric.
The way I presented the story in my talk was to strictly impose molecular chaos (no momentum correlations) at one moment in time. That’s really breaking time-translation invariance, not time-reversal. From that you could straightforwardly derive that entropy should increase to the past and the future, given the real Hamiltonian dynamics. What the real Boltzmann equation does is effectively to assume molecular chaos, chug forward one timestep, and then re-assume molecular chaos. It’s equivalent to a dynamical coarse-graining, because the distribution function on the single-particle phase space can’t carry along all the fine-grained information.
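Sean's coarse-graining point can be illustrated with a toy calculation (my own sketch, not his construction): the exact microscopic state loses nothing — reversing all velocities would reassemble the initial clump — yet the entropy of a binned, coarse-grained position distribution grows anyway, because the bins cannot carry the fine-grained correlations.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# Fine-grained state: every particle starts in a tiny region of a periodic box
# with a spread of velocities. No information is ever lost at this level.
x = rng.uniform(0.0, 0.05, N)
v = rng.normal(0.0, 1.0, N)

def coarse_entropy(x, bins=20):
    """Shannon entropy of the binned position distribution, i.e. after coarse-graining."""
    counts, _ = np.histogram(x, bins=bins, range=(0.0, 1.0))
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

# Free streaming on the periodic box: exactly reversible microscopically,
# but the coarse-grained entropy climbs toward its maximum, ln(20) ≈ 3.0.
S = []
for t in [0.0, 0.1, 0.5, 2.0]:
    S.append(coarse_entropy((x + v * t) % 1.0))

print([round(s, 2) for s in S])
```

The binned entropy starts at zero (everything in one cell) and saturates near ln(bins), even though the map x → x + vt is perfectly invertible — growth comes entirely from the choice to describe the gas with a coarse distribution.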
What do you think?
 Biswajit Banerjee's blog
Comments
A non-statistical way...
Another way to look at this question...
Is the discrete probability theory at all necessary here? Of course, Boltzmann used it, but I think it should not be absolutely necessary. ... To see how, you may think in terms of the Fourier analysis of diffusion. The time evolution in diffusion leads to a loss of the sharpness initially present in the signal. ... This fact, viz., that diffusion smoothens out any sharpness, is what corresponds to the increase in entropy. A discussion in terms of particles is just one way to describe the situation.
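The Fourier picture of diffusion is easy to demonstrate numerically: solving the heat equation spectrally, each mode of wavenumber k decays as exp(−k²t), so sharp features (high k) die first while the total amount of diffusing substance is conserved. A small sketch (periodic domain, NumPy FFT; the box-pulse initial condition is just an illustrative choice):

```python
import numpy as np

# Spectral solution of the 1-D heat equation u_t = u_xx on a periodic domain.
n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u0 = (np.abs(x - np.pi) < 0.2).astype(float)        # a sharp box pulse

k = np.fft.fftfreq(n, d=2 * np.pi / n) * 2 * np.pi  # integer wavenumbers

def evolve(u, t):
    """Damp each Fourier mode by exp(-k^2 t): exact heat-equation evolution."""
    return np.real(np.fft.ifft(np.fft.fft(u) * np.exp(-k**2 * t)))

for t in [0.0, 0.01, 0.1, 1.0]:
    u = evolve(u0, t)
    print(f"t={t:5.2f}  max={u.max():.3f}  spread={np.std(u):.3f}")
```

The peak height and the standard deviation of the profile shrink monotonically as the sharpness is smoothed away, while the mean (the k = 0 mode) stays fixed — the smoothing-out that the comment identifies with entropy increase.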
As an aside: For a continuum description, only the Fourier theory (a nonlocal theory) has been generally available so far. However, a *local* continuum theory *is*, of course, possible. This is an interesting issue by itself...
Coming back to the main discussion thread: The second law really is marvellous. No matter what formalism you use, what terms, you simply can't escape the basic physical fact denoted by it.

Another matter. I am not sure if current MD simulation technology is at all capable of showing any melting in the first place... Could someone clarify this matter, please? Is the theory of MD sufficiently well developed that quantum mechanical calculations done on an initial solid state of a material would invariably *lead* to a change of state to the liquid state in the simulation? How about solids like camphor that directly vaporize? (This would be a test case, really speaking!)