Thermodynamics
I've long thought there was something wrong with chemistry. Any field that takes 12 divided by the mass of the proton in grams as a fundamental constant has something basically wrong with it. One aspect of this wrongness is that one weird constant breeds others. Take the ideal gas constant: R = k_b*A_o, where k_b is Boltzmann's constant and A_o is Avogadro's number. The only interesting equation where R appears is the ideal gas law, usually given as P*V = n*R*T, where n is measured in weird units: the number of particles divided by Avogadro's number. The use of dimensionless quantities as named units has always seemed to me basically obscurantist, and while I can appreciate that as the field developed historically it made sense to use A_o, it no longer does. The ideal gas law ought to be written as P*V = k_b*n*T, where n is the plain old number of particles. Boltzmann's constant at least has a claim to a certain level of fundamentality.

I've been playing around with kinetic theory lately, partly via simulation, and it's surprisingly hard to get the results of simulation to agree with the ideal gas law. This is a valuable exercise because it exposes the weaknesses of the thinking behind it. This turns out not to be a new thought--other people are aware of the weakness of Boltzmann's whole approach, which is still the basis for how we teach elementary kinetic theory.

But I have a problem with Gibbs's approach too, which is this: on what basis do we take the ensemble average to get the macroscopic properties of the system? In Gibbs's approach we deal with N interacting particles and imagine many copies of the system in different micro-states. We then take weighted averages over this collection of imaginary copies (called the Gibbs ensemble) to get the macroscopic properties of the real system. But on what dynamical basis do we do this? At any moment the real system is in exactly one real state, and that state determines its evolution with time.
The macroscopic dynamics of the system are determined by the micro-dynamics of that state alone, not the micro-dynamics of any other states that the system "might" be in but isn't. In particular, is there any reason to believe that all or even most of the other microstates are dynamically accessible? For example, one can imagine setting up ten particles, say, in thermal equilibrium in a box, but doing so in such a clever way that there are always exactly five particles in either half of the box--never four or six. That is, we can choose a set of initial conditions (or can we?) such that a very large number of states the system "might" be in are simply not dynamically accessible from those initial conditions. States that are not dynamically accessible have no business in a dynamical description of the system. And if all possible states are dynamically accessible from arbitrary initial conditions, then I want to pursue that fact in a little more detail, and see if it affords a more physically satisfactory way of taking averages.

Classically it is pretty rare to get dynamical systems that don't explore the full volume of phase space allowed by energy conservation--two-body Newtonian orbits are a notable exception, to which fact we owe whatever stability the solar system has--so it seems plausible that over time the system may explore the full range of possible states. But that still suggests we should be taking time averages, not ensemble averages, to maintain the fundamental connection between microscopic and macroscopic physics, because at any given time there is only one state a system can possibly be in, and that state determines its future states as well.
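As an aside, the equivalence of the two forms of the ideal gas law above is easy to check numerically. Here's a minimal sketch; the constants are the SI defined values, and the function names are just mine:

```python
k_b = 1.380649e-23    # Boltzmann's constant, J/K (SI defined value)
A_o = 6.02214076e23   # Avogadro's number, 1/mol (SI defined value)
R = k_b * A_o         # the ideal gas constant is just their product

def pressure_molar(n_moles, T, V):
    """P from P*V = n*R*T, with n in moles."""
    return n_moles * R * T / V

def pressure_particles(n, T, V):
    """P from P*V = k_b*n*T, with n the plain old number of particles."""
    return n * k_b * T / V

# one mole of gas at 273.15 K in 22.4 liters -- both forms give about 1 atm
T, V = 273.15, 0.0224  # K, m^3
p1 = pressure_molar(1.0, T, V)
p2 = pressure_particles(A_o, T, V)
```

The two functions agree to machine precision, of course--the point is just that nothing is gained by routing the calculation through moles and R rather than particle counts and k_b.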
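The time-average idea can be tried directly on the simplest possible case. The sketch below (my own toy code, with units chosen so k_b = m = 1) puts non-interacting particles in a 1-D box, accumulates the impulse they deliver to the right-hand wall over time, and returns the time-averaged force, which the 1-D analogue of P*V = k_b*n*T predicts should be n*k_b*T/L:

```python
import math
import random

def time_averaged_wall_force(n=500, kT=1.0, m=1.0, L=1.0,
                             dt=2e-3, steps=10000, seed=1):
    """Time-average the impulse delivered to the right wall of a 1-D box
    of non-interacting particles; the ideal gas law predicts n*kT/L."""
    rng = random.Random(seed)
    sigma = math.sqrt(kT / m)  # Maxwell-Boltzmann velocity spread
    x = [rng.random() * L for _ in range(n)]
    v = [rng.gauss(0.0, sigma) for _ in range(n)]
    impulse = 0.0
    for _ in range(steps):
        for i in range(n):
            x[i] += v[i] * dt
            if x[i] < 0.0:            # elastic bounce off the left wall
                x[i], v[i] = -x[i], -v[i]
            elif x[i] > L:            # elastic bounce off the right wall
                x[i], v[i] = 2.0 * L - x[i], -v[i]
                impulse += 2.0 * m * abs(v[i])  # momentum given to the wall
    return impulse / (steps * dt)     # time-averaged force on the wall
```

The result lands close to n*k_b*T/L, but not exactly on it: the residual scatter comes from the finite sample of velocities actually drawn, which is precisely the gap between the one realization a system is actually in and the ensemble prediction that I'm worrying about above.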