1 Kinetics

1.1 Big numbers and probability

(September 1, 2023)

1.1.1 Big numbers

Take a number like Avogadro's number, N_A = 6 \times 10^{23}. The number of rearrangements of N_A objects, N_A!, is exponentially big, meaning that its logarithm is also big. (Note: log(x) means ln(x) throughout this course! If we ever need the log base 10 we will write log_10(x).) We proved the Stirling approximation

\log(N!) \approx N \log N - N   (1.1)

The Stirling approximation can also be written

\log(N!) \approx N \log(N/e) \quad \text{or} \quad N! \approx (N/e)^N   (1.2)
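As a quick numerical check (not part of the original notes), here is a minimal sketch comparing the exact log(N!) with the Stirling approximation; the relative error shrinks as N grows:

```python
import math

# Compare log(N!) with the Stirling approximation N log N - N.
# "log" means the natural log, as in the notes.
for N in (10, 100, 1000):
    exact = math.lgamma(N + 1)          # log(N!) computed without overflow
    stirling = N * math.log(N) - N
    rel_err = (exact - stirling) / exact
    print(N, round(exact, 2), round(stirling, 2), round(rel_err, 4))
```

For N = 10 the approximation is off by roughly 14%, while by N = 1000 the relative error is below 0.1%, consistent with the dropped terms being subleading in N.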

Given N objects, the number of ways I can choose r_1 objects for group 1, leaving the remaining r_2 objects for group 2 (with r_1 + r_2 = N), is given by the ``binomial'' coefficient. (While we won't need it, the reason it is called the binomial coefficient is that the binomial x + y raised to a power is

(x+y)^N = \underbrace{(x+y)(x+y)\cdots(x+y)}_{N \text{ factors}}   (1.3)
        = \sum_{r_1=0}^{N} C^N_{r_1 r_2} \, x^{r_1} y^{r_2}   (1.4)

In passing to the second line I have to choose r_1 of the N factors in the first line to contribute an x; the remaining r_2 factors contribute a y. Try it out for N=2 and N=3. The multinomial coefficients are similar: expanding (x+y+z)^N leads to a similar expansion involving x^{r_1} y^{r_2} z^{r_3}.)

C^N_{r_1 r_2} = \frac{N!}{r_1! \, r_2!}   (1.5)

You should be able to explain this formula. This generalizes – if I have N objects, and I select r_1 objects into group 1, r_2 objects into group 2, and the remaining r_3 objects into group 3 (with r_1 + r_2 + r_3 = N), the number of ways to do this is given by the "multinomial" coefficient:

C^N_{r_1 r_2 r_3} = \frac{N!}{r_1! \, r_2! \, r_3!}   (1.6)

You should be able to explain this formula.
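The factorial formulas above translate directly into a short computation. A minimal sketch (the helper name `multinomial` is my own):

```python
import math

def multinomial(N, *groups):
    """Number of ways to split N objects into groups of the given sizes."""
    assert sum(groups) == N
    count = math.factorial(N)
    for r in groups:
        count //= math.factorial(r)   # divide out rearrangements within each group
    return count

# Binomial case: choose 2 of 4 objects for group 1 -> 4!/(2! 2!) = 6
print(multinomial(4, 2, 2))        # 6
# Multinomial case: 6 objects into groups of 3, 2, 1 -> 6!/(3! 2! 1!) = 60
print(multinomial(6, 3, 2, 1))     # 60
```

The two-group case reproduces the familiar binomial coefficient, e.g. `multinomial(10, 4, 6)` agrees with `math.comb(10, 4)`.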

1.1.2 Probability

First consider a set of discrete outcomes i = 1, \ldots, N, each with probability 𝒫_i (like a weighted six-sided die). The sum of the probabilities is unity

\sum_i \mathcal{P}_i = 1   (1.7)

Associated with each outcome is a quantity x_i, e.g. x_3 is the money you get for rolling a three. Then the mean of x (the mean money you get by rolling the die) is

\langle x \rangle = \sum_i \mathcal{P}_i x_i   (1.8)

For a given quantity x we define the deviation from the average

\delta x \equiv x - \langle x \rangle   (1.9)

and the average deviation is zero, \langle \delta x \rangle = 0. Then the variance is the mean of the squared deviation

\langle \delta x^2 \rangle = \langle (x - \langle x \rangle)^2 \rangle = \langle x^2 \rangle - \langle x \rangle^2   (1.10)

The standard deviation is

\sigma_x = \sqrt{\langle \delta x^2 \rangle}   (1.11)
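To make the mean, variance, and standard deviation concrete, here is a small sketch for a weighted six-sided die; the weights below are made up for illustration:

```python
# Mean, variance, and standard deviation for a discrete distribution:
# a hypothetical weighted six-sided die with payoffs x_i = i.
probs = [0.1, 0.1, 0.1, 0.2, 0.2, 0.3]   # assumed weights, summing to 1
xs = [1, 2, 3, 4, 5, 6]

mean = sum(p * x for p, x in zip(probs, xs))              # <x>
mean_sq = sum(p * x**2 for p, x in zip(probs, xs))        # <x^2>
variance = mean_sq - mean**2                              # <x^2> - <x>^2
sigma = variance ** 0.5                                   # standard deviation
print(mean, variance, sigma)
```

For these weights the mean is 4.2 and the variance is 2.76; a fair die (all weights 1/6) would give mean 3.5 instead.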

For a continuous variable we need the concept of a probability distribution. The probability, d𝒫, to find a particle with position in a range between x and x+dx, which we denote [x, x+dx], is

d\mathcal{P} = P(x)\, dx,   (1.12)

where the probability density P(x) is

P(x) = \frac{d\mathcal{P}}{dx}.   (1.13)

A very important probability density is the Gaussian or “normal” distribution which you should try to memorize:

P(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \, e^{-x^2/2\sigma^2}   (1.14)

It is also called the bell-shaped curve, and you should be able to sketch it. In class and in homework we showed:

\int_{-\infty}^{\infty} P(x)\, dx = 1   (1.15)

and worked out a number of integrals

\langle x^n \rangle = \int_{-\infty}^{\infty} P(x)\, x^n \, dx = \sigma^n C_n   (1.16)

The numbers are C_0 = 1, C_2 = 1, C_4 = 3, with the odd moments, such as \langle x \rangle, being zero.
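These moments can be checked numerically. A sketch using a simple midpoint-rule integration of the Gaussian density (the value of sigma, the cutoff, and the step count are arbitrary choices for this check):

```python
import math

# Numerically check the Gaussian moments <x^n> = C_n sigma^n.
sigma = 1.5   # arbitrary choice; the ratios below are independent of it

def gaussian(x):
    return math.exp(-x**2 / (2 * sigma**2)) / math.sqrt(2 * math.pi * sigma**2)

def moment(n, steps=200000, cutoff=10):
    """Midpoint-rule integral of P(x) x^n over [-cutoff*sigma, cutoff*sigma]."""
    dx = 2 * cutoff * sigma / steps
    total = 0.0
    for i in range(steps):
        x = -cutoff * sigma + (i + 0.5) * dx
        total += gaussian(x) * x**n * dx
    return total

print(round(moment(0), 4))              # normalization: 1
print(round(moment(1), 4))              # odd moment: 0
print(round(moment(2) / sigma**2, 4))   # C_2 = 1
print(round(moment(4) / sigma**4, 4))   # C_4 = 3
```

The cutoff at ten standard deviations is harmless because the Gaussian tail beyond it is of order e^{-50}.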

1.1.3 Independence and the central limit theorem

Consider a two dimensional probability distribution

d\mathcal{P}_{x,y} = P(x,y)\, dx\, dy   (1.17)

This is the probability of x in [x,x+dx] and y in [y,y+dy].

We say that x and y are independent if P(x,y) = P(x)P(y) factorizes, so that the probability of finding x and y (in interval dx dy) is the probability of x (in interval dx) times the probability of y (in interval dy)

d\mathcal{P}_{x,y} = P(x)\, dx \times P(y)\, dy   (1.18)

The constants can be arranged so that P(x) and P(y) are separately normalized, e.g.

\int P(x)\, dx = 1 \quad \text{and} \quad \int P(y)\, dy = 1   (1.19)

When the distributions are independent

\langle xy \rangle = \langle x \rangle \langle y \rangle   (1.20)
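This factorization of averages is easy to test by Monte Carlo: draw x and y independently and compare the average of the product with the product of the averages. A sketch (the particular distributions below are arbitrary choices):

```python
import random

# Monte Carlo check that <xy> = <x><y> for independent variables.
random.seed(0)
n = 200000
xs = [random.uniform(0, 1) for _ in range(n)]   # <x> = 1/2
ys = [random.gauss(2, 1) for _ in range(n)]     # <y> = 2, drawn independently

mean_x = sum(xs) / n
mean_y = sum(ys) / n
mean_xy = sum(x * y for x, y in zip(xs, ys)) / n
print(round(mean_xy, 3), round(mean_x * mean_y, 3))
```

Both numbers come out close to 1, with the residual difference set by the statistical error, which shrinks like 1/sqrt(n).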

For definiteness, consider a sequence of random steps in position x. Assume x_1, the displacement in step number one, is drawn from the probability distribution P(x). Also assume the second step x_2 is drawn from the same distribution, and that the choice of x_2 is in no way dependent on x_1. Similarly, the third step x_3 is drawn from P(x) and is in no way dependent on x_1 or x_2; and so on for x_4, x_5, x_6, \ldots. Then we want to know the mean, variance, and probability distribution of the sum

Y = x_1 + x_2 + \cdots + x_N   (1.21)

The answers for the mean and variance are

\langle Y \rangle = N \langle x \rangle   (1.22)
\langle \delta Y^2 \rangle = N \langle \delta x^2 \rangle   (1.23)

In general the probability distribution of Y depends on P(x), and not much can be said about P(Y). However, if N is large, N \gg 1, then, remarkably, the probability distribution of Y takes on the universal form of a normal distribution

P(Y) = \frac{1}{\sqrt{2\pi\sigma_Y^2}} \exp\left[ -(Y - \langle Y \rangle)^2 / 2\sigma_Y^2 \right]   (1.24)

with \sigma_Y = \sqrt{\langle \delta Y^2 \rangle}. We did not go over the proof, and it is enough at this level to just accept it as a statement of fact.
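The statements (1.22) and (1.23) can be seen in a simulation. A sketch of the random walk described above, with each step drawn uniformly from [-1, 1] (an arbitrary choice of P(x)):

```python
import random

# Simulate Y = x_1 + ... + x_N for many independent walks and check
# <Y> = N<x> and <dY^2> = N<dx^2>.
random.seed(1)
N = 100          # steps per walk
walks = 20000    # number of independent walks

ys = [sum(random.uniform(-1, 1) for _ in range(N)) for _ in range(walks)]

mean_y = sum(ys) / walks
var_y = sum((y - mean_y) ** 2 for y in ys) / walks

# For a uniform step on [-1, 1]: <x> = 0 and <dx^2> = 1/3, so we expect
# <Y> near 0 and <dY^2> near N/3 = 33.3.
print(round(mean_y, 2), round(var_y, 1))
```

A histogram of the ys would also come out close to the normal distribution (1.24), even though each individual step is drawn from a flat, decidedly non-Gaussian distribution.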