Posts

In my last post, I discussed the geometric mean and how it relates to the more familiar arithmetic mean. I mentioned that the geometric mean is often useful for estimation in physics. Lawrence Weinstein, in his book Guesstimation 2.0, gives a mental algorithm for approximating the geometric mean. Given two numbers in scientific notation \[ a \times 10^x \quad \text{and}\quad b \times 10^y, \] where the coefficients \(a\) and \(b\) are both between 1 and 9.
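
The excerpt cuts off before the algorithm itself, but as a point of reference, here is a minimal Python sketch (the function name is mine, purely for illustration) of the exact quantity being approximated:

```python
import math

def geometric_mean_sci(a, x, b, y):
    """Exact geometric mean of (a * 10**x) and (b * 10**y)."""
    # sqrt(a*b * 10**(x+y)) = sqrt(a*b) * 10**((x+y)/2); when x + y is odd,
    # the half-integer power of 10 contributes a factor of sqrt(10) ~ 3.16.
    return math.sqrt(a * b) * 10 ** ((x + y) / 2)

# Example: the geometric mean of 3e4 and 6e7 is about 1.34e6.
print(geometric_mean_sci(3, 4, 6, 7))
```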

CONTINUE READING

Given two positive numbers \(a\) and \(b\), the most well-known way to average them is to take the sum divided by two. This is often called the average or the mean of \(a\) and \(b\), but to distinguish it from other means it is called the arithmetic mean: \[ \text{arithmetic mean} = \frac{a + b}{2}. \] Another common, and useful, type of mean is the geometric mean. For two positive numbers \(a\) and \(b\), \[ \text{geometric mean} = \sqrt{ab}. \]
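
As a quick illustration (a minimal sketch of my own, not from the post), here is how the two means compare for one pair of numbers; by the AM–GM inequality the geometric mean never exceeds the arithmetic mean:

```python
import math

a, b = 4.0, 9.0
arithmetic = (a + b) / 2      # 6.5
geometric = math.sqrt(a * b)  # 6.0 -- never exceeds the arithmetic mean
print(arithmetic, geometric)
```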

CONTINUE READING

I was recently flipping through a real analysis textbook and came across a notation that I had not seen before. First, the set difference of two sets \(A\) and \(B\) (which I had seen before) was defined as \[ A \setminus B = A \cap B^c. \] Here \(B^c\) is the set complement of \(B\), or all elements not in \(B\). You can equivalently write the set difference as \[ A \setminus B = \{x \, |\, x \in A\, \text{and}\, x \notin B\}. \]
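
For a concrete example (my own, not from the textbook), Python's built-in set difference matches both forms of the definition:

```python
A = {1, 2, 3, 4}
B = {3, 4, 5}

# A \ B, i.e. the elements of A that are not in B
print(A - B)                          # {1, 2}
print({x for x in A if x not in B})   # same set, via the explicit definition
```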

CONTINUE READING

Probability Current in Quantum Mechanics

In nonrelativistic 1D quantum mechanics, consider the probability of finding a particle in the interval \([x_1, x_2]\): \[ P_{x_1,x_2} = \int_{x_1}^{x_2} |\Psi(x,t)|^2\, \text{d}x. \] It is interesting to look at how this probability changes in time. Taking a time derivative, and using the Schrödinger equation \[ i \hbar \partial_t \Psi = \frac{-\hbar^2}{2m} \partial_x^2 \Psi + V \Psi \] to simplify things, we get (assuming a real potential) \[ \frac{d P_{x_1, x_2}}{dt} = \frac{\hbar}{2mi}\int_{x_1}^{x_2} \left[ (\partial_x^2 \Psi^*) \Psi - (\partial_x^2 \Psi) \Psi^* \right] \, \text{d}x. \] Since the integrand is a total derivative, \[ \left[ (\partial_x^2 \Psi^*) \Psi - (\partial_x^2 \Psi) \Psi^* \right] = \partial_x \left[ (\partial_x \Psi^*) \Psi - (\partial_x \Psi) \Psi^*\right], \] we can write this as \[ \frac{d P_{x_1, x_2}}{dt} = J(x_1, t) - J(x_2, t), \] where \[ J(x,t) = \frac{\hbar}{2mi} (\Psi^* \partial_x \Psi - \Psi \partial_x \Psi^*) \] is called the “probability current.”
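
The total-derivative identity used above is easy to verify symbolically. Here is a small SymPy check (my own sketch; \(\Psi\) and \(\Psi^*\) are treated as independent functions of \(x\) at fixed \(t\)):

```python
import sympy as sp

x = sp.symbols('x', real=True)
psi = sp.Function('psi')(x)            # stands in for Psi(x, t) at fixed t
psi_star = sp.Function('psi_star')(x)  # stands in for Psi*(x, t)

integrand = sp.diff(psi_star, x, 2) * psi - sp.diff(psi, x, 2) * psi_star
total_derivative = sp.diff(
    sp.diff(psi_star, x) * psi - sp.diff(psi, x) * psi_star, x
)

# The difference simplifies to zero, confirming the identity.
print(sp.simplify(integrand - total_derivative))  # 0
```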

CONTINUE READING

I once read, though I cannot now recall where, that an appropriate purgatory for theologians would be forcing them to read all of the words ever written on the Filioque.

If that is so, then an appropriate purgatory for physicists would be forcing them to read all of the words ever written on the interpretation of quantum mechanics.

CONTINUE READING

I am teaching undergraduate quantum mechanics for the first time this semester. One thing I have discovered is that it is very easy to make mistakes when talking about quantum mechanics. Not mathematical mistakes (the math is fairly straightforward), but conceptual mistakes in the interpretation of the mathematics. I was therefore pleased to read a recent paper by Blake Stacey entitled “Misreading EPR: Variations on an Incorrect Theme.” The “EPR” in the title stands for Einstein-Podolsky-Rosen (Einstein and his two postdocs at the time), and is used as shorthand for a famous thought-experiment these three published in 1935.

CONTINUE READING

What is the equation of a straight line in the complex plane? There are many different forms, but I want to look at some of the simplest ones. If you know the slope \(m \in \mathbb{R}\) and intercept \(b \in \mathbb{R}\) of the line, you can write an equation in parametric form \[ z = x + i(mx + b), \] where \(x \in \mathbb{R}\) and \(z \in \mathbb{C}\).
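
A quick numerical illustration (my own sketch, not from the post): sample the parametric form and check that every point satisfies \(\operatorname{Im}(z) = m \operatorname{Re}(z) + b\):

```python
import numpy as np

m, b = 2.0, 1.0               # slope and intercept
x = np.linspace(-3, 3, 7)     # the real parameter
z = x + 1j * (m * x + b)      # points on the line in the complex plane

# Each point satisfies Im(z) = m * Re(z) + b.
print(np.allclose(z.imag, m * z.real + b))  # True
```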

CONTINUE READING

Two random variables \(X\) and \(Y\) can be conditionally independent given the value of a third random variable \(Z\), while remaining dependent when \(Z\) is not given. I came across this idea while reading a paper called “The Wisdom of Competitive Crowds” by Lichtendahl, Grushka-Cockayne, and Pfeifer (abstract here). I’m sure it is a familiar idea to those with more of a formal background in statistics than I have, but it was the first time I had seen it.
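
A small simulation makes the idea concrete. The generative model below is my own toy example (not the one from the paper): \(X\) and \(Y\) each equal a shared \(Z\) plus independent noise, so they are correlated marginally but roughly uncorrelated once \(Z\) is pinned down:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

Z = rng.normal(size=n)
X = Z + rng.normal(size=n)   # shares Z with Y, plus independent noise
Y = Z + rng.normal(size=n)

# Marginally, X and Y are correlated through Z...
print(np.corrcoef(X, Y)[0, 1])              # ~0.5

# ...but within a thin slice of Z (approximating "given Z"),
# the correlation essentially vanishes.
near = np.abs(Z - 1.0) < 0.05
print(np.corrcoef(X[near], Y[near])[0, 1])  # ~0
```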

CONTINUE READING

In my last post, I calculated the mean distance to the nearest point to the origin among \(N\) points taken from a uniform distribution on \([-1, 1]\). This turned out to be \(r = 1/(N+1)\), which is close to but greater than the median distance \(m = 1 - (1/2)^{1/N}\). In this post, I want to generalize the calculation of the mean to a \(p\)-dimensional uniform distribution over the unit ball.
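
A Monte Carlo check for the \(p\)-dimensional case (my own sketch, not from the post): a single uniform point in the unit \(p\)-ball lies beyond radius \(r\) with probability \(1 - r^p\), so the mean nearest distance among \(N\) points is \(\int_0^1 (1 - r^p)^N \, dr\), which reduces to \(1/(N+1)\) when \(p = 1\). The simulation should agree with that integral:

```python
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(0)

def mean_nearest_distance(p, N, trials=20_000):
    """Monte Carlo mean distance from the origin to the nearest of
    N points drawn uniformly from the unit p-ball."""
    # Uniform in the p-ball: uniform direction, radius ~ U**(1/p).
    g = rng.normal(size=(trials, N, p))
    directions = g / np.linalg.norm(g, axis=2, keepdims=True)
    radii = rng.uniform(size=(trials, N, 1)) ** (1.0 / p)
    distances = np.linalg.norm(radii * directions, axis=2)
    return distances.min(axis=1).mean()

p, N = 3, 10
integral, _ = quad(lambda r: (1 - r**p) ** N, 0, 1)
print(mean_nearest_distance(p, N), integral)  # the two should agree closely
```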

CONTINUE READING

Consider the uniform distribution on \([-1, 1]\), and take \(N\) points from this distribution. What is the mean distance from the origin to the nearest point? If you take the median instead of the mean, you get the answer outlined in my last post. The mean makes things more challenging. Here is a solution that makes sense to me. I am sure there is a more formal way to go about this, but I was trained as a physicist, so I tend to use “informal” mathematics.
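
Before any derivation, a simulation (my own sketch) is a useful check: the sample mean of the nearest distance should land near \(1/(N+1)\), and the sample median near \(1 - (1/2)^{1/N}\):

```python
import numpy as np

rng = np.random.default_rng(1)
N, trials = 5, 200_000

# For each trial, draw N uniform points on [-1, 1] and record the
# distance from the origin to the nearest one.
nearest = np.abs(rng.uniform(-1, 1, size=(trials, N))).min(axis=1)

print(nearest.mean(), 1 / (N + 1))             # mean  ~ 1/6
print(np.median(nearest), 1 - 0.5 ** (1 / N))  # median ~ 0.129
```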

CONTINUE READING