Soft Physics - PhysicsGirl.com

Transcription

Speed Demon
Sabrina Gonzalez Pasterski
(Dated: Dec 16, 2014)
This note proposes a detector arrangement and measurement corresponding to the subleading soft graviton theorem.^a

I. STARTING ASSUMPTIONS

The metric conditions considered in S.&Z.^1 were:

  m_B = M_i = \text{constant}, \quad C_{zz} = 0 \quad \text{initially};
  m_B = M_f = \text{constant}, \quad C_{zz} \neq 0 \quad \text{finally}.   (I.1)

Here, I will consider a particular scattering configuration where instead C_{zz} = 0 both initially and finally. Moreover, I will restrict myself to situations where the envelope of C_{zz}(u) has a finite u integral at each point on the sphere. Under these conditions:

  \int du\, u\, \partial_u C_{zz} = u\, C_{zz}\big|_{-\infty}^{+\infty} - \int du\, C_{zz} = -\int du\, C_{zz}   (I.2)

where the boundary term can be dropped for quick enough C_{zz}(u) fall-offs, which I will assume.

II. TWO DETECTOR PRIMER

Following S.&Z., I will consider detectors that are at a fixed r = r_0, with \delta z = z' - z describing their angular separation in complex coordinates. If we assume \delta z is small, then:

  L = \frac{2 r_0 |\delta z|}{1 + z\bar{z}}   (II.1)

is their spatial separation using the standard flat metric:

  ds_F^2 = -du^2 - 2\, du\, dr + 2 r^2 \gamma_{z\bar{z}}\, dz\, d\bar{z}.   (II.2)

When, in addition to the flat metric, there is a perturbation:

  ds^2 = ds_F^2 + \frac{2 m_B}{r} du^2 + r\, C_{zz}\, dz^2 + D^z C_{zz}\, du\, dz + c.c.,   (II.3)

where the Bondi mass term is subleading in r_0, the trajectories of light rays traveling between detectors will satisfy:

  2 r_0^2 \gamma_{z\bar{z}}\, \delta z\, \delta\bar{z} + r_0 C_{zz}\, \delta z^2 + D^z C_{zz}\, \Delta_{zz'} u\, \delta z + c.c. - (\Delta_{zz'} u)^2 = 0   (II.4)

going from z \to z', whereas the reverse route will have:

  2 r_0^2 \gamma_{z\bar{z}}\, \delta z\, \delta\bar{z} + r_0 C_{zz}\, \delta z^2 - D^z C_{zz}\, \Delta_{z'z} u\, \delta z + c.c. - (\Delta_{z'z} u)^2 = 0   (II.5)

Subtracting the two equations gives:

  \Delta_{zz'} u - \Delta_{z'z} u = D^z C_{zz}\, \delta z + c.c.   (II.6)

While adding the two equations gives, to leading order in the perturbation:

  (\Delta_{zz'} u)^2 + (\Delta_{z'z} u)^2 = 4 r_0^2 \gamma_{z\bar{z}}\, \delta z\, \delta\bar{z} + 2 r_0 [C_{zz}\, \delta z^2 + c.c.]   (II.7)

Combining these yields:

  \Delta_{zz'} u = \tilde{L} + \tfrac{1}{2}[D^z C_{zz}\, \delta z + c.c.]   (II.8)

  \Delta_{z'z} u = \tilde{L} - \tfrac{1}{2}[D^z C_{zz}\, \delta z + c.c.]   (II.9)

where

  \tilde{L} = L + \frac{r_0}{2L}[C_{zz}\, \delta z^2 + c.c.], \qquad L^2 = 2 r_0^2 \gamma_{z\bar{z}}\, \delta z\, \delta\bar{z}.   (II.10)

III. SUBLEADING MEASUREMENT

Consider N detectors arranged in a regular polygon around z = 0:

  z_n = \epsilon e^{i\phi_n}, \qquad \phi_n = \frac{2\pi n}{N}, \qquad n \in \{0, ..., N-1\}   (III.1)

In the large N limit, one has:

  \delta z = i z\, \delta\phi   (III.2)

The difference between a clockwise versus a counterclockwise circuit for a constant C_{zz} is:

  \lim_{N\to\infty} \sum_{n=0}^{N-1} \{\Delta_{n,n+1} u - \Delta_{n+1,n} u\}
    = \lim_{N\to\infty} \sum_{n=0}^{N-1} D^z C_{zz}(\epsilon e^{i\frac{2\pi n}{N}})\, i \epsilon e^{i\frac{2\pi n}{N}}\, \frac{2\pi}{N} + c.c.
    = \int_0^{2\pi} d\phi\, D^z C_{zz}(z)\, i z + c.c.
    = \oint_\epsilon D^z C_{zz}\, dz + c.c.   (III.3)

This is equivalent to sending the signal chains in opposite directions around the array of detectors and looking at the time difference when the two pulses arrive at the starting point after a single loop.

Note that this is a cumulative effect. The difference in timing is a correction to the net time for a single circuit, which at leading order in r_0 is:

  L_\epsilon = 4\pi r_0 \epsilon.   (III.4)

FIG. 1. Two counter propagating relays of signals are triggered sequentially around the circular array of detectors, with the final de-synchronization recorded.

Consider a C_{zz}(u) which varies slowly enough that summing the accumulated time delay for intervals spaced by \Delta u = 4\pi r_0 \epsilon approximates the u integral:

  \sum_m \oint_\epsilon D^z C_{zz}(u = 4\pi r_0 \epsilon\, m)\, dz + c.c. \approx \frac{1}{4\pi r_0 \epsilon} \int du \oint_\epsilon D^z C_{zz}\, dz + c.c.   (III.5)

Under the assumption that the u integral is finite, the final time delay between two initially synched counter rotating signal chains thus corresponds to this u, z integral. The subleading correction, which does not change the relative time delay of the counter rotating signals, is due to the difference between the loop integrals of L and \tilde{L}.

IV. SOFT FACTOR INTERPRETATION

The net de-synchronization is related to the subleading soft graviton factor. For a stationary metric satisfying the vacuum Einstein equations, it would be the time integral of the curl of the twist, which would be zero. In a scattering process with gravitational radiation, however, one can use the expectation value interpretation to think of \int du\, D_{\bar{z}}^2 C_{zz} in terms of the soft factor. In this case, the real and imaginary parts of the subleading soft factor correspond to the divergence and curl of D^a C_{ab}. Note that the subleading soft graviton theorem contains more content than that used for the superrotation charge. The real part of D_{\bar{z}}^2 C_{zz} is what appears in the superrotation charge, \int du\, D^a D^b C_{ab}, while the imaginary part contributes to i \int du\, [D_{\bar{z}}^2 C_{zz} - D_z^2 C_{\bar{z}\bar{z}}]. This imaginary part is expected to be finite under the boundary conditions^2 D_z^2 C_{\bar{z}\bar{z}} - D_{\bar{z}}^2 C_{zz} = 0 at I^+_+ and I^+_-, assuming quick enough fall-offs in u for this quantity.

ACKNOWLEDGEMENTS

Many thanks to A. Strominger and A. Zhiboedov for useful questions.

^a ISBN: 978-0-9863685-9-2
^1 arXiv:1411.5745v1
^2 arXiv:1406.3312
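As a quick numerical sanity check of the geometry above, the sketch below verifies that the polygon of flat-metric separations (II.1) sums to the leading-order circuit time L_\epsilon = 4\pi r_0 \epsilon of (III.4), and that the discrete loop sum of (III.3) reproduces a contour integral. The values of r0, eps, N and the smooth test profile f(z) = \bar{z} (for which \oint_{|z|=\epsilon} \bar{z}\, dz = 2\pi i \epsilon^2) are arbitrary illustrative choices, standing in for a physical D^z C_{zz}.

```python
import numpy as np

r0, eps, N = 1.0, 0.01, 1000
phi = 2 * np.pi * np.arange(N) / N
z = eps * np.exp(1j * phi)              # detector positions z_n of (III.1)

# leading-order circuit time (III.4): sum the flat-metric separations (II.1)
dz = np.roll(z, -1) - z                 # chords of the closed polygon
L = np.sum(2 * r0 * np.abs(dz) / (1 + np.abs(z) ** 2))
assert abs(L - 4 * np.pi * r0 * eps) / (4 * np.pi * r0 * eps) < 1e-3

# discrete loop sum of (III.3) for the test profile f(z) = conj(z)
loop = np.sum(np.conj(z) * 1j * z * 2 * np.pi / N)
assert abs(loop - 2j * np.pi * eps ** 2) < 1e-12
```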
Massive Soft Factors
Sabrina Gonzalez Pasterski
(Dated: September 13, 2014)
I show that the soft factor corresponds to the measured time integrated radiation for any speed.
I. MASSIVE CHARGES

Start with the Liénard-Wiechert radiation field for a single accelerating charge:

  \vec{E}_{rad}(r, t) = \frac{Q}{4\pi\epsilon_0 r c}\, \frac{\vec{e}_r \times ((\vec{e}_r - \vec{\beta}(t')) \times \dot{\vec{\beta}}(t'))}{(1 - \vec{e}_r \cdot \vec{\beta}(t'))^3}   (I.1)

where \vec{\beta} = \frac{\vec{v}}{c} and t' is the delayed time:

  c t' = c t - r + \vec{e}_r \cdot \vec{s}(t') = u + \vec{e}_r \cdot \vec{s}(t')   (I.2)

for a source at position \vec{s}(t'). In the following, I will take units where c = 1 and suppress the 4\pi\epsilon_0 normalization. From (I.2), one finds:

  \frac{dt}{dt'} = 1 - \vec{e}_r \cdot \vec{\beta}(t').   (I.3)

I will now consider a superposition of (I.1) for a set of massive charges accelerating from zero velocity to \vec{p}_k such that the accelerations are always parallel to the velocities. In this case:

  \vec{\beta}_k(t') = \frac{\vec{p}_k}{E_k} f_k(t')   (I.4)

for some functions f_k(t') that go from 0 to 1 over the time during which the particles accelerate.

An observer sitting at R\vec{e}_r for some fixed, large R will then observe the following time-integrated radiation field:

  \int dt\, \vec{E}_{rad} \cdot \partial_i \hat{x}
    = \sum_k \int (1 - \vec{e}_r \cdot \vec{\beta}_k(t'))\, dt'\, Q_k \left[\frac{\vec{e}_r \times (\vec{e}_r \times \dot{\vec{\beta}}_k(t'))}{(1 - \vec{e}_r \cdot \vec{\beta}_k(t'))^3}\right] \cdot \partial_i \hat{x}
    = \sum_k \int dt'\, Q_k \left[\frac{\vec{e}_r \times (\vec{e}_r \times \frac{\vec{p}_k}{E_k} \dot{f}_k(t'))}{(1 - \vec{e}_r \cdot \frac{\vec{p}_k}{E_k} f_k(t'))^2}\right] \cdot \partial_i \hat{x}
    = \sum_k \int_0^1 df_k\, Q_k \left[\frac{\vec{e}_r \times (\vec{e}_r \times \frac{\vec{p}_k}{E_k})}{(1 - \vec{e}_r \cdot \frac{\vec{p}_k}{E_k} f_k)^2}\right] \cdot \partial_i \hat{x}
    = \sum_k Q_k\, \partial_i \log\left[\frac{p_k \cdot n}{E_k}\right]   (I.5)

where the first equality uses the fact that \vec{\beta} \times \dot{\vec{\beta}} = 0 from (I.4), and (I.2) to change the integration measure from dt to dt'. This cancels a factor of (1 - \vec{e}_r \cdot \vec{\beta}_k(t')) in the denominator, as seen in the second equality. In the third equality, the t' integral is converted to an integral over f_k, which evaluates to the final equality.
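The third equality of (I.5) can be spot-checked numerically. The sketch below uses an arbitrary final velocity \vec{\beta} = \vec{p}/E and a polar-angle parameterization \hat{n}(\theta) chosen purely for illustration: it integrates the f_k integrand against \partial_\theta \hat{n} and compares the result to a finite-difference derivative of \log(1 - \vec{e}_r \cdot \vec{\beta}), which agrees with \partial_\theta \log(p \cdot n / E) up to an angle-independent sign convention.

```python
import numpy as np

def nhat(th):  # observation direction, parameterized by a polar angle (illustrative)
    return np.array([np.sin(th), 0.0, np.cos(th)])

beta = np.array([0.3, 0.5, 0.6])     # hypothetical final p/E, with |beta| < 1
th, h = 0.7, 1e-6
er = nhat(th)
dn = (nhat(th + h) - nhat(th - h)) / (2 * h)   # d(nhat)/d(theta)

# third line of (I.5): integrate over f at a fixed direction (trapezoid rule)
f = np.linspace(0.0, 1.0, 200001)
integrand = np.dot(np.cross(er, np.cross(er, beta)), dn) / (1 - np.dot(er, beta) * f) ** 2
lhs = np.sum((integrand[1:] + integrand[:-1]) / 2) * (f[1] - f[0])

# last line of (I.5): d/dtheta log(1 - e_r . beta)
g = lambda t: np.log(1 - np.dot(nhat(t), beta))
rhs = (g(th + h) - g(th - h)) / (2 * h)
assert abs(lhs - rhs) < 1e-5
```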
II. INTERPRETATION

The last line in (I.5) agrees with the results in "Classical Interpretation of the Weinberg Soft Factor" and "Generalizing the Soft Factor/Classical Connection," in which the two opposite limits of 1) non-relativistic and 2) massless charged particles were considered. Using charge conservation, this was seen as the Weinberg soft factor with the position of the far away observer n replacing the direction of the soft photon momentum:

  \int du\, F_{uz} \propto \sum_k Q_k\, \partial_z \log\left[\frac{p_k \cdot n}{E_k}\right] = \hat{\epsilon}^{+*}_z\, \omega S^{(0)+}.   (II.1)

Note that changing the integration interval for f_i from [0, 1] to [1, 0] is equivalent to having the particle decelerate. This changes the sign of the i-th particle's contribution to (I.5), just as the sign of its contribution to the soft factor would switch.
Here, I have shown that the result discussed in the previous papers for particular velocity limits holds for charged particles of any mass or velocity, when they are forced to accelerate on linear trajectories. In the QFT picture, one treats the interactions as occurring in a small space-time region. In this regime, \vec{s} remains small, while \vec{\beta} need not be. One can then take the limit in which the accelerations are instantaneous. In this case, the soft factor appears as a classical background pulse of radiation emitted from the interaction point, as relevant to Section II of "Subtleties of Zero Modes."
ACKNOWLEDGEMENTS
Many thanks to J. Barandes.
[1] “Classical Interpretation of the Weinberg Soft Factor”
[2] “Generalizing the Soft Factor/Classical Connection”
[3] “Subtleties of Zero Modes”
Subtleties of Zero Modes
Sabrina Gonzalez Pasterski
(Dated: September 11, 2014)
Here, I introduce the two distinct zero mode components of the gauge field expansion and the
subtlety of distinguishing their commutation relations.
I. FOURIER SERIES

As seen in "Generalizing the Soft Factor/Classical Connection," the time integral of F_{uz} gave the boundary term N_z = A^+_z - A^-_z = \partial_z N. While this describes the zero mode behavior of the field strength, there is an additional part of the zero frequency behavior of A_z which is pure gauge, namely a constant-in-u shift C_z = \frac{1}{2}[A^+_z + A^-_z] = \partial_z C. The value of C_z can change under a gauge transformation by an arbitrary function \partial_z \lambda(z, \bar{z}) of the angular variables, but what is important is that under such a u-independent gauge transformation, only C shifts while all other modes are unaffected. Since N appears in the charge generating these large residual gauge transformations,^1 it is natural to look for two independent zero mode components which are conjugate to each other, while the bulk-bulk commutation relations that ordinarily appear in bracket formulations exclude the zero modes.
Consider performing an expansion over a finite interval u \in [-\frac{T}{2}, \frac{T}{2}] of the gauge field along I^+ for a particular point (z, \bar{z}) on the S^2:

  \partial_u A_z = a_0 + \sum_{n=1}^{\infty} \left[a_n \cos\left(\tfrac{2\pi n}{T} u\right) + b_n \sin\left(\tfrac{2\pi n}{T} u\right)\right] = \sum_{n=-\infty}^{\infty} \alpha_n\, e^{i\frac{2\pi n}{T} u}   (I.1)

where \alpha_0 = a_0, \alpha_n = \frac{1}{2}(a_n - i b_n), \alpha_{-n} = \frac{1}{2}(a_n + i b_n) for n > 0. The first line is shown to emphasize the presence of the constant term. Note that the above expansion assumes that the function is periodic on the interval [-\frac{T}{2}, \frac{T}{2}]. (It would be reasonable to consider the radiated electric field starting and ending at zero.) The coefficients are given by:

  \alpha_n = \frac{1}{T} \int_{-T/2}^{T/2} \partial_u A_z\, e^{-i\frac{2\pi n}{T} u}\, du.   (I.2)

Starting with (I.1) and integrating to find the expansion for A_z would give:

  A_z = C_z + \alpha_0 u + \sum_{n \neq 0} \frac{T}{i 2\pi n}\, \alpha_n\, e^{i\frac{2\pi n}{T} u}.   (I.3)

The subtlety of the zero mode, and the likely origin of discrepant factors of \frac{1}{2} in matching residual gauge transformation commutators, comes from the linear term. Using (I.3), we find N_z = \alpha_0 T. Meanwhile, if we performed a Fourier series expansion of (I.3) on the interval (-\frac{T}{2}, \frac{T}{2}), we could absorb the linear term into the \sin(\frac{2\pi n}{T} u) coefficients:

  A'_z = \sum_{n=-\infty}^{\infty} \alpha'_n\, e^{i\frac{2\pi n}{T} u}, \qquad \alpha'_0 = C_z, \qquad \alpha'_n = \frac{iT}{2\pi n}\left[(-1)^n \alpha_0 - \alpha_n\right].   (I.4)

This new expansion goes to C_z at both endpoints, while matching the function on the open interval.

II. FOURIER TRANSFORM

Taking the large T limit naturally leads to the Fourier transform when the mode coefficients are well behaved. Letting \omega = \frac{2\pi n}{T}, with \Delta\omega \to 0 for n \neq 0,

  \alpha_n = \frac{\omega}{2\pi n} \int_{-\infty}^{\infty} \partial_u A_z\, e^{-i\omega u}\, du.   (II.1)

Since \omega = \frac{2\pi n}{T}, when n increments by \Delta n, d\omega = \frac{2\pi}{T}\Delta n, and \sum_{n=1}^{\infty} \frac{\omega}{n}(\cdots) \to \int_{0^+}^{\infty} d\omega\, (\cdots), so that in this limit (I.1) becomes:

  \partial_u A_z = \frac{1}{2\pi} \int_{0^+}^{\infty} d\omega\, e^{i\omega u} \int_{-\infty}^{\infty} \partial_{u'} A_z\, e^{-i\omega u'}\, du' + (\omega \to -\omega).   (II.2)

If we want to write:

  A_z = \int_{-\infty}^{\infty} d\omega\, \alpha(\omega)\, e^{i\omega u},   (II.3)

the mode expansion will include a \delta(\omega) piece:

  \alpha(\omega) = C_z\, \delta(\omega) + ...   (II.4)

not usually included in soft factor expansions since it sits at \omega = 0. For small \omega the Weinberg pole behavior, proportional to N_z, dominates. This N_z appeared as a linear term in (I.3); however, a linearly growing term has an ill-defined Fourier transform. Since \alpha_0 is suppressed by T^{-1} (the time integral rather than time average of F_{uz} gives the physically relevant quantity), the same limiting behavior can be achieved with a sign function. Looking at:

  \int_0^{\infty} \frac{2}{\pi\omega} \sin(\omega u)\, d\omega = \Theta(u)   (II.5)

(where \Theta(u) = \pm 1 denotes the sign function) shows how the \omega^{-1} part of the mode expansion for A_z can pick out the N_z behavior. It is possible to modify the mode expansion with a sign function rather than a linear term by constructing:

  \hat{A}_z = A_z - \left[\frac{N_z}{2}\Theta(u) + C_z\right]   (II.6)

which goes to zero on the boundaries. Note that the way in which N_z is split out of \hat{A}_z can affect the O(\omega^{-1}) behavior of what are defined as the new bulk modes. The downside of this choice is that the same limiting behavior occurs for translating \Theta by finite u. However, the classical result for the massless case described in the previous paper showed that \partial_u A_z was composed of N_z^j\, \delta(u - u_j) terms, where the u_j corresponded to the timing of wavefronts of massless charged particles approaching I^+ contributing to the full N_z.

When accelerations are assumed to take place over a long time frame, so that the radiated power is minimized while prescribed changes in velocities are still achieved, the Fourier series + linear expansion (I.3) has the appeal of describing a continuously radiating background, being the zero mode of the radiated field F_{uz}.

On the other hand, in the context of a single scattering process, where the interaction is centered at the spacetime origin so that the massless matter wavefront is centered at u = 0, an augmented mode expansion similar to (II.6) can be thought of as subtracting the classical radiation solution if all accelerations were forced to occur instantaneously, allowing the higher frequency modes to capture the fluctuations about this configuration in a way such that \hat{A}_z decays to zero at both limits. From a QFT point of view, where the scattering occurs in a localized region, small compared to where the products would be detected, this second approach has a natural context.

^1 arXiv:1407.3789
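The coefficient structure of (I.4) rests on the Fourier series of the linear term: \frac{1}{T}\int_{-T/2}^{T/2} u\, e^{-i\frac{2\pi n}{T}u}\, du = \frac{iT(-1)^n}{2\pi n}. A short numerical sketch (the interval length T and mode number n are arbitrary choices):

```python
import numpy as np

T, n = 2.0, 3                       # arbitrary interval length and mode number
u = np.linspace(-T / 2, T / 2, 200001)
integrand = u * np.exp(-2j * np.pi * n * u / T)
# trapezoid rule for (1/T) * integral of u exp(-i 2 pi n u / T)
c_n = np.sum((integrand[1:] + integrand[:-1]) / 2) * (u[1] - u[0]) / T
expected = 1j * T * (-1) ** n / (2 * np.pi * n)
assert abs(c_n - expected) < 1e-8
```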
Generalizing the Soft Factor/Classical Connection
Sabrina Gonzalez Pasterski
(Dated: September 10, 2014)
Here, I generalize the connection between soft factors and classical solutions described in “Classical
Interpretation of the Weinberg Soft Factor,” showing that the same relation also holds when the
scattered charged particles are massless. Namely: the polarization vector times the soft factor for
a photon emitted in the x̂ direction during a scattering process is proportional to the time integral
of the classical radiated electric field for the same process, as measured by an observer sitting along
that direction at a large distance from the scattering process. While this is the opposite velocity
limit, compared to the previous paper, the underlying connection is the same: the duality between
position space and momentum space for the photon field near null infinity. The low frequency limit
picks out the large distance, classical behavior, which in turn aligns the position and momentum
space directions.
I. SOFT FACTOR AS EXPECTATION VALUE

Again, I start with the mode expansion for F_{uz} = \partial_u A_z from "Low's Subleading Soft Theorem as a Symmetry of QED":

  F_{uz} = \frac{e\hat{\epsilon}^+_{\bar{z}}}{8\pi^2} \int_0^{\infty} d\omega\, \omega\left[a^{out}_+(\omega\hat{x})\, e^{-i\omega u} + a^{out}_-(\omega\hat{x})^{\dagger}\, e^{i\omega u}\right],   (I.1)

where the integral over u is given by:

  \int du\, F_{uz} = \frac{e\hat{\epsilon}^+_{\bar{z}}}{8\pi} \lim_{\omega\to 0^+} \omega\left[a^{out}_+(\omega\hat{x}) + a^{out}_-(\omega\hat{x})^{\dagger}\right].   (I.2)
Here, I have made explicit the fact that these operators correspond to outgoing photons, since I am measuring the radiation in a far-field region a long time after the scattering process that generated it occurred.

Let's draw some intuition from quantum mechanics. If you have an operator O and a state |\psi\rangle, the expectation value of the operator in this quantum state is:

  \frac{\langle\psi|O|\psi\rangle}{\langle\psi|\psi\rangle}   (I.3)

In some sense, it is then natural to say that if I have a scattering process going from some |in\rangle state to some |out\rangle state, I can think of:

  \frac{\langle out| : O S : |in\rangle}{\langle out|S|in\rangle}   (I.4)

like an expectation value of the operator for that process. Here, the denominator is the matrix element describing the transition amplitude between the |in\rangle and |out\rangle states. As opposed to the expectation value given a fixed state in (I.3), both the numerator and denominator in (I.4) are transition amplitudes. Time ordering the operator O with the scattering matrix S is used to explicitly distinguish operators which modify the incoming and outgoing states.

Now let O = \int du\, F_{uz}(r \to \infty, \hat{x}), where I have made the remaining spatial dependence of (I.2) explicit. This corresponds to the long-distance radiated electric field measured at some point labeled by the direction \hat{x} on the S^2 at infinity.

As in (I.4), I would like to interpret the soft part that factorizes from the matrix element:

  \frac{e\hat{\epsilon}^+_{\bar{z}}}{8\pi} \lim_{\omega\to 0^+} \omega \langle out|\left[a^{out}_+(\omega\hat{x}) + a^{out}_-(\omega\hat{x})^{\dagger}\right] S|in\rangle = \frac{e\hat{\epsilon}^+_{\bar{z}}}{8\pi} \lim_{\omega\to 0^+} \omega S^{(0)+} \langle out|S|in\rangle   (I.5)

as the classical expectation value of the radiated electric field integrated over time.

From a QFT point of view, the zero frequency limit has extracted the Weinberg pole in the soft factor for the matrix element describing the scattering process of |in\rangle to |out + 1 soft photon @ \hat{x}\rangle. Note here, that this \hat{x} is the direction of the soft photon in momentum space. The key to the connection between the classical measurement and the QFT soft factor is that the massless photon localizes in the large r limit to the same point on the position space sphere as its direction in momentum space. The multiplication by \omega not only picks out just the Weinberg pole in the soft factorization but, by leaving just the \hat{x} dependence, also allows me to use the (I.4) notion of an expectation value for the operator (I.2) to arrive at the position-space interpretation of the soft factor as a classical measurement of the time integral of the radiated electric field at large r, made by an observer sitting at \hat{x} on a far away sphere.

In the particular gauge choice:

  A_r = 0, \qquad A_u = O(r^{-1})   (I.6)

only the fields A_z and A_{\bar{z}} remain in the large r limit, so that F_{uz} = \partial_u A_z and (I.2) takes the form A^+_z - A^-_z, the difference of the gauge field at the future and past u limits of I^+, future null infinity.^1 By going to this gauge, I can express an observable field strength in terms of boundary values of the gauge field.

^1 arXiv:1407.3789
II. MASSLESS SCATTERING

In "Classical Interpretation of the Weinberg Soft Factor," I showed that in the limit of non-relativistic scattering of charged particles, this connection between soft factors and classical measurements held, using results from classical electromagnetism. The essential reason why the interpretation worked is that the classical observable I was interested in depended only on the charges and momenta of the incoming and outgoing particles, which are the same variables used to define the |in\rangle and |out\rangle states. Here, I use the gauge field solution for a set of massless charges emerging from the spacetime origin^2 and show that the classical value is again the soft factor.

The LSET gauge choice is equivalent to A_r = 0 in our null coordinates (t = u + r). After using conservation of charge, the radial dependence drops out and:

  A_\mu(u, r, \hat{x}) = \left(\sum_j Q_j \log\left[\frac{p_j \cdot n}{E_j}\right] \theta(u),\, 0,\, 0,\, 0\right)   (II.1)

where n is a radial null vector parameterized by the direction \hat{x}. Using a (u, z, \bar{z}) dependent gauge transformation, I can convert this expression into the further constrained gauge choice (I.6). The result is that A_u = A_r = 0 while:

  A_i = \partial_i \left[\sum_j Q_j \log\left[\frac{p_j \cdot n}{E_j}\right]\right] \theta(u)   (II.2)

for i \in \{z, \bar{z}\}. This solution has the nice property that it is an exact 1-form on the S^2, where the u dependence implies that the value jumps when the wavefront of the massless particles passes the observer's position. Taking the large r limit does not affect the numerical form of the expression since all of the particles are moving on the same light shell; however, this limit has the nice interpretation of allowing me to superpose massless scattering processes starting at different points, where the finite shifts in origin give subleading effects.

Using (II.2), the classical value of the operator (I.2) becomes:

  A^+_z - A^-_z = \partial_z \left[\sum_j Q_j \log\left[\frac{p_j \cdot n}{E_j}\right]\right] = \sum_j Q_j\, \frac{p_j \cdot \partial_z n}{p_j \cdot n}   (II.3)
At the same time, the soft factor gives:

  \hat{\epsilon}^+_{\bar{z}} \lim_{\omega\to 0^+} \omega S^{(0)+}
    = \sum_j Q_j\, \hat{\epsilon}^+_{\bar{z}}\, \frac{p_j \cdot \epsilon^+}{p_j \cdot \tilde{n}}
    = \sum_j Q_j\, \hat{\epsilon}^{+*}_z\, \frac{p_j \cdot \epsilon^+}{p_j \cdot \tilde{n}}
    = \partial_z \tilde{n} \cdot \sum_{j,\alpha} Q_j\, \epsilon^{\alpha*}\, \frac{p_j \cdot \epsilon^{\alpha}}{p_j \cdot \tilde{n}}
    = \sum_j Q_j\, \frac{p_j \cdot \partial_z \tilde{n}}{p_j \cdot \tilde{n}}   (II.4)

where \tilde{n} = \frac{q}{\omega}. Here, I have dropped pre-factors corresponding to different normalization conventions for the gauge fields. The third line uses the fact that \partial_z \tilde{n}(\hat{x}) \cdot \epsilon^{-*}(\hat{x}) = 0, so that the completeness relation for polarization vectors and charge conservation, \sum_j Q_j = 0, can be used to arrive at the fourth line.
The final results of (II.3) and (II.4) are the same when we set \tilde{n}, corresponding to the direction of the soft photon, equal to n, corresponding to the direction at which the classical field is measured. What is interesting is that, while from the QFT interpretation the soft factor naturally gets exponentiated to allow multiple soft insertions, there is less of a motivation to do so for the time integrated quantity in the classical interpretation, except for computing correlation functions, for example. Since it can be interpreted as a closed 1-form on the S^2, A^+ - A^- = d\phi for some scalar \phi, correlation functions of the scalar \phi, rather than A, at different points are more natural, and would be the analog of multiple soft emissions in different directions.
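The equality between the two forms in (II.3) can be checked numerically. The sketch below assumes a mostly-plus signature with p \cdot n written as E - \vec{p} \cdot \vec{e}_r (an overall sign convention that drops out of the z-derivative), together with hypothetical charges summing to zero and random unit-mass momenta; the sphere parameterization follows "Classical Interpretation of the Weinberg Soft Factor."

```python
import numpy as np

rng = np.random.default_rng(0)
Q = np.array([1.0, -0.7, -0.3])               # hypothetical charges, summing to zero
p3 = rng.normal(size=(3, 3))                  # hypothetical spatial momenta
E = np.sqrt(1.0 + np.sum(p3 ** 2, axis=1))    # unit masses assumed

def er(z):   # unit direction on the sphere labeled by (z, zbar)
    zb = np.conj(z)
    return np.array([z + zb, 1j * (zb - z), 1 - z * zb]).real / (1 + z * zb).real

def F(z):    # sum_j Q_j log(p_j . n / E_j), writing p.n as E - p_vec . e_r
    n = er(z)
    return np.sum(Q * np.log((E - p3 @ n) / E))

def dz(f, z, h=1e-6):   # d/dz = (d/dx - i d/dy)/2 by central differences
    return ((f(z + h) - f(z - h)) - 1j * (f(z + 1j * h) - f(z - 1j * h))) / (4 * h)

z0 = 0.3 + 0.2j
lhs = dz(F, z0)                                        # middle form of (II.3)
dn = np.array([dz(lambda w, i=i: er(w)[i], z0) for i in range(3)])
rhs = np.sum(Q * (-(p3 @ dn)) / (E - p3 @ er(z0)))     # last form of (II.3)
assert abs(lhs - rhs) < 1e-6
```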
ACKNOWLEDGEMENTS
Many thanks to A. Sajjad, A. Strominger, and D.
Kapec.
2 arXiv:1401.7667
Classical Interpretation of the Weinberg Soft Factor
Sabrina Gonzalez Pasterski
(Dated: July 31, 2014)
I show how the radiation emitted during the scattering of non-relativistic charged particles corresponds to the O(\omega^{-1}) soft factor in QED. Namely, if we think of the photon momentum in the soft factor as labeling a direction at which a far-field observer sits, the QED matrix element pre-factor corresponds to the time integral of the radiated electric field measured by that observer when a set of non-relativistic charged particles scatter and accelerate.

I. CLASSICAL SCATTERING

Consider the mode expansion of F_{uz} = \partial_u A_z from "Low's Subleading Soft Theorem as a Symmetry of QED":

  F_{uz} = \frac{e\hat{\epsilon}^+_{\bar{z}}}{8\pi^2} \int_0^{\infty} d\omega\, \omega\left[a_+(\omega\hat{x})\, e^{-i\omega u} + a_-(\omega\hat{x})^{\dagger}\, e^{i\omega u}\right].   (I.1)

Its integral over u is given by:

  \int du\, F_{uz} = \frac{e\hat{\epsilon}^+_{\bar{z}}}{8\pi} \lim_{\omega\to 0^+} \omega\left[a_+(\omega\hat{x}) + a_-(\omega\hat{x})^{\dagger}\right]   (I.2)

so that a soft insertion picks out \omega\hat{\epsilon}^+_{\bar{z}} times the Weinberg soft factor.

Semi-classically, we can think of the mode expansion of F_{uz} as the Fourier transform for the corresponding electric field component. The Weinberg soft theorem thus corresponds to the time integral of the radiated electric field measured at any far-field point labelled by (z, \bar{z}). Such a non-zero time-integrated value would be expected for a charged particle that accelerates.

Let's look at some equations from classical electrodynamics:

  \vec{E}_{rad} = \vec{e}_r \times \left(\vec{e}_r \times \frac{\partial \vec{A}_{rad}}{\partial t}\right)   (I.3)

where \frac{\partial \vec{A}_{rad}}{\partial t} becomes the u derivative of the gauge field component tangent to the two sphere at the far-field point, just as F_{uz} is the u derivative of A_z. For a non-relativistic accelerating particle:

  \vec{E}_{rad} = \frac{Q}{4\pi\epsilon_0 r c^2}\, \vec{e}_r \times (\vec{e}_r \times \vec{a})   (I.4)

so that the time integral of \vec{E}_{rad} is proportional to the change in velocity of the particle. For instance, in the non-relativistic regime where the same particles come in and out, but with different velocities:

  \int dt\, \vec{E}_{rad} = \sum_{in,out} \frac{Q_k}{4\pi\epsilon_0 r c^2}\, \vec{e}_r \times (\vec{e}_r \times \vec{v}_k).   (I.5)

II. CONNECTION TO QED SOFT FACTOR

For a far-field point labeled by (z, \bar{z}), we have:

  \vec{e}_r = \left(\frac{z + \bar{z}}{1 + z\bar{z}},\, \frac{i(\bar{z} - z)}{1 + z\bar{z}},\, \frac{1 - z\bar{z}}{1 + z\bar{z}}\right)   (II.1)

while a particle traveling with four momentum:

  p_k = |p_k|\left(\sqrt{1 + \frac{m_k^2}{|p_k|^2}},\, \frac{z_k + \bar{z}_k}{1 + z_k\bar{z}_k},\, \frac{i(\bar{z}_k - z_k)}{1 + z_k\bar{z}_k},\, \frac{1 - z_k\bar{z}_k}{1 + z_k\bar{z}_k}\right)   (II.2)

has, at leading order in the non-relativistic limit:

  \vec{v}_k = \frac{|p_k|}{m_k}\left(\frac{z_k + \bar{z}_k}{1 + z_k\bar{z}_k},\, \frac{i(\bar{z}_k - z_k)}{1 + z_k\bar{z}_k},\, \frac{1 - z_k\bar{z}_k}{1 + z_k\bar{z}_k}\right).   (II.3)

We then find that

  \sum_{in,out} \left\{\frac{Q_k}{r}\, \vec{e}_r \times (\vec{e}_r \times \vec{v}_k)\right\} \cdot \partial_z \vec{x} = \sum_{in,out} \frac{2 Q_k |p_k|}{m_k r}\, \frac{(\bar{z}_k - \bar{z})(1 + z_k\bar{z})}{(1 + z\bar{z})^2 (1 + z_k\bar{z}_k)}   (II.4)

where \vec{x} = r\vec{e}_r.

Meanwhile, in the low-particle-momentum limit

  \omega\hat{\epsilon}^+_{\bar{z}} \sum_{in,out} Q_k\, \frac{p_k \cdot \epsilon^+}{p_k \cdot q} = \sum_{in,out} \frac{2 Q_k |p_k|}{m_k r}\, \frac{(\bar{z}_k - \bar{z})(1 + z_k\bar{z})}{(1 + z\bar{z})^2 (1 + z_k\bar{z}_k)}   (II.5)

for photon momentum and polarization four vectors given by:

  q = \omega\left(1,\, \frac{z + \bar{z}}{1 + z\bar{z}},\, \frac{i(\bar{z} - z)}{1 + z\bar{z}},\, \frac{1 - z\bar{z}}{1 + z\bar{z}}\right)
  \epsilon^+ = \tfrac{1}{\sqrt{2}}(\bar{z},\, 1,\, -i,\, -\bar{z}).   (II.6)

Similarly,

  \sum_{in,out} \left\{\frac{Q_k}{r}\, \vec{e}_r \times (\vec{e}_r \times \vec{v}_k)\right\} \cdot \partial_{\bar{z}} \vec{x} = \sum_{in,out} Q_k\, \frac{p_k \cdot \epsilon^-}{p_k \cdot q}\, \omega\hat{\epsilon}^+_{\bar{z}}.   (II.7)

where \epsilon^-_\mu = \epsilon^{+*}_\mu in Minkowski coordinates.

We thus see that the Weinberg soft factor appearing in the mode expansion of F_{uz} in the \omega \to 0 limit corresponds to the total time integral of the electric field radiating towards (z, \bar{z}) coming from the acceleration of massive charged particles when their velocities change in a scattering process.
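The parameterizations in (II.1) and (II.6) can be checked for internal consistency: q should be null, \epsilon^+ transverse to q and unit-normalized, and \vec{e}_r a unit vector. A sketch assuming the mostly-plus signature (the values of z and \omega are arbitrary):

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])     # mostly-plus signature (assumed convention)
z = 0.4 + 0.7j
zb = np.conj(z)
w = 1.3

q = w * np.array([1, (z + zb) / (1 + z * zb), 1j * (zb - z) / (1 + z * zb),
                  (1 - z * zb) / (1 + z * zb)])
eps = np.array([zb, 1, -1j, -zb]) / np.sqrt(2)

dot = lambda a, b: a @ eta @ b
assert abs(dot(q, q)) < 1e-12                    # q is null
assert abs(dot(eps, q)) < 1e-12                  # polarization transverse to q
assert abs(dot(eps, np.conj(eps)) - 1) < 1e-12   # unit normalization
er = (q[1:] / w).real
assert abs(er @ er - 1) < 1e-12                  # e_r is a unit vector
```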
ACKNOWLEDGEMENTS
Many thanks to J. Barandes and A. Strominger.
Gaussian Measures and the QM Oscillator
Sabrina Gonzalez Pasterski
(Dated: April 20, 2014)
In this paper, I show how probability densities associated with a Gaussian field can be expressed in terms of the Boltzmann heat kernel. The N ≤ 2 calculations are based off of the work of Arthur Jaffe, while the proof of his postulate for general N is original.
In "Fields with a Gaussian Measure," I found:

  \rho_t(x) \equiv \int \prod_{i=1}^{N} \delta(\phi(t_i) - x_i)\, d\mu_c = \frac{1}{\sqrt{(2\pi)^n \det C}}\, e^{-\frac{1}{2} x^\top C^{-1} x}   (1)

where C_{ij} = C(t_i - t_j) = \frac{1}{2m} e^{-m|t_i - t_j|}. Now, consider the operator:

  H_0 = \frac{1}{2}\left(-\frac{d^2}{dx^2} + m^2 x^2 - m\right)   (2)

which can be thought of as the Hamiltonian for the simple harmonic oscillator with position coordinate scaled to have unit mass, and frequency \omega = m. The spectrum is m\mathbb{Z}_+ and the ground state is given by:

  \Omega_0(x) = \left(\frac{m}{\pi}\right)^{1/4} e^{-\frac{m x^2}{2}}   (3)

For t > 0, the Boltzmann integral kernel gives the evolution:

  e^{-t H_0} f(x) = \int_{-\infty}^{\infty} B_t(x, x')\, f(x')\, dx'   (4)

For N = 1, \rho_t(x) = \Omega_0(x)^2. When N > 1, it is convenient to define \mathcal{C} = 2mC so that \mathcal{C}_{ij} = e^{-m|t_i - t_j|}. Then:

  \rho_t(x) = \left(\frac{m}{\pi}\right)^{N/2} \frac{e^{-m\, x^\top \mathcal{C}^{-1} x}}{\sqrt{\det \mathcal{C}}}   (5)

Here, I will consider t_1 < ... < t_N. For N = 2, explicitly inverting

  \mathcal{C}_2 \equiv \begin{pmatrix} 1 & e^{-m(t_2 - t_1)} \\ e^{-m(t_2 - t_1)} & 1 \end{pmatrix}   (6)

gives an expression for \rho in terms of B:

  \rho_{t_1, t_2}(x_1, x_2) = \Omega_0(x_1)\, B_{t_2 - t_1}(x_1, x_2)\, \Omega_0(x_2)   (7)

I can now find an expression for general N using induction. Writing \mathcal{C}_N in blocks:

  \mathcal{C}_N \equiv \begin{pmatrix} \mathcal{C}_{N-1} & v \\ v^\top & 1 \end{pmatrix}   (8)

where v^\top = (e^{-m(t_N - t_1)}, ..., e^{-m(t_N - t_{N-1})}), leads to an expression for the inverse:

  \mathcal{C}_N^{-1} = \begin{pmatrix} \mathcal{C}_{N-1}^{-1} + \frac{(\mathcal{C}_{N-1}^{-1} v)(\mathcal{C}_{N-1}^{-1} v)^\top}{\mu} & -\frac{\mathcal{C}_{N-1}^{-1} v}{\mu} \\ -\frac{(\mathcal{C}_{N-1}^{-1} v)^\top}{\mu} & \frac{1}{\mu} \end{pmatrix}   (9)

where \mu = 1 - v^\top(\mathcal{C}_{N-1}^{-1} v). Rather than inverting \mathcal{C}_{N-1}, my expression for \rho_N in terms of \rho_{N-1} will only need the product (\mathcal{C}_{N-1}^{-1} v). Because the inverse exists, it is equivalent to finding \xi such that v = \mathcal{C}_{N-1}\, \xi. Since the last column of \mathcal{C}_{N-1} is (e^{-m(t_{N-1} - t_1)}, ..., 1)^\top, I find that:

  (\mathcal{C}_{N-1}^{-1} v)_j = e^{-m(t_N - t_{N-1})}\, \delta_{j, N-1}   (10)

which gives:

  \mu = 1 - e^{-2m(t_N - t_{N-1})}
  x_N^\top \mathcal{C}_N^{-1} x_N = x_{N-1}^\top \mathcal{C}_{N-1}^{-1} x_{N-1} + \frac{1}{\mu}\left[x_N^2 - 2 x_N x_{N-1}\, e^{-m(t_N - t_{N-1})} + e^{-2m(t_N - t_{N-1})}\, x_{N-1}^2\right]   (11)

In terms of \rho_{N-1}, one thus finds:

  \rho_N = \left(\frac{m}{\pi}\right)^{N/2} \frac{e^{-m\, x_N^\top \mathcal{C}_N^{-1} x_N}}{\sqrt{\det \mathcal{C}_N}}
       = \rho_{N-1} \sqrt{\frac{m}{\pi\mu}}\, e^{-\frac{m}{\mu}\left(x_N - e^{-m(t_N - t_{N-1})} x_{N-1}\right)^2}
       = \rho_{N-1}\, \rho_{t_N, t_{N-1}}(x_N, x_{N-1})\, \Omega_0(x_{N-1})^{-2}
       = \rho_{N-1}\, \Omega_0(x_{N-1})^{-1}\, B_{t_N - t_{N-1}}(x_{N-1}, x_N)\, \Omega_0(x_N)   (12)

using \det\mathcal{C}_N = \mu \det\mathcal{C}_{N-1}. The expressions for N = 1 and N = 2 are both consistent with the following expression for general N:

  \rho_N = \Omega_0(x_1)\, B_{t_2 - t_1}(x_1, x_2)\, B_{t_3 - t_2}(x_2, x_3) \cdots B_{t_N - t_{N-1}}(x_{N-1}, x_N)\, \Omega_0(x_N)   (13)

where t_1 < ... < t_N.
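The inductive result (13) can be verified numerically at N = 3, with the density taken from (5) and the heat kernel B defined through (7); the times, arguments, and mass below are arbitrary choices:

```python
import numpy as np

m = 0.8
t = np.array([0.0, 0.7, 1.8])
x = np.array([0.3, -0.5, 1.1])

# scriptC_ij = exp(-m|t_i - t_j|) and the N = 3 density from (5)
Cc = np.exp(-m * np.abs(t[:, None] - t[None, :]))
rho3 = (m / np.pi) ** 1.5 * np.exp(-m * x @ np.linalg.inv(Cc) @ x) / np.sqrt(np.linalg.det(Cc))

Om = lambda y: (m / np.pi) ** 0.25 * np.exp(-m * y ** 2 / 2)   # ground state (3)

def rho2(dt, x1, x2):  # two-point density from (5) with N = 2
    C2 = np.array([[1, np.exp(-m * dt)], [np.exp(-m * dt), 1]])
    xx = np.array([x1, x2])
    return (m / np.pi) * np.exp(-m * xx @ np.linalg.inv(C2) @ xx) / np.sqrt(np.linalg.det(C2))

B = lambda dt, x1, x2: rho2(dt, x1, x2) / (Om(x1) * Om(x2))    # heat kernel via (7)

chain = Om(x[0]) * B(t[1] - t[0], x[0], x[1]) * B(t[2] - t[1], x[1], x[2]) * Om(x[2])
assert abs(rho3 - chain) / rho3 < 1e-10
```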
From Gaussian Measures to SDE
Sabrina Gonzalez Pasterski
(Dated: March 31, 2014)
In this paper, I show how a field with gaussian measure can give the Schwinger-Dyson Equations
of QFT in Wick rotated time.
In my paper on "Fields with a Gaussian Measure," I found that:

  \int \prod_{i=1}^{n} \delta(\phi(t_i) - x_i)\, d\mu_c = \frac{1}{\sqrt{(2\pi)^n \det C}}\, e^{-\frac{1}{2} x^\top C^{-1} x}   (1)

where C_{ij} = C(t_i - t_j) = \frac{1}{2m} e^{-m|t_i - t_j|}. The function C(t_i - t_j) solves:

  ([-\nabla^2 + m^2]^{-1} f)(t) = \int C(t - t')\, f(t')\, dt'   (2)

Plugging in f(t') = \delta(t') gives:

  C(t) = ([-\nabla^2 + m^2]^{-1} \delta)(t)   (3)

So that the matrix elements can be expressed as: C_{ij} = C(t_i - t_j) = ([-\nabla^2 + m^2]^{-1}\delta)(t_i - t_j). This means that the entries are the value of the impulse response of the operator.

In a discrete time basis, the delta function becomes the identity operator. In this basis, x^\top C^{-1} x is transformed so that x_i = x(t_i). Then:

  x^\top C^{-1} x = x^\top(t_i)\, [-\nabla^2 + m^2]_{ij}\, x(t_j)   (4)

In the continuum time limit n \to \infty, this becomes:

  x^\top C^{-1} x \propto \int dt\, \{x(t)[-\nabla^2 + m^2] x(t)\}   (5)

So that the original expression becomes:

  \int \delta(\phi(t) - x(t))\, d\mu_c \propto \sqrt{\det[-\nabla^2 + m^2]}\, e^{-\frac{1}{2}\int dt\, \{x(t)[-\nabla^2 + m^2] x(t)\}}   (6)

This expression agrees with the conventional definition of the path integral normalization if we set the constant of proportionality to 1 and think of the delta function in \phi as a product over delta functions at each time:

  1 = \int Dx\, \sqrt{\det[-\nabla^2 + m^2]}\, e^{-\frac{1}{2}\int dt\, \{x(t)[-\nabla^2 + m^2] x(t)\}}
    = \int Dx \int \delta(\phi(t) - x(t))\, d\mu_c
    = \int \left[\int Dx\, \delta(\phi(t) - x(t))\right] d\mu_c
    = \int d\mu_c = 1   (7)

I can now write the Euclidean action for a free field x(t) of mass m in terms of the Gaussian field \phi(t):

  S = -\ln\left[\frac{\int \delta(\phi(t) - x(t))\, d\mu_c}{\sqrt{\det[-\nabla^2 + m^2]}}\right]   (8)

This expression allows one to show how adding an interaction term \int dt\, L_{int}[x(t)] amounts to rescaling the gaussian measure:

  S + \int dt' L_{int}[x(t')] = -\ln\left[e^{-\int dt' L_{int}[x(t')]}\, \frac{\int \delta(\phi(t) - x(t))\, d\mu_c}{\sqrt{\det[-\nabla^2 + m^2]}}\right]
    = -\ln\left[\frac{\int \delta(\phi(t) - x(t))\, e^{-\int dt' L_{int}[\phi(t')]}\, d\mu_c}{\sqrt{\det[-\nabla^2 + m^2]}}\right]   (9)

Here, x(t) appears as a field from QFT. I will show that the expectation value of x(t)x(t') evaluated in terms of the gaussian measure field \phi obeys the Schwinger-Dyson equations of QFT for the time ordered product: \langle 0|T\{x(t)x(t')\}|0\rangle.

  \langle x(t) x(t') \rangle = \frac{\int Dx\, \{x(t)x(t')\}\, e^{-\frac{1}{2}\int dt\, \{x(t)[-\nabla^2 + m^2] x(t)\} - \int dt'' L_{int}[x(t'')]}}{\int Dx\, e^{-\frac{1}{2}\int dt\, \{x(t)[-\nabla^2 + m^2] x(t)\} - \int dt'' L_{int}[x(t'')]}}
    = \frac{\int \phi(t)\phi(t')\, e^{-\int dt'' L_{int}[\phi(t'')]}\, d\mu_c}{\int e^{-\int dt'' L_{int}[\phi(t'')]}\, d\mu_c}   (10)

Because the fields have a gaussian measure, the integral will factor into a product over all possible permutations of two point functions. The exponential of the interaction term can be written as a sum over products of L_{int}:

  e^{-\int dt'' L_{int}} = \sum_{m=0}^{\infty} \frac{(-1)^m}{m!} \int dt_1 ... dt_m\, L_{int}[\phi(t_1)] \cdots L_{int}[\phi(t_m)]   (11)

The sum over all possible two point correlation functions can be factored into a sum over the product of \phi(t) with one other field, times the expectation value of the remaining fields. As a first step in this decomposition:

  \left\langle \phi(t)\phi(t')\, e^{-\int dt'' L_{int}[\phi(t'')]} \right\rangle_c = \langle \phi(t)\phi(t') \rangle_c \left\langle e^{-\int dt'' L_{int}[\phi(t'')]} \right\rangle_c + ...   (12)

In the remaining terms, \phi(t) is contracted with a \phi(t_i) from the expansion of the L_{int} term in Equation (11). When acted on with (-\nabla^2 + m^2)_t:

  (-\nabla^2 + m^2)_t\, \langle \phi(t)\phi(t_i) \rangle_c = \delta(t - t_i)   (13)

Since the t_i is integrated over, this amounts to removing one factor of \phi from each term in the L_{int} expansion and replacing its argument with t. This is equivalent to having a factor of:

  -\frac{\delta}{\delta\phi(t)}\, e^{-\int dt'' L_{int}[\phi(t'')]} = \frac{\delta L_{int}}{\delta\phi(t)}\, e^{-\int dt'' L_{int}[\phi(t'')]}   (14)

in the remaining term multiplying \phi(t'). As a result:

  (-\nabla^2 + m^2)_t\, \langle x(t) x(t') \rangle = (-\nabla^2 + m^2)_t\, \frac{\left\langle \phi(t)\phi(t')\, e^{-\int dt'' L_{int}[\phi(t'')]} \right\rangle_c}{\left\langle e^{-\int dt'' L_{int}[\phi(t'')]} \right\rangle_c}
    = \delta(t - t') - \frac{\left\langle \frac{\delta L_{int}}{\delta\phi(t)}\, \phi(t')\, e^{-\int dt'' L_{int}[\phi(t'')]} \right\rangle_c}{\left\langle e^{-\int dt'' L_{int}[\phi(t'')]} \right\rangle_c}
    = \delta(t - t') - \left\langle \frac{\delta L_{int}}{\delta x(t)}\, x(t') \right\rangle   (15)

This is the Schwinger-Dyson Equation in Wick rotated time. Note that if L is replaced by L/\hbar, and (-\nabla^2 + m^2) C(t - t') \to \hbar\, \delta(t - t'), the contact term associated with "quantum corrections" will vanish for \hbar \to 0, while the L term will get multiplied by \hbar/\hbar = 1 and will remain, giving the "classical result." In these calculations, the difference between classical and quantum field theory results for this one-dimensional example were shown to arise from having a field that is a gaussian random variable in time (with width set by \hbar), rather than being differentiable. In this context, the significance of \hbar is as the limiting variance of a free field in the vacuum (when interaction terms become irrelevant). The independence of this variance on the mass is interesting, but can often be absorbed into the normalization of the fields. The gaussian nature of the time correlations would seem to relate to the ubiquity of gaussians following the central limit theorem.
Soft Theorems and Symmetry
Sabrina Gonzalez Pasterski
(Dated: March 31, 2014)
I examine the connection between symmetries and soft factors for the case of graviton scattering.
In QFT, soft theorems describe the e↵ect of adding an
additional low momentum particle to an existing process
and observing the change in the S matrix as this new
particle’s momentum is taken to zero. In the particular
case where this particle is added to an external line, the
change in the S matrix amounts to adding an interaction
vertex factor and an extra factor of the propagator for
the line to which the soft particle is attached.
This can be more clearly seen by considering the momentum space correlation function for the interaction.
As the external particles for a given process go on shell:
2
3
n
Y
i
5 S(p1 , ...pn )
G(p1 , ...pn ) / 4
(1)
2 + m2
p
i✏
j
j
j=1
so that there are poles corresponding to each incoming
and outgoing particle. When a massless soft particle of
momentum q is also emitted, the momentum space correlation function becomes:
$$G(q,p_1,\dots,p_n) \propto \frac{i}{q^2-i\epsilon}\Big[\prod_{j=1}^{n}\frac{i}{p_j^2+m_j^2-i\epsilon}\Big]\, \tilde S(q,p_1,\dots,p_n) \qquad (2)$$
The goal is to relate S(p1 , ...pn ) to S̃(q, p1 , ...pn ).
Consider the case where a massless particle of momentum q is attached to an outgoing particle with momentum $p_1$ and mass $m_1$. A Feynman diagram approach shows that the difference between the two matrix elements S and $\tilde S$ is an overall factor of the propagator of a particle with momentum $p_1 + q$ and the vertex factor associated with the $\phi$ interaction, which I will denote $V(\phi J)$:

$$\tilde S(q,p_1,\dots,p_n) \approx \frac{i}{(p_1+q)^2+m_1^2-i\epsilon}\,V(\phi J)\,S(p_1,\dots,p_n) = \frac{i}{2p_1\cdot q}\,V(\phi J)\,S(p_1,\dots,p_n) \qquad (3)$$
Two important considerations allowed me to write the new matrix element in this form: 1. Since q is small, I assumed that changing $p_1$ to $p_1 + q$ did not affect the rest of the diagram. (Sometimes, derivations will change the momentum of the new external leg instead. Either case requires the approximation that the particle is nearly on shell both before and after the emission.) 2. The fact that q is massless and on-shell resulted in the dot-product form of the denominator, since the $p_1^2$ cancelled the $m_1^2$.
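The collapse of the denominator in Equation 3 can be spot-checked numerically. A minimal sketch, assuming a mostly-plus signature and made-up momentum components, verifying that $(p+q)^2 + m^2 = 2p\cdot q$ when p is on shell and q is massless:

```python
import math

# Mostly-plus Minkowski dot product (an assumption consistent with p^2 = -m^2
# cancelling the +m^2 in the propagator denominator).
def dot(p, q):
    return -p[0]*q[0] + p[1]*q[1] + p[2]*q[2] + p[3]*q[3]

m = 1.5
# On-shell massive momentum p (dot(p, p) = -m^2); components are arbitrary.
px, py, pz = 0.3, -0.7, 1.2
p = (math.sqrt(m*m + px*px + py*py + pz*pz), px, py, pz)

# Soft massless momentum q (dot(q, q) = 0); components are arbitrary.
qx, qy, qz = 0.01, 0.02, -0.005
q = (math.sqrt(qx*qx + qy*qy + qz*qz), qx, qy, qz)

pq = tuple(a + b for a, b in zip(p, q))
lhs = dot(pq, pq) + m*m       # (p + q)^2 + m^2
rhs = 2.0 * dot(p, q)         # 2 p.q
```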
What remains is to calculate the vertex factor due to the $\phi$ particle interacting with the J current. As $q \to 0$, only terms of up to $O(q)$ in $V(\phi J)$ will survive. This amounts to considering interaction terms in the Lagrangian that have only 0 or 1 derivatives of $\phi$.
Gauge invariance restricts the form of the interaction terms allowed. Consider electromagnetism as an example. If the photon field $A_\mu$ couples to matter via $A_\mu J^\mu$, then sending $A_\mu \to \nabla_\mu\lambda$ gives:

$$A_\mu J^\mu \to (\nabla_\mu\lambda)J^\mu \to -\lambda\,\nabla_\mu J^\mu \qquad (4)$$

where the second line is after integrating by parts within the Lagrangian. Since this must be zero for any function $\lambda$, one concludes $\nabla_\mu J^\mu = 0$.
For the case where $\phi$ represents a soft graviton, similar gauge invariance allows us to predict the form of the interaction vertices in the Lagrangian. Here the dynamical field is $h_{\mu\nu}$ where $g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}$. Interaction terms in the Lagrangian involving $h_{\mu\nu}$ must be completely contracted to preserve Lorentz invariance.
First, consider a term with no derivatives of $h_{\mu\nu}$: $L^0_{int} = h_{\mu\nu}S^{\mu\nu}$. Gauge invariance requires that sending $h_{\mu\nu} \to \nabla_\mu\xi_\nu + \nabla_\nu\xi_\mu$ gives zero:

$$0 = (\nabla_\mu\xi_\nu + \nabla_\nu\xi_\mu)S^{\mu\nu} = -\xi_\nu(\nabla_\mu S^{\mu\nu}) - \xi_\mu(\nabla_\nu S^{\mu\nu}) = -2\xi_\nu\nabla_\mu S^{(\mu\nu)} \qquad (5)$$

from this one concludes $\nabla_\mu S^{(\mu\nu)} = 0$. Since only the symmetric part of $S_{\mu\nu}$ remains after contracting with $h_{\mu\nu}$, we find that $h_{\mu\nu}$ couples to a conserved rank-2 tensor. A natural candidate is $L^0_{int} \propto h_{\mu\nu}T_M^{\mu\nu}$, where $T_M^{\mu\nu}$ is the matter stress-energy tensor.
Now consider terms with a single derivative of $h_{\mu\nu}$. For a transversely polarized graviton field, a $\partial_\nu h_{\mu\nu}$ term within $L^1_{int}$ would result in a factor $q_\nu\epsilon^{\mu\nu} = 0$ within the vertex factor associated to this interaction. We can thus hypothesize an interaction term of the form $L^1_{int} = \partial_\lambda h_{\mu\nu}S^{\lambda\mu\nu}$. Gauge invariance implies:

$$0 = \partial_\lambda(\nabla_\mu\xi_\nu + \nabla_\nu\xi_\mu)S^{\lambda\mu\nu} = -(\nabla_\mu\xi_\nu + \nabla_\nu\xi_\mu)\partial_\lambda S^{\lambda\mu\nu} \approx \xi_\nu(\partial_\mu\partial_\lambda S^{\lambda\mu\nu}) + \xi_\mu(\partial_\nu\partial_\lambda S^{\lambda\mu\nu}) = 2\xi_\nu\,\partial_\mu\partial_\lambda S^{\lambda(\mu\nu)} \qquad (6)$$
where I have replaced covariant derivatives with partial derivatives in my weak gravity approximation to avoid ambiguities in the product rule for the covariant derivative acting on a non-tensor object. The conserved angular momentum tensor has the desired structure: $\partial_\mu M^{\mu\nu\lambda} = 0$, where $M^{\mu\nu\lambda} = x^\nu T_M^{\mu\lambda} - x^\lambda T_M^{\mu\nu}$. Note also that the antisymmetry of this tensor in $\nu \leftrightarrow \lambda$ would give:

$$\nabla_\lambda\nabla_\mu S^{\lambda\mu\nu} = \partial_\lambda\nabla_\mu S^{\lambda\mu\nu} + \Gamma^{\nu}_{\lambda\sigma}\nabla_\mu S^{\lambda\mu\sigma} + \Gamma^{\lambda}_{\lambda\sigma}\nabla_\mu S^{\sigma\mu\nu} = \partial_\lambda\nabla_\mu S^{\lambda\mu\nu} \qquad (7)$$

at linear order in $h_{\mu\nu}$ if I choose a traceless gauge, since the second term is zero by the antisymmetry of S, while $\Gamma^\lambda_{\lambda\sigma} = \frac12 g^{\rho\tau}\partial_\sigma g_{\rho\tau} = \partial_\sigma h^\rho_{\ \rho} + O(h^2)$, which is zero if $h^\rho_{\ \rho} = 0$. From Equation 6, it is thus natural to hypothesize an interaction term: $L^1_{int} \propto \partial_\lambda h_{\mu\nu}M^{\mu\nu\lambda}$.

I will now show that these Lagrangian interaction terms give expected vertex factors for a massless scalar field $\phi$. The stress energy tensor for a massless scalar field is:

$$T_M^{\mu\nu} \propto (g^{\mu\alpha}g^{\nu\beta} + g^{\mu\beta}g^{\nu\alpha} - g^{\mu\nu}g^{\alpha\beta})\,\partial_\alpha\phi\,\partial_\beta\phi \qquad (8)$$

so that within the contractions for $L^0_{int}$ and $L^1_{int}$, I can take $T_M^{\mu\nu} \propto \partial^\mu\phi\,\partial^\nu\phi$. This gives:

$$L^0_{int} \propto h_{\mu\nu}\,\partial^\mu\phi\,\partial^\nu\phi \qquad (9)$$

and

$$L^1_{int} \propto \partial_\lambda h_{\mu\nu}\,(x^\nu\partial^\mu\phi\,\partial^\lambda\phi - x^\lambda\partial^\mu\phi\,\partial^\nu\phi) = \partial_\lambda h_{\mu\nu}\,(x^\nu\partial^\lambda - x^\lambda\partial^\nu)\phi\,\partial^\mu\phi \qquad (10)$$
Partial derivatives will pull down factors of the corresponding field's momentum, while $h_{\mu\nu}$ will go to the polarization tensor $\epsilon_{\mu\nu}$ in the vertex factor:

$$V^0(hJ) \propto \epsilon_{\mu\nu}p^\mu p^\nu \qquad V^1(hJ) \propto q_\lambda\,\epsilon_{\mu\nu}S^{\nu\lambda}p^\mu \qquad (11)$$

so that the soft factors corresponding to the addition of a soft graviton to an external line are:

$$X_q^0 \propto \frac{\epsilon_{\mu\nu}p^\mu p^\nu}{p\cdot q} \qquad X_q^1 \propto \frac{q_\lambda\,\epsilon_{\mu\nu}S^{\nu\lambda}p^\mu}{p\cdot q} \qquad (12)$$
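The different soft behaviors of the two factors in Equation 12 can be illustrated numerically: under $q \to \lambda q$ the leading factor scales like $1/\lambda$, while the subleading factor, whose numerator is linear in q, is unchanged. A sketch with arbitrary made-up tensors, taking $\epsilon_{\mu\nu} = \epsilon_\mu\epsilon_\nu$ for a single polarization vector:

```python
import math

eta = (-1.0, 1.0, 1.0, 1.0)   # mostly-plus metric, used to lower indices

def mdot(a, b):
    return sum(eta[i]*a[i]*b[i] for i in range(4))

# All numerical inputs below are made up for illustration.
p   = (2.0, 0.3, -0.4, 1.1)      # hard momentum
eps = (0.0, 1.0, 0.5, -0.2)      # polarization vector; eps_{mu nu} = eps_mu eps_nu
S = [[0.0]*4 for _ in range(4)]  # an arbitrary antisymmetric angular-momentum tensor
S[0][1], S[1][0] = 0.7, -0.7
S[2][3], S[3][2] = -0.4, 0.4

def X0(q):
    # leading soft factor: eps_{mu nu} p^mu p^nu / (p.q)
    return mdot(eps, p)**2 / mdot(p, q)

def X1(q):
    # subleading soft factor: q_lambda eps_{mu nu} S^{nu lambda} p^mu / (p.q)
    s = sum(eta[nu]*eps[nu] * S[nu][lam] * eta[lam]*q[lam]
            for nu in range(4) for lam in range(4))
    return mdot(eps, p) * s / mdot(p, q)

q = (1.0, 0.3, -0.2, 0.5)
scale = 1e-3
q_soft = tuple(scale*c for c in q)
```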
Fields with a Gaussian Measure
Sabrina Gonzalez Pasterski
(Dated: March 17, 2014)
In this paper, I calculate $\int\prod_{i=1}^{n}\delta(\phi(t_i)-x_i)\,d\mu_c$. This corresponds to specifying the value of a field defined by a gaussian measure $d\mu_c$ to take the values $x_i$ at a set of n distinct times $t_i$.
In order to calculate $\int\prod_{i=1}^{n}\delta(\phi(t_i)-x_i)\,d\mu_c$, I begin by calculating:

$$\int d\mu_c\, e^{i\int dv\,\phi(v)g(v)} = \int d\mu_c \sum_{n=0}^{\infty}\frac{i^n}{n!}\Big(\int dv\,\phi(v)g(v)\Big)^n. \qquad (1)$$

A generic term in the expansion of the exponential can be rewritten in terms of multiple integration variables $v_i$:

$$\int d\mu_c\Big(\int dv\,\phi(v)g(v)\Big)^n = \int d\mu_c \prod_{i=1}^{n}\int dv_i\, g(v_i)\phi(v_i). \qquad (2)$$

The integral over $d\mu_c$ will be non-zero only when n is even, in which case it takes the form:

$$\int d\mu_c \prod_{i=1}^{n}\phi(v_i) = \sum_{\text{pairings}} C(v_{i_1}-v_{i_2})\cdots C(v_{i_{n-1}}-v_{i_n}) \qquad (3)$$

where $C(v_i - v_j) = \frac{1}{2m}e^{-m|v_i-v_j|}$ and there are $(n-1)!!$ possible pairings. Since each term in the sum involves a product of $\frac{n}{2}$ two-point correlation functions and the $g(v_i)$ are symmetric under exchange of $v_i$:

$$\int d\mu_c \prod_{i=1}^{n}\int dv_i\, g(v_i)\phi(v_i) = (n-1)!!\Big[\int dv_1 dv_2\, g(v_1)g(v_2)C(v_1-v_2)\Big]^{n/2}. \qquad (4)$$

The original expression then simplifies to:

$$\int d\mu_c\, e^{i\int dv\,\phi(v)g(v)} = \sum_{n=0}^{\infty}\frac{i^{2n}}{(2n)!}(2n-1)!!\Big[\int dv_1 dv_2\, g(v_1)g(v_2)C(v_1-v_2)\Big]^{n} = e^{-\frac{1}{2}\int dv_1 dv_2\, g(v_1)g(v_2)C(v_1-v_2)}. \qquad (5)$$

Now, I can write $\int\prod_{i=1}^{n}\delta(\phi(t_i)-x_i)\,d\mu_c$ in terms of the Fourier transform of the delta function:

$$\int\prod_{i=1}^{n}\delta(\phi(t_i)-x_i)\,d\mu_c = \frac{1}{(2\pi)^n}\int d\mu_c\prod_{j=1}^{n}\int dk_j\, e^{ik_j(\phi(t_j)-x_j)} = \frac{1}{(2\pi)^n}\int\prod_{j=1}^{n}dk_j\, e^{-i\sum_j k_j x_j}\int d\mu_c\, e^{i\int dv\,\phi(v)[\sum_j k_j\delta(v-t_j)]}. \qquad (6)$$

Identifying $g(v) = \sum_{j=1}^{n}k_j\delta(v-t_j)$ and using Equation 5 gives:

$$\int\prod_{i=1}^{n}\delta(\phi(t_i)-x_i)\,d\mu_c = \frac{1}{(2\pi)^n}\int\prod_j dk_j\, e^{-i\sum_j k_j x_j}\, e^{-\frac{1}{2}\int dv_1 dv_2\sum_{j,\ell}k_jk_\ell\,\delta(v_1-t_j)\delta(v_2-t_\ell)C(v_1-v_2)} = \frac{1}{(2\pi)^n}\int\prod_j dk_j\, e^{-i\sum_j k_jx_j}\,e^{-\frac{1}{2}\sum_{j,\ell}k_jk_\ell C(t_j-t_\ell)} = \frac{1}{(2\pi)^n}\int d^n k\, e^{-ik^\top x}e^{-\frac{1}{2}k^\top Ck} \qquad (7)$$

where the last line is written in matrix form. Since $C_{ij} = C(t_i-t_j) = \frac{1}{2m}e^{-m|t_i-t_j|}$ is a real symmetric matrix, it can be diagonalized with an orthogonal matrix $P^{-1} = P^\top$ so that $PCP^\top = \Lambda$ and:

$$\int\prod_{i=1}^{n}\delta(\phi(t_i)-x_i)\,d\mu_c = \frac{1}{(2\pi)^n}\int d^nk\, e^{-ik^\top P^\top Px}e^{-\frac{1}{2}k^\top P^\top\Lambda Pk} = \frac{1}{(2\pi)^n}\prod_{j=1}^{n}\int dk_j\, e^{-ik_j(Px)_j}e^{-\frac{1}{2}k_j^2\Lambda_j} = \prod_{j=1}^{n}\frac{1}{\sqrt{2\pi\Lambda_j}}e^{-\frac{(Px)_j^2}{2\Lambda_j}} = \frac{1}{\sqrt{(2\pi)^n\det C}}\,e^{-\frac{1}{2}x^\top C^{-1}x}. \qquad (8)$$

When $n = 1$, $C(0) = \frac{1}{2m}$ and $\int\delta(\phi(t)-x)\,d\mu_c = \sqrt{\frac{m}{\pi}}\,e^{-mx^2} = |\Omega_0(x)|^2$, the norm-squared of the ground state wave function for the quantum simple harmonic oscillator.
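The n = 1 case can be checked by numerically evaluating the Fourier transform in Equation 7 with $C(0) = \frac{1}{2m}$ and comparing against the closed form $\sqrt{m/\pi}\,e^{-mx^2}$. A minimal sketch (the grid parameters are arbitrary):

```python
import math

m = 0.8
C0 = 1.0 / (2.0 * m)          # C(0) = 1/(2m)

def delta_expectation(x, K=40.0, N=200000):
    # (1/2pi) * midpoint Riemann sum of exp(-i k x) exp(-k^2 C0 / 2)
    # over k in [-K, K]; the integrand is even, so only the cosine survives.
    dk = 2.0 * K / N
    total = 0.0
    for j in range(N):
        k = -K + (j + 0.5) * dk
        total += math.cos(k * x) * math.exp(-0.5 * C0 * k * k)
    return total * dk / (2.0 * math.pi)

def closed_form(x):
    return math.sqrt(m / math.pi) * math.exp(-m * x * x)
```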
Gaussian Measures and Commutators
Sabrina Gonzalez Pasterski
(Dated: March 10, 2014)
I show how defining a field with a gaussian measure gives rise to a classical interpretation of the
equal time commutation relations for a field and its derivative.
One aspect that distinguishes time from space is that
while displacements along any spatial axis can be in either the positive or negative direction, our motion in time
is monotonic. It is thus possible to consider a space-time
that is infinite in spatial extent, but only semi-infinite in
temporal extent (i.e. considering functions which are 0
for t < 0). I will work in Wick-rotated time coordinates
for which τ = it and treat τ as a real coordinate.
I.
BACKGROUND FROM A.J.’S 216
Consider the d'Alembertian $\Box = \partial_t^2 - \nabla^2$ (this paper will take units in which $\hbar = c = 1$). In terms of $\tau$, this becomes $-\Box = \partial_\tau^2 + \nabla^2$. One can study the action of $-\Box + m^2$ on $f(\tau,\vec x)$ by considering its Green's function:

$$C(x - x') = \frac{1}{(2\pi)^4}\int_{\mathbb{R}^4} dE\,d\vec k\,\frac{e^{ik\cdot(x-x')}}{k^2+m^2} \qquad (1)$$

where $k = (E, \vec k)$ is a Euclidean momentum. Defining $\mu(\vec k) = \sqrt{\vec k^2 + m^2}$, the E integral becomes:

$$\int\frac{dE}{2\pi}\frac{e^{iE\tau}}{E^2+\mu^2} = \frac{e^{-|\tau|\mu}}{2\mu} \qquad (2)$$

where for this integral to converge, the time t must be treated as imaginary, so that $\tau$ is real.

The result is that $C(x-x') = \frac{1}{2\mu}e^{-|\tau-\tau'|\mu}\,\delta^3(\vec x-\vec x')$ for $\mu = \sqrt{-\nabla^2+m^2}$. $C(x-x')$ can be used to define a gaussian measure:

$$S(\lambda f) = e^{-\frac{\lambda^2}{2}\langle\bar f, Cf\rangle} = \int e^{i\lambda\phi(f)}\,d\mu(\phi) \qquad (3)$$

so that:

$$-\frac{d^2}{d\lambda^2}S(\lambda f)\Big|_{\lambda=0} = \langle\bar f, Cf\rangle = \int\phi(f)^2\,d\mu(\phi). \qquad (4)$$

Requiring that the function f be real and non-zero only for $\tau > 0$ has the advantage of making $\langle\bar f,\theta Cf\rangle \ge 0$, where the operator $\theta$ inverts $\tau \to -\tau$.

The above calculations/definitions are based on A.J.'s notes, with the Wick rotation made explicit here. In the following section, I will use the above gaussian measure to show how the canonical commutation relations of quantum fields can be viewed as arising from possible temporal discontinuities in $\phi$.
II.
COMMUTATION RELATIONS FROM THE
GAUSSIAN MEASURE
Here, I will integrate out the spatial dependence, and take $\mu \to M$, where M is a constant, mass-like term. The equal time commutation relations of a quantum field are:

$$[\phi(t,\vec x), \partial_t\phi(t,\vec x')] = i\delta(\vec x-\vec x'); \qquad [\phi(t,\vec x),\phi(t,\vec x')] = 0. \qquad (5)$$

For a classical field $\phi(\tau)$ defined by the gaussian measure $d\mu(\phi)$, the time correlation is given by $C(\tau-\tau') = \frac{1}{2M}e^{-M|\tau-\tau'|}$, which satisfies $(-\partial_\tau^2 + M^2)C(\tau-\tau') = \delta(\tau-\tau')$ by its definition as a Green's function.

In the context of a random, not necessarily continuous, field $\phi(\tau)$, the idea of a local time derivative should be replaced with the limit definition:

$$\partial_\tau\phi(\tau) \equiv \lim_{\Delta\tau\to 0}\frac{\phi(\tau+\Delta\tau)-\phi(\tau)}{\Delta\tau} \qquad (6)$$

where this limit takes physical meaning when its expectation value with respect to $d\mu(\phi)$ is taken.

If one interprets $\langle\Omega|[\phi,\partial_t\phi]|\Omega\rangle$ as the difference of 1: measuring the time derivative and then the field, and 2: measuring the field and then its time derivative, in the limit at which the field and derivative measurements approach being at the same time, then the necessity of specifying such a time order becomes natural in the context of a derivative that is defined in terms of a limit of two field measurements spaced by $\Delta\tau$. For $\phi(\tau)$, using $\partial_t\phi(\tau) = i\partial_\tau\phi(\tau)$:

$$\langle[\phi,\partial_t\phi]\rangle = i\lim_{\Delta\tau\to0}\int\Big[\phi(\tau+\Delta\tau)\frac{\phi(\tau+\Delta\tau)-\phi(\tau)}{\Delta\tau} - \frac{\phi(\tau+\Delta\tau)-\phi(\tau)}{\Delta\tau}\phi(\tau)\Big]d\mu(\phi) = i\lim_{\Delta\tau\to0}\int\frac{(\phi(\tau+\Delta\tau)-\phi(\tau))^2}{\Delta\tau}d\mu(\phi) = 2i\lim_{\Delta\tau\to0}\frac{C(0)-C(\Delta\tau)}{\Delta\tau} = \frac{i}{M}\lim_{\Delta\tau\to0}\frac{1-e^{-M\Delta\tau}}{\Delta\tau} = i \qquad (7)$$

Since $\delta(\tau-\tau') = i\delta(t-t')$, $C(x-x')$ satisfies the same differential equation as the Feynman propagator:

$$(\Box + m^2)\int\phi(x)\phi(x')\,d\mu = i\delta^4(x-x'). \qquad (8)$$

A classical expectation value which behaves like the Feynman propagator gives a classical interpretation of $[\phi,\partial_t\phi]$. The ordering of $\phi(x)$ and $\partial_t\phi(x)$ matters because of the non-locality of measuring a time averaged derivative of a function that is Hölder continuous with exponent 1/2.
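The limit in Equation 7 is easy to verify numerically: with $C(\tau) = \frac{1}{2M}e^{-M|\tau|}$, the coefficient $2[C(0)-C(\Delta\tau)]/\Delta\tau$ approaches 1 as $\Delta\tau \to 0$. A small sketch (the value of M is arbitrary):

```python
import math

M = 2.5

def C(tau):
    return math.exp(-M * abs(tau)) / (2.0 * M)

def bracket(dtau):
    # 2 [C(0) - C(dtau)] / dtau, the coefficient of i in Equation 7
    return 2.0 * (C(0.0) - C(dtau)) / dtau

vals = [bracket(10.0**(-n)) for n in range(1, 7)]
```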
Covariant Derivatives and the Hamilton-Jacobi Equation
Sabrina Gonzalez Pasterski
(Dated: March 2, 2014)
I define a covariant derivative to simplify how higher order derivatives act on a classical generating
function.
When studying the connection between classical and quantum mechanics, it would be nice to have a differential operator which, when acting repeatedly on some function $e^{i\frac{S(q,P,t)}{\hbar}}$, pulls down powers of the derivatives of the function within the exponent.

Consider the results of "Wavefunctions and the Hamilton-Jacobi Equation." There, I performed a canonical change of variables from $(q_i,p_i)$ to constants $(Q_i,P_i)$:

$$p_i\dot q_i - H(q,p,t) = P_i\dot Q_i - K(Q,P,t) + \frac{dF}{dt} \qquad (1)$$

where $F = S(q,P,t) - P_iQ_i$, and found:

$$H = -\frac{\partial S}{\partial t} \qquad p_i = \frac{\partial S}{\partial q_i} \qquad Q_i = \frac{\partial S}{\partial P_i} \qquad (2)$$

when $K = 0$. At first order, the function $e^{i\frac{S}{\hbar}}$ had the property that ordinary multiplication by the value of H or $p_j$ was equivalent to acting with a differential operator:

$$H\cdot e^{i\frac{S}{\hbar}} = -\frac{\partial S}{\partial t}e^{i\frac{S}{\hbar}} = i\hbar\partial_t e^{i\frac{S}{\hbar}} \qquad p_j\cdot e^{i\frac{S}{\hbar}} = \frac{\partial S}{\partial q_j}e^{i\frac{S}{\hbar}} = \frac{\hbar}{i}\partial_{q_j}e^{i\frac{S}{\hbar}}. \qquad (3)$$

Moreover, this connection between multiplication and a differential operator held at first order for arbitrary superpositions:

$$\chi(q,t) = \int \mathcal{P}(P_i)\,e^{i\frac{S(q,P,t)}{\hbar}}\,dP_i. \qquad (4)$$

In this paper, I will consider the one dimensional case $q = x$ and define a covariant derivative such that acting on $e^{i\frac{S(x,P,t)}{\hbar}}$ a total of n times with the operator $\frac{\hbar}{i}\nabla_x$ is exactly equivalent to multiplying $e^{i\frac{S(x,P,t)}{\hbar}}$ by $p^n = (\frac{\partial S}{\partial x})^n$.

The key is to treat $e^{i\frac{S(x,P,t)}{\hbar}}$ as a scalar, with a non-trivial one-dimensional spatial metric $g_{xx} = (\frac{\partial S}{\partial q})^2$. Then there is a non-zero connection $\Gamma^x_{xx} = \frac12 g^{xx}\partial_x g_{xx} = \frac{S''}{S'}$, where primes denote partial derivatives with respect to x. Acting n times:

$$\nabla^n e^{i\frac{S}{\hbar}} \equiv \nabla_x\nabla_x\cdots\nabla_x e^{i\frac{S}{\hbar}} = \partial_x\big(\nabla^{n-1}e^{i\frac{S}{\hbar}}\big) - (n-1)\Gamma^x_{xx}\nabla^{n-1}e^{i\frac{S}{\hbar}} \qquad (5)$$

It is quick to check for $n = 1$ that $\nabla^1 e^{i\frac{S}{\hbar}} \equiv \nabla_x e^{i\frac{S}{\hbar}} = \partial_x e^{i\frac{S}{\hbar}}$. If it is true that $\nabla^{n-1}e^{i\frac{S}{\hbar}} = (\frac{i}{\hbar}S')^{n-1}e^{i\frac{S}{\hbar}}$, then:

$$\nabla^n e^{i\frac{S}{\hbar}} = \partial_x\big((\tfrac{i}{\hbar}S')^{n-1}e^{i\frac{S}{\hbar}}\big) - (n-1)\Gamma^x_{xx}(\tfrac{i}{\hbar}S')^{n-1}e^{i\frac{S}{\hbar}} = (n-1)(\tfrac{i}{\hbar}S')^{n-2}\tfrac{i}{\hbar}S''e^{i\frac{S}{\hbar}} + (\tfrac{i}{\hbar}S')^{n}e^{i\frac{S}{\hbar}} - (n-1)\tfrac{S''}{S'}(\tfrac{i}{\hbar}S')^{n-1}e^{i\frac{S}{\hbar}} = (\tfrac{i}{\hbar}S')^{n}e^{i\frac{S}{\hbar}} \qquad (6)$$

holds by induction. Acting n times with $\frac{\hbar}{i}\nabla_x$ on $e^{i\frac{S(x,P,t)}{\hbar}}$ is thus exactly equivalent to multiplying $e^{i\frac{S(x,P,t)}{\hbar}}$ by $p^n = (\frac{\partial S}{\partial x})^n$, and I can treat $\hat p^n e^{i\frac{S(x,P,t)}{\hbar}} \equiv (\frac{\hbar}{i})^n\nabla^n e^{i\frac{S(x,P,t)}{\hbar}}$ as a covariant rank-n tensor. I find that if I define a partition function expectation value:

$$\langle O(x,p)\rangle \equiv \frac{\int \mathcal{P}(P)dP\,dq\;O(x,p)\,e^{i\frac{S(x,P,t)}{\hbar}}}{\int \mathcal{P}(P)dP\,dq\;e^{i\frac{S(x,P,t)}{\hbar}}} \qquad (7)$$

then this is equivalent to:

$$\langle O(x,p)\rangle = \frac{\int \mathcal{P}(P)dP\,dx\;:\!\hat O(x,\hat p)\!:\,e^{i\frac{S(x,P,t)}{\hbar}}}{\int \mathcal{P}(P)dP\,dx\;e^{i\frac{S(x,P,t)}{\hbar}}} \qquad (8)$$

where the normal ordered operator is defined such that all of the momentum operators appear on the right. The direct correspondence between $x^np^m = x^n(S')^m$ in O and $x^n\hat p^m = x^n(\frac{\hbar}{i})^m\nabla^m$ in $:\!\hat O(q,\hat p)\!:$ thus follows from the composition property of the covariant derivative.

Summarizing Equation 8 as $\langle O(q,p)\rangle = \langle:\!\hat O(q,\hat p)\!:\rangle$, one finds that for an operator which does not explicitly depend on time:

$$\Big\langle\frac{dO}{dt}\Big\rangle = \langle\{O,H\}\rangle = \langle:\!\widehat{\{O,H\}}\!:\rangle = \frac{i}{\hbar}\langle:\![:\!\hat O\!:,:\!\hat H\!:]\!:\rangle \qquad (9)$$

The last equality comes from considering a generic term in the series expansion of $O(x,p)$:

$$\langle\{x^np^m, x^rp^s\}\rangle = (ns-mr)\langle x^{n+r-1}p^{m+s-1}\rangle = (ns-mr)\langle x^{n+r-1}\hat p^{m+s-1}\rangle \qquad (10)$$

versus

$$\frac{i}{\hbar}\langle:\![x^n\hat p^m, x^r\hat p^s]\!:\rangle = \frac{i}{\hbar}\langle:\!x^n[\hat p^m, x^r]\hat p^s + x^r[x^n,\hat p^s]\hat p^m\!:\rangle = (ns-mr)\langle x^{n+r-1}\hat p^{m+s-1}\rangle \qquad (11)$$

where some care must be taken when specifying what it means to normal order the commutator (e.g. I would want to have $:\![x,\hat p]\!: = i\hbar$ and not $:\![x,\hat p]\!: = :\!x\hat p\!: - :\!\hat px\!: = 0$).

Equation 9 is similar to Ehrenfest's Theorem. There is a natural association between the Poisson Bracket of classical mechanics and the normal ordered commutator of normal ordered operators.
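The induction of Equation 6 can be spot-checked numerically for a sample quadratic S(x), for which S′ and S″ are known in closed form; the recursion of Equation 5 is applied with a central-difference $\partial_x$ and compared against $((i/\hbar)S')^n e^{iS/\hbar}$. All numerical choices below are arbitrary:

```python
import cmath

hbar = 1.0
a, b = 0.7, 1.3                      # arbitrary coefficients for S(x) = a x^2 + b x

def S(x):  return a*x*x + b*x
def Sp(x): return 2*a*x + b          # S'
Spp = 2*a                            # S''
def psi(x): return cmath.exp(1j*S(x)/hbar)

H = 1e-3                             # finite-difference step

def nabla(n, x):
    """Apply the covariant derivative n times to e^{iS/hbar} at x, using
    nabla^n = d/dx(nabla^{n-1}) - (n-1) (S''/S') nabla^{n-1}."""
    if n == 0:
        return psi(x)
    d = (nabla(n-1, x+H) - nabla(n-1, x-H)) / (2*H)
    return d - (n-1) * (Spp/Sp(x)) * nabla(n-1, x)

x0 = 0.9
def claimed(n):
    return (1j*Sp(x0)/hbar)**n * psi(x0)
```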
Expectation Values from the Hamilton-Jacobi Equation
Sabrina Gonzalez Pasterski
(Dated: March 2, 2014)
I use the classical Hamilton-Jacobi Equation to formulate a definition for operator expectation
values consistent with those for quantum operators.
In "Wavefunctions and the Hamilton-Jacobi Equation," I showed that performing a canonical change of variables from $(q_i,p_i)$ to constants $(Q_i,P_i)$:

$$p_i\dot q_i - H(q,p,t) = P_i\dot Q_i - K(Q,P,t) + \frac{dF}{dt} \qquad (1)$$

for $F = S(q,P,t) - P_iQ_i$, gave:

$$H = -\frac{\partial S}{\partial t} \qquad p_i = \frac{\partial S}{\partial q_i} \qquad Q_i = \frac{\partial S}{\partial P_i} \qquad (2)$$

when $K = 0$. Here $\frac{dS}{dt} = L$ and the function $e^{i\frac{S}{\hbar}}$ had the property that ordinary multiplication by the value of H or $p_j$ was equivalent to acting with a differential operator:

$$H\cdot e^{i\frac{S}{\hbar}} = -\frac{\partial S}{\partial t}e^{i\frac{S}{\hbar}} = i\hbar\partial_t e^{i\frac{S}{\hbar}} \qquad p_j\cdot e^{i\frac{S}{\hbar}} = \frac{\partial S}{\partial q_j}e^{i\frac{S}{\hbar}} = \frac{\hbar}{i}\partial_{q_j}e^{i\frac{S}{\hbar}}. \qquad (3)$$

In that paper, I went on to study how classical equations differ by a term of order $\hbar$. Here, I will focus on the one-dimensional case and instead look at a new definition for the expectation value of an operator:

$$\langle\hat O\rangle \equiv \frac{\int \mathcal{P}(P)dP\,dq\,\Big\{e^{-i\frac{S(q,P,t)}{\hbar}}\,\hat O\,e^{i\frac{S(q,P,t)}{\hbar}}\Big\}}{\int dq} \qquad (4)$$

where $\mathcal{P}(P)$ is a normalized, real probability distribution over the classical constant P, S is real, and $\hat O$ is a hermitian operator found by promoting the classical function $O(q,p)$ to $O(q,\frac{\hbar}{i}\partial_q)$.

I. QUADRATIC TERMS

Since S is defined so that $p = \frac{\partial S}{\partial q}$, one finds that:

$$\langle\hat p^2\rangle = \frac{\int \mathcal{P}(P)dP\,dq\;e^{-i\frac{S(q,P,t)}{\hbar}}\big(\tfrac{\hbar}{i}\tfrac{\partial}{\partial q}\big)\big(\tfrac{\hbar}{i}\tfrac{\partial}{\partial q}\big)e^{i\frac{S(q,P,t)}{\hbar}}}{\int dq} = \frac{\int \mathcal{P}(P)dP\,dq\,\big(\tfrac{\partial S(q,P,t)}{\partial q}\big)^2}{\int dq} + \frac{\hbar}{i}\,\frac{\int \mathcal{P}(P)dP\;\tfrac{\partial S(q,P,t)}{\partial q}\big|_{q=-\infty}^{q=+\infty}}{\int dq} \qquad (5)$$

Under certain conditions for S, the boundary term can cancel or vanish. The suppression by $\int dq$ in the denominator, however, can eliminate this term even for finite $\frac{\partial S}{\partial q}$, when the range of q is taken to infinity. The result is that $\langle\hat p^2\rangle = \langle p^2\rangle$. At any time, the operator expectation value for $\hat p^2$, as defined in Equation 4, is equal to the classical spatial average of the momentum squared for particles distributed with probability $\mathcal{P}(P)$ over classical orbits with constant P.

Higher powers of $\hat p$ will give similar boundary terms when integrated by parts, so that $\langle\hat p^n\rangle = \langle p^n\rangle$. Products of $\hat p$ with q include extra terms. For instance:

$$\langle q\hat p - \hat pq\rangle = \frac{\int \mathcal{P}(P)dP\,dq\;e^{-i\frac{S(q,P,t)}{\hbar}}\big(q\tfrac{\hbar}{i}\tfrac{\partial}{\partial q} - \tfrac{\hbar}{i}\tfrac{\partial}{\partial q}q\big)e^{i\frac{S(q,P,t)}{\hbar}}}{\int dq} = -\frac{\hbar}{i}\,\frac{\int \mathcal{P}(P)dP\,dq}{\int dq} = i\hbar \qquad (6)$$

which agrees with the quantum commutation relation $[\hat q,\hat p] = i\hbar$ for the position and momentum operators.

II. HERMITICITY REQUIREMENT

A classical observable which does not depend explicitly on time can be described as a function of position and momentum: $O(q,p) = O(q,\frac{\partial S}{\partial q})$. The exponentials on either side of $\hat O$ in Equation 4 allow one to replace $\frac{\partial S}{\partial q}$ in O with a partial derivative $\frac{\hbar}{i}\frac{\partial}{\partial q}$ acting on either the left or the right.

The differential operator $\hat O$ must be Hermitian to be consistent, i.e. the expectation value of the operator $\frac12(q^n\hat p^m + \hat p^mq^n)$ needs both terms to agree with the classical $\langle q^np^m\rangle$. Consider $\hat O = -\frac{\hbar^2}{2}\big[f(q)\frac{\partial^2}{\partial q^2} + \frac{\partial^2}{\partial q^2}f(q)\big]$:

$$\langle\hat O\rangle = -\frac{\hbar^2}{2}\,\frac{\int \mathcal{P}(P)dP\,dq\;e^{-i\frac{S(q,P,t)}{\hbar}}\big[f(q)\tfrac{\partial^2}{\partial q^2} + \tfrac{\partial^2}{\partial q^2}f(q)\big]e^{i\frac{S(q,P,t)}{\hbar}}}{\int dq} = \frac{1}{2}\,\frac{\int \mathcal{P}(P)dP\,dq\,\Big\{\big[\big(\tfrac{\partial S(q,P,t)}{\partial q}\big)^2f(q) + i\hbar\tfrac{\partial S(q,P,t)}{\partial q}f'(q)\big] + \big[\big(\tfrac{\partial S(q,P,t)}{\partial q}\big)^2f(q) - i\hbar\tfrac{\partial S(q,P,t)}{\partial q}f'(q)\big]\Big\}}{\int dq} = \Big\langle f(q)\Big(\frac{\partial S(q,P,t)}{\partial q}\Big)^2\Big\rangle \qquad (7)$$

where boundary terms have again been dropped, and the symmetry of $\hat O$ led to the cancellation of terms depending on $f'(q)$.

III. POISSON BRACKET TO COMMUTATOR

One often considers the promotion of a poisson bracket (when there are no constraints) to a quantum commutator as a rule of thumb. In the context of the expectation values considered here, one sees that this association comes in two steps: first, the canonical change of variables allowed me to construct the classical function $S(q,P,t)$; second, the expectation value defined in Equation 4 allowed me to promote classical multiplication to a differential operator.

For functions that can be Taylor expanded in q and p:

$$\langle\{q^np^m, q^rp^s\}\rangle = (ns-mr)\langle q^{n+r-1}p^{m+s-1}\rangle = \frac{i}{4\hbar}\langle[q^n\hat p^m + \hat p^mq^n,\; q^r\hat p^s + \hat p^sq^r]\rangle \qquad (8)$$

the expectation value of the Poisson bracket is proportional to the expectation value of the commutator of the associated Hermitian operators.

As a result, I can use the classical equations of motion to compute the time derivative of a classical operator:

$$\frac{d}{dt}O(q,p) = \{O,H\} \qquad (9)$$

and, from Equation 8, conclude that:

$$\frac{d}{dt}\langle\hat O\rangle = \frac{i}{\hbar}\langle[\hat O,\hat H]\rangle \qquad (10)$$

since the time derivatives of the exponential factors cancel. Equation 10 is known as Ehrenfest's Theorem.

IV. COMPLEX PHASE VS. REAL EXPONENTIAL

Up until this point, the meaning of the constant $\hbar$ has been unspecified. Its units are required to be the same as Planck's constant. The definition of the phase factors was designed to make the results consistent with quantum mechanics. The $e^{i\frac{S(q,P,t)}{\hbar}}$ is reminiscent of $e^{-i\frac{Ht}{\hbar}}$. Note that the classical hamiltonian is given by $H = -\frac{\partial S}{\partial t}$.

One could opt to consider only real functions, as opposed to complex ones, by taking $\hbar \to i\epsilon$. This changes $e^{i\frac{S(q,P,t)}{\hbar}}$ to $e^{\frac{S(q,P,t)}{\epsilon}}$, which is reminiscent of Wick rotating the time dimension when computing path integrals. While having $\hbar$ be real makes the expectation value look like it involves a unitary transformation of $\hat O$, imaginary $\hbar$ gives something similar to a statistical mechanics partition function.
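Equation 6 is really the statement that $(q\hat p - \hat pq)$ returns $i\hbar$ times whatever it acts on. A quick numerical sketch, applying a finite-difference $\hat p$ to a sample $e^{iS/\hbar}$ (the choice of S is arbitrary):

```python
import cmath

hbar = 0.7
def S(q):   return 1.2*q*q - 0.4*q          # a sample generating function
def psi(q): return cmath.exp(1j*S(q)/hbar)

def p_hat(f, q, h=1e-5):
    # (hbar/i) d/dq, approximated by a central difference
    return (hbar/1j) * (f(q+h) - f(q-h)) / (2*h)

def commutator(q):
    # (q p_hat - p_hat q) acting on psi, evaluated at the point q
    return q * p_hat(psi, q) - p_hat(lambda y: y*psi(y), q)
```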
Wavefunctions and the Hamilton-Jacobi Equation
Sabrina Gonzalez Pasterski
(Dated: September 8, 2013)
I show how the di↵erential equation governing a distribution over classical trajectories is consistent
with the quantum Schrödinger Equation in the ~ ! 0 limit.
In classical mechanics, a change of variables from $(q_i,p_i)$ to $(Q_i,P_i)$ produces equivalent equations of motion when:

$$p_i\dot q_i - H(q,p,t) = P_i\dot Q_i - K(Q,P,t) + \frac{dF}{dt} \qquad (1)$$

for a new Hamiltonian K. Let $F = S(q,P,t) - P_iQ_i$, then:

$$p_i\dot q_i - H(q,p,t) = -\dot P_iQ_i - K + \frac{\partial S}{\partial t} + \frac{\partial S}{\partial q_i}\dot q_i + \frac{\partial S}{\partial P_i}\dot P_i. \qquad (2)$$

Taking $(q_i,P_i)$ as the independent variables means Equation 2 is satisfied for:

$$H = K - \frac{\partial S}{\partial t} \qquad p_i = \frac{\partial S}{\partial q_i} \qquad Q_i = \frac{\partial S}{\partial P_i}. \qquad (3)$$

If $K = 0$, then $Q_i$ and $P_i$ are constants and $H = -\frac{\partial S}{\partial t}$, known as the Hamilton-Jacobi Equation. Here

$$\frac{dS}{dt} = \frac{\partial S}{\partial t} + \frac{\partial S}{\partial q_i}\dot q_i = p_i\dot q_i - H = L \qquad (4)$$

showing that $S(q,P,t)$ can be thought of as an action.

The only dynamic variables are $q_i$ and t. $P_i$ and $Q_i = \frac{\partial S}{\partial P_i}$ are the 2N constants needed to specify the trajectory of a classical particle. To solve for the trajectory of a particle using the Hamilton-Jacobi Equation, $S(q,P,t)$ is found and then the constants $Q_i = \frac{\partial S}{\partial P_i}$ provide implicit expressions for $q_i(t)$.

Now consider $e^{i\frac{S}{\hbar}}$, where the constant $\hbar$ makes the phase dimensionless:

$$H\cdot e^{i\frac{S}{\hbar}} = -\frac{\partial S}{\partial t}e^{i\frac{S}{\hbar}} = i\hbar\partial_t e^{i\frac{S}{\hbar}} \qquad p_j\cdot e^{i\frac{S}{\hbar}} = \frac{\partial S}{\partial q_j}e^{i\frac{S}{\hbar}} = \frac{\hbar}{i}\partial_{q_j}e^{i\frac{S}{\hbar}}. \qquad (5)$$

Acting twice on $e^{i\frac{S}{\hbar}}$ with $\hat p_j = \frac{\hbar}{i}\partial_{q_j}$ introduces corrections of order $O(\hbar)$ to the value of $p_j^2\cdot e^{i\frac{S}{\hbar}}$:

$$\hat p_j^2\, e^{i\frac{S}{\hbar}} = \Big[\Big(\frac{\partial S}{\partial q_j}\Big)^2 + \frac{\hbar}{i}\frac{\partial^2 S}{\partial q_j^2}\Big]e^{i\frac{S}{\hbar}} = \Big[p_j^2 + \frac{\hbar}{i}\frac{\partial p_j}{\partial q_j}\Big]e^{i\frac{S}{\hbar}} \qquad (6)$$

where the partial derivative of the momentum $p_j$ with respect to the coordinate $q_j$ is non-zero since the transformed momenta $P_i$ are kept constant during differentiation, not the original momenta $p_i$. In the limit:

$$\hbar\Big|\frac{\partial^2 S}{\partial q_j^2}\Big| \ll \Big(\frac{\partial S}{\partial q_j}\Big)^2 \qquad (7)$$

multiplying by $H(q,p,t)$ can be approximated as acting with the operator $\hat H(q,\frac{\hbar}{i}\partial_{q_j},t)$ on $e^{i\frac{S}{\hbar}}$. This gives:

$$i\hbar\partial_t e^{i\frac{S}{\hbar}} \simeq \hat H(q,\tfrac{\hbar}{i}\partial_{q_j},t)\,e^{i\frac{S}{\hbar}} \qquad (8)$$

which is a linear differential equation that holds for any set of $P_i$ in $S(q,P,t)$. If I define a function:

$$\chi(q,t) = \int \mathcal{P}(P_i)\,e^{i\frac{S(q,P,t)}{\hbar}}\,dP_i \qquad (9)$$

where $\mathcal{P}(P_i)$ is a probability distribution over trajectories that pass through the point $(q,t)$ with different velocities, then $\chi(q,t)$ also satisfies Equation 8. That equation has the same form as the Schrödinger Equation for the quantum wave function, substituting $\chi(q,t) \to \psi(q,t)$.

A function $\chi(q,t)$ analogous to the quantum wavefunction $\psi(q,t)$ thus results from taking an array of particles traveling along classical trajectories and weighting the phase $e^{i\frac{S}{\hbar}}$ at each position q and time t with a probability distribution for the constants $P_i$ that distinguish trajectories passing through $(q,t)$ with different velocities (see Figure 1).

FIG. 1. Set of classical free particle trajectories through different (x,t) on a surface of constant P with ẋ > 0.

For the case of the free particle, the limit defined in Equation 7 even holds for finite $\hbar$, since $S = \pm\sqrt{2mP}\,x - Pt$ has a zero second derivative with respect to x. The function $\chi(q,t)$ is thus a superposition of plane waves. The free-particle solution is often used as a basis for motivating the quantum Schrödinger Equation and we see here that classical mechanics gives the same result.

In the free particle example, the constant of motion P is identified with the energy of the particle. While a single classical trajectory in N dimensions can be specified with 2N constants, the expression for S depends on only N constants. This is analogous to the number of independent quantum numbers that can be used to describe spatial wavefunctions: e.g. E in one dimension, $\{E, L_z\}$ in two dimensions, and $\{E, L, L_z\}$ in three dimensions. Since, for one dimension, the energy can be used as the constant P, weighting with $\mathcal{P}(P)$ can be compared to using a Boltzmann factor to weight an ensemble of classical states based on their energy.
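The free-particle generating function quoted above can be checked directly against the Hamilton-Jacobi equation: for $S = \pm\sqrt{2mP}\,x - Pt$, the residual $\frac{\partial S}{\partial t} + \frac{1}{2m}(\frac{\partial S}{\partial x})^2$ vanishes for both signs, and $\frac{\partial^2 S}{\partial x^2} = 0$ exactly, so the limit of Equation 7 is trivially satisfied. A quick sketch (parameter values are arbitrary):

```python
import math

m, P = 1.4, 2.3                       # arbitrary mass and separation constant

def S(x, t, sign=+1):
    return sign*math.sqrt(2*m*P)*x - P*t

def hj_residual(x, t, sign=+1, h=1e-6):
    # dS/dt + (dS/dx)^2 / (2m) should vanish for a solution of the HJ equation
    dSdt = (S(x, t+h, sign) - S(x, t-h, sign)) / (2*h)
    dSdx = (S(x+h, t, sign) - S(x-h, t, sign)) / (2*h)
    return dSdt + dSdx**2/(2*m)

def d2S_dx2(x, t, sign=+1, h=1e-4):
    return (S(x+h, t, sign) + S(x-h, t, sign) - 2*S(x, t, sign)) / (h*h)
```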
Solving the Schrödinger Equation Using a Complex Gauge
Sabrina Gonzalez Pasterski
(Dated: September 8, 2013)
I show a quick route to the n = ` + 1 wave functions and energies for the Hydrogen atom using a
method which is also applied to find the ground state of the simple harmonic oscillator.
In this paper, I modify the procedure normally used to change between gauges for a magnetic field, and employ it as a quick route to finding particular solutions to the Schrödinger Equation for 1/r and quadratic potentials. I will start with the radial Schrödinger Equation for Hydrogen:

$$Hu_\ell(r) = \Big[\frac{p^2}{2m} + \frac{\hbar^2\ell(\ell+1)}{2mr^2} - \frac{e^2}{r}\Big]u_\ell(r). \qquad (1)$$

Here, $u_\ell(r)$ obeys an effective one-dimensional Hamiltonian with the restriction that $u_\ell(0) = 0$. A three-dimensional solution to the full Schrödinger Equation is then:

$$\psi = \frac{u_\ell}{r}Y_{\ell m}. \qquad (2)$$

In the presence of a magnetic field, the Hamiltonian for an electron would be modified by changing $p_i \to p_i + \frac{e}{c}A_i$ where $\vec A$ is the vector potential: $\vec\nabla\times\vec A = \vec B$. Changing $\vec A$ by a gradient $\vec\nabla\lambda$ does not modify $\vec B$ but will add a phase to the wavefunction, such that the new solution is:

$$\psi' = \exp\Big[-\frac{ie\lambda}{\hbar c}\Big]\psi. \qquad (3)$$

For motion along a single direction $\hat x$ with no magnetic field, we can change the $\hat x$ component of the conventional vector potential $\vec A = 0$ by an arbitrary function $\frac{c}{e}f(x)$. The new vector potential $\frac{c}{e}f(x)\hat x$ will still have zero curl and the solution to the Schrödinger Equation:

$$H'\psi' = \Big[\frac{(p+f)^2}{2m} + V\Big]\psi' \qquad (4)$$

will also describe the motion of an electron in a potential V with no magnetic field. Using Equation 3, the solution to the original Schrödinger Equation will be given by:

$$\psi = \exp\Big[\frac{i}{\hbar}\int f\,dx\Big]\psi'. \qquad (5)$$
The radial Schrödinger Equation for Hydrogen is an example where the introduction of a vector potential-like term can actually simplify finding $u_\ell$ for the ground state. Let:

$$\tilde H\tilde u_\ell(r) = \Big[\frac{1}{2m}\Big(p + \frac{k}{r}\Big)^2 + \frac{\hbar^2\ell(\ell+1)}{2mr^2} - \frac{e^2}{r}\Big]\tilde u_\ell(r) \qquad (6)$$

where I use the notation $\tilde H$ since I will not restrict k to being real. Although this makes $\tilde H$ non-hermitian, the equation for $u_\ell$ in terms of $\tilde u_\ell$ is still valid, and it is useful to think of it as a generalized change in gauge.

Expanding $\tilde H$ and arranging p to the right of 1/r in the expansion of the squared term gives:

$$\tilde H = \frac{p^2}{2m} + \frac{k}{mr}p + \frac{k^2 + i\hbar k}{2mr^2} + \frac{\hbar^2\ell(\ell+1)}{2mr^2} - \frac{e^2}{r}. \qquad (7)$$

For $k = \{i\hbar\ell,\ -i\hbar(\ell+1)\}$ the $1/r^2$ terms in Equation 7 cancel. These choices for k would give:

$$u_\ell = \{\tilde u_\ell\cdot r^{-\ell},\ \tilde u_\ell\cdot r^{(\ell+1)}\}. \qquad (8)$$

The restriction $u_\ell(0) = 0$ leads us to choose the second option: $k = -i\hbar(\ell+1)$. This gives a differential equation for the stationary states $\tilde u_\ell$:

$$E\tilde u_\ell = \Big[-\frac{\hbar^2}{2m}\partial_r^2 - \frac{\hbar^2(\ell+1)}{mr}\partial_r - \frac{e^2}{r}\Big]\tilde u_\ell \qquad (9)$$

for which it is seen that an exponential solution $e^{\alpha r}$ can be chosen such that the 1/r terms cancel: $\alpha = -\frac{me^2}{\hbar^2(\ell+1)}$. I have thus found a solution with energy $E = -\frac{\hbar^2\alpha^2}{2m} = -\frac{me^4}{2\hbar^2(\ell+1)^2}$. The corresponding wave functions for $\ell = 0, 1, 2\ldots$ are:

$$\psi = Ar^\ell\exp\Big[-\frac{r}{(\ell+1)a_0}\Big]Y_{\ell m} \qquad (10)$$

for some normalization constant A and Bohr radius $a_0 = \frac{\hbar^2}{me^2}$. These correspond to the $n = \ell+1$ states of the Hydrogen atom.
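The claimed spectrum can be verified numerically in atomic units (ℏ = m = e = 1, so a₀ = 1): $u_\ell(r) = r^{\ell+1}e^{-r/(\ell+1)}$ should satisfy the effective radial Hamiltonian of Equation 1 with $E = -\frac{1}{2(\ell+1)^2}$. A finite-difference spot check:

```python
import math

def u(r, ell):
    # claimed radial solution, atomic units (hbar = m = e = 1, a0 = 1)
    return r**(ell+1) * math.exp(-r/(ell+1))

def H_u(r, ell, h=1e-4):
    # [-(1/2) d^2/dr^2 + ell(ell+1)/(2 r^2) - 1/r] u
    d2 = (u(r+h, ell) + u(r-h, ell) - 2*u(r, ell)) / (h*h)
    return -0.5*d2 + ell*(ell+1)/(2*r*r)*u(r, ell) - u(r, ell)/r

def E(ell):
    return -0.5/(ell+1)**2   # = -m e^4 / (2 hbar^2 (ell+1)^2) in these units
```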
This derivation takes advantage of a special cancellation allowed for a 1/r potential that makes an exponential solution to the resulting differential equation apparent. This technique of introducing an effective gauge can also be applied to a quadratic potential:

$$\tilde H = \frac{1}{2m}(p + im\omega x)^2 + \frac12 m\omega^2x^2 = \frac{p^2}{2m} + i\omega xp + \frac{\hbar\omega}{2} \qquad (11)$$

which has $\tilde\psi_0 = \text{const.}$ as a solution with energy $E = \frac{\hbar\omega}{2}$. Equation 5 gives a solution to the original Hamiltonian:

$$\psi = A\exp\Big[\frac{i}{\hbar}\int im\omega x\,dx\Big] = A\exp\Big[-\frac{m\omega x^2}{2\hbar}\Big] \qquad (12)$$

which is the ground state of the simple harmonic oscillator.
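The result can be verified directly: with ℏ = m = ω = 1, $\psi(x) = e^{-x^2/2}$ should satisfy $-\frac12\psi'' + \frac12 x^2\psi = \frac12\psi$. A finite-difference sketch:

```python
import math

def psi(x):
    # claimed ground state with hbar = m = omega = 1
    return math.exp(-0.5*x*x)

def H_psi(x, h=1e-4):
    # [-(1/2) d^2/dx^2 + (1/2) x^2] psi
    d2 = (psi(x+h) + psi(x-h) - 2*psi(x)) / (h*h)
    return -0.5*d2 + 0.5*x*x*psi(x)
```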
Space and Time
Sabrina Gonzalez Pasterski
(Dated: August 5, 2013)
I arrive at Maxwell’s Equations, gauge invariance, and the force law for charges, starting from a
conserved current.
Electricity and magnetism can be motivated from two equations:

$$\partial_\mu J^\mu = 0 \quad \& \quad v_\mu F^\mu = 0. \qquad (1)$$

The first, $\partial_\mu J^\mu = 0$, states that there exists a conserved current and is a compact expression for the continuity equation:

$$J^\mu = \begin{pmatrix}\rho c\\ \rho v_x\\ \rho v_y\\ \rho v_z\end{pmatrix} \longrightarrow \frac{\partial\rho}{\partial t} + \vec\nabla\cdot\rho\vec v = 0. \qquad (2)$$

The second, $v_\mu F^\mu = 0$, is consistent with the definition of the four-force, $F^\mu$, as the derivative with respect to proper time $\tau$ (see "Conformal Mapping of Displacement Vectors") of the energy-momentum four-vector. Plugging in $p^\mu = mv^\mu$:

$$v_\mu F^\mu = \frac{1}{m}p_\mu\frac{d}{d\tau}p^\mu = \frac{1}{2m}\frac{d}{d\tau}(p_\mu p^\mu) \propto \frac{dm}{d\tau} = 0. \qquad (3)$$
The defining feature of these equations is that they are valid in any inertial reference frame. While the four-vectors within these expressions transform during boosts (as discussed in "Motivating Special Relativity using Linear Algebra"), the contraction of indices gives a Lorentz invariant (see "Dot Products in Special Relativity").

I focus on an approach that employs determinants rather than Levi-Civita tensor notation and extends the concept of finding orthogonal vectors described in "An Alternate Approach for Finding an Orthogonal Basis." An important starting point is that the divergence of a curl is zero. Looking at the expression:

$$\vec\nabla\times\vec B = \begin{vmatrix}\hat x & \hat y & \hat z\\ \partial_x & \partial_y & \partial_z\\ B_x & B_y & B_z\end{vmatrix} \qquad (4)$$

and noting that the dot product with a constant vector, $\vec C\cdot(\vec\nabla\times\vec B)$, is equivalent to replacing the unit vectors in the first row with $\vec C$, gives a helpful mnemonic for remembering that the divergence of the curl is zero. However, the reason why this is true is not because $\vec\nabla$ can be thought of as a vector that is parallel to itself, but rather because partial derivatives commute and the determinant produces antisymmetric combinations of the entries.
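The claim that the divergence of a curl vanishes because partial derivatives commute can be checked numerically on a sample field: the mixed second derivatives appearing in $\vec\nabla\cdot(\vec\nabla\times\vec B)$ are computed with identical symmetric stencils and cancel pairwise. A sketch with a made-up smooth $\vec B$:

```python
import math

def B(x, y, z):
    # an arbitrary smooth vector field (made up for illustration)
    return (math.sin(y*z), x*x*z, math.exp(x)*math.cos(y))

H = 1e-3  # finite-difference step

def curl(x, y, z):
    dBz_dy = (B(x, y+H, z)[2] - B(x, y-H, z)[2]) / (2*H)
    dBy_dz = (B(x, y, z+H)[1] - B(x, y, z-H)[1]) / (2*H)
    dBx_dz = (B(x, y, z+H)[0] - B(x, y, z-H)[0]) / (2*H)
    dBz_dx = (B(x+H, y, z)[2] - B(x-H, y, z)[2]) / (2*H)
    dBy_dx = (B(x+H, y, z)[1] - B(x-H, y, z)[1]) / (2*H)
    dBx_dy = (B(x, y+H, z)[0] - B(x, y-H, z)[0]) / (2*H)
    return (dBz_dy - dBy_dz, dBx_dz - dBz_dx, dBy_dx - dBx_dy)

def div_curl(x, y, z):
    dx = (curl(x+H, y, z)[0] - curl(x-H, y, z)[0]) / (2*H)
    dy = (curl(x, y+H, z)[1] - curl(x, y-H, z)[1]) / (2*H)
    dz = (curl(x, y, z+H)[2] - curl(x, y, z-H)[2]) / (2*H)
    return dx + dy + dz
```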
Electricity and magnetism arise from finding an expression for $J^\mu$. The core idea is that $J^\mu$ can be expressed in terms of a single four-potential $A^\mu$. In the following calculations, $J^\mu$ will be constructed from $A^\mu$ and other objects that are independent of the system: $\partial^\mu$ and the Minkowski metric.

The constraint is that $\partial_\mu J^\mu = 0$ from Equation 1. Because there are four components, it is not possible to use an ordinary curl, as in Equation 4. The idea from my orthogonal vector paper of using a $4\times4$ determinant can be applied, however. The one extension needed is that the unit vector for the time direction picks up a negative sign relative to the spatial unit vectors to be consistent with the modified inner product. One vector that has zero divergence is:

$$K^\mu(\nu) = \begin{vmatrix}\hat t & \hat x & \hat y & \hat z\\ \nu^0 & \nu^1 & \nu^2 & \nu^3\\ \partial_0 & \partial_1 & \partial_2 & \partial_3\\ A^0 & A^1 & A^2 & A^3\end{vmatrix} \qquad (5)$$
for any constant four-vector $\nu^\mu$ (so that the derivatives only hit $A^\mu$). This is not sufficient to define $J^\mu$, however, because there should not be a special direction $\nu^\mu$. Instead, I use $K^\mu$ to find another vector that is divergenceless. Wanting $J^\mu$ to be linear in $A^\mu$, I replace the $A^\mu$ row with $K^\mu$, using a second constant four-vector $\sigma^\mu$:

$$H^\mu(\nu,\sigma) = \begin{vmatrix}\hat t & \hat x & \hat y & \hat z\\ \sigma^0 & \sigma^1 & \sigma^2 & \sigma^3\\ \partial_0 & \partial_1 & \partial_2 & \partial_3\\ K^0 & K^1 & K^2 & K^3\end{vmatrix} \qquad (6)$$
to get a vector that involves products of the components of the two constant four-vectors. This suggests a generalization of Equations 5 and 6. Rather than including two arbitrary vectors, I could have their components represent unit vectors. Products of different components would then correspond to inner products of these unit vectors and the invariance of the Minkowski metric would allow an expression for $J^\mu$ in terms of only derivatives of $A^\mu$. In place of Equation 5, I get the tensor:

$$G^{\mu\nu} = \begin{vmatrix}\hat t_\mu & \hat x_\mu & \hat y_\mu & \hat z_\mu\\ \hat t_\nu & \hat x_\nu & \hat y_\nu & \hat z_\nu\\ \partial_0 & \partial_1 & \partial_2 & \partial_3\\ A^0 & A^1 & A^2 & A^3\end{vmatrix}. \qquad (7)$$
Substituting the coefficients of the $\mu$ unit vectors in place of the $A^\mu$ row, as in Equation 6, and contracting over the $\nu$ index by taking inner products of $\nu$ unit vectors that are multiplied together gives:

$$J^\mu \propto \frac12\begin{vmatrix}\hat t_\mu & \hat x_\mu & \hat y_\mu & \hat z_\mu\\ \hat t_\nu & \hat x_\nu & \hat y_\nu & \hat z_\nu\\ \partial_0 & \partial_1 & \partial_2 & \partial_3\\ G^{0\nu} & G^{1\nu} & G^{2\nu} & G^{3\nu}\end{vmatrix} = \begin{pmatrix} -\nabla^2A^0 - \frac1c\frac{\partial}{\partial t}(\vec\nabla\cdot\vec A)\\ \vec\nabla\times(\vec\nabla\times\vec A) + \frac1c\frac{\partial}{\partial t}\big(\vec\nabla A^0 + \frac1c\frac{\partial}{\partial t}\vec A\big)\end{pmatrix} \qquad (8)$$

where the overall scaling does not affect the divergenceless property of $J^\mu$.
~ = rA
~ 0 c @t A
~ and B
~ = r
~ ⇥A
~ as the
Defining E
0
electric and magnetic fields, with A = /c as the static
electric potential, Equation 8 gives the inhomogeneous
Maxwell equations:
~ ·E
~ =
r
~ ⇥B
~
r
APPENDIX
⇢
✏0
1 @ ~
c2 @t E
(9)
~
= µ0 J.
The divergence-less property of Gµ⌫ gives the homogeneous Maxwell Equations:
~ ·B
~ =0
r
~ ⇥E
~+
r
@ ~
@t B
(10)
= ~0.
The gauge invariance of the vector potential is evident from the way in which J^µ was derived using determinants. Determinants are linear in each row in the sense that if one row includes the vector v⃗ = u⃗₁ + u⃗₂, the total determinant is equal to the sum of the determinants found by replacing v⃗ with u⃗₁ and then v⃗ with u⃗₂. The result of Equation 7 would be the same for A′^µ = A^µ + λ^µ, if λ^µ = ∂^µf for some function f(ct, x⃗), since the determinant with ∂^µf as the last row is zero.
The step that was used to get to Equation 8 from
Equation 7 can also be used to arrive at the force law
for charged particles. Equation 1 says that v_µF^µ = 0. Rather than needing to find a vector whose inner product with ∂^µ is zero, the goal now is to get a vector whose inner product with v^µ is zero:
F^µ ∝ ½ det | t̂_µ  x̂_µ  ŷ_µ  ẑ_µ ;  t̂_ν  x̂_ν  ŷ_ν  ẑ_ν ;  v_0  v_1  v_2  v_3 ;  G_{0ν}  G_{1ν}  G_{2ν}  G_{3ν} |
    = ( (1/c) v⃗·E⃗ ,  E⃗ + v⃗×B⃗ )   (11)

where the same contraction of ν unit vectors is performed as was described earlier for finding J^µ. With the charge q as the proportionality constant, this returns the Lorentz force law:

F⃗ = q [ E⃗ + v⃗ × B⃗ ].   (12)

In summary, I arrived at results from electricity and magnetism by requiring that: 1) a divergence-less current be described by second derivatives of a vector potential; and 2) that the four-force for a charge be proportional to first derivatives of this potential and orthogonal to the four-velocity of that charge. In this method, the difference between E⃗ and B⃗ appears as a difference between space and time. While B⃗ = ∇⃗ × A⃗ takes crossed spatial derivatives of the spatial components of A⃗, I found that E⃗ combines spatial derivatives of the time-like component with the time derivatives of the corresponding spatial components. This highlights a symmetry between interchanging space and time, rather than E⃗ and B⃗.
APPENDIX

I will now use Einstein summation notation to reach the same results from a different route. The above derivation illustrates a structure to the way in which the fields and four-potential are defined relative to one another: each field mixes only two of the four space-time coordinates of the four-potential, so that there are (4 choose 2) = 6 field components. This can be embedded in the way that the potential is defined if I postulate that currents and forces can arise from taking derivatives or multiplying four-velocities by a tensor potential:
A^{µνσ} = ½ ε^{µναβ} ε_{αβ}{}^σ{}_τ A^τ   (13)

which restricts the set {σ, τ} to the set {µ, ν}. The antisymmetry of µ ↔ ν means there are six possible ways to form distinct scalars by contracting with combinations of ∂^µ and v^µ:

∂_µ∂_ν∂_σ A^{µνσ} = 0,  v_µv_ν∂_σ A^{µνσ} = 0,  v_µ∂_ν∂_σ A^{µνσ} ≠ 0;
∂_µ∂_νv_σ A^{µνσ} = 0,  v_µv_νv_σ A^{µνσ} = 0,  v_µ∂_νv_σ A^{µνσ} ≠ 0.   (14)
Since ∂_µJ^µ = 0, the first column in Equation 14 gives ∂_ν∂_σ A^{µνσ} and ∂_νv_σ A^{µνσ} as possible candidates for J^µ. The second option is excluded by requiring that the four-current be independent of the four-velocity of an external charge carrier.

Since v_µF^µ = 0, the second column in Equation 14 gives v_ν∂_σ A^{µνσ} and v_νv_σ A^{µνσ} as possible candidates for F^µ. Requiring that the force involve derivatives of the potential eliminates the second option. In terms of A^{µνσ}, I then have:

∂_σ A^{µνσ} = F^{µν},   ∂_ν∂_σ A^{µνσ} = µ₀ J^µ,   v_ν∂_σ A^{µνσ} = (1/q) F^µ   (15)

the Faraday Tensor, four-current, and force law.
Conformal Mapping of Displacement Vectors
Sabrina Gonzalez Pasterski
(Dated: August 4, 2013)
I explore the asymmetry between displacements in space and time for motion in one dimension.
The existence of a maximum velocity c establishes a
notion of symmetry between space and time: it gives
a way to equate distances to time intervals such that a
space-time diagram can appear to put ct and x on equal
footing. There exists a fundamental difference between
displacements in space and time, however. While I can
move either forward or backward in x, I can only move
forward in time. If I take my current position as the
origin of my coordinate system, the region of possible
displacements (cdt, dx) only fills a half-plane.
This raises a question about whether dx and cdt are
the best coordinates to use to describe a displacement.
For instance, using a polar coordinate system to describe
a change in position in the (x, y) plane (where my current
position is always taken as the origin), I could describe
my motion as a series of positive dr displacements along
di↵erent directions ✓ that parameterize the slope. While
such a system may be convenient for a single, instantaneous displacement, it makes reconstructing the full path
more challenging than if (dx, dy) were given. It is ideal to
find descriptions of displacements that are independent
of the observer.
In what follows, I consider what happens when I map the half plane (cdt, dx) onto a full plane. It is possible to perform a conformal mapping of this type by temporarily treating the displacement as a complex variable z = cdt + i dx and squaring to get w = z²/2. The real and imaginary parts are then used to define the horizontal and vertical coordinates in this new plane: ds² = ½(c²dt² − dx²) = dη dξ as the horizontal coordinate and c dt dx as the vertical coordinate.
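As a quick numerical illustration (mine, not part of the original note; the function name conformal_map is assumed), the w = z²/2 map can be coded directly:

```python
def conformal_map(cdt, dx):
    """The w = z^2/2 map: a displacement (c dt, dx) in the half plane
    goes to (Re w, Im w) = (1/2 (c^2 dt^2 - dx^2), c dt * dx)."""
    z = complex(cdt, dx)
    w = 0.5 * z * z
    return w.real, w.imag

print(conformal_map(2.0, 1.0))   # → (1.5, 2.0)
# (0, +dx) and (0, -dx) land on the same point of the negative horizontal axis:
assert conformal_map(0.0, 1.0) == conformal_map(0.0, -1.0) == (-0.5, 0.0)
```

The last line is the x → −x identification discussed below: purely spatial displacements of either sign map to a single point.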
The result of this mapping is depicted in Figure 1.
It stretches angles at the origin by a factor of 2, but
preserves the orthogonality of lines of constant cdt and
dx. The curved lines in the (cdt, dx) plane show contours
of constant ds2 and cdtdx, while the curved lines in the
(ds2 , cdtdx) plane show contours of constant cdt and dx.
Because this map squares the magnitude of the initial
space-time displacement, the coordinates of the map are
now area elements. What is intriguing is that the two
orthogonal area element coordinates correspond to those
of the Minkowski (ct, x) diagram and the rotated (⌘, ⇠)
diagram of my paper “Motivating Special Relativity using Linear Algebra.” While the former (ct, x) basis diagonalizes the metric, the latter (⌘, ⇠) basis diagonalizes
the Lorentz transformation.
This mapping also highlights the x → −x symmetry that was important in my linear algebra derivation: it moves (0, −dx) and (0, +dx) to the same point on the negative dη dξ axis. An irreconcilable ambiguity in the definition of the map along this axis would arise if we could not consider these displacements as fundamentally the same in some respect. Restricting v < c is equivalent to requiring displacement vectors to have a positive value of dη dξ. Placing the same restriction in the (cdt, dx) plane would be analogous to saying that a particle could only move forward in time if there was no maximum velocity.

FIG. 1. Conformal mapping of space-time displacements.
In my linear algebra derivation of Special Relativity,
I showed that the area element dη dξ is invariant under boosts. Meanwhile cdt depends on the frame. In this new coordinate system, the displacements along the horizontal axis behave similarly to time displacements in Galilean
transformations: they are the same regardless of the reference frame. As such, we can divide by |ds| to get a set
of displacements, rather than area elements, that can be
more easily compared to Galilean intuition.
For a particle traveling at a constant positive β, the time-like and space-like displacements are:

dτ = √(1 − β²) dt,   dℓ = dx / √(1 − β²).   (1)
These coordinates describe the increment in proper time,
⌧ , and the length contraction of distances parallel to the
motion.
The mapping from half-plane to full plane thus illustrates important features of Special Relativity. Moreover,
these features of the w(z) mapping are consistent with
Special Relativity because the scale factor for the time
axis relative to the space axis, c, was equal to the speed
of light.
An Alternate Approach for Finding an Orthogonal Basis
Sabrina Gonzalez Pasterski
(Dated: August 2, 2013)
I present an alternative to the Gram-Schmidt method for finding a basis of orthogonal vectors
spanning the same space as a set of starting vectors.
The Gram-Schmidt method for creating an orthogonal
basis that spans a set of vectors is as follows: 1) Start
with one of the vectors and divide by its norm to get
a unit vector. 2) Take another vector, subtract off its projection along the first vector and normalize the result. 3) Repeat for each subsequent vector in the set, subtracting off projections along all previous unit vectors that have been found.
The alternative that I present is elegant although computationally slower because it involves calculating multiple determinants. It is advantageous in situations where unknown symbolic variables need to be manipulated, or where small numbers cause instabilities using Gram–Schmidt. My determinant approach provides an orthogonal basis without requiring division, although scaling by the modulus of the vectors can be done at the end if a normalized basis is desired.
My alternative also uses linear algebra, but starts from
determinants rather than projections of vectors. The determinant of a set of linearly dependent vectors is zero.
When a matrix has a non-zero determinant, the rows can
be taken as a basis for the space spanned by the set of
vectors.
Begin by considering the cross product from vector calculus. The cross product of two vectors can be written as a determinant with arbitrary unit vectors occupying the first row:

A⃗ × B⃗ = det | x̂  ŷ  ẑ ;  A_x  A_y  A_z ;  B_x  B_y  B_z |   (1)

The result is a vector that is orthogonal to A⃗ and B⃗, as can be seen by the following route: C⃗ · (A⃗ × B⃗) is equivalent to replacing the first row of the determinant in Equation 1 by C⃗. Since C⃗ = A⃗ × B⃗, it follows that C⃗ · C⃗ will be non-zero as long as C⃗ ≠ 0. This is the case as long as A⃗ and B⃗ are not parallel. As a result, the determinant of:

M = ( C_x  C_y  C_z ;  A_x  A_y  A_z ;  B_x  B_y  B_z )   (2)

is non-zero and the three rows are linearly independent. Moreover, C⃗ will be perpendicular to both A⃗ and B⃗. However, A⃗ and B⃗ need not be perpendicular to one another. If I replace the original B⃗ that I used to find C⃗ in Equation 1 with the vector C⃗, I find a new vector that is orthogonal to both A⃗ and C⃗:

D⃗ = A⃗ × C⃗ = det | x̂  ŷ  ẑ ;  A_x  A_y  A_z ;  C_x  C_y  C_z |   (3)

Since C⃗ is already orthogonal to A⃗, I have just found an orthogonal basis for three dimensions using two determinants, without needing to do any division.

This can be extended to higher dimensions in a way that: 1) gives a basis for the full space spanned by a set of m linearly independent m-dimensional vectors; and 2) includes a known subset of these basis vectors which span the same n-dimensional space as a set of n linearly independent vectors of length m which are provided as a starting point. This second feature is important when one wants to study that particular subspace.

Consider the case with n < m. I require that the n starting vectors of length m be linearly independent and, additionally, that the vectors found by truncating to the first n + 1 components are also linearly independent. Truncate these vectors to get n vectors of length n + 1 and construct an (n + 1) × (n + 1) determinant similar to Equation 1, with arbitrary unit vectors in the first row:

v⃗_{n+1} = det | x̂₁  x̂₂  x̂₃  … ;  v₁^{x₁}  v₁^{x₂}  v₁^{x₃}  … ;  v₂^{x₁}  v₂^{x₂}  v₂^{x₃}  … ;  ⋮ |   (4)

This determinant will give an (n + 1)-dimensional vector orthogonal to the previous n. Next, include the (n + 2)th coordinate of the first n vectors and add a zero as the (n + 2)th coordinate of v⃗_{n+1} from Equation 4. Place this vector in the last row of an (n + 2) × (n + 2) determinant and repeat.

This will build up to a set of m vectors, of which v⃗_{n+1} … v⃗_m will be orthogonal to each other and to the starting set v⃗_1 … v⃗_n. One can now work backwards, using steps analogous to Equation 3 to sequentially replace the first n vectors with ones orthogonal to all other vectors, using m × m determinants.
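A minimal numerical sketch of this build-up procedure (my own; it assumes numpy and the hypothetical helper name extend_orthogonal) is:

```python
import numpy as np

def extend_orthogonal(V):
    """Extend n independent rows of length m (assumed to satisfy the
    truncation condition described above) to a full basis, adding vectors
    built purely from cofactor determinants -- no division anywhere."""
    rows = [np.asarray(v, dtype=float) for v in V]
    m = rows[0].size
    added = []
    for k in range(len(V), m):
        # (k+1)x(k+1) determinant with unit vectors in the first row:
        # component j of the new vector is the signed cofactor of x_hat_j.
        M = np.array([r[:k + 1] for r in rows])
        v = np.zeros(m)
        for j in range(k + 1):
            v[j] = (-1) ** j * np.linalg.det(np.delete(M, j, axis=1))
        rows.append(v)          # trailing zeros play the role of the added 0
        added.append(v)
    return added

start = [[1.0, 1.0, 0.0, 2.0]]
basis = np.array(start + extend_orthogonal(start))
print(np.round(basis @ basis.T, 10))   # Gram matrix: off-diagonal entries vanish
```

Each new vector is orthogonal to every earlier row because a determinant with a repeated row is zero, exactly as in the 3D argument above.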
Degeneracy vs. Energy Level Scaling for Hydrogen
Sabrina Gonzalez Pasterski
(Dated: August 1, 2013)
I explore the relationship between degeneracy and energy for the Hydrogen atom.
The energy levels of the Hydrogen atom from quantum mechanics are given by:

E_n = −E₀/n²,   E₀ = m e⁴/(2ℏ²),   n ∈ Z⁺   (1)
in c.g.s. units, where no perturbations are included. The
degeneracy of each level is n2 . If it were possible to ignore
electron-electron interactions and fill the energy levels
following the Pauli Exclusion Principle, a total of 2n2
electrons could occupy each En , corresponding to one
spin-up and one spin-down electron in each of the n2
states. To make the atom neutral when it has a fixed number of electrons, the charge of the nucleus could be increased, scaling E₀. In this manner, a completely filled shell would contribute −2E₀ to the total energy of the system, regardless of the n that the shell corresponds to.
This is consistent with the Hellmann–Feynman theorem:

∂E_n/∂e = ⟨n| ∂H/∂e |n⟩  →  ⟨1/r⟩ = 1/(a₀ n²)   (2)
where a₀ = ℏ²/(m_e e²) is the Bohr radius. Higher energy shells of Hydrogen are further away from the proton. Increasing the amount of orbiting electron charge by n² would cancel this effect and give the same potential energy for each shell.
The total energy of the system is linear in the filling fraction ν, representing the number of filled energy levels. Each electron within a given shell contributes equally, while a totally filled shell contributes −2E₀. Figure 1 shows how the total energy decreases with ν while the density of allowed values of ν increases as 2n² for shells that can hold more electrons. In the ground state, at T = 0, the lowest energy levels are filled.
The symmetry of a constant D_nE_n, for degeneracy D_n, also affects the relative probability for finding a single electron in an n-shell versus m-shell state. No electron–electron interactions are being ignored, although only relative probabilities are studied since the sum over all levels diverges. If the probability distribution for the state of a single particle can be treated with Maxwell–Boltzmann statistics, then:
P(n)/P(1) = n² e^{(E₀/k_BT)(1/n² − 1)}.   (3)
In the high temperature limit, P(n) ∝ n², so that the average contribution to the energy of the system, P(n)E_n, is independent of n.

Equation 3 also shows that P(n) has a minimum at n = √(E₀/k_BT), corresponding to k_BT = E₀/n² = |E_n|. When the characteristic thermal energy k_BT is closest to |E_n|, the nth energy level is least likely to be populated. This relates to stimulated absorption since the amount of thermal energy is just enough to ionize the nth energy level. For instance, a gas of hydrogen atoms (as opposed to H₂) at the temperature of the sun's photosphere would have a minimum relative population for the n = 5 energy level according to this Boltzmann distribution. Higher temperatures preferentially depopulate shells with smaller n.
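The photosphere example is easy to check numerically. This sketch is mine, with assumed round values: E₀ = 13.6 eV and a photosphere temperature of roughly 5800 K:

```python
import math

E0 = 13.6          # hydrogen energy scale in eV (assumed value), E_n = -E0/n^2
kB = 8.617333e-5   # Boltzmann constant in eV/K
T_sun = 5800.0     # rough photosphere temperature in K (assumed value)

def relative_population(n, T):
    """P(n)/P(1) from Equation 3."""
    return n**2 * math.exp((E0 / (kB * T)) * (1.0 / n**2 - 1.0))

n_min = math.sqrt(E0 / (kB * T_sun))   # continuous minimum of Equation 3
print(round(n_min))                    # → 5
```

The integer levels n = 4 and n = 6 are indeed both more populated than n = 5 at this temperature, consistent with the minimum quoted above.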
If instead we had a constant D_nE_n with D_n ∝ n^j rather than n², the least populated state would still correspond to k_BT = |E_n|. If the power of the degeneracy is different, such that D_n ∝ n^f, E_n ∝ −n^{−j}, the minimum population occurs at k_BT = (j/f)|E_n|. A similar extremum occurs for energy levels that are positive and increase with n, reminiscent of Rotational Raman Scattering.

Since degeneracies broken by perturbations usually only cause small splittings in the energy levels of a system, one could imagine that there would be cases where the j/f proportionality factor could be used to define an effective order for the degeneracy of energy levels.
FIG. 1. Total energy as a function of the filling fraction for the ground state of an idealized multi-electron atom: E_tot = −2E₀ν.
Dot Products in Special Relativity
Sabrina Gonzalez Pasterski
(Dated: July 30, 2013)
I present a derivation of the Minkowski metric and visualizations of the space-time dot product.
I.  POSTULATES

1. The dot product is linear and commutative.

2. The dot product is invariant under boosts.

3. The metric defining the dot product is independent of the reference frame.
II.  DERIVING THE METRIC

The goal is to construct a scalar from two space-time vectors that is invariant during Lorentz boosts between inertial reference frames. Requiring that the dot product be linear (Postulate 1) gives:

A · B = ( a_t  a_x ) ( t̂_a·t̂_b   t̂_a·x̂_b ;  x̂_a·t̂_b   x̂_a·x̂_b ) ( b_t ; b_x ) = Aᵀ M B   (1)
where the matrix M corresponds to the Minkowski metric, written η_{µν} for Special Relativity. I use M to distinguish it from the (η, ξ) basis defined in my “Motivating Special Relativity Using Linear Algebra” paper.

If the dot product is assumed to be commutative, A · B = B · A, so M is symmetric. This holds in any basis. Moreover, the (ct, x) basis should diagonalize M so that a displacement in the rest frame is orthogonal to a change in time. This intuition will be used as a cross check.
Starting in the (η, ξ) frame:

M = ( f  g ;  g  h ).   (2)

I have used the symmetry of M and a restriction that the entries are real to reduce M to three parameters, corresponding to dot products of the η̂ and ξ̂ unit vectors as in Equation 1 for the (ct, x) basis.
As shown in the linear algebra-based derivation paper, a boost stretches η by λ₁(v) and ξ by λ₂(v). This stretching can be absorbed into M′:

M′ = ( λ₁²f   λ₁λ₂g ;  λ₁λ₂g   λ₂²h ) = M.   (3)

The matrices M and M′ can be equated by choosing A and B to pick out each entry, satisfying Postulate 2. Postulate 3 says that M should be independent of β. While M = M′ would hold if f ∝ 1/λ₁², the definition of the dot product should not change with reference frames. This is satisfied if f = h = 0. Moreover, λ₁λ₂ = 1 gives M = g σ_x.
Finally, the dot product should reduce to the ordinary dot product for two vectors along the x̂ = (ξ̂ − η̂)/√2 axis. This gives g = −1 for the convention where spatial dot products are positive. Rotating back into the (ct, x) basis:

M = ( −1  0 ;  0  1 ).   (4)

I have thus derived the space-time metric for a single spatial coordinate, finding the invariant dot product between two space-time vectors: A · B = −a_t b_t + a_x b_x.
III.  VISUALIZING DOT PRODUCTS
While spatial bases can be rotated onto each other,
for boosts, the transformations are restricted: time-like
and space-like vectors on opposite sides of the light cone
cannot be transformed into one another.
The above derivation generalizes to three spatial dimensions by having ŷ and ẑ behave like x̂, so that A · B = −a_t b_t + a⃗ · b⃗, where a⃗ is the spatial part of A. It is possible to rotate the spatial axes so that x̂ aligns with A and then only use (b_t, b_x).

To build intuition, consider the case A = B. This dot product is zero if a_t = ±a_x and is largest if one of the two components is zero. For general B: |A · B| = |(a_x, a_t, 0) × (b_t, b_x, 0)|, illustrating the cross-product-like nature of the space-time dot product when time is treated as an additional spatial coordinate. The magnitude of this cross product is equal to the area defined by reflecting one of the vectors across the line x = ct. Alternatively, a standard dot product can be taken after reflecting one of the vectors in time. Both methods are illustrated in Figure 1.
FIG. 1. Two space-time dot product visualizations which use reflections in the (ct, x) plane.
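The cross-product picture is easy to test numerically; the sketch below is mine (function names assumed), comparing |A·B| against the magnitude of the corresponding 3D cross product:

```python
import numpy as np

def minkowski_dot(A, B):
    """A.B = -a_t b_t + a_x b_x, the metric derived in Section II."""
    return -A[0] * B[0] + A[1] * B[1]

def cross_magnitude(A, B):
    """|(a_x, a_t, 0) x (b_t, b_x, 0)|: the components of A are swapped,
    then an ordinary 3D cross product is taken with B."""
    a = np.array([A[1], A[0], 0.0])
    b = np.array([B[0], B[1], 0.0])
    return float(np.linalg.norm(np.cross(a, b)))

A, B = (3.0, 1.0), (2.0, 5.0)
print(abs(minkowski_dot(A, B)), cross_magnitude(A, B))   # → 1.0 1.0
```

The z-component of the cross product is a_x b_x − a_t b_t, which is exactly the space-time dot product up to sign, so the two magnitudes always agree.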
The Schrödinger Equation and Phase Space
Sabrina Gonzalez Pasterski
(Dated: July 30, 2013)
I arrive at the Schrödinger Equation given a particular form for probabilities in phase space.
In this paper, I explore what happens when the probability distribution for particles in phase space arises from:

φ(x, p, t) = (1/(2√(2πℏ))) [ ψ*(x, t) ψ̃(p, t) e^{ipx/ℏ} + c.c. ]   (1)

for some complex ψ(x, t), such that expectation values of classical observables are found by integrating over this function: ⟨f(x, p)⟩ = ∫∫ f(x, p) φ(x, p, t) dx dp. Here
“ + c.c.” means that the complex conjugate is added.
Equation 1 forces φ to be real, but allows it to be negative. It is completely specified by ψ(x, t), since ψ̃(p, t) is defined as the Fourier transform of ψ(x, t):

ψ̃(p, t) = (1/√(2πℏ)) ∫_{−∞}^{+∞} ψ(y, t) e^{−ipy/ℏ} dy.   (2)
To get some intuition for how φ relates to a probability distribution, notice that:

P(x, t) = ∫_{−∞}^{+∞} φ(x, p, t) dp = |ψ(x, t)|²
P(p, t) = ∫_{−∞}^{+∞} φ(x, p, t) dx = |ψ̃(p, t)|²   (3)

would be the same definitions for the probability distributions in x and p if ψ(x, t) were taken to be the wave function from quantum mechanics. The integral over all x and p is defined to be normalized for all time, and φ approaches zero as x, p → ±∞.

My “Hamilton's Equations of Motion” paper postulated that ρ̇ = 0 in phase space for the Hamiltonian:

H(x, p) = p²/2m + V(x).   (4)

0 = ∂_tρ + ẋ∂_xρ + ṗ∂_pρ = ∂_tρ + (p/m)∂_xρ − V′(x)∂_pρ.   (5)

While Equation 1 gives:

φ̇ = (1/(2√(2πℏ))) e^{ipx/ℏ} [ ∂_tψ* ψ̃ + ψ* ∂_tψ̃ + (p/m)(∂_xψ* + (ip/ℏ)ψ*) ψ̃ − V′(x)(ψ* ∂_pψ̃ + (ix/ℏ)ψ*ψ̃) ] + c.c.   (6)

The following calculations explore the consequence of restricting ∫φ̇ dp = ∫ρ̇ dp = 0. Using integration by parts:

p ψ̃(p, t) = (1/√(2πℏ)) ∫_{−∞}^{+∞} p ψ(y, t) e^{−ipy/ℏ} dy
          = (1/√(2πℏ)) ∫_{−∞}^{+∞} ψ(y, t) iℏ ∂_y[e^{−ipy/ℏ}] dy
          = (1/√(2πℏ)) ∫_{−∞}^{+∞} (ℏ/i) ∂_y[ψ(y, t)] e^{−ipy/ℏ} dy   (7)

since the boundary term is zero. Similarly,

∂_pψ̃(p, t) = (1/√(2πℏ)) ∫_{−∞}^{+∞} (−iy/ℏ) ψ(y, t) e^{−ipy/ℏ} dy.   (8)

These relations cause the ṗ∂_pφ term to vanish when integrated over p. The first two terms in Equation 5 yield:

∂_t[ψ*ψ] = −(ℏ/2mi) ∂_x[ ψ* ∂_xψ − (∂_xψ*) ψ ].   (9)

If P(x, t) = P(x), so that ψ(x, t) = ψ(x)e^{if(t)} for some time-dependent phase, then ψ* ∂_xψ − (∂_xψ*) ψ must be a constant. Using the limit that ψ → 0 for x → ±∞ means this constant is zero: ψ* ∂_xψ is real. An arbitrary ψ(x) can be written as ψ = A(x)e^{ig(x)} for some real functions A(x) and g(x):

0 = Im[ψ* ∂_xψ] = Im[A(x)(A′(x) + iA(x)g′(x))] = A(x)² g′(x)   (10)

so g′(x) = 0 and ψ(x) is real up to a constant phase.

Next, consider taking the expectation value of H(x, p) as a function of x by integrating over p:

∫_{−∞}^{+∞} H(x, p) φ dp = ½ [ ψ* (−(ℏ²/2m)∂_x² + V(x)) ψ + c.c. ]   (11)

which for the time-independent case becomes:

∫_{−∞}^{+∞} H(x, p) φ dp = ψ (−(ℏ²/2m)∂_x² + V(x)) ψ.   (12)

There will be some ψ_n for which:

∫_{−∞}^{+∞} H(x, p) φ_n dp = E_n P_n(x).   (13)

From Equation 13, these eigenfunctions ψ_n satisfy the differential equation:

−(ℏ²/2m) ∂_x² ψ_n + V(x) ψ_n = E_n ψ_n   (14)

away from ψ_n = 0, and continue to satisfy Equation 14 if restricted to having ∂_x²ψ_n = 0 when ψ_n = 0.

Letting ψ_n(x, t) = ψ_n(x)e^{if_n(t)}, plug ψ = ψ_1 + ψ_2 into Equation 9:

(1/iℏ)[ ψ* (−(ℏ²/2m) ∂_x²) ψ ] + c.c. = ψ_1ψ_2 [f_1′ − f_2′] sin(f_1 − f_2)
                                       = −(1/ℏ) ψ_1ψ_2 [E_1 − E_2] sin(f_1 − f_2)   (15)

If a time translation of ψ_n is still a solution, let ψ_2(x, t) = ψ_1(x)e^{if_1(t+Δt)}. Since E_2 = E_1, and f_1(t) − f_1(t + Δt) = nπ for n ∈ Z would not hold for all t unless f_1 is constant, we must have f_1′(t) = f_1′(t + Δt): the phase is linear in time. Equation 15 is consistent with f_n(t) = −E_n t/ℏ up to a constant phase. This gives:

E_n ψ_n = iℏ ∂_t ψ_n.   (16)

If we restrict ψ to linear combinations of ψ_n, we see that the Schrödinger Equation is obeyed:

iℏ ∂_tψ = −(ℏ²/2m) ∂_x² ψ + V(x) ψ.   (17)
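Equation 3 can be checked numerically. The sketch below is mine, not the paper's; the grid sizes and the Gaussian ψ are assumed choices, with ℏ = 1. It builds φ from Equations 1 and 2 and compares its p-marginal with |ψ(x)|²:

```python
import numpy as np

hbar = 1.0
x = np.linspace(-8, 8, 401)
p = np.linspace(-8, 8, 401)
dx, dp = x[1] - x[0], p[1] - p[0]

psi = np.pi**-0.25 * np.exp(-x**2 / 2)        # normalized Gaussian psi(x)
# psi_tilde(p) from Equation 2, as a discrete Fourier sum:
psi_t = np.array([dx / np.sqrt(2*np.pi*hbar) * np.sum(psi * np.exp(-1j*pp*x/hbar))
                  for pp in p])

X, P = np.meshgrid(x, p, indexing='ij')
# phi(x,p) from Equation 1: (z + z*)/2 = Re z, with the 1/sqrt(2 pi hbar) prefactor.
phi = (np.conj(psi)[:, None] * psi_t[None, :] * np.exp(1j*P*X/hbar)).real
phi /= np.sqrt(2*np.pi*hbar)

P_x = phi.sum(axis=1) * dp                    # marginal over p
print(np.max(np.abs(P_x - np.abs(psi)**2)))   # numerically ~0
```

The x-marginal of φ reproduces |ψ(x)|² (and likewise the p-marginal reproduces |ψ̃(p)|²), even though φ itself is allowed to go negative.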
Position as a Single-Valued Function of Time
Sabrina Gonzalez Pasterski
(Dated: July 27, 2013)
I present a geometrical illustration of how the speed limit in Special Relativity requires position
to be a single-valued function of time in any reference frame.
I.  BACKGROUND

In my paper “Motivating Special Relativity using Linear Algebra,” I derived the Lorentz transformations assuming that the speed of light is the same in any reference frame and that transformations of space-time coordinates are linear. In the (η, ξ) basis where:

η = (ct − x)/√2,   ξ = (ct + x)/√2   (1)

a boost in velocity is described by:

( η′ ; ξ′ ) = ( λ₁  0 ;  0  λ₂ ) ( η ; ξ ),   λ₁ = √((1 + β)/(1 − β)),   λ₂ = √((1 − β)/(1 + β)).   (2)

The fact that |β| > 1 makes λ₁ and λ₂ imaginary justifies the notion of a speed limit in Special Relativity: |v| ≤ c. In this paper, I consider the implications of having the strict inequality |v| < c hold.
FIG. 1. Slope restrictions lead to properties of x(ct).
II.  SPACE-TIME TRAJECTORIES
A function f(k) of a variable k is defined such that for any value of that variable k, which f takes as an input, the output f(k) has a single value. For instance, f(k) = k² has a single value for each k, while two different values of k are allowed to give the same value of f(k), e.g. k and −k.
The motion of a particle is described by a path in the
(ct, x) plane. If we can write the position as a function
of ct, then while a particle can be at the same position
at di↵erent times, at any time it can only have a single
position.
While daily experience makes this property of x and ct seem intuitive, such a restriction need not hold for any two physical quantities. Figure 1a illustrates how the relationship between a particle's x and y position does not have this restriction. In this case, the slope dy/dx represents a direction, which only takes on a physical meaning if there is something that distinguishes that direction, e.g. the amount of sunlight hitting you if you move in or out of a shadow by traveling that way. Indeed, if one were unable to distinguish x from y, we could rotate our coordinates to arbitrarily change the slope.
I will show that setting a limit on the magnitude of dx/dt implies that x is a single-valued function of t, which we can write as x(ct). Figure 1b shows that the transformations of Equation 2 can modify the slope of the position axis in different reference frames. Since:

λ₂/λ₁ = (1 − β)/(1 + β) > 0   (3)

for |β| < 1, and the x axis has slope −1 in the rest frame, lines of constant time for any reference frame will have a negative slope when plotted in the (η, ξ) coordinates of a given rest frame.
The restriction that |β| < 1 also implies that the trajectory of a particle in (η, ξ) is strictly increasing as a function of η. If its slope were zero at any point, the particle would be traveling at c in the −x̂ direction. If its slope were infinite, the particle would be traveling at c in the +x̂ direction. At any instant, the slope must be between these two values.
I will show that this implies that x = x(ct) for any reference frame by contradiction (the red curve in Figure 1b). If there exists a reference frame in which x′ takes on more than one value for a given ct′, then for a trajectory that is continuous (the particle does not jump between points in space and time) there will be some position along the particle's path in the (η, ξ) plane that has a tangent parallel to the average slope, which is the slope of the x′ axis. This comes from an application of the mean value theorem of calculus.

I previously showed that the slope of any x′ axis is negative, so this means that the slope of the trajectory in (η, ξ) would also be negative, which is not allowed by |β| < 1. This contradiction tells us that x is single-valued as a function of time in any reference frame. Special Relativity thus implies that the particle cannot occupy two positions at the same time.
Motivating Special Relativity using Linear Algebra
Sabrina Gonzalez Pasterski
(Dated: July 24, 2013)
I present a geometrical derivation of results from Special Relativity for a single spatial coordinate.
I.  POSTULATES
In this paper, I derive results from Special Relativity
using symmetry and the following postulates:
1. The speed of light is constant in any reference
frame.
2. The transformation of space-time coordinates when
changing between reference frames is linear.
FIG. 1. Choice of basis to diagonalize the transformation.
II.  TRANSFORMING COORDINATES
When describing the motion of a particle along the x
axis, it is convenient to plot position as a function of
time. Figure 1a shows such a plot. The green line has slope β = v/c and describes the motion of a particle which moves at a constant velocity v and passes through x = 0 at t = 0. The two diagonals describe the paths of photons traveling at speed c in the ±x̂ directions.
According to Postulate 1, if we change the velocity of
our reference frame by “boosting” along the x axis, the
speed of light will still be c, meaning that in the new
reference frame, the paths of photons will still have slope
±1. If, as per Postulate 2, the transformation is linear,
these diagonals will be eigenvectors of the transformation. Figure 1b rotates the (ct, x) coordinates by 45° to the (η, ξ) basis that diagonalizes the boost. In this basis, a boost in velocity amounts to applying the linear transformation:

( η′ ; ξ′ ) = ( λ₁  0 ;  0  λ₂ ) ( η ; ξ )   (1)

for some eigenvalues λ₁(v) and λ₂(v) which determine the scaling of the axes during a boost by v.
In this (η, ξ) basis, the slope of a particle traveling at a constant velocity is m = (1 + β)/(1 − β). A boost into a frame in which this particle has zero velocity must take this slope to 1:

ξ′/η′ = (λ₂/λ₁)(ξ/η) = (λ₂/λ₁) m = 1  →  λ₂/λ₁ = (1 − β)/(1 + β).   (2)
Now consider a switch in the sign of v by adding a second particle moving in the opposite direction. If both the +v and −v particles pass through x = 0 at t = 0, their positions at any time will be reflections of one another across the m = 1 diagonal in the (η, ξ) plane, which corresponds to the time axis. If scaling η by λ₁(v) and ξ by λ₂(v) brings the +v particle's space-time coordinate at a given ct to a particular ct′ on the m = 1 diagonal during a +v boost, then scaling η by λ₂(v) and ξ by λ₁(v) will bring the corresponding space-time coordinate of a −v particle's path to the same point on the m = 1 diagonal. Since this is equivalent to performing a −v boost instead, λ₁(−v) = λ₂(v).
Because λ₁(v) determines the scaling of the η axis during a boost by v, a subsequent boost by −v should undo this rescaling, giving λ₁(−v) = 1/λ₁(v). This says that λ₁(v)λ₂(v) = 1: area elements are invariant under a boost. Solving for λ₁ and λ₂ gives:

λ₁ = √((1 + β)/(1 − β)),   λ₂ = √((1 − β)/(1 + β))   (3)

where the limit of λ₁, λ₂ → 1 for v → 0 sets the overall sign of the eigenvalues.
From the definition of η and ξ:

η = (ct − x)/√2,   ξ = (ct + x)/√2,   (4)

this transformation reduces to:

ct′ = γ(ct − βx),   x′ = γ(x − βct)   (5)

in the (ct, x) basis, with γ = 1/√(1 − β²). This completes a derivation of the standard Lorentz transformation for a single spatial coordinate in Special Relativity.
III.  APPLICATIONS
Figure 2 illustrates the effect of a boost and gives a geometrical picture from which the velocity addition formula, length contraction, time dilation, the invariant interval, and the relativistic Doppler effect will be derived.
2
ξ=
ct x
2
x
ξ '=
m2
ct
m1
ct '
x'
m'
η=
LP
ct− x
2
η '=
L'
B
λ2
A
B
A
p
2 L . The length of the object in the moving
is 1
P
frame is thus contracted: L0 = LP , compared to the
proper length LP in the object’s rest frame.
ct ' x '
2
ct ' −x '
2
1
1
a
FIG. 2. Illustration of a boost in the (⌘, ⇠) basis.
✓
Velocity Addition
If one particle is traveling at v1 , another at v2 , the relative speed as seen from the reference frame of particle 1
will not generally be v2 v1 . To get the correct result,
take a triangle with one vertex at the origin, one at the
point (1, m1 ), and one at the point (1, m2 ), as illustrated
by the blue lines in Figure 2a. Next, boost to a frame
where m1 is along the diagonal, corresponding to the rest
frame of particle 1 (Figure 2b).
In this frame, the point (1, m2 ) transforms to:
  (1, m_2) \to \left( \sqrt{\frac{1-\beta_1}{1+\beta_1}},\ \sqrt{\frac{1+\beta_1}{1-\beta_1}}\, m_2 \right).          (5)

Solving for m gives the new slope m' = m_2/m_1. Since a slope m corresponds to the velocity \beta = (1-m)/(1+m), this yields:

  \beta' = \frac{1-m'}{1+m'} = \frac{m_1 - m_2}{m_1 + m_2} = \frac{\beta_2 - \beta_1}{1 - \beta_1 \beta_2}          (6)

as the relative velocity. The velocity addition formula from Special Relativity for two particles moving along the same axis follows from taking \beta_1 \to -\beta_1:

  \beta = \frac{\beta_1 + \beta_2}{1 + \beta_1 \beta_2}.          (7)

III.2. Length Contraction

Figure 2 also illustrates length contraction. A solid object at rest traces out a diagonal ribbon parallel to the slope m = 1 time axis. Let one edge be at x = 0 and the other be at x = L_P. This gives \xi = \eta and \xi = \eta - \sqrt{2} L_P, corresponding to the top and bottom green lines in Figure 2a, respectively. When the reference frame is boosted by v in Figure 2b, the edges are described by:

  \xi' = \frac{1-\beta}{1+\beta}\, \eta'  and  \xi' = \frac{1-\beta}{1+\beta}\, \eta' - \sqrt{\frac{1-\beta}{1+\beta}} \times \sqrt{2} L_P.          (8)

Physical length is measured at constant time. This is shown by the red arrows, which mark the separation, as measured along the x and x' axes, between the top and bottom green lines in the stationary and boosted frames. The line \xi' = -\eta' in Figure 2b intersects the top green line at (0, 0) and the bottom green line at \frac{L_P}{\sqrt{2}} \sqrt{1-\beta^2}\, (1, -1). The distance between these points is L_P \sqrt{1-\beta^2} = L_P/\gamma, shorter than the proper length L_P.

III.3. Time Dilation

Proper time is the time between two events at the same x. This corresponds to the distance between the origin and the brown cross in Figure 2a. The points (0, 0) and \frac{ct_P}{\sqrt{2}} (1, 1) transform to:

  (0, 0)  and  \frac{ct_P}{\sqrt{2}} \left( \sqrt{\frac{1+\beta}{1-\beta}},\ \sqrt{\frac{1-\beta}{1+\beta}} \right)          (9)

in Figure 2b. Here, the time separation is the distance between the x' axis and the dashed brown line:

  ct' = \frac{\eta' + \xi'}{\sqrt{2}} = \frac{ct_P}{2} \left( \sqrt{\frac{1+\beta}{1-\beta}} + \sqrt{\frac{1-\beta}{1+\beta}} \right) = \frac{ct_P}{\sqrt{1-\beta^2}}, \qquad t' = \gamma\, t_P.          (10)

This result is known as time dilation. The time between two events is longer in a reference frame where those events occur at different x positions.

III.4. The Invariant Interval

The fact that area elements are invariant tells us that:

  d\eta \times d\xi = \frac{1}{2} (c\,dt - dx) \times (c\,dt + dx) \propto c^2 dt^2 - dx^2          (11)

is invariant under boosts. Plotting the coordinates of two events, A and B, in the (\eta, \xi) plane, the area of a rectangular envelope with these two events at opposite corners (the light blue regions in Figure 2) is proportional to the invariant interval between these events:

  s^2 = \Delta x^2 - c^2 \Delta t^2.
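The slope bookkeeping above is easy to check numerically. A minimal sketch in Python, with c = 1 and helper names of my own choosing:

```python
import math

def slope(beta):
    # Worldline slope in the (eta, xi) plane: m = (1 - beta) / (1 + beta).
    return (1 - beta) / (1 + beta)

def beta_from_slope(m):
    # Inverse relation: beta = (1 - m) / (1 + m).
    return (1 - m) / (1 + m)

# Relative velocity: boosting to the rest frame of particle 1 rescales
# every slope by 1/m1, so particle 2 ends up with slope m' = m2/m1.
beta1, beta2 = 0.5, 0.8
m_prime = slope(beta2) / slope(beta1)
beta_rel = beta_from_slope(m_prime)
assert abs(beta_rel - (beta2 - beta1) / (1 - beta1 * beta2)) < 1e-12

# Time dilation: the event (eta, xi) = (c t_P / sqrt(2)) (1, 1) picks up
# the factors sqrt((1+b)/(1-b)) and sqrt((1-b)/(1+b)) under a boost.
beta, t_P = 0.6, 1.0
s = math.sqrt((1 + beta) / (1 - beta))
t_prime = (t_P / math.sqrt(2)) * (s + 1 / s) / math.sqrt(2)
assert abs(t_prime - t_P / math.sqrt(1 - beta**2)) < 1e-12  # t' = gamma t_P
```

For beta1 = 0.5 and beta2 = 0.8 the slope composition gives a relative velocity of 0.5, matching the standard formula.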
III.5.
Relativistic Doppler Effect

In the (\eta, \xi) plane, horizontal lines correspond to photons traveling in the -\hat{x} direction, while vertical lines correspond to photons traveling in the +\hat{x} direction. The frequency observed by a person at x = 0 is inversely proportional to the distance between intersections of these gridlines and the time axis (the m = 1 diagonals shown in gray in Figure 2).

Since a boost in the +\hat{x} direction stretches \eta by \sqrt{\frac{1+\beta}{1-\beta}}, the vertical gridlines in the boosted frame are further apart than in the rest frame. The frequency of light moving in the +\hat{x} direction is thus redshifted by a factor of \sqrt{\frac{1-\beta}{1+\beta}} as the observer moves away from the source. Since the horizontal gridlines are closer together in the boosted frame, the frequency of light moving in the -\hat{x} direction is blueshifted by a factor of \sqrt{\frac{1+\beta}{1-\beta}}.
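The gridline-stretching argument reproduces the standard longitudinal Doppler formula; a quick numerical cross-check (the helper name is mine):

```python
import math

def doppler_factor(beta):
    # Observed frequency is inversely proportional to gridline spacing, and
    # the +x-photon gridlines stretch by sqrt((1+b)/(1-b)) under the boost,
    # so the frequency is rescaled by sqrt((1-b)/(1+b)).
    return math.sqrt((1 - beta) / (1 + beta))

beta = 0.6
f_ratio = doppler_factor(beta)

# Cross-check against the textbook form f'/f = (1 - beta)/sqrt(1 - beta^2)
# for a receding observer; redshift means the factor is below 1.
assert abs(f_ratio - (1 - beta) / math.sqrt(1 - beta**2)) < 1e-12
assert f_ratio < 1
```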
Visualization for Spin-1/2 Inner Products
Sabrina Gonzalez Pasterski
(Dated: July 23, 2013)
I present a geometrical visualization for the magnitude of the inner product of two spin-1/2 states.
A spin-1/2 quantum state can be visualized as a vector on the Bloch sphere, pointing in a direction described by the angles (\theta, \phi) in spherical coordinates. These angular coordinates parameterize the quantum spin state. For instance, the states |\psi\rangle and |\chi\rangle corresponding to spins pointing in the \vec{v} and \vec{v}\,' directions:

  \vec{v} = (\sin\theta_1 \cos\phi_1,\ \sin\theta_1 \sin\phi_1,\ \cos\theta_1)
  \vec{v}\,' = (\sin\theta_2 \cos\phi_2,\ \sin\theta_2 \sin\phi_2,\ \cos\theta_2)          (1)
are represented in quantum mechanics by:
  |\psi\rangle = \cos\frac{\theta_1}{2}\, |0\rangle + e^{i\phi_1} \sin\frac{\theta_1}{2}\, |1\rangle
  |\chi\rangle = \cos\frac{\theta_2}{2}\, |0\rangle + e^{i\phi_2} \sin\frac{\theta_2}{2}\, |1\rangle          (2)

FIG. 1. Illustration of the calculation in Equation 4.
where the two orthogonal basis states |0\rangle and |1\rangle correspond to spin up and spin down along the \hat{z} direction. One aspect that makes visualizing these states counterintuitive is that a spin polarized along the +\hat{z} direction is orthogonal to a spin polarized along the -\hat{z} direction. In physical space, the dot product of these two unit vectors would be -1, not zero.
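The contrast can be made concrete: for spins along +\hat{z} and -\hat{z} the quantum overlap vanishes even though the Bloch vectors are antiparallel. A small sketch, with the spinors written out by hand:

```python
# Spin up/down along +z as two-component spinors: |0> = (1, 0), |1> = (0, 1).
up = (1 + 0j, 0 + 0j)
down = (0 + 0j, 1 + 0j)

def braket(a, b):
    # <a|b> with complex conjugation on the bra.
    return sum(x.conjugate() * y for x, y in zip(a, b))

assert abs(braket(up, down)) ** 2 == 0          # orthogonal quantum states

# The corresponding Bloch vectors point along +z and -z: dot product is -1.
v_up, v_down = (0, 0, 1), (0, 0, -1)
assert sum(a * b for a, b in zip(v_up, v_down)) == -1
```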
When we visualize light passing through a polarizer, we find that tilting a filter by 45° cuts the intensity of an initially polarized beam in half, tilting by 90° cuts it out completely, while flipping by a full 180° has no effect. For spin-1/2 particles, such as electrons, the magnetic moment can be oriented either up (+\hbar/2) or down (-\hbar/2) along a particular direction. Unlike rotating a light polarizer by 90°, measuring a +\hat{z} spin along a perpendicular direction gives spin up and down with equal probability.
This arises from the fact that the inner product between two states behaves differently than the dot product between the vectors in Equation 1:

  |\langle \psi | \chi \rangle|^2 = \left| \cos\frac{\theta_1}{2} \cos\frac{\theta_2}{2} + e^{i(\phi_1 - \phi_2)} \sin\frac{\theta_1}{2} \sin\frac{\theta_2}{2} \right|^2
    = \left( \cos\frac{\theta_1}{2} \cos\frac{\theta_2}{2} + \cos(\phi_1 - \phi_2) \sin\frac{\theta_1}{2} \sin\frac{\theta_2}{2} \right)^2 + \left( \sin(\phi_1 - \phi_2) \sin\frac{\theta_1}{2} \sin\frac{\theta_2}{2} \right)^2          (3)
    = \frac{1}{2} \left[ 1 + \sin\theta_1 \sin\theta_2 \cos(\phi_1 - \phi_2) + \cos\theta_1 \cos\theta_2 \right].
In this paper I derive a method to obtain the same
value using a geometrical visualization where vectors behave as they normally would in physical space. Take the
norm squared of the average of the two spin vectors:
  \left| \frac{\vec{v} + \vec{v}\,'}{2} \right|^2 = \frac{1}{4} \left[ (\sin\theta_1 \cos\phi_1 + \sin\theta_2 \cos\phi_2)^2 + (\sin\theta_1 \sin\phi_1 + \sin\theta_2 \sin\phi_2)^2 + (\cos\theta_1 + \cos\theta_2)^2 \right]          (4)
    = \frac{1}{2} \left[ 1 + \sin\theta_1 \sin\theta_2 \cos(\phi_1 - \phi_2) + \cos\theta_1 \cos\theta_2 \right].
The results of Equations 3 and 4 are equal. As a check,
we see that they both give zero for two vectors pointing
in opposite directions on the Bloch sphere. The algebra
can be simplified by taking the case where one spin points
along +ẑ, as illustrated in Figure 1.
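The equality of Equations 3 and 4 can also be verified numerically for arbitrary angles; a minimal sketch (function names are mine):

```python
import math
import cmath

def inner_product_sq(theta1, phi1, theta2, phi2):
    # |<psi|chi>|^2 from the spinor components in Equation 2.
    amp = (math.cos(theta1 / 2) * math.cos(theta2 / 2)
           + cmath.exp(1j * (phi1 - phi2))
           * math.sin(theta1 / 2) * math.sin(theta2 / 2))
    return abs(amp) ** 2

def avg_vector_sq(theta1, phi1, theta2, phi2):
    # |(v + v')/2|^2 for the Bloch vectors in Equation 1.
    v1 = (math.sin(theta1) * math.cos(phi1),
          math.sin(theta1) * math.sin(phi1), math.cos(theta1))
    v2 = (math.sin(theta2) * math.cos(phi2),
          math.sin(theta2) * math.sin(phi2), math.cos(theta2))
    return sum(((a + b) / 2) ** 2 for a, b in zip(v1, v2))

# The two expressions agree for arbitrary angles...
args = (0.7, 1.1, 2.3, -0.4)
assert abs(inner_product_sq(*args) - avg_vector_sq(*args)) < 1e-12

# ...and both vanish for antipodal points on the Bloch sphere.
assert inner_product_sq(0.7, 1.1, math.pi - 0.7, 1.1 + math.pi) < 1e-12
```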
In quantum mechanics the norm squared of an inner
product corresponds to a transition probability. The visualization presented here connects this notion of probability to what one would see in a classical double slit interference experiment, where superimposed electric field
vectors are added and then squared to get the intensity.
Here, the vectors are the spin orientations of two spin-1/2 states, and the transition probability corresponds to the likelihood of achieving a spin-up measurement along the \vec{v} direction for an electron with a spin in the \vec{v}\,' direction.
Hamilton’s Equations of Motion
Sabrina Gonzalez Pasterski
(Dated: July 14, 2013)
I motivate Hamilton’s equations of motion using a geometrical picture of contours in phase space.
The following considers a single Cartesian coordinate x with conjugate momentum p.
I.
POSTULATES
1. There exists a function H(x, p) which is constant
along a particle’s trajectory in phase space and is
time-independent.
2. The momentum p is defined as p = mẋ.
3. Motion within phase space is characterized by incompressible fluid flow, so that the phase space velocity is divergence-less: \vec{\nabla} \cdot \vec{v} = 0.
FIG. 1. Illustration of contours of H(x, p) in phase space.

II. DERIVATION

At any position on a contour of H(x, p), the gradient:

  \vec{\nabla} H = \frac{\partial H}{\partial x}\, \hat{x} + \frac{\partial H}{\partial p}\, \hat{p}          (1)

points perpendicular to this contour. This is represented by the red arrow in Figure 1. I can define a vector \vec{\eta} perpendicular to \vec{\nabla} H in the (x, p)-plane:

  \vec{\eta} = \frac{\partial H}{\partial p}\, \hat{x} - \frac{\partial H}{\partial x}\, \hat{p}          (2)

represented by the blue arrow in Figure 1. Being perpendicular to the gradient, which is perpendicular to the contour, we find that \vec{\eta} points along the contour of H(x, p). Since a particle's motion is restricted to contours of H by Postulate 1, its instantaneous velocity in phase space will be parallel to the contour it is on and, thus, \vec{\eta}. The magnitude of the velocity is not fixed; however, it can be handled by multiplying \vec{\eta} in Equation 2 by an unknown function \alpha(x, p), so that:

  \vec{v} = \dot{x}\, \hat{x} + \dot{p}\, \hat{p} = \alpha(x, p) \left[ \frac{\partial H}{\partial p}\, \hat{x} - \frac{\partial H}{\partial x}\, \hat{p} \right].          (3)

From Postulate 2, \dot{x} = p/m, so that we could eliminate \alpha(x, p):

  \vec{v} = \dot{x}\, \hat{x} + \dot{p}\, \hat{p} = \frac{p}{m} \left[ \hat{x} - \frac{\partial H/\partial x}{\partial H/\partial p}\, \hat{p} \right]          (4)

when \partial H/\partial p \neq 0. In what follows, I use the form of \vec{v} in Equation 3 and Postulate 2 to verify that \alpha is a function of x and p that does not depend explicitly on time, since \dot{x} = p/m sets the overall speed.

Using Postulate 3, the divergence of the phase space velocity field is zero, giving:

  \vec{\nabla} \cdot \vec{v} = \frac{\partial \dot{x}}{\partial x} + \frac{\partial \dot{p}}{\partial p} = \frac{\partial \alpha}{\partial x} \frac{\partial H}{\partial p} - \frac{\partial \alpha}{\partial p} \frac{\partial H}{\partial x} = 0.          (5)

This expression, which is equivalent to saying \{\alpha, H\} = 0, tells us that \alpha is a constant of the motion using geometrical logic. It is equivalent to the statement that \vec{\nabla} \alpha and \vec{\nabla} H are parallel if both are nonzero since:

  \vec{\nabla} \alpha \times \vec{\nabla} H = \begin{vmatrix} \hat{x} & \hat{p} & \hat{\xi} \\ \frac{\partial \alpha}{\partial x} & \frac{\partial \alpha}{\partial p} & 0 \\ \frac{\partial H}{\partial x} & \frac{\partial H}{\partial p} & 0 \end{vmatrix} = \left[ \frac{\partial \alpha}{\partial x} \frac{\partial H}{\partial p} - \frac{\partial \alpha}{\partial p} \frac{\partial H}{\partial x} \right] \hat{\xi} = \vec{0}          (6)

where a third dimension \xi has been added for convenience which is perpendicular to the (x, p)-plane. If the gradients of \alpha(x, p) and H(x, p) are everywhere parallel, then the contours of \alpha(x, p) and H(x, p) will coincide since the contour of a function is at each point perpendicular to its gradient. A contour of H is thus also a contour of \alpha. Since \alpha(x, p) is constant along a particle's path, setting \alpha = 1 amounts to rescaling the value of H(x, p) on each contour, which does not change the implications of Postulate 1. Equation 3 thus gives us Hamilton's equations of motion:

  \dot{x} = +\frac{\partial H}{\partial p}, \qquad \dot{p} = -\frac{\partial H}{\partial x}          (7)

from which Postulate 2 gives us:

  H(x, p) = \frac{p^2}{2m} + V(x)          (8)

for some function V(x) interpreted as the potential.
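Equations 7 and 8 can be checked numerically: integrating the flow \dot{x} = \partial H/\partial p, \dot{p} = -\partial H/\partial x should keep H constant along the trajectory, as Postulate 1 requires. A minimal sketch, using a leapfrog integrator and a harmonic potential of my own choosing:

```python
import math

def hamilton_step(x, p, dt, m=1.0, dVdx=lambda x: x):
    # One leapfrog step of xdot = +dH/dp = p/m, pdot = -dH/dx = -V'(x).
    p -= 0.5 * dt * dVdx(x)
    x += dt * p / m
    p -= 0.5 * dt * dVdx(x)
    return x, p

def H(x, p, m=1.0, V=lambda x: 0.5 * x**2):
    # Equation 8: kinetic term plus potential.
    return p**2 / (2 * m) + V(x)

# Integrate a harmonic oscillator (V = x^2/2) and confirm that H stays
# constant along the particle's phase-space trajectory.
x, p = 1.0, 0.0
E0 = H(x, p)
for _ in range(10000):
    x, p = hamilton_step(x, p, 1e-3)
assert abs(H(x, p) - E0) < 1e-6
```

The leapfrog scheme is one convenient choice here because its energy error stays bounded over long integrations.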