A Theory for the Time Arrow
Physical laws for elementary particles can be described by the quantum dynamics equation $i\hbar\,\partial_t|\psi(t)\rangle = H|\psi(t)\rangle$,
where $|\psi(t)\rangle$ is the quantum state of a particle in Hilbert space and $H$ is the Hamiltonian.
A probability density function over position $x$ at time $t$ is given by $\rho(x,t) = |\psi(x,t)|^2$, where $\psi(x,t) = \langle x|\psi(t)\rangle$ is the position-basis representation. An entropy can be associated with these probability densities, characterizing the position information of a particle. Coherent states are localized wave packets and may describe the spatial distribution of some particle states. We show that, due to a dispersion property of Hamiltonians in quantum physics, the entropy of coherent states increases over time. We investigate a partition of the Hilbert space into four sets based on whether the entropy is (i) increasing but not constant, (ii) decreasing but not constant, (iii) constant, or (iv) oscillating.
We then postulate that the quantum theory of elementary particles is equipped with a law that entropy (weakly) increases in time; thus, states in set (ii) are disallowed, and states in set (iv) cannot complete an oscillation period. A key role is played by the conjugate process, which transforms states that are allowed into states that are not, and vice-versa.
Then, according to this law, quantum theory is not time reversible unless the state is in set (iii), e.g., stationary states (eigenstates of the Hamiltonian). This law in quantum theory limits physical scenarios beyond conservation laws, providing causality reasoning by defining an arrow of time.
- 1 Introduction
- 2 Time Evolution and Coherent States
- 3 Entropy and Time Arrow
- 4 Entropy-Partition of the Hilbert Space
- 5 The Two-Particle System
- 6 Conclusions
- A Schrödinger Equation in Energy-Momentum Space
- B Dirac Equation in Energy-Momentum Space
- C Covariance Properties of the Time Evolution of Coherent States
Much of the behavior of physical systems follows conservation laws, obtained by applying Noether's theorem  to the symmetry transformations of the Lagrangian that models the system. However, conservation laws do not account for an arrow of time and thus cannot account for causality. Indeed, both classical and quantum physics laws are time-reversible.
A time arrow appears in physics only when the statistics of multiple particles are introduced, as one derives the entropy function from the distribution of microstates, that is, microscopic states described by the position and momentum of the particles. Entropy is a measure of the number of possible microstates of a system consistent with the thermodynamic properties of the macrostate. The second law of thermodynamics, first introduced by Clausius , postulates that the entropy increases over time; technically, it increases weakly, in that it may remain constant over any time interval.
Since such a law is not applicable to individual particles, and there is no law to provide an arrow of time for a one-particle system, one cannot fully account for causality events. For example, how does one answer the question: “What causes an excited electron in the hydrogen atom to jump to the ground state while emitting radiation?” While transition probabilities from Fermi’s golden rule [7, 8] yield a high degree of accuracy, this rule cannot be a causality explanation; otherwise, an energy perturbation method would be the source of the arrow of time. Similarly, we can ask: “Why do nuclear decays occur?”, and again, despite accurate predictions, we do not have a causality explanation. Consider also the question: “Why do high-speed particles colliding with each other transform into new particles?” This is the case for an electron-positron collision producing two photons. While conservation laws must be respected, there is no accounting of causality. If none of the events described above happened, conservation laws would still be satisfied. Many other events that satisfy conservation laws may never occur.
Quantum theory introduces probability as intrinsic to the description of a one-particle system. A probability is assigned to a specific value $\lambda$ of an observable $O$, such as position, energy, or momentum. The observable values can form a discrete set $\{\lambda_i\}$ or a continuous set. Such a probability can then be associated with a measure of information about $O$. The more concentrated the probability is around a few given values of $\lambda$, the more information is provided about the state of the particle with respect to the observable. For a discrete set with $n$ possible outputs, the Shannon entropy, $H = -\sum_{i=1}^{n} p_i \ln p_i$, is a measure of such information. The larger the entropy, the smaller the information about the particle state with respect to $O$.
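For the discrete case, this information measure can be sketched numerically. The following minimal example (the probability values are purely illustrative) compares a peaked distribution with a uniform one, using natural logarithms:

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy H = -sum_i p_i ln p_i, skipping zero-probability bins."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

# A sharply peaked distribution carries more information about the observable
# (lower entropy) than a uniform distribution over the same four outcomes.
peaked = [0.97, 0.01, 0.01, 0.01]
uniform = [0.25, 0.25, 0.25, 0.25]
```

For the uniform distribution over $n$ outcomes the entropy attains its maximum, $\ln n$, consistent with having the least information about the particle state.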
Extending the concept of entropy to continuous variables, continuous distributions, and quantum physics has proven to be challenging. For example, von Neumann’s entropy  requires the existence of classical statistics elements (mixed states) in order not to be zero, and consequently assigns zero entropy to every one-particle pure state. Therefore, we do not use it. Attempts to take the limit of the discrete Shannon entropy, as the number of output states goes to infinity and the interval between them goes to zero, require introducing the distribution of the discretization lattice itself and removing infinite constants, leading to a negative Kullback-Leibler divergence (see Jaynes ). Following Gibbs’s approach in classical physics, a density function in phase space satisfies the Liouville equation, leading to the conclusion that the entropy associated with it is constant over time, as formulated by Gibbs . From such a density probability in phase space, we can obtain
the probability densities of position and momentum, $\rho(x) = |\psi(x)|^2$ and $\rho(p) = |\tilde\psi(p)|^2$, respectively, where $\psi$ is the probability amplitude and $\tilde\psi$ is its Fourier transform. In quantum physics the discretization and finite nature of the phase-space volume is given by the uncertainty principle in position and momentum, as noted for example in . However, despite much work, including [19, 10, 12], it is unclear in quantum physics how to build a density function in phase space, or what its meaning would be.
Related to the phase space description is the entropic uncertainty for position and momentum, where, per spatial dimension,
$S_x + S_p \ge \ln(\pi e \hbar),$
with the entropy given by
$S_x = -\int \rho(x)\,\ln\rho(x)\,\mathrm{d}x,$
and similarly for $S_p$, referred to as relative entropy. The entropic uncertainty was suggested by , proved by  and . A good presentation is given by , with a discrete version given in  and by Charles Peskin (unpublished communication, 2019). The entropic uncertainty is a tighter inequality than the Heisenberg uncertainty principle based on variances, since $S_x \le \frac{1}{2}\ln(2\pi e\,\sigma_x^2)$, where $\sigma_x^2$ is the variance of the probability density $\rho(x)$. The equality (minimum uncertainty) is attained by the Gaussian distribution. For a Gaussian distribution, the entropy increases with the variance, and if the variance in position increases, the variance in momentum decreases. We could then speculate that as the entropy associated with the probability density $\rho(x)$ increases, the entropy associated with the probability density $\rho(p)$ decreases, and vice-versa. After all, the more information we have about the position of a particle, the less we know about the momentum, and vice-versa.
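The Gaussian saturation of the bound can be checked directly. The sketch below assumes one spatial dimension, units with $\hbar = 1$, the differential entropy $\frac{1}{2}\ln(2\pi e\sigma^2)$ of a Gaussian, and the standard form of the bound, $S_x + S_p \ge \ln(\pi e \hbar)$; the width $\sigma_x$ is an arbitrary illustrative value.

```python
import numpy as np

def gaussian_entropy(sigma):
    """Differential entropy of a Gaussian N(0, sigma^2): (1/2) ln(2 pi e sigma^2)."""
    return 0.5 * np.log(2 * np.pi * np.e * sigma**2)

hbar = 1.0
sigma_x = 0.7                    # arbitrary position width
sigma_p = hbar / (2 * sigma_x)   # minimum-uncertainty (Gaussian) momentum width

total = gaussian_entropy(sigma_x) + gaussian_entropy(sigma_p)
bound = np.log(np.pi * np.e * hbar)

# A non-minimal state (broader momentum width) exceeds the bound strictly.
total_loose = gaussian_entropy(sigma_x) + gaussian_entropy(2 * sigma_p)
```

As the position entropy grows (larger $\sigma_x$), the minimum-uncertainty momentum entropy shrinks, matching the trade-off described above.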
We then wonder whether, in quantum physics, a free-particle probability density disperses as it evolves, increasing the entropy. Is there a law analogous to the second law of thermodynamics? If so, how would it impact the description of physical phenomena?
It is worth mentioning that other entropy formulas have been studied, such as the one recently proposed by Safranek et al. , which is a generalization of Boltzmann entropy to quantum physics, called “observational” entropy. In this paper we do not attempt to establish a formula for the entropy associated with the probability density function. We assume that it is possible to have a coherent measure of the position information over time, given a probability density function. Given this assumption, we then focus on the consequences of having such a measure. Whenever a computation of an entropy is considered subsequently, we apply the relative entropy formula given by (4).
1.1 Our Contributions
As we have just discussed, we consider an entropy associated with a given probability density. Such entropy is a measure of the localization information of the density function, where a uniform distribution, generated from plane-wave probability amplitudes, has the smallest localization information (maximum entropy) and a Dirac delta distribution has the largest localization information (minimum entropy). These two distributions are not defined in Hilbert space but are idealizations of the two opposite limits of localization content.
We postulate a law, the entropy law, that the location information measured by the entropy increases over time. “Increases” is meant as in the standard definition, allowing the entropy to stay constant for any period of time. Our insight into why physical laws cause position information loss over time is the dispersion property of the free-particle Hamiltonian, such as the one in Schrödinger’s or Dirac’s theory. This dispersion property of quantum probability amplitudes impacts the probability density functions. We show that a class of probability density function solutions describing wave packets disperses over time.
An entropy measure and a time interval induce a partition of the Hilbert space into four sets. One of them contains the states whose entropy is (i) increasing but not constant; another, the states whose entropy is (ii) decreasing but not constant. An involution from set (i) onto set (ii) is established, which sheds light on the role of the conjugate operator and the time arrow.
The proposed entropy law restricts the set of physically valid states. For example, the entire set (ii) is disallowed. We show that if a state belongs to set (i) and evolves for a time interval, then applying the conjugate operator to the evolved state yields a state in set (ii). Thus, the entropy law does not allow such a conjugation process for states in (i). This law also sheds light on states that are superpositions of stationary states, which lead to oscillations of the density function as well as of the entropy. According to the law, a decay from the superposition of states to either a unique stationary state or to other forms of states would be needed. We speculate that free neutrinos, which are in a superposition of states and oscillate during flight, may transform themselves according to this law.
In light of the discussion above, it is natural to review the role of the conjugate operator in the understanding of antiparticles. We propose to replace the Feynman-Stueckelberg interpretation, in which a negative-energy solution of the Dirac equation for a particle running backward in time is equivalent to an antiparticle running forward in time, by the following statement: there is an equivalence between describing a particle by the probability amplitude and its motion equation, and describing a particle by its conjugate and the evolution by the adjoint of the motion equation. The default choice of representation is the one that has positive energy with the time parameter going forward.
Finally, we study simple scenarios of a two-particle system. In particular, we simulate the collision of two fermions as well as two bosons. As the particles evolve over time, the entropy of each individual particle’s probability amplitude alone increases, but as the distance between them is reduced and interference occurs, there is an entropy-decreasing effect. These two effects compete for the entropy behavior during the evolution. We show that for slow collisions there is time for the dispersion effect of the Hamiltonian to dominate, and the resulting entropy may increase over time; for faster collisions, with less time for dispersion, the interference effect dominates, and the entropy decreases at some close distance.
We speculate that high-speed collisions of particles, such as the electron-positron collision mentioned earlier, produce new particles so that the entropy increases, while respecting conservation laws.
The paper is organized as follows. Section 2 is a review of selected topics in quantum physics, focusing on technical results that relate directly to our postulate and its consequences. Section 3 develops the postulate that a time arrow is included in the laws of quantum theory. Section 4 expands the study of the partition of the Hilbert space according to the entropy evolution. Section 5 examines the entropy for the case of a two-particle collision. Section 6 concludes the paper.
2 Time Evolution and Coherent States
We start by reviewing how time evolution is modeled in quantum physics, and in doing so we fix the setting and the notation. Quantum physics describes the evolution of a one-particle system as a state $|\psi(t)\rangle$ in Hilbert space that evolves over time, parameterized by $t$. The evolution is governed by the motion equation and its solution
$i\hbar\,\partial_t|\psi(t)\rangle = H|\psi(t)\rangle, \qquad |\psi(t)\rangle = e^{-iHt/\hbar}|\psi(0)\rangle,$
where $\hbar$ is the reduced Planck constant and $H$ is the Hamiltonian, which is a Hermitian operator characterizing the evolution of the quantum state and which commutes with itself at different times: $[H(t_1), H(t_2)] = 0$. Since $H$ is Hermitian, the conjugate transpose of (5) is
$-i\hbar\,\partial_t\langle\psi(t)| = \langle\psi(t)|H, \qquad \langle\psi(t)| = \langle\psi(0)|\,e^{iHt/\hbar}.$
To avoid clumsy notation, we use the hat notation for operators only when it is needed to disambiguate an operator from its eigenvalue.
One can represent the state in different bases according to the eigenstates of the operator of choice. For the position operator $\hat{x}$, the state description is $\psi(x,t) = \langle x|\psi(t)\rangle$, where $|x\rangle$ are the eigenstates of the position operator, $\hat{x}|x\rangle = x|x\rangle$. Note that even if the position operator is not associated with a physical measurement, we still rely on the position representation for the state.
In the Schrödinger equation for a non-relativistic particle, the Hamiltonian is the sum of the kinetic and the potential energies, $H = \frac{\hat{p}^2}{2m} + V(\hat{x})$, and
$i\hbar\,\partial_t\psi(x,t) = -\frac{\hbar^2}{2m}\nabla^2\psi(x,t) + V(x)\,\psi(x,t),$
where $m$ is the particle’s mass, $\nabla$ is the gradient operator, $\nabla^2$ is the Laplacian operator, and $\hat{p} = -i\hbar\nabla$ is the momentum operator in the space representation.
The relativistic equation for a fermion is the Dirac equation
$i\hbar\,\gamma^\mu\partial_\mu\psi - mc\,\psi = 0,$
where $m$ is the mass of the particle at rest, $c$ is the speed of light, and the wave function $\psi$ is a bi-spinor, a four-entry vector structure transforming as a bi-spinor, also referred to as a Dirac spinor. The index $\mu$ varies over time and the three spatial coordinates with the special-relativity Minkowski metric $\eta = \operatorname{diag}(1,-1,-1,-1)$, and $\gamma^\mu$ are the matrices satisfying the Clifford algebra $\{\gamma^\mu, \gamma^\nu\} = 2\eta^{\mu\nu}I$. The Hamiltonian associated with (8) is
$H = c\,\boldsymbol{\alpha}\cdot\hat{\mathbf p} + \beta mc^2, \qquad \alpha^i = \gamma^0\gamma^i, \quad \beta = \gamma^0.$
We will refer to $\psi$ as the wave function or probability amplitude. In the case of the Schrödinger model it is a complex-valued function, while in the case of the Dirac model it is a complex-valued bi-spinor.
The Born rule states that
$\rho(x,t) = |\psi(x,t)|^2$
is the probability density function for both complex-valued Schrödinger waves and for Dirac spinors, and therefore $\int \rho(x,t)\,\mathrm{d}x = 1$. For the Dirac spinors, $|\psi|^2 = \psi^\dagger\psi$, where $\psi^\dagger$ is the Hermitian conjugate of $\psi$.
2.1 Fourier Space: Phase Velocity, Group Velocity, and the Hessian
A standard Fourier method transforms a function from the spatial basis representation to the momentum basis representation (or the wave vector $k$, since $p = \hbar k$) and vice-versa. In particular, $\tilde\psi(\mathbf p) = \frac{1}{(2\pi\hbar)^{3/2}}\int \psi(\mathbf x)\,e^{-\frac{i}{\hbar}\mathbf p\cdot\mathbf x}\,\mathrm{d}^3x$. Extending the standard spatial Fourier method to time, we obtain the Fourier transform from a space-time representation to an energy-momentum representation. We adopt the special relativity metric for the scalar product of the four-vectors $x = (ct, \mathbf x)$ and $p = (E/c, \mathbf p)$. More precisely, we write the inverse Fourier transform with the Minkowski metric, a four-dimensional transformation, as
$\psi(x) = \frac{1}{(2\pi\hbar)^{2}}\int \tilde\psi(p)\,e^{-\frac{i}{\hbar}\,p\cdot x}\,\mathrm{d}^4p,$
where $p\cdot x = Et - \mathbf p\cdot\mathbf x$. Associating the energy with the time-frequency component, the Fourier space is the energy-momentum space, that is, $(E, \mathbf p)$. Note that the energy values are eigenvalues of the Hamiltonian, and so the integral (11) in the energy variable can have regions of discrete sums.
The free-particle Hamiltonians are $H = \frac{\hat{p}^2}{2m}$ and $H = c\,\boldsymbol{\alpha}\cdot\hat{\mathbf p} + \beta mc^2$ for the Schrödinger equation (7) and the Dirac equation (8), respectively. These are descriptions in position-time space, and we can also write them in Fourier space. Both Hamiltonians are functions of the momentum operator and therefore can be diagonalized in the momentum basis (spatial Fourier domain), to obtain respectively (see Appendix A and Appendix B for derivations)
$\hbar\,\omega(\mathbf k) = \frac{\hbar^2\|\mathbf k\|^2}{2m} \qquad \text{and} \qquad E(\mathbf p) = \pm\sqrt{\|\mathbf p\|^2c^2 + m^2c^4}.$
The group velocity $\mathbf v_g = \nabla_{\mathbf k}\,\omega(\mathbf k)$ becomes, respectively,
$\mathbf v_g = \frac{\hbar\,\mathbf k}{m} \qquad \text{and} \qquad \mathbf v_g = \frac{c^2\,\mathbf p}{E(\mathbf p)}.$
The Hessian of the dispersion relation gives a measure of the dispersion of the wave. The Hessian for the Schrödinger equation is the identity matrix scaled by $\hbar/m$. This matrix is positive, and the larger the mass of the particle, the smaller the eigenvalues and the dispersion. For the Dirac equation, the eigenvalues of the Hessian of $E(\mathbf p)$ are
$\frac{m^2c^6}{E^3} \qquad \text{and} \qquad \frac{c^2}{E} \ \text{(the latter with multiplicity two)}.$
Thus, for both equations the Hessian is positive definite for positive-energy solutions. For slow particles, where $E \approx mc^2$, the larger the mass of the particle, the smaller the eigenvalues and the dispersion. However, for fast particles at fixed momentum, where $E \approx \|\mathbf p\|\,c$, the larger the mass, the larger the first eigenvalue and the dispersion.
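These closed-form expressions can be sanity-checked numerically. The sketch below (illustrative unit values for $m$ and $c$) verifies the Dirac group velocity $c^2p/E$ against a finite-difference derivative, checks that it never exceeds $c$, and confirms the positivity of $\mathrm{d}^2E/\mathrm{d}p^2$ in one dimension:

```python
import numpy as np

c, m = 1.0, 1.0                       # illustrative natural-unit values

def E_dirac(p):
    """Positive-energy Dirac dispersion E(p) = sqrt((p c)^2 + (m c^2)^2)."""
    return np.sqrt((p * c)**2 + (m * c**2)**2)

p = np.linspace(-5.0, 5.0, 2001)
dp = p[1] - p[0]

v_g = p * c**2 / E_dirac(p)           # closed-form group velocity dE/dp
v_fd = np.gradient(E_dirac(p), dp)    # finite-difference check

# 1D curvature d^2E/dp^2 = c^2 (m c^2)^2 / E^3: positive, so the wave disperses.
curvature = c**2 * (m * c**2)**2 / E_dirac(p)**3
```

At $p = 0$ the curvature reduces to $1/m$ (in these units), recovering the Schrödinger-like behavior in which heavier particles disperse less.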
2.2 Antiparticles and Conjugate Solutions
The eigenvalues of the Dirac energy-momentum matrix equation (derived in Appendix B) are $E = \pm\sqrt{\|\mathbf p\|^2c^2 + m^2c^4}$, each with multiplicity two. There are four eigenvectors associated with each linear equation, and they are typically described in terms of 2D spinors
$\xi^{(1)} = (1, 0)^{\mathsf T}$ for spin up and $\xi^{(2)} = (0, 1)^{\mathsf T}$ for spin down,
two normalized vectors in 2D. In this representation, the four orthogonal eigenvector solutions $u^{(s)}(\mathbf p)$ and $v^{(s)}(\mathbf p)$, $s = 1, 2$, of the Dirac matrix equation in energy-momentum space are
The global phase for the negative energy eigenvectors is arbitrary and is introduced only for the convenience of the manipulations that follow. Thus, the four orthogonal solutions (22) in space-time are
where we assign the indices $+$ and $-$ to each pair of solutions to indicate the sign of the energy in the phase of the exponential term. The negative-energy solutions will yield the two antiparticle solutions as follows. First, consider the adjoint of the negative-energy solution
and, after using the adjoint representation,
and, after changing variables (reversing the momentum, $\mathbf p \to -\mathbf p$),
The antiparticle solution is then a bi-spinor solution
in the Weyl chiral basis. The antiparticle Dirac spinors have the same probability density functions as the particle solutions, but with the charge and momentum reversed. They are referred to as antiparticle waves, or fields (when they are promoted to operators). Thus, the time evolutions of the momentum solutions are given by
Instead of the Feynman-Stueckelberg interpretation for antiparticles, we consider the solution and its conjugate to be equivalent representations of the particle, as they both yield the same density function and all expected values of observables. Each one evolves according to its own motion equation and the adjoint equation, respectively. See Figure 1 for an illustration and a more detailed elaboration of this quantum-representation equivalence. The choice for a particle representation is the one with positive energy and the time going forward.
2.3 Coherent States and Ladder Operators
The ladder operators per spatial dimension $i = 1, 2, 3$, and their adjoints, are defined as
$a_i = \frac{1}{\sqrt{2}}\left(\frac{\hat{x}_i}{l} + \frac{i\,l\,\hat{p}_i}{\hbar}\right), \qquad a_i^\dagger = \frac{1}{\sqrt{2}}\left(\frac{\hat{x}_i}{l} - \frac{i\,l\,\hat{p}_i}{\hbar}\right),$
where there is a natural length parameter $l = \sqrt{\hbar/(m\omega)}$ associated with a natural frequency $\omega$. Similarly to the commutation properties for the position and momentum operators, $[\hat{x}_i, \hat{p}_j] = i\hbar\,\delta_{ij}$, we have $[a_i, a_j^\dagger] = \delta_{ij}$.
Coherent states per spatial dimension $i$ are the eigenstates of the ladder operator, $a_i|\alpha\rangle = \alpha_i|\alpha\rangle$, with $\alpha_i \in \mathbb{C}$. They attain the minimum of the uncertainty principle. They were first derived by Schrödinger . They are represented as follows
and therefore, up to a global phase,
where $n$ is a non-negative integer and $|n\rangle$ represents the state with the specific number $n$. The number operators satisfy $N_i = a_i^\dagger a_i$ and $N_i|n\rangle = n|n\rangle$. The amplitudes are spatially localized and are in Hilbert space. They also represent the eigenfunctions of the ground state of the quantum harmonic oscillator, with natural frequency $\omega$, which models the electromagnetic field Hamiltonian. Furthermore, they form an overcomplete representation of all the functions in Hilbert space as we vary the center position and momentum. This set of functions differs from the Fourier basis (plane waves) in that their amplitudes decay over space. This property places them in Hilbert space and will be exploited in our development.
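The defining eigenvalue property $a|\alpha\rangle = \alpha|\alpha\rangle$ can be verified in a truncated Fock basis. In the sketch below, the truncation size $N$ and the value of $\alpha$ are arbitrary illustrative choices:

```python
import numpy as np
from math import factorial, sqrt

N = 60                # Fock-space truncation (large enough for small |alpha|)
alpha = 0.8 + 0.3j

# annihilation operator: a|n> = sqrt(n)|n-1>, i.e. sqrt(n) on the superdiagonal
a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)

# coherent-state amplitudes c_n = e^{-|alpha|^2/2} alpha^n / sqrt(n!)
psi = np.array([alpha**n / sqrt(factorial(n)) for n in range(N)])
psi *= np.exp(-abs(alpha)**2 / 2)

residual = a @ psi - alpha * psi   # ~0 up to truncation error
```

The nonzero overlap between coherent states with different $\alpha$ reflects the overcompleteness mentioned above.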
2.3.1 Bosons and Fermions
The ladder operators described above yield the number operator, $N = a^\dagger a$. States are then created from the lowest-energy one, $|0\rangle$, using the creation operator, as
$|n\rangle = \frac{(a^\dagger)^n}{\sqrt{n!}}\,|0\rangle,$
where $a|n\rangle = \sqrt{n}\,|n-1\rangle$, $a^\dagger|n\rangle = \sqrt{n+1}\,|n+1\rangle$, and $a|0\rangle = 0$.
The description above is valid for bosons. When describing fermions, the ladder operators instead satisfy the anticommutation rules $\{b_i, b_j^\dagger\} = \delta_{ij}$, including with themselves, $\{b_i, b_j\} = \{b_i^\dagger, b_j^\dagger\} = 0$. Thus, $b_i^2 = 0$. The number operator per dimension is also given by $N_i = b_i^\dagger b_i$. However, while $N_i$ can take any non-negative integer value for bosons, we have $N_i \in \{0, 1\}$ for fermions.
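The fermionic relations can be checked directly in the two-dimensional occupation basis $\{|0\rangle, |1\rangle\}$; the matrix below is the standard representation of the fermionic annihilation operator (a minimal sketch):

```python
import numpy as np

b = np.array([[0.0, 1.0],
              [0.0, 0.0]])        # fermionic annihilation: b|1> = |0>, b|0> = 0
b_dag = b.T                       # creation operator
I = np.eye(2)

anticom = b @ b_dag + b_dag @ b   # {b, b^dag} = I
nilpotent = b @ b                 # b^2 = 0 (Pauli exclusion)
n_op = b_dag @ b                  # number operator, with n^2 = n
```

The idempotence $N^2 = N$ forces the occupation numbers to be $0$ or $1$, in contrast to the bosonic case.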
Ladder operators and their eigenstates (coherent states) belong to the foundations of quantum field theory and will be used here as the foundation for a particle description.
3 Entropy and Time Arrow
The entire description of one quantum particle is through a probability amplitude (or wave function) whose magnitude squared is a probability density function, $\rho(x,t) = |\psi(x,t)|^2$. As discussed in the introduction, we take the relative entropy (4) as the measure of the position information of such a density function for the state of one quantum particle; it extends to many particles as follows
Note that we are focusing on the entropy associated with the information about position, and we refer to it by the general term entropy.
In classical statistical physics the entropy provides an arrow of time through the second law of thermodynamics. Given a probability density function in quantum theory and the entropy associated with it, is there an analogous law to the second law of thermodynamics? Inspired by the thermodynamic law, we postulate such an entropy law in quantum theory.
Postulate 1 (the entropy law).
The (position) entropy of a physical solution to a quantum equation is an increasing function of time.
As in the standard definitions, “increasing,” in contrast to “strictly increasing,” means “weakly increasing.”
This postulate aims to treat quantum physics as an alternative statistical theory, equipped with an arrow of time via a law analogous to the second law of thermodynamics. In this way, an entropy that is strictly increasing over time in most quantum solutions implies the irreversibility of natural processes, and thus the asymmetry between the past and the future. Although quantum states evolve through unitary operators, for evolutions in which the entropy is increasing but not constant, the inverse evolution must be ruled out.
We will use the following simple fact throughout the paper, and therefore state it as a lemma.
A state and its conjugate have the same entropy.
The probability densities for $\psi$ and $\psi^*$ are $|\psi|^2$ and $|\psi^*|^2$, respectively. As these two probability densities are equal, so are their associated entropies. ∎
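The lemma can be illustrated on any discretized amplitude; the random state below is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
psi = rng.normal(size=256) + 1j * rng.normal(size=256)
psi /= np.linalg.norm(psi)            # normalized discrete amplitude

def entropy(amplitude):
    """Shannon entropy of the discrete probability density |amplitude|^2."""
    p = np.abs(amplitude)**2
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

# Conjugation leaves |psi|^2, and hence the entropy, unchanged.
S, S_conj = entropy(psi), entropy(np.conj(psi))
```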
3.1 Stationary States
Stationary states are eigenstates of the Hamiltonian.
Let $H$ be a time-invariant Hamiltonian and $|\psi\rangle$ one of its eigenstates. Then the entropy during the evolution of $|\psi\rangle$ is time invariant. Such an evolution might be time reversible.
A quantum eigenstate of the Hamiltonian with eigenvalue $E$ is described by a wave function that evolves as $\psi(x,t) = \psi(x,0)\,e^{-iEt/\hbar}$. Thus the probability density is $|\psi(x,t)|^2 = |\psi(x,0)|^2$. It is time invariant, and so is the associated entropy. As the entropy, being constant, is an increasing function of time, the potential time reversibility of the state is not precluded by Postulate 1. ∎
Stationary states include all plane-wave solutions $e^{i(\mathbf p\cdot\mathbf x - Et)/\hbar}$, though they are not elements of the Hilbert space since they are not normalizable.
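Both behaviors, the constant entropy of a single eigenstate and the oscillating entropy of a superposition, can be sketched for the infinite square well (natural units $\hbar = m = L = 1$; the grid and the sample times are arbitrary choices):

```python
import numpy as np

L = 1.0
x = np.linspace(0.0, L, 2001)
dx = x[1] - x[0]

def box_state(n, t, hbar=1.0, m=1.0):
    """Infinite-square-well eigenstate n with its time phase e^{-iEt/hbar}."""
    E = (n * np.pi * hbar)**2 / (2 * m * L**2)
    return np.sqrt(2 / L) * np.sin(n * np.pi * x / L) * np.exp(-1j * E * t / hbar)

def entropy(rho):
    """Relative (differential) entropy -int rho ln rho dx on the grid."""
    mask = rho > 1e-300
    return float(-np.sum(rho[mask] * np.log(rho[mask])) * dx)

# (iii) a single eigenstate: the density, hence the entropy, is time invariant
S_a = entropy(np.abs(box_state(1, 0.0))**2)
S_b = entropy(np.abs(box_state(1, 2.7))**2)

# (iv) an equal superposition of n = 1, 2: the density and the entropy oscillate
superpos = lambda t: (box_state(1, t) + box_state(2, t)) / np.sqrt(2)
S_c = entropy(np.abs(superpos(0.0))**2)
S_d = entropy(np.abs(superpos(0.1))**2)
```

The oscillating case is exactly the kind of state that, under the entropy law, cannot complete a full period.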
3.2 Time-Evolution of Waves and a Dispersion Transform
We now consider initial solutions that are localized in space, of the form $\psi(\mathbf x, 0) = \phi(\mathbf x - \mathbf x_0)\,e^{i\mathbf k_0\cdot\mathbf x}$, where $\mathbf x_0$ is the mean value of $\mathbf x$ according to the probability distribution $|\psi(\mathbf x, 0)|^2$, and the phase term gives a momentum shift of $\hbar\mathbf k_0$. Assume the variance of the position, $\sigma_x^2$, is finite. The evolution of the solution is governed by a given Hamiltonian with a dispersion relation $\omega(\mathbf k)$. We can represent the initial state in momentum space as $\tilde\psi(\mathbf k)$, the Fourier transform of $\psi(\mathbf x, 0)$, which by the Fourier properties also has a finite variance, $\sigma_k^2$, around the center momentum $\mathbf k_0$. Evolving the wave function in momentum space according to the dispersion relation, and taking the inverse Fourier transform, we write
Since $\tilde\psi(\mathbf k)$ fades away exponentially from $\mathbf k_0$, we expand the dispersion relation in a Taylor series
where $\mathbf v_p$, $\mathbf v_g$, and $\mathbf H$ are the phase velocity, the group velocity, and the Hessian of the dispersion relation $\omega(\mathbf k)$ at $\mathbf k_0$, respectively.
Then, inserting this approximation back into (40), we get
where $*$ denotes a convolution, the prefactor contains a phase term and normalizes the amplitude, and $\mathcal{N}$ denotes a normal distribution. We can interpret this evolution as describing a wave moving with phase velocity $\mathbf v_p$ and group velocity $\mathbf v_g$, blurred by a time-varying complex-valued symmetric matrix. We refer to this transformation/evolution of the initial wave as the quantum dispersion transform.
The probability density associated with this wave function is given by
The relative entropy (4), which is computed by integration over the whole space, is invariant to translations of the coordinates by the center of the wave, so to analyze the entropy we can consider the simplified density function
This simplified form of the dispersion transform, ignoring the translation of the center, is useful for studying the entropy of such an evolution as a quantum dispersion process.
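The entropy increase produced by the quantum dispersion transform can be illustrated with a free non-relativistic Gaussian packet, evolved exactly in momentum space via $\omega(k) = \hbar k^2/2m$ (natural units $\hbar = m = 1$; grid sizes, widths, and sample times are arbitrary illustrative choices):

```python
import numpy as np

hbar, m = 1.0, 1.0
N, Lbox = 4096, 200.0
dx = Lbox / N
x = (np.arange(N) - N // 2) * dx
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

# initial packet: Gaussian of width sigma0 with momentum kick k0
sigma0, k0 = 1.0, 2.0
psi0 = (2 * np.pi * sigma0**2) ** -0.25 * np.exp(-x**2 / (4 * sigma0**2) + 1j * k0 * x)

def evolve(psi, t):
    """Exact free evolution: multiply each Fourier mode by e^{-i w(k) t}."""
    return np.fft.ifft(np.fft.fft(psi) * np.exp(-1j * hbar * k**2 * t / (2 * m)))

def entropy(psi):
    """Relative entropy -int |psi|^2 ln |psi|^2 dx on the grid."""
    rho = np.abs(psi)**2
    mask = rho > 1e-300
    return float(-np.sum(rho[mask] * np.log(rho[mask])) * dx)

S = [entropy(evolve(psi0, t)) for t in (0.0, 2.0, 5.0, 10.0)]
```

The entropy grows like $\ln\sigma(t)$ as the packet spreads, the behavior the entropy law singles out as physically allowed.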
Consider the coherent states, that is, the eigenstates of the ladder operators (37) in position space, expanded to three dimensions and translated to a center position $\mathbf x_0$, i.e.,