# Preparational Uncertainty Relations for Continuous Variables

## Abstract

A smooth function of the second moments of $N$ continuous variables gives rise to an uncertainty relation if it is bounded from below. We present a method to systematically derive such bounds by generalizing an approach applied previously to a single continuous variable. New uncertainty relations are obtained for multi-partite systems that allow one to distinguish entangled from separable states. We also investigate the geometry of the “uncertainty region” in the $N(2N+1)$-dimensional space of moments. It is shown to be a convex set, and the points on its boundary are found to be in one-to-one correspondence with pure Gaussian states of minimal uncertainty. For a single degree of freedom, the boundary can be visualized as one sheet of a “Lorentz-invariant” hyperboloid in the three-dimensional space of second moments.

## 1 Introduction

Uncertainty relations express limitations on the precision with which
one can measure specific properties of a quantum system, such as position
and momentum of a quantum particle. These relations come in different
flavours. They may express the inability to *prepare* a quantum
system in a state for which incompatible properties possess exact
values. Alternatively, *error-disturbance* uncertainty relations
refer to the constraints encountered when attempting to extract precise
values through measurements on a single system. Both cases point to
the uncertainty inherent in the quantum description of the world.

Heisenberg was the first to realize, in 1927, that uncertainty relations exist for quantum systems [1]. His physical arguments were quickly developed by Kennard [2], Weyl [3], Robertson [4] and Schrödinger [5]. Except for Heisenberg’s paper, the focus of these contributions was on preparational uncertainty, not yet clearly distinguished from measurement uncertainty. In 1965, Arthurs and Kelly presented a model of joint measurement of position and momentum [6], laying the foundations for interest in error-disturbance uncertainty relations, which has grown considerably over the last two decades. Different approaches rely on different concepts of error, which has led to lively debates [7, 8].

In recent years, the discussion of uncertainty relations has turned from conceptual aspects to applications, in line with the overall thrust of quantum information. For example, the first protocol of quantum cryptography, known as BB84 [9], is based on pairs of mutually unbiased bases that are known to come with maximal preparational uncertainty. It is also possible to use variance-based uncertainty relations to formulate criteria which detect entangled states of bi-partite systems [10, 11].

This work investigates the structure of preparational uncertainty relations in quantum systems with more than one continuous variable, i.e., $N \geq 2$. Examples are given by a point particle moving in a plane ($N = 2$) or in three-dimensional space ($N = 3$); alternatively, one may consider $N$ particles, each moving along a real line with configuration space $\mathbb{R}$. Our main goals are (i) to obtain lower bounds for given smooth functions depending on the second moments of a system with $N$ continuous variables, (ii) to turn these bounds into criteria that enable us to detect entangled states, and (iii) to understand the geometric structure of uncertainty functionals in the space of second moments, spanned by the independent elements of the covariance matrix.

Using a variational technique originally introduced by Jackiw [12], we will generalize an approach that has been carried out successfully for quantum systems with a single particle-type degree of freedom, i.e., $N = 1$ [13]. Encouraged by the new uncertainty relations obtained in this way for a single continuous variable, we are particularly interested in the possibility of creating inequalities capable of detecting entangled states in systems with two or more continuous variables. Tools to detect entanglement are crucial for the implementation of any protocol in quantum information that relies on entangled states. For continuous variables, quantum optical methods are available to reliably check variance-based entanglement criteria, allowing one to verify that a required entangled state has indeed been created [14, 15, 16].

In Section 2, we will introduce uncertainty functionals for continuous variables depending on second moments and describe a method to determine their extrema and, subsequently, their minima. Section 3 applies the approach to simple cases, leading to new uncertainty relations, some of which may be used to signal the presence of entangled states. A useful geometrical picture of the uncertainty region—i.e., the covariance matrices represented in the space of second moments—is derived in Section 4. The final section contains a brief summary.

## 2 Lower Bounds of Uncertainty Functionals

### 2.1 Extrema of Uncertainty Functionals

To describe a quantum system with $N$ continuous variables, one associates with it $N$ pairs of canonical operators obeying the commutation relations

$$[\hat q_j, \hat p_k] = i\hbar\,\delta_{jk}\,, \qquad [\hat q_j, \hat q_k] = [\hat p_j, \hat p_k] = 0\,, \qquad j, k = 1, \ldots, N\,. \qquad (1)$$

We will arrange the momentum and position operators of the $j$-th degree of freedom, $\hat p_j$ and $\hat q_j$, respectively, into a column vector $\hat{\mathbf v}$,

$$\hat{\mathbf v} = \left(\hat p_1, \hat q_1, \ldots, \hat p_N, \hat q_N\right)^T, \qquad (2)$$

with components $\hat v_a$, $a = 1, \ldots, 2N$. The pure states of the quantum systems considered here are represented by unit vectors $|\psi\rangle$, elements of an infinite-dimensional Hilbert space $\mathcal H$. Of the second moments

$$C_{ab} = \frac{1}{2}\,\langle\psi|\,\hat v_a\,\hat v_b + \hat v_b\,\hat v_a\,|\psi\rangle\,, \qquad a, b = 1, \ldots, 2N\,, \qquad (3)$$

only $N(2N+1)$ are independent. We assume (without loss of generality)
that all first moments vanish; this is no restriction, since the second
moments are invariant under rigid phase-space translations. The second
moments form the *covariance matrix* $C$
associated with the pure state $|\psi\rangle$.

With $a = b = 2j - 1$ (or $a = b = 2j$), we obtain the variance of the momentum (position) of the $j$-th degree of freedom, while for $a = 2j - 1$, $b = 2j$, we obtain their covariance; all other values of the indices $a, b$ correspond to moments that mix different degrees of freedom. Occasionally, we will denote the variances of the $j$-th momentum and position by $\Delta p_j^2$ and $\Delta q_j^2$, respectively, and their covariance by $c_j$.
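To make the notation concrete, the covariance matrix of a simple family of states can be generated numerically. The sketch below is our own illustration, in units where $\hbar = 1$ and with the operator ordering $(\hat p, \hat q)$; the function names are not from the paper. It builds the covariance matrix of a rotated, squeezed vacuum state for $N = 1$ and evaluates the combination $\Delta p^2\,\Delta q^2 - c^2 = \det C$:

```python
import numpy as np

HBAR = 1.0  # units with hbar = 1 throughout

def covariance_1dof(r, theta):
    """Covariance matrix of a rotated squeezed vacuum (pure Gaussian state)
    for a single degree of freedom; operator ordering (p, q)."""
    squeeze = np.diag([np.exp(r), np.exp(-r)])
    rotate = np.array([[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]])
    C0 = 0.5 * HBAR * np.eye(2)  # vacuum: Delta p^2 = Delta q^2 = 1/2
    T = rotate @ squeeze
    return T @ C0 @ T.T

C = covariance_1dof(r=0.7, theta=0.3)
dp2, dq2, c_pq = C[0, 0], C[1, 1], C[0, 1]
# The Robertson-Schrodinger combination equals det C and is minimal here:
print(np.isclose(dp2 * dq2 - c_pq**2, HBAR**2 / 4))
```

Pure Gaussian states saturate $\det C = \hbar^2/4$; mixed or non-Gaussian states yield larger values.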

Given a real function $f$ of the second moments for $N$ continuous variables, we wish to establish whether it has a non-trivial lower bound $f_b$. If it does, the statement $f(C) \geq f_b$ provides an uncertainty relation.

Following an idea of Jackiw [12] (see also [17, 18, 19]), we define an *uncertainty functional* $J[\psi]$ associated with the function $f$ by

$$J[\psi] = f(C) - \lambda\left(\langle\psi|\psi\rangle - 1\right)\,, \qquad (4)$$

where the Lagrange multiplier $\lambda$ ensures that any solutions
will be given by a normalised state. We first list all *local*
second moments for each degree of freedom (the two variances and the
covariance), followed by the *non-local* moments which involve
different degrees of freedom. A variation of such a functional will,
in analogy to the one-dimensional case (cf. [13, 20]), lead to an eigenvalue equation quadratic in position and momentum
operators. Let us briefly spell out the derivation in the more general
setting.

First, we compare the value of the functional in the state $|\psi\rangle$ with its value in the state $|\psi\rangle + \varepsilon\,|\chi\rangle$, where $|\chi\rangle$ is an arbitrary normalised state. Expanding to second order in the small parameter $\varepsilon$, we find

$$J[\psi + \varepsilon\chi] = J[\psi] + \varepsilon\,\delta J[\psi;\chi] + O(\varepsilon^2)\,, \qquad (5)$$

where the expression

$$\delta J[\psi;\chi] = \lim_{\varepsilon\to 0}\,\frac{J[\psi + \varepsilon\chi] - J[\psi]}{\varepsilon} \qquad (6)$$

denotes a Gâteaux derivative. The stationary points of the functional are characterised by the vanishing of the first-order term in the expansion (5),

$$\delta J[\psi;\chi] = 0\,. \qquad (7)$$

More explicitly, this condition reads

$$\langle\chi|\left(\sum_{a\leq b}\frac{\partial f}{\partial C_{ab}}\,\frac{\hat v_a\,\hat v_b + \hat v_b\,\hat v_a}{2} - \lambda\right)|\psi\rangle + \text{c.c.} = 0\,, \qquad (8)$$

where the sum runs over the index pairs $a \leq b$ with $a, b = 1, \ldots, 2N$. Since Equation (8) should hold for arbitrary variations of the ket $|\chi\rangle$ and its dual (which are independent), the expression in round brackets, as well as its complex conjugate, must vanish identically.

The functional derivatives of the second moments are

$$\frac{\delta C_{ab}}{\delta\langle\psi|} = \frac{1}{2}\left(\hat v_a\,\hat v_b + \hat v_b\,\hat v_a\right)|\psi\rangle\,, \qquad (9)$$

resulting in an *Euler–Lagrange*-type equation

$$\sum_{a\leq b}\frac{\partial f}{\partial C_{ab}}\,\frac{\hat v_a\,\hat v_b + \hat v_b\,\hat v_a}{2}\,|\psi\rangle = \lambda\,|\psi\rangle\,. \qquad (10)$$

The value of the multiplier $\lambda$ can be found by multiplying this equation with the bra $\langle\psi|$ from the left and solving for $\lambda$. Substituting its value back into Equation (10), one finds the nonlinear eigenvector–eigenvalue equation

(11) |

or, in matrix notation,

$$\hat{\mathbf v}^T\,\mathcal F\;\hat{\mathbf v}\,|\psi\rangle = \operatorname{Tr}\left(\mathcal F\,C\right)|\psi\rangle\,, \qquad (12)$$

where the matrix $\mathcal F$ is defined in terms of the first partial derivatives of the function $f$: its diagonal elements are equal to $f_{C_{aa}}$, while the off-diagonal ones are given by $f_{C_{ab}}/2$ with $a \neq b$, using the standard convention to denote partial derivatives by subscripts. As an example, for $N = 1$, the eigenvalue equation becomes

$$\left(f_{\Delta p^2}\,\hat p^2 + f_{\Delta q^2}\,\hat q^2 + f_{c}\,\frac{\hat p\,\hat q + \hat q\,\hat p}{2}\right)|\psi\rangle = \operatorname{Tr}\left(\mathcal F\,C\right)|\psi\rangle\,. \qquad (13)$$
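For $N = 1$ and the choice $f = \Delta p^2\,\Delta q^2$, the extremal equation is that of a harmonic oscillator, so its ground state should yield the minimal product $\Delta p^2\,\Delta q^2 = \hbar^2/4$. The following sketch (our own illustration, $\hbar = 1$) checks this by diagonalizing a finite-difference oscillator Hamiltonian:

```python
import numpy as np

# Discretize H = (p^2 + q^2)/2 on a grid (hbar = 1); its ground state is the
# extremal state of the functional f = (Delta p)^2 (Delta q)^2 for N = 1.
n, L = 401, 8.0
x = np.linspace(-L, L, n)
dx = x[1] - x[0]

# Finite-difference representation of p^2 = -d^2/dx^2
P2 = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / dx**2
H = 0.5 * P2 + 0.5 * np.diag(x**2)

w, v = np.linalg.eigh(H)
psi = v[:, 0]                 # normalised ground state, sum(psi**2) = 1

dq2 = np.sum(x**2 * psi**2)   # <q^2>; first moments vanish by symmetry
dp2 = psi @ P2 @ psi          # <p^2>
print(dp2 * dq2)              # close to the minimal value 1/4
```

The ground-state energy $w[0] \approx 1/2$ and the product $\Delta p^2 \Delta q^2 \approx 1/4$ reproduce the expected minimum up to discretization error.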

### 2.2 Consistency Conditions

To solve Equation (12), we initially assume that the matrix $\mathcal F$
of partial derivatives is *constant*, i.e., we suppress
its dependence on the state $|\psi\rangle$. If we further require that
$\mathcal F$ is positive definite, then Williamson’s theorem [21, 22]
guarantees the existence of a symplectic matrix $S$
that puts $\mathcal F$ into a diagonal form, i.e.,

$$\left(S^{-1}\right)^T\mathcal F\,S^{-1} = \Lambda\,, \qquad (14)$$

where the diagonal matrix $\Lambda$ is defined by $\Lambda = \operatorname{diag}(d_1, d_1, \ldots, d_N, d_N)$,
and the positive real numbers $d_j$, $j = 1, \ldots, N$, are
the *symplectic eigenvalues* of $\mathcal F$ [23, 24, 22].
We recall that a symplectic matrix of order $2N$ satisfies $S^T\,\Omega\,S = \Omega$,
where the antisymmetric matrix $\Omega$ is uniquely determined by the commutation
relations, $[\hat v_a, \hat v_b] = i\hbar\,\Omega_{ab}$,
$\Omega = \bigoplus_{j=1}^{N}\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$.

Multiplying both sides of Equation (12) from the left with the metaplectic unitary operator $\hat U_S$, defined by the relation

$$\hat U_S\,\hat{\mathbf v}\,\hat U_S^\dagger = S^{-1}\,\hat{\mathbf v}\,, \qquad (15)$$

we find that its left-hand side can be expressed as

$$\hat U_S\;\hat{\mathbf v}^T\mathcal F\,\hat{\mathbf v}\;|\psi\rangle = \sum_{j=1}^{N} d_j\left(\hat p_j^2 + \hat q_j^2\right)\hat U_S\,|\psi\rangle\,. \qquad (16)$$

Thus, we have transformed the quadratic operator on the left-hand side of Equation (12) into a Hamiltonian operator given by a sum of decoupled harmonic oscillators. The solutions of the transformed eigenvalue equation are given by tensor products of number states for each degree of freedom:

$$|\psi_{\mathbf n}\rangle = \hat U_S^\dagger\,|n_1\rangle\otimes\cdots\otimes|n_N\rangle\,, \qquad n_j = 0, 1, 2, \ldots \qquad (19)$$

Note that the constraint

$$\lambda = \hbar\sum_{j=1}^{N} d_j\,(2n_j + 1) \qquad (20)$$

must be satisfied by all potential extremal states.

Recall that we have treated the matrix elements of the matrix $\mathcal F$
introduced in Equation (12) as constants, on which
the unitary transformation $\hat U_S$ and hence the states
in Equation (19) now depend. To achieve consistency, we
determine the expectation value of the covariance matrix in the solution $|\psi_{\mathbf n}\rangle$. A set of coupled equations in matrix form results
for the extremal second moments, which we will call the *consistency
conditions*. Explicitly, we find

$$C = \tfrac12\,\langle\psi_{\mathbf n}|\;\hat{\mathbf v}\,\hat{\mathbf v}^T + \left(\hat{\mathbf v}\,\hat{\mathbf v}^T\right)^T\,|\psi_{\mathbf n}\rangle\,, \qquad (21)$$

where $\hat{\mathbf v}\,\hat{\mathbf v}^T$ denotes the Kronecker product of the column vector $\hat{\mathbf v}$ with its transpose, $\hat{\mathbf v}^T$. Using the identity (15) in the form $\hat U_S^\dagger\,\hat{\mathbf v}\,\hat U_S = S\,\hat{\mathbf v}$, we can express the covariance matrix in the form

$$C = S^{-1}\,K\,\left(S^{-1}\right)^T\,, \qquad (22)$$

with the matrix

$$K = \tfrac12\,\langle n_1, \ldots, n_N|\;\hat{\mathbf v}\,\hat{\mathbf v}^T + \left(\hat{\mathbf v}\,\hat{\mathbf v}^T\right)^T\,|n_1, \ldots, n_N\rangle \qquad (23)$$

having elements

$$K_{ab} = \tfrac12\,\langle n_1, \ldots, n_N|\,\hat v_a\,\hat v_b + \hat v_b\,\hat v_a\,|n_1, \ldots, n_N\rangle\,. \qquad (24)$$

Recalling that the components of the vector $\hat{\mathbf v}$ are position and momentum operators, it is not difficult to see that the only non-zero matrix elements of $K$ are on its diagonal, i.e.,

$$K = \hbar\,\mathcal N\,, \qquad \mathcal N = \operatorname{diag}\left(n_1 + \tfrac12,\, n_1 + \tfrac12,\, \ldots,\, n_N + \tfrac12,\, n_N + \tfrac12\right)\,. \qquad (25)$$

Combining Equations (22) and (25), we finally obtain the *consistency conditions*
for $N$ continuous variables,

$$S\,C\,S^T = \hbar\,\operatorname{diag}\left(n_1 + \tfrac12,\, n_1 + \tfrac12,\, \ldots,\, n_N + \tfrac12,\, n_N + \tfrac12\right)\,. \qquad (26)$$

These conditions select the extrema that are compatible with the specific function $f$ of the second moments considered. The constraint given in (20) can be rewritten as

$$\lambda = \operatorname{Tr}\left(\mathcal F\,C\right)\,, \qquad (27)$$

and it is easy to check that this condition is trivially satisfied if the consistency conditions (26) hold.

The take-away message from the conditions (26) can be summarised
as follows: *a function $f$ of the second moments of positions
and momenta has an extremum in a pure state $|\psi_{\mathbf n}\rangle$ if there
exists a symplectic matrix $S$ that diagonalises
the covariance matrix $C$ and, at the same time, the transpose
of its inverse, $\left(S^{-1}\right)^T$, diagonalises
the matrix $\mathcal F$ of the partial derivatives of the function
$f$.*

According to (26), the determinant of the covariance matrix for extremal states of the uncertainty functional takes the value

$$\det C = \hbar^{2N}\prod_{j=1}^{N}\left(n_j + \tfrac12\right)^2\,. \qquad (28)$$

Clearly, the minimum is achieved when each oscillator resides in its ground state,

$$\det C \geq \left(\frac{\hbar}{2}\right)^{2N}\,, \qquad (29)$$

corresponding to $n_1 = \cdots = n_N = 0$ in Equation (28).

No pure $N$-particle state can give rise to a covariance matrix violating the inequality (29). This universally valid constraint generalizes the single-particle inequality derived by Robertson and Schrödinger to $N$ particles, expressing it elegantly as a condition on the determinant of the covariance matrix of a state. Supplementing (29) with the lower-dimensional Robertson–Schrödinger-type inequalities that must be obeyed by each subsystem of $1$ to $N-1$ degrees of freedom, we obtain the general uncertainty statement for more than one degree of freedom, usually expressed in the form

$$C + \frac{i\hbar}{2}\,\Omega \geq 0\,. \qquad (30)$$

Alternatively, this requirement can be expressed in terms of inequalities for the symplectic eigenvalues of the covariance matrix [22, 24].
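The matrix form of the uncertainty relation, $C + \frac{i\hbar}{2}\Omega \geq 0$ (our reading of (30)), is straightforward to test numerically: one checks that the Hermitian matrix $C + \frac{i\hbar}{2}\Omega$ has no negative eigenvalues. A sketch for $N = 2$ (our own code and conventions, $\hbar = 1$):

```python
import numpy as np

HBAR = 1.0
Omega1 = np.array([[0.0, -1.0],
                   [1.0,  0.0]])       # one-mode block, ordering (p, q)
Omega = np.kron(np.eye(2), Omega1)     # N = 2 degrees of freedom

def is_admissible(C, hbar=HBAR):
    """Test the matrix uncertainty relation C + (i*hbar/2)*Omega >= 0."""
    H = C + 0.5j * hbar * Omega        # Hermitian, since Omega^T = -Omega
    return np.min(np.linalg.eigvalsh(H)) >= -1e-12

C_vac = 0.5 * np.eye(4)   # two-mode vacuum: admissible
C_bad = 0.1 * np.eye(4)   # variances below the vacuum level: not admissible
print(is_admissible(C_vac), is_admissible(C_bad))
```

Equivalently, admissibility can be phrased as all symplectic eigenvalues of $C$ being at least $\hbar/2$.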

We conclude this section by explicitly working out the consistency conditions for one degree of freedom, $N = 1$. In this case, the matrices involved are of order two, with symplectic matrices given by

(31) |

respectively, and real parameters

(32) |

The consistency conditions now take the simple form

(33) |

or finally,

(34) |

Therefore, the formalism developed here correctly reproduces the findings of [13].

## 3 Inequalities for Two or More Continuous Variables

### 3.1 Inequalities without Correlation Terms

Let us now examine the consistency conditions for more than one degree
of freedom while allowing only *product* states. Correlations between
the degrees of freedom being absent, the functional will only depend
on the *local* second moments, i.e., $\Delta p_j^2$, $\Delta q_j^2$ and $c_j$;
the moments mixing the degrees of freedom always vanish
in a product state. For simplicity, we only consider $N = 2$ in some
detail, the generalisation to $N > 2$ being straightforward.

Using the matrices defined in (31), we construct two symplectic matrices $S_1$ and $S_2$ as follows:

(35) |

Their product, $S = S_1 S_2$, describes the action of the factorised unitary operator

(36) |

when solving the eigenvalue Equation (12). The consistency conditions become

(37) |

with

(38) |

so that we finally obtain

(39) |

In Equation (38), the matrices $\mathcal F_1$, $\mathcal F_2$ denote the collections of partial derivatives of the function $f$ with respect to the moments of the first and second degree of freedom, respectively. Therefore, the consistency conditions for functionals of product states reduce to a pair of one-dimensional ones that must be solved simultaneously.

The generalisation to $N$ degrees of freedom is straightforward: for each extra degree of freedom, a matrix must be added to the diagonal of the block matrix $\mathcal F$. After introducing the suitably generalized matrices, Equation (39) describes the consistency conditions for separable quantum states. It is often useful to express Equation (39) as

(40) |

with the block-diagonal matrices just introduced.

The simplest example of a factorized uncertainty relation is given by the product of two one-dimensional Robertson–Schrödinger inequalities, following from the functional

$$f = \left(\Delta p_1^2\,\Delta q_1^2 - c_1^2\right)\left(\Delta p_2^2\,\Delta q_2^2 - c_2^2\right)\,. \qquad (41)$$

The resulting inequality,

$$\left(\Delta p_1^2\,\Delta q_1^2 - c_1^2\right)\left(\Delta p_2^2\,\Delta q_2^2 - c_2^2\right) \geq \frac{\hbar^4}{16}\,, \qquad (42)$$

corresponds to the boundary described by Equation (29) in the absence of correlations, to be discussed in more detail in Section 4. Note that this inequality is only invariant under transformations of the product group $Sp(2,\mathbb R)\times Sp(2,\mathbb R)$ instead of those of the full group $Sp(4,\mathbb R)$ that leave invariant the Robertson–Schrödinger-type inequality for two degrees of freedom. However, the matrix inequality (30) is invariant under any symplectic transformation and serves as the required generalisation.
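Reading (42) as the product of two one-dimensional Robertson–Schrödinger inequalities with bound $\hbar^4/16$ (our reconstruction), a quick numerical check confirms that products of pure Gaussian states saturate it ($\hbar = 1$; helper names are ours):

```python
import numpy as np

HBAR = 1.0

def pure_gaussian_cov(r, theta):
    """One-mode pure Gaussian covariance matrix, ordering (p, q)."""
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    T = rot @ np.diag([np.exp(r), np.exp(-r)])
    return T @ (0.5 * HBAR * np.eye(2)) @ T.T

def rs_combination(C):
    """Robertson-Schrodinger combination dp^2 dq^2 - c^2 = det C."""
    return C[0, 0] * C[1, 1] - C[0, 1] ** 2

# Block-diagonal covariance of a product of two pure Gaussian modes:
C1 = pure_gaussian_cov(0.5, 0.2)
C2 = pure_gaussian_cov(-0.3, 1.1)
product = rs_combination(C1) * rs_combination(C2)
print(np.isclose(product, HBAR**4 / 16))  # both factors are saturated
```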

Starting from the functional

(43) |

we arrive, after solving (39), at

(44) |

which cannot be obtained by a combination of inequalities for $N = 1$.
It is *stronger* than the (factorized) “Heisenberg”-type
inequality for more than two observables

$$\Delta p_1\,\Delta q_1\,\Delta p_2\,\Delta q_2 \geq \frac{\hbar^2}{4}\,, \qquad (45)$$

first mentioned in a paper by Robertson [25], but *weaker*
than (42). An inequality $a$ is said to be *weaker*
than the inequality $b$ if *fewer* states saturate $a$ than
$b$.

Mixing products of variances related to different degrees of freedom also leads to non-trivial inequalities such as

(46) |

For a particular choice of the two parameters, one obtains an inequality which resembles the sum of two one-dimensional Heisenberg inequalities,

$$\Delta p_1^2\,\Delta q_1^2 + \Delta p_2^2\,\Delta q_2^2 \geq \frac{\hbar^2}{2}\,, \qquad (47)$$

but differs fundamentally from it.

### 3.2 Inequalities with Correlation Terms

Dropping the limitation to product states, we now turn to functionals that involve terms to which different degrees of freedom contribute. To begin, let us consider a linear combination of second moments, for which the matrix $\mathcal F$ takes the form

(48) |

It is positive definite whenever its coefficients satisfy suitable positivity conditions, which we assume from now on. The symplectic matrix that brings $\mathcal F$ to diagonal form is given by (cf. [26]):

(49) |

where

(50) |

The consistency conditions (26) can be solved in closed form, leading to the covariance matrix at the extrema

(51) |

with elements explicitly given by

(52) |

(53) |

and

(54) |

One can check that the expressions on the right-hand sides of Equations (52) and (53) are positive, while the inequalities

(55) |

also hold, as required. In fact, these two inequalities are never saturated by the extremal states, although one can get arbitrarily close if one of the coefficients is zero while another tends to infinity (or vice versa).

Substituting the extremal values of the second moments back into the functional, we find

(56) |

implying the following inequality, satisfied by any quantum state:

(57) |

Pure separable states are known to satisfy the relation

(58) |

Now consider the limiting case of (57) in which the positive definiteness of $\mathcal F$ breaks down: its right-hand side tends to zero, and the terms on the left are just the sum of the variances of the Einstein–Podolsky–Rosen-type (EPR) operators $\hat q_1 - \hat q_2$ and $\hat p_1 + \hat p_2$ [10, 11]. In this case, the pair of inequalities (57) and (58) form the prototypical example of using uncertainty relations for entanglement detection. More specifically, whenever the sum of the variances of $\hat q_1 - \hat q_2$ and $\hat p_1 + \hat p_2$ in a given state violates the bound of (58), the state is entangled. Although inequality (58) provides only a sufficient condition for the inseparability of an arbitrary state, it becomes a necessary and sufficient condition for pure Gaussian states, if recast in an appropriate form [10].
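The entanglement test based on (58) can be illustrated with a two-mode squeezed vacuum, whose covariance matrix is known in closed form. In the conventions used here ($\hbar = 1$, vacuum variances $1/2$), separable states obey $\Delta(\hat q_1 - \hat q_2)^2 + \Delta(\hat p_1 + \hat p_2)^2 \geq 2$; the squeezed state violates this bound for any squeezing $r > 0$ (the helper names below are ours):

```python
import numpy as np

HBAR = 1.0  # conventions: [q_j, p_k] = i*delta_jk, vacuum variances 1/2

def tmsv_covariance(r):
    """Covariance matrix of a two-mode squeezed vacuum with squeezing r,
    ordering (p1, q1, p2, q2); a standard textbook form, hbar = 1."""
    c, s = np.cosh(2 * r) / 2, np.sinh(2 * r) / 2
    C = np.diag([c, c, c, c])
    C[1, 3] = C[3, 1] = s      # <q1 q2> correlations
    C[0, 2] = C[2, 0] = -s     # <p1 p2> anti-correlations
    return C

def epr_variance_sum(C):
    """Var(q1 - q2) + Var(p1 + p2) from the covariance matrix."""
    var_q = C[1, 1] + C[3, 3] - 2 * C[1, 3]
    var_p = C[0, 0] + C[2, 2] + 2 * C[0, 2]
    return var_q + var_p

C = tmsv_covariance(r=1.0)
# Separable states obey epr_variance_sum >= 2 in these units; the
# two-mode squeezed vacuum gives 2*exp(-2r), certifying entanglement:
print(epr_variance_sum(C) < 2.0)
```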

Returning to inequality (57) in the case of arbitrary coefficients, it is not immediately obvious whether it can be used to detect entangled states. However, let us define four EPR-type operators:

(59) |

with eight real parameters, which are constrained by the relations

(60) |

Now, we can write Equation (57) as

(61) |

reducing to the inequality

(62) |

if the system resides in a separable state. Since its right-hand side is always greater than or equal to the bound in (61), a violation of (62) indicates the presence of an entangled state.

Clearly, inequality (61) is more general than the corresponding one for the pair of operators $\hat q_1 - \hat q_2$ and $\hat p_1 + \hat p_2$, as the former reduces to the latter in an appropriate limit and thus extends a known result [10].

As a final example, consider the sum of the variances of the EPR-type
operators for *three* degrees of freedom,
which is in general only bounded by zero. However, the lowest possible
value achievable in a *separable* state is given by the inequality

(63) |

readily obtained from the solution of Equation (39). Again, violations of (63) detect the presence of entangled degrees of freedom.

It is, of course, possible to minimise other functions than the sum of the variances, leading to different entanglement-detecting inequalities that we will discuss elsewhere.

## 4 The Uncertainty Region

In this section, we will develop a geometric view of quantum uncertainty
for a system with continuous variables. To do so, we associate
a direction of the space with each of the second moments . Then, any quantum state
gives rise to a point in the *space of second moments*,
which has dimension .

Some points in the space will represent moments
of quantum states while others will not. The accessible part of the
space is called the *uncertainty region*, as the points it contains
are in one-to-one correspondence with admissible covariance matrices
$C$. This region is bounded by
a surface of dimension $N(2N+1) - 1$ given by the relation

$$\det C = \det\left(\frac{\hbar}{2}\,\Omega\right) = \left(\frac{\hbar}{2}\right)^{2N}\,, \qquad (64)$$

where $\Omega$ is the standard symplectic matrix of order $2N$.

### 4.1 More Than One Continuous Variable: $N > 1$

We will show now that the uncertainty region in the space of second moments
is a *convex* set, by affirming (i) that its *boundary*
(64) is convex and (ii) that all points of
the uncertainty region emerge as expectations taken in *pure*
states. In other words, the uncertainty region has no “pure-state
holes.” This property justifies our initial decision to search for
extrema of uncertainty functionals among pure states only: no other
extrema would result had we included mixed states. On the boundary
of the uncertainty region, the relationship between quantum states and
their moments is unique (up to rigid translations) while (iii) points
inside the uncertainty region can also be obtained from infinitely
many different convex combinations of pure (or mixed) states.

#### The Uncertainty Region Has a Convex Boundary

The region defined by Equation (29)
is a *convex* set in the $N(2N+1)$-dimensional space of second
moments. To see this, we consider two covariance matrices $C_1$
and $C_2$ that are located on its boundary given by (64),
i.e., they satisfy

$$\det C_1 = \det C_2 = \left(\frac{\hbar}{2}\right)^{2N}\,. \qquad (65)$$

We recall that covariance matrices are positive definite, $C > 0$, and that they must have sufficiently large symplectic eigenvalues in order to stem from quantum states. Convexity holds if the (positive definite) convex combination of two covariance matrices,

$$C(\mu) = \mu\,C_1 + (1 - \mu)\,C_2\,, \qquad 0 \leq \mu \leq 1\,, \qquad (66)$$

either lies on the boundary of the uncertainty region or in its interior. This property follows from the fact that the matrix function

$$g(C) = -\ln\det C\,, \qquad C > 0\,, \qquad (67)$$

is convex [27], i.e., the inequality

$$g\left(\mu A + (1 - \mu)B\right) \leq \mu\,g(A) + (1 - \mu)\,g(B) \qquad (68)$$

holds for any pair of strictly positive definite matrices $A, B > 0$. Rewriting (65) in the form

$$g(C_1) = g(C_2) = 2N\,\ln\frac{2}{\hbar}\,, \qquad (69)$$

one immediately finds that

$$g\left(C(\mu)\right) \leq 2N\,\ln\frac{2}{\hbar}\,. \qquad (70)$$

Since

$$\det C(\mu) \geq \left(\frac{\hbar}{2}\right)^{2N} \qquad (71)$$

follows, we have shown that the convex combination of two covariance matrices on the boundary of the uncertainty region cannot produce a point outside of it. Equality holds in (71) only if $\mu = 0$ or $\mu = 1$ (assuming $C_1 \neq C_2$). Therefore, states on the boundary cannot be written as mixtures, which means that the states on the boundary must be pure states.

Clearly, the argument just given extends to convex combinations of
covariance matrices located *inside* the uncertainty region:
no such combination will produce a covariance matrix on its boundary
or outside of it.
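The determinant inequality behind this convexity argument—concavity of $\ln\det$ on positive definite matrices, equivalently $\det(\mu A + (1-\mu)B) \geq (\det A)^{\mu}(\det B)^{1-\mu}$—can be spot-checked numerically (a sketch, not a proof; helper names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_spd(n):
    """A comfortably positive definite random matrix."""
    A = rng.normal(size=(n, n))
    return A @ A.T + n * np.eye(n)

# Concavity of log det: det(mu*A + (1-mu)*B) >= det(A)**mu * det(B)**(1-mu)
ok = True
for _ in range(100):
    A, B = random_spd(4), random_spd(4)
    mu = rng.uniform()
    lhs = np.linalg.det(mu * A + (1 - mu) * B)
    rhs = np.linalg.det(A) ** mu * np.linalg.det(B) ** (1 - mu)
    ok = ok and lhs >= rhs * (1 - 1e-9)
print(ok)
```

In particular, if both matrices have the same determinant $k$, any convex combination has determinant at least $k$, which is exactly the step leading to (71).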

#### The Uncertainty Region Has No Pure-State Holes

We determined the conditions for uncertainty functionals to have extrema
by evaluating them on all *pure* states of quantum particles.
We now show that the inclusion of mixed states as potential extrema
does not change our findings. It is sufficient to show that all points
of the uncertainty region defined by the inequality (29)
correspond to covariance matrices that stem from pure states.

Recall that any admissible covariance matrix can be diagonalised according to Williamson’s theorem [21, 23] using a suitable symplectic transformation. Let us order its symplectic eigenvalues $\sigma_1 \leq \sigma_2 \leq \cdots \leq \sigma_N$ from smallest to largest and choose an integer $m \geq 2$ such that $\hbar\left(2m + \tfrac12\right) \geq \sigma_N$ holds. Suppose now that the $k$-th subsystem resides in the pure state

$$|\psi_{\alpha_k}\rangle = \cos\alpha_k\,|0\rangle + \sin\alpha_k\,|2m\rangle\,. \qquad (72)$$

The variances of position and momentum take the values

$$\Delta q_k^2 = \Delta p_k^2 = \hbar\left(\tfrac12\cos^2\alpha_k + \left(2m + \tfrac12\right)\sin^2\alpha_k\right)\,, \qquad (73)$$

where we use the fact that the expectations of the operators $\hat q_k$ and $\hat p_k$ vanish (cf. remark after Equation (3)). Thus, a suitable value of the parameter $\alpha_k$ leads to the desired entries on the diagonal of the covariance matrix, and the covariance of position and momentum equals zero. In addition, the remaining off-diagonal matrix elements—associated with the bilinear operators $\hat v_a \hat v_b$ for indices referring to different degrees of freedom—also vanish in the product state

$$|\Psi\rangle = |\psi_{\alpha_1}\rangle \otimes \cdots \otimes |\psi_{\alpha_N}\rangle\,. \qquad (74)$$

Consequently, there is a pure product state, namely $|\Psi\rangle$,
generating any desired *diagonal* covariance matrix—which is sufficient to create any admissible *non-diagonal* covariance
matrix, simply by undoing the symplectic transformation used to diagonalize
the initially given covariance matrix.

The map from the set of pure states to the interior of the space of moments is, of course, many-to-one. This can be seen directly by recalling that each admissible covariance matrix can also be obtained from a Gaussian state whose quadratic form is determined by the given covariance matrix.

#### All Moments Arise as Convex Combinations of Two Pure States

Given any point inside the uncertainty region, one can find infinitely many convex combinations of two pure Gaussian states on the boundary that produce the desired moments. Here is one way to construct such pairs. Consider any two-dimensional Euclidean plane that passes through the origin of the space of moments and the given point inside the uncertainty region. The intersection of the boundary with the plane is a one-dimensional curve that divides the plane into two regions: the points corresponding to acceptable covariance matrices (forming the uncertainty region) and the rest. This curve inherits convexity from the boundary in the full space, since any two points on it are, of course, also located on the higher-dimensional boundary.

To conclude the argument, we only need to identify two points on the boundary such that the line connecting them goes through the point representing the desired set of moments. It is geometrically obvious that there exist infinitely many pairs of points on the boundary that satisfy this requirement. This situation is illustrated in Figure 1 in Section 4.2.3 for a single continuous variable where the boundary of the uncertainty region is known to be a hyperbola.
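For $N = 1$ (in units $\hbar = 1$, with moments $(\Delta p^2, \Delta q^2, c)$), such a pair of boundary points can be written down explicitly: the interior point $C = \mathbb 1$ is the midpoint of two minimal-uncertainty covariance matrices. A sketch with one analytic choice (rotating this pair produces infinitely many others):

```python
import numpy as np

HBAR = 1.0
C_target = np.eye(2)        # an interior point: det C = 1 > (HBAR / 2)**2

# Two covariance matrices on the boundary det C = 1/4 (minimal-uncertainty
# pure Gaussian states) whose equal-weight mixture reproduces C_target:
a = 1 + np.sqrt(3) / 2
C1 = np.diag([a, 1 / (4 * a)])  # det C1 = 1/4 for any a > 0
C2 = 2 * C_target - C1          # here equal to np.diag([1 / (4 * a), a])

def on_boundary(C):
    return np.isclose(np.linalg.det(C), (HBAR / 2) ** 2)

print(on_boundary(C1), on_boundary(C2))
print(np.allclose(0.5 * (C1 + C2), C_target))
```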

### 4.2 One Continuous Variable: $N = 1$

It is instructive to study the properties of the uncertainty region for a single continuous variable, since the space of moments has only three dimensions. Even in the absence of entangled states, the uncertainty region has a number of interesting features, as it resembles the Bloch ball used to visualize the states of a qubit. For one continuous variable, each point inside the uncertainty region is characterized uniquely by a triple of numbers, the states on the convex boundary are the only pure states, and the decomposition of mixed states into pairs of pure states is clearly not unique. The group of transformations that leave the uncertainty region invariant plays the role of the transformations mapping the Bloch ball to itself.

We simplify the notation to discuss the case $N = 1$. Renaming the elements of the covariance matrix according to

$$x = \Delta p^2\,, \qquad y = \Delta q^2\,, \qquad z = c\,, \qquad (75)$$

the consistency conditions (34) take the form

(76) |

and

(77) |

The third constraint is *universal* since it does not depend
on the function $f$ that characterizes an uncertainty functional
$J$. It will be convenient to use the variables

$$x_0 = \frac{x + y}{2}\,, \qquad x_1 = \frac{x - y}{2}\,, \qquad x_2 = z\,, \qquad (78)$$

to parametrize the points in the *three*-dimensional space of
second moments, with coordinates $(x_0, x_1, x_2)$.
For each non-negative integer, the third condition