
# Quantum Non-Markovianity: Characterization, Quantification and Detection

Ángel Rivas, Susana F. Huelga and Martin B. Plenio

Departamento de Física Teórica I, Facultad de Ciencias Físicas, Universidad Complutense, 28040 Madrid, Spain.
Institut für Theoretische Physik, Universität Ulm, Albert-Einstein-Allee 11, 89073 Ulm, Germany.
Center for Integrated Quantum Science and Technologies, Albert-Einstein-Allee 11, 89073 Ulm, Germany.
###### Abstract

We present a comprehensive and up to date review on the concept of quantum non-Markovianity, a central theme in the theory of open quantum systems. We introduce the concept of quantum Markovian process as a generalization of the classical definition of Markovianity via the so-called divisibility property and relate this notion to the intuitive idea that links non-Markovianity with the persistence of memory effects. A detailed comparison with other definitions presented in the literature is provided. We then discuss several existing proposals to quantify the degree of non-Markovianity of quantum dynamics and to witness non-Markovian behavior, the latter providing sufficient conditions to detect deviations from strict Markovianity. Finally, we conclude by enumerating some timely open problems in the field and provide an outlook on possible research directions.

Review Article

## 1 Introduction

In recent years, renewed attention has been paid to the characterization of quantum non-Markovian processes. Different approaches have been followed and several methods have been proposed which in some cases yield inequivalent conclusions. Given the considerable amount of literature that has built up on the subject, we believe that the time is right to summarize most of the existing results in a review article that clarifies both the underlying structure and the interconnections between the different approaches.

On the one hand, we are fully aware of the risk we take by writing a review on a quite active research field, with new results continuously arising during the writing of this work. We do hope, on the other hand, that possible shortcomings will be well balanced by the potential usefulness of such a review in order to, hopefully, clarify some misconceptions and generate further interest in the field.

Essentially, the subject of quantum non-Markovianity addresses two main questions, namely:

1. What is a quantum Markovian process and hence what are non-Markovian processes? (characterization problem).

2. If a given process deviates from Markovianity, by how much does it deviate? (quantification problem).

In this work we examine both questions in detail. More specifically, concerning the characterization problem, we adopt the so-called divisibility property as a definition of quantum Markovian processes. As this is not the only approach to non-Markovianity, in Section 3.4 we introduce and discuss other proposed definitions and compare them to the divisibility approach. In this regard, we would like to stress that it is neither our intention nor is the field at a stage that allows us to decide on a definitive definition of quantum Markovian processes. It is our hope however that we will convince the reader that the strong analogy between the definition of non-Markovianity taken in this work and the classical definition of Markov processes, and the ensuing good mathematical properties which will allow us to address the characterization problem in simple terms, represent a fruitful approach to the topic. Concerning the quantification problem, we discuss most of the quantifiers present in the literature, and we classify them into measures and witnesses of non-Markovianity, depending on whether they are able to detect every non-Markovian dynamics or just a subset. Given the large body of literature that explores the application of these methods to different physical realizations, we have opted for keeping the presentation mainly on the abstract level and providing a detailed list of references. However, we have also included some specific examples for the sake of illustration of fundamental concepts.

This work is organized as follows. In Section 2, we recall the classical concept of Markovian process and some of its main properties. This is crucial in order to understand why the divisibility property provides a good definition of quantum Markovianity. In Section 3 we introduce the concept of quantum Markovian process by establishing a step by step parallelism with the classical definition, and explain in detail why these quantum processes can be considered as memoryless. Section 4 gives a detailed review of different measures of non-Markovianity and Section 5 describes different approaches in order to construct witnesses able to detect non-Markovian dynamics. Finally, Section 6 is devoted to conclusions and to outlining some of the problems which remain open in this field, as well as possible future research lines.

## 2 Markovianity in Classical Stochastic Processes

In order to give a definition of a Markov process in the quantum regime, it is essential to understand the concept of Markov process in the classical setting. Thus, this section is devoted to reviewing the definition of classical Markov processes and sketching the properties most relevant for our purposes, without being too concerned with mathematical rigor. More detailed explanations on the foundations of stochastic processes can be found in [1, 2, 3, 4, 5, 6].

### 2.1 Definition and properties

Consider a random variable X defined on a classical probability space (Ω, Σ, P), where Ω is a given set (the sample space), Σ (the possible events) is a σ-algebra of subsets of Ω, containing Ω itself, and the probability P is a σ-additive function with the property that P(Ω) = 1 (cf. [1, 2, 3, 4, 5, 6]). In order to avoid further problems when considering conditional probabilities (see for example the Borel-Kolmogorov paradox [7]) we shall restrict attention from now on to discrete random variables, i.e. random variables which take values on a finite set denoted by X.

A classical stochastic process is a family of random variables {X(t), t ∈ I}. Roughly speaking, this is nothing but a random variable X depending on a parameter t which usually represents time. The story starts with the following definition.

###### Definition 2.1 (Markov process).

A stochastic process {X(t), t ∈ I} is a Markov process if the probability that the random variable X takes a value xn at any arbitrary time tn, provided that it took the value xn−1 at some previous time tn−1, is uniquely determined, and not affected by the possible values of X at previous times to tn−1. This is formulated in terms of conditional probabilities as follows

 P(xn,tn|xn−1,tn−1;…;x0,t0)=P(xn,tn|xn−1,tn−1),for all {tn≥tn−1≥…≥t0}⊂I, (1)

and informally it is summarized by the statement that “a Markov process does not have memory of the history of past values of X”. This kind of stochastic process is named after the Russian mathematician A. Markov [8].

From the previous definition (1) it is possible to work out further properties of Markov processes. For instance, it follows immediately from (1) that for a Markov process

 E(xn,tn|xn−1,tn−1;…;x0,t0)=E(xn,tn|xn−1,tn−1),for all {tn≥tn−1≥…≥t0}⊂I, (2)

where E(xn,tn|xn−1,tn−1) denotes the so-called conditional expectation.

In addition, Markov processes satisfy another remarkable property. If we take the joint probability for any three consecutive times t1 ≤ t2 ≤ t3 and apply the definition of conditional probability twice, we obtain

 P(x3,t3;x2,t2;x1,t1) = P(x3,t3|x2,t2;x1,t1)P(x2,t2;x1,t1) (3) = P(x3,t3|x2,t2;x1,t1)P(x2,t2|x1,t1)P(x1,t1).

Since the Markov condition (1) implies that P(x3,t3|x2,t2;x1,t1) = P(x3,t3|x2,t2), by taking the sum over x2 and dividing both sides by P(x1,t1) we arrive at

 P(x3,t3|x1,t1)=∑x2∈XP(x3,t3|x2,t2)P(x2,t2|x1,t1), (4)

which is called the Chapman-Kolmogorov equation. Moreover, the next theorem gives an answer to the converse statement.
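The Chapman-Kolmogorov equation (4) is just matrix multiplication of conditional-probability matrices, and can be checked numerically. Below is a minimal NumPy sketch with hypothetical transition matrices for a two-state Markov chain; the matrices, values and the Monte Carlo cross-check are illustrative, not taken from the text.

```python
import numpy as np

# Hypothetical two-state Markov chain with column-stochastic matrices
# T[x_b, x_a] = P(x_b, t_b | x_a, t_a).
T10 = np.array([[0.9, 0.3],
                [0.1, 0.7]])   # t0 -> t1
T21 = np.array([[0.6, 0.2],
                [0.4, 0.8]])   # t1 -> t2

# Chapman-Kolmogorov, Eq. (4): P(x2,t2|x0,t0) = sum_x1 P(x2,t2|x1,t1) P(x1,t1|x0,t0),
# i.e. a product of conditional-probability matrices.
T20 = T21 @ T10

# The composition is again a stochastic matrix: columns sum to 1, entries >= 0.
assert np.allclose(T20.sum(axis=0), 1.0)
assert (T20 >= 0).all()

# Cross-check against a direct Monte Carlo simulation of the chain.
rng = np.random.default_rng(0)
n = 200_000
x0 = np.zeros(n, dtype=int)                     # start in state 0
x1 = (rng.random(n) < T10[1, x0]).astype(int)   # sample P(x1|x0)
x2 = (rng.random(n) < T21[1, x1]).astype(int)   # sample P(x2|x1)
empirical = np.mean(x2 == 1)
assert abs(empirical - T20[1, 0]) < 0.01        # agrees with the composed matrix
```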

###### Theorem 2.1.

A family of conditional probabilities P(xn,tn|xn−1,tn−1) with tn ≥ tn−1 satisfying (4) can always be seen as the conditional probabilities of a Markov process {X(t), t ∈ I}.

###### Proof.

The proof is by construction. Take some probabilities P(x0,t0) and define the two-point joint probabilities by

 P(xn,tn;xn−1,tn−1):=P(xn,tn|xn−1,tn−1)P(xn−1,tn−1).

Then, set

 P(xn,tn|xn−1,tn−1;…;x0,t0):=P(xn,tn|xn−1,tn−1),for all  {tn≥tn−1≥…≥t0}⊂I, (5)

and construct higher joint probabilities by using expressions analogous to Eq. (3). This construction is always possible as it is compatible with (4), which is the presupposed condition satisfied by P(xn,tn|xn−1,tn−1). ∎

### 2.2 Transition matrices

In this section we shall focus on the evolution of one-point probabilities during a stochastic process. Thus, consider a linear map T(x1,t1|x0,t0) that connects the probabilities of a random variable X at two different times t1 and t0:

 P(x1,t1)=∑x0∈XT(x1,t1|x0,t0)P(x0,t0). (6)

Since P(x1,t1) must again be a probability distribution for every choice of P(x0,t0), we conclude that

 ∑x1∈XT(x1,t1|x0,t0)=1, (7) T(x1,t1|x0,t0)≥0,x1,x0∈X. (8)

Matrices fulfilling these properties are called stochastic matrices.

Consider t0 to be the initial time of some (not necessarily Markovian) stochastic process {X(t), t ∈ I}. From the definition of conditional probability,

 P(x1,t1)=∑x0∈XP(x1,t1|x0,t0)P(x0,t0), (9)

and therefore T(x1,t1|x0,t0) = P(x1,t1|x0,t0) for every t1 ≥ t0. This relation is not valid in general for T(x2,t2|x1,t1), t2 ≥ t1 > t0. The reason is that P(x2,t2|x1,t1) is not fully defined for a general stochastic process; we need to know the value of X at previous time instants, as P(x2,t2|x1,t1;x0,t0) could be different from P(x2,t2|x1,t1;x0′,t0) for x0 ≠ x0′. However that is not the case for Markov processes, which satisfy the following result.

###### Theorem 2.2.

Consider a Markov process {X(t), t ∈ I}. Given any two time instants t2 ≥ t1 we have

 T(x2,t2|x1,t1)=P(x2,t2|x1,t1). (10)
###### Proof.

It follows from the fact that we can write P(x2,t2) = ∑x1∈X P(x2,t2|x1,t1)P(x1,t1), as P(x2,t2|x1,t1) is well defined for any t2 ≥ t1 in a Markov process. ∎

From this theorem and the Chapman-Kolmogorov equation (4) we obtain the following corollary.

###### Corollary 2.1.

Consider a Markov process {X(t), t ∈ I}; then for any t3 ≥ t2 ≥ t1, the transition matrices satisfy the properties

 ∑x2∈XT(x2,t2|x1,t1)=1, (11) T(x2,t2|x1,t1)≥0, (12) T(x3,t3|x1,t1)=∑x2∈XT(x3,t3|x2,t2)T(x2,t2|x1,t1). (13)

In summary, for a Markov process the transition matrices are the two-point conditional probabilities and satisfy the composition law Eq. (13). Essentially, Eq. (13) states that the evolution from t1 to t3 can be written as the composition of the evolution from t1 to some intermediate time t2, and from this t2 to the final time t3.

In case of non-Markovian processes, T(x2,t2|x1,t1) might not be well defined for t1 > t0. Nevertheless, if the matrix T(x1,t1|x0,t0) is invertible for every t1, then T(x2,t2|x1,t1) can be written in terms of well-defined quantities. Since the evolution from t1 to t2 (if it exists) has to be the composition of the backward evolution from t1 to the initial time t0 and the forward evolution from t0 to t2, we can write

 T(x2,t2|x1,t1) = ∑x0∈XT(x2,t2|x0,t0)T(x0,t0|x1,t1) (14) = ∑x0∈XP(x2,t2|x0,t0)[P(x1,t1|x0,t0)]−1.

In this case the composition law Eq. (13) is satisfied and Eq. (11) also holds. However, condition Eq. (12) may not be fulfilled, which prevents any interpretation of T(x2,t2|x1,t1) as a conditional probability and therefore manifests the non-Markovian character of such a stochastic process.
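The inverse construction of Eq. (14) can be made concrete. The sketch below uses hypothetical one-point data for a two-state process in which the distributions first mix and then re-sharpen; the resulting "transition matrix" has negative entries, so Eq. (12) fails. All numbers are illustrative.

```python
import numpy as np

# Hypothetical column-stochastic matrices T[x_b, x_a] describing the
# one-point statistics of a process that mixes (t0 -> t1) and then
# re-sharpens (t0 -> t2) -- a signature of memory.
T10 = np.array([[0.6, 0.4],
                [0.4, 0.6]])   # t0 -> t1, invertible
T20 = np.array([[0.9, 0.1],
                [0.1, 0.9]])   # t0 -> t2

# Eq. (14): T(t2|t1) = T(t2|t0) T(t1|t0)^{-1}
T21 = T20 @ np.linalg.inv(T10)
# T21 == [[2.5, -1.5], [-1.5, 2.5]]

# Eqs. (11) and (13) hold by construction ...
assert np.allclose(T21.sum(axis=0), 1.0)
assert np.allclose(T21 @ T10, T20)
# ... but Eq. (12) fails: T21 has negative entries, so it cannot be a
# conditional probability and the process is non-Markovian.
assert T21.min() < 0
```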

###### Definition 2.2 (Divisible process).

A stochastic process for which the associated transition matrices T(x2,t2|x1,t1) satisfy Eqs. (11), (12) and (13) is called divisible.

There are divisible processes which are non-Markovian. As an example (see [4, 6]), consider a stochastic process with two possible results X = {0, 1}, and just three discrete times (t1 ≤ t2 ≤ t3). Define the joint probabilities as

 P(x3,t3;x2,t2;x1,t1)=(1/4)(δx3,0δx2,0δx1,1+δx3,0δx2,1δx1,0+δx3,1δx2,0δx1,0+δx3,1δx2,1δx1,1). (15)

By computing the marginal probabilities we obtain P(x2,t2;x1,t1) = P(x3,t3;x2,t2) = 1/4 and P(x1,t1) = P(x2,t2) = P(x3,t3) = 1/2 for every value of x1, x2 and x3, and then

 P(x3,t3|x2,t2;x1,t1)=P(x3,t3;x2,t2;x1,t1)/P(x2,t2;x1,t1)=(δx3,0δx2,0δx1,1+δx3,0δx2,1δx1,0+δx3,1δx2,0δx1,0+δx3,1δx2,1δx1,1). (16)

Therefore the process is non-Markovian as, for example, P(0,t3|0,t2;1,t1) = 1 whereas P(0,t3|0,t2;0,t1) = 0: the conditional probabilities at t3 depend on the value taken at t1. However the transition matrices can be written as

 T(x3,t3|x2,t2)=P(x3,t3;x2,t2)/P(x2,t2)=1/2,

and similarly T(x3,t3|x1,t1) = T(x2,t2|x1,t1) = 1/2. Hence the conditions (11), (12) and (13) are clearly fulfilled. Other examples of non-Markovian divisible processes can be found in [9, 10, 11, 12, 13].
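This example can be verified by direct enumeration. The sketch below encodes the joint distribution as P = 1/4 whenever x3 = 1 exactly when x1 = x2 (and zero otherwise), which reproduces the delta structure above, and checks both the history dependence of the three-point conditionals and the constant-1/2 transition matrices.

```python
# Numerical check of the divisible-but-non-Markovian example: binary
# outcomes at three times with joint probability 1/4 when x3 = 1 iff
# x1 == x2, and zero otherwise.
def joint(x3, x2, x1):
    return 0.25 if x3 == int(x1 == x2) else 0.0

vals = (0, 1)

# Two-point marginals are all 1/4 (hence one-point marginals all 1/2).
P21 = {(x2, x1): sum(joint(x3, x2, x1) for x3 in vals)
       for x2 in vals for x1 in vals}
assert all(abs(p - 0.25) < 1e-12 for p in P21.values())

# Three-point conditionals depend on x1 -> the process is non-Markovian.
def cond3(x3, x2, x1):
    return joint(x3, x2, x1) / P21[(x2, x1)]
assert cond3(0, 0, 1) == 1.0 and cond3(0, 0, 0) == 0.0

# Yet every transition matrix equals 1/2, trivially satisfying
# Eqs. (11)-(13): the process is divisible.
P32 = {(x3, x2): sum(joint(x3, x2, x1) for x1 in vals)
       for x3 in vals for x2 in vals}
P2 = {x2: sum(P32[(x3, x2)] for x3 in vals) for x2 in vals}
T32 = {(x3, x2): P32[(x3, x2)] / P2[x2] for x3 in vals for x2 in vals}
assert all(abs(t - 0.5) < 1e-12 for t in T32.values())
```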

Despite the existence of non-Markovian divisible processes, we can establish the following key theorem.

###### Theorem 2.3.

A family of transition matrices T(x2,t2|x1,t1) with t2 ≥ t1 which satisfies Eqs. (11), (12) and (13) can always be seen as the transition matrices of some underlying Markov process {X(t), t ∈ I}.

###### Proof.

Since the matrices satisfy (11) and (12), they can be understood as conditional probabilities P(x2,t2|x1,t1) := T(x2,t2|x1,t1), and since (13) is also satisfied, the process fulfils Eq. (4). Then the final statement follows from Theorem 2.1. ∎

Thus, we conclude that:

###### Corollary 2.2.

At the level of one-point probabilities, divisible and Markovian processes are equivalent. The complete hierarchy of time-conditional probabilities has to be known to make any distinctions.

### 2.3 Contractive property

There is another feature of Markov processes that will be useful in the quantum case. Consider a vector v(x), where x ∈ X denotes its different components. Then its 1-norm is defined as

 ∥v(x)∥1:=∑x|v(x)|. (17)

This norm is particularly useful in hypothesis testing problems. Namely, consider a random variable X which is distributed according to either probability p1(x) or probability p2(x). We know that, with probability q, X is distributed according to p1(x), and, with probability (1 − q), X is distributed according to p2(x). Our task consists of sampling X just once with the aim of inferring the correct probability distribution of X [p1(x) or p2(x)]. Then the minimum (averaged) probability to give the wrong answer turns out to be

 Pmin(fail)=[1−∥w(x)∥1]/2, (18)

where w(x) := qp1(x) − (1 − q)p2(x). The proof of this result follows the same steps as in the quantum case (see Section 3.3.1). Thus the 1-norm of the vector w(x) gives the capability to distinguish correctly between p1(x) and p2(x) in the two-distribution discrimination problem.

Particularly, in the unbiased case q = 1/2, we have

 ∥w(x)∥1=(1/2)∥p1(x)−p2(x)∥1,

which is known as the Kolmogorov distance, L1-distance, or variational distance between p1(x) and p2(x).
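Eq. (18) can be checked against a brute-force argument: the optimal decision rule answers "p1" on outcome x exactly when qp1(x) ≥ (1 − q)p2(x), so the minimum error is ∑x min[qp1(x), (1 − q)p2(x)], which equals [1 − ∥w∥1]/2. A minimal sketch with two hypothetical four-outcome distributions:

```python
import numpy as np

# Hypothetical distributions on four outcomes and an unbiased prior.
p1 = np.array([0.5, 0.3, 0.1, 0.1])
p2 = np.array([0.1, 0.1, 0.4, 0.4])
q = 0.5

w = q * p1 - (1 - q) * p2
P_fail = (1 - np.abs(w).sum()) / 2              # Eq. (18)

# Brute force: error of the optimal decision rule.
P_brute = np.minimum(q * p1, (1 - q) * p2).sum()
assert abs(P_fail - P_brute) < 1e-12

# In the unbiased case, ||w||_1 is half the variational distance.
assert abs(np.abs(w).sum() - 0.5 * np.abs(p1 - p2).sum()) < 1e-12
```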

In the identification of non-divisible processes, the 1-norm also plays an important role.

###### Theorem 2.4.

Let T(x2,t2|x1,t1) be the transition matrices of some stochastic process. Then such a process is divisible if and only if the 1-norm does not increase when T(x2,t2|x1,t1) is applied to every vector v(x), x ∈ X, for all t2 and t1 ≥ t0,

 ∥∑x1∈XT(x2,t2|x1,t1)v(x1)∥1≤∥v(x)∥1,t1≤t2. (19)
###### Proof.

The “only if” part follows from the properties (11) and (12):

 ∥∑x1∈XT(x2,t2|x1,t1)v(x1)∥1 = ∑x2∈X|∑x1∈XT(x2,t2|x1,t1)v(x1)| (20) ≤ ∑x1,x2∈XT(x2,t2|x1,t1)|v(x1)| = ∑x1∈X|v(x1)|=∥v(x)∥1.

For the “if” part, as we mentioned earlier, if T(x2,t2|x1,t1) does exist, it always satisfies Eqs. (11) and (13). Take the vector p(x1) to be a probability distribution, p(x1) ≥ 0 for all x1; because of Eq. (11) we have

 ∥p(x1)∥1=∑x1∈Xp(x1)=∑x1,x2∈XT(x2,t2|x1,t1)p(x1). (21)

Since, by hypothesis, Eq. (19) holds for any vector, we obtain the following chain of inequalities

 ∥p(x1)∥1 = ∑x1,x2∈XT(x2,t2|x1,t1)p(x1)≤∑x2∈X|∑x1∈XT(x2,t2|x1,t1)p(x1)| (22) ≤ ∑x2∈X|p(x2)|=∑x1∈X|p(x1)|=∥p(x1)∥1.

Therefore,

 ∑x2∈X|∑x1∈XT(x2,t2|x1,t1)p(x1)|=∑x1,x2∈XT(x2,t2|x1,t1)p(x1),

for any probability p(x1), which is only possible if ∑x1∈X T(x2,t2|x1,t1)p(x1) ≥ 0 for every x2. Choosing p(x1) = δx1,x′ for each x′ ∈ X shows that Eq. (12) has to be satisfied. ∎

Because of this theorem and Eq. (13), Pmin(fail) increases monotonically with time for a divisible process. In this regard, if the random variable X undergoes a Markovian process, the best chance to correctly distinguish between the two possible distributions p1(x) and p2(x) is to sample X at time instants as close as possible to the initial time t0. However that is not the case if X is subject to a non-divisible (and then non-Markovian) process. Then, in order to decrease the error probability, it could be better to wait until some later time t2 where ∥w(x)∥1 increases again (without exceeding its initial value). The fact that the error probability may decrease for some time after the initial time can be understood as a trait of underlying memory in the process. That is, the system retains some information about the distribution of X at t0, which resurfaces at a later time in the evolution.
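The monotonic loss of distinguishability under a divisible process is easy to demonstrate numerically. The sketch below iterates one fixed, hypothetical stochastic matrix (a time-homogeneous, hence divisible, chain) and records ∥p1 − p2∥1 at each step.

```python
import numpy as np

# Hypothetical column-stochastic matrix; iterating it gives a divisible
# (time-homogeneous Markov) evolution of the one-point probabilities.
T = np.array([[0.8, 0.3],
              [0.2, 0.7]])
p1 = np.array([1.0, 0.0])
p2 = np.array([0.0, 1.0])

norms = []
for _ in range(10):
    norms.append(np.abs(p1 - p2).sum())   # variational distance at this step
    p1, p2 = T @ p1, T @ p2

# Monotonic decrease, as guaranteed by Theorem 2.4 and Eq. (13).
assert all(a >= b - 1e-12 for a, b in zip(norms, norms[1:]))
```

A non-divisible process would instead show a revival of `norms` at some step, signalling memory.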

In summary, classical Markovian processes are defined via multi-time conditional probabilities, Eq. (1). However, if the experimenter only has access to one-point probabilities, Markovian processes become equivalent to divisible processes. The latter are more easily characterized, as they only depend on properties of transition matrices and the 1-norm.

## 3 Markovianity in Quantum Processes

After the succinct review of classical Markovian processes in the previous section, here we shall try to adapt those concepts to the quantum case. By the adjective “quantum” we mean that the system undergoing evolution is a quantum system. Our aim is to find a simple definition of a quantum Markovian process by keeping a close analogy to its classical counterpart. Since this is not straightforward, we comment first on some points which make a definition of quantum Markovianity difficult to formulate. For the sake of simplicity, in the following we shall consider finite dimensional quantum systems unless otherwise stated.

### 3.1 Problems of a straightforward definition

Since the quantum theory is a statistical theory, it seems meaningful to ask for some analogue to classical stochastic processes and particularly Markov processes. However, the quantum theory is based on non-commutative algebras and this makes its analysis considerably more involved. Indeed, consider the classical definition of a Markov process, Eq. (1); to formulate a similar condition in the quantum realm we need a way to obtain the conditional probabilities P(xn,tn|xn−1,tn−1;…;x0,t0) for quantum systems. The problem arises because we can sample a classical random variable without affecting its posterior statistics; however, in order to “sample” a quantum system, we need to perform measurements, and these measurements disturb the state of the system and affect the subsequent outcomes. Thus, P(xn,tn|xn−1,tn−1;…;x0,t0) does not only depend on the dynamics but also on the measurement process, and a definition of quantum Markovianity in terms of it, even if possible, does not seem very appropriate. Actually, in such a case the Markovian character of a quantum dynamical system would depend on which measurement scheme is chosen to obtain these probabilities. This is very inconvenient as the definition of Markovianity should be independent of what is required to verify it.

### 3.2 Definition in terms of one-point probabilities: divisibility

Given the aforementioned problems to construct P(xn,tn|xn−1,tn−1;…;x0,t0) in the quantum case, a different approach focuses on the study of one-time probabilities P(x,t). For these, the classical definition of Markovianity reduces to the concept of divisibility (see Definition 2.2), and a very nice property is that divisibility may be defined in the quantum case without any explicit mention of measurement processes. To define quantum Markovianity in terms of divisibility may seem to lose generality; nevertheless Theorem 2.3 and Corollary 2.2 assert that this loss is innocuous, as divisibility and Markovianity are equivalent properties at the level of one-time probabilities. These probabilities are the only ones that can be constructed in the quantum case while avoiding the difficulties associated with measurement disturbance.

Let us consider a system in a quantum state given by some (non-degenerate) density matrix ρ; the spectral decomposition yields

 ρ=∑xp(x)|ψ(x)⟩⟨ψ(x)|. (23)

Here the eigenvalues p(x) form a classical probability distribution, which may be interpreted as the probabilities for the system to be in the corresponding eigenstate |ψ(x)⟩,

 P(|ψ(x)⟩):=p(x). (24)

Consider now some time evolution of the quantum system such that the spectral decomposition of the initial state is preserved; ρ is mapped to

 ρ(t)=∑xp(x,t)|ψ(x)⟩⟨ψ(x)|∈S, (25)

where S denotes the set of quantum states with the same eigenvectors as ρ. Since this process can be seen as a classical stochastic process on the variable x, which labels the eigenstates |ψ(x)⟩, we consider it to be divisible if the evolution of p(x,t) satisfies the classical definition of divisibility (Definition 2.2). In such a case, there are transition matrices T(x1,t1|x0,t0), such that

 p(x1,t1)=∑x0∈XT(x1,t1|x0,t0)p(x0,t0), (26)

fulfilling Eqs. (11), (12) and (13). This Eq. (26) can be written in terms of density matrices as

 ρ(t1)=E(t1,t0)[ρ(t0)]. (27)

Here, E(t1,t0) is a dynamical map that preserves the spectral decomposition of ρ(t0) and satisfies

 E(t1,t0)[ρ(t0)] =∑x0∈Xp(x0,t0)E(t1,t0)[|ψ(x0)⟩⟨ψ(x0)|] =∑x1,x0∈XT(x1,t1|x0,t0)p(x0,t0)|ψ(x1)⟩⟨ψ(x1)|. (28)

Furthermore, because of Eqs. (11), (12) and (13), E(t1,t0) preserves positivity and the trace of any state in S and obeys the composition law

 E(t3,t1)=E(t3,t2)E(t2,t1),t3≥t2≥t1. (29)

On the other hand, since the maps E(t1,t0) are supposed to describe some quantum evolution, they are linear (there is no experimental evidence against this fact [15, 16, 17]). Thus, their action on another set S′ of quantum states, with spectral projectors different from those of S, is physically well defined provided that the positivity of the states of S′ is preserved (i.e. any density matrix in S′ is transformed into another valid density matrix). Hence, by consistency, we formulate the following general definition of a P-divisible process.

###### Definition 3.1 (P-divisible process).

We say that a quantum system subject to some time evolution characterized by the family of trace-preserving linear maps E(t2,t1) is P-divisible if, for every t2 and t1 ≥ t0, E(t2,t1) is a positive map (i.e. it preserves the positivity of any quantum state) and fulfils Eq. (29).

The reason to use the terminology “P-divisible” (which stands for positive-divisible) instead of “divisible” comes from the difference between positive and completely positive maps, which is essential in quantum mechanics. More explicitly, a linear map Υ acting on a matrix space M is a positive map if, for A ∈ M,

 A≥0⇒Υ(A)≥0, (30)

i.e. Υ transforms positive semidefinite matrices into positive semidefinite matrices. In addition, Υ is said to be completely positive if, for any auxiliary matrix space M′ and any B ∈ M ⊗ M′,

 B≥0⇒Υ⊗\mathds1(B)≥0. (31)

These concepts are properly extended to the infinite-dimensional case [18].

Completely positive maps are much easier to characterize than maps that are merely positive [19, 20]; they admit the so-called Kraus representation, Υ(B) = ∑k Kk B K†k, and it can be shown that if Eq. (31) is fulfilled for M′ with dim(M′) = dim(M), it is also true for any M′ of arbitrary dimension.

It is well-known that the requirement of positivity alone for a dynamical map presents difficulties. Concretely, in order to keep the positivity of density matrices in the presence of entanglement with an additional system we must impose complete positivity instead of positivity [14, 21, 22, 23, 24, 25, 26]. Thus, we are now able to give a definition of a quantum Markovian process.
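The gap between positive and completely positive maps can be probed numerically via Choi's criterion: a map Υ on d × d matrices is completely positive iff its Choi matrix (Υ ⊗ 𝟙)(|Ω⟩⟨Ω|), with |Ω⟩ the unnormalized maximally entangled vector, is positive semidefinite. A minimal sketch, using the standard example of the transpose map (positive but not completely positive):

```python
import numpy as np

d = 2  # qubit example

def choi(map_func):
    """Choi matrix C = sum_ij map_func(E_ij) (x) E_ij."""
    C = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            Eij = np.zeros((d, d), dtype=complex)
            Eij[i, j] = 1.0
            C += np.kron(map_func(Eij), Eij)
    return C

def transpose(A):
    return A.T

def identity(A):
    return A

# The transpose is a positive map (it preserves eigenvalues) ...
rho = np.array([[0.7, 0.3], [0.3, 0.3]], dtype=complex)
assert np.linalg.eigvalsh(transpose(rho)).min() >= -1e-12

# ... but not completely positive: its Choi matrix (the SWAP operator)
# has a negative eigenvalue, while the identity map's Choi matrix is >= 0.
assert np.linalg.eigvalsh(choi(transpose)).min() < -0.5
assert np.linalg.eigvalsh(choi(identity)).min() >= -1e-12
```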

###### Definition 3.2 (Markovian quantum process).

We shall say that a quantum system subject to a time evolution given by some family of trace-preserving linear maps E(t2,t1) is Markovian (or divisible [27]) if, for every t2 and t1 ≥ t0, E(t2,t1) is a completely positive map and fulfills the composition law Eq. (29).

For the sake of comparison, the following table shows the clear parallelism between classical transition matrices and quantum evolution families in a Markovian process.

Before we move on, it is worth summarizing the argument leading to the definition of a Markovian quantum process, as it is the central concept of this work. Namely, since a direct definition from the classical condition Eq. (1) is problematic because of quantum measurement disturbance, we focus on one-time probabilities. For those, classical Markovian processes and divisible processes are equivalent, thus we straightforwardly formulate the divisibility condition for quantum dynamics preserving the spectral decomposition of a certain set of states S. Then the Markovian (or divisibility) condition for any quantum evolution follows by linearity when taking into account the complete positivity requirement in the quantum evolution. We have sketched this reasoning in the scheme presented in figure 1.

Finally, we review a fundamental result regarding differentiable quantum Markovian processes (i.e. processes such that the generator Lt, obtained as the derivative of E(t,t0) at time t, is well-defined). In this case, the following mathematical result is quite useful to characterize Markovian dynamics.

###### Theorem 3.1 (Gorini-Kossakowski-Sudarshan-Lindblad).

An operator is the generator of a quantum Markov (or divisible) process if and only if it can be written in the form

 dρ(t)dt=Lt[ρ(t)]=−i[H(t),ρ(t)]+∑kγk(t)[Vk(t)ρ(t)V†k(t)−12{V†k(t)Vk(t),ρ(t)}], (32)

where H(t) and Vk(t) are time-dependent operators, with H(t) self-adjoint, and γk(t) ≥ 0 for every k and time t.

This theorem is a consequence of the pioneering work by A. Kossakowski [28, 29] and co-workers [30], and independently G. Lindblad [31], who analyzed the case of time-homogeneous equations, i.e. time-independent generators L. For a complete proof including possible time-dependent generators Lt see [26, 32].
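A master equation of the form (32) is straightforward to integrate numerically. The sketch below is a minimal, hypothetical example: a single qubit with H = σz/2, one decay operator V = σ−, and an illustrative time-dependent rate γ(t) = 1 + sin(t)/2 ≥ 0 (so the evolution is Markovian by Theorem 3.1). The integrator is a simple midpoint (second-order) scheme, not a production solver.

```python
import numpy as np

sm = np.array([[0., 1.], [0., 0.]], dtype=complex)    # sigma_- (decay to |0>)
H = np.array([[0.5, 0.], [0., -0.5]], dtype=complex)  # H = sigma_z / 2

def L(t, rho):
    """Right-hand side of Eq. (32) with one channel and rate gamma(t) >= 0."""
    g = 1.0 + 0.5 * np.sin(t)   # hypothetical time-dependent rate
    diss = (sm @ rho @ sm.conj().T
            - 0.5 * (sm.conj().T @ sm @ rho + rho @ sm.conj().T @ sm))
    return -1j * (H @ rho - rho @ H) + g * diss

rho = np.array([[0.2, 0.4], [0.4, 0.8]], dtype=complex)   # initial pure state
dt, steps = 1e-3, 5000
for n in range(steps):
    t = n * dt
    k1 = L(t, rho)                                  # midpoint (RK2) step
    rho = rho + dt * L(t + dt / 2, rho + 0.5 * dt * k1)

# The GKSL generator preserves trace and positivity at all times.
assert abs(np.trace(rho) - 1.0) < 1e-6
assert np.linalg.eigvalsh(rho).min() > -1e-7
assert rho[1, 1].real < 0.05    # excited population has decayed
```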

### 3.3 Where is the memoryless property in quantum Markovian processes?

As mentioned before, the motivation behind Definition 3.2 for quantum Markovian processes has been to keep a formal analogy with the classical case. However, it is not immediately apparent that the memoryless property present in the classical case is also present in the quantum domain. There are at least two ways to visualize this property which is hidden in Definition 3.2. As discussed below, one is based on the contractive properties of the completely positive maps and the other resorts to a collisional model of system-environment interactions.

#### 3.3.1 Contractive property of a quantum Markovian process

Similarly to the classical case (see Section 2.3), memoryless properties of quantum Markovian processes become quite clear in hypothesis testing problems [33, 34]. In the quantum case, we consider a system, with associated Hilbert space H, whose state is represented by the density matrix ρ1 with probability q, and ρ2 with probability (1 − q). We wish to determine which density matrix describes the true state of the quantum system by performing a measurement. If we consider some general positive operator valued measure (POVM) {Πx}x∈A (cf. [14]), where A is the set of possible outcomes, we may split this set into two complementary subsets. If the outcome of the measurement is inside some A1 ⊂ A, then we say that the state is ρ1. Conversely, if the result of the measurement belongs to the complementary set A1c such that A1 ∪ A1c = A, we say that the state is ρ2. Let us group the results of this measurement in another POVM given by the pair {T, I − T}, with T := ∑x∈A1 Πx.

Thus, when the true state is ρ1 (which happens with probability q) we erroneously identify the state as ρ2 with probability

 ∑x∈A1c\Tr[ρ1Πx] = \Tr[ρ1(∑x∈A1cΠx)]=\Tr[ρ1(I−T)]. (33)

On the other hand, when the true state is ρ2 (which happens with probability 1 − q), we erroneously identify the state as ρ1 with probability

 ∑x∈A1\Tr[ρ2Πx]=\Tr[ρ2(∑x∈A1Πx)]=\Tr[ρ2T]. (34)

The problem in one-shot two-state discrimination is to examine the trade-off between the two error probabilities \Tr[ρ1(I−T)] and \Tr[ρ2T]. Thus, consider the best choice of T that minimizes the total averaged error probability

 min0≤T≤I{(1−q)\Tr[ρ2T]+q\Tr[ρ1(I−T)]} =min0≤T≤I{q+\Tr[(1−q)ρ2T−qρ1T]} =q−max0≤T≤I[\Tr(ΔT)], (35)

where Δ := qρ1 − (1 − q)ρ2 is a Hermitian operator, with vanishing trace in the unbiased case q = 1/2. Δ is sometimes called the Helstrom matrix [35]. We have the following result.

###### Theorem 3.2.

With the best choice of T, the minimum total error probability in the one-shot two-state discrimination problem becomes

 Pmin(fail)=min0≤T≤I{(1−q)\Tr[ρ2T]+q\Tr[ρ1(I−T)]}=[1−∥Δ∥1]/2, (36)

where ∥Δ∥1 := \Tr√(Δ†Δ) is the trace norm of the Helstrom matrix Δ.

Thus, note that when q = 0 or q = 1 we immediately obtain zero probability of wrongly identifying the true state.

###### Proof.

The proof follows the same steps as for the unbiased case (see [14, 36]). The spectral decomposition allows us to write Δ = Δ+ − Δ−, with Δ± positive semidefinite operators with orthogonal supports, where Δ+ contains the positive eigenvalues of Δ and −Δ− the negative ones. Then it is clear that for 0 ≤ T ≤ I

 \Tr(ΔT)=\Tr(Δ+T)−\Tr(Δ−T)≤\Tr(Δ+T)≤\Tr(Δ+), (37)

so that

 Pmin(fail)=q−max0≤T≤I[\Tr(ΔT)]=q−\Tr(Δ+). (38)

On the other hand, because the supports of Δ± are orthogonal (in other words Δ+Δ− = 0), the trace norm of Δ is

 ∥Δ∥1=∥Δ+∥1+∥Δ−∥1=\Tr(Δ+)+\Tr(Δ−). (39)

Since

 \Tr(Δ+)−\Tr(Δ−)=\Tr(Δ)=2q−1, (40)

we have

 ∥Δ∥1=2\Tr(Δ+)+(1−2q). (41)

Using this relation in (38) we straightforwardly obtain the result (36). ∎

Thus the trace norm of Δ gives our capability to distinguish correctly between ρ1 and ρ2 in the one-shot two-state discrimination problem.
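Theorem 3.2 is easy to verify numerically: for Hermitian Δ the trace norm is the sum of the absolute values of its eigenvalues, and the optimal T is the projector onto the positive eigenspace, so Eqs. (36) and (38) must agree. A minimal sketch with two hypothetical qubit states:

```python
import numpy as np

# Hypothetical qubit states and an unbiased prior.
rho1 = np.array([[0.9, 0.1], [0.1, 0.1]], dtype=complex)
rho2 = np.array([[0.5, 0.0], [0.0, 0.5]], dtype=complex)
q = 0.5

Delta = q * rho1 - (1 - q) * rho2           # Helstrom matrix
evals = np.linalg.eigvalsh(Delta)
trace_norm = np.abs(evals).sum()            # ||Delta||_1 for Hermitian Delta

P_fail = (1 - trace_norm) / 2               # Eq. (36)

# Optimal T = projector onto the positive eigenspace of Delta,
# so P_fail = q - Tr(Delta_+), Eq. (38).
tr_plus = evals[evals > 0].sum()
assert abs(P_fail - (q - tr_plus)) < 1e-12
assert 0 <= P_fail <= 0.5
```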

On the other hand, the following theorem connects trace-preserving positive maps with the trace norm. It was first proven by Kossakowski in references [28, 29], while Ruskai also analyzed the necessary condition in [37].

###### Theorem 3.3.

A trace-preserving linear map E is positive if and only if, for any Hermitian operator Δ acting on H,

 ∥E(Δ)∥1≤∥Δ∥1. (42)
###### Proof.

Assume that E is positive and trace-preserving; then for every positive semidefinite Δ ≥ 0 the trace norm is also preserved, ∥E(Δ)∥1 = \Tr[E(Δ)] = \Tr(Δ) = ∥Δ∥1. Consider now a Δ which is not necessarily positive semidefinite; then by using the same decomposition as in the proof of Theorem 3.2, Δ = Δ+ − Δ−, we have

 ∥E(Δ)∥1 =∥E(Δ+)−E(Δ−)∥1 ≤∥E(Δ+)∥1+∥E(Δ−)∥1=∥Δ+∥1+∥Δ−∥1=∥Δ∥1, (43)

where the penultimate equality follows from the preservation of the trace norm of the positive semidefinite operators Δ±. Therefore, E fulfils Eq. (42).

Conversely, assume that E satisfies Eq. (42) and preserves the trace; then for a positive semidefinite Δ ≥ 0 we have the following chain of inequalities:

 ∥Δ∥1=\Tr(Δ)=\Tr[E(Δ)]≤∥E(Δ)∥1≤∥Δ∥1,for Δ≥0,

hence ∥E(Δ)∥1 = \Tr[E(Δ)] for Δ ≥ 0. Since ∥A∥1 = \Tr(A) if and only if A ≥ 0, we obtain that E(Δ) ≥ 0. ∎
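The contraction property of Theorem 3.3 can be illustrated with any trace-preserving positive map. The sketch below uses the qubit depolarizing channel E(A) = (1 − p)A + p\,\Tr(A)I/2 (completely positive, hence positive) applied to random Hermitian operators; the channel and parameters are illustrative choices, not taken from the text.

```python
import numpy as np

def depolarize(A, p=0.3):
    """Qubit depolarizing channel: trace-preserving and (completely) positive."""
    return (1 - p) * A + p * np.trace(A) * np.eye(2) / 2

def trace_norm(A):
    """Trace norm via eigenvalues; valid for Hermitian A."""
    return np.abs(np.linalg.eigvalsh(A)).sum()

rng = np.random.default_rng(1)
for _ in range(100):
    M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    Delta = M + M.conj().T                   # random Hermitian operator
    # Eq. (42): the trace norm never increases under the map.
    assert trace_norm(depolarize(Delta)) <= trace_norm(Delta) + 1e-12
```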

There is a clear parallelism between this theorem and Theorem 2.4 for classical stochastic processes. As a result, quantum Markov processes are also characterized in the following way.

###### Theorem 3.4.

A quantum evolution E(t2,t1) is Markovian if and only if, for all t2 ≥ t1 ≥ t0,

 ∥[E(t2,t1)⊗\mathds1](~Δ)∥1≤∥~Δ∥1, (44)

for any Hermitian operator ~Δ acting on H ⊗ H.

###### Proof.

Since for a quantum Markovian process E(t2,t1) is completely positive for any t2 ≥ t1, the map E(t2,t1) ⊗ 𝟙 is positive, and the result follows from Theorem 3.3. ∎

Therefore, similarly to the classical case, a quantum Markovian process increases monotonically the averaged probability Pmin(fail), Eq. (36), to give the wrong answer in the one-shot two-state discrimination problem. More concretely, consider a quantum system “S” which evolves from t0 to the current time instant t1, through some dynamical map E(t1,t0). This system was prepared at t0 in the state ρ1 with probability q and