###### Abstract

The positive and not completely positive maps of density matrices, which are contractive maps, are discussed as elements of a semigroup. A new kind of positive map (the purification map), which is a nonlinear map, is introduced. Density matrices are considered as vectors, and linear maps among matrices are represented by superoperators given in the form of higher-dimensional matrices. The probability representation of spin states (spin tomography) is reviewed and the unitary spin tomogram of spin states is presented. Properties of the tomograms as probability distribution functions are studied. The notion of tomographic purity of spin states is introduced. Entanglement and separability of density matrices are expressed in terms of properties of the tomographic joint probability distributions of random spin projections, which depend also on unitary group parameters. A new positivity criterion for hermitian matrices is formulated. An entanglement criterion is given in terms of a function depending on unitary group parameters and on the parameters of the semigroup of positive maps. The function is constructed as a sum of moduli of tomographic symbols of the hermitian matrix obtained after the action on the density matrix of the composite system of a positive but not completely positive map of the subsystem density matrix. Some two-qubit and two-qutrit states are considered as examples of entangled states. The connection with the star-product quantisation is discussed. The structure of the set of density matrices and their relation to the unitary group and the Lie algebra of the unitary group are studied. Nonlinear quantum evolution of a state vector obtained by applying the purification rule to density matrices evolving via dynamical maps is considered. Some connections of positive maps and entanglement with random matrices are discussed and used.


## Section 1 Introduction

The states in quantum mechanics are associated with vectors in Hilbert space [1] (it is better to say with rays) in the case of pure states. For a mixed state, one associates the state with a density matrix [2, 3]. In classical mechanics (statistical mechanics), the states are associated with joint probability distributions in phase space. There is an essential difference in the concept of states in classical and quantum mechanics. This difference is clearly pointed out by the phenomenon of entanglement. The notion of entanglement [4] is related to the quantum superposition rule of the states of subsystems for a given multipartite system. For pure states, the notion of entanglement and separability can be given as follows.

If the wave function [5] of a state of a bipartite system is represented as the product of two wave functions depending on the coordinates of the subsystems, the state is simply separable; otherwise, the state is simply entangled. An intrinsic approach to the entanglement measure was suggested in [6]. The measure was introduced as the distance between the system density matrix and the tensor product of the associated states of the subsystems, the association being realized via partial traces. There are several other characteristics and measures of entanglement considered by several authors [7–13]. For example, there are measures related to entropy (see [14–24]). Linear entropy of entanglement was used in [25–27], “concurrences” in [28, 29], and a “covariance entanglement measure” in [30]. Each of the entanglement measures describes some degree of correlation between the subsystems’ properties.

The notion of entanglement is not an absolute notion for a given system but depends on the decomposition into subsystems. The same quantum state can be considered as entangled, if one kind of division of the system into subsystems is given, or as completely disentangled, if another decomposition of the system into subsystems is considered.

For instance, the state of two continuous quadratures can be entangled in Cartesian coordinates and disentangled in polar coordinates. Coordinates are considered as measurable observables labelling the subsystems of the given system. The choice of different subsystems mathematically implies the existence of two different sets of the subsystems’ characteristics (we focus on the bipartite case). We may consider the Hilbert space of states or . The Hilbert space for the total system is, of course, the same, but the index means that there are two sets of operators and , which select subsystem states 1 and 2. The index means that there are two other sets of operators and , which select subsystem states and . The operators and have specific properties. They are represented as tensor products of operators acting in the space of states of subsystem 1 (or 2) and unit operators acting in subsystem 2 (or 1). In other words, we consider the space , which can be treated as the tensor product of spaces and or and . In the subsystems and , there are basis vectors and , and in the subsystems and there are basis vectors and . The vectors and form the sets of basis vectors in the composite Hilbert space, respectively. These two sets are related by means of a unitary transformation. An example of such a composite system is a bipartite spin system.

If one has spin- [the space ] and spin- [the space
] systems, the combined system can be treated as having basis

Another basis in the composite-system-state space can be considered in the form , where is one of the numbers and . The basis is related to the basis by means of the unitary transform given by Clebsch–Gordan coefficients . From the viewpoint of the given definition, the states are entangled states in the original basis. Another example is the separation of the hydrogen atom in terms of parabolic coordinates used while discussing the Stark effect.

The spin states can be described by means of the tomographic map [31–33]. For bipartite spin systems, the states were described by the tomographic probabilities in [34, 35]. Some properties of the tomographic spin description were studied in [36]. In the tomographic approach, the problems of the quantum state entanglement can be cast into the form of some relations among the probability distribution functions. On the other hand, to have a clear picture of entanglement, one needs a mathematical formulation of the properties of the density matrix of the composite system, a description of the linear space of the composite system states. Since a density matrix is hermitian, the space of states may be embedded as a subset of the Lie algebra of the unitary group, carrying the adjoint representation of , where is the dimension of the spin states of two spinning particles. Thus one may try to characterize the entanglement phenomena by using various structures present in the space of the adjoint representation of the group.

The aim of this paper is to give a review of different aspects of density matrices and positive maps, to connect entanglement problems with the properties of tomographic probability distributions, and to discuss the properties of the convex set of positive states for a composite system by taking into account the subsystem structures. In [6] we used the Hilbert–Schmidt distance to calculate the measure of entanglement as the distance between a given state and the tensor product of the partial traces of the density matrix of the given state. In [37] another measure of entanglement, as a characteristic of subsystem correlations, was introduced. This measure is determined via the covariance matrix of some observables. A review of different approaches to the entanglement notion and entanglement measures is given in [38], where the approach to describing entanglement and separability of composite systems is based, e.g., on entropy methods.

Due to a variety of approaches to the entanglement problem, one needs to understand better what in reality the word “entanglement” describes. Is it a synonym of the word “correlation” between two subsystems or does it have to capture some specific correlations attributed completely and only to the quantum domain?

The paper is organized as follows.

In section 2 we discuss the division of composite systems into subsystems and the relation of the density matrix to the adjoint representation of the unitary group in generic terms of the vector representation of matrices; we also study completely positive maps of density matrices. In section 3 we consider a vector representation of probability distribution functions and the notion of distance between probability distributions and density matrices. In section 4 we present the definition of a separable quantum state of a composite system and a criterion of separability. In section 5 entanglement is considered in terms of operator symbols. The particular tomographic probability representation of quantum states and tomographic symbols are reviewed in section 6. Symbols of multipartite states are studied in section 7. In section 8 spin tomography is reviewed. An example of a qubit state is given in section 9. The unitary spin tomogram is introduced in section 10, while in section 11 dynamical maps and the corresponding quantum evolution equations are discussed, as well as examples of concrete positive maps. Conclusions and perspectives are presented in section 12.

## Section 2 Composite system

In this section, we review the meaning and notion of a composite system in terms of additional structures on the linear space of states for the composite system.

### 2.1 Difference of states and observables

In quantum mechanics, there are two basic aspects which are associated with linear operators acting in a Hilbert space. The first one is related to the concept of quantum state and the second one, to the concept of observable. These two concepts of state and observable are paired via a map with values in probability measures on the real line. Often states are described by Hermitian, nonnegative, trace-class matrices. The observables are described by Hermitian operators. Though both states and observables are identified with Hermitian objects, there is an essential difference between the corresponding objects. The observables have an additional product structure; thus we may consider the product of two linear operators corresponding to observables.

For the states, the notion of product is redundant: the product of two states is not a state. For states, one keeps only the linear structure of a vector space. For a finite-dimensional system, the Hermitian states and the Hermitian observables may be mapped into the Lie algebra of the unitary group . But the states correspond to nonnegative Hermitian operators, while observables can be associated with both types of operators, nonnegative and not. The space of states is therefore a convex-linear space which, in principle, is not equipped with a product structure. Due to this, transformations in the linear space of states need not preserve any product structure. In the set of observables, one has to be concerned with what happens to the product of operators when some transformations are performed. State vectors can be transformed into other state vectors. Density operators also can be transformed. We will consider linear transformations of the density operators. The density operator has nonnegative eigenvalues. In any representation, the diagonal elements of the density matrix have the physical meaning of a probability distribution function.

The density operator can be decomposed as a sum of eigenprojectors with coefficients equal to its eigenvalues. Each one of the projectors defines a pure state. There exists a basis in which every eigenprojector of rank one is represented by a diagonal matrix of rank one with only one matrix element equal to one and all other matrix elements equal to zero. Other density matrices with similar properties belong to the orbit of the unitary group through the starting eigenprojector. The number of distinct nonzero eigenvalues determines the class of the orbit. Since density matrices of higher rank belong to an appropriate orbit of a convex sum of the different diagonal eigenprojectors (in a special basis), we may say that generic density matrices belong to the orbits of the unitary group acting on the diagonal density matrices, which belong to the Cartan subalgebra of the Lie algebra of the unitary group. Any convex sum of density matrices can be treated as a mean value of a random density matrix. The positive coefficients of the convex sum can be interpreted as a probability distribution function which performs the averaging that provides the final value of the convex sum. The set of density matrices may be identified with the union of the orbits of the unitary group acting on diagonal density matrices considered as elements of the Cartan subalgebra.
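The spectral decomposition and the unitary-orbit picture above can be illustrated numerically. The following is a minimal sketch (not taken from the paper, and assuming NumPy): a qubit density matrix is decomposed into weighted rank-one eigenprojectors, and the same unitary that diagonalizes it carries it to the diagonal element of its orbit.

```python
import numpy as np

# A hermitian, trace-one, positive 2x2 matrix chosen for illustration.
rho = np.array([[0.7, 0.2 - 0.1j],
                [0.2 + 0.1j, 0.3]])
evals, U = np.linalg.eigh(rho)                 # rho = U diag(evals) U^dagger

# Sum of rank-one eigenprojectors weighted by eigenvalues reconstructs rho.
reconstructed = sum(lam * np.outer(U[:, k], U[:, k].conj())
                    for k, lam in enumerate(evals))
assert np.allclose(reconstructed, rho)

# The diagonal form (an element of the Cartan subalgebra) on the same orbit:
diag_form = U.conj().T @ rho @ U
assert np.allclose(diag_form, np.diag(evals), atol=1e-12)
```

Any other state on the same orbit is obtained by conjugating the diagonal form with a different unitary.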

### 2.2 Matrices as vectors, density operators and superoperators

When matrices represent states, it may be convenient to identify them with vectors. In this case, a density matrix can be considered as a vector with additional properties of its components. If the identifications are done carefully, we can view the real Hilbert space of density matrices in terms of vectors with real components. In this case, linear transforms of the matrix can be interpreted as matrices called superoperators. It means that density matrices, viewed as vectors undergoing real linear transformations, are acted on by the matrices representing the action of the superoperators of the linear map. This construction can be continued; thus we can get a chain of vector spaces of higher and higher dimensions. Let us first introduce some extra constructions for the map of a matrix onto a vector. Given a rectangular matrix with elements , where and , one can consider the matrix as a vector with components constructed by the following rule:

(\theequation) | |||

Thus we construct the map

We have introduced the linear operator which maps the matrix onto a vector . Now we introduce the inverse operator which maps a given column vector in the space of dimension onto a rectangular matrix. This means that, given a vector , we relabel its components by introducing two indices and . The relabeling is accomplished according to (2.2). Then we collect the relabeled components into a matrix table. Eventually we get the map

(\theequation) |

The composition of these two maps

(\theequation) |

acts as the unit operator in the linear space of vectors.

Given a matrix , the considered map can also be applied. The matrix can be treated as an -dimensional vector and, vice versa, a vector of dimension may be mapped by this procedure onto the matrix.
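The matrix-to-vector map and its inverse can be sketched in a few lines. This is an illustrative assumption of a row-stacking convention (the paper's exact index rule is not reproduced here), using NumPy:

```python
import numpy as np

# Sketch of the matrix <-> vector map: rows of the matrix are stacked into
# one column, and the inverse map relabels the components back.
def mat_to_vec(A):
    return A.reshape(-1)            # row-stacking of matrix elements

def vec_to_mat(v, n, m):
    return v.reshape(n, m)          # inverse relabeling of the components

A = np.arange(6).reshape(2, 3)      # a rectangular 2x3 matrix
v = mat_to_vec(A)                   # the corresponding 6-component vector
assert np.array_equal(vec_to_mat(v, 2, 3), A)   # composition acts as identity
```

The composition of the two maps acts as the unit operator on vectors, as stated above.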

Let us consider a linear operator acting on the vector and related to a linear transform of the matrix . First, we study the correspondence of the linear transform of the form

(\theequation) |

to the transform of the vector

(\theequation) |

One can show that the matrix is determined by the tensor product of the matrix and unit matrix, i.e.,

(\theequation) |

Analogously, the linear transform of the matrix of the form

(\theequation) |

induces the linear transform of the vector of the form

(\theequation) |

where the matrix reads

(\theequation) |

Similarity transformation of the matrix of the form

(\theequation) |

induces the corresponding linear transform of the vector of the form

(\theequation) |

where the matrix reads

(\theequation) |
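The three correspondences above (left multiplication, right multiplication, and the similarity-type transform) can be checked numerically. This sketch assumes the row-stacking vectorization of the previous example, for which the standard Kronecker-product identities hold:

```python
import numpy as np

# With vec(X) = row-stacking of X: left multiplication A X corresponds to
# (A kron 1), right multiplication X B to (1 kron B^T), and A X B to
# (A kron B^T) acting on the vector vec(X).
rng = np.random.default_rng(0)
A, B, X = (rng.standard_normal((3, 3)) for _ in range(3))
vec = lambda M: M.reshape(-1)
I = np.eye(3)

assert np.allclose(np.kron(A, I) @ vec(X), vec(A @ X))      # left action
assert np.allclose(np.kron(I, B.T) @ vec(X), vec(X @ B))    # right action
assert np.allclose(np.kron(A, B.T) @ vec(X), vec(A @ X @ B))  # similarity type
```

With the column-stacking convention the roles of the two Kronecker factors are interchanged, so the convention must be fixed once and used consistently.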

Starting with vectors, one may ask how to identify on them a product structure which would make into an algebra homomorphism. An associative algebraic structure on the vector space may be defined by imitating the procedure one uses to define star-products on the space of functions on phase space. One can define the associative product of two -vectors and using the rule

(\theequation) |

where

(\theequation) |

If one applies a linear transform to the vectors , , of the form

and requires the invariance of the star-product kernel, one finds

The kernel (structure constants) which determines the associative star-product satisfies a quadratic equation. Thus, if one wants the vector star-product to correspond to the standard matrix product (row by column), the matrix must be constructed appropriately. For example, if the vector star-product is commutative, the matrix corresponding to the -vector can be chosen as a diagonal matrix. This consideration shows that the map of matrices onto vectors provides the star-product of the vectors (defining the structure constants, or the kernel, of the star-product) and, conversely, if one starts with vectors and uses matrices with the standard multiplication rule, the map is determined by the structure constants (by the kernel of the vector star-product).
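The kernel reproducing the row-by-column matrix product can be written out explicitly. The following sketch (an illustration under the row-stacking convention assumed earlier, not the paper's notation) builds the structure-constant tensor for 2×2 matrices and checks that the star-product of two vectorized matrices equals the vectorized matrix product:

```python
import numpy as np

# Kernel K with K[(i,j),(i,k),(k,j)] = 1 encodes (AB)_ij = sum_k A_ik B_kj
# as a bilinear star-product on the 4-vectors of the matrix elements.
n = 2
K = np.zeros((n * n, n * n, n * n))
for i in range(n):
    for j in range(n):
        for k in range(n):
            K[i * n + j, i * n + k, k * n + j] = 1.0

rng = np.random.default_rng(1)
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))
star = np.einsum('cab,a,b->c', K, A.reshape(-1), B.reshape(-1))
assert np.allclose(star, (A @ B).reshape(-1))   # star-product = matrix product
```

Associativity of the matrix product then translates into the quadratic equation on the kernel mentioned above.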

The constructed space of matrices associated with vectors enables one to enlarge the dimensionality of the group acting in the linear space of matrices in comparison with the standard one, i.e., we may relax the requirement of invariance of the product structure. In general, given a matrix the left action, the right action, and the similarity transformation of the matrix are related to the complex group . On the other hand, the linear transformations in the linear space of -vectors obtained by using the introduced map are determined by the matrices belonging to the group . There are transformations on the vectors which cannot be simply represented on matrices. If is a linear homogeneous function of the matrix , we may represent it by

Under rather clear conditions, can be expressed in terms of its nonnormalized left and right eigenvectors:

being an index for eigenvalues, which corresponds to

There are possible linear transforms on the matrices, and corresponding linear transforms on the induced vector space, which do not give rise to a group structure but possess only the structure of an algebra. One can describe the map of matrices (source space) onto vectors (target space) using a specific basis in the space of the matrices. The basis is given by the matrices with all matrix elements equal to zero except the element in the th row and th column, which is equal to unity. One has the obvious property

(\theequation) |

In our procedure, the basis matrix is mapped onto the basis column-vector , which has all components equal to zero except the unity component related to the position in the matrix determined by the numbers and . Then one has

(\theequation) |

For example, for similarity transformation of the finite matrix , one has

(\theequation) |

Now we will define the notion of a ‘composite’ vector, which corresponds to dividing a quantum system into subsystems.

We will use the following terminology.

In general, the given linear space of dimensionality has the structure of a bipartite system if the space is equipped with the operator and the matrix (obtained by means of the map) has matrix elements in the factorizable form

(\theequation) |

This corresponds to the special case of nonentangled states. Otherwise, one needs

In fact, to consider the entanglement phenomenon in detail, in the bipartite system of spin-1/2 one has to introduce a hierarchy of three linear spaces. The first space of pure spin states is the two-dimensional linear space of complex vectors

(\theequation) |

In this space, the scalar product is defined as follows:

(\theequation) |

So it is a two-dimensional Hilbert space. We do not equip this space with a vector star-product structure. In the primary linear space, one introduces linear operators which are described by 2×2 matrices . Due to the map discussed in the previous section, the matrices are represented by 4-vectors belonging to the second, complex 4-dimensional space. The star-product of the vectors determined by the kernel is defined in such a manner that it corresponds to the standard rule of multiplication of the matrices.

In addition to the star-product structure, we introduce the scalar product of the vectors and , in view of the definition

(\theequation) |

which is the trace formula for the scalar product of matrices.

This means introducing the real metric in the standard notation for scalar product

(\theequation) |

where the matrix is of the form

(\theequation) |

The scalar product is invariant under the action of the group of nonsingular 4×4 matrices , which satisfy the condition

(\theequation) |

The product of matrices satisfies the same condition since

Thus, the space of operators in the primary two-dimensional space of spin states is mapped onto the linear space which is equipped with a scalar product (metric Hilbert space structure) and an associative star-product (a kernel satisfying the quadratic associativity equation). In the linear space of the 4-vectors , we introduce linear operators (superoperators), which can be associated with the algebra of 4×4 complex matrices.
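The trace formula for the scalar product of matrices coincides with the ordinary scalar product of the corresponding vectors. A short numerical sketch (assuming NumPy and the vectorization convention used above):

```python
import numpy as np

# Tr(A^dagger B) equals the complex scalar product of the 4-vectors obtained
# by stacking the matrix elements of A and B.
rng = np.random.default_rng(2)
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

trace_form = np.trace(A.conj().T @ B)
vector_form = np.vdot(A.reshape(-1), B.reshape(-1))  # conjugates first factor
assert np.allclose(trace_form, vector_form)
```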

Let us now focus on density matrices. This means that our matrix is considered as a density matrix which describes a quantum state. We consider here the action of unitary transformations of the density matrices and the corresponding transformations on the vector space. If one has the structure of a bipartite system, we also consider the action of local gauge transformations, both in the “source space” of density matrices and in the “target space” of the corresponding vectors.

The density matrix has matrix elements

(\theequation) |

Since the density matrix is hermitian, it can always be identified with an element of a convex subset of the linear space associated with the Lie algebra of the group, on which the group acts via the adjoint representation

(\theequation) |

The system is said to be bipartite if the space of representation is equipped with an additional structure. This means that for

where, for simplicity, , one can first make the map of the matrix onto an -dimensional vector according to the previous procedure, i.e., one equips the space with an operator . Given this vector, one relabels the vector components according to the rule

(\theequation) |

i.e., obtaining again a square matrix

(\theequation) |

The unitary transform (\theequation) of the density matrix induces a linear transform of the vector of the form

(\theequation) |

There exist linear transforms (called positive maps) of the density matrix which preserve its trace, hermiticity, and positivity. In some cases, they have the following form, introduced in [39]

(\theequation) |

where are unitary matrices and are positive numbers.

If the initial density matrix is diagonal, i.e., it belongs to the Cartan subalgebra of the Lie algebra of the unitary group, the diagonal elements of the obtained matrix give a “smoother” probability distribution than the initial one. A generic transformation preserving the previously stated properties may be given in the form (see [39, 40])

(\theequation) |

For example, if the are taken as square roots of orthogonal projectors onto a complete set of states, the map takes the density matrix onto the diagonal density matrix which has the same diagonal elements as . In this case, the matrices have only one nonzero matrix element, which is equal to one. Such a map may be called a “decoherence map” because it removes all off-diagonal elements of the density matrix, destroying any phase relations. In quantum information terminology, one also uses the name “phase damping channel.” A more general map may be given if one takes generic diagonal density matrices whose eigenvalues are obtained by circular permutations from the initial one. Due to this map, one has a new matrix with the same diagonal matrix elements but with changed off-diagonal elements. The purity of this matrix is smaller than the purity of the initial one. This means that the map is contractive. All matrices with the same diagonal elements, up to permutations, belong to a given orbit of the unitary group.

For a large number of terms with randomly chosen matrices in the sum in (\theequation), the above map gives the most stochastic density matrix

Its four-dimensional matrix for the qubit case has four matrix elements different from zero. These matrix elements are equal to one and have the labels , , , . The map with two nonzero matrix elements provides a pure-state density matrix from any . The transform (\theequation) is a particular case of the transform (\theequation). We discuss the transforms separately since they are used in the literature in the presented form.
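The contractive character of the random-unitary map can be verified numerically. The following is a minimal sketch (assuming NumPy; the particular weights and unitaries are arbitrary illustrations, not taken from the paper) showing that the map preserves trace and hermiticity while the purity does not increase:

```python
import numpy as np

# The map rho' = sum_k p_k U_k rho U_k^dagger with unitary U_k and positive
# weights p_k summing to one: a random-unitary (contractive) positive map.
rng = np.random.default_rng(3)

def random_unitary(n):
    # QR decomposition of a complex Gaussian matrix yields a unitary factor.
    z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    q, _ = np.linalg.qr(z)
    return q

rho = np.diag([0.9, 0.1]).astype(complex)   # initial qubit density matrix
weights = rng.dirichlet(np.ones(10))        # positive numbers, sum to one

rho_out = np.zeros((2, 2), dtype=complex)
for p in weights:
    U = random_unitary(2)
    rho_out += p * U @ rho @ U.conj().T

assert np.isclose(np.trace(rho_out).real, 1.0)   # trace preserved
assert np.allclose(rho_out, rho_out.conj().T)    # hermiticity preserved
purity_in = np.trace(rho @ rho).real
purity_out = np.trace(rho_out @ rho_out).real
assert purity_out <= purity_in + 1e-12           # purity does not increase
```

With many randomly chosen unitaries the output approaches the maximally mixed state, in agreement with the remark above about the most stochastic density matrix.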

One can see that the constructed map of density matrices onto vectors provides the corresponding transforms of the vectors, i.e.,

(\theequation) |

and

(\theequation) |

It is obvious that the set of linear transforms of vectors which preserve the property of being images of density matrices is essentially larger than the set of standard unitary transforms of the density matrices.

Formulae (\theequation) and (\theequation) mean that the positive map superoperators acting on the density matrix in the vector representation are described by matrices

(\theequation) |

and

(\theequation) |

respectively.

The positive map is called “not completely positive” if

This map is related to a possible “nonphysical” evolution of a subsystem.

Formula (\theequation) can be considered in the context of the random-matrix representation. In fact, the matrix can be interpreted as the weighted mean value of the random matrix . The dependence of the matrix elements and the positive numbers on the index means that we have a probability distribution function and an averaging of the random matrix by means of this distribution function. So the matrix reads

(\theequation) |

Let us consider an example with 2×2 unitary matrices. We can consider a matrix of the group of the form

(\theequation) |

The 4×4 matrix takes the form

(\theequation) |

The matrix elements of the matrix are the means

(\theequation) | |||||

The moduli of these matrix elements are smaller than unity.

The determinant of the matrix reads

(\theequation) |

If one represents the matrix in block form

(\theequation) |

then

(\theequation) |

and

(\theequation) |

where is the Pauli matrix.

One can check that the product of two different matrices can be cast in the same form. This means that the matrices form a 9-parameter compact semigroup: the product of two matrices from the set belongs to the same set, i.e., the composition is inner, as it is for groups. There is a unit element in the semigroup; however, there exist elements which have no inverse. In our case, such elements are described, e.g., by matrices with zero determinant. But also elements which are matrices with nonzero determinants may have no inverse in this set, since the map corresponding to the inverse of these matrices is not a positive one. For example, in the case and , one has the matrices

(\theequation) |

The determinant of the matrix in this case is equal to zero. All the matrices have the eigenvector

(\theequation) |

i.e.,

(\theequation) |

This eigenvector corresponds to the density matrix

(\theequation) |

which is obviously invariant under the positive map.

For a random matrix, one has correlations of the random matrix elements, e.g.,

The matrix

(\theequation) |

maps the vector

(\theequation) |

onto the vector

(\theequation) |

This means that the positive map (\theequation) connects the positive density matrix with its transpose (or complex conjugate). This map can be presented as the connection of the matrix with its transpose in the form

There is no unitary transform connecting these matrices.
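Transposition is the standard example of a map that is positive but not completely positive: applied to a whole system it preserves positivity, but applied to one subsystem only (the partial transpose) it can produce a matrix with negative eigenvalues. A minimal numerical sketch of this well-known fact, assuming NumPy, using the two-qubit Bell state:

```python
import numpy as np

# Full transposition of a state is again a state; partial transposition of an
# entangled state need not be, which shows transposition is positive but not
# completely positive.
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)           # Bell state |00> + |11>
rho = np.outer(phi, phi)                            # entangled density matrix
assert np.all(np.linalg.eigvalsh(rho.T) >= -1e-12)  # full transpose: positive

rho4 = rho.reshape(2, 2, 2, 2)                      # indices (i, j; i', j')
rho_pt = rho4.transpose(0, 3, 2, 1).reshape(4, 4)   # transpose subsystem 2 only
print(np.linalg.eigvalsh(rho_pt).min())             # prints -0.5
```

The negative eigenvalue of the partially transposed matrix is exactly the mechanism behind the Peres partial-transpose entanglement criterion discussed later in tomographic form.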

There is a positive but not completely positive map in the -dimensional case, which is given by the generalized formula (for some )

In quantum information terminology, it is called “depolarizing channel.”

For the qubit case, the matrix form of this map reads

(\theequation) |

Thus we have constructed the matrix representation of a positive map of density operators of the spin-1/2 system. This particular set of matrices realizes a representation of the semigroup of real numbers . If one considers the product , the result belongs to the semigroup. Only two elements, 1 and , of the semigroup have an inverse; these two elements form a finite subgroup of the semigroup. The semigroup itself, without the element , can be embedded into the group of real numbers with the natural multiplication rule. Each matrix has an inverse element in this group, but all the parameters of the inverse elements lie outside the segment . The group of the real numbers is commutative, and the matrices of the nonunitary representation of this group commute too. It means that we have a nonunitary, reducible representation of the semigroup, which is also commutative. To construct this representation, one needs to use the map of matrices onto vectors discussed in the previous section. Formulae (\theequation) and (\theequation) can also be interpreted in the context of the random-matrix representation, but in this case we use the uniform distribution for averaging. So one has equality (\theequation) in the form
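The semigroup property of the parameters can be checked directly. This sketch assumes the depolarizing form Φ_p(ρ) = p ρ + (1 − p) I/2 for a qubit (an illustrative assumption consistent with the depolarizing channel mentioned above): the maps compose as Φ_{p1} ∘ Φ_{p2} = Φ_{p1 p2}, so the parameters multiply, and inverting a map with p < 1 would require a parameter outside the admissible segment.

```python
import numpy as np

# Depolarizing map for a qubit; assumes Tr(rho) = 1.
def depolarize(rho, p):
    return p * rho + (1 - p) * np.eye(2) / 2

rho = np.array([[0.8, 0.3], [0.3, 0.2]])    # trace-one hermitian matrix
p1, p2 = 0.6, 0.5
lhs = depolarize(depolarize(rho, p2), p1)   # composition of two maps
rhs = depolarize(rho, p1 * p2)              # single map, product parameter
assert np.allclose(lhs, rhs)                # semigroup: parameters multiply
```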

(\theequation) |

and the equality

(\theequation) |

which provides constraints for the random matrices used.

Using the random matrix formalism, the positive (but not completely positive) maps can be presented in the form

One can characterize the action of a positive map on a density matrix by the parameter

As a remark we note that in [39] the positive maps (\theequation) and (\theequation) were used to describe the non-Hamiltonian evolution of quantum states for open systems.

We have to point out that, in general, such evolution is not described by a first-order-in-time differential equation. As in the previous case, if there are additional structures for the matrix in the form

(\theequation) |

which means associating with the initial linear space two extra linear spaces, where are considered as vector components in the -dimensional linear space and , are vector components in the -dimensional vector space, we see that one has a bipartite structure of the initial space of states [a bipartite structure of the space of the adjoint representation of the group ].

Usually the adjoint representation of any group is defined per se, without any reference to possible substructures. Here we introduce the space with an extra structure: in addition to being the space of the adjoint representation of the group , it has the structure of a bipartite system. The generalization to a multipartite (-partite) structure is straightforward. One needs only the representation of a positive integer in the form

(\theequation) |

If one considers the more general map given by superoperator (\theequation) rewritten in the form

the number of parameters determining the matrix can be easily evaluated. For example, for ,

where the matrix elements are complex numbers, the normalization condition provides four constraints for the real and imaginary parts of the matrix elements of the following matrix:

namely,

Due to the structure of the matrix , there are six complex parameters

or 12 real parameters.

The geometrical picture of the positive map can be clarified if one considers the transform of a positive density matrix into another density matrix as the transform of an ellipsoid into another ellipsoid. A generic positive transform means a generic transform of the ellipsoid, which changes its orientation, the values of its semiaxes, and its position in space. But the transform does not change the ellipsoidal surface into a hyperboloidal or paraboloidal surface. For pure states, the positive density matrix defines a quadratic form which is maximally degenerate. In this sense, the “ellipsoid” includes all its degenerate forms corresponding to density matrices of rank less than (in the -dimensional case). The number of parameters defining the map in the -dimensional case is equal to .

The linear space of Hermitian matrices is also equipped with the commutator structure defining the Lie algebra of the group . The kernel that defines this structure (the Lie product structure) is determined by the kernel that determines the star-product.

## Section 3 Distributions as vectors

In quantum mechanics, one needs the concept of distance between quantum states. In this section, we formulate this notion in terms of vectors. First, let us discuss the distance between conventional probability distributions, a notion well known in classical probability theory.

Given a probability distribution , , one can introduce the vector in the form of a column with components . The vector satisfies the condition

(\theequation) |

This set of vectors does not form a linear space but only a convex subset of one. Nevertheless, on this set one can introduce a distance between two distributions, namely the distance suggested by the vector-space structure of the ambient space:

(\theequation) |

Of course, one may use other identifications of distributions with vectors.

Since all , one can introduce as components of the vector . This vector can be thought of as a column with nonnegative components. The distance between the two distributions then takes the form

(\theequation) |

The two different definitions (\theequation) and (\theequation) can be used as distances between the distributions.
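The two definitions can be sketched numerically as follows (the function names are ours, not the paper's): the first uses the probabilities themselves as vector components, the second their square roots, which gives a Hellinger-type distance.

```python
import numpy as np

def euclidean_distance(p, q):
    """Distance using the probabilities themselves as vector components."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return np.linalg.norm(p - q)

def sqrt_distance(p, q):
    """Distance using sqrt(p_n) as components (a Hellinger-type distance)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return np.linalg.norm(np.sqrt(p) - np.sqrt(q))

p, q = [0.5, 0.5], [0.9, 0.1]
print(euclidean_distance(p, q))   # sqrt(0.32) ≈ 0.566
print(sqrt_distance(p, q))
```

Both vanish only when the two distributions coincide, and both are symmetric in their arguments.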

Let us discuss now the notion of distance between the quantum states determined by density matrices. In the density-matrix space (a subset of the linear space of the adjoint representation), one can introduce distances analogously. The first case is

(\theequation) |

and the second case is

(\theequation) |

In fact, the distances introduced can be written naturally as norms of vectors associated to density matrices

(\theequation) |

and

(\theequation) |

respectively.

In the above expressions, we use the scalar product of the vectors and , as well as that of the vectors and , respectively.

Both definitions follow immediately by identifying either the matrices and with vectors according to the map of the previous sections or the matrices and with vectors. Since the density matrices and have nonnegative eigenvalues, the matrices and are defined without ambiguity. This means that the vectors and are also defined without ambiguity. It is obvious that, using this construction and introducing a linear map of the positive vectors , one induces a nonlinear map of density matrices. Functions other than the square root can be used analogously to create other nonlinear positive maps.
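The induced nonlinear map can be sketched numerically as follows (our illustration; the conjugating matrix `s` and the trace renormalization are assumptions, chosen so the output is again a density matrix): take B = sqrt(ρ), act on B with a hermiticity-preserving linear map, then square and renormalize. The inner step is linear in B, but the composed map is nonlinear in ρ.

```python
import numpy as np

def matrix_sqrt(rho):
    """Square root of a positive Hermitian matrix via its spectral decomposition."""
    w, v = np.linalg.eigh(rho)
    w = np.clip(w, 0.0, None)               # guard against tiny negative round-off
    return (v * np.sqrt(w)) @ v.conj().T

def sqrt_induced_map(rho, s):
    """rho -> (s B s†)² / Tr[(s B s†)²], with B = sqrt(rho).

    s B s† is Hermitian whenever B is, so its square is positive
    semidefinite and the renormalized output is a density matrix.
    """
    b = s @ matrix_sqrt(rho) @ s.conj().T
    out = b @ b
    return out / np.trace(out).real

rho = np.array([[0.75, 0.25], [0.25, 0.25]], dtype=complex)
s = np.array([[1.0, 0.5], [0.0, 1.0]], dtype=complex)
rho_out = sqrt_induced_map(rho, s)
```

Applying the map to a mixture of two states differs from mixing the two mapped states, which exhibits the nonlinearity.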

## Section 4 Separable systems and separability criterion

According to the definition, the density matrix of a composite system is called separable (but not simply separable, i.e., not necessarily a single product state) if there exists a decomposition of the form

(\theequation) |

This is Hilbert’s problem of biquadrates: is a positive biquadratic form a sum of products of positive quadratic forms? In this formula, one may also use a sum over two different indices; by relabelling, such a sum can always be represented as a sum over a single index. The formula does not demand orthogonality of the density operators and for different . Since every density matrix is a convex sum of pure density matrices, one could demand that and be pure. The formula can be interpreted in the context of the random-matrix representation, reading

(\theequation) |

where and are considered as random density matrices of the subsystems and , respectively. One can use the structure of the set of density matrices, clarified above, as the union of orbits obtained by the action of the unitary group on rank-one projectors whose matrix form contains only one nonzero matrix element. Then a separable density matrix of a bipartite composite system can always be written as a sum of tensor products (or the corresponding mean tensor product), i.e., in (\theequation) the factors are state projectors. Each of the tensor products contains random unitary matrices of local transforms of a fixed local projector for one subsystem and for the second subsystem. This means that an arbitrary rank-one projector of a subsystem can always be presented in the product form , where is a local unitary transform and is a fixed projector.
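This construction is easy to illustrate numerically (the random seed and helper names below are ours): a convex sum of tensor products of randomly rotated copies of a fixed rank-one projector is, by construction, a separable density matrix.

```python
import numpy as np

rng = np.random.default_rng(7)

def random_unitary(n, rng):
    """Approximately Haar-random unitary from the QR decomposition of a Ginibre matrix."""
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))   # fix column phases

p0 = np.array([[1, 0], [0, 0]], dtype=complex)     # fixed rank-one projector |0><0|

weights = rng.random(5)
weights /= weights.sum()                           # convex combination

rho = np.zeros((4, 4), dtype=complex)
for w in weights:
    u1, u2 = random_unitary(2, rng), random_unitary(2, rng)
    rho += w * np.kron(u1 @ p0 @ u1.conj().T, u2 @ p0 @ u2.conj().T)
```

The result is Hermitian, positive, and of unit trace, as any separable density matrix must be.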

There are several criteria for a state to be separable. In the next sections, we suggest a new approach to the problem of separability and entanglement based on the tomographic probability description of quantum states. The states which cannot be represented in the form (\theequation) are, by definition, called entangled states [38]. Thus the states are entangled if in formula (\theequation) at least one coefficient is negative, which means that the positive ones can take values greater than unity.

Let us discuss the condition for the system state to be separable. According to the partial transpose criterion [41], if the system state is separable, then the partial transpose of the matrix (\theequation) is a positive density matrix. This condition is necessary but, in general, not sufficient for separability. Let us discuss this condition within the framework of the positive-map matrix representation. For example, for a spin-1/2 bipartite system, we have shown that the map of a density matrix onto its transpose belongs to the matrix semigroup of matrices . One should point out that this map cannot be obtained by means of averaging with all positive probability distributions . On the other hand, it is obvious that the generic criterion, which contains the Peres criterion as a special case, can be formulated as follows.

Let us map the density matrix of a bipartite system onto the vector . Let this vector be acted upon by an arbitrary matrix representing positive maps in the subsystems and . Thus we get a new vector

(\theequation) |

Let us construct the density matrix using the inverse map of the vectors onto matrices. If the initial density matrix is separable, the new density matrix must be positive (and separable).

In the case of the bipartite spin-1/2 system, by choosing and with being the matrix coinciding with the matrix , we obtain the Peres criterion as a special case of the separability criterion formulated above. Thus, our criterion means that a separable matrix keeps its positivity under the action of the tensor product of two semigroups. In the case of the bipartite spin-1/2 system, the 16×16 matrix of the semigroup tensor product of positive contractive maps (\theequation) is determined by 24 parameters. These parameters may be correlated.
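The partial-transpose test itself can be sketched numerically (the helper name `partial_transpose` is ours). For a two-qubit Bell state, the partially transposed matrix acquires a negative eigenvalue, so the state fails the necessary condition and is entangled:

```python
import numpy as np

def partial_transpose(rho, dims=(2, 2)):
    """Transpose the second subsystem of a bipartite density matrix."""
    d1, d2 = dims
    r = rho.reshape(d1, d2, d1, d2)          # indices (i, k; j, l)
    return r.transpose(0, 3, 2, 1).reshape(d1 * d2, d1 * d2)

bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)           # |Φ+> = (|00> + |11>)/sqrt(2)
rho = np.outer(bell, bell)

print(np.linalg.eigvalsh(partial_transpose(rho)))
# → [-0.5  0.5  0.5  0.5]: the negative eigenvalue certifies entanglement
```

For any separable state, the same operation returns a matrix with only nonnegative eigenvalues.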

Let us discuss the positive map (\theequation) determined by the semigroup for the -dimensional system. It can also be realized as follows.

A generic Hermitian matrix can be mapped onto an essentially real -vector by the map described above. The complex vector is mapped onto the real vector by multiplication by the unitary matrix , i.e.,

(\theequation) |

The matrix is composed of unit blocks and the blocks

(\theequation) |

where corresponds to a column and corresponds to a row in the matrix .

For example, in the case , one has the vector of the form

(\theequation) |

One has the equalities

(\theequation) |
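A sketch of such a map for the 2×2 case (the block convention below is our assumption, chosen so that the map preserves the Euclidean norm of the flattened matrix): the diagonal entries pass through unchanged, while the off-diagonal pair (a12, a21 = a12*) is rotated by the 2×2 unitary block (1/√2)[[1, 1], [-i, i]] into the real pair (√2 Re a12, √2 Im a12).

```python
import numpy as np

def hermitian_to_real_vector(a):
    """[a11, a22, sqrt(2)*Re a12, sqrt(2)*Im a12]; preserves the Euclidean norm."""
    assert np.allclose(a, a.conj().T), "input must be Hermitian"
    s = np.sqrt(2.0)
    return np.array([a[0, 0].real, a[1, 1].real,
                     s * a[0, 1].real, s * a[0, 1].imag])

a = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]])
v = hermitian_to_real_vector(a)
# Norm preservation: sum |a_ij|^2 equals |v|^2, since the map is unitary
assert np.isclose(np.linalg.norm(a), np.linalg.norm(v))
```

Positive maps can then be represented as real matrices acting on such real vectors.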

The semigroup preserves the trace of the density matrix. Also the discrete transforms, which are described by the matrix with diagonal matrix blocks of the form

(\theequation) |

preserve positivity of the density matrix.

For the spin case, the semigroup contains 12 parameters.

Thus, the direct product of the semigroup (\theequation) and the discrete group of the transform defines a positive map preserving positivity of the density operator. One can also include all the matrices corresponding to other not completely positive maps. The representation considered contains only real vectors and their real positive transforms. This means that one can construct a representation of the semigroup of positive maps by real matrices.

## Section 5 Symbols, star-product and entanglement

In this section, we describe how entangled and separable states can be studied using the properties of symbols of density operators of different kinds, e.g., from the viewpoint of the Wigner function or the tomogram. The general scheme of constructing the operator symbols is as follows [36].

Given a Hilbert space and an operator acting on this space, let us suppose that we have a set of operators acting on this space, parametrized by -dimensional vectors . We construct the -number function (we call it the symbol of the operator ) using the definition

(\theequation) |

Let us suppose that relation (\theequation) has an inverse, i.e., there exists a set of operators acting on the Hilbert space such that

(\theequation) |

One needs a measure in to define the integrals in the above formulae. We then consider relations (\theequation) and (\theequation) as determining an invertible map from the operator onto the function . Multiplying both sides of Eq. (\theequation) by the operator and taking the trace, one obtains the consistency condition for the operators and

(\theequation) |

The consistency condition (\theequation) follows from the relation

(\theequation) |

The kernel in (\theequation) is equal to the standard Dirac delta-function, if the set of functions is a complete set.

In fact, we could consider relations of the form

(\theequation) |

and

(\theequation) |

The most important property of the map is the existence of the associative product (star-product) of the functions.

We introduce the product (star-product) of two functions and corresponding to two operators and by the relationships

(\theequation) |

Since the standard product of operators on a Hilbert space is an associative product, i.e., , it is obvious that formula (\theequation) defines an associative product for the functions , i.e.,

(\theequation) |

Using formulae (\theequation) and (\theequation), one can write down a composition rule for two symbols and , which determines the star-product of these symbols. The composition rule is described by the formula

(\theequation) |

The kernel in the integral of (\theequation) is determined by the trace of the product of the basic operators, which we use to construct the map

(\theequation) |

The kernel function satisfies the composition property .
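A minimal finite-dimensional sketch of this scheme (our choice of basis, not the paper's): on a qubit, take U_k = σ_k/√2 (identity plus Pauli matrices) as both dequantizer and quantizer, so that f_A(k) = Tr(A U_k), A = Σ_k f_A(k) U_k, and the star-product kernel is K(k1, k2, k3) = Tr(U_{k1} U_{k2} U_{k3}). The composition rule then reproduces the operator product exactly.

```python
import numpy as np

# Orthonormal Hermitian basis of 2x2 matrices: Tr(U_j U_k) = delta_jk
paulis = [np.eye(2), np.array([[0, 1], [1, 0]]),
          np.array([[0, -1j], [1j, 0]]), np.diag([1, -1])]
U = [p.astype(complex) / np.sqrt(2) for p in paulis]

def symbol(a):
    """Symbol f_A(k) = Tr(A U_k); inverted by A = sum_k f_A(k) U_k."""
    return np.array([np.trace(a @ u) for u in U])

def reconstruct(f):
    return sum(fk * uk for fk, uk in zip(f, U))

# Star-product kernel K(k1, k2, k3) = Tr(U_k1 U_k2 U_k3)
K = np.array([[[np.trace(U[i] @ U[j] @ U[k]) for k in range(4)]
               for j in range(4)] for i in range(4)])

def star(fa, fb):
    """Composition rule: f_{AB}(k) = sum_{i,j} f_A(i) f_B(j) K(i, j, k)."""
    return np.einsum('i,j,ijk->k', fa, fb, K)

a = np.array([[1, 2j], [-2j, 0]], dtype=complex)
b = np.array([[0.5, 1], [1, -0.5]], dtype=complex)
# The star-product of the symbols equals the symbol of the operator product:
assert np.allclose(star(symbol(a), symbol(b)), symbol(a @ b))
assert np.allclose(reconstruct(symbol(a)), a)
```

Associativity of the star-product is inherited from the associativity of the matrix product, exactly as in the general argument above.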

## Section 6 Tomographic representation

In this section, we consider an example of the probability representation of quantum mechanics [42], in which the state is described by a family of probability distributions [43–45]. According to the general scheme, one can introduce for the operator the function , where

which we denote here as depending on the position and the parameters and of the reference frame

(\theequation) |

We call the function the tomographic symbol of the operator . The operator is given by