Partition Reduction for Lossy Data Compression Problem

Marek Śmieja
Institute of Computer Science
Department of Mathematics and Computer Science
Jagiellonian University
Łojasiewicza 6, 30-348 Kraków, Poland
Email: marek.smieja@ii.uj.edu.pl

Jacek Tabor
Institute of Computer Science
Department of Mathematics and Computer Science
Jagiellonian University
Łojasiewicza 6, 30-348 Kraków, Poland
Email: jacek.tabor@ii.uj.edu.pl
Abstract

We consider the computational aspects of the lossy data compression problem, where the compression error is determined by a cover of the data space. We propose an algorithm which reduces the number of partitions that have to be considered to find the entropy with respect to the compression error. In particular, we show that, in the case of a finite cover, the entropy is attained on some partition. We give an algorithmic construction of such a partition.

I Introduction

The basic description of lossy data compression consists of the quantization of the data space into a partition and a (binary) coding for this partition. Based on the approach of A. Rényi [1, 2] and E. C. Posner et al. [3, 4, 5], we have recently presented a notion of entropy which allows us to combine these two steps [6]. The main advantage of our description over classical ones is that we consider general probability spaces without a metric. This gives us more freedom in defining the error of coding.

In this paper we concentrate on the calculation of the entropy defined in [6]. We propose an algorithm which drastically reduces the computational effort needed to perform the lossy data coding procedure.

To explain our results precisely, let us first introduce basic definitions and give their interpretations. In this paper, if not stated otherwise, we always assume that $(X, \Sigma, \mu)$ is a subprobability space (we assume that $(X, \Sigma)$ is a measurable space and $\mu(X) \leq 1$). As mentioned, the procedure of lossy data coding consists of the quantization of the data space into a partition and a binary coding for this partition. We say that a family $\mathcal{P}$ is a partition if it is a countable family of measurable, pairwise disjoint subsets of $X$ such that

$$\mu\Big(X \setminus \bigcup_{P \in \mathcal{P}} P\Big) = 0. \qquad (1)$$

During encoding we map every point $x \in X$ to the unique $P \in \mathcal{P}$ such that $x \in P$. A binary coding for the partition $\mathcal{P}$ can be obtained simply by Huffman coding of the elements of $\mathcal{P}$.
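As a simple illustration, the following Python sketch (ours, not part of the paper) builds such a Huffman code for a finite partition, assuming every cell is identified by a label and carries its measure $\mu(P)$:

   # Hypothetical illustration: Huffman coding of partition cells.
   import heapq

   def huffman_codes(cell_probs):
       # cell_probs: dict label -> mu(P) > 0; returns dict label -> codeword.
       heap = [(p, i, {label: ""}) for i, (label, p) in enumerate(cell_probs.items())]
       heapq.heapify(heap)
       if len(heap) == 1:  # a single cell still needs one bit
           return {label: "0" for label in heap[0][2]}
       counter = len(heap)
       while len(heap) > 1:
           p0, _, c0 = heapq.heappop(heap)  # two least probable groups
           p1, _, c1 = heapq.heappop(heap)
           merged = {lab: "0" + code for lab, code in c0.items()}
           merged.update({lab: "1" + code for lab, code in c1.items()})
           heapq.heappush(heap, (p0 + p1, counter, merged))
           counter += 1
       return heap[0][2]

   # Example: a three-cell partition of [0, 1]
   print(huffman_codes({"[0,0.5)": 0.5, "[0.5,0.8)": 0.3, "[0.8,1]": 0.2}))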

The statistical amount of information given by the optimal lossy coding of $X$ by elements of the partition $\mathcal{P}$ is determined by the entropy of $\mathcal{P}$, which is [7]:

$$h(\mu; \mathcal{P}) := \sum_{P \in \mathcal{P}} \mathrm{sh}(\mu(P)), \qquad (2)$$

where $\mathrm{sh}(x) := -x \log x$, for $x \in (0, 1]$, and $\mathrm{sh}(0) := 0$, is the Shannon function.
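For a finite partition, (2) is immediate to evaluate. The sketch below is ours; we take the base-2 logarithm as one concrete convention, since the base is not fixed above:

   from math import log2

   def sh(x):
       # Shannon function sh(x) = -x log2(x), extended by sh(0) = 0
       return 0.0 if x == 0 else -x * log2(x)

   def partition_entropy(cell_measures):
       # h(mu; P) = sum of sh(mu(P)) over the cells of the partition
       return sum(sh(m) for m in cell_measures)

   print(partition_entropy([0.5, 0.3, 0.2]))  # approx. 1.4855 bits

Note that for a subprobability measure the cell measures need not sum to 1.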

The coding defined by a given partition causes a specific level of error. To control the maximal error, we fix an error-control family $\mathcal{Q}$, which is just a measurable cover of $X$. Then we consider only those partitions which are finer than $\mathcal{Q}$, i.e. we require that for every $P \in \mathcal{P}$ there exists $Q \in \mathcal{Q}$ such that $P \subset Q$. If this is the case then we say that $\mathcal{P}$ is $\mathcal{Q}$-acceptable and we write $\mathcal{P} \prec \mathcal{Q}$.
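On a finite toy space the acceptability condition is a direct containment test. In the following sketch (a discrete illustration of the definition, not the paper's measure-theoretic setting) sets are modelled as Python frozensets:

   def is_acceptable(partition, family):
       # True iff every cell P of the partition lies inside some Q of the family
       return all(any(P <= Q for Q in family) for P in partition)

   Q = [frozenset({0, 1, 2}), frozenset({2, 3, 4}), frozenset({4, 5})]
   P = [frozenset({0, 1}), frozenset({2, 3}), frozenset({4, 5})]
   print(is_acceptable(P, Q))  # True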

To better understand the definition of the error-control family, let us consider the following example.

Example I.1.

Let $\mathcal{Q}$ be the family of all intervals in $\mathbb{R}$ of length $\varepsilon$. Every $\mathcal{Q}$-acceptable partition consists of sets with diameter at most $\varepsilon$. Then, after the encoding determined by such a partition, every symbol can be decoded with precision at least $\varepsilon$. The above error-control family was considered by A. Rényi [1, 2] in his definition of the entropy dimension. As a natural extension, he also studied the error-control families built of all balls with a given radius in general metric spaces. A similar approach was also used by E. C. Posner et al. [3, 4, 5] in the definition of $\varepsilon$-entropy (in fact, E. C. Posner considered a variant of $\varepsilon$-entropy which differs slightly from our approach).

In the case of general measures, it seems more natural to vary the lengths of the intervals in the error-control family: less probable events should be coded with lower precision (longer intervals), while more probable ones with higher accuracy (shorter intervals). Our approach allows us to deal easily with such situations.

To describe the best lossy coding determined by $\mathcal{Q}$-acceptable partitions, we define the entropy of $X$ with respect to $\mathcal{Q}$ as

$$H(\mu; \mathcal{Q}) := \inf\{\, h(\mu; \mathcal{P}) : \mathcal{P} \prec \mathcal{Q} \,\}. \qquad (3)$$

Let us observe the main difficulty in applying this approach to lossy data coding:

Example I.2.

Let us consider a data space $X$ with an error-control family $\mathcal{Q}$ as in Example I.1. Even in such a simple situation there exist uncountably many $\mathcal{Q}$-acceptable partitions which have to be considered to find $H(\mu; \mathcal{Q})$.

In this paper, we show how to reduce the aforementioned problem to an at most countable one. In the next section, we propose an algorithm which, for a given $\mathcal{Q}$-acceptable partition $\mathcal{P}$, allows us to construct a $\mathcal{Q}$-acceptable partition $\mathcal{P}_\mathcal{Q} \subset \sigma(\mathcal{Q})$ with entropy not greater than that of $\mathcal{P}$, where $\sigma(\mathcal{Q})$ denotes the sigma algebra generated by $\mathcal{Q}$ (see Algorithm II.1).

As a consequence we obtain that the entropy can be realized by partitions contained in $\sigma(\mathcal{Q})$ only (see Corollary III.1). In the case of finite error-control families, we get an algorithmic construction of an optimal $\mathcal{Q}$-acceptable partition. More precisely, if $\mathcal{Q} = \{Q_1, \ldots, Q_N\}$ is an error-control family then there exist sets $P_1, \ldots, P_N \in \sigma(\mathcal{Q})$ such that (see Corollary III.3):

$$H(\mu; \mathcal{Q}) = \sum_{i=1}^{N} \mathrm{sh}(\mu(P_i)). \qquad (4)$$

II Algorithm for Partition Reduction

In this section we present an algorithm which, for a given $\mathcal{Q}$-acceptable partition $\mathcal{P}$, constructs a $\mathcal{Q}$-acceptable partition $\mathcal{P}_\mathcal{Q} \subset \sigma(\mathcal{Q})$ with entropy not greater than that of $\mathcal{P}$. We give a detailed explanation that $h(\mu; \mathcal{P}_\mathcal{Q}) \leq h(\mu; \mathcal{P})$.

We first establish notation: for a given family $\mathcal{F}$ of subsets of $X$ and a set $Q \subset X$, we denote:

$$\mathcal{F}_Q := \{F \in \mathcal{F} : F \subset Q\}. \qquad (5)$$

Let $\mathcal{Q}$ be an error-control family on $X$ and let $\mathcal{P}$ be a $\mathcal{Q}$-acceptable partition of $X$. We build a family $\mathcal{P}_\mathcal{Q}$ according to the following algorithm:  Algorithm II.1.

  initialization
     $k := 1$
     $\mathcal{P}_1 := \mathcal{P}$
     $\mathcal{P}_\mathcal{Q} := \emptyset$
  while $\mathcal{P}_k \neq \emptyset$ do
     Let $Q_k \in \mathcal{Q}$ be such that
        $\mu\big(\bigcup (\mathcal{P}_k)_{Q_k}\big) = \sup\big\{ \mu\big(\bigcup (\mathcal{P}_k)_Q\big) : Q \in \mathcal{Q} \big\}$
     Let $R_k$ be an arbitrary set
        which satisfies $\bigcup (\mathcal{P}_k)_{Q_k} \subset R_k \subset Q_k \setminus \big(R_1 \cup \ldots \cup R_{k-1} \cup \bigcup (\mathcal{P}_k \setminus (\mathcal{P}_k)_{Q_k})\big)$
     $\mathcal{P}_\mathcal{Q} := \mathcal{P}_\mathcal{Q} \cup \{R_k\}$
     $\mathcal{P}_{k+1} := \mathcal{P}_k \setminus (\mathcal{P}_k)_{Q_k}$
     $k := k + 1$
  end while

 
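To make the greedy step concrete, the following Python sketch implements one reading of Algorithm II.1 on a finite toy space (the discrete model, the function names, and the minimal choice $R_k = \bigcup (\mathcal{P}_k)_{Q_k}$ are our assumptions): in each round we pick the cover element that contains remaining cells of maximal total measure, merge exactly those cells into a single new cell, and remove them.

   def reduce_partition(partition, family, mu):
       # partition: list of frozensets (pairwise disjoint cells);
       # family: list of frozensets covering the space; mu: dict point -> mass.
       # Assumes every cell has positive mass, so each round captures a cell.
       measure = lambda s: sum(mu[x] for x in s)
       remaining, reduced = list(partition), []
       while remaining:
           # (P_k)_Q = cells of the remaining partition contained in Q, cf. (5)
           best_Q = max(family,
                        key=lambda Q: sum(measure(P) for P in remaining if P <= Q))
           captured = [P for P in remaining if P <= best_Q]
           reduced.append(frozenset().union(*captured))  # merge into one cell
           remaining = [P for P in remaining if not P <= best_Q]
       return reduced

   mu = {x: 1 / 6 for x in range(6)}
   Q = [frozenset({0, 1, 2, 3}), frozenset({3, 4, 5})]
   P = [frozenset({0}), frozenset({1}), frozenset({2, 3}), frozenset({4, 5})]
   print(reduce_partition(P, Q, mu))  # two merged cells: {0,1,2,3} and {4,5}

Merging cells can only decrease the entropy (cf. Observation II.1 below), which is the heart of Lemma II.1.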

We are going to show that Algorithm II.1 produces a partition with entropy not greater than that of $\mathcal{P}$. Before that, for the convenience of the reader, we first recall two important facts which we will refer to in further considerations.

Observation II.1.

Given numbers $x \geq 0$ and $y \geq 0$ such that $x + y \leq 1$, we have:

$$\mathrm{sh}(x + y) \leq \mathrm{sh}(x) + \mathrm{sh}(y). \qquad (6)$$
Proof:

For the proof we refer the reader to [7, Section 6], where a similar problem is illustrated. ∎
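Reading Observation II.1 as the subadditivity of the Shannon function, a quick randomized sanity check (ours, purely illustrative) is:

   from math import log2
   import random

   def sh(x):
       return 0.0 if x == 0 else -x * log2(x)

   for _ in range(10_000):
       x = random.uniform(0.0, 1.0)
       y = random.uniform(0.0, 1.0 - x)
       # subadditivity: sh(x + y) <= sh(x) + sh(y), up to float tolerance
       assert sh(x + y) <= sh(x) + sh(y) + 1e-12
   print("subadditivity held on all sampled pairs")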

Consequence of the Lebesgue Theorem (see [8]). Let $g : X \to [0, \infty)$ be summable, i.e. $\int_X g \, d\mu < \infty$, and let $(f_n)_{n \in \mathbb{N}}$ be a sequence of measurable functions such that $|f_n| \leq g$, for $n \in \mathbb{N}$. If $(f_n(x))_{n \in \mathbb{N}}$ is convergent, for every $x \in X$, then $\lim_{n \to \infty} f_n$ is summable and

$$\lim_{n \to \infty} \int_X f_n \, d\mu = \int_X \lim_{n \to \infty} f_n \, d\mu. \qquad (7)$$

Let us move to the analysis of Algorithm II.1. We first check what happens in a single iteration of the algorithm.

Lemma II.1.

We consider an error-control family $\mathcal{Q}$ and a $\mathcal{Q}$-acceptable partition $\mathcal{P}$ of $X$. Let $Q \in \mathcal{Q}$ be such that:

$$\mu\Big(\bigcup \mathcal{P}_Q\Big) = \sup\Big\{ \mu\Big(\bigcup \mathcal{P}_S\Big) : S \in \mathcal{Q} \Big\}. \qquad (8)$$

If $R$ is chosen so that $\bigcup \mathcal{P}_Q \subset R \subset Q \setminus \bigcup (\mathcal{P} \setminus \mathcal{P}_Q)$ then

$$\mathrm{sh}(\mu(R)) + h(\mu; \mathcal{P} \setminus \mathcal{P}_Q) \leq h(\mu; \mathcal{P}). \qquad (9)$$
Proof:

Clearly, if $h(\mu; \mathcal{P}) = \infty$ then the inequality (9) holds trivially. Thus we assume that $h(\mu; \mathcal{P}) < \infty$.

Let us observe that it is enough to consider only elements of $\mathcal{P}_Q$ with nonzero measure – the number of such sets is at most countable. Thus, let us assume that $\mathcal{P}_Q$ is countably infinite (the case when $\mathcal{P}_Q$ is finite can be treated in a similar manner).

For simplicity we put $\mathcal{P}_Q = \{P_1, P_2, \ldots\}$. For every $n \in \mathbb{N}$, we consider the sequence of sets $(R_n)_{n \in \mathbb{N}}$, defined by

$$R_n := P_1 \cup \ldots \cup P_n. \qquad (10)$$

Clearly, for , we have

(11)
(12)
(13)
(14)
(15)

To complete the proof it is sufficient to derive that for every , we have:

(16)

and

(17)

Let be arbitrary. Then from (13) and (14), we get

(18)
(19)
(20)
(21)

Making use of Observation II.1, we obtain

(22)
(23)

which proves (16).

To derive (17), we first use inequality (16). Then

(24)
(25)

By (15),

(26)

To calculate the limit, we will use the Consequence of the Lebesgue Theorem. We consider a sequence of functions

(27)

Let us observe that the Shannon function is increasing on $[0, 1/e]$ and decreasing on $[1/e, 1]$. Thus for a certain $n_0 \in \mathbb{N}$,

(28)

and

(29)

for every . Since then

(30)

Moreover,

(31)

for every .

As the sequence of functions satisfies the assumptions of the Consequence of the Lebesgue Theorem, we get

(32)
(33)

Consequently, we have

(34)
(35)

which completes the proof. ∎

We are ready to summarize the analysis of Algorithm II.1. We present it in the following two theorems.

Theorem II.1.

Let $\mathcal{Q}$ be an error-control family on $X$ and let $\mathcal{P}$ be a $\mathcal{Q}$-acceptable partition of $X$. The family $\mathcal{P}_\mathcal{Q}$ constructed by Algorithm II.1 is a partition of $X$.

Proof:

Directly from Algorithm II.1, we get that $\mathcal{P}_\mathcal{Q}$ is a countable family of pairwise disjoint sets.

Let us assume that $\mathcal{P}_\mathcal{Q} = \{R_1, R_2, \ldots\}$ is countably infinite, since the case when $\mathcal{P}_\mathcal{Q}$ is a finite family is straightforward. To prove that

$$\mu\Big(X \setminus \bigcup_{k \in \mathbb{N}} R_k\Big) = 0, \qquad (36)$$

we will use the Consequence of the Lebesgue Theorem.

For every , we define a function by

(37)

Clearly,

(38)

and

(39)

To see that the sequence is pointwise convergent, we apply indirect reasoning. Let and let be such that, for every ,

(40)

We put . We assume that we have already chosen sets . Since then , for every . Hence, we have

(41)

as is a family of pairwise disjoint sets. Consequently,

(42)

which is a contradiction. Hence the sequence is convergent.

Finally, making use of the Lebesgue Theorem, we obtain

(43)
(44)
(45)

Theorem II.2.

Let $\mathcal{Q}$ be an error-control family on $X$ and let $\mathcal{P}$ be a $\mathcal{Q}$-acceptable partition of $X$. The partition $\mathcal{P}_\mathcal{Q}$ built by Algorithm II.1 satisfies:

$$h(\mu; \mathcal{P}_\mathcal{Q}) \leq h(\mu; \mathcal{P}). \qquad (46)$$
Proof:

If $h(\mu; \mathcal{P}) = \infty$ then the inequality (46) is straightforward. Thus let us discuss the case when $h(\mu; \mathcal{P}) < \infty$.

We denote $\mathcal{P} = \{P_1, P_2, \ldots\}$, since at most countably many elements of the partition $\mathcal{P}$ can have positive measure (the case when $\mathcal{P}$ is finite follows similarly). We will use the notation introduced in Algorithm II.1.

Directly from Lemma II.1, we obtain

(47)

Consequently, for every , we get

(48)

Our goal is to show that

(49)

for every .

Making use of (48), we have

(50)
(51)
(52)

for every .

We will calculate the limit using the Consequence of the Lebesgue Theorem for a sequence of functions defined by

(53)

Similarly to the proof of Lemma II.1, we may assume that there exists such that

(54)

and

(55)

for every . Moreover,

(56)

for every $n \in \mathbb{N}$, since $\mathcal{P}$ is a partition of $X$.

Making use of the Consequence of the Lebesgue Theorem, we get

(57)

Consequently, for every , we have

(58)
(59)
(60)

which completes the proof. ∎

III Concluding Remarks

We have seen that in computing the entropy with respect to the error-control family $\mathcal{Q}$ it is sufficient to consider only partitions constructed from the sigma algebra $\sigma(\mathcal{Q})$ generated by $\mathcal{Q}$. Thus, we may rewrite the definition of the entropy with respect to $\mathcal{Q}$:

Corollary III.1.

We have:

$$H(\mu; \mathcal{Q}) = \inf\{\, h(\mu; \mathcal{P}) : \mathcal{P} \prec \mathcal{Q},\ \mathcal{P} \subset \sigma(\mathcal{Q}) \,\}. \qquad (61)$$

Let us observe that Algorithm II.1 shows how to find a $\mathcal{Q}$-acceptable partition with entropy arbitrarily close to $H(\mu; \mathcal{Q})$:

Corollary III.2.

Let $\mathcal{Q}$ be an error-control family of $X$. For any number $\varepsilon > 0$, there exists a partition $\mathcal{P} \prec \mathcal{Q}$ such that

$$h(\mu; \mathcal{P}) \leq H(\mu; \mathcal{Q}) + \varepsilon. \qquad (62)$$
Proof:

For simplicity let us assume that $\mathcal{Q} = \{Q_1, Q_2, \ldots\}$ is countably infinite (the case when $\mathcal{Q}$ is finite or uncountable follows in a similar way). Then the partition $\mathcal{P}$ which satisfies the assertion is of the form:

$$\mathcal{P} = \Big\{ Q_{\sigma(k)} \setminus \big(Q_{\sigma(1)} \cup \ldots \cup Q_{\sigma(k-1)}\big) : k \in \mathbb{N} \Big\} \qquad (63)$$

for a specific permutation $\sigma$ of the natural numbers. ∎

When $\mathcal{Q}$ is a finite family, the entropy of $X$ with respect to $\mathcal{Q}$ is always attained on some partition $\mathcal{P} \prec \mathcal{Q}$. More precisely, we have:

Corollary III.3.

Let $\mathcal{Q} = \{Q_1, \ldots, Q_N\}$ be an $N$-element error-control family, where $N \in \mathbb{N}$. Then there exist sets $P_i \in \sigma(\mathcal{Q})$, for $i \in \{1, \ldots, N\}$, such that

$$H(\mu; \mathcal{Q}) = \sum_{i=1}^{N} \mathrm{sh}(\mu(P_i)). \qquad (64)$$
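On a finite toy space, Corollary III.3 can be illustrated by brute force (our discrete illustration, not the paper's construction): every acceptable partition built from $\sigma(\mathcal{Q})$ routes each atom of $\sigma(\mathcal{Q})$ to one of the cover elements containing it, so enumerating all assignments finds the minimal entropy.

   from itertools import product
   from math import log2

   def sh(x):
       return 0.0 if x == 0 else -x * log2(x)

   def optimal_partition(points, family, mu):
       # Group points into atoms of sigma(Q): equal membership patterns in Q.
       atoms = {}
       for x in points:
           atoms.setdefault(tuple(x in Q for Q in family), set()).add(x)
       atoms = list(atoms.values())
       # Each atom may be routed to any cover element containing it.
       choices = [[i for i, Q in enumerate(family) if a <= Q] for a in atoms]
       best = None
       for assignment in product(*choices):
           cells = {}
           for atom, i in zip(atoms, assignment):
               cells.setdefault(i, set()).update(atom)
           h = sum(sh(sum(mu[x] for x in c)) for c in cells.values())
           if best is None or h < best[0]:
               best = (h, list(cells.values()))
       return best

   mu = {x: 1 / 4 for x in range(4)}
   Q = [frozenset({0, 1, 2}), frozenset({1, 2, 3})]
   print(optimal_partition(range(4), Q, mu))  # minimal entropy and a partition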

To see that the entropy with respect to an arbitrary, possibly infinite, error-control family does not have to be attained on any partition, we use a trivial example from [6, Example II.1]:

Example III.1.

Let us consider the open segment $(0, 1)$ with the sigma algebra generated by all Borel subsets of $(0, 1)$, the Lebesgue measure $\mu$, and the error-control family $\mathcal{Q}$ defined by

(65)

One can verify that $H(\mu; \mathcal{Q}) = 0$ but clearly $h(\mu; \mathcal{P}) > 0$, for every $\mathcal{Q}$-acceptable partition $\mathcal{P}$.

As an open problem we leave the following question:

Problem III.1.

Let $\mathcal{Q}$ be an error-control family. We assume that if there exist sets $Q_n \in \mathcal{Q}$, $n \in \mathbb{N}$, and $Q \in \mathcal{Q}$ such that $Q_n \subset Q$, for every $n \in \mathbb{N}$, then also $\bigcup_{n \in \mathbb{N}} Q_n \in \mathcal{Q}$. We ask if the entropy with respect to $\mathcal{Q}$ is realized by some $\mathcal{Q}$-acceptable partition $\mathcal{P}$.

References

  • [1] A. Rényi, “On measures of entropy and information,” Proc. Fourth Berkeley Symp. on Math. Statist. and Prob., vol. 1, pp. 547–561, 1961.
  • [2] ——, “On the dimension and entropy of probability distributions,” Acta Mathematica Hungarica, vol. 10, no. 1–2, pp. 193–215, 1959.
  • [3] E. C. Posner, E. R. Rodemich, and H. Rumsey, “Epsilon entropy of stochastic processes,” The Annals of Mathematical Statistics, vol. 38, pp. 1000–1020, 1967.
  • [4] E. C. Posner and E. R. Rodemich, “Epsilon entropy and data compression,” The Annals of Mathematical Statistics, vol. 42, pp. 2079–2125, 1971.
  • [5] ——, “Epsilon entropy of stochastic processes with continuous paths,” The Annals of Probability, vol. 1, no. 4, pp. 674–689, 1973.
  • [6] M. Śmieja and J. Tabor, “Entropy of the mixture of sources and entropy dimension,” to appear in IEEE Transactions on Information Theory, vol. 58, no. 5, 2012.
  • [7] C. E. Shannon, “A mathematical theory of communication,” The Bell System Technical Journal, vol. 27, pp. 379–423, 623–656, 1948.
  • [8] J. F. C. Kingman and S. J. Taylor, Introduction to Measure and Probability.   Cambridge University Press, 1966.