Is Natural Language a Perigraphic Process? The Theorem about Facts and Words Revisited

As we discuss, a stationary stochastic process is nonergodic when a random persistent topic can be detected in the infinite random text sampled from the process, whereas we call the process strongly nonergodic when an infinite sequence of independent random bits, called probabilistic facts, is needed to describe this topic completely. Replacing probabilistic facts with an algorithmically random sequence of bits, called algorithmic facts, we adapt this property back to ergodic processes. Subsequently, we call a process perigraphic if the number of algorithmic facts which can be inferred from a finite text sampled from the process grows like a power of the text length. We present a simple example of such a process. Moreover, we demonstrate an assertion which we call the theorem about facts and words. This proposition states that the number of probabilistic or algorithmic facts which can be inferred from a text drawn from a process must be roughly smaller than the number of distinct word-like strings detected in this text by means of the PPM compression algorithm. We also observe that the number of word-like strings for a sample of plays by Shakespeare follows an empirical stepwise power law, in stark contrast to Markov processes. Hence we suppose that natural language considered as a process is not only non-Markov but also perigraphic.
Keywords: stationary processes, PPM code, mutual information, power laws, algorithmic information theory, natural language

1 Introduction

One of the motivating assumptions of information theory [1, 2, 3] is that communication in natural language can be reasonably modeled as a discrete stationary stochastic process, namely, an infinite sequence of discrete random variables with a well-defined time-invariant probability distribution. The same assumption is made in several practical applications of computational linguistics, such as speech recognition [4] or part-of-speech tagging [5]. Whereas state-of-the-art stochastic models of natural language are far from satisfactory, we may ask a more theoretically oriented question, namely:

What can be some general mathematical properties of natural language treated as a stochastic process, in view of empirical data?

In this paper, we will investigate the question of whether it is reasonable to assume that natural language communication is a perigraphic process.

To recall, a stationary process is called ergodic if the relative frequencies of all finite substrings in the infinite text generated by the process converge in the long run, with probability one, to some constants, namely the probabilities of the respective strings. Now, some basic linguistic intuition suggests that natural language does not satisfy this property, cf. [3, Section 6.4]. Namely, we can probably agree that there is a variation of topics in texts in natural language, and these topics can be empirically distinguished by counting relative frequencies of certain substrings called keywords. Hence we expect that the relative frequencies of keywords in a randomly selected text in natural language are random variables depending on the random text topic. In the limit, for an infinitely long text, we may further suppose that the limits of the relative frequencies of keywords remain random, and if this is true then natural language is not ergodic, i.e., it is nonergodic.

In this paper we will entertain first a stronger hypothesis, namely, that natural language communication is strongly nonergodic. Informally speaking, a stationary process will be called strongly nonergodic if its random persistent topic has to be described using an infinite sequence of probabilistically independent binary random variables, called probabilistic facts. Like nonergodicity, strong nonergodicity is not empirically verifiable if we only have a single infinite sequence of data. But replacing probabilistic facts with an algorithmically random sequence of bits, called algorithmic facts, we can adapt the property of strong nonergodicity back to ergodic processes. Subsequently, we will call a process perigraphic if the number of algorithmic facts which can be inferred from a finite text sampled from the process grows like a power of the text length. It is a general observation that perigraphic processes have uncomputable distributions.

It is interesting to note that perigraphic processes can be singled out by some statistical properties of the texts they generate. We will exhibit a proposition, which we call the theorem about facts and words. Suppose that we have a finite text drawn from a stationary process. The theorem about facts and words says that the number of independent probabilistic or algorithmic facts that can be reasonably inferred from the text must be roughly smaller than the number of distinct word-like strings detected in the text by some standard data compression algorithm called the Prediction by Partial Matching (PPM) code [6]. It is important to stress that in this theorem we do not relate the numbers of all facts and all word-like strings, which would sound trivial, but we compare only the numbers of independent facts and distinct word-like strings.

Having the theorem about facts and words, we can also discuss some empirical data. Since the number of distinct word-like strings for texts in natural language follows an empirical stepwise power law, in stark contrast to Markov processes, we suppose that the number of inferrable random facts for natural language also follows a power law. That is, we suppose that natural language is not only non-Markov but also perigraphic.

Whereas in this paper we fill several important gaps and provide an overarching narration, the basic ideas presented here are not so new. The starting point was a corollary of Zipf's law and a hypothesis by Hilberg. Zipf's law is an empirical observation that in texts in natural language, the frequencies of words obey a power-law decay when we sort the words according to their decreasing frequencies [7, 8]. A corollary of this law, called Heaps' law [9, 10, 11, 12], states that the number of distinct words in a text in natural language grows like a power of the text length. In contrast to these simple empirical observations, Hilberg's hypothesis is a less known conjecture about natural language stating that the entropy of a text chunk of increasing length [13] or the mutual information between two adjacent text chunks [14, 15, 16, 17] also obeys a power-law growth. In paper [18], it was heuristically shown that if Hilberg's hypothesis for mutual information is satisfied for an arbitrary stationary stochastic process, then texts drawn from this process also satisfy a kind of Heaps' law if we detect the words using grammar-based codes [19, 20, 21, 22]. This result is a historical antecedent of the theorem about facts and words.
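
For illustration, Heaps' law can be eyeballed with a few lines of Python that count distinct word types in growing prefixes of a text. This is only a minimal sketch: the whitespace-and-apostrophe tokenization and the file name in the commented usage are assumptions of the sketch, not the procedure used later in the paper.

    import re

    def heaps_curve(text, step=10000):
        """Count distinct word types seen in successive prefixes of a text.

        A rough illustration of Heaps' law: for natural language the number
        of types typically grows like a power of the prefix length.
        """
        tokens = re.findall(r"[a-z']+", text.lower())  # naive tokenization
        types, curve = set(), []
        for i, tok in enumerate(tokens, 1):
            types.add(tok)
            if i % step == 0:
                curve.append((i, len(types)))
        return curve

    # Hypothetical usage (the file name is a placeholder):
    # with open("shakespeare.txt", encoding="utf-8") as f:
    #     for n, v in heaps_curve(f.read()):
    #         print(n, v)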

Another important step was the discovery of some simple strongly nonergodic processes satisfying the power-law growth of mutual information, called Santa Fe processes, found by Debowski in August 2002 but first reported only in [23]. Subsequently, in paper [24], a completely formal proof of the theorem about facts and words for strictly minimal grammar-based codes [22, 25] was provided. The respective related theory of natural language was later reviewed in [26, 27] and supplemented by a discussion of Santa Fe processes in [28]. A drawback of this theory at that time was that the strictly minimal grammar-based codes used in the statement of the theorem about facts and words are not computable in polynomial time [25]. This precluded an empirical verification of the theory.

To state the relative novelty: in this paper we announce a new, stronger version of the theorem about facts and words, for a somewhat more elegant definition of inferrable facts and for the PPM code, which is computable in almost linear time. For the first time, we also present two cases of the theorem: one for strongly nonergodic processes, applying Shannon information theory, and one for general stationary processes, applying algorithmic information theory. Having these results, we finally supplement them with a rudimentary discussion of some empirical data.

The organization of this paper is as follows. In Section 2, we discuss some properties of ergodic and nonergodic processes. In Section 3, we define strongly nonergodic processes and we present some examples of them. Analogously, in Section 4, we discuss perigraphic processes. In Section 5, we discuss two versions of the theorem about facts and words. In Section 6, we discuss some empirical data and we suppose that natural language may be a perigraphic process. In Section 7, we offer concluding remarks. Moreover, three appendices follow the body of the paper. In Appendix A, we prove the first part of the theorem about facts and words. In Appendix B, we prove the second part of this theorem. In Appendix C, we show that the number of inferrable facts for the Santa Fe processes follows a power law.

2 Ergodic and nonergodic processes

We assume that the reader is familiar with some probability measure theory [29]. For a real-valued random variable on a probability space , we denote its expectation

(1)

Consider now a discrete stochastic process , where random variables take values from a set of countably many distinct symbols, such as letters with which we write down texts in natural language. We denote blocks of consecutive random variables and symbols . Let us define a binary random variable telling whether some string has occurred in sequence on positions from to ,

(2)

where

(3)

The expectation of this random variable,

(4)

is the probability of the chosen string, whereas the arithmetic average of consecutive random variables is the relative frequency of the same string in a finite sequence of random symbols .

Process is called stationary (with respect to a probability measure ) if expectations do not depend on position for any string . In this case, we have the following well-known theorem, which establishes that the limiting relative frequencies of strings in an infinite sequence exist almost surely, i.e., with probability one:

Theorem 1 (ergodic theorem, cf. e.g. [30])

For any discrete stationary process , there exist limits

(5)

with expectations .

In general, limits are random variables depending on a particular value of infinite sequence . It is quite natural, however, to require that the relative frequencies of strings are almost surely constants, equal to the expectations . Subsequently, process will be called ergodic (with respect to a probability measure ) if limits are almost surely constant for any string . The standard definition of an ergodic process is more abstract but is equivalent to this statement [30, Lemma 7.15].

The following examples of ergodic processes are well known:

  1. Process is called IID (independent identically distributed) if

    (6)

    All IID processes are ergodic.

  2. Process is called Markov (of order ) if

    (7)

    A Markov process is ergodic in particular if

    (8)

    For a sufficient and necessary condition see [31, Theorem 7.16].

  3. Process is called hidden Markov if for a certain Markov process and a function . A hidden Markov process is ergodic in particular if the underlying Markov process is ergodic.

Whereas IID and Markov processes are some basic models in probability theory, hidden Markov processes are of practical importance in computational linguistics [4, 5]. Hidden Markov processes as considered there usually satisfy condition (8) and therefore they are ergodic.

Let us call a probability measure stationary or ergodic, respectively, if the process is stationary or ergodic with respect to the measure . Suppose that we have a stationary measure which generates some data . We can define a new random measure equal to the relative frequencies of blocks in the data . It turns out that this measure is almost surely ergodic. Formally, we have the following proposition.

Theorem 2 (cf. [32, Theorem 9.10])

Any process with a stationary measure is almost surely ergodic with respect to the random measure given by

(9)

Moreover, from the random measure we can obtain the stationary measure by integration, . The following result asserts that this integral representation of measure is unique.

Theorem 3 (ergodic decomposition, cf. [32, Theorem 9.12])

Any stationary probability measure can be represented as

(10)

where is a unique measure on stationary ergodic measures.

In other words, stationary ergodic measures are some building blocks from which we can construct any stationary measure. For a stationary probability measure , the particular values of the random ergodic measure are called the ergodic components of measure .

Consider, for instance, a Bernoulli() process with measure

(11)

where and . This measure will be contrasted with the measure of a mixture Bernoulli process with parameter uniformly distributed on interval ,

(12)

Measure (11) is the measure of an IID process and is therefore ergodic, whereas measure (12) is a mixture of ergodic measures and hence it is nonergodic.
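
To illustrate the difference, here is a minimal Python sketch of sampling from the mixture Bernoulli process: in every realization the limiting relative frequency of ones equals a freshly drawn parameter, so it is not a constant across realizations, which is exactly the failure of ergodicity discussed above. The sample size and the use of Python's random module are incidental choices of the sketch.

    import random

    def mixture_bernoulli_frequencies(n, realizations=5, seed=0):
        """Sample several finite prefixes from the mixture Bernoulli process.

        For each realization, a parameter theta is drawn uniformly from [0, 1]
        and n bits are drawn IID Bernoulli(theta). The relative frequency of
        ones converges to theta, which varies across realizations, so the
        limiting frequencies are random rather than constant.
        """
        rng = random.Random(seed)
        results = []
        for _ in range(realizations):
            theta = rng.random()
            ones = sum(1 for _ in range(n) if rng.random() < theta)
            results.append((theta, ones / n))
        return results

    for theta, freq in mixture_bernoulli_frequencies(100_000):
        print(f"theta = {theta:.3f}, relative frequency of ones = {freq:.3f}")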

3 Strongly nonergodic processes

According to our definition, a process is ergodic when the relative frequencies of all strings in a random sample converge in the long run to some constants. Consider now the following thought experiment. Suppose that we select a random book from a library. Counting the relative frequencies of keywords, such as bijection for a text in mathematics and fossil for a text in paleontology, we can effectively recognize the topic of the book. Simply put, the relative frequencies of some keywords will be higher for books concerning some topics and lower for books concerning other topics. Hence, in our thought experiment, we expect that the relative frequencies of keywords are random variables with values depending on the particular topic of the randomly selected book. Since keywords are particular strings, we may conclude that the stochastic process that models natural language should be nonergodic.

The above thought experiment provides another perspective on nonergodic processes. According to the following theorem, a process is nonergodic when we can effectively distinguish in the limit at least two random topics in it. In the statement, the function assumes values or when we can identify the topic, whereas it takes value when we are not certain which topic a given text is about.

Theorem 4 (cf. [23])

A stationary discrete process is nonergodic if and only if there exists a function and a binary random variable such that and

(13)

for any position .

A binary variable satisfying condition (13) will be called a probabilistic fact. A probabilistic fact tells which of two topics the infinite text generated by the stationary process is about. It is a kind of random switch which is preset before we start scanning the infinite text; compare similar wording in [33]. To keep the proofs simple, here we only give a new elementary proof of the direct part of Theorem 4, namely that nonergodicity implies the existence of such a function and random variable. The proof of the converse part applies some measure theory and follows the idea of Theorem 9 from [23] for strongly nonergodic processes, which we will discuss in the next paragraph.

  • Proof: (direct part only) Suppose that process is nonergodic. Then there exists a string such that for with some positive probability. Hence there exists a real number such that and

    (14)

    Define and , where

    (15)

    Since almost surely and satisfies (14), convergence also holds almost surely. Applying the Lebesgue dominated convergence theorem we obtain

    (16)

As for books in natural language, we may have an intuition that the pool of available book topics is extremely large and contains many more topics than just two. For this reason, we may need not a single probabilistic fact but rather a sequence of probabilistic facts to specify the topic of a book completely. Formally, stationary processes requiring an infinite sequence of independent uniformly distributed probabilistic facts to describe the topic of an infinitely long text will be called strongly nonergodic.

Definition 1 (cf. [23, 24])

A stationary discrete process is called strongly nonergodic if there exist a function and a binary IID process such that and

(17)

for any position and any index .

As we have stated above, for a strongly nonergodic process, there is an infinite number of independent probabilistic facts with a uniform distribution on the set . Formally, these probabilistic facts can be assembled into a single real random variable , which is uniformly distributed on the unit interval . The value of variable identifies the topic of a random infinite text generated by the stationary process. Thus for a strongly nonergodic process, we have a continuum of available topics which can be incrementally identified from any sufficiently long text. Put formally, according to Theorem 9 from [23], a stationary process is strongly nonergodic if and only if its shift-invariant σ-field contains a nonatomic sub-σ-field. We note in passing that in [23] strongly nonergodic processes were called uncountable description processes.

In view of Theorem 9 from [23], the mixture Bernoulli process (12) is an example of a strongly nonergodic process. In this case, the parameter plays the role of the random variable . Showing in an elementary fashion that condition (17) is satisfied for this process is a tedious exercise. Hence let us now present a simpler guiding example of a strongly nonergodic process, which we introduced in [23, 24] and called the Santa Fe process. Let be a binary IID process with . Let be an IID process with taking values in the natural numbers with the power-law distribution

(18)

The Santa Fe process with exponent is a sequence , where

(19)

are pairs of a random number and the corresponding probabilistic fact . The Santa Fe process is strongly nonergodic since condition (17) holds for example for

(20)

Simply speaking, function returns or when an unambiguous value of the second constituent can be read off from pairs and returns when there is some ambiguity. Condition (17) is satisfied since

(21)
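
For concreteness, the following Python sketch samples a Santa Fe process and reads off a fact from a finite sample, in the spirit of (18)-(20). The truncation of the power law to a finite range of fact indices, the exact normalization, and the use of the value 2 as the "unknown" marker are assumptions of this sketch rather than reproductions of the elided formulas.

    import random

    def sample_santa_fe(n, alpha=2.0, max_fact=10**5, seed=0):
        """Sample n symbols X_i = (K_i, Z_{K_i}) of a Santa Fe process.

        P(K_i = k) is proportional to k**(-alpha), truncated to 1..max_fact
        for simplicity; the bits Z_k are IID and fair, playing the role of
        probabilistic facts.
        """
        rng = random.Random(seed)
        weights = [k ** (-alpha) for k in range(1, max_fact + 1)]
        ks = rng.choices(range(1, max_fact + 1), weights=weights, k=n)
        facts = {}  # Z_k, generated lazily on first use
        text = [(k, facts.setdefault(k, rng.randint(0, 1))) for k in ks]
        return text, facts

    def read_off_fact(k, text):
        """Return the value z if (k, z) occurs unambiguously in the text,
        and 2 (meaning 'unknown') otherwise."""
        values = {z for (j, z) in text if j == k}
        return values.pop() if len(values) == 1 else 2

    text, facts = sample_santa_fe(10_000)
    print(read_off_fact(1, text), facts.get(1))  # read-off vs. true value of fact 1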

A salient property of the Santa Fe process is the power-law growth of the expected number of probabilistic facts which can be inferred from a finite text drawn from the process. Consider a strongly nonergodic process . The set of initial independent probabilistic facts inferrable from a finite text will be defined as

(22)

In other words, we have where is the largest number such that for all . To capture the power-law growth of an arbitrary function , we will use the Hilberg exponent, defined as

(23)

where for and for , cf. [34]. In contrast to paper [34], for technical reasons, we define the Hilberg exponent only for an exponentially sparse subsequence of terms rather than all terms . Moreover, in [34], the Hilberg exponent was considered only for mutual information , defined later in equation (50). We observe that for the exact power law growth with we have . More generally, the Hilberg exponent captures an asymptotic power-law growth of the sequence. As shown in Appendix C, for the Santa Fe process with exponent we have the asymptotic power-law growth

(24)

This property distinguishes the Santa Fe process from the mixture Bernoulli process (12), for which the respective Hilberg exponent is zero, as we discuss in Section 6.
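
As a computational aside, the following Python sketch evaluates a crude finite-sample proxy for the Hilberg exponent along the exponentially sparse subsequence n = 2^k mentioned above. Since the exact normalization in (23) is not reproduced here, the formula used (the logarithm of S(2^k) divided by the logarithm of 2^k, maximized over the available k) is an assumption of the sketch.

    import math

    def hilberg_exponent_estimate(S, max_k):
        """Crude finite-sample proxy for the Hilberg exponent of a function S.

        Evaluate S only along the sparse subsequence n = 2^k and inspect
        log S(2^k) / log 2^k.  The true exponent is a limit superior, so a
        finite range of k gives only a rough estimate.
        """
        ratios = []
        for k in range(1, max_k + 1):
            n = 2 ** k
            value = S(n)
            if value > 1:  # log^+ convention: ignore values not exceeding 1
                ratios.append(math.log(value) / math.log(n))
        return max(ratios) if ratios else 0.0

    # Example: S(n) = n**0.7 should give an estimate close to 0.7.
    print(hilberg_exponent_estimate(lambda n: n ** 0.7, max_k=20))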

4 Perigraphic processes

Is it possible to demonstrate, by a statistical investigation of texts, that natural language is really strongly nonergodic and satisfies a condition similar to (24)? In the thought experiment described at the beginning of the previous section, we ignored the issue of constructing an infinitely long text. In reality, every book with a well-defined topic is finite. If we want to obtain an unbounded collection of texts, we need to assemble a corpus of different books, and it depends on our assembling criteria whether the books in the corpus will concern some persistent random topic. Moreover, if we already have a single infinite sequence of books generated by some stationary source and we estimate probabilities as relative frequencies of blocks of symbols in this sequence, then by Theorem 2 we will obtain an ergodic probability measure almost surely.

In this situation we may ask whether the idea of the power-law growth of the number of inferrable probabilistic facts can somehow be translated to the case of ergodic measures. A straightforward method is to replace the sequence of independent uniformly distributed probabilistic facts , which are random variables, with an algorithmically random sequence of particular binary digits . Such digits will be called algorithmic facts, in contrast to the variables , which are called probabilistic facts.

Let us recall some basic concepts. For a discrete random variable , let denote the random variable that takes value when takes value . We will introduce the pointwise entropy

(25)

where stands for the natural logarithm. The prefix-free Kolmogorov complexity of a string is the length of the shortest self-delimiting program, written in binary digits, that prints out string [35, Chapter 3]. The prefix-free Kolmogorov complexity is the founding concept of algorithmic information theory and is an analogue of the pointwise entropy. To keep our notation analogous to (25), we will write the algorithmic entropy

(26)

If the probability measure is computable then the algorithmic entropy is close to the pointwise entropy. On the one hand, by the Shannon-Fano coding for a computable probability measure, the algorithmic entropy is less than the pointwise entropy plus a constant which depends on the probability measure and the dimensionality of the distribution [35, Corollary 4.3.1]. Formally,

(27)

where is a certain constant depending on the probability measure . On the other hand, since the prefix-free Kolmogorov complexity is also the length of a prefix-free code, we have

(28)

It is also true that for sufficiently large almost surely [36, Theorem 3.1]. Thus we have shown that the algorithmic entropy is in some sense close to the pointwise entropy, for a computable probability measure.

Next, we will discuss the difference between probabilistic and algorithmic randomness. Whereas for an IID sequence of random variables with we have

(29)

similarly an infinite sequence of binary digits is called algorithmically random (in the Martin-Löf sense) when there exists a constant such that

(30)

for all [35, Theorem 3.6.1]. The probability that the aforementioned sequence of random variables is algorithmically random equals one, for example by [36, Theorem 3.1], so algorithmically random sequences are typical realizations of sequence .

Let be a stationary process. We observe that generalizing condition (17) in an algorithmic fashion does not make much sense. Namely, condition

(31)

is trivially satisfied for any stationary process for a certain computable function and an algorithmically random sequence . This is so since there exists a computable function such that , where is the binary expansion of the halting probability , which is a lower semi-computable algorithmically random sequence [35, Section 3.6.2].

In spite of this negative result, the power-law growth of the number of inferrable algorithmic facts corresponds to some nontrivial property. For a computable function and an algorithmically random sequence of binary digits , which we will call algorithmic facts, the set of initial algorithmic facts inferrable from a finite text will be defined as

(32)

Subsequently, we will call a process perigraphic if the expected number of algorithmic facts which can be inferred from a finite text sampled from the process grows asymptotically like a power of the text length.

Definition 2

A stationary discrete process is called perigraphic if

(33)

for some computable function and an algorithmically random sequence of binary digits .

Perigraphic processes can be ergodic. The proof of Theorem 20 from Appendix C can be easily adapted to show that an example of a perigraphic process is the Santa Fe process with the sequence replaced by an algorithmically random sequence of binary digits . This process is IID and hence ergodic.

We can also easily show the following proposition.

Theorem 5

Any perigraphic process has an uncomputable measure .

  • Proof: Assume that a perigraphic process has a computable measure . By the proof of Theorem 13 from Appendix A, we have

    (34)

Since for a computable measure we have inequality (27), we obtain

    (35)

This contradicts the assumption that the process is perigraphic, hence measure cannot be computable.

5 Theorem about facts and words

In this section, we will present a result about stationary processes, which we call the theorem about facts and words. This proposition states that the expected number of independent probabilistic or algorithmic facts inferrable from a text drawn from a stationary process must be roughly less than the expected number of distinct word-like strings detectable in the text by a simple procedure involving the PPM compression algorithm. The result implies, in particular, that an asymptotic power-law growth of the number of inferrable probabilistic or algorithmic facts as a function of the text length produces a statistically measurable effect, namely, an asymptotic power-law growth of the number of word-like strings.

To state the theorem about facts and words formally, we need first to discuss the PPM code. Let us denote strings of symbols , adopting an important convention that is the empty string for . In the following, we consider strings over a finite alphabet, say, . We define the frequency of a substring in a string as

(36)
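
As a plain reading of (36), the following Python sketch counts overlapping occurrences of a substring and normalizes by the number of window positions; since the formula itself is not reproduced above, this particular normalization is an assumption of the sketch.

    def substring_frequency(w, x):
        """Relative frequency of substring w in string x, counting overlapping
        occurrences at all n - |w| + 1 starting positions."""
        n, m = len(x), len(w)
        if m == 0 or m > n:
            return 0.0
        occurrences = sum(1 for i in range(n - m + 1) if x[i:i + m] == w)
        return occurrences / (n - m + 1)

    print(substring_frequency("ab", "abcabab"))  # 3 occurrences out of 6 windows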

Now we may define the Prediction by Partial Matching (PPM) probabilities.

Definition 3 (cf. [6])

For and , we put

(37)

Quantity is called the conditional PPM probability of order of symbol given string . Next, we put

(38)

Quantity is called the PPM probability of order of string . Finally, we put

(39)

Quantity is called the (total) PPM probability of the string .

Quantity is an incremental approximation of the unknown true probability of the string , assuming that the string has been generated by a Markov process of order . In contrast, quantity is a mixture of such Markov approximations for all finite orders. In general, the PPM probabilities are probability distributions over strings of a fixed length. That is:

  • and ,

  • and ,

  • and .
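
To make Definition 3 concrete, the following Python sketch implements one common add-one-smoothed variant of the PPM probabilities. The exact smoothing in (37) and the mixture weights in (39) are not reproduced above, so the choices below are assumptions of the sketch; it illustrates the construction rather than reimplementing the paper's code.

    def ppm_cond_prob(prefix, symbol, k, alphabet):
        """Add-one-smoothed order-k estimate of P(symbol | prefix): count how
        often the current context (the last k symbols of the prefix) was
        previously followed by each letter of the alphabet, add one to every
        count, and normalize.  Assumes all symbols belong to the alphabet."""
        context = prefix[max(0, len(prefix) - k):]
        counts = {a: 1 for a in alphabet}
        for j in range(len(prefix) - len(context)):
            if prefix[j:j + len(context)] == context:
                counts[prefix[j + len(context)]] += 1
        return counts[symbol] / sum(counts.values())

    def ppm_prob_order_k(x, k, alphabet):
        """Order-k PPM probability of the whole string: the chain-rule product
        of the conditional estimates, in the spirit of (38)."""
        p = 1.0
        for i in range(len(x)):
            p *= ppm_cond_prob(x[:i], x[i], k, alphabet)
        return p

    def ppm_prob(x, alphabet):
        """Total PPM probability as a weighted mixture over all orders, in the
        spirit of (39).  The geometric weights 2**-(k+1) are an assumption of
        this sketch.  Orders beyond len(x) give the same value as order len(x),
        so the infinite mixture collapses to a finite sum (compare Theorem 7)."""
        def weight(k):
            return 2.0 ** -(k + 1)
        top = len(x)
        total = sum(weight(k) * ppm_prob_order_k(x, k, alphabet)
                    for k in range(top + 1))
        tail = 1.0 - sum(weight(k) for k in range(top + 1))
        return total + tail * ppm_prob_order_k(x, top, alphabet)

    print(ppm_prob("abracadabra", alphabet="abcdr"))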

In the following, we define an analogue of the pointwise entropy

(40)

Quantity will be called the length of the PPM code for the string . By nonnegativity of the Kullback-Leibler divergence, we have for any random block that

(41)

The length of the PPM code and the PPM probability have two notable properties. First, the PPM probability is a universal probability, i.e., in the limit, the length of the PPM code consistently estimates the entropy rate of a stationary source. Second, the PPM probability can be effectively computed, i.e., the summation in definition (39) can be rewritten as a finite sum. Let us state these two results formally.

Theorem 6 (cf. [37])

The PPM probability is universal in expectation, i.e., we have

(42)

for any stationary process .

  • Proof: For stationary ergodic processes the above claim follows by an iterated application of the ergodic theorem, as shown in Theorem 1.1 from [37] for the so-called measure , which is a slight modification of the PPM probability. To generalize the claim to nonergodic processes, one can use the ergodic decomposition theorem, but the exact proof requires too much theoretical overhead to be presented within the framework of this paper.

Theorem 7

The PPM probability can be effectively computed, i.e., we have

(43)

where

(44)

is the maximal repetition of string .

  • Proof: We have for . Hence for and in view of this we obtain the claim.
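
For illustration, here is a brute-force Python sketch of the maximal repetition, read off from the surrounding text as the length of a longest substring that occurs at least twice; since formula (44) is not reproduced above, this reading is an assumption of the sketch.

    def maximal_repetition(x):
        """Length of the longest substring occurring at least twice in x,
        computed by brute force over all substring lengths."""
        for length in range(len(x) - 1, 0, -1):
            seen = set()
            for i in range(len(x) - length + 1):
                sub = x[i:i + length]
                if sub in seen:
                    return length
                seen.add(sub)
        return 0

    print(maximal_repetition("abracadabra"))  # "abra" occurs twice, so 4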

Maximal repetition as a function of a string was studied, e.g., in [38, 39]. Since the PPM probability is a computable probability distribution, by (27) we have, for a certain constant ,

(45)

Let us denote the length of the PPM code of order ,

(46)

As we can easily see, the code length is approximately equal to the minimal code length , where the minimization goes over . Thus it is meaningful to consider the following definition of the PPM order of an arbitrary string.

Definition 4

The PPM order is the smallest such that

(47)
Theorem 8

We have .

  • Proof: Follows by for .
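
The following Python sketch computes a PPM order in the sense motivated above, namely the smallest order minimizing the order-k PPM code length; since formula (47) is not reproduced here, this reading is an assumption of the sketch. It reuses ppm_cond_prob from the sketch after Definition 3.

    import math

    def ppm_code_length(x, k, alphabet):
        """Length (in nats) of the order-k PPM code of x: minus the sum of the
        logarithms of the conditional estimates, in the spirit of (46)."""
        return -sum(math.log(ppm_cond_prob(x[:i], x[i], k, alphabet))
                    for i in range(len(x)))

    def ppm_order(x, alphabet):
        """Smallest order k minimizing the order-k PPM code length, searched by
        brute force up to k = len(x), beyond which the estimates of this sketch
        no longer change."""
        lengths = [ppm_code_length(x, k, alphabet) for k in range(len(x) + 1)]
        best = min(lengths)
        return next(k for k, ell in enumerate(lengths) if ell == best)

    print(ppm_order("abracadabra", alphabet="abcdr"))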

Let us digress for a short while from the PPM code definition. The set of distinct substrings of length in string is

(48)

The cardinality of set as a function of substring length is called the subword complexity of string [38]. Now let us apply the concept of the PPM order to define a special set of substrings of an arbitrary string . The set of distinct PPM words detected in will be defined as the set for , i.e.,

(49)
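
A Python sketch of the sets in (48) and (49) follows, assuming that the PPM words are the distinct substrings whose length equals the PPM order; the elided formula (49) may use a slightly shifted length. The function ppm_order from the previous sketch is assumed.

    def distinct_substrings(x, k):
        """The set of distinct substrings of length k of x, as in (48); its
        cardinality is the subword complexity of x at length k."""
        return {x[i:i + k] for i in range(len(x) - k + 1)}

    def ppm_words(x, alphabet):
        """PPM words of x: distinct substrings whose length is tied to the PPM
        order of x; here we simply use the PPM order itself (at least 1)."""
        k = max(1, ppm_order(x, alphabet))
        return distinct_substrings(x, k)

    print(sorted(ppm_words("abracadabra", alphabet="abcdr")))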

Let us define the pointwise mutual information

(50)

and the algorithmic mutual information

(51)

Now we may write down the theorem about facts and words. The theorem states that the Hilberg exponent for the expected number of initial independent inferrable facts is less than the Hilberg exponent for the expected mutual information, which in turn is less than the Hilberg exponent for the expected number of distinct detected PPM words plus the PPM order. (The PPM order is usually much smaller than the number of distinct PPM words.)

Theorem 9 (facts and words I, cf. [24])

Let be a stationary strongly nonergodic process over a finite alphabet. We have inequalities

(52)
  • Proof: The claim follows by conjunction of Theorem 12 from Appendix A and Theorem 18 from Appendix B.

Theorem 9 also has an algorithmic version, which applies in particular to ergodic processes.

Theorem 10 (facts and words II)

Let be a stationary process over a finite alphabet. We have inequalities

(53)
  • Proof: The claim follows by conjunction of Theorem 13 from Appendix A and Theorem 18 from Appendix B.

The theorem about facts and words previously proven in [24] differs from Theorem 9 in three aspects. First of all, the theorem in [24] did not apply the concept of the Hilberg exponent and compared with rather than with . Second, the number of inferrable facts was defined as a functional of the process distribution rather than as a random variable depending on a particular text. Third, the number of words was defined using a minimal grammar-based code rather than the concept of the PPM order. Minimal grammar-based codes are not computable in polynomial time, in contrast to the PPM order. Thus we may claim that Theorem 9 is stronger than the theorem about facts and words previously proven in [24]. Moreover, applying Kolmogorov complexity and algorithmic randomness to formulate and prove Theorem 10 is a new idea.

It is an interesting question whether we have an almost sure version of Theorems 9 and 10, namely, whether

(54)

for strongly nonergodic processes, or

(55)

for general stationary processes. We leave this question as an open problem.

6 Hilberg exponents and empirical data

It is advisable to show that the Hilberg exponents considered in Theorem 9 can assume any value in the admissible range and that the difference between them can be arbitrarily large. We adopt the convention that the set of inferrable probabilistic facts is empty for ergodic processes, . With this remark in mind, let us inspect some examples of processes.

First of all, for Markov processes and their strongly nonergodic mixtures, of any order but over a finite alphabet, we have

(56)

This is so since the sufficient statistic of text for predicting text is the maximum likelihood estimate of the transition matrix, the elements of which can assume at most distinct values. Hence , where is the cardinality of the alphabet and is the Markov order of the process. Similarly, it can be shown for these processes that the PPM order satisfies . Hence the number of PPM words, which satisfies inequality , is also bounded above. In consequence, for Markov processes and their strongly nonergodic mixtures, of any order but over a finite alphabet, we obtain

(57)

In contrast, Santa Fe processes are strongly nonergodic mixtures of IID processes over an infinite alphabet, and hence they need not satisfy condition (57). In fact, as shown in [24, 28] and Appendix C, for the Santa Fe process with exponent we have the asymptotic power-law growth

(58)

The same equality for the number of inferrable probabilistic facts and the mutual information is also satisfied by a stationary coding of the Santa Fe process into a finite alphabet, see [28].

Let us also note that, whereas the theorem about facts and words provides an inequality of Hilberg exponents, this inequality can be strict. To substantiate this claim, in [28] we constructed a modification of the Santa Fe process which is ergodic and over a finite alphabet. For this modification, we have only the power-law growth of mutual information

(59)

Since in this case, the difference between the Hilberg exponents for the number of inferrable probabilistic facts and the number of PPM words can be an arbitrary number in the range .

Now we are in a position to discuss some empirical data. In this case, we cannot directly measure the number of facts or the mutual information, but we can compute the PPM order and count the number of PPM words. In Figure 1, we present data for a collection of 35 plays by William Shakespeare (downloaded from Project Gutenberg, https://www.gutenberg.org/) and for a random permutation of the characters appearing in this collection of texts. The random permutation of characters is an IID process over a finite alphabet, so in this case we obtain

(60)

In contrast, for the plays of Shakespeare we seem to have a stepwise power-law growth of the number of distinct PPM words. Thus we may suppose that for natural language we have, more generally,

(61)

If relationship (61) holds true, then natural language cannot be a Markov process of any order. Moreover, in view of the striking difference between observations (60) and (61), we may suppose that the number of inferrable probabilistic or algorithmic facts for texts in natural language also obeys a power-law growth. Formally speaking, this condition would translate to natural language being strongly nonergodic or perigraphic. We note that this hypothesis arises only as a form of weak inductive inference, since formally we cannot deduce condition (33) from mere condition (61), regardless of the amount of data supporting condition (61).

Figure 1: The PPM order and the cardinality of the PPM vocabulary versus the input length for William Shakespeare's First Folio/35 Plays and a random permutation of the text's characters.
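
For readers who wish to reproduce a toy version of this experiment, the sketch below evaluates the PPM order and the PPM vocabulary size on prefixes of length 2^k. It reuses the brute-force functions ppm_order and ppm_words from the sketches in Section 5, so it is practical only for very short inputs; the experiment reported in Figure 1 relies on an efficient PPM implementation, which is not reproduced here. The variable excerpt in the commented usage is a placeholder for a short text string.

    def vocabulary_growth(text, alphabet, max_k=7):
        """Toy-scale version of the experiment behind Figure 1: for prefixes of
        length 2^k, report the PPM order and the number of PPM words."""
        rows = []
        for k in range(1, max_k + 1):
            n = 2 ** k
            if n > len(text):
                break
            prefix = text[:n]
            rows.append((n, ppm_order(prefix, alphabet),
                         len(ppm_words(prefix, alphabet))))
        return rows

    # Hypothetical usage on a short excerpt:
    # for n, order, vocab in vocabulary_growth(excerpt, alphabet=set(excerpt)):
    #     print(n, order, vocab)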

7 Conclusion

In this article, a stationary process has been called strongly nonergodic if some persistent random topic can be detected in the process and an infinite number of independent binary random variables, called probabilistic facts, is needed to describe this topic completely. Replacing probabilistic facts with an algorithmically random sequence of bits, called algorithmic facts, we have adapted this property back to ergodic processes. Subsequently, we have called a process perigraphic if the number of algorithmic facts which can be inferred from a finite text sampled from the process grows like a power of the text length.

We have demonstrated an assertion which we call the theorem about facts and words. This proposition states that the number of independent probabilistic or algorithmic facts which can be inferred from a text drawn from a process must be roughly smaller than the number of distinct word-like strings detected in this text by means of the PPM compression algorithm. We have exhibited two versions of this theorem: one for strongly nonergodic processes, applying Shannon information theory, and one for ergodic processes, applying algorithmic information theory.

Subsequently, we have exhibited an empirical observation that the number of distinct word-like strings grows like a stepwise power law for a collection of plays by William Shakespeare, in stark contrast to Markov processes. This observation does not rule out that the number of probabilistic or algorithmic facts inferrable from texts in natural language also grows like a power law. Hence we have supposed that natural language is a perigraphic process.

We suppose that future related research should proceed through a further analysis of the theorem about facts and words and toward demonstrating an almost sure version of this statement.

Acknowledgment

We wish to thank Jacek Koronacki, Jan Mielniczuk, and Vladimir Vovk for helpful comments.

Appendix A Facts and mutual information

In the appendices, we will make use of several kinds of information measures.

  1. First, there are four pointwise Shannon information measures:

    • entropy
      ,

    • conditional entropy
      ,

    • mutual information
      ,

    • conditional mutual information
      ,

    where is the probability of a random variable and is the conditional probability of a random variable given a random variable . The above definitions make sense for discrete-valued random variables and and an arbitrary random variable . If is a discrete-valued random variable then also and .

  2. Moreover, we will use four algorithmic information measures:

    • entropy
      ,

    • conditional entropy
      ,

    • mutual information
      ,

    • conditional mutual information
      ,

    where is the prefix-free Kolmogorov complexity of an object and is the prefix-free Kolmogorov complexity of an object given an object . In the above definitions, and must be finite objects (finite texts), whereas can be also an infinite object (an infinite sequence). If is a finite object then rather than being equal to , where , , and are the equality and the inequalities up to an additive constant [35, Theorem 3.9.1]. Hence

    (62)

In the following, we will prove a result for Hilberg exponents.

Theorem 11

Define . If the limit exists and is finite then

(63)

with an equality if for all but finitely many .

  • Proof: The proof makes use of the telescope sum

    (64)

    Denote . Since , it is sufficient to prove inequality (63) for . In this case, for all but finitely many for any . Then for , by the telescope sum (64) we obtain for sufficiently large that