Pushing undecidability of the isolation problem for probabilistic automata

Nathanaël Fijalkow, LIAFA, CNRS & Université Denis Diderot - Paris 7, France
nath@liafa.jussieu.fr

Hugo Gimbert, LaBRI, CNRS, France
hugo.gimbert@labri.fr

Youssouf Oualhadj, LaBRI, Université Bordeaux 1, France
youssouf.oualhadj@labri.fr
Abstract

This short note proves that the isolation problem is undecidable for probabilistic automata with only one probabilistic transition. This problem is known to be undecidable for general probabilistic automata, without any restriction on the number of probabilistic transitions. In this note, we develop a simulation technique that allows us to simulate any probabilistic automaton by an automaton having only one probabilistic transition.

1 Introduction

Probabilistic automata. Rabin introduced probabilistic automata over finite words as a natural and simple computation model [Rab63]. A probabilistic automaton can be thought of as a non-deterministic automaton whose non-deterministic transitions are resolved according to fixed probability distributions. Probabilistic automata have drawn much attention and have been extensively studied (see [Buk80] for a survey).

The isolation problem. However, on the algorithmic side, most of the known results are undecidability results. The isolation problem asks, given a probability λ, whether there exist words accepted with probability arbitrarily close to λ. Bertoni showed that this problem is undecidable [Ber74, BMT77].
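For reference, this is Rabin's classical notion of an isolated cut-point: writing P_𝒜(w) for the probability that the automaton 𝒜 accepts the word w (formally defined in Section 2), the probability λ is isolated when

    there exists ε > 0 such that |P_𝒜(w) − λ| ≥ ε for every word w,

so the question above is exactly whether λ fails to be isolated.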

Contribution. In this note, we prove that the isolation problem is undecidable even for probabilistic automata having only one probabilistic transition. To do so, we develop a simulation technique that allows us to simulate any probabilistic automaton by an automaton having only one probabilistic transition.

Outline. Section 2 is devoted to definitions. In Section 3, we develop a simulation technique which allows us to simulate any probabilistic automaton by an automaton having only one probabilistic transition. Using this technique, we show that the isolation problem is undecidable for this very restricted class of automata.

2 Definitions

Given a finite set of states Q, a probability distribution (distribution for short) over Q is a row vector δ of size |Q| with rational entries in [0, 1] such that Σ_{q ∈ Q} δ(q) = 1. We denote by δ_s the distribution such that δ_s(q) = 1 if q = s and δ_s(q) = 0 otherwise. A probabilistic transition matrix M is a square matrix of size |Q| × |Q| with entries in [0, 1], such that for every state s, the row M(s, ·) is a distribution over Q.

Definition 1 (Probabilistic automaton)

A probabilistic automaton is a tuple 𝒜 = (Q, A, (M_a)_{a ∈ A}, q_0, F), where Q is a finite set of states, A is the finite input alphabet, (M_a)_{a ∈ A} are the probabilistic transition matrices, q_0 ∈ Q is the initial state and F ⊆ Q is the set of accepting states.

For each letter a ∈ A, M_a(s, t) is the probability to go from state s to state t when reading the letter a. A probabilistic transition is a pair (s, a) such that 0 < M_a(s, t) < 1 for some state t.

A probabilistic automaton is said to be simple if, for every letter a ∈ A and all states s and t, we have M_a(s, t) ∈ {0, 1/2, 1}.
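As an illustration of these definitions, here is a minimal Python sketch (the representation and all names are ours, not part of the note): a probabilistic automaton is stored as one row-stochastic matrix per letter, with exact rational entries, and the "simple" condition and the probabilistic transitions are checked directly.

    from dataclasses import dataclass
    from fractions import Fraction
    from typing import Dict, List, Set

    Matrix = List[List[Fraction]]  # one row per state; each row is a distribution

    @dataclass
    class ProbabilisticAutomaton:
        n_states: int                    # states are 0, ..., n_states - 1
        transitions: Dict[str, Matrix]   # letter a  ->  matrix M_a
        initial: int
        accepting: Set[int]

        def is_well_formed(self) -> bool:
            # every row M_a(s, .) must be a distribution over the states
            return all(
                len(row) == self.n_states
                and sum(row) == 1
                and all(0 <= p <= 1 for p in row)
                for M in self.transitions.values() for row in M)

        def is_simple(self) -> bool:
            # "simple": all entries lie in {0, 1/2, 1}
            allowed = {Fraction(0), Fraction(1, 2), Fraction(1)}
            return all(p in allowed
                       for M in self.transitions.values() for row in M for p in row)

        def probabilistic_transitions(self) -> List[tuple]:
            # pairs (s, a) such that 0 < M_a(s, t) < 1 for some t
            return [(s, a)
                    for a, M in self.transitions.items()
                    for s, row in enumerate(M)
                    if any(0 < p < 1 for p in row)]

    # Example: two states, one probabilistic transition (reading 'a' from state 0).
    half, one, zero = Fraction(1, 2), Fraction(1), Fraction(0)
    toy = ProbabilisticAutomaton(
        n_states=2,
        transitions={'a': [[half, half], [zero, one]],
                     'b': [[zero, one], [one, zero]]},
        initial=0,
        accepting={1})
    assert toy.is_well_formed() and toy.is_simple()
    assert toy.probabilistic_transitions() == [(0, 'a')]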

Given an input word w ∈ A*, we define the matrix M_w by induction on w: we have M_ε = Id (the identity matrix); then, for a letter a in A, M_a is the corresponding transition matrix, and if w = a · w′, then M_w = M_a · M_{w′}. Given an initial distribution δ, the distribution reached after reading w is δ · M_w.

We denote by P_𝒜(s, w, F) the probability to reach the set F from the state s when reading the word w, that is P_𝒜(s, w, F) = Σ_{t ∈ F} (δ_s · M_w)(t).

Definition 2 (Value and acceptance probability)

The acceptance probability of a word w by 𝒜 is P_𝒜(w) = P_𝒜(q_0, w, F). The value of 𝒜, denoted val(𝒜), is the supremum of the acceptance probabilities: val(𝒜) = sup_{w ∈ A*} P_𝒜(w).
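Continuing in the same illustrative spirit (again with our own names), the acceptance probability of a word is computed by iterated vector–matrix products, starting from the Dirac distribution on the initial state and summing over the accepting states at the end; the value, being a supremum over all words, is of course not computed this way.

    from fractions import Fraction
    from typing import Dict, List, Set

    Matrix = List[List[Fraction]]

    def acceptance_probability(transitions: Dict[str, Matrix],
                               initial: int, accepting: Set[int],
                               word: str) -> Fraction:
        """P(word): start from the Dirac distribution on the initial state,
        multiply by M_a for every letter a of the word, then sum over accepting states."""
        n = len(next(iter(transitions.values())))
        dist = [Fraction(int(q == initial)) for q in range(n)]
        for a in word:
            M = transitions[a]
            dist = [sum(dist[s] * M[s][t] for s in range(n)) for t in range(n)]
        return sum(dist[q] for q in accepting)

    half, one, zero = Fraction(1, 2), Fraction(1), Fraction(0)
    M = {'a': [[half, half], [zero, one]],    # the only probabilistic transition
         'b': [[zero, one], [one, zero]]}
    assert acceptance_probability(M, 0, {1}, 'a') == Fraction(1, 2)
    assert acceptance_probability(M, 0, {1}, 'ab') == Fraction(1, 2)
    assert acceptance_probability(M, 0, {1}, 'aa') == Fraction(3, 4)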

3 Simulation with one probabilistic transition

We first show how to simulate a probabilistic automaton by an automaton having only one probabilistic transition, up to a regular language:

Proposition 1

For any simple probabilistic automaton 𝒜 over an alphabet A, there exists a simple probabilistic automaton ℬ over a new alphabet B, with only one probabilistic transition, and a morphism φ : A* → B* such that: P_𝒜(w) = P_ℬ(φ(w)) for all w ∈ A*.

The morphism φ is not onto, so this simulation works only up to the regular language φ(A*). We shall see that the automaton ℬ is not able to check that the word it reads belongs to this language, which makes this restriction unavoidable in this construction.

We first give the intuition behind the construction. While reading a word w, the probabilistic automaton 𝒜 “throws parallel threads”. A computation of 𝒜 over w can be viewed as a tree, where probabilistic transitions correspond to branching nodes.





Figure 1: An example of a computation

In the figure, reading a letter from some of the states leads deterministically to the next state, while reading it from another state leads at random to one of two states; the corresponding node is branching. Our interpretation is that two parallel threads are thrown. Let us make two observations:

  • threads are not synchronised: when reading the fourth letter, the first thread moves deterministically to its next state, while the second thread randomises;

  • threads are merged, so that there are at most |Q| parallel threads: whenever two threads synchronise to the same state, they are merged. This happens in the figure after reading the fifth letter.

The automaton ℬ we construct will simulate the threads from the beginning, and take care of the merging process at each step.

Proof

We denote by q_1, …, q_n the states of 𝒜, i.e. Q = {q_1, …, q_n}. The alphabet B is made of two new letters ‘⋆’ and ‘merge’ plus, for each letter a ∈ A and each state s ∈ Q, two new letters a_s and ā_s, so that: B = {⋆, merge} ∪ {a_s, ā_s : a ∈ A, s ∈ Q}.

We now define the automaton ℬ. We duplicate each state s ∈ Q, and denote the fresh copy by s′. Intuitively, s′ is a temporary state that will be merged back into s at the next merging step. States of ℬ are either a state of Q, or its copy, or one of the three fresh states c, L and R.

The initial state remains q_0, and the set of final states remains F.

The transitions of ℬ are as follows:

  • for every letter a ∈ A and state s ∈ Q, the new letter a_s read from state s leads deterministically to the state c, i.e. M_{a_s}(s, c) = 1;

  • the new letter ⋆ read from state c leads with probability half to L and half to R, i.e. M_⋆(c, ·) = 1/2 · δ_L + 1/2 · δ_R (this is the only probabilistic transition of ℬ);

  • the new letter ā_s read from the states L and R applies the transition function of 𝒜 from s reading a: if the transition is deterministic, i.e. M_a(s, t) = 1 for some state t, then ā_s sends both L and R to the copy t′; else the transition is probabilistic, i.e. M_a(s, ·) = 1/2 · δ_t + 1/2 · δ_u for some states t and u, and then ā_s sends L to t′ and R to u′;

  • the new letter merge activates the merging process: it consists in replacing s′ by s, i.e. M_merge(s′, s) = 1 for all s ∈ Q.

Whenever a pair (letter, state) does not fall into one of the previous cases, the letter has no effect on that state. The gadget simulating a transition is illustrated in the figure.



Now we define the morphism φ : A* → B* by its action on letters: φ(a) = a_{q_1} ⋆ ā_{q_1} · a_{q_2} ⋆ ā_{q_2} ⋯ a_{q_n} ⋆ ā_{q_n} · merge.

The computation of 𝒜 while reading w in A* is simulated by ℬ on φ(w), i.e. we have: P_𝒜(w) = P_ℬ(φ(w)) for all w ∈ A*.

This completes the proof. ∎

Let us remark that ℬ is indeed unable to check that a letter a_s is actually followed by ⋆ and then by the corresponding ā_s: in between, it goes through the fresh states c, L and R and “forgets” the state s it was in.
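To make the construction concrete, here is a small self-contained Python sketch of our reading of it: it builds, for a toy simple automaton, the automaton with a single probabilistic transition together with the morphism, and checks the equality of acceptance probabilities on a few words. The encodings of the letters and the names of the fresh states ('c', 'L', 'R', 'bar', 'star', 'merge') are illustrative choices of ours.

    from fractions import Fraction

    H, ONE = Fraction(1, 2), Fraction(1)

    def accept_prob(delta, init, final, word):
        """delta(state, letter) -> {successor: probability}, or None meaning 'no effect'.
        Returns the acceptance probability of the word."""
        dist = {init: ONE}
        for letter in word:
            new = {}
            for q, p in dist.items():
                for r, pr in (delta(q, letter) or {q: ONE}).items():
                    new[r] = new.get(r, Fraction(0)) + p * pr
            dist = new
        return sum(p for q, p in dist.items() if q in final)

    # A toy simple automaton A over {'a', 'b'}, states 0 and 1, accepting {1};
    # reading 'a' from state 0 is its only probabilistic transition.
    Q, A_init, A_final = [0, 1], 0, {1}
    M = {('a', 0): {0: H, 1: H}, ('a', 1): {1: ONE},
         ('b', 0): {1: ONE},     ('b', 1): {0: ONE}}

    def A(q, letter):
        return M.get((letter, q))

    # The automaton B: states Q, their temporary copies ('bar', s),
    # and the fresh states 'c', 'L', 'R'.
    def B(q, letter):
        if letter == 'star' and q == 'c':          # the unique probabilistic transition
            return {'L': H, 'R': H}
        if isinstance(letter, tuple):
            kind, a, s = letter
            if kind == 'in' and q == s:            # enter the gadget
                return {'c': ONE}
            if kind == 'out' and q in ('L', 'R'):  # leave the gadget towards a copy
                targets = sorted(M[(a, s)])        # one target (deterministic) or two
                t = targets[0] if q == 'L' or len(targets) == 1 else targets[1]
                return {('bar', t): ONE}
        if letter == 'merge' and isinstance(q, tuple):
            return {q[1]: ONE}                     # merging: the copy of s becomes s again
        return None                                # any other pair (state, letter): no effect

    def phi(word):
        """The morphism: a letter a is encoded, for every state s in turn,
        by the block ('in', a, s)  'star'  ('out', a, s), followed by 'merge'."""
        image = []
        for a in word:
            for s in Q:
                image += [('in', a, s), 'star', ('out', a, s)]
            image.append('merge')
        return image

    # Sanity check of Proposition 1 on this toy example: P_A(w) = P_B(phi(w)).
    for w in ['', 'a', 'b', 'ab', 'aab', 'abab']:
        assert accept_prob(A, A_init, A_final, w) == accept_prob(B, A_init, A_final, phi(w))

The dictionary-based distributions make the merging step implicit: two threads reaching the same state simply add up their probability mass.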

We now improve the above construction: we get rid of the external regular condition. To this end, we will use probabilistic automata whose transitions have probabilities 0, 1/3, 2/3 or 1. This is no restriction, as stated in the following lemma:

Lemma 1

For any simple probabilistic automaton 𝒜, there exists a probabilistic automaton 𝒜′ whose transitions have probabilities 0, 1/3, 2/3 or 1, such that for all w in A*, we have: the acceptance probability P_𝒜(w) is the limit of the acceptance probabilities by 𝒜′ of a sequence of words encoding w.

Proof

We provide a construction to pick one of two outcomes with probability half each, using transitions with probabilities 1/3, 2/3 and 1. The construction is illustrated in the figure.



In this gadget, the only letter read is a fresh new letter. The idea is the following: to pick one of two outcomes with probability half each, we sequentially pick twice, each pick being “a third” with probability one third and “two thirds” with probability two thirds. Whenever the two picks are different, if the first one was “a third”, then we choose the first outcome, else we choose the second one. These two events happen with probability half each among the runs where a choice is made; when the two picks are equal, the gadget restarts. We easily see that each outcome is eventually chosen with probability arbitrarily close to one half. ∎
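To spell out the “half each” claim (the computation is ours): with two independent picks, each being “a third” with probability 1/3 and “two thirds” with probability 2/3,

    P(picks differ and the first is “a third”)    = 1/3 · 2/3 = 2/9,
    P(picks differ and the first is “two thirds”) = 2/3 · 1/3 = 2/9,

so the two choices are equally likely, while the picks agree with probability 1/9 + 4/9 = 5/9; restarting in that case, no choice has been made after k rounds with probability (5/9)^k, which vanishes as k grows.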

Proposition 2

For any simple probabilistic automaton 𝒜 and any positive rational λ, there exists a simple probabilistic automaton ℬ over a new alphabet B, with only one probabilistic transition, such that: λ is isolated with respect to 𝒜 if and only if λ is isolated with respect to ℬ.

Thanks to the lemma, we assume from now on that the transitions of 𝒜 have probabilities 0, 1/3, 2/3 or 1.

We first deal with the case where λ = 1. The new gadget used to simulate a transition is illustrated in the figure.




The automaton ℬ reads words of the form u_1 · finish · u_2 · finish ⋯ u_k · finish, where the u_i are encodings of input words and ‘finish’ is a fresh new letter. The idea is to “skip”, or “delay”, part of the computation of 𝒜: each time the automaton reads the encoding of a word, this encoding is skipped with some probability.

Simulating a transition works as follows: whenever in state c, reading the letter ‘⋆’ twice leads with probability one half to R, one quarter to L and one quarter to wait. As before, from R and L, we proceed with the simulation. However, in the last case, we “wait” for the next letter ‘finish’, which restarts the computation from the initial state. Thus, each time a transition is simulated, the word being read is skipped with probability one quarter.
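Under our reading of the gadget, this is where Lemma 1 pays off: conditioned on not being delayed (probability 3/4), the two continuing branches have probabilities

    (1/2) / (3/4) = 2/3   and   (1/4) / (3/4) = 1/3,

which matches exactly the transition probabilities 2/3 and 1/3 that Lemma 1 arranged for 𝒜.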

Delaying part of the computation allows us to multiply the number of threads. We will use the accepted threads to check the extra regular condition we had before. To this end, as soon as a simulated thread is accepted in 𝒜, it goes through an automaton (denoted 𝒞 in the construction) that checks the extra regular condition.

Proof

We keep the same notations. The alphabet B is made of three new letters ‘⋆’, ‘merge’ and ‘finish’ plus, for each letter a ∈ A and state s ∈ Q, two new letters a_s and ā_s, so that: B = {⋆, merge, finish} ∪ {a_s, ā_s : a ∈ A, s ∈ Q}.

We first define a syntactic automaton 𝒞. We define a morphism from B* to A* by its action on letters, which maps an encoding back to the word of A* it encodes. Consider the regular language of well-formed encodings, and let 𝒞 be a deterministic automaton recognizing it.

We now define the automaton ℬ. We duplicate each state s ∈ Q, and denote the fresh copy by s′. States of ℬ are either a state of Q or its copy, a state of 𝒞, or one of the four fresh states c, L, R and wait.

The initial state remains q_0, and the set of final states is the set of accepting states of 𝒞.

The transitions of ℬ are as follows:

  • for every letter a ∈ A and state s ∈ Q, the new letter a_s read from state s leads deterministically to the state c, i.e. M_{a_s}(s, c) = 1;

  • the new letter ⋆ read from state c leads with probability half to L and half back to c, i.e. M_⋆(c, ·) = 1/2 · δ_L + 1/2 · δ_c (this is the only probabilistic transition of ℬ);

  • any letter other than ⋆ read from state c leads deterministically to wait, i.e. M_b(c, wait) = 1 for every letter b ≠ ⋆;

  • the new letter ⋆ read from state L leads deterministically to R, i.e. M_⋆(L, R) = 1;

  • the new letter ā_s read from the states R and L applies the transition function of 𝒜 from s reading a: if the transition is deterministic, i.e. M_a(s, t) = 1 for some state t, then ā_s sends both R and L to the copy t′; else the transition is probabilistic, i.e. M_a(s, ·) = 2/3 · δ_t + 1/3 · δ_u for some states t and u, and then ā_s sends R to t′ and L to u′;

  • the new letter merge activates the merging process: it consists in replacing s′ by s for all s ∈ Q;

  • the new letter finish read from the state wait leads deterministically to the initial state q_0;

  • the new letter finish read from a state in F leads deterministically to the initial state of 𝒞;

  • the new letter finish from any other state is not defined (there is a deterministic transition to a bottom non-accepting state).

Transitions within 𝒞 are not modified. Whenever a pair (letter, state) does not fall into one of the previous cases, the letter has no effect on that state.

We now show that this construction is correct.

We first prove that, for all w ∈ A*, there exists a sequence of words (w_n)_{n ∈ ℕ} over B such that P_ℬ(w_n) converges to P_𝒜(w).

Let w_n denote the encoding of w followed by the letter finish, repeated n times. We have, for a distribution δ over Q: reading the encoding of w followed by finish from δ, a fixed fraction ρ > 0 of the probability mass completes the simulation of w, distributed according to δ · M_w, while the delayed mass is re-injected at the initial state when finish is read. It follows:

    P_ℬ(w_n) = (1 − (1 − ρ)^n) · P_𝒜(w),

where ρ only depends on the length of w. Hence: P_ℬ(w_n) converges to P_𝒜(w) as n tends to infinity.

The computation of 𝒜 while reading w is thus simulated by ℬ on (each pass of) w_n. This implies that, if 1 is not isolated with respect to 𝒜, then 1 is not isolated with respect to ℬ.
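Under this reading of the construction, the convergence is a plain geometric series (the computation is ours): the probability mass that completes the simulation at the k-th pass, and is then accepted, is ρ(1 − ρ)^(k−1) · P_𝒜(w), hence

    P_ℬ(w_n) = Σ_{k=1..n} ρ(1 − ρ)^(k−1) · P_𝒜(w) = (1 − (1 − ρ)^n) · P_𝒜(w)  →  P_𝒜(w)   as n → ∞.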

Conversely, we prove that if 1 is not isolated with respect to ℬ, then 1 is not isolated with respect to 𝒜. Let w be a word accepted by ℬ with probability close to 1; we slice it as w = u_1 · finish · u_2 · finish ⋯ finish · u_k, where the u_i do not contain the letter finish. The key observation is that a word that does not contain the letter finish cannot be accepted with probability close to 1, so we consider only the case k ≥ 2. We assume without loss of generality that some probability mass enters the syntactic automaton 𝒞 when the first letter finish is read (otherwise we delete u_1 and this first finish and proceed). In this case, a thread thrown while reading u_1 has reached an accepting state, so the syntactic process has started: it follows that the u_i for i ≥ 2 are in the image of the encoding. This implies that the simulation is sound: from w we can recover a word of A* accepted by 𝒜 with probability arbitrarily close to 1.

The case where λ is any positive rational is handled similarly. We only need to ensure that the previous key observation still holds, namely that a word that does not contain the letter finish cannot be accepted with probability arbitrarily close to λ. This is made possible by slightly modifying the simulation gadget, adding new intermediate states.

This completes the proof. ∎

We conclude:

Theorem 3.1

The isolation problem is undecidable for simple probabilistic automata with only one probabilistic transition.

References

  • [Ber74] Alberto Bertoni. The solution of problems relative to probabilistic automata in the frame of the formal languages theory. In GI Jahrestagung, pages 107–112, 1974.
  • [BMT77] Alberto Bertoni, Giancarlo Mauri, and Mauro Torelli. Some recursive unsolvable problems relating to isolated cutpoints in probabilistic automata. In International Colloquium on Automata, Languages and Programming, pages 87–94, 1977.
  • [Buk80] R. G. Bukharaev. Probabilistic automata. Journal of Mathematical Sciences, 13(3):359–386, 1980.
  • [Rab63] M. O. Rabin. Probabilistic automata. Information and Control, 6(3):230–245, 1963.