Optimal Computation of Avoided Words
Abstract
The deviation of the observed frequency of a word w from its expected frequency in a given sequence x is used to determine whether or not the word is avoided. This concept is particularly useful in DNA linguistic analysis. The value of the standard deviation of w, denoted by std(w), effectively characterises the extent of a word by its edge contrast in the context in which it occurs. A word w of length k > 2 is a ρ-avoided word in x if std(w) ≤ ρ, for a given threshold ρ < 0. Notice that such a word may be completely absent from x. Hence computing all such words naïvely can be a very time-consuming procedure, in particular for large k. In this article, we propose an O(n)-time and O(n)-space algorithm to compute all ρ-avoided words of length k in a given sequence x of length n over a fixed-sized alphabet. We also present a time-optimal O(σn)-time and O(σn)-space algorithm to compute all ρ-avoided words (of any length) in a sequence of length n over an integer alphabet of size σ. Furthermore, we provide a tight asymptotic upper bound for the number of ρ-avoided words and the expected length of the longest one. We make available an open-source implementation of our algorithm. Experimental results, using both real and synthetic data, show the efficiency of our implementation.
1 Introduction
The onetoone mapping of a DNA molecule to a sequence of letters suggests that DNA analysis can be modelled within the framework of formal language theory [10.2307/29774782]. For example, a region within a DNA sequence can be considered as a “word” on a fixedsized alphabet in which some of its natural aspects can be described by means of certain types of automata or grammars. However, a linguistic analysis of the DNA needs to take into account many distinctive physical and biological characteristics of such sequences: DNA contains coding regions that encode for polypeptide chains associated with biological functions; and noncoding regions, most of which are not linked to any particular function. Both appear to have many statistical features in common with natural languages [1994PhRvL..73.3169M].
A computational tool oriented towards the systematic search for avoided words is particularly useful for in silico genomic research analyses. The search for absent words has already been undertaken in the recent past and several results exist [nullrly]. However, words which may be present in a genome or in genomic sequences of a specific role (e.g., protein coding segments, regulatory elements, conserved non-coding elements etc) but are strongly underrepresented, as we can estimate on the basis of the frequency of occurrence of their longest proper factors, may be of particular importance. They can be words of nucleotides which are hardly tolerated because they negatively influence the stability of the chromatin or, more generally, the functional genomic conformation; they can represent targets of restriction endonucleases which may be found in bacterial and viral genomes; or, more generally, they may be short genomic regions whose presence in wide parts of the genome is not tolerated for less known reasons. The understanding of such avoidances is becoming an interesting line of research (for recent studies, see [DBLP:conf/spire/BelazzouguiC15, Rusinov2015]).
On the other hand, short words of nucleotides may be systematically avoided in large genomic regions or whole genomes for entirely different reasons: just because they play important signaling roles which restrict their appearance only in specific positions: consensus sequences for the initiation of gene transcription and of DNA replication are wellknown such oligonucleotides. Other such cases may be insulators, sequences anchoring the chromatin on the nuclear envelope like laminaassociated domains, short sequences like dinucleotide repeat motifs with enhancer activity, and several other cases. Again, we cannot exclude that this area of research could lead to the identification of short sequences of regulatory activities still unknown.
Brendel et al. in [brendel1986linguistics] initiated research into the linguistics of nucleotide sequences that focuses on the concept of words in continuous languages, that is, languages devoid of blanks, and introduced an operational definition of words. The authors suggested a method to measure, for each possible word w of length k, the deviation of its observed frequency from the expected frequency in a given sequence. The values of the standard deviation, denoted by std(w), were then used to identify words that are avoided among all possible words of length k. The typical length of avoided (or of overabundant) words of the nucleotide language was found to range from 3 to 5 (tri- to pentamers). The statistical significance of the avoided words was shown to reflect their biological importance. This work, however, was based on the very limited sequence data available at the time: only DNA sequences from two viral and one bacterial genomes were considered. Also note that the picture might change when considering eukaryotic genomes, the complex dynamics and function of which might impose a more demanding analysis.
Our contribution. The computational problem can be described as follows. Given a sequence x of length n, an integer k > 2, and a real number ρ < 0, compute the set of ρ-avoided words of length k, i.e. all words w of length k for which std(w) ≤ ρ. We call this set the ρ-avoided words of length k in x. Brendel et al. did not provide an efficient solution for this computation [brendel1986linguistics]. Notice that such a word may be completely absent from x. Hence the set of ρ-avoided words can be naïvely computed by considering all σ^k possible words, where σ is the size of the alphabet. Here we present an O(n)-time and O(n)-space algorithm for computing all ρ-avoided words of length k in a sequence x of length n over a fixed-sized alphabet. We also present a time-optimal O(σn)-time and O(σn)-space algorithm to compute all ρ-avoided words (of any length) over an integer alphabet of size σ. Furthermore, we provide a tight asymptotic upper bound for the number of ρ-avoided words and the expected length of the longest one. We make available an open-source implementation of our algorithm. Experimental results, using both real and synthetic data, show its efficiency and applicability. Specifically, using our method we confirm that restriction endonucleases which target self-complementary sites are not found in eukaryotic sequences [Rusinov2015].
2 Terminology and Technical Background
2.1 Definitions and Notation
We begin with basic definitions and notation generally following [CHL07]. Let x = x[0]x[1]..x[n−1] be a word of length n = |x| over a finite ordered alphabet Σ of size σ = |Σ|. For two positions i and j on x, we denote by x[i..j] = x[i]..x[j] the factor (sometimes called subword) of x that starts at position i and ends at position j (it is empty if j < i), and by ε the empty word, that is, the word of length 0. We recall that a prefix of x is a factor that starts at position 0 (x[0..j]) and a suffix is a factor that ends at position n − 1 (x[i..n−1]), and that a factor of x is a proper factor if it is not x itself. A factor of x that is neither a prefix nor a suffix of x is called an infix of x.
Let y be a word of length m, with 0 < m ≤ n. We say that there exists an occurrence of y in x, or, more simply, that y occurs in x, when y is a factor of x. Every occurrence of y can be characterised by a starting position in x. Thus we say that y occurs at the starting position i in x when y = x[i..i+m−1]. Further let f(y) denote the observed frequency, that is, the number of occurrences of y in word x. If f(y) = 0 for some word y, then y is called absent; otherwise, y is called occurring.
By f(y_p), f(y_s), and f(y_i) we denote the observed frequency of the longest proper prefix y_p, suffix y_s, and infix y_i of y in x, respectively. We can now define the expected frequency of word y, with |y| > 2, in x as in Brendel et al. [brendel1986linguistics]:
E(y) = f(y_p) × f(y_s) / f(y_i), if f(y_i) > 0; otherwise E(y) = 0.    (1)
The above definition can be explained intuitively as follows. Suppose we are given f(y_p), f(y_s), and f(y_i). Given an occurrence of y_i in x, the probability of it being preceded by y[0] is f(y_p)/f(y_i), as y[0] precedes exactly f(y_p) of the f(y_i) occurrences of y_i. Similarly, this occurrence of y_i is also an occurrence of y_s with probability f(y_s)/f(y_i). Although these two events are not always independent, the product (f(y_p)/f(y_i)) × (f(y_s)/f(y_i)) gives a good approximation of the probability that an occurrence of y_i at position i + 1 implies an occurrence of y at position i. It can be seen then that by multiplying this product by the number f(y_i) of occurrences of y_i we get the above formula for the expected frequency of y.
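For intuition, Equation (1) can be evaluated by direct counting. The sketch below is ours (the function names are illustrative, and the quadratic-time counting is for exposition only; the algorithm presented later obtains these frequencies from the suffix tree in constant time per query):

```python
def occ(x, w):
    # Observed frequency f(w): number of (possibly overlapping)
    # occurrences of w in x.
    return sum(1 for i in range(len(x) - len(w) + 1) if x[i:i + len(w)] == w)

def expected_frequency(x, w):
    # E(w) = f(w_p) * f(w_s) / f(w_i), where w_p, w_s, w_i are the longest
    # proper prefix, suffix and infix of w (Equation (1)); E(w) = 0 when
    # f(w_i) = 0.
    fp, fs, fi = occ(x, w[:-1]), occ(x, w[1:]), occ(x, w[1:-1])
    return fp * fs / fi if fi > 0 else 0.0
```

For instance, in x = AAAB the word AAB has observed frequency f(AAB) = 1, while E(AAB) = 2 × 1 / 3 ≈ 0.67.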
Moreover, to measure the deviation of the observed frequency of a word y from its expected frequency in x, we define the standard deviation of y as:
std(y) = (f(y) − E(y)) / max{√E(y), 1}.    (2)
For more details on the biological justification of these definitions see [brendel1986linguistics].
Using the above definitions and a given threshold, we are in a position to classify a word y as either avoided or common in x. In particular, for a given threshold ρ < 0, a word y is called ρ-avoided if std(y) ≤ ρ. In this article, we consider the following computational problem.
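Putting Equations (1) and (2) together, the classification can be sketched as follows (again by naive counting; names are ours, not the paper's implementation):

```python
import math

def occ(x, w):
    # Observed frequency of w in x (overlapping occurrences counted).
    return sum(1 for i in range(len(x) - len(w) + 1) if x[i:i + len(w)] == w)

def std_dev(x, w):
    # std(w) = (f(w) - E(w)) / max{sqrt(E(w)), 1}  (Equation (2)).
    fi = occ(x, w[1:-1])
    E = occ(x, w[:-1]) * occ(x, w[1:]) / fi if fi > 0 else 0.0
    return (occ(x, w) - E) / max(math.sqrt(E), 1.0)

def is_avoided(x, w, rho):
    # w is rho-avoided in x if std(w) <= rho, for a threshold rho < 0;
    # note that w itself may be absent from x.
    return std_dev(x, w) <= rho
```

For example, in x = AAAABBBB the absent word BA has E(BA) = 4 × 4 / 9 and std(BA) = −4/3, so BA is ρ-avoided for ρ = −1.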
AvoidedWordsComputation
Input: A word x of length n, an integer k > 2, and a real number ρ < 0
Output: All ρ-avoided words of length k in x
2.2 Suffix Trees
In our algorithm, suffix trees are used extensively as computational tools. For a general introduction to suffix trees, see [CHL07].
The suffix tree T(x) of a non-empty word x of length n is a compact trie representing all suffixes of x. The nodes of the trie which become nodes of the suffix tree are called explicit nodes, while the other nodes are called implicit. Each edge of the suffix tree can be viewed as an upward maximal path of implicit nodes starting with an explicit node. Moreover, each node belongs to a unique path of that kind. Then, each node of the trie can be represented in the suffix tree by the edge it belongs to and an index within the corresponding path.
We use L(v) to denote the path-label of a node v, i.e., the concatenation of the edge labels along the path from the root to v. We say that v is path-labelled L(v). Additionally, D(v) = |L(v)| is used to denote the word-depth of node v. Node v is a terminal node if and only if L(v) = x[i..n−1] for some 0 ≤ i < n; here v is also labelled with index i. It should be clear that each occurring word in x is uniquely represented by either an explicit or an implicit node of T(x). The suffix-link of a node v with path-label L(v) = αy is a pointer to the node path-labelled y, where α ∈ Σ is a single letter and y is a word. The suffix-link of v exists if v is a non-root internal node of T(x).
In any standard implementation of the suffix tree, we assume that each node of the suffix tree is able to access its parent. Note that, once T(x) is constructed, it can be traversed in a depth-first manner to compute the word-depth D(v) for each node v. Let u be the parent of v. Then the word-depth D(v) is computed by adding D(u) to the length of the label of edge (u, v). If v is the root then D(v) = 0. Additionally, a depth-first traversal of T(x) allows us to count, for each node v, the number of terminal nodes in the subtree rooted at v, denoted by C(v), as follows. When an internal node v is visited, C(v) is computed by adding up C(u) of all the nodes u such that u is a child of v, and then C(v) is incremented by 1 if v itself is a terminal node. If a node v is a leaf then C(v) = 1.
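The role of C(v) can be illustrated on an uncompacted suffix trie, an O(n²)-space stand-in for the suffix tree that we use here purely for exposition: the number of suffixes passing through a node equals the observed frequency of its (non-empty) path-label.

```python
def build_suffix_trie(x):
    # Naive suffix trie: one nested dict per node, plus a counter that
    # plays the role of C(v), the number of terminal nodes below v.
    root = {"children": {}, "count": 0}
    for i in range(len(x)):
        node = root
        for letter in x[i:]:
            node = node["children"].setdefault(
                letter, {"children": {}, "count": 0})
            node["count"] += 1  # one more suffix passes through this node
    return root

def freq(root, w):
    # Observed frequency of a non-empty word w, read off in O(|w|) time.
    node = root
    for letter in w:
        if letter not in node["children"]:
            return 0  # w is absent from x
        node = node["children"][letter]
    return node["count"]
```

The word-depth D(v) is simply the depth of the node in this trie; the real algorithm obtains both quantities on the compacted suffix tree in O(n) total time.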
3 Useful Properties
In this section, we provide some useful insights of computational nature which were not considered by Brendel et al. [brendel1986linguistics]. By the definition of ρ-avoided words it follows that a word w may be avoided even if it is absent from x. In other words, std(w) ≤ ρ may hold for either f(w) > 0 (occurring) or f(w) = 0 (absent).
This means that a naïve computation should consider all σ^k possible words. Then for each possible word w, the value of std(w) can be computed via pattern matching on the suffix tree. In particular we can search for the occurrences of w, w_p, w_s, and w_i in x in time O(k) [CHL07]. In order to avoid this inefficient computation, we exploit the following crucial lemmas.
Definition 1 ([Barton2014])
An absent word w of x is minimal if and only if all its proper factors occur in x.
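For intuition, minimal absent words can be enumerated naively from this definition: it suffices to check the longest proper prefix and suffix, since every proper factor of w is a factor of one of them. The sketch below is ours and exponential in the word length (the actual computation uses the linear-time algorithm of [Barton2014]); for simplicity it only considers candidates of length at least 2.

```python
from itertools import product

def minimal_absent_words(x, alphabet, max_len):
    # w is a minimal absent word of x iff w is absent but w[:-1] and
    # w[1:] (and hence all proper factors of w) occur in x.
    maws = []
    for m in range(2, max_len + 1):
        for letters in product(alphabet, repeat=m):
            w = "".join(letters)
            if w not in x and w[:-1] in x and w[1:] in x:
                maws.append(w)
    return maws
```

For example, the minimal absent words of x = AAB of length at most 3 over {A, B} are BA, BB and AAA.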
Lemma 1
Any absent ρ-avoided word w in x is a minimal absent word of x.
Proof
For w to be a ρ-avoided word it must hold that
std(w) = (f(w) − E(w)) / max{√E(w), 1} ≤ ρ < 0.
This implies that f(w) − E(w) < 0, which in turn implies that E(w) > 0 since f(w) = 0. From E(w) = f(w_p) × f(w_s) / f(w_i) > 0, we conclude that f(w_p) > 0 and f(w_s) > 0 must hold. Since f(w_p) > 0, f(w_s) > 0, and every proper factor of w is a factor of w_p or w_s, w is a minimal absent word of x: all proper factors of w occur in x. ∎
Lemma 2
Let w be a word occurring in x and let T(x) be the suffix tree of x. Then, if w_p is a path-label of an implicit node of T(x), std(w) ≥ 0.
Proof
Since w occurs in x it holds that f(w_s) ≤ f(w_i) and, hence, by the definition of E(w), E(w) ≤ f(w_p). Furthermore, by the definition of the suffix tree, since w occurs in x and w_p is a path-label of an implicit node, then f(w_p) = f(w). It thus follows that f(w) − E(w) ≥ 0, and since max{√E(w), 1} ≥ 1 > 0, the claim holds. ∎
Lemma 3
The total number of ρ-avoided words of length k in a word of length n over an alphabet of size σ is bounded from above by O(σn); in particular, this number is no more than (σ + 1)n − k + 1.
Proof
By Lemma 1, every ρ-avoided word is either occurring or a minimal absent word. It is known that the total number of minimal absent words in x is smaller than or equal to σn [Mignosi02]. Clearly, the occurring ρ-avoided words in x are at most n − k + 1. Therefore the lemma holds. ∎
Example 1
Consider the word . Fig. 1 represents the suffix tree . Note that word GCG is represented by the explicit internal node ; whereas word TCT is represented by the implicit node along the edge connecting the node labelled 15 and the node labelled 9. Consider node in ; we have that , , and .
Example 2
Consider the word from Example 1, , and .

word , at position 7 of , is an occurring avoided word:

word is an absent avoided word:
4 Avoided Words Algorithm
In this section, we present Algorithm AvoidedWords for computing all ρ-avoided words of length k in a given word x. The algorithm builds the suffix tree T(x) for word x, and then prepares T(x) to allow constant-time observed frequency queries. This is mainly achieved by counting the terminal nodes in the subtree rooted at node v for every node v of T(x). Additionally, during this preprocessing, the algorithm computes the word-depth of v for every node v of T(x). By Lemma 1, ρ-avoided words are classified as either occurring or minimal absent; therefore Algorithm AvoidedWords calls Routines AbsentAvoidedWords and OccurringAvoidedWords to compute both classes of ρ-avoided words in x. The outline of Algorithm AvoidedWords is as follows.
AvoidedWords(x, k, ρ)
1   T(x) ← SuffixTree(x)
2   for each node v of T(x) do
3       D(v) ← word-depth of v
4       C(v) ← number of terminal nodes in the subtree rooted at v
5   AbsentAvoidedWords(x, k, ρ)
6   OccurringAvoidedWords(x, k, ρ)
4.1 Computing Absent Avoided Words
In Lemma 1, we showed that each absent ρ-avoided word is a minimal absent word. Thus, Routine AbsentAvoidedWords starts by computing all minimal absent words in x; this can be done in time and space O(n) for a fixed-sized alphabet or in time O(σn) for integer alphabets [Barton2014]. Let <(i, j), α> be a tuple representing a minimal absent word in x: for some minimal absent word w of length |w| = j − i + 2, we have w = x[i..j]α, where α ∈ Σ. Notice that this representation is unique.
AbsentAvoidedWords(x, k, ρ)
1   A ← MinimalAbsentWords(x)
2   for each tuple <(i, j), α> in A such that j − i + 2 = k do
3       u_p ← Node(i, j)
4       if IsImplicit(u_p) then
5           (u, v) ← Edge(u_p); f_p ← C(v)
6       else
7           f_p ← C(u_p)
8       u_i ← Node(i + 1, j)
9       if IsImplicit(u_i) then
10          (u, v) ← Edge(u_i); f_i ← f_s ← C(v)
11      else
12          f_i ← C(u_i)
13          u_s ← Child(u_i, α); f_s ← C(u_s)
14      E ← f_p × f_s / f_i
15      if (0 − E) / max{√E, 1} ≤ ρ then
16          Report(x[i..j]α)
Intuitively, the idea is to check the length of every minimal absent word. If a tuple <(i, j), α> represents a minimal absent word w of length k = j − i + 2, then the value of std(w) is computed to determine whether w is an absent ρ-avoided word. Note that, if w is a minimal absent word, then w_p, w_s, and w_i occur in x by Definition 1. Thus, there are three (implicit or explicit) nodes in T(x) path-labelled w_p, w_s, and w_i, respectively. The observed frequencies of w_p, w_s, and w_i are already computed during the preprocessing of C(v), which stores the number of terminal nodes in the subtree rooted at v, for each node v of T(x).
Notice that for an explicit node v path-labelled w, the value C(v) represents the number of occurrences (observed frequency) of w in x; whereas for an implicit node along the edge (u, v) path-labelled w, the number of occurrences of w is equal to C(v) (and not C(u)). The implementation of this procedure is given in Routine AbsentAvoidedWords.
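Stripped of the suffix-tree machinery, the check performed by Routine AbsentAvoidedWords can be sketched as follows (our naive counting stands in for the constant-time C(·) queries):

```python
import math

def occ(x, w):
    # Observed frequency of w in x (overlapping occurrences counted).
    return sum(1 for i in range(len(x) - len(w) + 1) if x[i:i + len(w)] == w)

def absent_avoided_words(x, maws, k, rho):
    # maws: the minimal absent words of x. For each one of length k,
    # w_p, w_s and w_i all occur in x (Definition 1), so E(w) > 0
    # while f(w) = 0, exactly as in the routine above.
    out = []
    for w in maws:
        if len(w) != k:
            continue
        E = occ(x, w[:-1]) * occ(x, w[1:]) / occ(x, w[1:-1])
        if (0 - E) / max(math.sqrt(E), 1.0) <= rho:
            out.append(w)
    return out
```

For example, with x = AAB (whose minimal absent words of length at most 3 are BA, BB and AAA), the word AAA has E(AAA) = 1 × 1 / 2 = 0.5 and std(AAA) = −0.5.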
4.2 Computing Occurring Avoided Words
Lemma 2 suggests that for each occurring ρ-avoided word w, w_p is a path-label of an explicit node v of T(x). Thus, for each internal node v such that D(v) = k − 1 and L(v) = w_p, Routine OccurringAvoidedWords computes std(w), where w = L(v)α is a path-label of a child (explicit or implicit) node of v. Note that if w_p is a path-label of an explicit node v then w_i is a path-label of an explicit node of T(x); this node is well-defined and it is the node pointed at by the suffix-link of v. The implementation of this procedure is given in Routine OccurringAvoidedWords.
OccurringAvoidedWords(x, k, ρ)
1   N ← an empty stack
2   Push(N, root(T(x)))
3   while N is not empty do
4       u ← Pop(N)
5       if u is not labelled as discovered then
6           label u as discovered
7           for each edge (u, v) of T(x) do
8               if D(v) < k − 1 then
9                   Push(N, v)
10              else if D(v) = k − 1 then
11                  f_p ← C(v); f_i ← C(suffix-link[v])
12                  for each child v' of v do
13                      f_w ← C(v')
14                      α ← L(v')[k − 1]
15                      f_s ← C(Child(suffix-link[v], α))
16                      E ← f_p × f_s / f_i
17                      if (f_w − E) / max{√E, 1} ≤ ρ then
18                          Report(L(v')[0..k − 1])
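Equivalently, since every occurring candidate w is a factor of x whose prefix w_p labels an explicit node of word-depth k − 1, the routine can be sketched by enumerating the distinct length-k factors of x directly (a naive stand-in, ours, for the C(·) and suffix-link queries):

```python
import math

def occ(x, w):
    # Observed frequency of w in x (overlapping occurrences counted).
    return sum(1 for i in range(len(x) - len(w) + 1) if x[i:i + len(w)] == w)

def occurring_avoided_words(x, k, rho):
    # Each distinct factor w of length k corresponds to a child (explicit
    # or implicit) of a suffix-tree node of word-depth k - 1; f_p, f_i,
    # f_s and f_w are obtained here by direct counting.
    out = set()
    for i in range(len(x) - k + 1):
        w = x[i:i + k]
        f_p, f_s, f_i = occ(x, w[:-1]), occ(x, w[1:]), occ(x, w[1:-1])
        E = f_p * f_s / f_i
        if (occ(x, w) - E) / max(math.sqrt(E), 1.0) <= rho:
            out.add(w)
    return sorted(out)
```

For example, in x = AAAABBBB the factor AB occurs once while E(AB) = 4 × 4 / 9 ≈ 1.78, giving std(AB) ≈ −0.58, so AB is the only occurring ρ-avoided word of length 2 for ρ = −0.5.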
4.3 Algorithm Analysis
Lemma 4
Given a word x, an integer k > 2, and a real number ρ < 0, Algorithm AvoidedWords computes all ρ-avoided words of length k in x.
Proof
By definition, a ρ-avoided word is either an absent ρ-avoided word or an occurring one. Hence, the proof of correctness relies on Lemma 1 and Lemma 2. First, Lemma 1 indicates that an absent ρ-avoided word in x is necessarily a minimal absent word. Routine AbsentAvoidedWords considers each minimal absent word w and verifies if w is a ρ-avoided word of length k.
Second, Lemma 2 indicates that for each occurring ρ-avoided word w, w_p is a path-label of an explicit node v of T(x). Routine OccurringAvoidedWords considers each child of such a node at word-depth k, and verifies if its path-label is a ρ-avoided word. ∎
Lemma 5
Given a word x of length n over a fixed-sized alphabet, an integer k > 2 and a real number ρ < 0, Algorithm AvoidedWords requires time and space O(n); for integer alphabets, it requires time O(σn).
Proof
Constructing the suffix tree T(x) of the input word x takes time and space O(n) for a word over a fixed-sized alphabet [CHL07]. Once the suffix tree is constructed, computing arrays D and C by traversing T(x) requires time and space O(n). Note that the path-labels of the nodes of T(x) can be implemented in time and space O(n) as follows: traverse the suffix tree to compute for each node v the smallest index i of the terminal nodes of the subtree rooted at v. Then L(v) = x[i..i+D(v)−1].
Next, Routine AbsentAvoidedWords requires time O(n). It starts by computing all minimal absent words of x, which can be achieved in time and space O(n) over a fixed-sized alphabet [Barton2014]. The rest of the procedure deals with checking each of the O(n) minimal absent words of length k. Checking each minimal absent word w to determine whether it is a ρ-avoided word or not requires time O(1). In particular, an O(n)-time preprocessing of T(x) allows the retrieval of the (implicit or explicit) node in T(x) corresponding to the longest proper prefix of w in time O(1) [GawrychowskiLN14]. Finally, Routine OccurringAvoidedWords requires time O(n). It traverses the suffix tree to locate all explicit nodes of word-depth k − 1. Then for each such node, the procedure checks every (explicit or implicit) child of word-depth k. The total number of these children is at most n − k + 1. For every child node, the procedure checks whether its path-label is a ρ-avoided word in time O(1).
For integer alphabets, the suffix tree can be constructed in time O(n) [farach1997optimal] and all minimal absent words can be computed in time O(σn) [Barton2014]. The efficiency of Algorithm AvoidedWords is then limited by the total number of words to be considered, which, by Lemma 3, is bounded from above by O(σn). ∎
Theorem 4.1
Algorithm AvoidedWords solves Problem AvoidedWordsComputation in time and space O(n). For integer alphabets, the algorithm solves the problem in time O(σn).
4.4 Optimal Computation of all Avoided Words
Although the biological motivation for this is yet to be shown, we show here how we can modify Algorithm AvoidedWords so that it computes all ρ-avoided words (of all lengths) in a given word x of length n over an alphabet of size σ in O(σn) time and space. We further show that this algorithm is in fact time-optimal.
Lemma 6
The upper bound O(σn) on the number of minimal absent words of a word of length n over an alphabet of size σ is tight if 2 ≤ σ ≤ n.
Proof
Let σ = 2, i.e. Σ = {a, b}, and consider the word x = b a^{n−2} b of length n. All words of the form b a^j b, for 0 ≤ j < n − 2, are minimal absent words in x. Hence x has at least n − 2 = Θ(σn) minimal absent words.
Let with , and consider the word , where . Note that . Further note that is a factor of for all and . Similarly, is a factor of for all and . Thus all proper factors of all the strings in set occur in . The only strings in though that occur in are the ones of the form , for all . Hence has at least minimal absent words. ∎
Lemma 7
The total number of ρ-avoided words in a word of length n over an alphabet of size σ is bounded from above by O(σn) and this bound is tight.
Proof
By Lemma 1, every ρ-avoided word is either occurring or a minimal absent word. The set of occurring ρ-avoided words in x can be injected into the set of explicit nodes of T(x) by Lemma 2. It is well known that the number of explicit nodes of T(x) is O(n) [CHL07] (at most 2n) and hence it follows that the number of occurring ρ-avoided words is O(n). Furthermore it is known that the total number of minimal absent words in x is O(σn) [Mignosi02]. Hence the number of ρ-avoided words is bounded from above by O(σn). Based on Lemma 6, we know that for any alphabet of size σ there exist words with Θ(σn) minimal absent words. Consider such a word x and ρ = −1/n. Then every minimal absent word w is ρ-avoided since for any such word E(w) ≥ 1/n, and hence std(w) ≤ −1/n = ρ. Thus the bound is attainable. ∎
It is clear that if we just remove the condition j − i + 2 = k on the length of each minimal absent word in Line 2 of AbsentAvoidedWords we then compute all absent ρ-avoided words in time and space O(σn). In order to compute all occurring ρ-avoided words in x it suffices by Lemma 2 to investigate the children of explicit nodes. We can thus traverse the suffix tree and for each explicit internal node, check for all of its children (explicit or implicit) whether their path-label is a ρ-avoided word. We can do this in time O(1) per child as above. The total number of these children is at most 2n − 2, as this is the bound on the number of edges of T(x) [CHL07]. This modified algorithm is clearly time-optimal for fixed-sized alphabets as it then runs in time and space O(n). The time optimality for integer alphabets follows directly from Lemmas 6 and 7. Hence we obtain the following result.
Theorem 4.2
Given a word x of length n over an integer alphabet of size σ and a real number ρ < 0, all ρ-avoided words in x can be computed in time and space O(σn). This is time-optimal if σ ≤ n.
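The all-lengths variant can be mimicked naively as follows, combining the two candidate classes of Lemma 1 (this sketch, ours, is exponential in the worst case, unlike the O(σn)-time algorithm, but is convenient for cross-checking on short inputs):

```python
import math
from itertools import product

def occ(x, w):
    # Observed frequency of w in x (overlapping occurrences counted).
    return sum(1 for i in range(len(x) - len(w) + 1) if x[i:i + len(w)] == w)

def std_dev(x, w):
    fi = occ(x, w[1:-1])
    E = occ(x, w[:-1]) * occ(x, w[1:]) / fi if fi > 0 else 0.0
    return (occ(x, w) - E) / max(math.sqrt(E), 1.0)

def all_avoided_words(x, alphabet, rho):
    out = set()
    # Occurring candidates: every distinct factor of x of length >= 2.
    for i in range(len(x)):
        for j in range(i + 2, len(x) + 1):
            if std_dev(x, x[i:j]) <= rho:
                out.add(x[i:j])
    # Absent candidates: minimal absent words only (Lemma 1); the
    # longest one has length at most n + 1.
    for m in range(2, len(x) + 2):
        for letters in product(alphabet, repeat=m):
            w = "".join(letters)
            if (w not in x and w[:-1] in x and w[1:] in x
                    and std_dev(x, w) <= rho):
                out.add(w)
    return sorted(out)
```

For x = AAAABBBB and ρ = −0.5 this yields the occurring word AB together with the minimal absent words BA, AAAAA and BBBBB.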
Lemma 8
The expected length of the longest ρ-avoided word in a word x of length n over an alphabet of size σ is O(log_σ n) when the letters are independent and identically distributed random variables, uniformly distributed over Σ.
Proof
By Lemma 2, the length of the longest occurring ρ-avoided word is bounded above by the word-depth of the deepest internal explicit node in T(x) incremented by 1. We note that the greatest word-depth of an internal node corresponds to the longest repeated factor in word x. Moreover, for a word w to be a minimal absent word, w_i must appear at least twice in x (in the occurrences of w_p and w_s). Hence the length of the longest ρ-avoided word is bounded by the length of the longest repeated factor in x incremented by 2. The expected length of the longest repeated factor in a word x is known to be O(log_σ n) [SA] and hence the lemma follows. ∎
5 Implementation and Experimental Results
Algorithm AvoidedWords was implemented as a program to compute the ρ-avoided words of length k in one or more input sequences. The program was implemented in the C++ programming language and developed under a GNU/Linux operating system. The input parameters are a (Multi)FASTA file with the input sequence(s), an integer k > 2, and a real number ρ < 0. The output is a file with the set of ρ-avoided words of length k per input sequence. The implementation is distributed under the GNU General Public License, and it is available at http://github.com/solonas13/aw. The experiments were conducted on a Desktop PC using one core of an Intel Core i5-4690 CPU at 3.50GHz under GNU/Linux. The programme was compiled with g++ version 4.8.4 at optimisation level 3 (-O3). We also implemented a brute-force approach for the computation of ρ-avoided words. We mainly used it to confirm the correctness of our implementation. Here we do not plot the results of the brute-force approach as it is easily understood that it is orders of magnitude slower than our approach.
To evaluate the time performance of our implementation, synthetic DNA (σ = 4) and proteins (σ = 20) data were used. The input sequences were generated using a randomised script. In the first experiment, our task was to establish that the performance of the program does not essentially depend on k and ρ; i.e., the elapsed time of the program remains unchanged up to some constant with increasing values of k and decreasing values of ρ. As input datasets, for this experiment, we used a DNA and a proteins sequence, both of length 1M (1 million letters). For each sequence we used different values of k and ρ. The results for elapsed time are plotted in Fig. 4. It becomes evident from the results that the time performance of the program remains unchanged up to some constant. The longer time required for the proteins sequences for small values of k is explained by the increased number of branching nodes at this depth in the corresponding suffix tree due to the size of the alphabet (σ = 20). To confirm this, we counted the number of nodes considered by the algorithm to compute the ρ-avoided words for both sequences; the number of considered nodes was indeed larger for the proteins sequence than for the DNA sequence.
In the second experiment, our task was to establish that the elapsed time and memory usage of the program grow linearly with n, the length of the input sequence. As input datasets, for this experiment, we used synthetic DNA and proteins sequences of increasing length. For each sequence we used constant values for k and ρ. The results, for elapsed time and peak memory usage, are plotted in Fig. 7. It becomes evident from the results that the elapsed time and memory usage of the program grow linearly with n. The longer time required for the proteins sequences compared to the DNA sequences for increasing n is explained by the increased number of branching nodes at this depth in the corresponding suffix tree due to the size of the alphabet (σ = 20). To confirm this, we counted the number of nodes considered by the algorithm to compute the ρ-avoided words for both the DNA and the proteins sequence; the number of considered nodes was again larger for the proteins sequence.
In the next experiment, our task was to evaluate the time and memory performance of our implementation with real data. As input datasets, for this experiment, we used all chromosomes of the human genome, whose lengths range from that of chromosome 21 (the shortest) to that of chromosome 1 (the longest). For each sequence we used constant values for k and ρ. The results, for elapsed time and peak memory usage, are plotted in Fig. 10. The results with real data confirm that the elapsed time and memory usage of the program grow linearly with n.
Real Application.
We computed the set of ρ-avoided words for k = 6 (hexamers) and a fixed threshold ρ < 0 in the complete genome of E. coli and sorted the output in increasing order of standard deviation. The most avoided words were extremely enriched in self-complementary (palindromic) hexamers. In particular, within the output of 28 avoided words, 23 were self-complementary; and the 17 most avoided ones were all self-complementary. For comparison, we computed the set of ρ-avoided words for the same k and ρ from a eukaryotic sequence: a segment of human chromosome 21 (its leftmost segment devoid of N's) equal in length to the E. coli genome. In the output of 10 avoided words, no self-complementary hexamer was found. Our results confirm that the restriction endonucleases which target self-complementary sites are not found in eukaryotic sequences [Rusinov2015].
Our immediate target is to investigate the avoidance of words in Genomic Regulatory Blocks (GRBs) [Akalin2009] within the same organism and across evolution.