# Tight Conditional Lower Bounds for Longest Common Increasing Subsequence

###### Abstract

We consider the canonical generalization of the well-studied Longest Increasing Subsequence problem to multiple sequences, called $k$-LCIS: Given $k$ integer sequences of length at most $n$, the task is to determine the length of the longest common subsequence of the $k$ sequences that is also strictly increasing. Especially for the case of $k = 2$ (called LCIS for short), several algorithms have been proposed that require quadratic time in the worst case.

Assuming the Strong Exponential Time Hypothesis (SETH), we prove a tight lower bound, specifically, that no algorithm solves LCIS in (strongly) subquadratic time, i.e., in time $O(n^{2-\varepsilon})$ for any constant $\varepsilon > 0$. Interestingly, the proof makes no use of normalization tricks common to hardness proofs for similar problems such as LCS. We further strengthen this lower bound (1) to rule out $O((nL)^{1-\varepsilon})$ time algorithms for LCIS, where $L$ denotes the solution size, (2) to rule out $O(n^{k-\varepsilon})$ time algorithms for $k$-LCIS, and (3) to follow already from weaker variants of SETH. We obtain the same conditional lower bounds for the related Longest Common Weakly Increasing Subsequence problem.

## 1 Introduction

The longest common subsequence problem (LCS) and its variants are computational primitives with a variety of applications, which includes, e.g., uses as similarity measures for spelling correction [37, 43] or DNA sequence comparison [39, 5], as well as determining the differences of text files as in the UNIX diff utility [28]. LCS shares characteristics of both an easy and a hard problem: (Easy) A simple and elegant dynamic-programming algorithm computes an LCS of two length-$n$ sequences in time $O(n^2)$ [43], and in many practical settings, certain properties of typical input sequences can be exploited to obtain faster, “tailored” solutions (e.g., [27, 29, 7, 38]; see also [14] for a survey). (Hard) At the same time, no polynomial improvements over the classical solution are known, thus exact computation may become infeasible for very long general input sequences. The research community has sought a resolution of the question “Do subquadratic algorithms for LCS exist?” since shortly after the formalization of the problem [21, 4].
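The classical quadratic-time dynamic program just mentioned can be sketched as follows:

```python
def lcs(x, y):
    """Classic O(|x| * |y|) dynamic program for the longest common subsequence.

    dp[i][j] = length of an LCS of the prefixes x[:i] and y[:j].
    """
    dp = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
    for i, xi in enumerate(x):
        for j, yj in enumerate(y):
            if xi == yj:
                dp[i + 1][j + 1] = dp[i][j] + 1
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    return dp[len(x)][len(y)]

assert lcs("ABCBDAB", "BDCABA") == 4  # e.g. "BCBA"
```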

Recently, an answer conditional on the Strong Exponential Time Hypothesis (SETH; see Section 2 for a definition) could be obtained: Based on a line of research relating the satisfiability problem to quadratic-time problems [44, 41, 15, 3] and following a breakthrough result for Edit Distance [9], it has been shown that unless SETH fails, there is no (strongly) subquadratic-time algorithm for LCS [1, 16]. Subsequent work [2] strengthens these lower bounds to hold already under weaker assumptions and even provides surprising consequences of sufficiently strong polylogarithmic improvements.

Due to its popularity and wide range of applications, several variants of LCS have been proposed. This includes the heaviest common subsequence (HCS) [32], which introduces weights to the problem, as well as notions that constrain the structure of the solution, such as the longest common increasing subsequence (LCIS) [46], LCSk [13], constrained LCS [42, 20, 8], restricted LCS [26], and many other variants (see, e.g., [19, 6, 33]). Most of these variants are (at least loosely) motivated by biological sequence comparison tasks. To the best of our knowledge, in the above list, LCIS is the only LCS variant for which (1) the best known algorithms run in quadratic time in the worst case and (2) its definition does not include LCS as a special case (for such generalizations of LCS, the quadratic-time SETH hardness of LCS [1, 16] would transfer immediately). As such, it is open to determine whether there are (strongly) subquadratic algorithms for LCIS or whether such algorithms can be ruled out under SETH. The starting point of our work is to settle this question.

### 1.1 Longest Common Increasing Subsequence (LCIS)

The Longest Common Increasing Subsequence problem on $k$ sequences ($k$-LCIS) is defined as follows: Given integer sequences $X_1, \dots, X_k$ of length at most $n$, determine the length of the longest sequence $Z$ such that $Z$ is a strictly increasing sequence of integers and $Z$ is a subsequence of each $X_i$. For $k = 1$, we obtain the well-studied longest increasing subsequence problem (LIS; we refer to [22] for an overview), which has an $O(n \log n)$ time solution and a matching lower bound in the decision tree model [25]. The extension to $k = 2$, denoted simply as LCIS, has been proposed by Yang, Huang, and Chao [46], partially motivated as a generalization of LIS and by potential applications in bioinformatics. They obtained an $O(n^2)$ time algorithm, leaving open the natural question whether there exists a way to extend the near-linear time solution for LIS to a near-linear time solution for multiple sequences.
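For contrast with the quadratic algorithms discussed below, the near-linear time LIS solution (patience sorting) can be sketched as:

```python
import bisect

def lis_length(seq):
    """O(n log n) longest strictly increasing subsequence (patience sorting).

    tails[i] holds the smallest possible last element of an increasing
    subsequence of length i + 1 seen so far.
    """
    tails = []
    for x in seq:
        i = bisect.bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

assert lis_length([3, 1, 4, 1, 5, 9, 2, 6]) == 4  # e.g. 3, 4, 5, 9
```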

Interestingly, a classic connection between LCS and LIS, combined with a recent conditional lower bound of Abboud, Backurs and Vassilevska Williams [1], already yields a partial negative answer assuming SETH.

###### Observation 1 (Folklore reduction, implicit in [29], explicit in [32]).

After $O(k \cdot n \log n)$ time preprocessing, we can solve $k$-LCS by a single call to $(k-1)$-LCIS on sequences of length at most $n^2$.

###### Proof.

Let $\pi_\sigma$ denote the descending sequence of positions $i$ with $X_1[i] = \sigma$. We define sequences $Z_j = \pi_{X_j[1]} \circ \pi_{X_j[2]} \circ \cdots \circ \pi_{X_j[n]}$ for all $j \in \{2, \dots, k\}$. It is straightforward to see that for any $\ell$, the length-$\ell$ increasing common subsequences of $Z_2, \dots, Z_k$ are in one-to-one correspondence to length-$\ell$ common subsequences of $X_1, \dots, X_k$. Thus, the length of the LCIS of $Z_2, \dots, Z_k$ is equal to the length of the LCS of $X_1, \dots, X_k$, and the claim follows since $|Z_j| \le n^2$ for all $j$. ∎
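For $k = 2$, the folklore reduction specializes to the classic LCS-to-LIS trick, which can be sketched as follows (one standard orientation of the reduction; the naming is ours):

```python
import bisect
from collections import defaultdict

def lis_length(seq):
    """Length of a longest strictly increasing subsequence, O(n log n)."""
    tails = []
    for x in seq:
        i = bisect.bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

def lcs_via_lis(X, Y):
    """LCS of two sequences via a single LIS call: list, for each element of X
    in order, the positions of its occurrences in Y in *descending* order;
    a strictly increasing subsequence of positions then corresponds exactly
    to a common subsequence of X and Y."""
    pos = defaultdict(list)
    for j, y in enumerate(Y):
        pos[y].append(j)
    Z = [j for x in X for j in reversed(pos[x])]
    return lis_length(Z)

assert lcs_via_lis("ABCBDAB", "BDCABA") == 4
```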

###### Corollary 2.

Unless SETH fails, there is no $O(n^{3/2 - \varepsilon})$ time algorithm for LCIS for any constant $\varepsilon > 0$.

###### Proof.

Note that by the above reduction, an $O(n^{3/2-\varepsilon})$ time LCIS algorithm would give an $O(N^{3-2\varepsilon})$ time algorithm for 3-LCS on sequences of length $N$. Such an algorithm would refute SETH by a result of Abboud et al. [1]. ∎

While this rules out near-linear time algorithms, a large and unsatisfying polynomial gap between the best upper bound and this conditional lower bound persists.

### 1.2 Our Results

Our first result is a tight SETH-based lower bound for LCIS.

###### Theorem 3.

Unless SETH fails, there is no $O(n^{2-\varepsilon})$ time algorithm for LCIS for any constant $\varepsilon > 0$.

We extend our main result in several directions.

#### 1.2.1 Parameterized Complexity I: Solution Size

Subsequent work [18, 35] improved over Yang et al.’s algorithm when certain input parameters are small. Here, we focus particularly on the solution size, i.e., the length $L$ of the LCIS. Kutz et al. [35] provided an algorithm running in time $O(nL \log\log n + n \log n)$. If $L$ is small compared to its worst-case upper bound of $n$, say $L = \Theta(\sqrt{n})$, this algorithm runs in strongly subquadratic time. Interestingly, exactly for this case, the reduction from 3-LCS to LCIS of Observation 1 already yields a matching SETH-based lower bound of $(nL)^{1-o(1)} = n^{3/2 - o(1)}$. However, for smaller $L$, this reduction yields no lower bound at all, and only a non-matching lower bound for larger $L$. We remedy this situation by the following result.¹

¹ We mention in passing that a systematic study of the complexity of LCS in terms of such input parameters has been performed recently in [17].

###### Theorem 4.

Unless SETH fails, there is no $O((nL)^{1-\varepsilon})$ time algorithm for LCIS for any constant $\varepsilon > 0$. This even holds restricted to instances with $L = \Theta(n^{\gamma})$, for arbitrarily chosen $0 < \gamma \le 1$.

#### 1.2.2 Parameterized Complexity II: $k$-LCIS

For constant $k$, we can solve $k$-LCIS in $O(n^k \operatorname{polylog} n)$ time [18, 35], or even $O(n^k)$ time (see the appendix). While it is known that $k$-LCS cannot be computed in time $O(n^{k-\varepsilon})$ for any constant $\varepsilon > 0$ unless SETH fails [1], this does not directly transfer to $k$-LCIS, since the reduction in Observation 1 is not tight. However, by extending our main construction, we can prove the analogous result.

###### Theorem 5.

Unless SETH fails, there is no $O(n^{k-\varepsilon})$ time algorithm for $k$-LCIS for any constant $k \ge 2$ and $\varepsilon > 0$.

#### 1.2.3 Longest Common Weakly Increasing Subsequence (LCWIS)

We consider a closely related variant of LCIS called the Longest Common Weakly Increasing Subsequence ($k$-LCWIS): Here, given $k$ integer sequences of length at most $n$, the task is to determine the length of the longest weakly increasing (i.e. non-decreasing) integer sequence that is a common subsequence of all $k$ sequences. Again, we write LCWIS as a shorthand for 2-LCWIS. Note that the seemingly small change in the notion of increasing sequence has a major impact on algorithmic and hardness results: Any instance of LCIS in which the input sequences are defined over a small-sized alphabet $\Sigma$, say of size $|\Sigma| = O(n^{1-\varepsilon})$, can be solved in strongly subquadratic time [35], by using the fact that $L \le |\Sigma|$. In contrast, LCWIS is quadratic-time SETH hard already over slightly superlogarithmic-sized alphabets [40]. We give a substantially different proof for this fact and generalize it to $k$-LCWIS.

###### Theorem 6.

Unless SETH fails, there is no $O(n^{k-\varepsilon})$ time algorithm for $k$-LCWIS for any constant $k \ge 2$ and $\varepsilon > 0$. This even holds restricted to instances defined over an alphabet of size $f(n) \cdot \log n$ for any function $f(n) = \omega(1)$ growing arbitrarily slowly.

#### 1.2.4 Strengthening the Hardness

In an attempt to strengthen the conditional lower bounds for Edit Distance and LCS [9, 1, 16], particularly, to obtain barriers even for subpolynomial improvements, Abboud, Hansen, Vassilevska Williams, and Williams [2] gave the first fine-grained reductions from the satisfiability problem on branching programs. Using this approach, the quadratic-time hardness of a problem can be explained by considerably weaker variants of SETH, making the conditional lower bound stronger. We show that our lower bounds also hold under these weaker variants. In particular, we prove the following.

###### Theorem 7.

There is no strongly subquadratic time algorithm for LCIS, unless there is, for some $\varepsilon > 0$, an $O(2^{(1-\varepsilon)N})$ time algorithm for the satisfiability problem on branching programs of width $W$ and length $T$ on $N$ variables with $(\log W)(\log T) = o(N)$.

### 1.3 Discussion, Outline and Technical Contributions

Apart from an interest in LCIS and its close connection to LCS, our work is also motivated by an interest in the optimality of dynamic programming (DP) algorithms.² Notably, many conditional lower bounds in $P$ target problems with natural DP algorithms that are proven to be near-optimal under some plausible assumption (see, e.g., [15, 3, 9, 10, 1, 16, 11, 23, 34] and [45] for an introduction to the field). Even if we restrict our attention to problems that find optimal sequence alignments under some restrictions, such as LCS, Edit Distance and LCIS, the currently known hardness proofs differ significantly, despite seemingly small differences between the problem definitions. Ideally, we would like to classify the properties of a DP formulation which allow for matching conditional lower bounds.

² We refer to [47] for a simple quadratic-time DP formulation for LCIS.
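A simple quadratic-time DP formulation for LCIS, of the kind referenced in the footnote, can be sketched as follows (a standard formulation with our own variable names, not necessarily the exact one of [47]):

```python
def lcis(X, Y):
    """Length of the longest common strictly increasing subsequence of X, Y.

    Runs in O(|X| * |Y|) time: L[j] holds the length of the best common
    increasing subsequence found so far that ends with the element Y[j].
    """
    L = [0] * len(Y)
    for x in X:
        best = 0  # best L[j] over positions j with Y[j] < x seen so far
        for j, y in enumerate(Y):
            if y == x:
                L[j] = max(L[j], best + 1)
            elif y < x:
                best = max(best, L[j])
    return max(L, default=0)

assert lcis([1, 2, 4, 3], [1, 2, 3, 4]) == 3  # e.g. 1, 2, 3
```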

One step in this direction is given by the alignment gadget framework [16]. Exploiting normalization tricks, this framework gives an abstract property of sequence similarity measures to allow for SETH-based quadratic lower bounds. Unfortunately, as it turns out, we cannot directly transfer the alignment gadget hardness proof for LCS to LCIS – some indication for this difficulty is already given by the fact that LCIS can be solved in strongly subquadratic time over sublinear-sized alphabets [35], while the LCS hardness proof already applies to binary alphabets. By collecting gadgetry needed to overcome such difficulties (that we elaborate on below), we hope to provide further tools to generalize more and more quadratic-time lower bounds based on SETH.

#### 1.3.1 Technical Challenges

The known conditional lower bounds for global alignment problems such as LCS and Edit Distance work as follows. The reductions start from the quadratic-time SETH-hard Orthogonal Vectors problem (OV), which asks to determine, given two sets $A = \{a_1, \dots, a_n\}$ and $B = \{b_1, \dots, b_n\}$ of $\{0,1\}$-vectors over $d = n^{o(1)}$ dimensions, whether there is a pair $(a, b) \in A \times B$ such that $a$ and $b$ are orthogonal, i.e., whose inner product is 0 (over the integers). Each vector $a_i$ and $b_j$ is represented by a (normalized) vector gadget $VG(a_i)$ and $VG(b_j)$, respectively. Roughly speaking, these gadgets are combined into sequences $X$ and $Y$ such that each candidate for an optimal alignment of $X$ and $Y$ involves locally optimal alignments between $\Theta(n)$ pairs $(VG(a_i), VG(b_j))$ – the optimal alignment exceeds a certain threshold if and only if there is an orthogonal pair $(a_i, b_j)$.

An analogous approach does not work for LCIS: Let $VG(a_i)$ be defined over an alphabet $\Sigma_i$ and $VG(a_{i'})$, $i \ne i'$, over an alphabet $\Sigma_{i'}$. If $\Sigma_i$ and $\Sigma_{i'}$ overlap, then $VG(a_i)$ and $VG(a_{i'})$ cannot both be aligned in an optimal alignment without interference with each other. On the other hand, if $\Sigma_i$ and $\Sigma_{i'}$ are disjoint, then each vector $b_j$ should have its corresponding vector gadget $VG(b_j)$ defined over both $\Sigma_i$ and $\Sigma_{i'}$ to enable $VG(b_j)$ to align with $VG(a_i)$ as well as with $VG(a_{i'})$. The latter option drastically increases the size of vector gadgets. Thus, we must define all vector gadgets over a common alphabet and make sure that only a single pair $(VG(a_i), VG(b_j))$ is aligned in an optimal alignment (in contrast with the $\Theta(n)$ pairs aligned in the previous reductions for LCS and Edit Distance).

#### 1.3.2 Technical Contributions and Proof Outline

Fortunately, a surprisingly simple approach works: As a key tool, we provide separator sequences $A = \alpha_1 \circ \cdots \circ \alpha_n$ and $B = \beta_1 \circ \cdots \circ \beta_n$ with the following properties: (1) for every $i, j$, the LCIS of $\alpha_1 \circ \cdots \circ \alpha_i$ and $\beta_1 \circ \cdots \circ \beta_j$ has a length of $f(i+j)$, where $f$ is a linear function, and (2) $|A|$ and $|B|$ are bounded by $O(n \log n)$. Note that existence of such a gadget is somewhat unintuitive: condition (1) for $i = 1$ and $j = n$ requires $|\alpha_1| \ge f(n+1) = \Omega(n)$, yet still the total length $|A|$ must not exceed the length of $\alpha_1$ significantly. Indeed, we achieve this by a careful inductive construction that generates such sequences with heavily varying block sizes $|\alpha_i|$ and $|\beta_j|$.

We apply these separator sequences as follows. We first define simple vector gadgets $VG(a_i)$ and $VG(b_j)$ over an alphabet $\Sigma$ such that the length of an LCIS of $VG(a_i)$ and $VG(b_j)$ is $d - a_i \cdot b_j$. Then we construct the separator sequences $A = \alpha_1 \circ \cdots \circ \alpha_n$ and $B = \beta_1 \circ \cdots \circ \beta_n$ as above over an alphabet whose elements are strictly smaller than all elements in $\Sigma$. Furthermore, we create analogous separator sequences $\bar{A} = \bar\alpha_1 \circ \cdots \circ \bar\alpha_n$ and $\bar{B} = \bar\beta_1 \circ \cdots \circ \bar\beta_n$ which satisfy a property like (1) for all suffixes instead of prefixes, using an alphabet whose elements are strictly larger than all elements in $\Sigma$. Now, we define

$$X = \alpha_1 \circ VG(a_1) \circ \bar\alpha_1 \circ \cdots \circ \alpha_n \circ VG(a_n) \circ \bar\alpha_n \quad\text{and}\quad Y = \beta_1 \circ VG(b_1) \circ \bar\beta_1 \circ \cdots \circ \beta_n \circ VG(b_n) \circ \bar\beta_n.$$

As we will show in Section 3, the length of an LCIS of $X$ and $Y$ is $c + \max_{i,j} (d - a_i \cdot b_j) = c + d - \min_{i,j} a_i \cdot b_j$ for some constant $c$ depending only on $n$ and $d$.

In contrast to previous OV-based lower bounds of this kind, we use heavily varying separators (paddings) between the vector gadgets.

## 2 Preliminaries

As a convention, we use capital or Greek letters to denote sequences over integers. Let $X, Y$ be integer sequences. We write $|X|$ for the length of $X$, $X[i]$ for the $i$-th element in the sequence ($1 \le i \le |X|$), and $X \circ Y$ for the concatenation of $X$ and $Y$. We say that $X$ is a subsequence of $Y$ if there exist indices $1 \le i_1 < i_2 < \cdots < i_{|X|} \le |Y|$ such that $X[j] = Y[i_j]$ for all $j$. Given any number of sequences $X_1, \dots, X_k$, we say that $Z$ is a common subsequence of $X_1, \dots, X_k$ if $Z$ is a subsequence of each $X_i$. A sequence $X$ is called strictly increasing (or weakly increasing) if $X[1] < X[2] < \cdots < X[|X|]$ (or $X[1] \le X[2] \le \cdots \le X[|X|]$). For any sequences $X_1, \dots, X_k$, we denote by $\mathrm{lcis}(X_1, \dots, X_k)$ the length of their longest common subsequence that is strictly increasing.

### 2.1 Hardness Assumptions

All of our lower bounds hold assuming the Strong Exponential Time Hypothesis (SETH), introduced by Impagliazzo and Paturi [30, 31]. It essentially states that no exponential speed-up over exhaustive search is possible for the CNF satisfiability problem.

###### Hypothesis 8 (Strong Exponential Time Hypothesis (SETH)).

There is no $\varepsilon > 0$ such that for all $k \ge 3$ there is an $O(2^{(1-\varepsilon)n})$ time algorithm for $k$-SAT.

This hypothesis implies tight hardness of the $k$-Orthogonal Vectors problem ($k$-OV), which will be the starting point of our reductions: Given sets $A_1, \dots, A_k \subseteq \{0,1\}^d$, each with $n$ vectors over $d$ dimensions, determine whether there is a $k$-tuple $(a_1, \dots, a_k) \in A_1 \times \cdots \times A_k$ such that $\sum_{t=1}^{d} \prod_{j=1}^{k} a_j[t] = 0$. By exhaustive enumeration, it can be solved in time $O(n^k \cdot d)$. The following conjecture is implied by SETH by the well-known split-and-list technique of Williams [44] (and the sparsification lemma [31]).
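The exhaustive-search baseline takes only a few lines; vectors are tuples, and a $k$-tuple is orthogonal iff every coordinate contains a zero (the function name is ours):

```python
from itertools import product

def k_ov(*sets):
    """Exhaustive-search k-OV: is there a tuple (a_1, ..., a_k), one vector
    per set, with sum_t prod_j a_j[t] = 0?  Runs in O(n^k * d) time."""
    return any(
        all(any(v[t] == 0 for v in tup) for t in range(len(tup[0])))
        for tup in product(*sets)
    )

# 2-OV example: (1, 0) and (0, 1) are orthogonal.
assert k_ov([(1, 0), (1, 1)], [(0, 1)]) is True
assert k_ov([(1, 1)], [(1, 0), (0, 1)]) is False
```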

###### Hypothesis 9 (-OV conjecture).

Let $k \ge 2$. There is no $O(n^{k-\varepsilon})$ time algorithm for $k$-OV, with $d = n^{o(1)}$, for any constant $\varepsilon > 0$.

For the special case of $k = 2$, which we simply denote by OV, we obtain the following weaker conjecture.

###### Hypothesis 10 (OV conjecture).

There is no $O(n^{2-\varepsilon})$ time algorithm for OV, with $d = n^{o(1)}$, for any constant $\varepsilon > 0$. Equivalently, even restricted to instances with $|A_1| = n$ and $|A_2| = m = \Theta(n^{\gamma})$, $0 < \gamma \le 1$, there is no $O((nm)^{1-\varepsilon})$ time algorithm for OV, with $d = n^{o(1)}$, for any constant $\varepsilon > 0$.

A proof of the folklore equivalence of the statements for equal and unequal set sizes can be found, e.g., in [16].

## 3 Main Construction: Hardness of LCIS

In this section, we prove quadratic-time SETH hardness of LCIS, i.e., prove Theorem 3. We first introduce an inflation operation, which we then use to construct our separator sequences. After defining simple vector gadgets, we show how to embed an Orthogonal Vectors instance using our vector gadgets and separator sequences.

### 3.1 Inflation

We begin by introducing the inflation operation, which roughly corresponds to weighting the sequences: every element is duplicated, so every LCIS doubles in length.

###### Definition 11.

For a sequence $X = (x_1, x_2, \dots, x_\ell)$ of integers we define:

$$\mathrm{inflate}(X) = (2x_1 - 1,\ 2x_1,\ 2x_2 - 1,\ 2x_2,\ \dots,\ 2x_\ell - 1,\ 2x_\ell).$$

###### Lemma 12.

For any two sequences $X$ and $Y$, $\mathrm{lcis}(\mathrm{inflate}(X), \mathrm{inflate}(Y)) = 2 \cdot \mathrm{lcis}(X, Y)$.

###### Proof.

Let $Z$ be a longest common increasing subsequence of $X$ and $Y$. Observe that $\mathrm{inflate}(Z)$ is a common increasing subsequence of $\mathrm{inflate}(X)$ and $\mathrm{inflate}(Y)$ of length $2|Z|$, thus $\mathrm{lcis}(\mathrm{inflate}(X), \mathrm{inflate}(Y)) \ge 2 \cdot \mathrm{lcis}(X, Y)$.

Conversely, let $X'$ denote $\mathrm{inflate}(X)$ and $Y'$ denote $\mathrm{inflate}(Y)$. Let $Z'$ be a longest common increasing subsequence of $X'$ and $Y'$. If we divide all elements of $Z'$ by $2$ and round up to the closest integer, we end up with a weakly increasing sequence. Now, if we remove duplicate elements to make this sequence strictly increasing, we obtain $Z$, a common increasing subsequence of $X$ and $Y$. At most $2$ distinct elements may become equal after division by $2$ and rounding, therefore $Z$ contains at least $|Z'|/2$ elements, so $\mathrm{lcis}(X, Y) \ge \mathrm{lcis}(X', Y')/2$. This completes the proof. ∎
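Lemma 12 can be sanity-checked on small inputs with a brute-force LCIS (exponential time, but sufficient at this size):

```python
from itertools import combinations

def inflate(seq):
    """Replace each element x by the pair (2x - 1, 2x)."""
    return [v for x in seq for v in (2 * x - 1, 2 * x)]

def is_subseq(s, t):
    it = iter(t)
    return all(x in it for x in s)  # iterator is consumed left to right

def lcis_brute(X, Y):
    """Brute-force LCIS: combinations(X, r) enumerates subsequences of X
    in order, longest first; keep those that are strictly increasing and
    also subsequences of Y."""
    for r in range(min(len(X), len(Y)), 0, -1):
        for c in combinations(X, r):
            if all(u < v for u, v in zip(c, c[1:])) and is_subseq(c, Y):
                return r
    return 0

X, Y = [1, 3, 2], [1, 2, 3]
assert lcis_brute(X, Y) == 2                    # e.g. (1, 3)
assert lcis_brute(inflate(X), inflate(Y)) == 4  # doubled, as Lemma 12 claims
```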

### 3.2 Separator sequences

Our goal is to construct two sequences $A$ and $B$ which can be split into $n$ blocks, i.e. $A = \alpha_1 \circ \cdots \circ \alpha_n$ and $B = \beta_1 \circ \cdots \circ \beta_n$, such that the length of the longest common increasing subsequence of the first $i$ blocks of $A$ and the first $j$ blocks of $B$ equals $i + j$, up to an additive constant. We call $A$ and $B$ separator sequences, and use them later to separate vector gadgets in order to make sure that only one pair of gadgets may interact with each other at the same time.

We construct the separator sequences inductively. For every $k \ge 0$, the sequences $A_k$ and $B_k$ are concatenations of $2^k$ blocks (of varying sizes), $A_k = a^k_1 \circ \cdots \circ a^k_{2^k}$ and $B_k = b^k_1 \circ \cdots \circ b^k_{2^k}$. Let $m_k$ denote the largest element of both sequences. As we will soon observe, $m_k = 3 \cdot 2^k - 2$.

The construction works as follows: for $k = 0$, we can simply set $A_0$ and $B_0$ as one-element sequences $(1)$. We then construct $A_{k+1}$ and $B_{k+1}$ inductively from $A_k$ and $B_k$ in two steps. First, we inflate both $A_k$ and $B_k$, then after each (now inflated) block we insert two-element sequences, called tail gadgets: $(2m_k + 2,\ 2m_k + 1)$ for $A_{k+1}$ and $(2m_k + 1,\ 2m_k + 2)$ for $B_{k+1}$. Formally, we describe the construction by defining the blocks of the new sequences. For $1 \le i \le 2^k$,

$$a^{k+1}_{2i-1} = \mathrm{inflate}(a^k_i), \quad a^{k+1}_{2i} = (2m_k + 2,\ 2m_k + 1), \quad b^{k+1}_{2i-1} = \mathrm{inflate}(b^k_i), \quad b^{k+1}_{2i} = (2m_k + 1,\ 2m_k + 2).$$

Note that the symbols appearing in tail gadgets do not appear in the inflated sequences. The largest element of both new sequences equals $2m_k + 2$, and solving the recurrence $m_{k+1} = 2m_k + 2$ with $m_0 = 1$ indeed gives $m_k = 3 \cdot 2^k - 2$.
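The inductive construction can be implemented directly; the following sketch (based on our reading of the tail-gadget values above) checks the block count, the total length claimed in Lemma 13 below, and the largest element $3 \cdot 2^k - 2$:

```python
def inflate(seq):
    return [v for x in seq for v in (2 * x - 1, 2 * x)]

def separators(K):
    """Blocks of A_K and B_K, following the inductive construction above."""
    A, B, m = [[1]], [[1]], 1  # level 0: a single one-element block, max = 1
    for _ in range(K):
        A = [blk for a in A for blk in (inflate(a), [2 * m + 2, 2 * m + 1])]
        B = [blk for b in B for blk in (inflate(b), [2 * m + 1, 2 * m + 2])]
        m = 2 * m + 2  # new largest element
    return A, B, m

for K in range(8):
    A, B, m = separators(K)
    assert len(A) == len(B) == 2 ** K            # 2^K blocks on each side
    assert sum(map(len, A)) == (K + 1) * 2 ** K  # total length (Lemma 13)
    assert m == 3 * 2 ** K - 2                   # largest element
```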

Now, let us prove two useful properties of the separator sequences.

###### Lemma 13.

$|A_k| = |B_k| = (k + 1) \cdot 2^k$.

###### Proof.

Observe that $|A_{k+1}| = 2|A_k| + 2 \cdot 2^k$. Indeed, to obtain $A_{k+1}$ first we double the size of $A_k$ by inflation and then add two new elements for each of the $2^k$ blocks of $A_k$. Solving the recurrence, with $|A_0| = 1$, completes the proof. The same reasoning applies to $B_k$. ∎

###### Lemma 14.

For every $i, j \in \{1, 2, \dots, 2^k\}$, $\mathrm{lcis}(a^k_1 \circ \cdots \circ a^k_i,\ b^k_1 \circ \cdots \circ b^k_j) = i + j - 1$.

###### Proof.

The proof is by induction on $k$. For $k = 0$ the claim is immediate. Assume the statement is true for $k$ and let us prove it for $k + 1$.

##### The “≥” direction.

First, consider the case when both $i$ and $j$ are even. Observe that $\mathrm{inflate}(a^k_1 \circ \cdots \circ a^k_{i/2})$ and $\mathrm{inflate}(b^k_1 \circ \cdots \circ b^k_{j/2})$ are subsequences of $a^{k+1}_1 \circ \cdots \circ a^{k+1}_i$ and $b^{k+1}_1 \circ \cdots \circ b^{k+1}_j$, respectively. Thus, using the induction hypothesis and inflation properties, we obtain a common increasing subsequence of length $2(i/2 + j/2 - 1) = i + j - 2$, consisting only of elements less than or equal to $2m_k$, and appending a common element of the tail gadget blocks $a^{k+1}_i$ and $b^{k+1}_j$ yields length $i + j - 1$.

If $i$ is odd and $j$ is even, observe that $\mathrm{inflate}(a^k_1 \circ \cdots \circ a^k_{(i+1)/2})$ is still a subsequence of $a^{k+1}_1 \circ \cdots \circ a^{k+1}_i$, so the induction hypothesis and inflation properties directly give a common increasing subsequence of length $2((i+1)/2 + j/2 - 1) = i + j - 1$. The case of $i$ even and $j$ odd is symmetric. Finally, for both $i$ and $j$ odd, take a common increasing subsequence of $a^{k+1}_1 \circ \cdots \circ a^{k+1}_{i-1}$ and $b^{k+1}_1 \circ \cdots \circ b^{k+1}_{j-1}$ of length $i + j - 3$ consisting only of elements less than or equal to $2m_k$, and append $2m_k + 1$ and $2m_k + 2$.

##### The “≤” direction.

We proceed by induction on $k$. Fix $i$ and $j$, and let $Z$ be a longest common increasing subsequence of $a^{k+1}_1 \circ \cdots \circ a^{k+1}_i$ and $b^{k+1}_1 \circ \cdots \circ b^{k+1}_j$.

If the last element of $Z$ is less than or equal to $2m_k$, then $Z$ is in fact a common increasing subsequence of $\mathrm{inflate}(a^k_1 \circ \cdots \circ a^k_{\lceil i/2 \rceil})$ and $\mathrm{inflate}(b^k_1 \circ \cdots \circ b^k_{\lceil j/2 \rceil})$, thus, by the induction hypothesis and inflation properties, $|Z| \le i + j - 1$.

The remaining case is when the last element of $Z$ is greater than $2m_k$, i.e., it stems from a tail gadget. In this case, consider the second-to-last element of $Z$. It must belong to some blocks $a^{k+1}_{i'}$ and $b^{k+1}_{j'}$ for $i' \le i$ and $j' \le j$, and we claim that $i' < i$ and $j' < j$ cannot hold simultaneously: by construction of the separator sequences, if blocks $a^{k+1}_{i'}$ and $b^{k+1}_{j'}$ have a common element larger than $2m_k$, then it is the only common element of these two blocks. Therefore, it cannot be the case that both $i' < i$ and $j' < j$, because the last two elements of $Z$ would then be located in $a^{k+1}_{i'}$ and $b^{k+1}_{j'}$ alone. As a consequence, $i' + j' \le i + j - 1$, which lets us apply the induction hypothesis to reason that the prefix of $Z$ omitting its last element is of length at most $i + j - 2$. Therefore, $|Z| \le i + j - 1$, which completes the proof. ∎

Observe that if we reverse the sequences $A_k$ and $B_k$ along with changing all elements $x$ to their negations $-x$, we obtain sequences $\bar A_k$ and $\bar B_k$ such that $\bar A_k$ splits into blocks $\bar a_1 \circ \cdots \circ \bar a_{2^k}$, $\bar B_k$ splits into blocks $\bar b_1 \circ \cdots \circ \bar b_{2^k}$, and

$$\mathrm{lcis}(\bar a_i \circ \cdots \circ \bar a_{2^k},\ \bar b_j \circ \cdots \circ \bar b_{2^k}) = (2^k - i + 1) + (2^k - j + 1) - 1. \tag{1}$$

### 3.3 Vector gadgets

Let $A = \{a_1, \dots, a_n\}$ and $B = \{b_1, \dots, b_n\}$ be two sets of $d$-dimensional $\{0,1\}$-vectors.

For $i, j \in \{1, \dots, n\}$ let us construct the vector gadgets $VG(a_i)$ and $VG(b_j)$ as (at most $2d$)-element sequences, built coordinate by coordinate, by defining, for every $t \in \{1, \dots, d\}$,

$$VG(a_i) = g_1 \circ \cdots \circ g_d, \quad g_t = \begin{cases} (2t) & \text{if } a_i[t] = 1,\\ (2t,\ 2t-1) & \text{if } a_i[t] = 0,\end{cases} \qquad VG(b_j) = h_1 \circ \cdots \circ h_d, \quad h_t = \begin{cases} (2t-1) & \text{if } b_j[t] = 1,\\ (2t-1,\ 2t) & \text{if } b_j[t] = 0.\end{cases}$$

Observe that at most one of the elements $2t - 1$ and $2t$ may appear in an LCIS of $VG(a_i)$ and $VG(b_j)$, and one of them appears if and only if $a_i[t]$ and $b_j[t]$ are not both equal to one. Therefore, $\mathrm{lcis}(VG(a_i), VG(b_j)) = d - a_i \cdot b_j$, and, in particular, $\mathrm{lcis}(VG(a_i), VG(b_j)) = d$ if and only if $a_i$ and $b_j$ are orthogonal.
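The property just stated matters more than the precise element encoding; the following sketch uses one encoding satisfying it (our reconstruction, checked exhaustively for $d = 3$), together with a standard quadratic-time LCIS routine:

```python
from itertools import product

def lcis(X, Y):
    """Quadratic-time LCIS (longest common strictly increasing subsequence)."""
    L = [0] * len(Y)
    for x in X:
        best = 0
        for j, y in enumerate(Y):
            if y == x:
                L[j] = max(L[j], best + 1)
            elif y < x:
                best = max(best, L[j])
    return max(L, default=0)

def vg_a(a):
    """Gadget for a on the A side: coordinate t occupies values {2t-1, 2t}."""
    g = []
    for t, bit in enumerate(a, start=1):
        g += [2 * t] if bit else [2 * t, 2 * t - 1]
    return g

def vg_b(b):
    """Gadget for b on the B side (mirrored order within each coordinate)."""
    g = []
    for t, bit in enumerate(b, start=1):
        g += [2 * t - 1] if bit else [2 * t - 1, 2 * t]
    return g

d = 3
for a in product((0, 1), repeat=d):
    for b in product((0, 1), repeat=d):
        inner = sum(x * y for x, y in zip(a, b))
        assert lcis(vg_a(a), vg_b(b)) == d - inner  # lcis = d iff orthogonal
```

Each coordinate occupies its own value range, so contributions of distinct coordinates add up independently in any common increasing subsequence.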

### 3.4 Final construction

To put all the pieces together, we plug the vector gadgets $VG(a_i)$ and $VG(b_j)$ into the separator sequences from Section 3.2, obtaining two sequences whose LCIS depends on the minimum inner product over the pairs $(a_i, b_j)$. We provide a general construction of such sequences, which will be useful in later sections.

###### Lemma 15.

Let $X_1, \dots, X_n$, $Y_1, \dots, Y_n$ be integer sequences such that none of them has an increasing subsequence longer than $\ell$. Then there exist sequences $X$ and $Y$ of length $O\!\left(\sum_i |X_i| + \sum_j |Y_j| + n \ell \log n\right)$, constructible in linear time, such that:

$$\mathrm{lcis}(X, Y) = c + \max_{i,j} \mathrm{lcis}(X_i, Y_j)$$

for a constant $c$ that only depends on $n$ and $\ell$ and is $O(n\ell)$.

###### Proof.

We can assume that $n = 2^k$ for some positive integer $k$, adding some dummy sequences if necessary. Recall the sequences $A_k$, $B_k$, $\bar A_k$ and $\bar B_k$ constructed in Section 3.2. Let $A'$, $B'$, $\bar A'$, $\bar B'$ be the sequences obtained from $A_k$, $B_k$, $\bar A_k$, $\bar B_k$ by applying inflation $t = \lceil \log_2(\ell + 1) \rceil$ times (thus increasing their length by a factor of $2^t \le 2(\ell+1)$ and, by Lemma 12, multiplying all LCIS lengths by $2^t$). Each of these four sequences splits into $n$ (now inflated) blocks, e.g. $A' = a'_1 \circ \cdots \circ a'_n$, where $a'_i = \mathrm{inflate}^{t}(a^k_i)$.

We subtract from $A'$ and $B'$ a constant large enough for all their elements to be smaller than all elements of every $X_i$ and $Y_j$. Similarly, we add to $\bar A'$ and $\bar B'$ a constant large enough for all their elements to be larger than all elements of every $X_i$ and $Y_j$. Now, we can construct the sequences $X$ and $Y$ as follows:

$$X = a'_1 \circ X_1 \circ \bar a'_1 \circ a'_2 \circ X_2 \circ \bar a'_2 \circ \cdots \circ a'_n \circ X_n \circ \bar a'_n, \qquad Y = b'_1 \circ Y_1 \circ \bar b'_1 \circ \cdots \circ b'_n \circ Y_n \circ \bar b'_n.$$

We claim that

$$\mathrm{lcis}(X, Y) = c + \max_{i,j} \mathrm{lcis}(X_i, Y_j), \quad \text{where } c = 2^t \cdot 2n = O(n\ell).$$

Let $X_i$ and $Y_j$ be a pair of sequences achieving $\max_{i,j} \mathrm{lcis}(X_i, Y_j)$. Recall that $\mathrm{lcis}(a'_1 \circ \cdots \circ a'_i,\ b'_1 \circ \cdots \circ b'_j) = 2^t (i + j - 1)$, with all the elements of this common subsequence preceding the elements of $X_i$ and $Y_j$ in $X$ and $Y$, respectively, and being smaller than them. In the same way, $\mathrm{lcis}(\bar a'_i \circ \cdots \circ \bar a'_n,\ \bar b'_j \circ \cdots \circ \bar b'_n) = 2^t ((n - i + 1) + (n - j + 1) - 1)$, with all the elements of this LCIS being greater and appearing later than those of $X_i$ and $Y_j$. By concatenating these three sequences we obtain a common increasing subsequence of $X$ and $Y$ of length $2^t \cdot 2n + \mathrm{lcis}(X_i, Y_j) = c + \mathrm{lcis}(X_i, Y_j)$.

It remains to prove $\mathrm{lcis}(X, Y) \le c + \max_{i,j} \mathrm{lcis}(X_i, Y_j)$. Let $Z$ be any common increasing subsequence of $X$ and $Y$. Observe that $Z$ must split into three (some of them possibly empty) parts $Z = Z_1 \circ Z_2 \circ Z_3$, with $Z_1$ consisting only of elements of $A'$ and $B'$, $Z_2$ – only of elements of the $X_i$ and $Y_j$, and $Z_3$ – of elements of $\bar A'$ and $\bar B'$.

Let $x$ be the last element of $Z_1$ and $y$ the first element of $Z_3$. We know that $x$ belongs to some blocks $a'_{i_1}$ of $X$ and $b'_{j_1}$ of $Y$, and $y$ belongs to some blocks $\bar a'_{i_2}$ of $X$ and $\bar b'_{j_2}$ of $Y$. Obviously $i_1 \le i_2$ and $j_1 \le j_2$. By Lemma 14 and inflation properties we have $|Z_1| \le 2^t (i_1 + j_1 - 1)$ and $|Z_3| \le 2^t ((n - i_2 + 1) + (n - j_2 + 1) - 1)$. We consider two cases:

Case 1. If $i_1 = i_2 = i$ and $j_1 = j_2 = j$, then $Z_2$ may only contain elements of $X_i$ and $Y_j$. Therefore

$$|Z| \le 2^t(i + j - 1) + \mathrm{lcis}(X_i, Y_j) + 2^t(2n - i - j + 1) = c + \mathrm{lcis}(X_i, Y_j) \le c + \max_{i,j} \mathrm{lcis}(X_i, Y_j).$$

Case 2. If $i_1 < i_2$ or $j_1 < j_2$, then $Z_2$ must be a strictly increasing subsequence of both $X_{i_1} \circ \cdots \circ X_{i_2}$ and $Y_{j_1} \circ \cdots \circ Y_{j_2}$, and therefore its length can be bounded by

$$|Z_2| \le \left(\min(i_2 - i_1,\ j_2 - j_1) + 1\right) \cdot \ell.$$

On the other hand, $|Z_1| + |Z_3| \le 2^t \left(2n - (i_2 - i_1) - (j_2 - j_1)\right)$. From that, using $2^t \ge \ell + 1$, we obtain $|Z| \le 2^t \cdot 2n \le c + \max_{i,j} \mathrm{lcis}(X_i, Y_j)$, as desired.

∎

We are ready to prove the main result of the paper.

###### Proof of Theorem 3.

Let $A = \{a_1, \dots, a_n\}$, $B = \{b_1, \dots, b_n\}$ be two sets of $d$-dimensional binary vectors, with $d = n^{o(1)}$. In Section 3.3 we constructed vector gadgets $VG(a_i)$ and $VG(b_j)$, for $i, j \in \{1, \dots, n\}$, such that $\mathrm{lcis}(VG(a_i), VG(b_j)) = d - a_i \cdot b_j$. To these sequences we apply Lemma 15, with $\ell = d$, obtaining sequences $X$ and $Y$ of length $O(nd \log n) = n^{1+o(1)}$ such that $\mathrm{lcis}(X, Y) = c + d - \min_{i,j} a_i \cdot b_j$ for a constant $c$; in particular, $\mathrm{lcis}(X, Y) \ge c + d$ if and only if there is an orthogonal pair. This reduction, combined with an $O(N^{2-\varepsilon})$ time algorithm for LCIS, would yield an $O(n^{2-\varepsilon+o(1)})$ time algorithm for OV, refuting Hypothesis 10 and, in particular, SETH. ∎

With the reduction above, one can not only determine whether there exists a pair of orthogonal vectors, but also, if there is none, calculate the minimum inner product over all pairs of vectors. Formally, by the above construction, we can reduce even the Most Orthogonal Vectors problem, as defined in Abboud et al. [1], to LCIS. This bases the hardness of LCIS already on the inability to improve over exhaustive search for the MAX-CNF-SAT problem, which is a slightly weaker conjecture than SETH.

## 4 Matching Lower Bound for Output-Dependent Algorithms

To prove our bivariate conditional lower bound of $(nL)^{1-o(1)}$, we provide a reduction from an OV instance with unequal vector set sizes.

###### Proof of Theorem 4.

Let $0 < \gamma \le 1$ be arbitrary and consider any OV instance with sets $A$, $B$ with $|A| = n$, $|B| = m = \Theta(n^{\gamma})$, and $d = n^{o(1)}$. We reduce this problem, in time linear in the output size, to an LCIS instance with sequences $X$ and $Y$ satisfying $|X|, |Y| = n^{1+o(1)}$ and an LCIS of length $L = m^{1+o(1)}$. Theorem 4 is an immediate consequence of the reduction: an $O((nL)^{1-\varepsilon})$ time LCIS algorithm would yield an OV algorithm running in time $O((nm)^{1-\varepsilon+o(1)})$, which would refute Hypothesis 10 and, in particular, SETH.

It remains to show the reduction itself. Let $A = \{a_1, \dots, a_n\}$ and $B = \{b_1, \dots, b_m\}$ be two sets of $d$-dimensional $\{0,1\}$-vectors. By adding dummy vectors, we can assume without loss of generality that $n = rm$ for some integer $r$.

We use the vector gadgets $VG(a_i)$ and $VG(b_j)$ from Section 3.3. This time, however, we group together every $r = n/m$ consecutive gadgets, i.e., $VG(a_1), \dots, VG(a_r)$, then $VG(a_{r+1}), \dots, VG(a_{2r})$, and so on. Specifically, let $VG_s(a_i)$ be the $i$-th vector gadget shifted by an integer $s$ (i.e. with $s$ added to all its elements). We define, for each $p \in \{1, \dots, m\}$,

$$X_p = VG_{(r-1)D}(a_{(p-1)r+1}) \circ VG_{(r-2)D}(a_{(p-1)r+2}) \circ \cdots \circ VG_{0}(a_{pr}),$$

where $D = 2d + 1$ is chosen so that the value ranges of distinct shifts are disjoint.

In a similar way, for $j \in \{1, \dots, m\}$, we replicate every gadget $VG(b_j)$ $r$ times with the same shifts, i.e.,

$$Y_j = VG_{(r-1)D}(b_j) \circ VG_{(r-2)D}(b_j) \circ \cdots \circ VG_{0}(b_j).$$

Let us now determine $\mathrm{lcis}(X_p, Y_j)$. No two gadgets grouped in $X_p$ can contribute to an LCIS together, as the later one has smaller elements. Therefore, only one gadget $VG_{(r-q)D}(a_{(p-1)r+q})$ can be used, paired with the one copy of $VG(b_j)$ in $Y_j$ having the matching shift. This yields $\mathrm{lcis}(X_p, Y_j) = d - \min_{q \in \{1, \dots, r\}} a_{(p-1)r+q} \cdot b_j$, and in turn, also $\max_{p,j} \mathrm{lcis}(X_p, Y_j) = d - \min_{i,j} a_i \cdot b_j$.

Observe that every $X_p$ is a concatenation of several gadgets, each one shifted to make its elements smaller than the previous ones. Therefore, any increasing subsequence of $X_p$ must be contained in a single gadget, and thus cannot be longer than $d$. The same argument applies to every $Y_j$. Therefore, we can apply Lemma 15, with $\ell = d$, to these sequences, obtaining $X$ and $Y$ satisfying:

$$\mathrm{lcis}(X, Y) = c + d - \min_{i,j} a_i \cdot b_j.$$

Recall that $c$ is some constant dependent only on $m$ and $\ell = d$, with $c = O(md)$. The length of both $X$ and $Y$ is $O(nd + md \log m) = n^{1+o(1)}$, and the length of the output LCIS is $L = c + O(d) = m^{1+o(1)}$, as desired. ∎

## 5 Hardness of $k$-LCIS

In this section we show that, assuming SETH, there is no $O(n^{k-\varepsilon})$ time algorithm for the $k$-LCIS problem, i.e., we prove Theorem 5. To obtain this lower bound we show a reduction from the $k$-Orthogonal Vectors problem (for the definition, see Section 2). There are two main ingredients of the reduction, i.e. separator sequences and vector gadgets, and both of them can be seen as natural generalizations of those introduced in Section 3.

### 5.1 Generalizing separator sequences

Please note that in this section we use notation that is not consistent with the one from Section 3, because it has to accommodate indexing over $k$ sequences.

The aim of this section is to show, for any $r$ that is a power of two, how to construct sequences $A_1, \dots, A_k$ such that each of them can be split into $r$ blocks, i.e. $A_j = a^j_1 \circ a^j_2 \circ \cdots \circ a^j_r$, and for any choice of