IPEC 2016 Accepted Papers with Abstracts
Randomised enumeration of small witnesses using a decision oracle
Many combinatorial problems involve determining whether a universe of n elements contains a witness consisting of k elements which have some specified property. In this paper we investigate the relationship between the decision and enumeration versions of such problems: efficient methods are known for transforming a decision algorithm into a search procedure that finds a single witness, but even finding a second witness is not so straightforward in general. We show that, if the decision version of the problem belongs to FPT, there is a randomised algorithm which enumerates all witnesses in time f(k) · poly(n) · N, where N is the total number of witnesses and f is a computable function. This also gives rise to an efficient algorithm to count the total number of witnesses when this number is small.
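The decision-to-search transformation mentioned above can be sketched as a standard self-reduction. This is an editorial illustration, not taken from the paper: the oracle interface and the toy "subset summing to a target" property are invented for the example.

```python
from itertools import combinations

def find_witness(universe, k, decide):
    """Find one k-element witness using only a decision oracle.

    decide(candidates) answers: does `candidates` contain a k-witness?
    We try to discard each element in turn; an element is kept only if
    discarding it would destroy all remaining witnesses.  The final set
    is a minimal 'yes'-set, i.e. exactly one witness.
    """
    if not decide(universe):
        return None
    remaining = list(universe)
    for x in list(remaining):
        trial = [y for y in remaining if y != x]
        if decide(trial):
            remaining = trial
    return remaining

# Toy property: a witness is a set of K numbers summing to TARGET.
K, TARGET = 3, 10
def decide(cands):
    return any(sum(c) == TARGET for c in combinations(cands, K))
```

As the abstract points out, finding a second witness is harder: rerunning the reduction with the first witness excluded can miss witnesses that overlap it.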
Parallel Multivariate Meta-Theorems
Fixed-parameter tractability is based on the observation that many hard problems become tractable even on large inputs as long as certain input parameters are small. Originally, ``tractable'' just meant ``solvable in polynomial time,'' but modern hardware in particular raises the question of whether we can also achieve ``solvable in polylogarithmic parallel time.'' A framework for this study of parallel fixed-parameter tractability is available and a number of isolated algorithmic results have been obtained in recent years, but one of the unifying core tools of classical FPT theory has been missing: algorithmic meta-theorems. We establish two such theorems by giving new upper bounds on the circuit depth necessary to solve the model checking problem for monadic second-order logic, once parameterized by the tree width and the formula (this is a parallel version of Courcelle's Theorem) and once by the tree depth and the formula. For our proofs we refine the analysis of earlier algorithms, especially of Bodlaender's, but also need to add new ideas, especially in the context where the parallel runtime is bounded by a function of the parameter and does not depend on the length of the input.
A Parameterized Algorithmics Framework for Digraph Degree Sequence Completion Problems
There has been intensive work on the parameterized complexity of the typically NP-hard task to edit undirected graphs into graphs fulfilling certain given vertex degree constraints. In this work, seemingly for the first time, we lift the investigations to the case of directed graphs; herein, we focus on arc insertions. To this end, our general two-stage framework consists of efficiently solving a problem-specific number problem and then transferring its solution to a solution for the graph problem by applying flow computations. In this way, we obtain fixed-parameter tractability and polynomial kernelizability results, with the central parameter being the maximum vertex degree. Although there are certain similarities with the much better studied undirected case, the flow computation used in the directed case seems not to work for the undirected case, while f-factor computations as used in the undirected case seem not to work for the directed case.
The Parameterized Complexity of Dependency Detection in Relational Databases
We study the parameterized computational complexity of classical problems that arise in the profiling of relational data. Namely, we characterize the complexity of detecting unique column combinations (candidate keys), functional dependencies and inclusion dependencies, with the solution size as the parameter. While the discovery of unique column combinations and functional dependencies turns out to be W[2]-complete, the detection of inclusion dependencies is one of the first natural problems proven to be complete for the class W[3]. As a side effect, our reductions give insights into the complexity of enumerating all inclusion-wise minimal unique column combinations or functional dependencies.
Cut and Count and Representative Sets on Branch Decompositions
Recently, new techniques have been introduced to speed up dynamic programming algorithms on tree decompositions for connectivity problems: the ‘Cut and Count’ method and a method called the rank-based approach, based on representative sets and Gaussian elimination. These methods give randomized and deterministic algorithms, respectively, that are single exponential in the treewidth, and polynomial, respectively linear, in the number of vertices. In this paper, we adapt these methods to branch decompositions, yielding algorithms, both randomised and deterministic, that are in many cases faster than when tree decompositions would be used. In particular, we obtain the currently fastest randomised algorithms for several problems on planar graphs. When the involved weights are n^{O(1)}, we obtain faster randomised algorithms on planar graphs for Steiner Tree, Connected Dominating Set, Feedback Vertex Set and TSP, and a faster deterministic algorithm for TSP. When considering planar graphs with arbitrary real weights, we obtain faster deterministic algorithms for all four mentioned problems.
A faster parameterized algorithm for Pseudoforest Deletion
A pseudoforest is a graph where each connected component contains at most one cycle or, alternatively, a graph that can be turned into a forest by removing at most one edge from each connected component. In this paper, we show that the following problem can be solved in O(3^k · n · k^{O(1)}) time: given a graph G and an integer k, can we delete at most k vertices and their incident edges from G such that we obtain a pseudoforest? The result improves upon an earlier result by Philip et al. [PhilipRS15], who gave a 7.56^k · n^{O(1)}-time algorithm, both in the exponential factor depending on k and in the polynomial factor depending on n.
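The definition can be checked directly: a graph is a pseudoforest exactly when every connected component has at most as many edges as vertices. A small union-find sketch (an editorial illustration, not part of the paper's algorithm):

```python
from collections import Counter

def is_pseudoforest(n, edges):
    """Each connected component may contain at most one cycle,
    i.e. at most as many edges as vertices."""
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    for u, v in edges:
        parent[find(u)] = find(v)          # union the endpoints
    verts = Counter(find(v) for v in range(n))
    ecnt = Counter(find(u) for u, _ in edges)
    return all(ecnt[r] <= verts[r] for r in verts)
```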
Finding Secluded Places of Special Interest in Graphs
Finding a vertex subset in a graph that satisfies a certain property is one of the most-studied topics in algorithmic graph theory. The focus herein is often on minimizing or maximizing the size of the solution, that is, the size of the desired vertex set. In several applications, however, we also want to limit the “exposure” of the solution to the rest of the graph. This is the case, for example, when the solution represents persons that ought to deal with sensitive information or a segregated community. In this work, we thus explore the (parameterized) complexity of finding such secluded vertex subsets for a wide variety of properties that they shall fulfill. More precisely, we study the constraint that the (open or closed) neighborhood of the solution shall be bounded by a parameter and the influence of this constraint on the complexity of minimizing separators, feedback vertex sets, F-free vertex deletion sets, dominating sets, and the maximization of independent sets.
H-free Graphs, Independent Sets, and Subexponential-time Algorithms
It is an outstanding open question in algorithmic graph theory to determine the complexity of the Maximum Independent Set problem on Pt-free graphs, that is, on graphs not containing any induced path on t vertices. So far, polynomial-time algorithms are known only for t ≤ 5 [Lokshtanov et al., SODA 2014, 570-581, 2014]. Here we study the existence of subexponential-time algorithms for the problem: by generalizing an earlier result of Randerath and Schiermeyer for t = 5 [Discrete Appl. Math., 158 (2010), 1041-1044], we show that for any t ≥ 5, there is an algorithm for Maximum Independent Set on Pt-free graphs whose running time is subexponential in the number of vertices. Scattered Set is the generalization of Maximum Independent Set where the vertices of the solution are required to be at distance at least d from each other in the whole graph. We give a complete characterization of those graphs H for which Scattered Set on H-free graphs can be solved in time subexponential in the size of the input (that is, in the number of vertices plus the number of edges): If every component of H is a path, then Scattered Set on H-free graphs with n vertices and m edges can be solved in time 2^{(n+m)^{1-O(1/|V(H)|)}}, even if d is part of the input. Otherwise, assuming the ETH, there is no 2^{o(n+m)}-time algorithm for Scattered Set for any fixed d ≥ 3 on H-free graphs with n vertices and m edges.
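The Scattered Set condition itself is easy to verify with breadth-first search; with d = 2 it is exactly the independent-set condition. An illustrative checker (not from the paper; the adjacency-dict interface is ours):

```python
from collections import deque

def is_scattered(adj, S, d):
    """Every pair of solution vertices must be at distance >= d in
    the whole graph.  BFS from each vertex of S up to depth d - 1 and
    make sure no other solution vertex is reached."""
    S = set(S)
    for s in S:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            if dist[u] == d - 1:
                continue                  # depth limit reached
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        if any(v in S and v != s for v in dist):
            return False
    return True
```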
Lower Bounds for Protrusion Replacement by Counting Equivalence Classes
Garnero et al. [SIDMA 2015] recently introduced a framework based on dynamic programming to make applications of the protrusion replacement technique constructive and to obtain explicit upper bounds on the involved constants. They show that for several graph problems, for every boundary size t one can find an explicit set Rt of representatives. Any subgraph H with a boundary of size t can be replaced with a representative H' in Rt such that the effect of this replacement on the optimum can be deduced from H and H' alone. Their upper bounds on the size of the graphs in Rt grow doubly-exponentially with t. In this paper we complement their results by lower bounds on the sizes of representatives, in terms of the boundary size t. For example, we show that each set of representatives Rt for the Independent Set problem contains a graph with Ω(2^t/√(4t)) vertices. This lower bound even holds for sets that only represent the planar subgraphs of bounded pathwidth. To obtain our results we provide a lower bound on the number of equivalence classes of the canonical equivalence relation for Independent Set on t-boundaried graphs. We also find an elegant characterization of the number of equivalence classes in general graphs, in terms of the number of monotone functions of a certain kind. Our results show that the number of equivalence classes is at most 2^{2^t}, improving on bounds of the form t^{2^t} which are implicit in the literature.
A Fast Parameterized Algorithm for Co-Path Set
The k-Co-Path Set problem asks, given a graph G and a positive integer k, whether one can delete k edges from G so that the remainder is a collection of disjoint paths. We give a linear-time FPT algorithm with complexity O*(1.588^k) for deciding k-Co-Path Set, significantly improving the previously best known O*(2.17^k) of Feng, Zhou, and Wang (2015). We also describe a new O*(4^{tw(G)}) algorithm for Co-Path Set using the Cut&Count framework, where tw(G) denotes treewidth. In general graphs, we combine this with a branching algorithm which refines a 6k-kernel into reduced instances, which we prove have bounded treewidth.
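Whether a remainder graph is a collection of disjoint paths is easy to test: all degrees at most two and no cycle. A union-find sketch of that feasibility check (illustrative only; the paper's branching algorithm is of course far more involved):

```python
def is_linear_forest(n, edges):
    """A graph is a disjoint union of paths iff every vertex has
    degree <= 2 and no edge closes a cycle."""
    deg = [0] * n
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
        if deg[u] > 2 or deg[v] > 2:
            return False                   # a vertex of degree 3
        ru, rv = find(u), find(v)
        if ru == rv:
            return False                   # this edge closes a cycle
        parent[ru] = rv
    return True
```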
Edge Bipartization faster than 2k
In the Edge Bipartization problem one is given an undirected graph G and an integer k, and the question is whether k edges can be deleted from G so that it becomes bipartite. In 2006, Guo et al. proposed an algorithm solving this problem in time O(2^k m^2); today, this algorithm is a textbook example of an application of the iterative compression technique. Despite extensive progress in the understanding of the parameterized complexity of graph separation problems in recent years, no significant improvement upon this result has yet been reported. We present an algorithm for Edge Bipartization that works in time O(1.977^k nm), which is the first algorithm with a running time dependence on the parameter better than 2^k. To this end, we combine the general iterative compression strategy of Guo et al., the technique proposed by Wahlström of using a polynomial-time solvable relaxation in the form of a Valued Constraint Satisfaction Problem to guide a bounded-depth branching algorithm, and an involved Measure&Conquer analysis of the recursion tree.
Cutwidth: obstructions and algorithmic aspects
Cutwidth is one of the classic layout parameters for graphs. It measures how well one can order the vertices of a graph in a linear manner, so that the maximum number of edges between any prefix and its complement suffix is minimized. As graphs of cutwidth at most k are closed under taking immersions, it follows from the results of Robertson and Seymour that there is a finite list of minimal immersion obstructions for admitting a cut layout of width at most k. In this paper, we prove that every minimal immersion obstruction for cutwidth at most k has size at most 2^{O(k^3 log k)}. For our proof, we introduce the concept of a lean ordering that can be seen as the analogue of lean decompositions, introduced by Thomas for the case of treewidth. As an interesting algorithmic byproduct of the approach, we design a new fixed-parameter algorithm for computing the cutwidth of a graph that runs in time 2^{O(k^2 log k)} · n, where k is the optimum width and n is the vertex count of the graph. While being slower by a log k factor in the exponent than the fastest known algorithm for cutwidth, due to Thilikos, Bodlaender, and Serna, our algorithm has the advantage of being simpler and self-contained; arguably, it explains better the combinatorics of optimum-width layouts.
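The layout definition can be made concrete with a brute-force reference implementation that tries every vertex ordering (exponential, for illustration only; it is an editorial sketch, not the paper's algorithm):

```python
from itertools import permutations

def cutwidth(n, edges):
    """Minimum over all vertex orders of the maximum number of edges
    crossing any prefix/suffix cut."""
    best = float('inf')
    for order in permutations(range(n)):
        pos = {v: i for i, v in enumerate(order)}
        # the cut after position i is crossed by edges with one
        # endpoint at position <= i and the other at position > i
        width = max(
            (sum(1 for u, v in edges
                 if min(pos[u], pos[v]) <= i < max(pos[u], pos[v]))
             for i in range(n - 1)),
            default=0)
        best = min(best, width)
    return best
```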
Structural Parameterizations of Feedback Vertex Set
A feedback vertex set in an undirected graph is a subset of vertices whose removal results in an acyclic graph. It is well-known that the problem of finding a minimum-sized (or, in the decision version, k-sized) feedback vertex set (FVS) is polynomial-time solvable in (sub)cubic graphs, in pseudo-forests (graphs where each component has at most one cycle) and in mock-forests (graphs where each vertex is part of at most one cycle). In general graphs, the problem is known to be NP-complete, and to admit a fixed-parameter algorithm and a polynomial kernel with respect to k, the solution size. We consider the parameterized and kernelization complexity of feedback vertex set where the parameter is the size of some structure in the input. In particular, we show that
- FVS is fixed-parameter tractable, but is unlikely to have a polynomial-sized kernel, when parameterized by the number of vertices of the graph whose degree is at least 4. This answers a question asked in an earlier paper.
- When parameterized by the number k of vertices whose deletion results in a pseudo-forest, we give a kernel with O(k^6) vertices, improving on the previously known O(k^10) bound.
- When parameterized by the number k of vertices whose deletion results in a mock-d-forest, we give a kernel with O(k^{3d+3}) vertices and prove a lower bound of Ω(k^{d+2}) vertices (under complexity-theoretic assumptions). A mock-d-forest is a mock-forest where each component has at most d cycles, for a constant d.
Dynamic Parameterized Problems
Many graph-theoretic problems typically deal with finding a subset of vertices (and/or edges) having certain properties that are of interest to us. In real-world applications, the (large) graph under consideration often changes over time, and due to this dynamically changing structure, the solution at hand might lose the desired properties. Recomputing a new solution on the new graph is an expensive task, especially when the number of modifications made to the graph is significantly smaller than the size of the graph. The goal in the area of dynamic graph algorithms is to efficiently maintain a solution under these changes. A dynamic version of a problem P is a quintuple (I, I', S, k, r) where I and I' are instances of P that are at an edit distance of at most k and S is a solution to I. The task is to determine whether there is a solution S' to I' that is at a Hamming distance of at most r from S. Here, k is referred to as the edit parameter and r is called the distance parameter. Downey et al. pioneered the study of dynamic parameterized problems by introducing Dynamic Dominating Set, the dynamic version of Dominating Set, and proving that it is FPT with respect to the edit parameter. As a sequel to this work, Abu-Khzam et al. described various dynamic parameterized problems and showed fixed-parameter tractability and intractability results. Inspired by these works, we study the parameterized complexity of the dynamic variant of the classical Pi-Deletion problem and show NP-hardness, fixed-parameter tractability and kernelization results. Then, for specific cases of Pi-Deletion such as Dynamic Vertex Cover and Dynamic Feedback Vertex Set, we describe improved FPT algorithms and give linear kernels. Specifically, we show that Dynamic Vertex Cover can be solved in O*(1.1740^k) time and polynomial space, while Dynamic Feedback Vertex Set admits a randomized algorithm with O*(1.6667^k) running time.
Finally, we consider Dynamic Connected Vertex Cover, Dynamic Dominating Set and Dynamic Connected Dominating Set and describe algorithms with O*(2^k) running time. For Dynamic Dominating Set and Dynamic Connected Dominating Set, we show that this is the optimal running time (up to polynomial factors) assuming the Set Cover Conjecture.
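The quintuple definition above can be instantiated for Dynamic Vertex Cover with a brute-force checker; this is an editor's sketch with an invented function name and toy instance, shown only to make the (I, I', S, k, r) formulation concrete:

```python
from itertools import combinations

def dynamic_vertex_cover(vertices, new_edges, S, r):
    """Is there a vertex cover S' of the edited graph at Hamming
    distance (size of the symmetric difference) at most r from the
    old cover S?  Brute force over all candidate covers."""
    S = set(S)
    vertices = list(vertices)
    for size in range(len(vertices) + 1):
        for Sp in combinations(vertices, size):
            Sp = set(Sp)
            if len(S ^ Sp) <= r and all(u in Sp or v in Sp
                                        for u, v in new_edges):
                return Sp
    return None
```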
Turbocharging Treewidth Heuristics
A widely used class of algorithms for computing tree decompositions of graphs are heuristics that compute an elimination order, i.e., a permutation of the vertex set. In this paper, we propose to turbocharge these heuristics. For a target treewidth k, suppose the heuristic has already computed a partial elimination order of width at most k, but extending it by one more vertex exceeds the target width k. At this moment of regret, we solve a subproblem, which is to recompute the last c positions of the partial elimination order such that it can be extended without exceeding width k. We show that this subproblem is fixed-parameter tractable when parameterized by k and c, but it is para-NP-hard and W[1]-hard when parameterized by only k or only c, respectively. Our experimental evaluation of the FPT algorithm shows that we can trade a reasonable increase of the running time for quality of the solution.
On Satisfiability Problems with Linear Structure
It was recently shown [STV] that satisfiability is polynomially solvable when the incidence graph is an interval bipartite graph (an interval graph with all edges within each partition omitted). Here we relax this condition in several directions: First, we give an FPT algorithm, parameterized by k, for k-interval bigraphs, bipartite graphs which can be converted to interval ones by adding to each node of one side at most k edges; the same result holds for the counting and the weighted maximization versions of satisfiability. Second, given two linear orders, one for the variables and one for the clauses, we show how to find, in polynomial time, the smallest k such that there is a k-interval bigraph compatible with these two orders. We further prove that, barring complexity collapses, no such extensions are possible for CSPs more general than satisfiability. We also show NP-hardness of recognizing 1-interval bigraphs.
A 2ℓk Kernel for ℓ-Component Order Connectivity
In the ℓ-Component Order Connectivity problem (ℓ ∈ ℕ), we are given a graph G on n vertices and m edges and a non-negative integer k, and asked whether there exists a set of vertices S ⊆ V(G) such that |S| ≤ k and the size of the largest connected component in G-S is at most ℓ. In this paper, we give a kernel for ℓ-Component Order Connectivity with at most 2ℓk vertices and a parameterized algorithm with running time O((ℓ-0.2515)^k (kℓ)^{O(1)} + n + m). On the way to obtaining our kernel, we prove a generalization of the q-Expansion Lemma to weighted graphs. This generalization may be of independent interest.
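A feasibility checker for the deletion condition is straightforward (an editorial sketch, not from the paper): delete S and verify that every remaining connected component has at most ℓ vertices.

```python
def leaves_small_components(n, edges, S, ell):
    """After deleting S, every connected component of the remaining
    graph must have at most ell vertices."""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen = set(S)
    for s in range(n):
        if s in seen:
            continue
        # explore the component of s in G - S
        size, stack = 0, [s]
        seen.add(s)
        while stack:
            u = stack.pop()
            size += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        if size > ell:
            return False
    return True
```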
Backdoors for Linear Temporal Logic
In the present paper, we introduce the backdoor set approach into the field of temporal logic for the globally fragment of linear temporal logic. We study the parameterized complexity of the satisfiability problem parameterized by the size of the backdoor. We distinguish between backdoor detection and evaluation of backdoors into the fragments of Horn and Krom formulas. Here we classify the operator fragments of globally-operators for past/future/always, and the combination of them. Detection is shown to be fixed-parameter tractable (FPT), whereas the complexity of evaluation behaves differently. We show that for Krom formulas the problem is para-NP-complete. For Horn formulas, the complexity is shown to be either fixed-parameter tractable or para-NP-complete, depending on the considered operator fragment.
Exact Algorithms for List-coloring of Intersecting Hypergraphs
We show that list-coloring for any intersecting hypergraph of m edges on n vertices, with lists drawn from a set of size at most k, can be checked in quasi-polynomial time (nm)^{o(k^2 log(mn))}.
Clifford algebras meet tree decompositions
We introduce the Noncommutative Subset Convolution - a convolution of functions useful when working with determinant-based algorithms. In order to perform it efficiently, we take advantage of Clifford algebras, a generalization of quaternions used mainly in quantum field theory. We apply this tool to speed up algorithms counting subgraphs parameterized by the treewidth of a graph. We present an O*((2^ω + 1)^{tw})-time algorithm for counting Steiner trees and an O*((2^ω + 2)^{tw})-time algorithm for counting Hamiltonian cycles, both of which improve the previously known upper bounds; here ω denotes the matrix multiplication exponent. The result for Steiner Tree also translates into a deterministic algorithm for Feedback Vertex Set. All of these constitute the best known running times of deterministic algorithms for decision versions of these problems, and they match the best obtained running times for pathwidth parameterization under the assumption ω = 2.
On the Parameterized Complexity of Biclique Cover and Partition
Given a bipartite graph G, we consider the decision problem called Biclique Cover for a fixed positive integer parameter k, where we are asked whether the edges of G can be covered with at most k complete bipartite subgraphs (a.k.a. bicliques). In the Biclique Cover and Partition problem, we have the additional constraint that each edge should appear in exactly one of the k bicliques. These problems are both known to be NP-complete but fixed-parameter tractable. However, the known FPT algorithms have a running time that is doubly exponential in k, and the best known kernel for both problems is exponential in k. We build on this kernel and improve the running time for Biclique Cover and Partition to O*(2^{k^2}) by exploiting a linear-algebraic view on this problem with an algorithm using arithmetic in a finite field. On the other hand, we show that no such improvement is possible for Biclique Cover unless the Exponential Time Hypothesis (ETH) is false, by proving a corresponding doubly exponential lower bound on the running time. We achieve this by giving a reduction from 3SAT on n variables to an instance of Biclique Cover with k = O(log n). As a further consequence of this reduction, we show that there is no subexponential kernel for Biclique Cover unless P = NP. Finally, we point out the significance of the exponential kernel mentioned above for the design of polynomial-time approximation algorithms for the optimization versions of both problems. That is, we show that it is possible to obtain approximation factors of n/log n for both problems, which seemingly remained unnoticed by the approximation community, as the best previously reported approximation factor was n/√(log n).
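The two covering notions differ only in a multiplicity constraint, which a simple checker makes explicit (an editorial sketch; here a biclique is given as a pair of vertex sets (A, B) covering all pairs in A × B):

```python
def check_biclique_cover(edges, bicliques, partition=False):
    """Cover: every edge lies in at least one biclique A x B.
    Partition: every edge lies in exactly one.  A biclique that uses
    a non-edge of G is rejected outright."""
    count = {frozenset(e): 0 for e in edges}
    for A, B in bicliques:
        for a in A:
            for b in B:
                e = frozenset((a, b))
                if e not in count:
                    return False           # biclique uses a non-edge
                count[e] += 1
    ok = (lambda c: c == 1) if partition else (lambda c: c >= 1)
    return all(ok(c) for c in count.values())
```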
Optimal dynamic program for r-domination problems over tree decompositions
There has been recent progress in showing that the exponential dependence on treewidth in dynamic programming algorithms for solving NP-hard problems is optimal under the Strong Exponential Time Hypothesis (SETH). We extend this work to r-domination problems. In r-dominating set, one wishes to find a minimum subset S of vertices such that every vertex of G is within r hops of some vertex in S. In connected r-dominating set, one additionally requires that the set induces a connected subgraph of G. We give an O((2r+1)^{tw} n) time algorithm for r-dominating set and an O((2r+2)^{tw} n^{O(1)}) time algorithm for connected r-dominating set in n-vertex graphs of treewidth tw. We show that this running time dependence on r and tw is the best possible under SETH. This adds to earlier observations that a "+1" in the base of the exponent is required for connectivity constraints.
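The r-domination condition itself is one multi-source BFS away (an illustrative checker over an adjacency dict, not the paper's treewidth DP):

```python
from collections import deque

def is_r_dominating(adj, S, r):
    """Every vertex must be within r hops of some vertex of S."""
    dist = {s: 0 for s in S}       # multi-source BFS from S
    q = deque(S)
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return all(v in dist and dist[v] <= r for v in adj)
```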
Computing Graph Distances Parameterized by Treewidth and Diameter
We show that the eccentricity of every vertex in an undirected graph on n vertices can be computed in time n · exp(O(k log d)), where k is the treewidth of the graph and d is the diameter. This means that the diameter and the radius of the graph can be computed in the same time. In particular, if the diameter is constant, they can be determined in time n · exp(O(k)). This result matches a recent hardness result by Abboud, Vassilevska Williams, and Wang [SODA 2016], which shows that under the Strong Exponential Time Hypothesis of Impagliazzo, Paturi, and Zane [J. Comp. Syst. Sc., 2001], for any ε > 0, no algorithm with running time n^{2-ε} exp(o(k)) can distinguish between graphs of diameter 2 and 3. Our algorithm is elementary and self-contained.
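For intuition, the quantities involved can be computed naively with one BFS per vertex in O(nm) time; the paper's contribution is beating this via treewidth. An editorial reference implementation:

```python
from collections import deque

def eccentricities(adj):
    """ecc(v) = distance from v to the farthest vertex.  The diameter
    is the maximum eccentricity and the radius the minimum."""
    ecc = {}
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        ecc[s] = max(dist.values())
    return ecc
```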
Improved Bounds for Minimal Feedback Vertex Sets in Tournaments
We study feedback vertex sets (FVS) in tournaments, which are orientations of complete graphs. As our main result, we show that any tournament on n nodes has at most 1.5949^n minimal FVS. This significantly improves the previously best upper bound of 1.6667^n by Fomin et al. (STOC 2016). Our new upper bound almost matches the best known lower bound of 21^{n/7} ≈ 1.5448^n, due to Gaspers and Mnich (ESA 2010). In consequence, all minimal FVS of tournaments can be enumerated in time O(1.5949^n).
Improved Algorithms and Combinatorial Bounds for Independent Feedback Vertex Set
In this paper we study the "independent" version of the classic FEEDBACK VERTEX SET problem in the realm of parameterized algorithms and moderately exponential time algorithms. More precisely, we study the INDEPENDENT FEEDBACK VERTEX SET problem, where we are given an undirected graph G on n vertices and a positive integer k, and the objective is to check if there is an independent feedback vertex set of size at most k. A set S ⊆ V(G) is called an independent feedback vertex set (ifvs) if S is an independent set and G - S is a forest. In this paper we design two deterministic exact algorithms for INDEPENDENT FEEDBACK VERTEX SET with running times O*(4.1481^k) and O*(1.5981^n). In fact, the algorithm with O*(1.5981^n) running time finds a smallest-sized ifvs. Both algorithms are based on interesting measures and improve the best known algorithms for the problem in their respective domains. In particular, the algorithm with running time O*(4.1481^k) is an improvement over the previous algorithm that ran in time O*(5^k). On the other hand, the algorithm with running time O*(1.5981^n) is the first moderately exponential time algorithm that improves over the naive algorithm enumerating all subsets of V(G). Additionally, we show that the number of minimal ifvses in any graph on n vertices is upper bounded by 1.7480^n.
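The solution concept is easy to verify with union-find (an illustrative checker, not part of the paper's algorithms):

```python
def is_independent_fvs(n, edges, S):
    """S must be an independent set and G - S a forest."""
    S = set(S)
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    for u, v in edges:
        if u in S and v in S:
            return False       # S is not independent
        if u in S or v in S:
            continue           # edge removed together with S
        ru, rv = find(u), find(v)
        if ru == rv:
            return False       # a cycle survives in G - S
        parent[ru] = rv
    return True
```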
Fine-grained dichotomies for the Tutte plane and Boolean #CSP
Jaeger, Vertigan, and Welsh (1990) proved a dichotomy for the complexity of evaluating the Tutte polynomial at fixed points: the evaluation is #P-hard almost everywhere, and the remaining points admit polynomial-time algorithms. Dell, Husfeldt, and Wahlén (ICALP 2010) and Husfeldt and Taslaman (IPEC 2010), in combination with Curticapean (ICALP 2015), extended the #P-hardness results to tight lower bounds under the counting exponential time hypothesis #ETH, with the exception of the line y = 1, which was left open. We complete the dichotomy theorem for the Tutte polynomial under #ETH by proving that the number of all acyclic subgraphs of a given n-vertex graph cannot be determined in time exp(o(n)) unless #ETH fails. Another dichotomy theorem we strengthen is the one of Creignou and Hermann (1996) for counting the number of satisfying assignments to a constraint satisfaction problem instance over the Boolean domain. We prove that all #P-hard cases are also hard under #ETH. The main ingredient is to prove that the number of independent sets in bipartite graphs with n vertices cannot be computed in time exp(o(n)) unless #ETH fails. In order to prove our results, we use the block interpolation idea of Curticapean (ICALP 2015) and transfer it to systems of linear equations that might not directly correspond to interpolation.
Treedepth parameterized by vertex cover number
To solve hard graph problems from the parameterized perspective, structural parameters have commonly been used instead of natural parameters. In particular, the vertex cover number is frequently used in this context. In this paper, we study the problem of computing the treedepth of a given graph G. We show that there are an O(τ(G)^3)-vertex kernel and an O(4^{τ(G)} τ(G) n)-time fixed-parameter algorithm for this problem, where τ(G) is the size of a minimum vertex cover of G and n is the number of vertices of G.
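Treedepth itself has a simple recursive definition, which the following exponential-time reference implementation follows directly (an editorial illustration only; the paper's parameterized algorithm is the point):

```python
def treedepth(n, edges):
    """td of the empty graph is 0; for a connected graph it is
    1 + min over vertices v of td(G - v); for a disconnected graph
    it is the max over its components."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def components(vertices):
        comps, seen = [], set()
        for s in vertices:
            if s in seen:
                continue
            comp, stack = set(), [s]
            seen.add(s)
            while stack:
                u = stack.pop()
                comp.add(u)
                for w in adj[u]:
                    if w in vertices and w not in seen:
                        seen.add(w)
                        stack.append(w)
            comps.append(frozenset(comp))
        return comps

    def td(vertices):
        if not vertices:
            return 0
        comps = components(vertices)
        if len(comps) > 1:
            return max(td(c) for c in comps)
        return 1 + min(td(vertices - {v}) for v in vertices)

    return td(frozenset(range(n)))
```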
Ground Reachability and Joinability in Linear Term Rewriting Systems are Fixed Parameter Tractable with Respect to Depth
The ground term reachability problem consists in determining whether a given variable-free term t can be transformed into a given variable-free term t' by the application of rules from a term rewriting system R. The joinability problem, on the other hand, consists in determining whether there exists a variable-free term t'' which is reachable both from t and from t'. Both problems have proven to be of fundamental importance for several subfields of computer science. Nevertheless, these problems are undecidable even when restricted to linear term rewriting systems. In this work, we approach reachability and joinability in linear term rewriting systems from the perspective of parameterized complexity theory, and show that these problems are fixed-parameter tractable with respect to the depth of derivations. More precisely, we consider a notion of parallel rewriting, in which an unbounded number of rules can be applied simultaneously to a term as long as these rules do not interfere with each other. A term t1 can reach a term t2 in depth d if t2 can be obtained from t1 by the application of d parallel rewriting steps. Our main result states that for some function f, and for any linear term rewriting system R, one can determine in time f(R, d) · |t1| · |t2| whether a ground term t2 can be reached from a ground term t1 in depth at most d by the application of rules from R. Additionally, one can determine in time f(R, d)^2 · |t1| · |t2| whether there exists a ground term u such that u can be reached from both t1 and t2 in depth at most d. Our algorithms improve exponentially on exhaustive search, which terminates in time 2^{|t1|} · 2^{O(d)} · |t2|, and can be applied with regard to any linear term rewriting system, irrespective of whether the rewriting system in question is terminating or confluent.