Redundancy, Deduction Schemes, and Minimum-Size Bases for Association Rules
Association rules are among the most widely employed data analysis methods in the field of Data Mining. An association rule is a form of partial implication between two sets of binary variables. In the most common approach, association rules are parametrized by a lower bound on their confidence, which is the empirical conditional probability of their consequent given the antecedent, and/or by some other parameter bounds such as “support” or deviation from independence. We study here notions of redundancy among association rules from a fundamental perspective. We see each transaction in a dataset as an interpretation (or model) in the propositional logic sense, and consider existing notions of redundancy, that is, of logical entailment, among association rules, of the form “any dataset in which this first rule holds must obey also that second rule, therefore the second is redundant”. We discuss several existing alternative definitions of redundancy between association rules and provide new characterizations and relationships among them. We show that the main alternatives we discuss correspond actually to just two variants, which differ in the treatment of full-confidence implications. For each of these two notions of redundancy, we provide a sound and complete deduction calculus, and we show how to construct complete bases (that is, axiomatizations) of absolutely minimum size in terms of the number of rules. We explore finally an approach to redundancy with respect to several association rules, and fully characterize its simplest case of two partial premises.
Key words and phrases: Data mining, association rules, implications, redundancy, deductive calculus, optimum bases
The relatively recent discipline of Data Mining involves a wide spectrum of techniques, inherited from different origins such as Statistics, Databases, or Machine Learning. Among them, Association Rule Mining is a prominent conceptual tool and, possibly, a cornerstone notion of the field, if there is one. Currently, the amount of available knowledge regarding association rules has grown to the extent that the tasks of creating complete surveys and websites that maintain pointers to related literature become daunting. A survey, with plenty of references, is [CegRod], and additional materials are available in [HahslerWeb]; see also [AIS], [AMSTV], [Freitas], [PasBas], [Zaki], [ZO], and the references and discussions in their introductory sections.
Given an agreed general set of “items”, association rules are defined with respect to a dataset that consists of “transactions”, each of which is, essentially, a set of items. Association rules are customarily written as $X\to Y$, for sets of items $X$ and $Y$, and they hold in the given dataset with a specific “confidence” quantifying how often $Y$ appears among the transactions in which $X$ appears.
A close relative of the notion of association rule, namely, that of exact implication in the standard propositional logic framework, or, equivalently, association rule that holds in 100% of the cases, has been studied in several guises. Exact implications are equivalent to conjunctions of definite Horn clauses: the fact, well-known in logic and knowledge representation, that Horn theories are exactly those closed under bitwise intersection of propositional models leads to a strong connection with Closure Spaces, which are characterized by closure under intersection (see the discussions in [DP] or [KR]). Implications are also very closely related to functional dependencies in databases. Indeed, implications, as well as functional dependencies, enjoy analogous, clear, robust, hardly disputable notions of redundancy that can be defined equivalently both in semantic terms and through the same syntactic calculus. Specifically, for the semantic notion of entailment, an implication $X\to Y$ is entailed from a set $\mathcal{B}$ of implications if every dataset in which all the implications of $\mathcal{B}$ hold must also satisfy $X\to Y$; and, syntactically, it is known that this happens if and only if $X\to Y$ is derivable from $\mathcal{B}$ via the Armstrong axiom schemes, namely, Reflexivity ($X\to Y$ for $Y\subseteq X$), Augmentation (if $X\to Y$ and $X'\to Y'$ then $XX'\to YY'$, where juxtaposition denotes union) and Transitivity (if $X\to Y$ and $Y\to Z$ then $X\to Z$).
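For implications, this entailment can be tested mechanically through the classical closure computation: $X\to Y$ follows from a set $\mathcal{B}$ of implications exactly when $Y$ is included in the closure of $X$ under $\mathcal{B}$. The following minimal Python sketch illustrates the idea (it is our own illustration, not part of the paper; all identifiers are ours):

```python
def closure(attrs, implications):
    """Smallest superset of attrs closed under the given implications.

    implications: iterable of (antecedent, consequent) pairs of sets.
    """
    closed = set(attrs)
    changed = True
    while changed:
        changed = False
        for ant, cons in implications:
            if ant <= closed and not cons <= closed:
                closed |= cons
                changed = True
    return closed

def entails(implications, antecedent, consequent):
    """X -> Y is entailed by the implications iff Y lies in the closure of X."""
    return set(consequent) <= closure(antecedent, implications)

B = [({'a'}, {'b'}), ({'b'}, {'c'})]
print(entails(B, {'a'}, {'c'}))  # True: follows by Transitivity
print(entails(B, {'c'}, {'a'}))  # False
```

The quadratic loop suffices for illustration; linear-time closure algorithms for this task are well known in Dependency Theory.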
Also, such studies have provided a number of ways to find implications (or functional dependencies) that hold in a given dataset, and to construct small subsets of a large set of implications, or of functional dependencies, from which the whole set can be derived; in Closure Spaces and in Data Mining these small sets are usually called “bases”, whereas in Dependency Theory they are called “covers”, and they are closely related to deep topics such as hypergraph theory. Associated natural notions of minimality (when no implication can be removed), minimum size, and canonicity of a cover or basis do exist; again it is inappropriate to try to give a complete set of references here, but see, for instance, [DP], [EiterG], [GW], [GD], [GunoEtAl], [KR], [PT], [Wild], [ZO], and the references therein.
However, the fact has been long acknowledged (e.g. already in [Lux]) that, often, it is inappropriate to search only for absolute implications in the analysis of real world datasets. Partial rules are defined in relation to their “confidence”: for a given rule $X\to Y$, the ratio of how often $X$ and $Y$ are seen together to how often $X$ is seen. Many other alternative measures of intensity of implication exist [Garriga], [GH]; we keep our focus on confidence because, besides being among the most common ones, it has a natural interpretation for educated users through its correspondence with the observed conditional probability.
The idea of restricting the exploration for association rules to frequent itemsets, with respect to a support threshold, gave rise to the most widely discussed and applied algorithm, called Apriori [AMSTV], and to an intense research activity. Already with full-confidence implications, the output of an association mining process often consists of large sets of rules, and a well-known difficulty in applied association rule mining lies in that, on large datasets, and for sensible settings of the confidence and support thresholds and other parameters, huge amounts of association rules are often obtained. Therefore, besides the interesting progress in the topic of how to organize and query the rules discovered (see [LiuHsuMa], [LiuHuHsu], [TuLiu]), one research topic that has been worthy of attention is the identification of patterns that indicate redundancy of rules, and ways to avoid that redundancy; and each proposed notion of redundancy opens up a major research problem, namely, to provide a general method for constructing bases of minimum size with respect to that notion of redundancy.
For partial rules, the Armstrong schemes are not valid anymore. Reflexivity does hold, but Transitivity takes a different form that affects the confidence of the rules: if the rule $X\to Y$ (or $X\to XY$, which is equivalent) and the rule $Y\to Z$ both hold with confidence at least $\gamma$, we still know nothing about the confidence of $X\to Z$; even the fact that both $X\to XY$ and $XY\to Z$ hold with confidence at least $\gamma$ only gives us a confidence lower bound of $\gamma^2$ for $X\to Z$ (assuming $\gamma<1$, so that $\gamma^2<\gamma$). Augmentation does not hold at all; indeed, enlarging the antecedent of a rule of confidence at least $\gamma$ may give a rule with much smaller confidence, even zero: think of a case where most of the times $X$ appears it comes with $Z$, but it only comes with $Y$ when $Z$ is not present; then the confidence of $X\to Z$ may be high whereas the confidence of $XY\to Z$ may be null. Similarly, if the confidence of $X\to YZ$ is high, it means that $Y$ and $Z$ appear together in most of the transactions having $X$, whence the confidences of $X\to Y$ and $X\to Z$ are also high; but, with respect to the converse, the fact that both $Y$ and $Z$ appear in fractions at least $\gamma$ of the transactions having $X$ does not inform us that they show up together at a similar ratio of these transactions: only a ratio of $2\gamma-1$ is guaranteed as a lower bound. In fact, if we look only for association rules with singletons as consequents (as in some of the analyses in [AgYu], or in the “basic association rules” of [LiHa], or even in the traditional approach to association rules [AIS] and the useful apriori implementation of Borgelt available on the web [BorgeltApriori]) we are almost certain to lose information. As a consequence of these failures of the Armstrong schemes, the canonical and minimum-size cover construction methods available for implications or functional dependencies are not appropriate for partial association rules.
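The failure of Augmentation is easy to check numerically. The following Python sketch (our own illustration, with ad-hoc identifiers, not part of the paper) builds a small dataset of the kind just described, where the confidence of $X\to Z$ is high but that of $XY\to Z$ is zero:

```python
from fractions import Fraction

def support(ds, itemset):
    """Number of transactions containing the itemset."""
    return sum(1 for t in ds if set(itemset) <= t)

def confidence(ds, x, y):
    """c(X -> Y) = s(XY) / s(X)."""
    s_x = support(ds, x)
    return Fraction(support(ds, set(x) | set(y)), s_x) if s_x else Fraction(1)

# Most transactions with x also carry z; y accompanies x only when z is absent.
dataset = [{'x', 'z'}] * 9 + [{'x', 'y'}]
print(confidence(dataset, {'x'}, {'z'}))       # 9/10: high confidence
print(confidence(dataset, {'x', 'y'}, {'z'}))  # 0: Augmentation fails
```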
On the semantic side, a number of formalizations of the intuition of redundancy among association rules exist in the literature, often with proposals for defining irredundant bases (see [AgYu], [CrisSim], [KryszPAKDD], [Lux], [PasBas], [PhanLuongICDM], [Zaki], the survey [Krysz], and section 6 of the survey [CegRod]). All of these are weaker than the notion that we would consider natural by comparison with implications (of which we start the study in the last section of this paper). We observe here that one may wish to fulfill two different roles with a basis, and that both appear (somewhat mixed) in the literature: as a computer-supported data structure from which confidences and supports of rules are computed (a role for which we use the closures lattice instead) or, in our choice, as a means of providing the user with a smallish set of association rules for examination and, if convenient, posterior enumeration of the rules that follow from each rule in the basis. That is, we will not assume to have available, nor to wish to compute, exact values for the confidence, but only discern whether it stays above a certain user-defined threshold. We compute actual confidences out of the closure lattice only at the time of writing out rules for the user.
This paper focuses mainly on several such notions of redundancy, defined in a rather general way, by resorting to confidence and support inequalities: essentially, a rule is redundant with respect to another if it has at least the same confidence and support as the latter for every dataset. We also discuss variants of this proposal and other existing definitions given in set-theoretic terms. For the most basic notion of redundancy, we provide formal proofs of the so far unstated equivalence among several published proposals, including a syntactic calculus and a formal proof of the fact, also previously unknown, that the existing basis known as the Essential Rules or the Representative Rules ([AgYu], [KryszPAKDD], [PhanLuongICDM]) is of absolutely minimum size.
It is natural to wish further progress in reducing the size of the basis. Our theorems indicate that, in order to reduce further the size without losing information, more powerful notions of redundancy must be deployed. We consider for this role the proposal of handling separately, to a given extent, full-confidence implications from lower-than-1-confidence rules, in order to profit from their very different combinatorics. This separation is present in many constructions of bases for association rules [Lux], [PasBas], [Zaki]. We discuss corresponding notions of redundancy and completeness, and prove new properties of these notions; we give a sound and complete deductive calculus for this redundancy; and we refine the existing basis constructions up to a point where we can prove again that we attain the limit of the redundancy notion.
Next, we discuss yet another potential for strengthening the notion of redundancy. So far, all the notions have just related one partial rule to another, possibly in the presence of full implications. Is it possible to combine two partial rules, of confidence at least $\gamma$, and still obtain a partial rule obeying that confidence level? Whereas the intuition is that these confidences will combine together to yield a confidence lower than $\gamma$, we prove that there is a specific case where a rule of confidence at least $\gamma$ is nontrivially entailed by two of them. We fully characterize this case and obtain from the characterization yet another deduction scheme. We hope that further progress along the notion of a set of partial rules entailing a partial rule will be made in the coming years.
Preliminary versions of the results in sections LABEL:dedplain, LABEL:redundcalculus, LABEL:clocalcsoundcompl, and LABEL:closbasedent have been presented at Discovery Science 2008 [Bal08b]; preliminary versions of the remaining results (except those in section LABEL:suppbound, which are newer and unpublished) have been presented at ECMLPKDD 2008 [Bal08].
Our notation and terminology are quite standard in the Data Mining literature. All our developments take place in the presence of a “universe” set $\mathcal{U}$ of atomic elements called items; their absence or presence in sets of items plays the same role as binary-valued attributes of a relational table. Subsets of $\mathcal{U}$ are called itemsets. A dataset $\mathcal{D}$ is assumed to be given; it consists of transactions, each of which is an itemset labeled by a unique transaction identifier. The identifiers allow us to distinguish among transactions even if they share the same itemset. Upper-case, often subscripted letters from the end of the alphabet, like $X$ or $Y_1$, denote itemsets. Juxtaposition denotes union of itemsets, as in $XY$; and $\subset$ denotes proper subset, whereas $\subseteq$ is used for the usual subset relationship with potential equality.
For a transaction $t$ and an itemset $X$, we write $t\models X$ to denote the fact that $X$ is a subset of the itemset corresponding to $t$, that is, the transaction satisfies the minterm corresponding to $X$ in the propositional logic sense.
From the given dataset $\mathcal{D}$ we obtain a notion of support of an itemset: $s_{\mathcal{D}}(X)$ is the cardinality of the set of transactions that include it, $\{t\in\mathcal{D} \mid t\models X\}$; sometimes, abusing language slightly, we also refer to that set of transactions itself as support. Whenever $\mathcal{D}$ is clear, we drop the subindex: $s(X)$. Observe that $s(X)\geq s(Y)$ whenever $X\subseteq Y$; this is immediate from the definition. Note that many references resort to a normalized notion of support by dividing by the dataset size. We chose not to, but there is no essential issue here. Often, research work in Data Mining assumes that a threshold on the support has been provided and that only sets whose support is above the threshold (then called “frequent”) are to be considered. We will require this additional constraint occasionally for the sake of discussing the applicability of our developments.
We immediately obtain by standard means (see, for instance, [GW] or [Zaki]) a notion of closed itemsets, namely, those that cannot be enlarged while maintaining the same support. The function that maps each itemset to the smallest closed set that contains it is known to be monotonic, extensive, and idempotent, that is, it is a closure operator. This notion will be reviewed in more detail later on. Closed sets whose support is above the support threshold, if given, are usually termed closed frequent sets.
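Under the standard construction, the smallest closed set containing an itemset is the intersection of all transactions that include it (taken to be the whole universe when no transaction does). A small Python sketch of this operator follows (our own illustration; identifiers are ours):

```python
def closure_of(dataset, itemset, universe):
    """Smallest closed set containing itemset: the intersection of all
    transactions that include it (the whole universe if none does)."""
    covering = [t for t in dataset if set(itemset) <= t]
    if not covering:
        return frozenset(universe)
    result = set(universe)
    for t in covering:
        result &= t
    return frozenset(result)

dataset = [{'a', 'b', 'c'}, {'a', 'b'}, {'c'}]
universe = {'a', 'b', 'c', 'd'}
print(sorted(closure_of(dataset, {'a'}, universe)))  # ['a', 'b']
```

Here the closure of $\{a\}$ is $\{a,b\}$: both sets are included in exactly the same two transactions, and no larger set is. One can check on examples that the operator is extensive, monotonic, and idempotent.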
Association rules are pairs of itemsets, denoted as $X\to Y$ for itemsets $X$ and $Y$. Intuitively, they suggest the fact that $Y$ occurs particularly often among the transactions in which $X$ occurs. More precisely, each such rule has a confidence associated: the confidence $c_{\mathcal{D}}(X\to Y) = s_{\mathcal{D}}(XY)/s_{\mathcal{D}}(X)$ of an association rule $X\to Y$ in a dataset $\mathcal{D}$ is the ratio of the support of $XY$ to that of $X$. As with support, often we drop the subindex $\mathcal{D}$. The support in $\mathcal{D}$ of the association rule $X\to Y$ is $s_{\mathcal{D}}(X\to Y) = s_{\mathcal{D}}(XY)$.
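These definitions translate directly into code. The following Python sketch (our own illustration; identifiers are ours) computes $s(X)$ and $c(X\to Y)$, already incorporating the convention, discussed further below, that an undefined confidence is taken as 1:

```python
from fractions import Fraction

def support(dataset, itemset):
    """s(X): number of transactions that include X."""
    return sum(1 for t in dataset if set(itemset) <= t)

def confidence(dataset, x, y):
    """c(X -> Y) = s(XY) / s(X), taken as 1 when s(X) = 0."""
    s_x = support(dataset, x)
    return Fraction(support(dataset, set(x) | set(y)), s_x) if s_x else Fraction(1)

dataset = [{'a', 'b'}, {'a', 'b', 'c'}, {'a'}]
print(support(dataset, {'a', 'b'}))       # 2: the support of the rule a -> b
print(confidence(dataset, {'a'}, {'b'}))  # 2/3
```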
We can switch rather freely between right-hand sides that include the left-hand side and right-hand sides that don’t:
Rules $X_0\to Y_0$ and $X_1\to Y_1$ are equivalent by reflexivity if $X_0=X_1$ and $X_0Y_0=X_1Y_1$.
Clearly, $c(X\to Y)=c(X\to XY)$ and, likewise, $s(X\to Y)=s(X\to XY)$ for any itemsets $X$ and $Y$; that is, the support and confidence of rules that are equivalent by reflexivity always coincide. A minor notational issue that we must point out is that, in some references, the left-hand side of a rule is required to be a subset of the right-hand side, as in [Lux] or [PhanLuongICDM], whereas many others require the left- and right-hand sides of an association rule to be disjoint, such as [Krysz] or the original [AIS]. Both the rules whose left-hand side is a subset of the right-hand side, and the rules that have disjoint sides, may act as canonical representatives for the rules equivalent to them by reflexivity. We state explicitly one version of this immediate fact for later reference:
If rules $X_0\to Y_0$ and $X_1\to Y_1$ are equivalent by reflexivity, $X_0\cap Y_0=\emptyset$, and $X_1\cap Y_1=\emptyset$, then they are the same rule: $X_0=X_1$ and $Y_0=Y_1$.
In general, we do allow, along our development, rules where the left-hand side, or a part of it, appears also at the right-hand side, because by doing so we will be able to simplify the mathematical arguments. We will assume here that, at the time of printing out the rules found, that is, for user-oriented output, the items in the left-hand side are removed from the right-hand side; accordingly, we write our rules sometimes as $X\to Y-X$ to recall this convention.
Also, many references require the right-hand side of an association rule to be nonempty, or even both sides. However, empty sets can be handled with no difficulty and do give meaningful, albeit uninteresting, rules. A partial rule with an empty right-hand side, $X\to\emptyset$, is equivalent by reflexivity to $X\to X$, or to $X\to X'$ for any $X'\subseteq X$, and all of these rules have always confidence 1. A partial rule with empty left-hand side, as employed, for instance, in [Krysz], actually gives the normalized support of the right-hand side as confidence value:
In a dataset $\mathcal{D}$ of $n$ transactions, $c_{\mathcal{D}}(\emptyset\to Y)=s_{\mathcal{D}}(Y)/n$.
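This is immediate to verify on a toy dataset: the empty antecedent is contained in every transaction, so its support is the full dataset size. A quick sketch (our own example, not from the paper):

```python
from fractions import Fraction

def support(ds, itemset):
    return sum(1 for t in ds if set(itemset) <= t)

dataset = [{'a', 'b'}, {'b'}, {'c'}, {'a', 'b', 'c'}]
n = len(dataset)

# c(emptyset -> Y) = s(emptyset u Y) / s(emptyset) = s(Y) / n
c = Fraction(support(dataset, {'b'}), support(dataset, set()))
print(c, c == Fraction(support(dataset, {'b'}), n))  # 3/4 True
```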
Again, these sorts of rules could be omitted from user-oriented output, but considering them conceptually valid simplifies the mathematical development. We also resort to the convention that, if $s(X)=0$ (which implies that $s(XY)=0$ as well) we redefine the undefined confidence $c(X\to Y)$ as 1, since the intuitive expression “all transactions having $X$ do have also $Y$” becomes vacuously true. This convention is irrespective of whether $Y=\emptyset$.
Throughout the paper, “implications” are association rules of confidence 1, whereas “partial rules” are those having a confidence below 1. When the confidence could be 1 or could be less, we say simply “rule”.
Redundancy Notions
We start our analysis from one of the notions of redundancy defined formally in [AgYu]. The notion is employed also, generally with no formal definition, in several papers on association rules, which subsequently formalize and study just some particular cases of redundancy (e.g. [KryszPAKDD], [SaquerDeogun]); thus, we have chosen to qualify this redundancy as “standard”. We propose also a small variation, seemingly less restrictive; we have not found that variant explicitly defined in the literature, but it is quite natural.
([AgYu]) Rule $X_0\to Y_0$ has standard redundancy with respect to $X_1\to Y_1$ if the confidence and support of $X_0\to Y_0$ are larger than or equal to those of $X_1\to Y_1$, in all datasets.
Rule $X_0\to Y_0$ has plain redundancy with respect to $X_1\to Y_1$ if the confidence of $X_0\to Y_0$ is larger than or equal to the confidence of $X_1\to Y_1$, in all datasets.
Generally, we will be interested in applying these definitions only to rules $X_0\to Y_0$ where $Y_0\not\subseteq X_0$ since, otherwise, $c(X_0\to Y_0)=1$ for all datasets and the rule is trivially redundant. We state and prove separately, for later use, the following new technical claim:
Assume that rule $X_0\to Y_0$ is plainly redundant with respect to rule $X_1\to Y_1$, and that $Y_0\not\subseteq X_0$. Then $X_0Y_0\subseteq X_1Y_1$.
Assume $X_0Y_0\not\subseteq X_1Y_1$, to argue the contrapositive. Then, we can consider a dataset consisting of one transaction $X_0$ and, say, $n$ transactions $X_1Y_1$. No transaction includes $X_0Y_0$, therefore $c(X_0\to Y_0)=0$; however, $c(X_1\to Y_1)$ is either 1 or $n/(n+1)$, which can be pushed up as much as desired by simply increasing $n$. Then, plain redundancy does not hold, because it requires $c(X_0\to Y_0)\geq c(X_1\to Y_1)$ to hold for all datasets whereas, for this particular dataset, the inequality fails.
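The dataset constructed in this proof can be materialized directly. The following Python sketch (our own illustration; the concrete itemsets are ours) takes $X_0=\{a\}$, $Y_0=\{d\}$, $X_1=\{a\}$, $Y_1=\{b\}$, so that $X_0Y_0=\{a,d\}$ is not contained in $X_1Y_1=\{a,b\}$:

```python
from fractions import Fraction

def support(ds, itemset):
    return sum(1 for t in ds if set(itemset) <= t)

def confidence(ds, x, y):
    s_x = support(ds, x)
    return Fraction(support(ds, set(x) | set(y)), s_x) if s_x else Fraction(1)

# One transaction X0, then n transactions X1Y1, as in the proof.
n = 1000
dataset = [{'a'}] + [{'a', 'b'}] * n
print(confidence(dataset, {'a'}, {'d'}))  # 0: no transaction includes X0Y0
print(confidence(dataset, {'a'}, {'b'}))  # 1000/1001: pushed toward 1 by n
```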
The first use of this lemma is to show that plain redundancy is not, actually, weaker than standard redundancy.
Consider any two rules $X_0\to Y_0$ and $X_1\to Y_1$ where $Y_0\not\subseteq X_0$. Then $X_0\to Y_0$ has standard redundancy with respect to $X_1\to Y_1$ if and only if $X_0\to Y_0$ has plain redundancy with respect to $X_1\to Y_1$.