Data Mining
Association Analysis: Basic Concepts and Algorithms
Lecture Notes for Chapter 6
Introduction to Data Mining
by
Tan, Steinbach, Kumar
© Tan, Steinbach, Kumar, Introduction to Data Mining, 4/18/2004

Association Rule Mining

Given a set of transactions, find rules that will predict the occurrence of an item based on the occurrences of other items in the transaction.

Market-Basket transactions:

TID | Items
1   | Bread, Milk
2   | Bread, Diaper, Beer, Eggs
3   | Milk, Diaper, Beer, Coke
4   | Bread, Milk, Diaper, Beer
5   | Bread, Milk, Diaper, Coke

Example of Association Rules:
{Diaper} → {Beer},
{Milk, Bread} → {Eggs, Coke},
{Beer, Bread} → {Milk}

Implication means co-occurrence, not causality!

Definition: Frequent Itemset

Itemset
– A collection of one or more items
– Example: {Milk, Bread, Diaper}
– k-itemset: an itemset that contains k items

Support count (σ)
– Frequency of occurrence of an itemset
– E.g. σ({Milk, Bread, Diaper}) = 2

Support (s)
– Fraction of transactions that contain an itemset
– E.g. s({Milk, Bread, Diaper}) = 2/5

Frequent Itemset
– An itemset whose support is greater than or equal to a minsup threshold

TID | Items
1   | Bread, Milk
2   | Bread, Diaper, Beer, Eggs
3   | Milk, Diaper, Beer, Coke
4   | Bread, Milk, Diaper, Beer
5   | Bread, Milk, Diaper, Coke

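A minimal sketch of how support count and support could be computed over a list of transactions (the data and values mirror the example above; the function names are illustrative, not from the notes):

```python
from typing import FrozenSet, List

Transaction = FrozenSet[str]

def support_count(itemset: FrozenSet[str], transactions: List[Transaction]) -> int:
    """sigma(X): number of transactions that contain every item of X."""
    return sum(1 for t in transactions if itemset <= t)

def support(itemset: FrozenSet[str], transactions: List[Transaction]) -> float:
    """s(X): fraction of transactions that contain X."""
    return support_count(itemset, transactions) / len(transactions)

transactions = [
    frozenset({"Bread", "Milk"}),
    frozenset({"Bread", "Diaper", "Beer", "Eggs"}),
    frozenset({"Milk", "Diaper", "Beer", "Coke"}),
    frozenset({"Bread", "Milk", "Diaper", "Beer"}),
    frozenset({"Bread", "Milk", "Diaper", "Coke"}),
]

print(support_count(frozenset({"Milk", "Bread", "Diaper"}), transactions))  # 2
print(support(frozenset({"Milk", "Bread", "Diaper"}), transactions))        # 0.4
```
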
Definition: Association Rule

Association Rule
– An implication expression of the form X → Y, where X and Y are itemsets
– Example: {Milk, Diaper} → {Beer}

Rule Evaluation Metrics
– Support (s): fraction of transactions that contain both X and Y
– Confidence (c): measures how often items in Y appear in transactions that contain X

TID | Items
1   | Bread, Milk
2   | Bread, Diaper, Beer, Eggs
3   | Milk, Diaper, Beer, Coke
4   | Bread, Milk, Diaper, Beer
5   | Bread, Milk, Diaper, Coke

Example: {Milk, Diaper} → {Beer}

s = σ(Milk, Diaper, Beer) / |T| = 2/5 = 0.4
c = σ(Milk, Diaper, Beer) / σ(Milk, Diaper) = 2/3 ≈ 0.67

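A sketch, under the same assumptions as the previous snippet, of how the two rule metrics could be computed (hypothetical helper names):

```python
from typing import FrozenSet, List, Tuple

Transaction = FrozenSet[str]

def support_count(itemset, transactions):
    return sum(1 for t in transactions if itemset <= t)

def rule_metrics(lhs: FrozenSet[str], rhs: FrozenSet[str],
                 transactions: List[Transaction]) -> Tuple[float, float]:
    """Return (support, confidence) of the rule lhs -> rhs."""
    both = support_count(lhs | rhs, transactions)
    s = both / len(transactions)
    c = both / support_count(lhs, transactions)
    return s, c

transactions = [
    frozenset({"Bread", "Milk"}),
    frozenset({"Bread", "Diaper", "Beer", "Eggs"}),
    frozenset({"Milk", "Diaper", "Beer", "Coke"}),
    frozenset({"Bread", "Milk", "Diaper", "Beer"}),
    frozenset({"Bread", "Milk", "Diaper", "Coke"}),
]

print(rule_metrics(frozenset({"Milk", "Diaper"}), frozenset({"Beer"}), transactions))
# (0.4, 0.666...)
```
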
Another Example

[Venn diagram: customers who buy diapers, customers who buy beer, and the customers who buy both]

Transaction ID | Items Bought
2000           | A, B, C
1000           | A, C
4000           | A, D
5000           | B, E, F

Let minimum support = 50% and minimum confidence = 50%; we have
– A → C (support 50%, confidence 66.6%)
– C → A (support 50%, confidence 100%)

Association Rule Mining Task

Given a set of transactions T, the goal of association rule mining is to find all rules having
– support ≥ minsup threshold
– confidence ≥ minconf threshold

Brute-force approach:
– List all possible association rules
– Compute the support and confidence for each rule
– Prune rules that fail the minsup and minconf thresholds
⇒ Computationally prohibitive!

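To see why the brute-force approach is prohibitive, here is a small sketch (not from the notes) that enumerates every candidate rule over the six items of the market-basket example; the count grows exponentially with the number of items:

```python
from itertools import combinations

def all_candidate_rules(items):
    """Enumerate every rule X -> Y with X, Y non-empty and disjoint."""
    items = sorted(items)
    rules = []
    # choose the itemset X ∪ Y, then split it into a non-empty LHS and RHS
    for k in range(2, len(items) + 1):
        for itemset in combinations(items, k):
            for r in range(1, k):
                for lhs in combinations(itemset, r):
                    rhs = tuple(i for i in itemset if i not in lhs)
                    rules.append((lhs, rhs))
    return rules

print(len(all_candidate_rules(["Bread", "Milk", "Diaper", "Beer", "Coke", "Eggs"])))
# 602 candidate rules for just 6 items
```
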
Mining Association Rules

TID | Items
1   | Bread, Milk
2   | Bread, Diaper, Beer, Eggs
3   | Milk, Diaper, Beer, Coke
4   | Bread, Milk, Diaper, Beer
5   | Bread, Milk, Diaper, Coke

Example of Rules:
{Milk, Diaper} → {Beer}  (s=0.4, c=0.67)
{Milk, Beer} → {Diaper}  (s=0.4, c=1.0)
{Diaper, Beer} → {Milk}  (s=0.4, c=0.67)
{Beer} → {Milk, Diaper}  (s=0.4, c=0.67)
{Diaper} → {Milk, Beer}  (s=0.4, c=0.5)
{Milk} → {Diaper, Beer}  (s=0.4, c=0.5)

Observations:
• All the above rules are binary partitions of the same itemset: {Milk, Diaper, Beer}
• Rules originating from the same itemset have identical support but can have different confidence
• Thus, we may decouple the support and confidence requirements

Mining Association Rules

Two-step approach:
1. Frequent Itemset Generation
   – Generate all itemsets whose support ≥ minsup
2. Rule Generation
   – Generate high-confidence rules from each frequent itemset, where each rule is a binary partitioning of a frequent itemset

Frequent itemset generation is still computationally expensive.

Frequent Itemset Generation

[Itemset lattice over items A, B, C, D, E: from the null set at the top, through every 1-itemset, 2-itemset, 3-itemset, and 4-itemset, down to ABCDE at the bottom]

Given d items, there are 2^d possible candidate itemsets.

Frequent Itemset Generation

Brute-force approach:
– Each itemset in the lattice is a candidate frequent itemset
– Count the support of each candidate by scanning the database, matching each transaction against every candidate
– Complexity ~ O(NMw), where N is the number of transactions, M the number of candidates, and w the transaction width => expensive since M = 2^d !!!

Transactions (N):
TID | Items
1   | Bread, Milk
2   | Bread, Diaper, Beer, Eggs
3   | Milk, Diaper, Beer, Coke
4   | Bread, Milk, Diaper, Beer
5   | Bread, Milk, Diaper, Coke

Frequent Itemset Generation Strategies

Reduce the number of candidates (M)
– Complete search: M = 2^d
– Use pruning techniques to reduce M

Reduce the number of transactions (N)
– Reduce the size of N as the size of the itemset increases

Reduce the number of comparisons (NM)
– Use efficient data structures to store the candidates or transactions
– No need to match every candidate against every transaction

Reducing Number of Candidates

Apriori principle:
– If an itemset is frequent, then all of its subsets must also be frequent

The Apriori principle holds due to the following property of the support measure:

∀X, Y : (X ⊆ Y) ⟹ s(X) ≥ s(Y)

– Support of an itemset never exceeds the support of its subsets
– This is known as the anti-monotone property of support

Illustrating Apriori Principle

[Itemset lattice over A, B, C, D, E: once an itemset (AB in the figure) is found to be infrequent, all of its supersets are pruned from the search space]

Illustrating Apriori Principle

Minimum Support = 3

Items (1-itemsets):
Item   | Count
Bread  | 4
Coke   | 2
Milk   | 4
Beer   | 3
Diaper | 4
Eggs   | 1

Pairs (2-itemsets):
(No need to generate candidates involving Coke or Eggs)
Itemset         | Count
{Bread, Milk}   | 3
{Bread, Beer}   | 2
{Bread, Diaper} | 3
{Milk, Beer}    | 2
{Milk, Diaper}  | 3
{Beer, Diaper}  | 3

Triplets (3-itemsets):
Itemset               | Count
{Bread, Milk, Diaper} | 3

If every subset is considered: 6C1 + 6C2 + 6C3 = 41
With support-based pruning: 6 + 6 + 1 = 13

Apriori Algorithm

Method:
– Let k = 1
– Generate frequent itemsets of length 1
– Repeat until no new frequent itemsets are identified
  • Generate length (k+1) candidate itemsets from length-k frequent itemsets if their first k-1 items are identical
  • Prune candidate itemsets containing subsets of length k that are infrequent
  • Count the support of each candidate by scanning the DB
  • Eliminate candidates that are infrequent, leaving only those that are frequent

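A compact sketch of this loop (my own illustrative implementation, not the book's pseudocode), applied to the market-basket data used earlier:

```python
from itertools import combinations

def apriori(transactions, minsup_count):
    """Return {frozenset: support count} for all frequent itemsets."""
    transactions = [frozenset(t) for t in transactions]
    # k = 1: count individual items
    counts = {}
    for t in transactions:
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    frequent = {s: c for s, c in counts.items() if c >= minsup_count}
    all_frequent = dict(frequent)
    k = 1
    while frequent:
        # candidate generation: merge frequent k-itemsets sharing their first k-1 items
        prev = sorted(tuple(sorted(s)) for s in frequent)
        candidates = set()
        for i in range(len(prev)):
            for j in range(i + 1, len(prev)):
                if prev[i][:k - 1] == prev[j][:k - 1]:
                    cand = frozenset(prev[i]) | frozenset(prev[j])
                    if len(cand) == k + 1:
                        # prune candidates that have an infrequent k-subset
                        if all(frozenset(sub) in frequent
                               for sub in combinations(cand, k)):
                            candidates.add(cand)
        # support counting by scanning the database
        counts = {c: 0 for c in candidates}
        for t in transactions:
            for c in candidates:
                if c <= t:
                    counts[c] += 1
        frequent = {s: c for s, c in counts.items() if c >= minsup_count}
        all_frequent.update(frequent)
        k += 1
    return all_frequent

transactions = [
    {"Bread", "Milk"},
    {"Bread", "Diaper", "Beer", "Eggs"},
    {"Milk", "Diaper", "Beer", "Coke"},
    {"Bread", "Milk", "Diaper", "Beer"},
    {"Bread", "Milk", "Diaper", "Coke"},
]
for itemset, count in sorted(apriori(transactions, 3).items(),
                             key=lambda x: (len(x[0]), sorted(x[0]))):
    print(sorted(itemset), count)
```
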
The Apriori Algorithm — Example

Database D:
TID | Items
100 | 1 3 4
200 | 2 3 5
300 | 1 2 3 5
400 | 2 5

Scan D → C1:
itemset | sup
{1}     | 2
{2}     | 3
{3}     | 3
{4}     | 1
{5}     | 3

L1:
itemset | sup
{1}     | 2
{2}     | 3
{3}     | 3
{5}     | 3

C2 (generated from L1):
{1 2}, {1 3}, {1 5}, {2 3}, {2 5}, {3 5}

Scan D → C2 with counts:
itemset | sup
{1 2}   | 1
{1 3}   | 2
{1 5}   | 1
{2 3}   | 2
{2 5}   | 3
{3 5}   | 2

L2:
itemset | sup
{1 3}   | 2
{2 3}   | 2
{2 5}   | 3
{3 5}   | 2

C3:
{2 3 5}

Scan D → L3:
itemset | sup
{2 3 5} | 2

Reducing Number of Comparisons

Candidate counting:
– Scan the database of transactions to determine the support of each candidate itemset
– To reduce the number of comparisons, store the candidates in a hash structure
– Instead of matching each transaction against every candidate, match it against the candidates contained in the hashed buckets

Transactions (N):
TID | Items
1   | Bread, Milk
2   | Bread, Diaper, Beer, Eggs
3   | Milk, Diaper, Beer, Coke
4   | Bread, Milk, Diaper, Beer
5   | Bread, Milk, Diaper, Coke

[Hash structure: the candidate itemsets of length k are distributed over buckets; each transaction is probed against the structure]

Generate Hash Tree

Suppose you have 15 candidate itemsets of length 3:
{1 4 5}, {1 2 4}, {4 5 7}, {1 2 5}, {4 5 8}, {1 5 9}, {1 3 6}, {2 3 4}, {5 6 7}, {3 4 5},
{3 5 6}, {3 5 7}, {6 8 9}, {3 6 7}, {3 6 8}

You need:
• A hash function
• A max leaf size: the maximum number of itemsets stored in a leaf node (if the number of candidate itemsets exceeds the max leaf size, split the node)

Hash function: items 1, 4, 7 hash to one branch; 2, 5, 8 to another; 3, 6, 9 to the third.

[Resulting candidate hash tree: the 15 itemsets are distributed among the leaf nodes according to the hash function applied to successive items]

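The full hash tree hashes on a different item at each depth; as a simplified, single-level sketch of the same idea (my own illustration, not the tree in the figure), the candidates can be bucketed by the hash of their first item so that a transaction is only compared against buckets reachable from its own items:

```python
from collections import defaultdict

def h(item: int) -> int:
    """Hash function from the slide: 1,4,7 -> branch 0; 2,5,8 -> branch 1; 3,6,9 -> branch 2."""
    return (item - 1) % 3

candidates = [(1,4,5), (1,2,4), (4,5,7), (1,2,5), (4,5,8), (1,5,9), (1,3,6), (2,3,4),
              (5,6,7), (3,4,5), (3,5,6), (3,5,7), (6,8,9), (3,6,7), (3,6,8)]

# one-level hash structure: bucket every candidate by the hash of its first (smallest) item
buckets = defaultdict(list)
for cand in candidates:
    buckets[h(cand[0])].append(cand)

def matching_candidates(transaction, buckets):
    """Probe only the buckets reachable from the transaction's own items."""
    t = set(transaction)
    matched = set()
    for item in t:
        for cand in buckets[h(item)]:
            if set(cand) <= t:
                matched.add(cand)
    return matched

print(matching_candidates({1, 2, 3, 5, 6}, buckets))
# the candidates contained in the transaction: (1,2,5), (1,3,6), (3,5,6)
```
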
Association Rule Discovery: Hash tree

Hash Function: 1, 4, 7 | 2, 5, 8 | 3, 6, 9

Hash on 1, 4 or 7

[Candidate hash tree holding the 15 candidate 3-itemsets; hashing on 1, 4 or 7 at the root selects the corresponding subtree]

Association Rule Discovery: Hash tree

Hash Function: 1, 4, 7 | 2, 5, 8 | 3, 6, 9

Hash on 2, 5 or 8

[Candidate hash tree as before; hashing on 2, 5 or 8 at the root selects the corresponding subtree]

Association Rule Discovery: Hash tree

Hash Function: 1, 4, 7 | 2, 5, 8 | 3, 6, 9

Hash on 3, 6 or 9

[Candidate hash tree as before; hashing on 3, 6 or 9 at the root selects the corresponding subtree]

Subset Operation

Given a transaction t = {1, 2, 3, 5, 6}, what are the possible subsets of size 3?

Level 1 (fix the first item): 1 (remaining 2 3 5 6), 2 (remaining 3 5 6), 3 (remaining 5 6)
Level 2 (fix the second item): 12 (3 5 6), 13 (5 6), 15 (6), 23 (5 6), 25 (6), 35 (6)
Level 3 (subsets of 3 items): 123, 125, 126, 135, 136, 156, 235, 236, 256, 356

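The same ten subsets can be enumerated directly with the standard library (a quick sketch; the level-by-level expansion above is the order the hash tree exploits):

```python
from itertools import combinations

t = (1, 2, 3, 5, 6)
subsets = list(combinations(t, 3))
print(len(subsets))   # C(5, 3) = 10
print(subsets)        # (1,2,3), (1,2,5), (1,2,6), (1,3,5), ..., (3,5,6)
```
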
Subset Operation Using Hash Tree

Hash Function: 1, 4, 7 | 2, 5, 8 | 3, 6, 9

Transaction: 1 2 3 5 6
At the root, split on the first item: 1 + {2 3 5 6}, 2 + {3 5 6}, 3 + {5 6}; each leading item is hashed to choose the subtree to descend into.

[Candidate hash tree as before]

Subset Operation Using Hash Tree

Hash Function: 1, 4, 7 | 2, 5, 8 | 3, 6, 9

Transaction: 1 2 3 5 6
One level down, the branch 1 + {2 3 5 6} is split further into 12 + {3 5 6}, 13 + {5 6}, 15 + {6}, again hashing the new item to pick the next subtree.

[Candidate hash tree as before]

Subset Operation Using Hash Tree

Hash Function: 1, 4, 7 | 2, 5, 8 | 3, 6, 9

Transaction: 1 2 3 5 6
Following every reachable path down to the leaves, the transaction is matched against 11 out of 15 candidates instead of all 15.

[Candidate hash tree with the visited leaves highlighted]

Factors Affecting Complexity

Choice of minimum support threshold
– Lowering the support threshold results in more frequent itemsets
– This may increase the number of candidates and the max length of frequent itemsets

Dimensionality (number of items) of the data set
– More space is needed to store the support count of each item
– If the number of frequent items also increases, both computation and I/O costs may also increase

Size of database
– Since Apriori makes multiple passes, the run time of the algorithm may increase with the number of transactions

Average transaction width
– Transaction width increases with denser data sets
– This may increase the max length of frequent itemsets and traversals of the hash tree (the number of subsets in a transaction increases with its width)

Rule Generation

Given a frequent itemset L, find all non-empty subsets f ⊂ L such that f → L – f satisfies the minimum confidence requirement.

– If {A,B,C,D} is a frequent itemset, the candidate rules are:
  ABC → D,  ABD → C,  ACD → B,  BCD → A,
  A → BCD,  B → ACD,  C → ABD,  D → ABC,
  AB → CD,  AC → BD,  AD → BC,
  BC → AD,  BD → AC,  CD → AB

If |L| = k, then there are 2^k – 2 candidate association rules (ignoring L → ∅ and ∅ → L).

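A minimal sketch (illustrative names, not a library call) that enumerates those 2^k – 2 candidate rules from one frequent itemset:

```python
from itertools import combinations

def candidate_rules(frequent_itemset):
    """All rules f -> (L - f) with f a non-empty proper subset of L."""
    L = sorted(frequent_itemset)
    rules = []
    for r in range(1, len(L)):                 # LHS sizes 1 .. k-1
        for lhs in combinations(L, r):
            rhs = tuple(x for x in L if x not in lhs)
            rules.append((lhs, rhs))
    return rules

rules = candidate_rules({"A", "B", "C", "D"})
print(len(rules))   # 2**4 - 2 = 14 candidate rules
```
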
Rule Generation

How to efficiently generate rules from frequent itemsets?
– In general, confidence does not have an anti-monotone property:
  c(ABC → D) can be larger or smaller than c(AB → D)
– But the confidence of rules generated from the same itemset has an anti-monotone property
– E.g., L = {A,B,C,D}:
  c(ABC → D) ≥ c(AB → CD) ≥ c(A → BCD)
– Confidence is anti-monotone w.r.t. the number of items on the RHS of the rule

Rule Generation for Apriori Algorithm

[Lattice of rules generated from the frequent itemset {A,B,C,D}, from ABCD=>{ } at the top down to A=>BCD, B=>ACD, C=>ABD, D=>ABC at the bottom. If CD=>AB is found to be a low-confidence rule, the rules below it in the lattice (those with even larger consequents, such as D=>ABC and C=>ABD) are pruned.]

Rule Generation for Apriori Algorithm

A candidate rule is generated by merging two rules that share the same prefix in the rule consequent.

– join(CD=>AB, BD=>AC) would produce the candidate rule D=>ABC
– Prune rule D=>ABC if its subset rule AD=>BC does not have high confidence

Effect of Support Distribution

Many real data sets have a skewed support distribution.

[Plot: support distribution of a retail data set]

Effect of Support Distribution

How to set the appropriate minsup threshold?
– If minsup is set too high, we could miss itemsets involving interesting rare items (e.g., expensive products)
– If minsup is set too low, mining becomes computationally expensive and the number of itemsets is very large

Using a single minimum support threshold may not be effective.

Solution: use multiple minimum supports: a larger minsup for frequent items (e.g., milk, bread) and a smaller minsup for rare items (e.g., diamonds).

Compact Representation of Frequent Itemsets

Some itemsets are redundant because they have identical support as their supersets.

Example: 15 transactions over 30 items A1–A10, B1–B10, C1–C10:

TID 1–5   | A1, A2, ..., A10 (all B and C items are 0)
TID 6–10  | B1, B2, ..., B10 (all A and C items are 0)
TID 11–15 | C1, C2, ..., C10 (all A and B items are 0)

Number of frequent itemsets = 3 × Σ_{k=1}^{10} C(10, k)

⇒ Need a compact representation

Maximal Frequent Itemset

An itemset is maximal frequent if none of its immediate supersets is frequent.

[Itemset lattice over A, B, C, D, E with a border separating the frequent itemsets (above) from the infrequent itemsets (below); the maximal frequent itemsets are the frequent itemsets lying directly on the border]

Closed Itemset

An itemset is closed if none of its immediate supersets has the same support as the itemset.

TID | Items
1   | {A,B}
2   | {B,C,D}
3   | {A,B,C,D}
4   | {A,B,D}
5   | {A,B,C,D}

Itemset | Support
{A}     | 4
{B}     | 5
{C}     | 3
{D}     | 4
{A,B}   | 4
{A,C}   | 2
{A,D}   | 3
{B,C}   | 3
{B,D}   | 4
{C,D}   | 3

Itemset   | Support
{A,B,C}   | 2
{A,B,D}   | 3
{A,C,D}   | 2
{B,C,D}   | 3
{A,B,C,D} | 2

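A small sketch (my own helper functions, not from the notes) that, given supports for every itemset, flags which are closed and which are maximal; it uses the example table above:

```python
from itertools import combinations

transactions = [{"A","B"}, {"B","C","D"}, {"A","B","C","D"}, {"A","B","D"}, {"A","B","C","D"}]
items = sorted(set().union(*transactions))

# support of every non-empty itemset (fine for a toy example; exponential in general)
support = {}
for k in range(1, len(items) + 1):
    for combo in combinations(items, k):
        fs = frozenset(combo)
        support[fs] = sum(1 for t in transactions if fs <= t)

def is_closed(s):
    """Closed: no immediate superset has the same support."""
    return all(support[s | {x}] < support[s] for x in items if x not in s)

def is_maximal(s, minsup):
    """Maximal frequent: frequent, and no immediate superset is frequent."""
    return support[s] >= minsup and all(support[s | {x}] < minsup for x in items if x not in s)

minsup = 2
closed = [s for s in support if is_closed(s)]
maximal = [s for s in support if is_maximal(s, minsup)]
print([sorted(s) for s in closed])    # {B}, {A,B}, {B,D}, {A,B,D}, {B,C,D}, {A,B,C,D}
print([sorted(s) for s in maximal])   # {A,B,C,D} is the only maximal frequent itemset here
```
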
Maximal vs Closed Itemsets

TID | Items
1   | ABC
2   | ABCD
3   | BCE
4   | ACDE
5   | DE

[Itemset lattice over A–E annotated with the transaction ids containing each itemset, e.g., A: 1,2,4; B: 1,2,3; C: 1,2,3,4; D: 2,4,5; E: 3,4,5. Itemsets contained in no transaction are marked as not supported by any transactions.]

Maximal vs Closed Frequent Itemsets

Minimum support = 2

[The same tid-annotated lattice, with each frequent itemset marked as "closed but not maximal" or "closed and maximal"]

# Closed = 9
# Maximal = 4

Maximal vs Closed Itemsets

[Venn diagram: Maximal Frequent Itemsets ⊆ Closed Frequent Itemsets ⊆ Frequent Itemsets]

Alternative Methods for Frequent Itemset Generation

Traversal of Itemset Lattice
– General-to-specific vs Specific-to-general

[Three sketches of the itemset lattice from null down to {a1, a2, ..., an}, showing where the frequent itemset border lies under (a) general-to-specific, (b) specific-to-general, and (c) bidirectional traversal]

Alternative Methods for Frequent Itemset Generation

Traversal of Itemset Lattice
– Equivalence Classes

[Itemset lattice over A, B, C, D partitioned into equivalence classes: (a) by prefix (prefix tree), (b) by suffix (suffix tree)]

Alternative Methods for Frequent Itemset Generation

Traversal of Itemset Lattice
– Breadth-first vs Depth-first

[(a) Breadth-first and (b) depth-first traversal of the itemset lattice]

Alternative Methods for Frequent Itemset Generation

Representation of Database
– Horizontal vs vertical data layout

Horizontal Data Layout:
TID | Items
1   | A, B, E
2   | B, C, D
3   | C, E
4   | A, C, D
5   | A, B, C, D
6   | A, E
7   | A, B
8   | A, B, C
9   | A, C, D
10  | B

Vertical Data Layout (TID-lists):
A: 1, 4, 5, 6, 7, 8, 9
B: 1, 2, 5, 7, 8, 10
C: 2, 3, 4, 8, 9
D: 2, 4, 5, 9
E: 1, 3, 6

FP-growth Algorithm

Use a compressed representation of the database in the form of an FP-tree.

Once an FP-tree has been constructed, it uses a recursive divide-and-conquer approach to mine the frequent itemsets.

FP-tree construction

TID | Items
1   | {A,B}
2   | {B,C,D}
3   | {A,C,D,E}
4   | {A,D,E}
5   | {A,B,C}
6   | {A,B,C,D}
7   | {B,C}
8   | {A,B,C}
9   | {A,B,D}
10  | {B,C,E}

After reading TID=1:
null
└─ A:1
   └─ B:1

After reading TID=2:
null
├─ A:1
│  └─ B:1
└─ B:1
   └─ C:1
      └─ D:1

FP-Tree Construction

Transaction Database:
TID | Items
1   | {A,B}
2   | {B,C,D}
3   | {A,C,D,E}
4   | {A,D,E}
5   | {A,B,C}
6   | {A,B,C,D}
7   | {B,C}
8   | {A,B,C}
9   | {A,B,D}
10  | {B,C,E}

Header table: one pointer per item (A, B, C, D, E) into the tree.

[Final FP-tree: the root (null) has children A:7 and B:3. Under A:7: B:5, C:1, and D:1; under A→B:5: C:3 and D:1; under A→B→C:3: D:1; under A→C:1: D:1→E:1; under A→D:1: E:1. Under B:3: C:3; under B→C:3: D:1 and E:1. The header-table pointers chain together all nodes carrying the same item.]

Pointers are used to assist frequent itemset generation.

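A sketch of the construction step shown above (my own minimal implementation: it keeps items in lexicographic order, as the example does, records a simple header table of nodes per item, and omits the mining phase):

```python
class FPNode:
    def __init__(self, item=None, parent=None):
        self.item = item
        self.count = 0
        self.parent = parent
        self.children = {}          # item -> FPNode

def build_fp_tree(transactions):
    """Insert each transaction as a path from the root, sharing common prefixes."""
    root = FPNode()
    header = {}                     # item -> list of nodes holding that item
    for t in transactions:
        node = root
        for item in sorted(t):      # fixed item order, as in the example
            if item not in node.children:
                child = FPNode(item, node)
                node.children[item] = child
                header.setdefault(item, []).append(child)
            node = node.children[item]
            node.count += 1
    return root, header

def dump(node, depth=0):
    for child in node.children.values():
        print("  " * depth + f"{child.item}:{child.count}")
        dump(child, depth + 1)

transactions = [{"A","B"}, {"B","C","D"}, {"A","C","D","E"}, {"A","D","E"},
                {"A","B","C"}, {"A","B","C","D"}, {"B","C"}, {"A","B","C"},
                {"A","B","D"}, {"B","C","E"}]
root, header = build_fp_tree(transactions)
dump(root)                          # reproduces the A:7 / B:3 tree sketched above
```
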
FP-growth

Conditional Pattern Base for D:
P = {(A:1, B:1, C:1),
     (A:1, B:1),
     (A:1, C:1),
     (A:1),
     (B:1, C:1)}

Recursively apply FP-growth on P.

Frequent itemsets found (with sup > 1):
AD, BD, CD, ABD, ACD, BCD

[FP-tree from the previous slide, with the paths ending in a D node highlighted]

Tree Projection

Set enumeration tree:

[Set enumeration tree over items A–E, from null down to ABCDE]

Possible extensions of A:   E(A) = {B, C, D, E}
Possible extensions of ABC: E(ABC) = {D, E}

Tree Projection

Items are listed in lexicographic order.
Each node P stores the following information:
– Itemset for node P
– List of possible lexicographic extensions of P: E(P)
– Pointer to the projected database of its ancestor node
– Bitvector containing information about which transactions in the projected database contain the itemset

Projected Database

Original Database:
TID | Items
1   | {A,B}
2   | {B,C,D}
3   | {A,C,D,E}
4   | {A,D,E}
5   | {A,B,C}
6   | {A,B,C,D}
7   | {B,C}
8   | {A,B,C}
9   | {A,B,D}
10  | {B,C,E}

Projected Database for node A:
TID | Items
1   | {B}
2   | {}
3   | {C,D,E}
4   | {D,E}
5   | {B,C}
6   | {B,C,D}
7   | {}
8   | {B,C}
9   | {B,D}
10  | {}

For each transaction T, the projected transaction at node A is T ∩ E(A); transactions that do not contain A project to the empty set.

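A small sketch of that projection step (hypothetical function name):

```python
def project(transactions, node_itemset, extensions):
    """Projected transaction at a node: T ∩ E(node) for transactions containing
    the node's itemset, and the empty set otherwise."""
    return [(t & extensions) if node_itemset <= t else set() for t in transactions]

db = [{"A","B"}, {"B","C","D"}, {"A","C","D","E"}, {"A","D","E"}, {"A","B","C"},
      {"A","B","C","D"}, {"B","C"}, {"A","B","C"}, {"A","B","D"}, {"B","C","E"}]

E_A = {"B", "C", "D", "E"}          # lexicographic extensions of node A
print(project(db, {"A"}, E_A))
# [{'B'}, set(), {'C','D','E'}, {'D','E'}, {'B','C'}, {'B','C','D'}, set(), {'B','C'}, {'B','D'}, set()]
```
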
ECLAT

For each item, store a list of transaction ids (tids).

Horizontal Data Layout:
TID | Items
1   | A, B, E
2   | B, C, D
3   | C, E
4   | A, C, D
5   | A, B, C, D
6   | A, E
7   | A, B
8   | A, B, C
9   | A, C, D
10  | B

Vertical Data Layout (TID-lists):
A: 1, 4, 5, 6, 7, 8, 9
B: 1, 2, 5, 7, 8, 10
C: 2, 3, 4, 8, 9
D: 2, 4, 5, 9
E: 1, 3, 6

ECLAT

Determine the support of any k-itemset by intersecting the tid-lists of two of its (k-1)-subsets.

A: 1, 4, 5, 6, 7, 8, 9
B: 1, 2, 5, 7, 8, 10
→ AB: 1, 5, 7, 8

3 traversal approaches:
– top-down, bottom-up and hybrid

Advantage: very fast support counting
Disadvantage: intermediate tid-lists may become too large for memory

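A sketch of the tid-list intersection at the core of ECLAT, using Python sets for brevity (real implementations typically use sorted lists or bit vectors):

```python
tidlists = {
    "A": {1, 4, 5, 6, 7, 8, 9},
    "B": {1, 2, 5, 7, 8, 10},
    "C": {2, 3, 4, 8, 9},
    "D": {2, 4, 5, 9},
    "E": {1, 3, 6},
}

def tidlist(itemset, tidlists):
    """Tid-list of an itemset = intersection of its items' tid-lists; support = its length."""
    items = list(itemset)
    result = set(tidlists[items[0]])
    for item in items[1:]:
        result &= tidlists[item]
    return result

ab = tidlist({"A", "B"}, tidlists)
print(sorted(ab), len(ab))   # [1, 5, 7, 8], support = 4
```
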
Pattern Evaluation

Association rule algorithms tend to produce too many rules
– many of them are uninteresting or redundant
– redundant if {A,B,C} → {D} and {A,B} → {D} have the same support & confidence

Interestingness measures can be used to prune/rank the derived patterns.

In the original formulation of association rules, support & confidence are the only measures used, and both are objective interestingness measures.
– An objective interestingness measure is domain independent.

Computing Interestingness Measure

Given a rule X → Y, the information needed to compute rule interestingness can be obtained from a contingency table.

Contingency table for X → Y:

      |  Y  | ¬Y  |
  X   | f11 | f10 | f1+
 ¬X   | f01 | f00 | f0+
      | f+1 | f+0 | |T|

f11: support of X and Y
f10: support of X and ¬Y
f01: support of ¬X and Y
f00: support of ¬X and ¬Y

These counts are used to define various measures: support, confidence, lift, Gini, J-measure, etc.

Drawback of Confidence

       | Coffee | ¬Coffee |
  Tea  |   15   |    5    |  20
 ¬Tea  |   75   |    5    |  80
       |   90   |   10    | 100

Association Rule: Tea → Coffee

Confidence = P(Coffee | Tea) = 0.75
but P(Coffee) = 0.9

⇒ Although confidence is high, the rule is misleading
⇒ P(Coffee | ¬Tea) = 0.9375

Statistical Independence

Population of 1000 students
– 600 students know how to swim (S)
– 700 students know how to bike (B)
– 420 students know how to swim and bike (S, B)
– P(S∧B) = 420/1000 = 0.42
– P(S) × P(B) = 0.6 × 0.7 = 0.42
– P(S∧B) = P(S) × P(B) => statistical independence
– P(S∧B) > P(S) × P(B) => positively correlated
– P(S∧B) < P(S) × P(B) => negatively correlated

Statistical-based Measures

Measures that take into account statistical dependence:

Lift = P(Y | X) / P(Y)

Interest = P(X, Y) / (P(X) P(Y))

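A quick sketch that computes lift from a 2×2 contingency table (hypothetical helper; the numbers are the Tea/Coffee counts from the next slide):

```python
def lift(f11, f10, f01, f00):
    """Lift = P(Y | X) / P(Y), computed from contingency-table counts."""
    n = f11 + f10 + f01 + f00
    p_y_given_x = f11 / (f11 + f10)
    p_y = (f11 + f01) / n
    return p_y_given_x / p_y

# Tea -> Coffee example: f11=15, f10=5, f01=75, f00=5
print(lift(15, 5, 75, 5))   # ≈ 0.833, i.e. Tea and Coffee are negatively associated
```
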
Example: Lift/Interest

       | Coffee | ¬Coffee |
  Tea  |   15   |    5    |  20
 ¬Tea  |   75   |    5    |  80
       |   90   |   10    | 100

Association Rule: Tea → Coffee

Confidence = P(Coffee | Tea) = 0.75
but P(Coffee) = 0.9

⇒ Lift = 0.75 / 0.9 = 0.8333 (< 1, therefore Tea and Coffee are negatively associated)

Lift and Interest (Another Example)

Example 2:
– X and Y: positively correlated
– X and Z: negatively related
– yet the support and confidence of X=>Z dominate

X: 1 1 1 1 0 0 0 0
Y: 1 1 0 0 0 0 0 0
Z: 0 1 1 1 1 1 1 1

Rule  | Support | Confidence
X=>Y  | 25%     | 50%
X=>Z  | 37.50%  | 75%

Lift and Interest (Another Example)

Interest (lift) = P(A ∧ B) / (P(A) P(B))
– A and B are negatively correlated if the value is less than 1; otherwise A and B are positively correlated

X: 1 1 1 1 0 0 0 0
Y: 1 1 0 0 0 0 0 0
Z: 0 1 1 1 1 1 1 1

Itemset | Support | Interest
X,Y     | 25%     | 2
X,Z     | 37.50%  | 0.9
Y,Z     | 12.50%  | 0.57

Drawback of Lift & Interest

      |  Y  | ¬Y  |
  X   | 10  |  0  |  10
 ¬X   |  0  | 90  |  90
      | 10  | 90  | 100

Lift = 0.1 / (0.1 × 0.1) = 10

      |  Y  | ¬Y  |
  X   | 90  |  0  |  90
 ¬X   |  0  | 10  |  10
      | 90  | 10  | 100

Lift = 0.9 / (0.9 × 0.9) = 1.11

Statistical independence:
If P(X,Y) = P(X) P(Y) => Lift = 1

Constraint-Based Frequent Pattern Mining

Classification of constraints based on their constraint-pushing capabilities:
– Anti-monotonic: if constraint c is violated, further mining along that branch can be terminated
– Monotonic: if c is satisfied, there is no need to check c again
– Convertible: c is neither monotonic nor anti-monotonic, but it can be converted into one of them if the items in the transaction can be properly ordered

Anti-Monotonicity

A constraint C is anti-monotone if, whenever a pattern satisfies C, all of its sub-patterns do so too.
In other words, anti-monotonicity: if an itemset S violates the constraint, so does any of its supersets.

TDB (min_sup = 2)
TID | Transaction
10  | a, b, c, d, f
20  | b, c, d, f, g, h
30  | a, c, d, e, f
40  | c, e, f, g

Item | Price | Profit
a    | 60    | 40
b    | 20    | 0
c    | 80    | -20
d    | 30    | 10
e    | 70    | -30
f    | 100   | 30
g    | 50    | 20
h    | 40    | -10

Ex. 1. sum(S.price) ≤ v is anti-monotone
Ex. 2. range(S.profit) ≤ 15 is anti-monotone
– Itemset ab violates C
– So does every superset of ab
Ex. 3. sum(S.price) ≥ v is not anti-monotone
Ex. 4. support count is anti-monotone: the core property used in Apriori

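A small sketch (my own toy illustration, using the profit table above) of pushing an anti-monotone constraint into itemset growth: once an itemset violates the constraint, none of its supersets is explored:

```python
profit = {"a": 40, "b": 0, "c": -20, "d": 10, "e": -30,
          "f": 30, "g": 20, "h": -10}

def satisfies_range_constraint(itemset, limit=15):
    """C: range(S.profit) <= limit  (an anti-monotone constraint)."""
    values = [profit[i] for i in itemset]
    return max(values) - min(values) <= limit

def grow(itemset, remaining):
    """Depth-first growth; stop extending as soon as the constraint is violated."""
    if not satisfies_range_constraint(itemset):
        return []                       # prune: every superset also violates C
    results = [itemset]
    for i, item in enumerate(remaining):
        results += grow(itemset + (item,), remaining[i + 1:])
    return results

items = tuple(sorted(profit))
found = []
for i, item in enumerate(items):
    found += grow((item,), items[i + 1:])
print(len(found), "itemsets satisfy range(S.profit) <= 15")
```
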
Monotonicity

A constraint C is monotone if, whenever a pattern satisfies C, we do not need to check C in subsequent mining.
Alternatively, monotonicity: if an itemset S satisfies the constraint, so does any of its supersets.

TDB (min_sup = 2)
TID | Transaction
10  | a, b, c, d, f
20  | b, c, d, f, g, h
30  | a, c, d, e, f
40  | c, e, f, g

Item | Price | Profit
a    | 60    | 40
b    | 20    | 0
c    | 80    | -20
d    | 30    | 10
e    | 70    | -30
f    | 100   | 30
g    | 50    | 20
h    | 40    | -10

Ex. 1. sum(S.price) ≥ v is monotone
Ex. 2. min(S.price) ≤ v is monotone
Ex. 3. C: range(S.profit) ≥ 15
– Itemset ab satisfies C
– So does every superset of ab

Converting “Tough” Constraints

Convert tough constraints into anti-monotone or monotone ones by properly ordering the items.

Examine C: avg(S.profit) ≥ 25
– Order items in value-descending order: <a, f, g, d, b, h, c, e>
– If an itemset afb violates C, so does afbh and any other extension of afb
– C becomes anti-monotone!

TDB (min_sup = 2)
TID | Transaction
10  | a, b, c, d, f
20  | b, c, d, f, g, h
30  | a, c, d, e, f
40  | c, e, f, g

Item | Profit
a    | 40
b    | 0
c    | -20
d    | 10
e    | -30
f    | 30
g    | 20
h    | -10

Strongly Convertible Constraints

avg(X) ≥ 25 is convertible anti-monotone w.r.t. item value descending order R: <a, f, g, d, b, h, c, e>
– If an itemset af violates a constraint C, so does every itemset with af as a prefix, such as afd

avg(X) ≥ 25 is convertible monotone w.r.t. item value ascending order R⁻¹: <e, c, h, b, d, g, f, a>
– If an itemset d satisfies a constraint C, so do the itemsets df and dfa, which have d as a prefix

Thus, avg(X) ≥ 25 is strongly convertible.

Item | Profit
a    | 40
b    | 0
c    | -20
d    | 10
e    | -30
f    | 30
g    | 20
h    | -10
