Transcript ich.vscht.cz
Last lecture summary
Test-data and Cross Validation
[Figure: testing error and training error as a function of model complexity]
Test set method
• Split the data set into training and test data sets.
  – Common ratio – 70:30 (Train : Test).
• Train the algorithm on the training set, assess its performance on the test set.
• Disadvantages
  – This is simple, however it wastes data.
  – The test-set estimator of performance has high variance.
adopted from Cross Validation tutorial, Andrew Moore http://www.autonlab.org/tutorials/overfit.html
• stratified division – the same proportion of each class in the training and test sets
• Training error cannot be used as an indicator of the model's performance due to overfitting.
• Training data set – train a range of models, or a given model with a range of values for its parameters.
• Compare them on independent data – the Validation set.
  – If the model design is iterated many times, then some overfitting to the validation data can occur, and so it may be necessary to keep aside a third Test set on which the performance of the selected model is finally evaluated.
LOOCV
1. Choose one data point.
2. Remove it from the set.
3. Fit the remaining data points.
4. Note your error using the removed data point as test.
Repeat these steps for all points. When you are done, report the mean squared error (in case of regression); see the sketch below.
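A minimal sketch of this procedure for polynomial regression, assuming NumPy arrays x and y and np.polyfit as the fitting routine (an illustration, not part of the lecture):

```python
import numpy as np

def loocv_mse(x, y, degree=1):
    """Leave-one-out CV: hold out each point in turn, fit the rest,
    and average the squared prediction errors."""
    errors = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i            # remove point i from the set
        coeffs = np.polyfit(x[mask], y[mask], degree)
        pred = np.polyval(coeffs, x[i])          # predict the held-out point
        errors.append((y[i] - pred) ** 2)
    return np.mean(errors)                       # mean squared error
```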
Which kind of Cross Validation?
Test set – Good: cheap. Bad: high variance; wastes data.
LOOCV – Good: doesn't waste data. Bad: expensive.
Can we get the best of both worlds?
adopted from Cross Validation tutorial by Andrew Moore http://www.autonlab.org/tutorials/overfit.html
k-fold cross-validation
1. Randomly break the data into k partitions.
2. Remove one partition from the set.
3. Fit the remaining data points.
4. Note your error using the removed partition as the test data set.
Repeat these steps for all partitions. When you are done, report the mean squared error (in case of regression); a code sketch follows below.
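A corresponding sketch of k-fold cross-validation under the same assumptions (NumPy arrays x and y, np.polyfit as the model):

```python
import numpy as np

def kfold_mse(x, y, k=10, degree=1, seed=0):
    """k-fold CV: randomly break the data into k partitions and hold each out once."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    folds = np.array_split(idx, k)               # k roughly equal partitions
    errors = []
    for fold in folds:
        train = np.setdiff1d(idx, fold)          # all points except the held-out partition
        coeffs = np.polyfit(x[train], y[train], degree)
        pred = np.polyval(coeffs, x[fold])
        errors.append(np.mean((y[fold] - pred) ** 2))
    return np.mean(errors)                       # mean squared error over the k folds
```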
Which kind of Cross Validation?
Test set – Good: cheap. Bad: high variance; wastes data.
LOOCV – Good: doesn't waste data. Bad: expensive.
3-fold – Good: slightly better than the test set. Bad: wastes more data than 10-fold; more expensive than the test set.
10-fold – Good: only wastes 10%; only 10 times more expensive instead of R times. Bad: wastes 10%; 10 times more expensive than the test set (though cheaper than LOOCV's R times).
R-fold is identical to LOOCV (R = number of data points).
Model selection via CV
[Figure: polynomial regression of degree 1–6; training MSE and 10-fold CV MSE plotted against the degree – the degree with the lowest CV error is chosen.]
Selection and testing
• Complete procedure for algorithm selection and estimation of its quality (a code sketch follows below):
  1. Divide the data into Train/Test.
  2. By cross-validation on the Train set choose the algorithm (Train/Validation splits).
  3. Use this algorithm to construct a classifier using the whole Train set.
  4. Estimate its quality on the Test set.
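A sketch of the whole procedure with scikit-learn; the Iris data, k-NN as the candidate algorithm, and the 70:30 split are assumptions made for illustration:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# 1. divide the data into Train/Test (the Test set is not touched until the end)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# 2. choose the algorithm (here: the value of k) by 10-fold CV on the Train set
scores = {k: cross_val_score(KNeighborsClassifier(n_neighbors=k),
                             X_train, y_train, cv=10).mean()
          for k in (1, 3, 5, 15)}
best_k = max(scores, key=scores.get)

# 3. construct the classifier on the whole Train set
model = KNeighborsClassifier(n_neighbors=best_k).fit(X_train, y_train)

# 4. estimate its quality on the Test set
print(best_k, model.score(X_test, y_test))
```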
Nearest Neighbors Classification
• Similarity s_ij is a quantity that reflects the strength of the relationship between two objects or two features.
• Distance d_ij measures dissimilarity – the discrepancy between two objects based on several features.
  – Distance satisfies the following conditions:
    • distance is always positive or zero (d_ij ≥ 0)
    • distance is zero if and only if it is measured from an object to itself
    • distance is symmetric (d_ij = d_ji)
  – In addition, if the distance satisfies the triangle inequality d(x, z) ≤ d(x, y) + d(y, z), then it is called a metric.
Distances for quantitative variables
• Minkowski distance (L_p norm)

  L_p = \left( \sum_{i=1}^{n} |x_i - y_i|^p \right)^{1/p}
• distance matrix – matrix with all pairwise distances
Example (Euclidean distances between four points p1–p4):

        p1      p2      p3      p4
p1       0   2.828   3.162   5.099
p2   2.828       0   1.414   3.162
p3   3.162   1.414       0       2
p4   5.099   3.162       2       0
Manhattan distance

  L_1 = \sum_{i=1}^{n} |x_i - y_i|

Euclidean distance

  L_2 = \sqrt{ \sum_{i=1}^{n} (x_i - y_i)^2 }
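A small NumPy sketch of these distances; the point coordinates below are an assumption chosen so that the Euclidean case reproduces the distance matrix shown earlier:

```python
import numpy as np

def minkowski(a, b, p=2):
    """Minkowski (L_p) distance; p=1 gives Manhattan, p=2 gives Euclidean."""
    return np.sum(np.abs(a - b) ** p) ** (1.0 / p)

points = np.array([[0.0, 2.0], [2.0, 0.0], [3.0, 1.0], [5.0, 1.0]])   # assumed coordinates of p1..p4
n = len(points)
D = np.array([[minkowski(points[i], points[j]) for j in range(n)] for i in range(n)])
print(np.round(D, 3))   # pairwise Euclidean distance matrix: 2.828, 3.162, 5.099, ...
```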
k-NN
• supervised learning
• the target function f may be
  – discrete-valued (classification)
  – real-valued (regression)
• We assign the point to the class of the instance that is most similar to it.
• k-NN is a lazy learner
  – lazy learning – generalization beyond the training data is delayed until a query is made to the system
  – opposed to eager learning – the system tries to generalize the training data before receiving queries
Which k is best?
• k = 1 – fits noise and outliers (overfitting)
• k = 15 – value not too small; smooths out distinctive behavior
Hastie et al., Elements of Statistical Learning
Real-valued target function
• Algorithm calculates the mean value of the k nearest training examples.
Example with k = 3: the three nearest neighbors have values 12, 14, and 10, so the predicted value = (12 + 14 + 10)/3 = 12.
Distance-weighted NN
• Give greater weight to closer neighbors (see the sketch below).
• Example (k = 4; the two neighbors of one class are at distances 1 and 2, the two neighbors of the other class at distances 4 and 5):
  – unweighted: 2 votes vs. 2 votes (a tie)
  – weighted by 1/d²: 1/1² + 1/2² = 1.25 votes vs. 1/4² + 1/5² = 0.102 votes
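A minimal sketch of distance-weighted k-NN voting with 1/d² weights (assumed NumPy inputs; a small constant guards against zero distances):

```python
import numpy as np

def weighted_knn_classify(X_train, y_train, query, k=4):
    """Each of the k nearest neighbours votes with weight 1/d^2 for its own class."""
    d = np.linalg.norm(X_train - query, axis=1)      # distances from the query to all training points
    nearest = np.argsort(d)[:k]                      # indices of the k nearest neighbours
    votes = {}
    for i in nearest:
        w = 1.0 / (d[i] ** 2 + 1e-12)                # weight; epsilon avoids division by zero
        votes[y_train[i]] = votes.get(y_train[i], 0.0) + w
    return max(votes, key=votes.get)                 # class with the largest weighted vote
```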
k-NN issues
• Curse of dimensionality is a problem.
• Significant computation may be required to process each new query.
• To find the nearest neighbors one has to evaluate the full distance matrix.
• Efficient indexing of stored training examples helps – e.g. a kd-tree.
Cluster Analysis (i.e. new stuff)
• We have data, we don't know the classes.
• Assign data objects into groups (called clusters) so that data objects from the same cluster are more similar to each other than to objects from different clusters.
Stages of clustering process
On clustering validation techniques, M. Halkidi, Y. Batistakis, M. Vazirgiannis
How would you solve the problem?
• How to find clusters? Group together the most similar patterns.
Single linkage
based on A Tutorial on Clustering Algorithms http://home.dei.polimi.it/matteucc/Clustering/tutorial_html/hierarchical.html
Road distances between Italian cities:

       BA    FL    MI    NA    RM    TO
BA      0   662   877   255   412   996
FL    662     0   295   468   268   400
MI    877   295     0   754   564   138
NA    255   468   754     0   219   869
RM    412   268   564   219     0   669
TO    996   400   138   869   669     0

The smallest distance is MI–TO (138), so Milano and Torino are merged first. With single linkage the distance from the new cluster MI/TO to each remaining city is the minimum of the two original distances (e.g. MI/TO–BA = min(877, 996) = 877):

          BA    FL   MI/TO    NA    RM
BA         0   662    877    255   412
FL       662     0    295    468   268
MI/TO    877   295      0    754   564
NA       255   468    754      0   219
RM       412   268    564    219     0
The smallest remaining distance is NA–RM (219); Naples and Rome are merged:

          BA    FL   MI/TO   NA/RM
BA         0   662    877     255
FL       662     0    295     268
MI/TO    877   295      0     564
NA/RM    255   268    564       0
Next, BA joins NA/RM (255):

             BA/NA/RM    FL   MI/TO
BA/NA/RM          0     268    564
FL              268       0    295
MI/TO           564     295      0
Then FL joins BA/NA/RM (268):

                BA/FL/NA/RM   MI/TO
BA/FL/NA/RM           0        295
MI/TO               295          0
Dendrogram

Merge order: Torino → Milano (138); Rome → Naples (219) → Bari (255) → Florence (268); finally Torino–Milano joins Rome–Naples–Bari–Florence (295).

[Dendrogram over BA, NA, RM, FL, MI, TO]
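The same example can be reproduced with SciPy's hierarchical clustering; squareform converts the square matrix to the condensed form that linkage expects (a sketch, not part of the lecture):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

cities = ["BA", "FL", "MI", "NA", "RM", "TO"]
D = np.array([[  0, 662, 877, 255, 412, 996],
              [662,   0, 295, 468, 268, 400],
              [877, 295,   0, 754, 564, 138],
              [255, 468, 754,   0, 219, 869],
              [412, 268, 564, 219,   0, 669],
              [996, 400, 138, 869, 669,   0]], dtype=float)

Z = linkage(squareform(D), method="single")
print(Z)   # merge heights 138, 219, 255, 268, 295 -- the same order as in the worked example
```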
Complete linkage
Starting again from the original distance matrix, MI and TO are merged first (138). With complete linkage the distance from MI/TO to each remaining city is the maximum of the two original distances (e.g. MI/TO–BA = max(877, 996) = 996):

          BA    FL   MI/TO    NA    RM
BA         0   662    996    255   412
FL       662     0    400    468   268
MI/TO    996   400      0    869   669
NA       255   468    869      0   219
RM       412   268    669    219     0
After merging NA and RM (219):

          BA    FL   MI/TO   NA/RM
BA         0   662    996     412
FL       662     0    400     468
MI/TO    996   400      0     869
NA/RM    412   468    869       0
Next, FL joins MI/TO (400):

            BA   MI/TO/FL   NA/RM
BA           0      996      412
MI/TO/FL   996        0      869
NA/RM      412      869        0
[Dendrogram for complete linkage vs. dendrogram for single linkage, both over BA, NA, RM, FL, MI, TO]
Average linkage
Starting from the original distance matrix, after merging MI and TO the distance from the cluster MI/TO to BA is the average of the pairwise distances: (996 + 877)/2 = 936.5.
Centroid linkage
Here each cluster is represented by its centroid. After merging MI and TO, the distance from the cluster MI/TO to BA is measured between the centroids, 895 in this example.
Similarity?
Summary
• single linkage (MIN)
• complete linkage (MAX)
• average linkage
• centroids
Ward’s linkage (method)
In Ward's method no metric needs to be chosen; instead, sums of squares (i.e. squared Euclidean distances) between the centroids of clusters are computed.
• Ward's method defines the distance between two clusters, A and B, as how much the total sum of squares will increase when we merge them.
• At the beginning of clustering, the sum of squares starts out at zero (because every point is in its own cluster) and then grows as we merge clusters. Ward's method keeps this growth as small as possible.
Types of clustering
• hierarchical – groups data with a sequence of nested partitions
  – agglomerative – bottom-up: start with each data point as one cluster and join clusters until all points form one cluster
  – divisive – top-down: initially all objects are in one cluster, then the cluster is subdivided into smaller and smaller pieces
• partitional – divides data points into some prespecified number of clusters without the hierarchical structure, i.e. divides the space
Hierarchical clustering
• Agglomerative methods are used more widely.
• Divisive methods need to consider (2^(N−1) − 1) possible subset divisions, which is very computationally intensive.
  – computational difficulty of finding the optimum partitions
• Divisive clustering methods are better at finding large clusters than agglomerative methods.
Hierarchical clustering
• Disadvantages
  – High computational complexity – at least O(N²).
    • Needs to calculate all mutual distances.
  – Inability to adjust once the splitting or merging is performed (no undo).
k-means
• How to avoid computing all mutual distances?
• Calculate distances from representatives (centroids) of clusters.
• Advantage: the number of centroids is much lower than the number of data points.
• Disadvantage: the number of centroids k must be given in advance.
k-means – kids algorithm
• Once there was a land with N houses.
• One day K kings arrived to this land.
• Each house was taken by the nearest king.
• But the community wanted their king to be at the center of the village, so the throne was moved there.
• Then the kings realized that some houses were closer to them now, so they took those houses, but they lost some others.
• This went on and on… until one day they couldn't move anymore, so they settled down and lived happily ever after in their villages.
k-means – adults algorithm
1. Decide on the number of clusters k.
2. Randomly initialize k centroids.
3. Repeat until convergence (centroids do not move):
   – assign each point to the cluster represented by the centroid it is nearest to
   – move each centroid to the position given as the mean of all points in the cluster
A sketch of the algorithm in code follows below.
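A plain NumPy sketch of this loop (an illustration, not an optimized implementation):

```python
import numpy as np

def kmeans(X, k, max_iter=100, seed=0):
    """Plain k-means: random initial centroids, then alternate assignment and update."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]     # random initialization
    for _ in range(max_iter):
        # assign each point to the nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of the points assigned to it
        new_centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                  else centroids[j] for j in range(k)])
        if np.allclose(new_centroids, centroids):                # converged: centroids do not move
            break
        centroids = new_centroids
    return labels, centroids
```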
k-means applet
http://www.kovan.ceng.metu.edu.tr/~maya/kmeans/index.html
• Disadvantages:
  – k must be determined in advance.
  – Sensitive to initial conditions. The algorithm minimizes the following "energy" function, but may be trapped in a local minimum (μ_l is the centroid of cluster X_l):

    J = \sum_{l=1}^{k} \sum_{x_i \in X_l} || x_i - \mu_l ||^2

  – Applicable only when the mean is defined; what about categorical data? E.g. replace the mean with the mode (k-modes).
  – The arithmetic mean is not robust to outliers (use the median instead – k-medoids).
  – Clusters are spherical because the algorithm is based on distance.
Cluster validation
• How many clusters are there in the data set?
• Does the resulting clustering scheme fit our data set?
• Is there a better partitioning for the data set?
• The quantitative evaluations of the clustering results are known under the general term cluster validity methods.
• external – evaluate clustering results based on the knowledge of the correct class labels
• internal – no correct class labels are available; the quality estimate is based on the information intrinsic to the data alone
• relative – several different clusterings of one data set, obtained by the same clustering algorithm with different parameters, are compared
External validation measures
• A partitioning can be evaluated with regard to both
  – purity – the fraction of a cluster taken up by its predominant class label
  – completeness – the fraction of items of this predominant class that is grouped in the cluster at hand
• It is important to take both into account.
• binary measures – based on the contingency table of the pairwise assignment of data items
  – we have two divisions of the data set S = {S_1, …, S_N} of N objects:
    • the original (i.e. known) division C = {C_1, …, C_n}
    • the generated (i.e. after clustering) division P = {P_1, …, P_m}
  – each pair of objects falls into one of four cases:
    • (a) same cluster in C, same cluster in P
    • (b) same cluster in C, different clusters in P
    • (c) different clusters in C, same cluster in P
    • (d) different clusters in C, different clusters in P

Rand index

  R = \frac{a + d}{a + b + c + d}

R ∈ ⟨0, 1⟩ and should be maximized (a small code sketch follows).
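A direct sketch of this definition in Python (pairwise counting; fine for small N, quadratic in the number of objects):

```python
from itertools import combinations

def rand_index(labels_true, labels_pred):
    """Rand index: the fraction of object pairs on which the two partitions agree."""
    a = b = c = d = 0
    for i, j in combinations(range(len(labels_true)), 2):
        same_c = labels_true[i] == labels_true[j]
        same_p = labels_pred[i] == labels_pred[j]
        if same_c and same_p:
            a += 1          # same cluster in both C and P
        elif same_c:
            b += 1          # same in C, different in P
        elif same_p:
            c += 1          # different in C, same in P
        else:
            d += 1          # different in both
    return (a + d) / (a + b + c + d)
```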
Rand index (example)

Data assigned to:

                                     Same cluster in ground truth C    Different clusters in ground truth C
Same cluster in clustering P                       20                                    24
Different clusters in clustering P                 20                                    72

Rand index = (20 + 72) / (20 + 24 + 20 + 72) = 92/136 = 0.68
Cophenetic correlation coefficient
• How good is the hierarchical clustering that we just performed?
• In other words, how faithfully does the dendrogram preserve the pairwise distances between the original data points?
• To calculate it we need two matrices: the distance matrix and the cophenetic matrix.
Single-linkage merges: TO → MI (138); RM → NA (219) → BA (255) → FL (268); finally TO–MI joins RM–NA–BA–FL (295).

Distance matrix:

       BA    FL    MI    NA    RM    TO
BA      0   662   877   255   412   996
FL    662     0   295   468   268   400
MI    877   295     0   754   564   138
NA    255   468   754     0   219   869
RM    412   268   564   219     0   669
TO    996   400   138   869   669     0

Cophenetic matrix (for each pair of cities, the dendrogram height at which they first appear in the same cluster):

       BA    FL    MI    NA    RM    TO
BA      0   268   295   255   255   295
FL    268     0   295   268   268   295
MI    295   295     0   295   295   138
NA    255   268   295     0   219   295
RM    255   268   295   219     0   295
TO    295   295   138   295   295     0

CPCC is the correlation coefficient between the corresponding entries of these two matrices, listed as two columns below.
Dist:  662  877  255  412  996  295  468  268  400  754  564  138  219  869  669
CP:    268  295  255  255  295  295  268  268  295  295  295  138  219  295  295
r_{xy} = \frac{1}{n-1} \sum_{i=1}^{n} \frac{(x_i - \bar{x})(y_i - \bar{y})}{s_x s_y}
CPCC = 0.64 (64%)
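SciPy can compute the CPCC directly from the linkage and the original distances; a sketch using the city matrix from the single-linkage example, which should come out close to the 0.64 quoted above:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import squareform

D = np.array([[  0, 662, 877, 255, 412, 996],
              [662,   0, 295, 468, 268, 400],
              [877, 295,   0, 754, 564, 138],
              [255, 468, 754,   0, 219, 869],
              [412, 268, 564, 219,   0, 669],
              [996, 400, 138, 869, 669,   0]], dtype=float)

Z = linkage(squareform(D), method="single")
cpcc, cophenetic_dists = cophenet(Z, squareform(D))   # correlation of original vs. cophenetic distances
print(round(cpcc, 2))
```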
Interpretation of CPCC
• If CPCC < approx. 0.8, all data belong to just one cluster.
• The higher the CPCC, the less information is lost in the clustering process.
• CPCC can be calculated at each step of building the dendrogram, taking into account only the entities built into the tree up to that point.
  – plot of CPCC vs. the number of clustering steps
  – a decrease in the CPCC indicates that the cluster just formed has made the dendrogram less faithful to the data – i.e. stop clustering one step before
CPCC = 0.64 < 0.8
Silhouette validation technique
• Using this approach each cluster could be represented by the so-called silhouette, which is based on the comparison of its cohesion (tightness) and separation.
• In this technique, several measures can be calculated:
  1. silhouette width for each sample (you'll see later why these numbers are called "widths")
  2. average silhouette width for each cluster
  3. overall average silhouette width for the total data set
s(i) = \frac{b(i) - a(i)}{\max\{a(i), b(i)\}}

– s(i) – silhouette of one data point (sample)
– cohesion a(i) – average distance of sample i to all other objects in the same cluster
– separation b(i) – average distance of sample i to the objects in each of the other clusters; take the minimum among those clusters
[Figure: cohesion a(i) – average distance within the sample's own cluster; separation b(i) – average distance to each other cluster, take the minimal one]
• −1 ≤ s(i) ≤ 1
  – close to 1 – the i-th sample has been assigned to an appropriate cluster (i.e. good)
  – close to 0 – the sample could also be assigned to the nearest neighbouring cluster (i.e. indifferent)
  – close to −1 – such a sample has been "misclassified" (i.e. bad)
• For a given cluster j it is possible to calculate a cluster silhouette S_j as the average of the silhouette widths of all samples in that cluster (n is the number of samples in cluster j):

  S_j = \frac{1}{n} \sum_{i=1}^{n} s(i)

  – It characterises the heterogeneity and isolation properties of such a cluster.
• Global silhouette (silhouette index) GS is calculated as (c is the number of clusters)
GS = \frac{1}{c} \sum_{j=1}^{c} S_j
– The largest GS indicates the best number of clusters.
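A sketch of computing the overall silhouette for several values of k with scikit-learn; note that silhouette_score averages s(i) over all samples rather than over clusters, so it only approximates the GS defined above (Iris data and KMeans are assumptions for illustration):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.metrics import silhouette_score

X, _ = load_iris(return_X_y=True)
for n_clusters in (2, 3, 4):
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    print(n_clusters, round(silhouette_score(X, labels), 4))   # average silhouette width per k
```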
Iris data k-means – silhouette
http://www.mathworks.com/products/statistics/demos.html?file=/products/demos/shipping/stats/clusterdemo.html
GS (2 clusters) = 0.8504; GS (3 clusters) = 0.7357