


Data Mining using WEKA




Geoff Holmes


Department of Computer Science,
University of Waikato, New Zealand





WEKA project and team
Data Mining process
Data format
Preprocessing
Classification
Regression
Clustering
Associations
Attribute selection
Visualization
Performing experiments
New Directions
Conclusion
Waikato Environment for Knowledge Analysis
Copyright: Martin Kramer ([email protected])
• PGSF/NERF project has been running since 1994.
• New Java software development from 1998 on.
• Project goals:
  • Develop a state-of-the-art workbench of data mining tools
  • Explore fielded applications
  • Develop new fundamental methods
WEKA TEAM

• Geoff Holmes, Ian Witten, Bernhard Pfahringer, Eibe Frank, Mark Hall, Yong Wang, Remco Bouckaert, Peter Reutemann, Gabi Schmidberger, Dale Fletcher, Tony Smith, Mike Mayo and Richard Kirkby
• Members on editorial board of MLJ, programme committees for ICML, ECML, KDD, …
• Authors of a widely adopted data mining textbook.
Data mining process
(Process diagram) Select → Selected data → Preprocess → Preprocessed data → Transform → Transformed data → Mine → Extracted information → Analyze & Assimilate → Assimilated knowledge
Data mining software

• Commercial packages (cost ≈ X × 10^6 dollars):
  • IBM Intelligent Miner
  • SAS Enterprise Miner
  • Clementine
• WEKA (free = GPL licence!)
  • Java => multi-platform
  • Open source – means you get the source code
Data format

Outlook    Temperature   Humidity   Windy   Play
Sunny      Hot           High       False   No
Sunny      Hot           High       True    No
Overcast   Hot           High       False   Yes
Rainy      Mild          Normal     False   Yes
…          …             …          …       …

• Rectangular table format (flat file) very common
• Most techniques exist to deal with table format
• Row = instance = individual = data point = case = record
• Column = attribute = field = variable = characteristic = dimension
Data complications

• Volume of data – sampling; essential attributes
• Missing data
• Inaccurate data
• Data filtering
• Data aggregation
WEKA’s ARFF format
%
% ARFF file for weather data with some numeric features
%
@relation weather
@attribute outlook {sunny, overcast, rainy}
@attribute temperature numeric
@attribute humidity numeric
@attribute windy {true, false}
@attribute play? {yes, no}
@data
sunny, 85, 85, false, no
sunny, 80, 90, true, no
overcast, 83, 86, false, yes
...
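
As a minimal sketch of reading this format programmatically (package names as in WEKA 3; the file name weather.arff and the class name LoadArff are assumptions):

import java.io.BufferedReader;
import java.io.FileReader;
import weka.core.Instances;

public class LoadArff {
    public static void main(String[] args) throws Exception {
        // Parse the ARFF file into WEKA's in-memory dataset representation
        Instances data = new Instances(
                new BufferedReader(new FileReader("weather.arff")));
        // Treat the last attribute (play?) as the class to predict
        data.setClassIndex(data.numAttributes() - 1);
        System.out.println("Loaded " + data.numInstances() + " instances");
    }
}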
Attribute types

• ARFF supports numeric and nominal attributes
• Interpretation depends on learning scheme
  • Numeric attributes are interpreted as:
    - ordinal scales if less-than and greater-than are used
    - ratio scales if distance calculations are performed (normalization/standardization may be required)
  • Instance-based schemes define distance between nominal values (0 if values are equal, 1 otherwise)
• Integers: nominal, ordinal, or ratio scale?
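
The 0/1 distance on nominal values amounts to one line of Java (a hypothetical helper for illustration, not a WEKA API):

public class NominalDistance {
    // 0 if the two nominal values are equal, 1 otherwise
    static double distance(String a, String b) {
        return a.equals(b) ? 0.0 : 1.0;
    }

    public static void main(String[] args) {
        System.out.println(distance("sunny", "sunny"));  // 0.0
        System.out.println(distance("sunny", "rainy"));  // 1.0
    }
}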
Missing values

• Frequently indicated by out-of-range entries
• Types: unknown, unrecorded, irrelevant
• Reasons: malfunctioning equipment, changes in experimental design, collation of different datasets, measurement not possible
• Missing value may have significance in itself (e.g. missing test in a medical examination)
  • Most schemes assume that is not the case ⇒ “missing” may need to be coded as an additional value
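
In ARFF itself a missing value is written as a question mark, so a weather instance whose temperature was not recorded would appear as:

sunny, ?, 85, false, yes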
Getting to know the data

• Simple visualization tools are very useful for identifying problems
  • Nominal attributes: histograms (Distribution consistent with background knowledge?)
  • Numeric attributes: graphs (Any obvious outliers?)
• 2-D and 3-D visualizations show dependencies
• Domain experts need to be consulted
• Too much data to inspect? Take a sample!
Learning and using a model

• Learning
  • Learning algorithm takes instances of concept as input
  • Produces a structural description (model) as output
  (Diagram: Input: concept to learn → Learning algorithm → Model)
• Prediction
  • Model takes new instance as input
  • Outputs prediction
  (Diagram: Input → Model → Prediction)
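
A minimal sketch of this learn-then-predict cycle with WEKA's Java API (J48 is one example scheme; package names as in recent WEKA 3 releases, file name assumed; the first training instance stands in for a "new" one):

import java.io.BufferedReader;
import java.io.FileReader;
import weka.classifiers.trees.J48;
import weka.core.Instances;

public class LearnAndPredict {
    public static void main(String[] args) throws Exception {
        Instances data = new Instances(
                new BufferedReader(new FileReader("weather.arff")));
        data.setClassIndex(data.numAttributes() - 1);

        // Learning: the algorithm takes training instances, outputs a model
        J48 model = new J48();
        model.buildClassifier(data);

        // Prediction: the model takes an instance, outputs a class value
        double pred = model.classifyInstance(data.instance(0));
        System.out.println(data.classAttribute().value((int) pred));
    }
}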
Structural descriptions (models)

• Some models are better than others
  • Accuracy
  • Understandability
• Models range from “easy to understand” to virtually incomprehensible:
  • Decision trees (easier)
  • Rule induction
  • Regression models
  • Neural networks (harder)
Pre-processing the data

• Data can be imported from a file in various formats: ARFF, CSV, C4.5, binary
• Data can also be read from a URL or from SQL databases using JDBC
• Pre-processing tools in WEKA are called “filters”
• WEKA contains filters for:
  • Discretization, normalization, resampling, attribute selection, attribute combination, …
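
As a sketch of applying one such filter from Java (Discretize lives under weka.filters.unsupervised.attribute in recent WEKA 3 releases; the file name is assumed):

import java.io.BufferedReader;
import java.io.FileReader;
import weka.core.Instances;
import weka.filters.Filter;
import weka.filters.unsupervised.attribute.Discretize;

public class DiscretizeExample {
    public static void main(String[] args) throws Exception {
        Instances data = new Instances(
                new BufferedReader(new FileReader("weather.arff")));

        // Configure the filter on the input format, then push data through
        Discretize disc = new Discretize();
        disc.setInputFormat(data);
        Instances discretized = Filter.useFilter(data, disc);
        System.out.println(discretized);
    }
}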
Explorer: pre-processing (screenshot)
Building classification models

• “Classifiers” in WEKA are models for predicting nominal or numeric quantities
• Implemented schemes include:
  • Decision trees and lists, instance-based classifiers, support vector machines, multi-layer perceptrons, logistic regression, Bayes’ nets, …
• “Meta”-classifiers include:
  • Bagging, boosting, stacking, error-correcting output codes, data cleansing, …
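
A hedged sketch of building and evaluating one of these classifiers through the API (J48 decision trees, 10-fold cross-validation; file name assumed):

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;

public class ClassifyExample {
    public static void main(String[] args) throws Exception {
        Instances data = new Instances(
                new BufferedReader(new FileReader("weather.arff")));
        data.setClassIndex(data.numAttributes() - 1);

        // Estimate performance with 10-fold cross-validation
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(new J48(), data, 10, new Random(1));
        System.out.println(eval.toSummaryString());
    }
}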
Explorer: classification (screenshots)
Explorer: classification/regression (screenshot)
Clustering data

• WEKA contains “clusterers” for finding groups of instances in a dataset
• Implemented schemes are:
  • k-Means, EM, Cobweb
  • Coming soon: x-means
• Clusters can be visualized and compared to “true” clusters (if given)
• Evaluation based on log-likelihood if clustering scheme produces a probability distribution
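
A minimal sketch of running k-means via the API (SimpleKMeans is WEKA's k-means implementation; no class index is set, since clustering is unsupervised, and the file name is assumed):

import java.io.BufferedReader;
import java.io.FileReader;
import weka.clusterers.SimpleKMeans;
import weka.core.Instances;

public class ClusterExample {
    public static void main(String[] args) throws Exception {
        Instances data = new Instances(
                new BufferedReader(new FileReader("weather.arff")));

        // Group the instances into k = 3 clusters
        SimpleKMeans kmeans = new SimpleKMeans();
        kmeans.setNumClusters(3);
        kmeans.buildClusterer(data);
        System.out.println(kmeans);  // cluster centroids and sizes
    }
}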
Explorer: clustering (screenshots)
Finding associations

• WEKA contains an implementation of the Apriori algorithm for learning association rules
  • Works only with discrete data
• Allows you to identify statistical dependencies between groups of attributes:
  • milk, butter ⇒ bread, eggs (with confidence 0.9 and support 2000)
• Apriori can compute all rules that have a given minimum support and exceed a given confidence
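
A sketch of mining such rules via the API (weka.associations.Apriori; since the scheme needs discrete data, an all-nominal version of the weather data is assumed, with an assumed file name):

import java.io.BufferedReader;
import java.io.FileReader;
import weka.associations.Apriori;
import weka.core.Instances;

public class AprioriExample {
    public static void main(String[] args) throws Exception {
        // Apriori requires every attribute to be nominal
        Instances data = new Instances(
                new BufferedReader(new FileReader("weather.nominal.arff")));

        Apriori apriori = new Apriori();
        apriori.buildAssociations(data);
        System.out.println(apriori);  // the best rules found
    }
}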
Explorer: association rules (screenshot)
Attribute selection

• Separate panel allows you to investigate which (subsets of) attributes are the most predictive ones
• Attribute selection methods contain two parts:
  • A search method: best-first, forward selection, random, exhaustive, race search, ranking
  • An evaluation method: correlation-based, wrapper, information gain, chi-squared, PCA, …
• Very flexible: WEKA allows (almost) arbitrary combinations of these two
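
A sketch of one such combination through the API (the correlation-based CfsSubsetEval paired with BestFirst search; file name assumed):

import java.io.BufferedReader;
import java.io.FileReader;
import weka.attributeSelection.AttributeSelection;
import weka.attributeSelection.BestFirst;
import weka.attributeSelection.CfsSubsetEval;
import weka.core.Instances;

public class SelectExample {
    public static void main(String[] args) throws Exception {
        Instances data = new Instances(
                new BufferedReader(new FileReader("weather.arff")));
        data.setClassIndex(data.numAttributes() - 1);

        // Pair an evaluation method with a search method
        AttributeSelection selector = new AttributeSelection();
        selector.setEvaluator(new CfsSubsetEval());
        selector.setSearch(new BestFirst());
        selector.SelectAttributes(data);

        // Names of the attributes judged most predictive (class included)
        for (int i : selector.selectedAttributes()) {
            System.out.println(data.attribute(i).name());
        }
    }
}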
Explorer: attribute selection (screenshot)
Data visualization

• Visualization is very useful in practice: e.g. helps to determine difficulty of the learning problem
• WEKA can visualize single attributes (1-d) and pairs of attributes (2-d)
  • To do: rotating 3-d visualizations (Xgobi-style)
• Color-coded class values
• “Jitter” option to deal with nominal attributes (and to detect “hidden” data points)
• “Zoom-in” function
Explorer: data visualization (screenshot)
Performing experiments

• The Experimenter makes it easy to compare the performance of different learning schemes applied to the same data
• Designed for nominal and numeric class problems
• Results can be written to a file or database
• Evaluation options: cross-validation, learning curve, hold-out
• Can also iterate over different parameter settings
• Significance testing built in!
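
The Experimenter itself is driven from the GUI, but the flavour of such a comparison can be sketched with the public API (J48 versus NaiveBayes on identical folds; a simplified stand-in, not the Experimenter's own machinery, file name assumed):

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.Random;
import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.bayes.NaiveBayes;
import weka.classifiers.trees.J48;
import weka.core.Instances;

public class CompareExample {
    public static void main(String[] args) throws Exception {
        Instances data = new Instances(
                new BufferedReader(new FileReader("weather.arff")));
        data.setClassIndex(data.numAttributes() - 1);

        // The same seed gives the same folds, so the comparison is paired
        Classifier[] schemes = { new J48(), new NaiveBayes() };
        for (Classifier scheme : schemes) {
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(scheme, data, 10, new Random(1));
            System.out.println(scheme.getClass().getSimpleName()
                    + ": " + eval.pctCorrect() + "% correct");
        }
    }
}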
Experimenter: setting it up (screenshot)
Experimenter: running it (screenshot)
Experimenter: analysis (screenshot)
New Directions for Weka

• New user interface based on work flows
• New data mining techniques:
  • PACE regression
  • Bayesian Networks
  • Logistic option trees
• New frameworks for very large data sources (MOA)
• New applications in the agricultural sector:
  • Matchmaker for RPBC Ltd
  • Pest control for kiwifruit management
  • Crop forecasting
Next Generation Weka: Knowledge flow GUI (screenshot)
Conclusions

• Weka is a comprehensive suite of Java programs united under a common interface to permit exploration and experimentation on datasets using state-of-the-art techniques.
• The software is available under the GPL from http://www.cs.waikato.ac.nz/~ml
• Weka provides the perfect environment for ongoing research in data mining.