
An Introduction to Data Mining

Padhraic Smyth, Information and Computer Science, University of California, Irvine. July 2000.

Today's talk: an introduction to data mining
– general concepts
– focus on the current practice of data mining; the main message is to be aware of the "hype factor"

Wednesday's talk: application of ideas in data mining to problems in atmospheric/environmental science

Outline of Today’s Talk

What is Data Mining?

Computer Science and Statistics: a Brief History

Models and Algorithms

Hot Topics in Data Mining

Conclusions

The Data Revolution

Context

– "… drowning in data, but starving for knowledge"
– data are ubiquitous in business, science, medicine, and the military
– analyzing/exploring data manually becomes difficult with massive data sets

Viewpoint: data as a resource

– data themselves are not of direct use
– how can we leverage data to make better decisions?

Technology is a Driving Factor

Larger, cheaper memory

– Moore's law for magnetic disk density: "capacity doubles every 18 months" (Jim Gray, Microsoft)
– storage cost per byte is falling rapidly

Faster, cheaper processors

– can analyze more data
– fit more complex models
– invoke massive search techniques
– more powerful visualization

Massive Data Sets

[Figure: a data matrix with rows 1, 2, …, N (the cases) and columns 1, 2, …, d (the variables)]

Characteristics

– very large N (billions)
– very large d (thousands or millions)
– heterogeneous
– dynamic
– (note: in scientific applications there is often a temporal and/or spatial dimension)

High-Dimensional Data (David Scott, Multivariate Density Estimation, Wiley, 1992)

Consider a hypersphere inscribed in a hypercube in d dimensions. What is the volume of the sphere relative to that of the cube?

Dimension   Relative volume
2           0.79
3           0.53
4           0.31
5           0.16
6           0.08
7           0.04

– with high d and uniformly distributed data, most data points will be "out" at the corners
– high-dimensional space is sparse and non-intuitive
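The relative volume is easy to compute directly. A minimal sketch (the function name is ours) that reproduces the table from the standard formula for the volume of a d-ball:

```python
import math

def sphere_to_cube_volume_ratio(d):
    """Volume of the unit-radius d-ball divided by the volume of its
    enclosing hypercube of side 2."""
    ball = math.pi ** (d / 2) / math.gamma(d / 2 + 1)  # unit d-ball volume
    return ball / 2 ** d                               # cube volume is 2^d

for d in range(2, 8):
    print(d, round(sphere_to_cube_volume_ratio(d), 2))
# 0.79, 0.52, 0.31, 0.16, 0.08, 0.04 -- matching the table to rounding
# (the slide's 0.53 for d=3 is a slightly generous rounding of 0.524)
```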

What is data mining?

What is data mining?

“Data-driven discovery of models and patterns from massive observational data sets”

What is data mining?

“The magic phrase to put in every funding proposal you write to NSF, DARPA, NASA, etc”

What is data mining?

"The magic phrase you use to sell your …
– database software
– statistical analysis software
– parallel computing hardware
– consulting services"

What is data mining?

"Data-driven discovery of models and patterns from massive observational data sets"

The definition draws on several fields at once:
– Statistics, Inference
– Languages, Representations
– Engineering, Data Management
– Applications

Who is involved in Data Mining?

Business Applications
– customer-based, transaction-oriented applications
– very specific applications in fraud, marketing, credit scoring
  • in-house applications (e.g., AT&T, Microsoft, etc.)
  • consulting firms: considerable hype factor!
– largely involve the application of existing statistical ideas, scaled up to massive data sets ("engineering")

Academic Researchers
– mainly in computer science
– extensions of existing ideas, significant "bandwagon effect"
– largely focused on prediction with multivariate data

Bottom Line
– primarily computer scientists, often with little knowledge of statistics; the main focus is on algorithms

Myths and Legends in Data Mining

"Data analysis can be fully automated"
– human judgement is critical in almost all applications
– "semi-automation" is, however, very useful

"Association rules are useful"
– association rules are essentially lists of correlations
– no documented successful application
– compare with decision trees (numerous applications)

"With massive data sets you don't need statistics"
– massiveness brings heterogeneity, so statistics matters even more

Current Data Mining Software

1. General-purpose tools
– software systems for data mining (IBM, SGI, etc.)
  • just simple statistical algorithms with SQL?
  • limited support for temporal, spatial data
– some successes (difficult to validate)
  • banking, marketing, retail
  • mainly useful for large-scale EDA?
– "mining the miners" (Jerry Friedman):
  • similar to the expert systems/neural networks hype of the 80's?

Transaction Data and Association Rules

[Figure: a sparse transactions-by-items matrix, with an x marking each item bought in each transaction]

Supermarket example (Srikant and Agrawal, 1997):
– #items = 500,000; #transactions = 1.5 million

Example of an association rule: if a customer buys beer, they will also buy chips
– p(chips | beer) = "confidence"
– p(beer) = "support"
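Both quantities are simple relative frequencies. A minimal sketch on a hypothetical in-memory transaction list, following the slide's definitions (support = p(antecedent), confidence = p(consequent | antecedent)):

```python
# Hypothetical mini transaction database: each transaction is a set of items.
transactions = [
    {"beer", "chips"}, {"beer", "chips", "salsa"},
    {"beer"}, {"chips"}, {"milk", "bread"},
]

def freq(itemset, db):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(itemset <= t for t in db) / len(db)

support = freq({"beer"}, transactions)                        # p(beer) = 0.6
confidence = freq({"beer", "chips"}, transactions) / support  # p(chips | beer)
print(support, confidence)                                    # 0.6  0.666...
```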

Current Data Mining Software

2. Special-purpose ("niche") applications
– fraud detection, direct-mail marketing, credit scoring, etc.
– often solve high-dimensional classification/regression problems
– telephone industry applications: fraud
– direct-mail advertising: find new customers, increase the number of home-equity loans
– common theme: "track the customer!"
– difficult to validate claims of success (few publications)

Advanced Scout

Background
– every NBA game is annotated (each pass, shot, foul, etc.)
– potential competitive advantage for coaches
– problem: over a season, this generates a lot of data!

Solution (Bhandari et al., IBM, 1997)
– "attribute focusing" finds conditional ranges on attributes where the distributions differ from the norm
– generates descriptions of interesting patterns, e.g., "Player X made 100% of his shots when Player Y was in the game; X normally makes only 50% of his shots" (see the sketch below)
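The published description suggests comparing a conditional distribution against the overall norm. A minimal sketch of that comparison on hypothetical shot records; the data layout and flagging threshold here are illustrative, not IBM's:

```python
# Hypothetical shot records: (teammate_Y_on_court, shot_made) pairs for player X.
shots = [(True, True), (True, True), (True, True),
         (False, True), (False, False), (False, False)]

def deviation_from_norm(records, condition_value):
    """How far p(made | condition) deviates from the overall p(made)."""
    overall = sum(made for _, made in records) / len(records)
    subset = [made for cond, made in records if cond == condition_value]
    return sum(subset) / len(subset) - overall

# flag the pattern if the conditional rate differs markedly from the norm
if abs(deviation_from_norm(shots, True)) > 0.25:  # illustrative threshold
    print("X shoots much better when Y is on court")
```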

Status
– used by 28 of the 29 teams in the NBA
– an intelligent assistant

AT&T Classification of Telephone Numbers

Background
– AT&T has about 100 million customers
– it logs 300 million calls per day, with 40 attributes each
– 350 million unique telephone numbers
– which are business, and which are residential?

Solution (Pregibon and Cortes, AT&T, 1997)
– a proprietary model, using a few attributes, trained on known business customers to adaptively track p(business | data) (see the sketch below)
– significant systems engineering: data are downloaded nightly and the model is updated (20 processors, 6 GB RAM, terabyte disk farm)
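The actual model is proprietary, so the following is only a guess at the flavor: a minimal sketch that adaptively tracks p(business | data) for one number, blending in a Bayesian-style update at a small learning rate. The attribute choice (daytime calling) and all rates here are invented for illustration:

```python
def update_p_business(p, daytime_call, alpha=0.05,
                      day_rate_business=0.8, day_rate_residence=0.3):
    """One-call update of the tracked p(business) for a phone number.
    The update rule and all rates are illustrative guesses; the real
    AT&T model and its 40-odd attributes are proprietary."""
    like_b = day_rate_business if daytime_call else 1 - day_rate_business
    like_r = day_rate_residence if daytime_call else 1 - day_rate_residence
    posterior = like_b * p / (like_b * p + like_r * (1 - p))
    # blend toward the posterior so the estimate adapts across nightly runs
    return (1 - alpha) * p + alpha * posterior

p = 0.5                                    # uninformed prior for a new number
for daytime in [True, True, True, False]:  # a (hypothetical) day's calls
    p = update_p_business(p, daytime)
print(p)                                   # drifts upward: likely a business
```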

Status
– an invaluable, evolving "snapshot" of phone usage in the US for AT&T
– the basis for fraud detection, marketing, and other applications

Bad Debt Prediction

Background
– a bank has 120,000 delinquent accounts
– it employs 500 collectors
– the process is expensive and inefficient

Predictive Modeling
– target variable: amount repaid within 6 months
– input variables: 2,000 different variables derived from credit history
– model outputs are used to "score" each debtor by likelihood of paying

Results
– decision trees and "bump-hunting" used to score customers
  • non-trivial software issues in handling such large data sets
– the "scoring" system is in routine use
– estimated savings to the bank are in the millions per annum

Outline

What is Data Mining?

Computer Science and Statistics: a Brief History

Historical Context: Statistics

Gauss, Fisher, and all that
– least squares, maximum likelihood
– development of fundamental principles

The Mathematical Era
– 1950's: Neyman, etc.: the mathematicians take over

The Computational Era
– steadily growing since the 1960's
  • note: "data mining/fishing" was viewed very negatively!
– 1970's: EDA, Bayesian estimation, flexible models, EM, etc.
– a growing awareness of the power and role of computing in data analysis

Historical Context: Computer Science

Pattern Recognition and AI
– focus on perceptual problems (e.g., speech, images)
– 1960's: bifurcation into statistical and non-statistical approaches (e.g., grammars)
– convergence of applied statistics and engineering
  • e.g., statistical image analysis: Geman, Grenander, etc.

Machine Learning and Neural Networks
– 1980's: failure of non-statistical learning approaches
– emergence of flexible models (trees, networks)
– convergence of applied statistics and learning
  • e.g., the work of Friedman, Spiegelhalter, Jordan, Hinton

The Emergence of Data Mining

Distinct threads of evolution
– AI/machine learning
  • 1989 KDD workshop -> ACM SIGKDD 2000
  • focus on "automated discovery, novelty"
– database research
  • focus on massive data sets
  • e.g., SIGMOD -> association rules, scalable algorithms
– "data owners"
  • what can we do with all this data in our RDBMS?
  • primarily customer-oriented transaction data owners
  • industry dominated, applications-oriented

The Emergence of Data Mining

The "mother-in-law" phenomenon
– even your mother-in-law has heard about data mining

Beware of the hype!
– remember expert systems, neural nets, etc.
– basically sound ideas that were oversold, creating a backlash

The fields form a spectrum running from statistics to computer science:

Statistical Inference -> Statistical Pattern Recognition -> Neural Networks -> Machine Learning -> Data Mining -> Databases

Where Work is Published
– Statistical Inference: JASA, JRSS
– Statistical Pattern Recognition: IEEE PAMI, ICPR, ICCV
– Neural Networks: NIPS, Neural Computation
– Machine Learning: ICML, COLT, Machine Learning Journal
– Data Mining: KDD, IJDMKD
– Databases: SIGMOD, VLDB

Focus Areas

Arranged along the same statistics-to-computer-science spectrum:
– Computer Vision, Signal Recognition
– Hidden Variable Models
– Nonlinear Regression
– Graphical Models
– Flexible Classification Models
– Pattern Finding
– Scalable Algorithms

General Characteristics

Toward the "more statistical" end: Computer Vision, Signal Recognition; Hidden Variable Models. In the middle: Nonlinear Regression; Flexible Classification Models; Graphical Models. Toward the "more algorithmic" end: Pattern Finding; Scalable Algorithms.

The two ends of the spectrum contrast as follows:
– continuous signals vs. categorical data
– model-based vs. "model-free"
– time/space modeling vs. multivariate data

"Hot Topics"

Current "hot topics" within these focus areas include:
– Deformable Templates
– Mixture/Factor Models
– Belief Networks
– Classification Trees
– Association Rules
– Hidden Markov Models
– Model Combining
– Support Vector Machines

Implications

The "renaissance data miner" is skilled in:
– statistics: theories and principles of inference
– modeling: languages and representations for data
– optimization and search
– algorithm design and data management

The educational problem
– is it necessary to know all these areas in depth?
– is it possible?
– do we need a new breed of professionals?

The applications viewpoint
– how does a scientist or business person keep up with all these developments?
– how can they choose the best approach for their problem?

Outline

What is Data Mining?

Computer Science and Statistics: a Brief History

Models and Algorithms

The components of a data mining algorithm:

– Data set: e.g., multivariate, continuous/categorical, temporal, spatial, combinations, etc.
– Task: e.g., exploration, prediction, clustering, density estimation, pattern discovery
– Model (language/representation): the underlying functional form used for representation, e.g., linear functions, hierarchies, rules/boxes, grammars
– Score function (statistical inference): how well a model fits the data, e.g., squared error, likelihood, classification loss, query match, interpretation
– Optimization: the computational method used to optimize the score function, given the model, e.g., hill-climbing, greedy search, linear programming
– Data access: the actual instantiation as an algorithm, with data structures, an efficient implementation, etc.

Model, score function, and optimization together make up the modeling; adding data access yields the algorithm; human evaluation and decisions sit above the whole pipeline. A sketch of this decomposition in code follows.
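A minimal sketch of the decomposition, assuming a linear model, a squared-error score, and a crude random local search; each of the three pieces is illustrative and independently replaceable:

```python
import random

def model(params, x):                    # model: a linear function y = a*x + b
    a, b = params
    return a * x + b

def score(params, data):                 # score function: squared error
    return sum((y - model(params, x)) ** 2 for x, y in data)

def optimize(data, params=(0.0, 0.0), steps=5000, step=0.1):
    """Optimization: random local search over the score (hill-climbing,
    greedy search, or linear programming would slot in here instead)."""
    best = params
    for _ in range(steps):
        cand = tuple(p + random.uniform(-step, step) for p in best)
        if score(cand, data) < score(best, data):
            best = cand
    return best

data = [(x, 2 * x + 1) for x in range(10)]  # data access: a flat-file stand-in
print(optimize(data))                       # approximately (2.0, 1.0)
```

Swapping any one component (say, greedy search for the random search, or likelihood for squared error) changes the algorithm without disturbing the rest, which is the point of the reductionist view.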

Two examples decomposed into these components:

CART
– Task: multivariate prediction
– Model: hierarchical representation of a piecewise-constant mapping
– Score function: cross-validation
– Optimization: greedy search
– Data access: flat file
– Human evaluation: accuracy and interpretability
– Emphasis: predictive power and flexibility of the model

Association Rules
– Task: exploratory analysis of transaction data
– Model: sets of local rules / conditional probabilities
– Score function: thresholds on probabilities
– Optimization: systematic search
– Data access: relational database
– Human evaluation: ????
– Emphasis: computational efficiency and data access

The Reductionist Viewpoint

Methodology
– reduce problems to fundamental components
– think in terms of components first, algorithms second
– ultimately the application should "drive" the algorithm
– allows systematic comparison and synthesis
– clarifies the relative roles of statistics, databases, search, etc.

Cultural Differences

Computer Scientists
– often have little exposure to the "modeling art" of data analysis
– tend to stick to a small set of well-understood models and problems
– papers focus on algorithms, not models
– but are typically good at making things run fast

Statisticians
– applied statisticians are often very good at the "art" component
– little experience with the data management/engineering part
– papers focus on models, not algorithms

Bottom line
– the computer scientists get more attention, since they are much savvier at marketing new ideas than the statisticians
– the "right" way: systematically combine both statistics and engineering/CS, and beware of hype

Outline

What is Data Mining?

Computer Science and Statistics: a Brief History

Models and Algorithms

Hot Topics in Data Mining

Hot Topics

1. Flexible Prediction Models

2. Scalable Algorithms

3. Pattern Discovery

4. Graphical Models

5. Hidden Variable Models

6. Deformable Templates

7. Heterogeneous Data

(the list is split between today's talk and Wednesday's talk)

1. Flexible Prediction Models

Model Combining (a bagging sketch follows this list):
– Stacking: linear combinations of models with cross-validated weights
– Bagging: equally weighted combinations trained on bootstrap samples
– Boosting: iterative re-training on data points which contribute to error
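Of the three, bagging is the simplest to sketch. A minimal illustration; the base learner here is a stub that predicts the mean of its bootstrap sample, where a tree or network would normally go:

```python
import random

def bagging_predict(train, x, base_learner, n_models=25):
    """Average the predictions of models trained on bootstrap samples."""
    preds = []
    for _ in range(n_models):
        boot = [random.choice(train) for _ in range(len(train))]  # bootstrap
        preds.append(base_learner(boot)(x))
    return sum(preds) / len(preds)          # equally weighted combination

def mean_learner(sample):                   # stub base learner
    mean_y = sum(y for _, y in sample) / len(sample)
    return lambda x: mean_y

train = [(i, float(i)) for i in range(20)]
print(bagging_predict(train, 5, mean_learner))  # near the overall mean, 9.5
```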

Flexible Model Forms
– decision trees
– neural networks
– support vector machines

2. Scalable Algorithms

How far away are the data?

Memory   Random access time   Effective distance
RAM      10^-8 seconds        1 meter
Disk     10^-3 seconds        100 km

2. Scalable Algorithms

"Scaling down the data", or "data approximation"
– work from clever data summarizations (e.g., sufficient statistics)

Squashing (DuMouchel et al., AT&T, KDD '99)
– create a small "pseudo data set"
– with statistical properties similar to the original (massive) data set
– then run your standard algorithm on the pseudo-data
– can be significantly better than random sampling
– interesting theoretical (statistical) basis

Frequent Itemsets
– find all itemsets which occur more than T times in the data (the basis for association rule algorithms; see the sketch below)
– itemsets: a computationally cheap way to generate joint probabilities
– use maximum entropy to construct a full model from itemsets (Pavlov, Mannila, and Smyth, KDD '99)
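A minimal level-wise (Apriori-style) sketch of frequent-itemset mining on in-memory data; production implementations work against disk-resident transaction files, which is where the scalability effort goes:

```python
from itertools import combinations

def frequent_itemsets(db, T):
    """Find all itemsets occurring in more than T transactions, growing
    candidates level by level from frequent smaller sets (Apriori-style)."""
    candidates = [frozenset([i]) for i in {i for t in db for i in t}]
    frequent, size = {}, 1
    while candidates:
        level = {c: n for c in candidates
                 if (n := sum(c <= t for t in db)) > T}
        frequent.update(level)
        size += 1
        # grow candidates only from frequent sets (the Apriori pruning idea)
        candidates = list({a | b for a, b in combinations(level, 2)
                           if len(a | b) == size})
    return frequent

db = [{"a", "b"}, {"a", "b", "c"}, {"a"}, {"b", "c"}]
print(frequent_itemsets(db, 1))  # {a}:3, {b}:3, {c}:2, {a,b}:2, {b,c}:2
```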

2. Scalable Algorithms

“Scaling up the algorithm”

– data structures and caching strategies to speed up known algorithms – typically orders of magnitude speed improvements •

Exact Algorithms

– BOAT (Gehrke et al, SIGMOD 98): • a scalable decision tree construction algorithm • clever algorithms can work from only 2 scans – ADTrees (Moore, CMU, 1998) • clever data structures for caching sufficient statistics for multivariate categorical data •

Approximate Algorithms

– approximate EM for Gaussian mixture modeling (Bradley and Fayyad, KDD 98) – various heuristics for caching, approximation
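The ADTree idea, at its crudest: one scan over the data caches the joint counts (the sufficient statistics for categorical data), after which queries never touch the data again. A toy stand-in; real ADTrees use clever sparse structure to avoid this sketch's exponential blow-up in the number of attributes:

```python
from collections import Counter
from itertools import combinations

def count_cache(db, attrs):
    """One data scan; caches counts for every attribute-value conjunction."""
    cache = Counter()
    for row in db:
        pairs = sorted((a, row[a]) for a in attrs)
        for k in range(len(pairs) + 1):
            for combo in combinations(pairs, k):
                cache[combo] += 1
    return cache

db = [{"x": 0, "y": 1}, {"x": 0, "y": 0}, {"x": 1, "y": 1}]
cache = count_cache(db, ["x", "y"])
print(cache[(("x", 0),)])           # 2 rows with x=0, no rescan needed
print(cache[(("x", 0), ("y", 1))])  # 1 row with x=0 and y=1
```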

3. Pattern Finding

Patterns = unusual, hard-to-find local "pockets" of data
– finding patterns is not the same as global model fitting
– the simplest example of a pattern is an association rule

"Bump-hunting"
– the PRIM algorithm of Friedman and Fisher (1999)
– finds multivariate "boxes" in high-dimensional spaces where the mean of the target variable is higher (see the sketch below)
– effective and flexible
  • e.g., finding small, highly profitable groups of customers
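A one-dimensional sketch of PRIM's top-down "peeling" phase on hypothetical data; the real algorithm peels along many input dimensions and also has a bottom-up "pasting" phase:

```python
def prim_peel(points, peel_frac=0.1, min_support=5):
    """Shrink the box [x_low, x_high] by repeatedly peeling the x-tail
    (lower or upper) whose removal most increases the mean of y inside."""
    box = sorted(points)                        # (x, y) pairs, sorted by x
    mean = lambda b: sum(y for _, y in b) / len(b)
    while len(box) > min_support:
        k = max(1, int(peel_frac * len(box)))
        best = max([box[k:], box[:-k]], key=mean)  # peel one tail
        if mean(best) <= mean(box):
            break                               # no peel improves the box
        box = best
    return box[0][0], box[-1][0]

# hypothetical data with a "bump" of high target values for x in [40, 60]
data = [(x, 1.0 if 40 <= x <= 60 else 0.0) for x in range(100)]
print(prim_peel(data))                          # roughly (40, 60)
```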

[Figures: successive stages of the "bump-hunting" box construction]

Pattern Finding in Sequence Data

Clustering Sequences
– sequences of different lengths from different individuals
  • e.g., sequences of Web-page requests
– problem: do the sequences cluster into groups?
– the clustering problem is non-trivial:
  • what is the distance between 2 sequences of different lengths?

Model-based approach (Cadez, Heckerman, Smyth, KDD 2000)
– each cluster is described by a Markov model
– this defines a mixture of Markov models; EM is used for clustering (see the sketch below)
– application to MSNBC.com Web data
  • 900,000 users/sequences per day
  • clustered into on the order of 100 groups
  • useful for visualization/exploration of a massive Web log
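A compact EM sketch for a two-component mixture of first-order Markov chains over a small page alphabet, in the spirit of the approach; initialization, smoothing, and log-space arithmetic are all simplified here:

```python
import random
from collections import defaultdict

STATES = "ABCD"

def likelihood(seq, trans):
    """Probability of a sequence under one Markov chain (initial-state
    probabilities omitted for brevity)."""
    p = 1.0
    for a, b in zip(seq, seq[1:]):
        p *= trans[a, b]
    return p

def em_markov_mixture(seqs, k=2, iters=25):
    # random row-normalized transition matrices, uniform mixture weights
    trans = [{(a, b): random.random() for a in STATES for b in STATES}
             for _ in range(k)]
    for t in trans:
        for a in STATES:
            z = sum(t[a, b] for b in STATES)
            for b in STATES:
                t[a, b] /= z
    w = [1.0 / k] * k
    for _ in range(iters):
        # E-step: posterior cluster responsibilities for each sequence
        resp = []
        for s in seqs:
            joint = [w[c] * likelihood(s, trans[c]) for c in range(k)]
            z = sum(joint) or 1.0
            resp.append([j / z for j in joint])
        # M-step: re-estimate weights and smoothed transition counts
        w = [sum(r[c] for r in resp) / len(seqs) for c in range(k)]
        for c in range(k):
            counts = defaultdict(lambda: 0.1)   # smoothing pseudo-count
            for s, r in zip(seqs, resp):
                for a, b in zip(s, s[1:]):
                    counts[a, b] += r[c]
            for a in STATES:
                z = sum(counts[a, b] for b in STATES)
                for b in STATES:
                    trans[c][a, b] = counts[a, b] / z
    return w, trans

seqs = ["ABABAB", "ABAB", "CDCDCD", "CDCD"] * 10
weights, _ = em_markov_mixture(seqs)
print(weights)   # roughly [0.5, 0.5]: two behavior clusters recovered
```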

Clusters of Dynamic Behavior

[Figure: estimated Markov transition graphs over page categories A, B, C, D for three of the behavior clusters]

Final Comments

Successful data mining requires the integration of
– statistics
– computer science
– the application discipline

Current practice of data mining
– computer scientists focused on business applications
– relatively little statistical sophistication, but some new ideas
– considerable "hype" factor

Wednesday's talk
– new ideas in temporal and spatial models
– new ideas in latent variable modeling
– potential applications in atmospheric/environmental science

Further Reading

Papers: www.ics.uci.edu/~datalab
– e.g., P. Smyth, "Data mining: data analysis on a grand scale?", preprint of a review paper to appear in Statistical Methods in Medical Research

Text (forthcoming): Principles of Data Mining
– D. J. Hand, H. Mannila, P. Smyth
– MIT Press, late 2000

3. Pattern Finding

Contrast Sets (Bay and Pazzani, KDD '99)
– individuals or objects are categorized into 2 groups
  • e.g., students enrolled in CS and in Engineering
– high-dimensional multivariate measurements on each
– problem: automatically summarize the significant differences between the two groups
  • e.g., [fraction of ESL >] AND [mean SAT >] in CS

Approach
– massive systematic breadth-first search through potential variable-value conjunctions
– branch-and-bound pruning of an exponentially large search space
– statistical adjustments for the multiple-hypothesis problem
– combines massive search with statistical estimation (see the sketch below)
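A tiny sketch of the statistical core: enumerate attribute-value conjunctions (here only up to size 2, without the branch-and-bound pruning) and keep those whose support differs significantly between the two groups, with a Bonferroni adjustment for the number of tests. The student data below are invented:

```python
import math
from itertools import combinations

def contrast_sets(group_a, group_b, alpha=0.05):
    values = sorted({(k, v) for row in group_a + group_b for k, v in row.items()})
    candidates = ([frozenset([p]) for p in values] +
                  [frozenset(c) for c in combinations(values, 2)])
    na, nb, found = len(group_a), len(group_b), []
    for cand in candidates:
        pa = sum(all(r.get(k) == v for k, v in cand) for r in group_a) / na
        pb = sum(all(r.get(k) == v for k, v in cand) for r in group_b) / nb
        pooled = (pa * na + pb * nb) / (na + nb)
        se = math.sqrt(pooled * (1 - pooled) * (1 / na + 1 / nb)) or 1.0
        z = abs(pa - pb) / se                       # two-proportion z-test
        p_value = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
        if p_value < alpha / len(candidates):       # Bonferroni adjustment
            found.append((dict(cand), round(pa, 2), round(pb, 2)))
    return found

cs_students = [{"ESL": True}] * 30 + [{"ESL": False}] * 10
eng_students = [{"ESL": True}] * 10 + [{"ESL": False}] * 30
print(contrast_sets(cs_students, eng_students))     # ESL rates differ: 0.75 vs 0.25
```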


Time-Series Pattern Spotting

– "find me a shape that looks like this"
– semi-Markov deformable templates (Ge and Smyth, KDD 2000)
– significantly outperforms template matching and dynamic time warping (DTW)
– the Bayesian approach integrates prior knowledge with data

Example: Deformable Templates

Each waveform segment corresponds to a state in the model.

[Figure: a segmental hidden semi-Markov model; the waveform is divided into segments, each generated by one of the states S_1, S_2, …, S_T]

End-Point Detection in Semiconductor Manufacturing

[Figure: pattern-based end-point detection; the original pattern and the detected pattern in a sensor signal, plotted against time in seconds]

Heterogeneous Data Modeling

Clustering Objects (sequences, curves, etc.)
– probabilistic approach: define a mixture of models (Cadez, Gaffney, and Smyth, KDD 2000)
– a unified framework for clustering objects of different dimensions
– applications:
  • curve clustering
    - e.g., mixtures of regression models (Gaffney and Smyth, KDD '99; see the sketch below)
    - video movement, gene expression data, storm trajectories
  • sequence clustering
    - e.g., mixtures of Markov models
    - clustering of MSNBC Web data (Cadez et al., KDD '00)
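For the curve-clustering case, a compact EM sketch with two linear-regression components; one predictor, a fixed noise scale, and simplified initialization, where the published framework is considerably more general:

```python
import math
import random

def em_mixture_of_lines(curves, iters=50):
    """Cluster whole curves (lists of (x, y) points) with a 2-component
    mixture of lines; each curve gets one responsibility per component."""
    params = [(random.uniform(-1, 1), 0.0), (random.uniform(-1, 1), 1.0)]
    for _ in range(iters):
        # E-step: responsibilities from per-curve squared error (fixed noise)
        resp = []
        for pts in curves:
            e = [sum((y - (a * x + b)) ** 2 for x, y in pts) for a, b in params]
            d = max(min(e[0] - e[1], 50.0), -50.0)   # clamp for exp()
            w0 = 1.0 / (1.0 + math.exp(d))
            resp.append((w0, 1.0 - w0))
        # M-step: weighted least squares for each line over all points
        params = []
        for c in range(2):
            sw = sx = sy = sxx = sxy = 1e-9          # tiny ridge for stability
            for pts, r in zip(curves, resp):
                for x, y in pts:
                    sw += r[c]; sx += r[c] * x; sy += r[c] * y
                    sxx += r[c] * x * x; sxy += r[c] * x * y
            a = (sw * sxy - sx * sy) / (sw * sxx - sx * sx + 1e-9)
            params.append((a, (sy - a * sx) / sw))
    return params

random.seed(1)
ups = [[(x, 2 * x + random.gauss(0, 0.1)) for x in range(10)] for _ in range(5)]
downs = [[(x, -x + 5 + random.gauss(0, 0.1)) for x in range(10)] for _ in range(5)]
print(em_mixture_of_lines(ups + downs))
# recovers slopes of roughly 2 and -1 (in some order; EM can need restarts)
```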

[Figure: trajectories of centroids of a moving hand in video streams, plotted against time, together with three estimated cluster trajectories]

Heterogeneous Populations of Objects

[Figure: hierarchical view; a population model in parameter space generates individuals and their parameters, which in turn generate the observed data]