Text Mining and Social Networks


Mixture Models for Document Clustering
Edward J. Wegman
Yasmin H. Said
George Mason University, College of Science
October 30, 2008
University of Maryland, College Park
Outline
• Overview of Text Mining
• Vector Space Text Models
– Latent Semantic Indexing
• Social Networks
– Graph and Matrix Duality
– Two Mode Networks
– Block Models and Clustering
• Examples of Text Mining and Social Networks
• Mixture Models and Document Clustering
• Conclusions and Acknowledgements
Text Mining
• Synthesis of …
– Information Retrieval
• Focuses on retrieving documents from a fixed database
• Bag-of-words methods
• May be multimedia including text, images, video, audio
– Natural Language Processing
• Usually more challenging questions
• Vector space models
• Linguistics: morphology, syntax, semantics, lexicon
– Statistical Data Mining
• Pattern recognition, classification, clustering
Text Mining Tasks
• Text Classification
– Assigning a document to one of several pre-specified
classes
• Text Clustering
– Unsupervised learning – discovering cluster structure
• Text Summarization
– Extracting a summary for a document
– Based on syntax and semantics
• Author Identification/Determination
– Based on stylistics, syntax, and semantics
• Automatic Translation
– Based on morphology, syntax, semantics, and lexicon
• Cross Corpus Discovery
– Also known as Literature Based Discovery
Text Preprocessing
• Denoising
– Means removing stopper words … words with
little semantic meaning such as the, an, and, of,
by, that and so on.
– Stopper words may be context dependent, e.g.
Theorem and Proof in a mathematics document
• Stemming
– Means removing suffixes, prefixes, and infixes
to reduce each word to its root
– An example: wake, waking, awake, woke → wake
Vector Space Model
• Documents and queries are represented in a
high-dimensional vector space in which each dimension
corresponds to a word (term) in the corpus (document
collection).
• The entities represented in the figure are q for query and d1,
d2, and d3 for the three documents.
• The term weights are derived from occurrence counts.
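The slide's figure is not reproduced in this transcript. As an
illustrative sketch of the idea (not from the talk), the following
Python snippet builds count-based term vectors for a query q and three
toy documents, then ranks the documents by cosine similarity; the
corpus and all names are invented:

# Illustrative sketch: rank toy documents against a query in a
# count-based vector space (bag of words). All data are invented.
import math
from collections import Counter

docs = {
    "d1": "plane crash near the white house lawn",
    "d2": "comet fragments strike jupiter",
    "d3": "earthquake damage in kobe japan",
}
query = "jupiter comet crash"

def vectorize(text):
    # Term weights derived from occurrence counts.
    return Counter(text.split())

def cosine(u, v):
    dot = sum(u[t] * v[t] for t in u if t in v)
    norm = math.sqrt(sum(c * c for c in u.values())) * \
           math.sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

q = vectorize(query)
for name, text in sorted(docs.items(),
                         key=lambda kv: -cosine(q, vectorize(kv[1]))):
    print(name, round(cosine(q, vectorize(text)), 3))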
Vector Space Methods
• The classic structure in vector space text mining
methods is a term-document matrix, where
– Rows correspond to terms, columns correspond
to documents, and
– Entries may be binary or frequency counts.
• A simple and obvious generalization is
a bigram (multigram)-document matrix
where
– Rows correspond to bigrams, columns to
documents, and again entries are either
binary or frequency counts.
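A minimal sketch (not from the slides) of constructing such a
term-document matrix with frequency counts; the three-document corpus
is hypothetical:

# Illustrative sketch: term-document matrix (rows = terms, columns =
# documents) with frequency counts; use min(count, 1) for binary entries.
from collections import Counter

docs = ["comet hit jupiter", "jupiter comet comet fragments", "kobe earthquake"]
counts = [Counter(d.split()) for d in docs]
terms = sorted(set(t for c in counts for t in c))

tdm = [[c[t] for c in counts] for t in terms]
for t, row in zip(terms, tdm):
    print(f"{t:10s}", row)

The bigram-document matrix is built the same way, with rows indexed by
adjacent word pairs instead of single terms.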
Vector Space Methods
Latent Semantic Indexing
Latent Semantic Indexing
LSI – Some Basic Relations
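The LSI equations on these slides are not reproduced in the transcript.
As a sketch of the standard construction (assumed, not quoted from the
talk), LSI takes a rank-k truncated singular value decomposition
X ≈ U_k S_k V_kᵀ of the term-document matrix and represents terms and
documents in a shared k-dimensional latent space:

# Sketch of standard LSI via truncated SVD of a toy term-document matrix.
import numpy as np

X = np.array([[2., 0., 1.],   # rows = terms
              [1., 0., 0.],
              [0., 3., 1.],
              [0., 1., 2.]])  # columns = documents
k = 2
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Uk, Sk, Vtk = U[:, :k], np.diag(s[:k]), Vt[:k, :]

doc_coords = (Sk @ Vtk).T   # documents in the k-dim latent space
term_coords = Uk @ Sk       # terms in the same latent space
print(doc_coords)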
Social Networks
• Social networks can be represented as
graphs
– A graph G(V, E), is a set of vertices, V, and
edges, E
– The social network depicts actors (in classic
social networks, these are humans) and their
connections or ties
– Actors are represented by vertices, ties between
actors by edges
• There is a one-to-one correspondence
between graphs and so-called adjacency
matrices
• Example: Author-Coauthor Networks
Graphs versus Matrices
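The matrix figure for this slide is not reproduced. A small sketch of
the duality for a hypothetical author-coauthor network, where the
symmetric adjacency matrix encodes the same information as the graph:

# Illustrative sketch: an undirected coauthor graph as an edge list and
# as its adjacency matrix. The network is hypothetical.
authors = ["a1", "a2", "a3", "a4"]
edges = [("a1", "a2"), ("a1", "a3"), ("a2", "a4")]

idx = {a: i for i, a in enumerate(authors)}
n = len(authors)
adj = [[0] * n for _ in range(n)]
for u, v in edges:
    adj[idx[u]][idx[v]] = adj[idx[v]][idx[u]] = 1  # symmetric: undirected tie

for a, row in zip(authors, adj):
    print(f"{a:4s}", row)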
Two-Mode Networks
• When there are two types of actors
– Individuals and Institutions
– Alcohol Outlets and Zip Codes
– Paleoclimate Proxies and Papers
– Authors and Documents
– Words and Documents
– Bigrams and Documents
• SNA refers to these as two-mode networks, graph
theory as bipartite graphs
– Can convert from two-mode to one-mode
Two-Mode Computation
Consider a bipartite individual-by-institution
social network. Let A_{m×n} be the individual-by-institution
adjacency matrix, with m = the number of
individuals and n = the number of institutions. Then

C_{m×m} = A_{m×n} A^T_{n×m}

is the individual-by-individual social network adjacency
matrix, with c_{ii} = ∑_j a_{ij} = the strength of ties to all
individuals in i's social network and c_{ij} = the tie
strength between individual i and individual j.
Two-Mode Computation
Similarly,

P_{n×n} = A^T_{n×m} A_{m×n}

is the institution-by-institution social network adjacency
matrix, with p_{jj} = ∑_i a_{ij} = the strength of ties to all
individuals affiliated with institution j and p_{ij} = the tie
strength between institution i and institution j.
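A minimal sketch of the two computations above on a toy 0/1 affiliation
matrix A (all data invented):

# Two-mode -> one-mode conversion: C = A A^T ties individuals,
# P = A^T A ties institutions. A is a toy affiliation matrix.
import numpy as np

# Rows = individuals (m = 4), columns = institutions (n = 3).
A = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 0],
              [0, 1, 1]])

C = A @ A.T   # m x m individual-individual tie strengths
P = A.T @ A   # n x n institution-institution tie strengths
print("C =\n", C)
print("P =\n", P)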
Two-Mode Computation
• Of course, this exactly resembles the
computation for LSI.
• Viewed as a two-mode social network, this
computation allows us:
– to calculate strength of ties between terms
relative to this document database (corpus)
– And also to calculate strength of ties between
documents relative to this lexicon
• If we can cluster these terms and these
documents, we can discover:
– similar sets of documents with respect to this
lexicon
– sets of words that are used the same way in this
corpus
Example of a Two-Mode Network
Our A matrix
Example of a Two-Mode Network
Our P matrix
Block Models
• A partition of a network is a clustering of the
vertices in the network so that each vertex is
assigned to exactly one class or cluster.
• Partitions may specify some property that
depends on attributes of the vertices.
• Partitions divide the vertices of a network
into a number of mutually exclusive subsets.
– That is, a partition splits a network into parts.
• Partitions are also sometimes called blocks
or block models.
– These are essentially a way to cluster actors
together in groups that behave in a similar way.
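As an illustrative sketch (not the authors' procedure), a partition of
the vertices induces a block model that can be summarized by within-
and between-block tie densities:

# Illustrative sketch: summarize a partitioned network by its matrix of
# block-to-block tie densities. Adjacency and partition are invented.
import numpy as np

adj = np.array([[0, 1, 1, 0, 0],
                [1, 0, 1, 0, 0],
                [1, 1, 0, 0, 1],
                [0, 0, 0, 0, 1],
                [0, 0, 1, 1, 0]])
partition = [0, 0, 0, 1, 1]   # each vertex in exactly one block

blocks = sorted(set(partition))
members = {b: [i for i, p in enumerate(partition) if p == b] for b in blocks}
density = np.array([[adj[np.ix_(members[r], members[c])].mean()
                     for c in blocks] for r in blocks])
print(density)   # block-by-block tie densities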
Example of a Two-Mode Network
Block Model P Matrix – Clustered
Example of a Two-Mode Network
Block Model Matrix – Our C Matrix Clustered
Example Data
• The text data were collected by the Linguistic Data
Consortium in 1997 and were originally used in
Martinez (2002)
– The data consisted of 15,863 news reports
collected from Reuters and CNN from July 1,
1994 to June 30, 1995
– The full lexicon for the text database included
68,354 distinct words
• In all, 313 stopper words are removed
• After denoising and stemming, there remain 45,021
words in the lexicon
– In the examples that I report here, there are
only 503 documents
Example Data
• A simple 503-document corpus we have
worked with has 7,143 denoised and stemmed
entries in its lexicon and 91,709 bigrams.
– Thus the TDM is 7,143 by 503 and the BDM is
91,709 by 503.
– The term vector is 7,143 dimensional and the
bigram vector is 91,709 dimensional.
– The BPM for each document is 91,709 by 91,709
and, of course, very sparse.
• A corpus can easily reach 20,000 documents or
more.
Term-Document Matrix Analysis
Zipf’s Law
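The Zipf plot itself is not reproduced in the transcript. For
reference, Zipf's law states that the frequency f(r) of the r-th most
frequent term in a corpus falls off approximately as a power of its
rank:

f(r) \propto \frac{1}{r^{s}}, \qquad s \approx 1,

so a log-log plot of frequency against rank is roughly a straight line
with slope near -1.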
Term-Document Matrix Analysis
[Figure: graph layout of the term-document network, with stemmed term
labels and document numbers. Topical clusters are visible around terms
such as serb/bosnia/nato, comet/jupit/fragment/telescop, quak/kobe/japan,
oklahoma/bomb/fbi, simpson/dna/prosecut/court, abort/clinic/salvi,
kim/korea/nuclear, iraq/iraqi, and plane/crash/pilot.]
Mixture Models for Clustering
• Mixture models fit a mixture of (normal)
distributions
• We can use the means as centroids of
clusters
• Assign observations to the “closest” centroid
• Possible improvement in computational
complexity
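The formulas on the following slides are not reproduced in the
transcript; the standard finite normal mixture density underlying this
approach is

f(x) = \sum_{j=1}^{K} \pi_j \, \phi(x \mid \mu_j, \Sigma_j), \qquad \pi_j \ge 0, \quad \sum_{j=1}^{K} \pi_j = 1,

where \phi(\cdot \mid \mu_j, \Sigma_j) is the multivariate normal
density and the fitted means \mu_j serve as the cluster centroids.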
Our Proposed Algorithm
• Choose the number of desired clusters.
• Using a normal mixtures model, calculate
the mean vector for each of the
document proto-clusters.
• Assign each document (vector) to a proto-cluster
anchored by the closest mean vector.
– This is a Voronoi tessellation of the
7,143-dimensional term vector space. The Voronoi
tiles correspond to topics for the documents.
• Or assign documents based on maximum
posterior probability.
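A minimal sketch of this pipeline, using scikit-learn's GaussianMixture
as a stand-in for the authors' revised EM; the data, reduced
dimensionality, and cluster count below are invented stand-ins for the
503-document corpus:

# Sketch: fit a normal mixture, then assign documents either by the
# Voronoi rule (closest mean) or by maximum posterior probability.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = rng.random((503, 50))   # stand-in for reduced document vectors
K = 30                      # chosen number of clusters (assumption)

gm = GaussianMixture(n_components=K, covariance_type="diag",
                     random_state=0).fit(X)

# Rule 1: Voronoi assignment to the closest mean vector.
dists = np.linalg.norm(X[:, None, :] - gm.means_[None, :, :], axis=2)
voronoi_labels = dists.argmin(axis=1)

# Rule 2: assignment by maximum estimated posterior probability.
posterior_labels = gm.predict_proba(X).argmax(axis=1)

print((voronoi_labels == posterior_labels).mean())  # agreement rate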
Normal Mixtures
EM Algorithm for Normal Mixtures
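The equations for this slide are not reproduced in the transcript; the
standard EM iteration for normal mixtures, which the revised algorithm
of the later slide modifies, alternates

E-step:  \hat{\tau}_{ij} = \frac{\hat{\pi}_j \, \phi(x_i \mid \hat{\mu}_j, \hat{\Sigma}_j)}{\sum_{k=1}^{K} \hat{\pi}_k \, \phi(x_i \mid \hat{\mu}_k, \hat{\Sigma}_k)}

M-step:  \hat{\pi}_j = \frac{1}{n} \sum_{i=1}^{n} \hat{\tau}_{ij}, \qquad \hat{\mu}_j = \frac{\sum_i \hat{\tau}_{ij} x_i}{\sum_i \hat{\tau}_{ij}}, \qquad \hat{\Sigma}_j = \frac{\sum_i \hat{\tau}_{ij} (x_i - \hat{\mu}_j)(x_i - \hat{\mu}_j)^T}{\sum_i \hat{\tau}_{ij}}

where \hat{\tau}_{ij} is the estimated posterior probability that
document i belongs to cluster j.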
Notation
Considerations about the Normal Density
Revised EM Algorithm
Computational Complexity
Results
Mixture Weights
Cluster Size Distribution
(Based on Voronoi Tessellation)
Cluster Size Distribution
(Based on Maximum Estimated Posterior Probability)
Document by Cluster Plot
(Voronoi)
Document by Cluster Plot
(Maximum Posterior Probability)
Cluster Identities
• Cluster 02: Comet Shoemaker Levy Crashing into
Jupiter.
• Cluster 08: Oklahoma City Bombing.
• Cluster 11: Bosnian-Serb Conflict.
• Cluster 12: Court-Law, O.J. Simpson Case.
• Cluster 15: Cessna Plane Crashed onto the South
Lawn of the White House.
• Cluster 19: American Army Helicopter Emergency
Landing in North Korea.
• Cluster 24: Death of North Korean Leader (Kim Il
Sung) and North Korea’s Nuclear Ambitions.
• Cluster 26: Shootings at Abortion Clinics in Boston.
• Cluster 28: Two Americans Detained in Iraq.
• Cluster 30: Earthquake that Hit Japan.
Acknowledgments
• Dr. Walid Sharabati
• Dr. Angel Martinez
• Army Research Office (Contract
W911NF-04-1-0447)
• Army Research Laboratory (Contract
W911NF-07-1-0059)
• National Institute on Alcohol Abuse and
Alcoholism (Grant Number F32AA015876)
• Isaac Newton Institute
• Patent Pending
Contact Information
Edward J. Wegman
Department of Computational and Data Sciences
MS 6A2, George Mason University
4400 University Drive
Fairfax, VA 22030-4444 USA
Email: [email protected]
Phone: +1 703 993-1691