Information Visualization: Principles, Promise, and
User Interfaces for Information Access
Marti Hearst
IS202, Fall 2006
Outline
• What do people search for?
• Why is supporting search difficult?
• What works in search interfaces?
• When does search result grouping work?
• What about social tagging and search?
What Do People Search For?
(And How?)
A Spectrum of Information Needs
Question/Answer
• What is the typical height of a giraffe?
Browse and Build
• What are some good ideas for
landscaping my client’s yard?
Text Data Mining
• What are some promising untried
treatments for Raynaud’s disease?
Questions and Answers
• What is the height of a typical giraffe?
– The result can be a simple answer, extracted from
existing web pages.
– Can specify with keywords or a natural language
query
• However, most search engines are not set up to handle
questions properly.
• Get different results using a question vs. keywords
Classifying Queries
• Query logs only indirectly indicate a user’s needs
• One set of keywords can mean many different things
– “barcelona”
– “dog pregnancy”
– “taxes”
• Idea: pair up query logs with the search result the user clicked on (sketched below)
– “taxes” followed by a click on tax forms
– Study performed on AltaVista logs
– The author noted afterwards that Yahoo logs appear to have a
different query balance.
Rose & Levinson, Understanding User Goals in Web Search, Proceedings of WWW’04
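A minimal sketch of the pairing idea, in Python; the log entries and the URL-to-category mapping are invented for illustration:

```python
# Sketch: infer query intent by pairing query-log entries with clicked results.
# The log format and the url_to_category mapping are hypothetical illustrations.
from collections import Counter, defaultdict

# (query, clicked_url) pairs, as might be extracted from a search engine log
click_log = [
    ("taxes", "irs.gov/forms"),
    ("taxes", "irs.gov/forms"),
    ("taxes", "en.wikipedia.org/wiki/Tax"),
]

# Hand-assigned categories for known result pages
url_to_category = {
    "irs.gov/forms": "resource (tax forms)",
    "en.wikipedia.org/wiki/Tax": "informational",
}

intents = defaultdict(Counter)
for query, url in click_log:
    intents[query][url_to_category.get(url, "unknown")] += 1

for query, counts in intents.items():
    print(query, "->", counts.most_common(1)[0][0])  # dominant inferred intent
```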
Classifying Web Queries
Navigational (~25%): go to a specific known website.
Examples: aloha airlines; duke university hospital; kelly blue book
Informational (~62%): learn something about a topic, get an answer to a question, get advice or ideas, get a list of links.
Examples: 2004 election data; baseball death and injury; why are metals shiny; phone card
Resource (~13%): obtain a resource (e.g., software, music, knitting patterns).
Examples: kazaa lite; live camera in L.A.
Rose & Levinson, Understanding User Goals in Web Search, Proceedings of WWW’04
What are people looking for?
Check out Google Answers
Why is Supporting Search Difficult?
Why is Supporting Search Difficult?
• Everything is fair game
• Abstractions are difficult to represent
• The vocabulary disconnect
• Users’ lack of understanding of the technology
Everything is Fair Game
• The scope of what people search for is all
of human knowledge and experience.
– Other interfaces are more constrained
(word processing, formulas, etc)
• Interfaces must accommodate human
differences in:
– Knowledge / life experience
– Cultural background and expectations
– Reading / scanning ability and style
– Methods of looking for things (pilers vs. filers)
Abstractions Are Hard to Represent
• Text describes abstract concepts
– Difficult to show the contents of text in a visual or
compact manner
• Exercise:
– How would you show the preamble of the US Constitution
visually?
– How would you show the contents of Joyce’s Ulysses
visually? How would you distinguish it from Homer’s The
Odyssey or McCourt’s Angela’s Ashes?
• The point: it is difficult to show text without using text
Vocabulary Disconnect
• If you ask a set of people to describe a set of
things, there is little overlap in the results.
Lack of Technical Understanding
• Most people don’t understand the underlying
methods by which search engines work.
People Don’t Understand Search Technology
A study of 100 randomly-chosen people found:
– 14% never type a url directly into the address bar
• Several tried to use the address bar, but did it wrong
– Put spaces between words
– Combinations of dots and spaces
– “nursing spectrum.com” “consumer reports.com”
– Several use search form with no spaces
• “plumber’slocal9” “capitalhealthsystem”
– People do not understand the use of quotes
• Only 16% use quotes
• Of these, some use them incorrectly
– Around all of the words, making results too restrictive
– “lactose intolerance –recipies”
» Here the – excludes the recipes
– People don’t make use of “advanced” features
• Only 1 used “find in page”
• Only 2 used Google cache
Hargittai, Classifying and Coding Online Actions, Social Science Computer
Review 22(2), 2004, 210-227.
People Don’t Understand Search Technology
Without appropriate explanations, most of the 14
participants had strong misconceptions about:
• ANDing vs ORing of search terms
– Some assumed the ANDing search engine indexed a smaller
collection; most had no explanation at all
• For empty results for query “to be or not to be”
– 9 of 14 could not give an explanation that remotely
resembled stop word removal
• For term order variation “boat fire” vs. “fire boat”
– Only 5 out of 14 expected different results
– Understanding was vague, e.g.:
» “Lycos separates the two words and searches for the
meaning, instead of what’re your looking for. Google
understands the meaning of the phrase.”
Muramatsu & Pratt, “Transparent Queries: Investigating Users’
Mental Models of Search Engines,” SIGIR 2001.
What Works in Search Interfaces?
What Works for Search Interfaces?
• Query term highlighting
– in results listings
– in retrieved documents
• Sorting of search results according to important criteria
(date, author)
• Grouping of results according to well-organized category
labels (see Flamenco)
• DWIM only if highly accurate:
– Spelling correction/suggestions
– Simple relevance feedback (more-like-this)
– Certain types of term expansion
• So far: not really visualization
Hearst et al: Finding the Flow in Web Site Search, CACM 45(9), 2002.
Highlighting Query Terms
• Boldface or color
• Adjacency of terms with relevant context is a
useful cue.
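For concreteness, a minimal sketch of term highlighting (the function and data are invented; real engines do this inside their own snippet generators):

```python
# Sketch: boldface query terms in a result snippet (case-insensitive, whole words).
import re

def highlight(snippet, query_terms):
    for term in query_terms:
        # \b keeps us from highlighting inside longer words
        pattern = re.compile(r"\b(%s)\b" % re.escape(term), re.IGNORECASE)
        snippet = pattern.sub(r"<b>\1</b>", snippet)
    return snippet

print(highlight("Giraffe height varies; an adult giraffe stands about 5 m tall.",
                ["giraffe", "height"]))
```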
Highlighted query term hits using the Google toolbar
[Screenshot: “US” and “Blackout” found; “PGA” and “Microsoft” not highlighted]
Small Details Matter
• UIs for search especially require great
care in small details
– In part due to the text-heavy nature of
search
– A tension between more information and
introducing clutter
• How and where to place things is important
– People tend to scan or skim
– Only a small percentage reads instructions
Small Details Matter
• UIs for search especially require endless tiny adjustments
– In part due to the text-heavy nature of search
• Example:
– In an earlier version of the Google Spellchecker, people
didn’t always see the suggested correction
• Used a long sentence at the top of the page:
“If you didn’t find what you were looking for …”
• People complained they got results, but not the right results.
• In reality, the spellchecker had suggested an appropriate
correction.
Interview with Marissa Mayer by Mark Hurst:
http://www.goodexperience.com/columns/02/1015google.html
Small Details Matter
• The fix:
– Analyzed logs, saw people didn’t see the correction:
• clicked on first search result,
• didn’t find what they were looking for (came right back to the
search page),
• scrolled to the bottom of the page, did not find anything
• and then complained directly to Google
– Solution was to repeat the spelling suggestion at the
bottom of the page.
• More adjustments:
– The message is shorter, and different on the top vs. the
bottom
Interview with Marissa Mayer by Mark Hurst:
http://www.goodexperience.com/columns/02/1015google.html
Using DWIM
• DWIM – Do What I Mean
– Refers to systems that try to be “smart” by guessing users’
unstated intentions or desires
• Examples:
– Automatically augment my query with related terms
– Automatically suggest spelling corrections
– Automatically load web pages that might be relevant to
the one I’m looking at
– Automatically file my incoming email into folders
– Pop up a paperclip that tells me what kind of help I need.
• THE CRITICAL POINT:
– Users love DWIM when it really works
– Users DESPISE it when it doesn’t
DWIM that Works
• Amazon’s “customers who bought X also bought Y”
– And many other recommendation-related features
DWIM Example:
Spelling Correction/Suggestion
• Google’s spelling suggestions are highly
accurate
• But this wasn’t always the case.
– Google introduced a version that wasn’t very
accurate. People hated it. They pulled it. (According
to a talk by Marissa Mayer of Google.)
– Later they introduced a version that worked well. People
love it.
• But don’t get too pushy.
– For a while if the user got very few results, the page was
automatically replaced with the results of the spelling
correction
– This was removed, presumably due to negative responses
Information from a talk by Marissa Mayer of Google
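A toy sketch of the mechanics of a suggester, using Python’s difflib; this is an illustrative stand-in only, certainly not Google’s method:

```python
# Sketch: naive spelling suggestion by fuzzy match against a word list.
# Production spellcheckers use query logs and much richer models.
import difflib

vocabulary = ["giraffe", "guitar", "garage", "mirage"]
query = "girafe"

suggestions = difflib.get_close_matches(query, vocabulary, n=1, cutoff=0.6)
if suggestions and suggestions[0] != query:
    print("Did you mean:", suggestions[0])  # -> Did you mean: giraffe
```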
Query Reformulation
• Query reformulation:
– After receiving unsuccessful results, users modify
their initial queries and submit new ones intended to
more accurately reflect their information needs.
• Web search logs show that searchers often
reformulate their queries
– A study of 985 Web user search sessions found
• 33% went beyond the first query
• Of these, ~35% retained the same number of terms
while 19% had 1 more term and 16% had 1 fewer
Spink, Jansen & Ozmutlu, Use of query reformulation and relevance
feedback by Excite users, Internet Research 10(4), 2001
Query Reformulation
• Many studies show that if users engage in
relevance feedback, the results are much
better.
– In one study, participants did 17-34% better with RF
– They also did better if they could see the RF terms
than if the system did it automatically (DWIM)
• But the effort required for doing so is usually a
roadblock.
Koenemann & Belkin, A Case for Interaction: A Study of Interactive
Information Retrieval Behavior and Effectiveness, CHI’96
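For concreteness, a sketch of classic Rocchio-style relevance feedback; the alpha/beta/gamma weights are conventional defaults, not values from the study above:

```python
# Sketch: Rocchio relevance feedback.  The new query vector moves toward
# relevant documents and away from non-relevant ones.  Weights are the
# conventional alpha/beta/gamma defaults, not values from the cited study.
import numpy as np

def rocchio(query_vec, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    q = alpha * query_vec
    if len(relevant):
        q = q + beta * np.mean(relevant, axis=0)
    if len(nonrelevant):
        q = q - gamma * np.mean(nonrelevant, axis=0)
    return np.clip(q, 0, None)  # negative term weights are usually dropped

q = np.array([1.0, 0.0, 0.0])            # query mentions term 1 only
rel = np.array([[0.9, 0.8, 0.0]])        # relevant doc also uses term 2
nonrel = np.array([[0.0, 0.0, 0.9]])     # non-relevant doc uses term 3
print(rocchio(q, rel, nonrel))           # term 2 gains weight, term 3 stays 0
```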
Query Reformulation
• What happens when the web search engine
suggests new terms?
• Web log analysis study using the Prisma term
suggestion system:
Anick, Using Terminological Feedback for Web Search Refinement –
A Log-based Study, SIGIR’03.
Query Reformulation Study
• Feedback terms were displayed in 15,133 user sessions.
– Of these, 14% used at least one feedback term
– For all sessions, 56% involved some degree of query
refinement
• Within this subset, use of the feedback terms was 25%
– By user id, ~16% of users applied feedback terms at least
once on any given day
• Looking at a 2-week session of feedback users:
– Of the 2,318 users who used it once, 47% used it again in
the same 2-week window.
• Comparison was also done to a baseline group that was
not offered feedback terms.
– Both groups ended up making a page-selection click at the
same rate.
Anick, Using Terminological Feedback for Web Search Refinement –
A Log-based Study, SIGIR’03.
Query Reformulation Study
Anick, Using Terminological Feedback for Web Search Refinement –
A Log-based Study, SIGIR’03.
Query Reformulation Study
• Other observations
– Users prefer refinements that contain the initial
query terms
– Presentation order does have an influence on term
uptake
Anick, Using Terminological Feedback for Web Search Refinement –
A Log-based Study, SIGIR’03.
Query Reformulation Study
• Types of refinements
Anick, Using Terminological Feedback for Web Search Refinement –
A Log-based Study, SIGIR’03.
Prognosis: Query Reformulation
• Researchers have always known it can be helpful, but
the methods proposed for user interaction were too
cumbersome
– Had to select many documents and then do feedback
– Had to select many terms
– Was based on statistical ranking methods which are hard
for people to understand
• Indirect Relevance Feedback can improve general
ranking (see section on social search)
Usability of Grouping Search Results
The Need to Group
• Interviews with lay users often reveal a
desire for better organization of retrieval
results
• Useful for suggesting where to look next
– People prefer links over generating search
terms*
– But only when the links are for what they want
*Ojakaar and Spool, Users Continue After Category Links, UIETips Newsletter,
http://world.std.com/~uieweb/Articles/, 2001
Conundrum
• Everyone complains about disorganized
search results.
• There are lots of ideas about how to
organize them.
• Why don’t the major search engines do
so?
• What works; what doesn’t?
Different Types of Grouping
Clusters (document-similarity based; polythetic): Scatter/Gather, Grouper
Keyword sharing (any doc containing the keyword is in the group; monothetic): Findex, DisCover
Single category: SWISH, DynaCat
Multiple (faceted) categories: Flamenco, Phlat/Stuff I’ve Seen

Monothetic vs. polythetic after Kummamuru et al., 2004
Clusters
• Fully automated
• Potential benefits:
– Find the main themes in a set of documents
• Potentially useful if the user wants a summary
of the main themes in the subcollection
• Potentially harmful if the user is interested in
less dominant themes
– More flexible than pre-defined categories
• There may be important themes that have not
been anticipated
– Disambiguate ambiguous terms
• ACL
– Clustering retrieved documents tends to group
those relevant to a complex query together
Hearst, Pedersen, Revisiting the Cluster Hypothesis, SIGIR’96
Categories
• Human-created
– But often automatically assigned to items
• Arranged in hierarchy, network, or facets
– Can assign multiple categories to items
– Or place items within categories
• Usually restricted to a fixed set
– So help reduce the space of concepts
• Intended to be readily understandable
– To those who know the underlying domain
– Provide a novice with a conceptual structure
• There are many already made up!
Cluster-based Grouping
Document Self-similarity
(Polythetic)
Scatter/Gather Clustering
• Developed at PARC in the late 80’s/early 90’s
• Top-down approach
– Start with k seeds (documents) to represent k
clusters
– Each document is assigned to the cluster with the most
similar seed (sketched below)
• To choose the seeds:
– Cluster in a bottom-up manner
– Hierarchical agglomerative clustering
• Can recluster a cluster to produce a hierarchy of
clusters
Pedersen, Cutting, Karger, Tukey, Scatter/Gather: A Cluster-based
Approach to Browsing Large Document Collections, SIGIR 1992
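A minimal sketch of this scheme, assuming scikit-learn; the corpus and k are toy values, and the details differ from the original system:

```python
# Sketch of Scatter/Gather-style clustering: choose k seeds via bottom-up
# (agglomerative) clustering, then assign every document to the most
# similar seed.  Corpus and k are toy values, not from the original paper.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics.pairwise import cosine_similarity

docs = ["electric car battery technology", "car accident control drive",
        "honda toyota import rates", "japan export defect recall",
        "fuel study air bag death"]
k = 2

X = TfidfVectorizer().fit_transform(docs).toarray()

# Bottom-up step: hierarchical agglomerative clustering picks k seed groups
labels = AgglomerativeClustering(n_clusters=k).fit_predict(X)
seeds = np.array([X[labels == c].mean(axis=0) for c in range(k)])

# Top-down step: each document goes to the cluster with the most similar seed
assignment = cosine_similarity(X, seeds).argmax(axis=1)
print(assignment)  # recluster any cluster the same way to get a hierarchy
```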
The Scatter/Gather Interface
Two Queries: Two Clusterings
Query: AUTO, CAR, ELECTRIC
8 control drive accident …
25 battery california technology …
48 import j. rate honda toyota …
16 export international unit japan
3 service employee automatic …

Query: AUTO, CAR, SAFETY
6 control inventory integrate …
10 investigation washington …
12 study fuel death bag air …
61 sale domestic truck import …
11 japan export defect unite …
The main differences are the clusters that are central to the query
Scatter/Gather Evaluations
• Can be slower for finding answers than linear search!
• The clusters can be difficult to understand.
• Results are not consistent across reclusterings.
• However, the clusters do group relevant documents together.
• Participants noted they were useful for eliminating irrelevant groups.
Visualizing Clustering Results
• Use clustering to map the entire huge multidimensional
document space into a large number of small clusters.
• Use dimension reduction to project these clusters onto
a 2D/3D graphical representation.
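A minimal sketch of that pipeline, assuming scikit-learn: cluster the documents, then project the cluster centroids to 2D (toy data; systems like Wise et al.’s used far larger collections and other projections):

```python
# Sketch: map documents into clusters, then project the cluster centroids
# to 2D with truncated SVD for plotting.  All data here is toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.decomposition import TruncatedSVD

docs = ["stock market rally", "bond yields fall", "new cancer treatment",
        "gene therapy trial", "election poll results", "senate vote today"]

X = TfidfVectorizer().fit_transform(docs)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Reduce the centroid matrix to two dimensions for a 2D scatter plot
xy = TruncatedSVD(n_components=2).fit_transform(km.cluster_centers_)
for i, (x, y) in enumerate(xy):
    print(f"cluster {i}: ({x:.2f}, {y:.2f})")
```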
Clustering Visualizations
(images from Wise et al. 95)
Kohonen Feature Maps
(Lin 92, Chen et al. 97)
Are visual clusters useful?
• Four Clustering Visualization Usability Studies
• Conclusions:
– Huge 2D maps may be an inappropriate focus for
information retrieval
• cannot see what the documents are about
• the space is difficult to browse for IR purposes
• (tough to visualize abstract concepts)
– Perhaps more suited for pattern discovery and gist-like overviews.
Term-based Grouping
Single Term from Document
Characterizes the Group
(Monothetic)
Findex, Kaki & Aula
• Two innovations:
– Used a very simple method to create the
groupings, so the grouping is not opaque to users (sketched below)
• Based on frequent keywords
• Doc is in category if it contains the keyword
• Allows docs to appear in multiple categories
– Did a naturalistic, longitudinal study of use
• Analyzed the results in interesting ways
Kaki and Aula: “Findex: Search Result Categories Help Users when Document Ranking Fails”, CHI ‘05
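A minimal sketch of the monothetic idea as described (frequent keywords become categories; a document joins a category iff it contains the keyword); the stop-word list and cutoff are assumptions:

```python
# Sketch of Findex-style monothetic grouping: the most frequent keywords in
# the result set become categories, and a document is in a category iff it
# contains that keyword.  Stop-word list and top-k cutoff are illustrative.
from collections import Counter

results = ["apple pie recipe", "apple iphone review",
           "apple orchard tour", "iphone battery tips"]
stopwords = {"the", "a", "of"}

words = Counter(w for doc in results for w in set(doc.split()) if w not in stopwords)
categories = [w for w, _ in words.most_common(3)]  # top-k keywords as categories

groups = {c: [doc for doc in results if c in doc.split()] for c in categories}
for cat, docs in groups.items():
    print(cat, "->", docs)  # a doc may appear under several categories
```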
Study Design
• 16 academics
– 8F, 8M
– No computer science background
– Frequent searchers
• 2 months of use
• Special Log
– 3099 queries issued
– 3232 results accessed
• Two questionnaires (at start and end)
• Google as search engine; rank order
retained
[Screenshots: the interface after 1 week and after 2 months of use]
Kaki & Aula Key Findings
(all significant)
• Category use takes almost 2 times longer than linear
– First doc selected in 24.4 sec vs 13.7 sec
• No difference in average number of docs opened per
search (1.05 vs. 1.04)
• However, when categories used, users select >1 doc in
28.6% of the queries (vs 13.6%)
• The number of searches with zero result selections is lower
when the categories are used
• Median position of the selected doc:
– Using categories: 22 (sd=38)
– Just ranking: 2 (sd=8.6)
Kaki & Aula Key Findings
• Category Selections
– 1915 categories selections in 817 searches
– Used in 26.4% of the searches
– During the last 4 weeks of use, the proportion of searches
using categories stayed above the average (27-39%)
– When categories used, selected 2.3 cats on average
– Labels of selected cats used 1.9 words on average
(average in general was 1.4 words)
– Out of 15 cats (default):
• First quartile at 2nd cat
• Median at 5th
• Third quartile at 9th
Kaki & Aula Survey Results
• Subjective opinions improved over time
• Realization that categories useful only some of the time
• Freeform responses indicate that categories useful when
queries vague, broad or ambiguous
• Second survey indicated that people felt that their
search habits began to change
– Consider query formulation less than before (27%)
– Use less precise search terms (45%)
– Use less time to evaluate results (36%)
– Use categories for evaluating results (82%)
Conclusions from Kaki Study
• Simplicity of category assignment made
groupings understandable
– (my view, not stated by them)
• Keyword-based Categories:
– Are beneficial when result ranking fails
– Find results lower in the ranking
– Reduce empty results
– May make it easier to access multiple results
– Availability changed user querying behavior
Highlight, Wu et al.
• Select terms from document summaries,
organize into a subsumption hierarchy.
• Highlight the terms in the retrieved
documents.
Wu, Shankar, Chen, Finding More Useful Information Faster from Web Search Results,
CIKM ‘03
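The slide doesn’t give the construction details; one common recipe is Sanderson & Croft-style subsumption (which may or may not be what Wu et al. used): term x subsumes y when documents containing y almost always contain x, but not vice versa. A sketch under that assumption:

```python
# Sketch of a subsumption hierarchy in the style of Sanderson & Croft:
# x subsumes y if P(x|y) >= 0.8 and P(y|x) < 1.  Threshold and data are toy.
from itertools import permutations

doc_terms = [{"music", "jazz"}, {"music", "jazz", "sax"},
             {"music", "rock"}, {"music"}]

def p(x, given):
    with_given = [d for d in doc_terms if given in d]
    return sum(1 for d in with_given if x in d) / len(with_given)

vocab = set().union(*doc_terms)
edges = [(x, y) for x, y in permutations(vocab, 2)
         if p(x, given=y) >= 0.8 and p(y, given=x) < 1]
print(edges)  # e.g. ('music', 'jazz'): "music" subsumes "jazz"
```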
Category-based Grouping
General Categories
Domain-Specific Categories
SWISH, Chen & Dumais
• 18 participants, 30 tasks, within subjects
• Significant (and large, 50%) timing differences
in favor of categories
• For queries where the results are in the first
page, the differences are much smaller.
• Strong subjective preferences.
• BUT: the baseline was quite poor and the
queries were very cooked.
– Very small category set (13 categories)
– Subhierarchy wasn’t used.
Chen, Dumais, Bringing Order to the Web: Automatically Categorizing Search Results CHI 2000
Test queries, Chen & Dumais
Information Need | Pre-specified Query
giants ridge ski resort | “giants”
book about "numerical recipes" for computer software | “recipes”
information about Indian motorcycles | “Indian”
the home page for the band "They Might Be Giants" | “giants”
the home page for the basketball team, the Washington Wizards | “washington”
Chen, Dumais, Bringing Order to the Web, Automatically Categorizing Search Results. CHI 2000
Revisiting the Study, Dumais, Cutrell,
Chen
• This followup study reveals that the baseline
had been unfairly weakened.
• The speedup isn’t so much from the category
labels as the grouping of similar documents.
• For queries where the answer is in the first
page, the category effects are not very strong.
DynaCat, Pratt, Hearst, and Fagan.
• Medical Domain
• Decide on important question types in advance
– What are the adverse effects of drug D?
– What is the prognosis for treatment T?
• Make use of MeSH categories
• Retain only those types of categories known to
be useful for this type of query.
Pratt, W., Hearst, M, and Fagan, L. A Knowledge-Based Approach to Organizing Retrieved Documents. AAAI-99
DynaCat, Pratt, Hearst, & Fagan
Pratt, W., Hearst, M, and Fagan, L. A Knowledge-Based Approach to Organizing Retrieved Documents. AAAI-99
DynaCat Study, Pratt, Hearst & Fagan
• Design
– Three queries
– 24 cancer patients
– Compared three interfaces
• ranked list, clusters, categories
• Results
– Participants strongly preferred categories
– Participants found more answers using categories
– Participants took same amount of time with all three
interfaces
Pratt, W., Hearst, M, and Fagan, L. A Knowledge-Based Approach to Organizing
Retrieved Documents. AAAI-99
DynaCat study, Pratt et al.
Faceted Category Grouping
Multiple Categories per Document
Search Usability Design Goals
1. Strive for Consistency
2. Provide Shortcuts
3. Offer Informative Feedback
4. Design for Closure
5. Provide Simple Error Handling
6. Permit Easy Reversal of Actions
7. Support User Control
8. Reduce Short-term Memory Load
From Shneiderman, Byrd, & Croft, Clarifying Search, DLIB Magazine, Jan 1997. www.dlib.org
How to Structure Information for
Search and Browsing?
• Hierarchy is too rigid
• Hierarchical faceted
metadata:
– A useful middle ground
• KL-One is too complex
The Problem with Hierarchy
• Inflexible
– Force the user to start with a particular category
– What if I don’t know the animal’s diet, but the
interface makes me start with that category?
• Wasteful
– Have to repeat combinations of categories
– Makes for extra clicking and extra coding
• Difficult to modify
– To add a new category type, must duplicate it
everywhere or change things everywhere
The Idea of Facets
• Facets are a way of labeling data
– A kind of Metadata (data about data)
– Can be thought of as properties of items
• Facets vs. Categories
– Items are placed INTO a category system
– Multiple facet labels are ASSIGNED TO items
The Idea of Facets
• Create INDEPENDENT categories (facets)
– Each facet has labels (sometimes arranged in a hierarchy)
• Assign labels from the facets to every item
– Example: recipe collection
Ingredient: Chicken, Bell Pepper
Cooking Method: Stir-fry, Curry
Course: Main Course
Cuisine: Thai
Using Facets
• Now there are multiple ways to get to each
item
Facet hierarchies:
– Desserts: Cakes; Cookies; Dairy (Ice Cream, Sherbet, Flan)
– Fruits: Cherries; Berries (Blueberries, Strawberries); Bananas; Pineapple
– Preparation Method: Fry; Saute; Boil; Bake; Broil; Freeze

One item can be reached via: Fruit > Pineapple; Dessert > Cake; Preparation > Bake
Another via: Dessert > Dairy > Sherbet; Fruit > Berries > Strawberries; Preparation > Freeze
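A minimal sketch of the multiple-paths idea; the items and labels are invented for illustration:

```python
# Sketch: items labeled along independent facets; a query is a conjunction
# of facet constraints, so there are many paths to the same item.
# The recipes and labels are invented for illustration.
items = [
    {"name": "pineapple upside-down cake",
     "Fruit": "Pineapple", "Dessert": "Cake", "Preparation": "Bake"},
    {"name": "strawberry sherbet",
     "Fruit": "Berries > Strawberries", "Dessert": "Dairy > Sherbet",
     "Preparation": "Freeze"},
]

def select(**constraints):
    return [it["name"] for it in items
            if all(it.get(f, "").startswith(v) for f, v in constraints.items())]

print(select(Preparation="Freeze"))              # via the Preparation facet
print(select(Fruit="Berries", Dessert="Dairy"))  # same item, different path
```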
Flamenco Usability Studies
• Usability studies done on 3 collections:
– Recipes: 13,000 items
– Architecture Images: 40,000 items
– Fine Arts Images: 35,000 items
• Conclusions:
– Users like and are successful with the
dynamic faceted hierarchical metadata,
especially for browsing tasks
– Very positive results, in contrast with
studies on earlier iterations.
Yee, K-P., Swearingen, K., Li, K., and Hearst, M.,
Faceted Metadata for Image Search and Browsing, in CHI 2003.
Flamenco Study Post-Test
Comparison
Which Interface Preferable For:              Baseline | Faceted
Find images of roses                               15 | 16
Find all works from a given period                  2 | 30
Find pictures by 2 artists in same media            1 | 29

Overall Assessment                           Baseline | Faceted
More useful for your tasks                          4 | 28
Easiest to use                                      8 | 23
Most flexible                                       6 | 24
More likely to result in dead ends                 28 | 3
Helped you learn more                               1 | 31
Overall preference                                  2 | 29

Yee, K-P., Swearingen, K., Li, K., and Hearst, M.,
Faceted Metadata for Image Search and Browsing, in CHI 2003.
The Advantages of Facets
• Lets the user decide how to start, and how to
explore and group.
• After refinement, categories that are not relevant to
the current results disappear.
• Seamlessly integrates keyword search with the
organizational structure.
• Very easy to expand out (loosen constraints)
• Very easy to build up complex queries.
Hearst, M., Elliott, A., English, J., Sinha, R., Swearingen, K., and Yee, P.,
Finding the Flow in Web Site Search, Communications of the ACM, 45 (9), September 2002, pp.42-49
Advantages of Facets
• Can’t end up with empty result sets
– (except with keyword search)
• Helps avoid feelings of being lost.
• Easier to explore the collection.
– Helps users infer what kinds of things are in the collection.
– Evokes a feeling of “browsing the shelves”
• Is preferred over standard search for collection
browsing in usability studies.
– (Interface must be designed properly)
Hearst, M., Elliott, A., English, J., Sinha, R., Swearingen, K., and Yee, P.,
Finding the Flow in Web Site Search, Communications of the ACM, 45 (9), September 2002, pp.42-49
Advantages of Facets
• Seamless to add new facets and subcategories
• Seamless to add new items.
• Helps with “categorization wars”
– Don’t have to agree exactly where to place
something
• Interaction can be implemented using a
standard relational database (see the sketch below).
• May be easier for automatic categorization
Hearst, M., Elliott, A., English, J., Sinha, R., Swearingen, K., and Yee, P.,
Finding the Flow in Web Site Search, Communications of the ACM, 45 (9), September 2002, pp.42-49
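A minimal sketch of the relational-database claim, using SQLite; the schema and data are invented:

```python
# Sketch: faceted navigation over a standard relational database (SQLite).
# Schema and data are invented to illustrate the claim on the slide.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE labels (item_id INTEGER, facet TEXT, label TEXT);
""")
db.executemany("INSERT INTO items VALUES (?, ?)",
               [(1, "pad thai"), (2, "green curry"), (3, "flan")])
db.executemany("INSERT INTO labels VALUES (?, ?, ?)",
               [(1, "Cuisine", "Thai"), (2, "Cuisine", "Thai"),
                (1, "Course", "Main"), (2, "Course", "Main"),
                (3, "Course", "Dessert")])

# For items matching Cuisine=Thai, count the items under each remaining label
rows = db.execute("""
    SELECT l2.facet, l2.label, COUNT(*) FROM labels l1
    JOIN labels l2 ON l1.item_id = l2.item_id AND l2.facet != 'Cuisine'
    WHERE l1.facet = 'Cuisine' AND l1.label = 'Thai'
    GROUP BY l2.facet, l2.label
""").fetchall()
print(rows)  # e.g. [('Course', 'Main', 2)]
```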
Summary: Good Ideas for Evaluation
• Longitudinal studies of real use
• Match the participants to the content of the
collection and the tasks
• Test against a strong baseline
Summary: Evaluation Problems
• Bias participants towards a system
– “Try our interface” versus linear view
• Tailor tasks unrealistically to benefit the target
interface
• Impoverish the baseline relative to the test
condition
• Conflate test conditions
Summary: Grouping Search Results
Grouping search results seems beneficial in two
circumstances:
1. General web search, using transparent labeling
(monothetic terms) or category labels rather than
cluster centroids.
Effects:
• Works primarily on ambiguous queries
– (so used a fraction of the time)
• Promotes relevant results up from below the first page of hits
– So it is important to group the related items together visually
• Users tend to select more documents than with linear search
• May work even better with meta-search
• Positive subjective responses (small studies)
• Visualization does not work.
Summary: Grouping Search Results
Grouping search results seems beneficial
in two circumstances:
2. Collection navigation with faceted categories
• Multiple angles better than single categories
• “Searchers” turn into “browsers”
• Becoming commonplace in e-commerce, digital
libraries, and other kinds of collections
• Extends naturally to tags.
• Positive subjective responses (small studies)
Social Tagging and Search
[Diagram: Topical Metadata and Search together support structured, flexible navigation]
Problem with Metadata-Oriented Approaches
Getting the metadata!
[Diagram: Recorded Human Interaction feeding Topical Metadata and Search: social question answering, click-through ranking, inferred recommendations]
Human Real-time Question
Answering
• More popular in Korea than algorithmic search
– Maybe fewer good web pages?
– Maybe a more social society?
• Several examples in US:
– Yahoo Answers, recently released and successful
– wondir.com
– answerbag.com
Yahoo Answers
(also answerbag.com, wondir.com, etc)
Yahoo Answers appearing in search results
answerbag.com
Using User Behavior as Implicit
Preferences
• Search click-through has been experimentally shown to
boost search rankings for top results (sketched below)
– Joachims et al. ‘05, Agichtein et al. ‘06
– Works ok even if non-relevant documents examined
– Best in combination with sophisticated search algorithms
– Doesn’t work well for ambiguous queries
• Aggregates of movie and book selections comprise
implicit recommendations
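A minimal sketch of aggregating clicks as implicit relevance votes and blending them into a ranking; the 0.8/0.2 blend is an invented heuristic, not a published method:

```python
# Sketch: aggregate click-through counts as implicit preference signals and
# blend them with a base relevance score.  The 0.8/0.2 blend is invented.
from collections import Counter

clicks = Counter({("jaguar", "jaguar.com"): 90,
                  ("jaguar", "en.wikipedia.org/wiki/Jaguar"): 30})

def rerank(query, scored_results):
    total = sum(n for (q, _), n in clicks.items() if q == query) or 1
    return sorted(scored_results, key=lambda r: 0.8 * r[1]
                  + 0.2 * clicks[(query, r[0])] / total, reverse=True)

base = [("en.wikipedia.org/wiki/Jaguar", 0.72), ("jaguar.com", 0.70)]
print(rerank("jaguar", base))  # clicks nudge jaguar.com above the wiki page
```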
[Diagram: Social Tagging (photos, bookmarks) and game-based tagging added as sources of Topical Metadata for Search]
Social Tagging
• Metadata assignment without all the bother
• Spontaneous, easy, and tends towards single terms
Issues with Photo and Web Link Tagging
• There is a strong personal component
– Marking for my own reminders
– Marking for my circle of friends
• There is also a strong social component
– Try to promote certain tags to make them more
popular, or post to popular tags to see your
influence rise
Tagging Games
• Assigning metadata is fun! (ESP game, von
Ahn)
– No need for reputation system, etc.
• Pay people to do it
– MyCroft (iSchool student project)
• Drawback: least common denominator labels
• Experts already label their own data or that
about which they have expertise
– E.g., protein function
– Wikipedia
[Diagram: Topical Metadata and Search fed by Recorded Human Interaction (social question answering, click-through ranking, inferred recommendations) and Social Tagging (photos, bookmarks; game-based tagging), with one remaining piece marked “????”]
Expert-Oriented Tagging in Search
• Already happening at Google co-op
• Shows up in certain types of search results
Promoting Expertise-Oriented Tagging
• Research area: User Interfaces
– To make rapid-feedback suggestions of pre-established
tags
• Like type-ahead queries (a sketch follows below)
– To incentivize labeling and make it fun
– To allow the personal aspects to shine through
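A minimal sketch of rapid-feedback tag suggestion by prefix match over a sorted vocabulary; the tags are invented, and a real system would also rank by popularity:

```python
# Sketch: type-ahead suggestion of pre-established tags by prefix match
# on a sorted vocabulary, using binary search.  The vocabulary is invented.
import bisect

tags = sorted(["botany", "bookmarks", "boats", "housecats", "houseplants"])

def suggest(prefix, limit=5):
    i = bisect.bisect_left(tags, prefix)
    out = []
    while i < len(tags) and tags[i].startswith(prefix) and len(out) < limit:
        out.append(tags[i])
        i += 1
    return out

print(suggest("bo"))     # ['boats', 'bookmarks', 'botany']
print(suggest("house"))  # ['housecats', 'houseplants']
```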
Promoting Expertise-Oriented Tagging
• Research area: NLP Algorithms
– (We have an algorithm to build facets from text)
– To convert tags into facet hierarchies
– To capture implicit labeling information
Promoting Expertise-Oriented Tagging
• Research area: Digital infrastructure
• Extending tagging games
• Build an architecture that channels specialized
subproblems to appropriate experts
– We now know there is a green plant in an office; direct
this to the botany > houseplants experts
Promoting Expertise-Oriented Tagging
• Research area: economics and sociology
– What are the right incentive structures?
Using Implicit Preferences
• Extend implicit recommendation technology to
online catalog use
Final Words
• User interfaces for search remain a
fascinating and challenging field
• Search has taken a primary role in the web
and internet business
• Thus, we can expect fascinating
developments, and maybe some
breakthroughs, in the next few years!