Week 5 Lecture 2: The Negotiation Framework
Computational Models of Discourse Analysis
Carolyn Penstein Rosé
Language Technologies Institute / Human-Computer Interaction Institute
Warm-Up
Get ready to discuss the following:
What did you notice was similar and different between the DA/Speech-Act paradigm of Levinson and the Negotiation framework of Martin & Rose?
What answers would Martin & Rose propose to the objections Levinson presented about DA, such as non-falsifiability or the looseness of conditional relevance constraints?
To help you get started: here’s a quote from Levinson about DA theorists. Do these all apply to the Negotiation framework?
* No one posted. Are you still ready for the discussion?
How is conversation locally managed?
Two main issues:
When do we speak?
Given what was spoken last, what types of contributions are conditionally relevant now?
Monday we’ll look at a computational approach to modeling this (paper published last year at the most prestigious language technologies conference).
Today we’ll hear Elijah talk about a computational approach to this problem (paper submitted this year to the most prestigious language technologies conference).
What should you get out of this?
Learn to see how state-of-the-art approaches computationalize constructs from theory more or less faithfully
Learn to evaluate computationalizations in light of theory
Chicken and Egg…
Operationalization and Computationalization
Main issue for this week: exploring sequencing and linking between speech acts in conversation
* Where do the ordering constraints come from? Is it the language? Or is it what is behind the language (e.g., intentions, task structure)? If the latter, how do we computationalize that?
The nature of what we are modeling (SFL fits here)
What we can know about it and how certain we can be (rules, like speech acts)
How we learn what we know (qualitative observations, anthropology style)
The Negotiation System
* They consider the 13 leaf nodes as speech acts.
* This indicates an adjacency pair.
How do we recognize speech acts?
Form-function correspondences: mood, modality markers, tense/finiteness, temporal adverbials, person of subject (see the sketch below)
Discourse markers
Or at least linguistic tests (could a discourse marker indicating a speech act be added?)
Indirect speech acts are a form of grammatical metaphor
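As a concrete illustration, here is a minimal sketch of how such form-function cues could be turned into recognition rules. The cue lists and output labels are simplifications of my own, not Martin & Rose's categories or any system from the readings.

# Toy rule-based tagger built on form-function cues (mood read off word order
# and punctuation, modal auxiliaries, wh-words). Cue lists and labels are
# illustrative simplifications, not categories from the readings.

WH_WORDS = {"who", "what", "where", "when", "why", "how", "which"}
MODALS = {"can", "could", "would", "will", "shall", "may", "might"}
AUXILIARIES = MODALS | {"do", "does", "did", "is", "are", "was", "were", "have", "has"}

def classify_move(utterance: str) -> str:
    """Guess a coarse speech-act label from surface form alone."""
    text = utterance.strip()
    words = text.lower().rstrip("?!.").split()
    if not words:
        return "unknown"
    first = words[0]
    if text.endswith("?"):
        # Interrogative mood: read the label off the fronted element.
        if first in MODALS and len(words) > 1 and words[1] == "you":
            # "Could you ...?": interrogative form with directive function,
            # i.e. an indirect speech act (grammatical metaphor).
            return "indirect-request"
        if first in WH_WORDS:
            return "wh-question"
        if first in AUXILIARIES:
            return "polar-question"
        return "question"
    if text.endswith("!") or first == "please":
        return "command"
    return "statement"

if __name__ == "__main__":
    for u in ("Can I ask you a question?", "Where is the chocolate?", "In the fridge."):
        print(f"{u!r} -> {classify_move(u)}")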
More Insight into the Grammar of a Speech Act and Responses
The Structure of an Exchange
* Indicates simultaneous choices.
The Structure of an Exchange
Adjacency-pair analysis:
Q: S1: Can I ask you a question?
Q: S2: What did you say?
A: S1: I said, “Can I ask you a question?”
A: S2: Oh, Sure.
Q: S1: Where is the chocolate?
A: S2: In the fridge.
Negotiation analysis:
dK2: S1: Can I ask you a question?
tr: S2: What did you say?
rtr: S1: I said, “Can I ask you a question?”
dK1: S2: Sure.
K2: S1: Where is the chocolate?
K1: S2: In the fridge.
Adjacency pairs: a first pair part followed by a second pair part, possibly with embedded pairs.
Negotiation: one core move, possibly preceded by a secondary move. There can also be preparatory and follow-up moves; these are related to the core move, not embedded exchanges (the contrast is sketched below).
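To make the structural contrast concrete, here is a minimal sketch representing the same chocolate exchange both ways: as nested adjacency pairs and as one Negotiation exchange organized around a single core move. The class and field names, and Python itself, are my own illustrative choices, not notation from the readings.

from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical data structures; class and field names are illustrative only.

@dataclass
class AdjacencyPair:
    """DA/CA-style view: a first pair part, its second pair part, and any
    insertion sequences embedded between them."""
    first_pair_part: str
    second_pair_part: Optional[str] = None
    embedded: List["AdjacencyPair"] = field(default_factory=list)

@dataclass
class NegotiationExchange:
    """Negotiation-style view: one core move, possibly preceded by a
    secondary move, plus preparatory and follow-up moves that relate to
    the core move rather than forming embedded exchanges."""
    core: str
    secondary: Optional[str] = None
    preparatory: List[str] = field(default_factory=list)
    follow_up: List[str] = field(default_factory=list)

# The exchange as nested adjacency pairs (embedding inside the first pair):
adjacency_view = [
    AdjacencyPair(
        first_pair_part="S1: Can I ask you a question?",
        second_pair_part="S2: Sure.",
        embedded=[AdjacencyPair("S2: What did you say?",
                                "S1: I said, 'Can I ask you a question?'")],
    ),
    AdjacencyPair("S1: Where is the chocolate?", "S2: In the fridge."),
]

# The same turns as one exchange built around a single core move:
negotiation_view = NegotiationExchange(
    core="K1  S2: In the fridge.",
    secondary="K2  S1: Where is the chocolate?",
    preparatory=["dK2  S1: Can I ask you a question?",
                 "tr   S2: What did you say?",
                 "rtr  S1: I said, 'Can I ask you a question?'",
                 "dK1  S2: Sure."],
)

print(adjacency_view)
print(negotiation_view)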
What would this look like in the Negotiation Framework?
Diagram: two Negotiation analyses of the exchange, annotated with moves such as dK1, K1, K2, A1/K2, A2, dA1, tr, rtr, and f; one branch is left on hold.
Do we get embedding, or do we abandon the K2 in T4? Do you think there is a linguistic test that can tell us?
Tips for next time
We will look at a paper about turn taking.
When perplexity is high, the model is having a harder time predicting what is next.
For turn-taking perplexity, we have a state representation that specifies, at one time point, which participants are talking and which are not.
The model takes the current state into account and measures how surprised it is at the next state.
If the next state is surprising given the current state, the perplexity at that time point is high (a toy illustration follows below).
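As a toy illustration of this idea (not the model from the paper), the sketch below represents the turn-taking state at each time point as a tuple recording who is speaking, fits first-order transition counts, and reports how surprising a held-out state sequence is. The add-one smoothing and the toy data are assumptions of mine.

import math
from collections import defaultdict

# Toy sketch: each state records, at one time point, which participants are
# speaking (1) or silent (0). A first-order model counts transitions between
# states; surprising next states get low probability and high perplexity.
# This illustrates the idea only, not the model from the paper.

def train_transition_model(state_sequence):
    """Count transitions current_state -> next_state."""
    counts = defaultdict(lambda: defaultdict(int))
    for current, nxt in zip(state_sequence, state_sequence[1:]):
        counts[current][nxt] += 1
    return counts

def perplexity(counts, state_sequence, num_states):
    """Average per-step perplexity of a held-out state sequence."""
    total_log_prob, steps = 0.0, 0
    for current, nxt in zip(state_sequence, state_sequence[1:]):
        seen = counts.get(current, {})
        # Add-one smoothing so unseen transitions get nonzero probability.
        prob = (seen.get(nxt, 0) + 1) / (sum(seen.values()) + num_states)
        total_log_prob += math.log2(prob)  # low probability = surprising next state
        steps += 1
    return 2 ** (-total_log_prob / steps)

if __name__ == "__main__":
    # Two participants; state = (A speaking?, B speaking?), so 4 possible states.
    train = [(1, 0), (1, 0), (0, 0), (0, 1), (0, 1), (1, 0), (1, 0), (0, 1)]
    test = [(1, 0), (0, 1), (0, 1), (1, 0), (0, 0), (0, 1)]
    model = train_transition_model(train)
    print("turn-taking perplexity:", round(perplexity(model, test, num_states=4), 3))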
Tips for next time
If you compare models based on turn-taking perplexity, the one with lower perplexity probably has more of the information needed to account for transitions between states.
Differences between models:
Whose behavior is contingent on whose behavior
Which data is used to build the model, and which data is used to test (see the comparison sketch below)
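Continuing the sketch above on made-up data, here is a hypothetical comparison of two conditioning choices: predicting participant B's next state from B's own previous state alone versus from the joint previous state of A and B, with the first half of the sequence used to build each model and the second half to test it. The data, split, and models are illustrative assumptions, not the setup from the paper.

import math
from collections import defaultdict

# Hypothetical comparison of two conditioning choices on toy data.

def fit_and_score(pairs_train, pairs_test, num_outcomes):
    """Fit transition counts on (context, outcome) pairs; return the
    add-one-smoothed perplexity on held-out pairs."""
    counts = defaultdict(lambda: defaultdict(int))
    for context, outcome in pairs_train:
        counts[context][outcome] += 1
    log_prob, steps = 0.0, 0
    for context, outcome in pairs_test:
        seen = counts.get(context, {})
        prob = (seen.get(outcome, 0) + 1) / (sum(seen.values()) + num_outcomes)
        log_prob += math.log2(prob)
        steps += 1
    return 2 ** (-log_prob / steps)

if __name__ == "__main__":
    # (A speaking?, B speaking?) at each time point; made-up sequence.
    states = [(1, 0), (1, 0), (0, 1), (0, 1), (1, 0), (0, 0), (0, 1),
              (1, 0), (1, 0), (0, 1), (0, 0), (0, 1), (1, 0), (0, 1)]
    split = len(states) // 2

    def pairs(seq, context_fn):
        """Pair each time point's context with B's state at the next time point."""
        return [(context_fn(cur), nxt[1]) for cur, nxt in zip(seq, seq[1:])]

    # Model 1: B's next state conditioned on B's previous state only.
    # Model 2: B's next state conditioned on both A's and B's previous states.
    for name, ctx in (("B-only context", lambda s: s[1]), ("A+B context", lambda s: s)):
        train, test = pairs(states[:split], ctx), pairs(states[split:], ctx)
        print(name, "perplexity:", round(fit_and_score(train, test, num_outcomes=2), 3))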
Tips for next time
What do the results say about how conversation is locally managed?
Considering that we’re really good at deciding when we can start talking, what must we be paying attention to?
Connects back to the discussion from Levinson pp. 296-303.
Based on your reading of Levinson, what other experiments would you propose that Laskowsky run?
Questions?