Presentation about structure of papers and writing abstracts

Transcript

Research Paper Example
Exploiting Process Lifetime Distributions for Dynamic Load Balancing
Mor Harchol-Balter and Allen Downey
SIGMETRICS 1996
Components of a Research Paper
• Background
• Idea
• Work
• Results
• Next
Background
• Context of the research
• What was known before
• What is the question
• Positioning of our work
• Related work
In this paper:
• Load balancing in a network
• Migration thought not to help [ELZ88]
• We can do better
Idea
• Something new
  – Algorithm
  – Observations
• Why the paper was written
In this paper:
• Process lifetimes are heavy-tailed
• This can be used to perform beneficial migrations (see the sketch below)
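To make the idea concrete: assuming the roughly 1/T lifetime tail reported in the paper (Pr[L > t] ≈ 1/t for processes older than one second), conditioning on a process having already run for T seconds gives

    \Pr[L > T + t \mid L > T]
      = \frac{\Pr[L > T + t]}{\Pr[L > T]}
      = \frac{1/(T + t)}{1/T}
      = \frac{T}{T + t},
    \qquad\text{so in particular}\qquad
    \Pr[L > 2T \mid L > T] = \tfrac{1}{2}.

A process that has already used T seconds of CPU therefore has an even chance of needing at least T more, which is why migrating old (rather than newborn) processes can repay the migration cost.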
Work
• Collecting data
• Measurements
• Simulations
• Analysis
In this paper:
• Collect data on process lifetimes (an illustrative sketch follows this list)
• Collect data on system overheads
• Perform simulations
  – Check competitiveness
  – Check sensitivity
  – Check optimality
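As a stand-in for the data-collection and analysis steps (this is not the paper's trace-driven simulator), the Python sketch below samples heavy-tailed lifetimes and checks the conditional-survival property numerically; the α = 1 Pareto form and the 1-second minimum are assumptions standing in for the measured data.

    # Illustrative sketch only -- not the paper's simulator. Sample
    # heavy-tailed lifetimes and check the conditional-survival property
    # that the migration policy relies on.
    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed Pareto(alpha = 1) lifetimes with a 1-second minimum, standing
    # in for the measured UNIX process lifetime data.
    lifetimes = 1.0 + rng.pareto(1.0, size=1_000_000)

    for T in (1, 2, 4, 8, 16):
        ran_at_least_T = lifetimes[lifetimes > T]      # already ran T seconds
        frac = np.mean(ran_at_least_T > 2 * T)         # ...and ran at least T more
        print(f"P(L > {2*T:2d} | L > {T:2d}) = {frac:.2f}")  # roughly 0.5 each time

The printed fractions should hover around one half, matching the heavy-tail property used by the migration policy.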
Results
• Outcome of the simulations, analysis, etc.
  – Tables
  – Graphs
  – Interpretation
In this paper:
• Better performance than the competition
• Near optimal
• Not sensitive to system parameters
Next
• Extensions
• Shortcomings
In this paper:
• More realistic memory usage model
• Take the network into account
• And more…
The Abstract
• Background
• Idea
• Work
• Results
• Next
The abstract should mainly reflect the idea and results, with the necessary minimum of background and work.
Say what it is, not that it exists!
Abstract #1
We measure the distribution of lifetimes of UNIX processes and propose a functional form that fits this distribution well. We use this functional form to derive a policy for preemptive migration, and then use a trace-driven simulator to compare our proposed policy with other preemptive migration policies, and with a non-preemptive load balancing strategy. We find that, contrary to previous reports, the performance benefits of preemptive migration are significantly greater than those of non-preemptive migration, even when the memory-transfer cost is high. Using a model of migration costs representative of current systems, we find that preemptive migration reduces the mean delay (queueing and migration) by 35-50%, compared to non-preemptive migration.
Abstract #2
We consider policies for CPU load balancing in networks of workstations. We address the question of whether preemptive migration (migrating active processes) is necessary, or whether remote execution (migrating processes only at the time of birth) is sufficient for load balancing. We show that resolving this issue is strongly tied to understanding the process lifetime distribution. Our measurements indicate that the distribution of lifetimes for a UNIX process is Pareto (heavy-tailed), with a consistent functional form over a variety of workloads. We show how to apply this distribution to derive a preemptive migration policy that requires no hand-tuned parameters. We use a trace-driven simulation to show that our preemptive migration strategy is far more effective than remote execution, even when the memory transfer cost is high.
Abstract #3
We consider policies for CPU load balancing in networks of workstations, and specifically the use of preemptive migration (migrating active processes). We start by measuring the distribution of lifetimes of UNIX processes, and find that it is Pareto (heavy-tailed), with a consistent functional form over a variety of workloads. This distribution has the characteristic that processes that have already run for T time are expected to continue to run for an additional T time. Our policy uses this to identify those processes that will benefit from migration and justify the costs involved. We use a trace-driven simulation to show that our preemptive migration strategy is far more effective than remote execution, even when the memory transfer cost is high.
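The sentence in this abstract about identifying "those processes that will benefit from migration and justify the costs involved" can be illustrated with a toy decision rule. The function below is a hypothetical sketch, not the paper's actual criterion: under the heavy-tail property, a process's age serves as a proxy for its median remaining CPU demand, so only processes whose age exceeds the estimated migration cost are treated as candidates.

    # Hypothetical illustration -- not the paper's exact policy.
    def worth_migrating(age_seconds: float, migration_cost_seconds: float) -> bool:
        """Candidate for preemptive migration if the likely remaining work
        (about equal to the age, under the heavy-tailed lifetime model)
        outweighs the memory-transfer-dominated migration cost."""
        median_remaining = age_seconds   # heavy-tail property: ~T more after running T
        return median_remaining > migration_cost_seconds

    print(worth_migrating(10.0, 2.0))   # True: an old process justifies the transfer
    print(worth_migrating(0.5, 2.0))    # False: a young process likely exits soon anyway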