Power Aware Virtual Machine Placement


Power Aware Virtual Machine Placement
Yefu Wang
Introduction
• Data centers are underutilized
– Prepared for extreme workloads
– Commonly under 20% utilized
• Shutting down unused servers
– Saves more power than DVFS
– Application correctness should be guaranteed
• Design choices
– Workload redirection
– VM live migration
– Workload redirection + VM live migration
[Meisner’09]
Design Choice (1) : Workload Redirection
(Figure: web requests dispatched across servers)
Example: [Heo’07]
Design Choice (2): VM Live Migration
(Figure: VMs live-migrated between physical servers)
Example: [Wang’09, Verma’08]
Design Choice (3): Hybrid
• 6 servers and 12 VMs
• 2 applications: Gold and Silver
• HTTP requests are dispatched by a dispatcher
[Kusic’08]
pMapper: Power and Migration Cost Aware Application Placement in Virtualized Systems
Akshat Verma, Puneet Ahuja, Anindya Neogi
IBM India Research Lab, IIT Delhi
Application Placement
• Static vs. dynamic placement
– Utilization of each server shows dynamic patterns
– Dynamic placement saves more energy
– A later paper from the same authors advocates static placement
(Figure: CPU utilization (0%–100%) is mapped to grayscale (0–1))
• System administrators have more control
Application Placement Architecture
(Figure: application placement architecture with DVFS and VM resizing + idling as actuation knobs)
Optimization Formulations
• Cost performance tradeoff
$$\max \sum_{i=1}^{N} \sum_{j=1}^{M} B(x_{i,j}) \;-\; \sum_{j=1}^{M} P_j(A^I) \;-\; \mathrm{Mig}(A^o, A^I)$$
(the three terms: performance benefit, power, migration cost)
• Cost minimization with performance constraint
$$\min \sum_{j=1}^{M} P_j(A^I) \;+\; \mathrm{Mig}(A^o, A^I)$$
• Performance benefit maximization with power constraint
$$\max \sum_{i=1}^{N} \sum_{j=1}^{M} B(x_{i,j}) \;-\; \mathrm{Mig}(A^o, A^I)$$
System Modeling
• Migration Cost
– Independent of the background workload
– Can be estimated a priori
• Performance modeling
– This paper does not design a performance controller
– pMapper can be integrated with other performance controllers or models
• Power model
– It is infeasible to have an accurate power model in practice
• Server power consumption depends on the hosted applications
• The potential server-VM mappings can be huge
– This paper only relies on the power efficiency of servers
mPP Algorithm
(Figure: a set of VMs and Servers 1–5)
mPP Algorithm
(Figure: current placement of VMs on Servers 1–5)
mPP Algorithm
Sort VMs by size
(Figure: VMs sorted by size across Servers 1–5)
mPP Algorithm
Sort servers by slope (power efficiency)
(Figure: Servers 1–5 ordered by power efficiency)
mPP Algorithm
Allocate the VMs to servers using First Fit
(Figure: VMs allocated to Servers 1–5 using First Fit)
mPP Algorithm
(Figure: resulting mPP placement on Servers 1–5)
• The mPP algorithm is oblivious to the previous configuration
• It may entail large-scale migrations
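The mPP placement step above can be summarized in a short sketch. Below is a minimal Python illustration of the idea (sort VMs by size, sort servers by power efficiency, then place by First Fit), assuming a simple per-server power-efficiency slope; the `Server` class and all names are illustrative, not the paper's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Server:
    name: str
    capacity: float       # normalized CPU capacity
    power_slope: float    # marginal power per unit utilization (lower = more efficient)
    vms: list = field(default_factory=list)
    used: float = 0.0

def mpp_place(vms, servers):
    """mPP-style placement: VMs sorted by size (largest first), servers sorted
    by power efficiency (most efficient first), each VM placed by First Fit."""
    vms = sorted(vms, key=lambda v: v[1], reverse=True)      # (name, cpu_demand)
    servers = sorted(servers, key=lambda s: s.power_slope)
    for vm_name, demand in vms:
        for srv in servers:
            if srv.used + demand <= srv.capacity:
                srv.vms.append(vm_name)
                srv.used += demand
                break
        else:
            raise RuntimeError(f"no server can host {vm_name}")
    return servers

# Example: 5 servers with increasing power slopes and a handful of VMs
servers = [Server(f"Server {i + 1}", capacity=1.0, power_slope=s)
           for i, s in enumerate([0.5, 0.6, 0.7, 0.9, 1.0])]
placement = mpp_place([("VM1", 0.4), ("VM2", 0.3), ("VM3", 0.3), ("VM4", 0.2)], servers)
for srv in placement:
    print(srv.name, srv.vms)
```

Because the placement is recomputed from scratch each time, this sketch ignores where the VMs currently run, which is exactly the limitation that motivates mPPH below.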
mPPH Algorithm : mPP with History
(Figure: applying mPP to an existing placement produces a new placement)
mPPH Algorithm : mPP with History
(Figure: servers classified as receivers or donors relative to a target utilization)
mPPH Algorithm : mPP with History
(Figure: donor servers and their target utilizations)
mPPH Algorithm : mPP with History
Pick the smallest VMs and add them to a migration list
(Figure: VMs selected from donor servers into the migration list)
mPPH Algorithm : mPP with History
(Figure: receivers, donors, and the migration list)
mPPH Algorithm : mPP with History
(Figure: migration-list VMs placed onto receiver servers up to the target utilization)
mPPH Algorithm : mPP with History
(Figure: remaining migration-list VMs allocated to receivers)
mPPH Algorithm : mPP with History
(Figure: final mPPH placement)
• The mPPH algorithm tries to minimize migrations by moving as few VMs as possible
• pMaP algorithm: before each migration, weigh the benefit against the migration cost
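As a rough sketch of the pMaP idea of gating each migration on its cost and benefit, assuming both are expressed in comparable units over the next control window; the helper name and units are hypothetical, not from the paper.

```python
def worth_migrating(estimated_power_saving, window_length, migration_cost):
    """pMaP-style gate: perform a migration only if the estimated benefit
    (power saved over the coming window) outweighs the one-time migration cost.
    All quantities must use comparable units (e.g., watts, seconds, joules)."""
    return estimated_power_saving * window_length > migration_cost

# Example: saving 20 W over a 300 s window vs. a migration costing 3000 J
print(worth_migrating(20.0, 300.0, 3000.0))   # True
```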
System Implementation
• Testbed
– Virtualization platform: VMware ESX 3.0
– Power monitor: IBM Active Energy Manager
– Performance manager: EWLM
– DVFS is not implemented
• Simulator
– Replace performance manager with trace data
– Simulating 4 blades, 42 VMs
• Baselines
– Load balanced
– Static
Power Consumption
mPP and mPPH save 25% of the power
Algorithm Cost
mPP fails at high utilization
pMaP has the least overall cost most of the time
Conclusion
• Application placement controller pMapper
– Minimize power and migration cost
– Meet performance guarantees
• Also available in the paper:
– More simulation results
– Proofs of the properties of the algorithm for a given platform
Performance-Controlled Power Optimization for Virtualized Clusters with Multi-tier Applications
Yefu Wang and Xiaorui Wang
University of Tennessee, Knoxville
Introduction
• Power saving techniques
– DVFS has a small overhead
– Server consolidation saves more power
• Performance guarantee
– Multi-tier applications may span multiple VMs
– A controller must respond to workload variations quickly
• Integrating performance control and server consolidation is important
System Architecture
(Figure: application-level response time monitors and controllers produce CPU resource demands for the VMs; the power optimizer issues VM migration and server sleep/active commands)
Response Time Controller
(Figure: HTTP requests flow through the application's VMs; a response time monitor feeds the MPC controller, which uses a response time model to compute the CPU requirements)

MPC controller cost function:
$$J(k) = \sum_{i=1}^{P} \big\| t(k+i \mid k) - ref(k+i \mid k) \big\|_Q^2 \;+\; \sum_{i=0}^{M-1} \big\| c(k+i \mid k) \big\|_{R(i)}^2$$
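The cost J(k) above can be evaluated for a candidate control sequence as in the sketch below, assuming the response time model supplies the predictions t(k+i|k); the scalar weights Q and R and all names are illustrative assumptions.

```python
import numpy as np

def mpc_cost(predicted_rt, ref, cpu_alloc, Q=1.0, R=0.1):
    """J(k) = sum_{i=1..P} Q * (t(k+i|k) - ref(k+i|k))^2
            + sum_{i=0..M-1} R * c(k+i|k)^2
    predicted_rt: response times over the prediction horizon P (from the model)
    ref:          reference trajectory over the same horizon
    cpu_alloc:    candidate CPU allocations over the control horizon M"""
    tracking = Q * np.sum((np.asarray(predicted_rt) - np.asarray(ref)) ** 2)
    control = R * np.sum(np.asarray(cpu_alloc) ** 2)
    return tracking + control

# Example: 3-step prediction horizon, 2-step control horizon
print(mpc_cost([0.9, 0.8, 0.75], [0.7, 0.7, 0.7], [0.5, 0.6]))
```

The controller would pick the CPU allocation sequence that minimizes this cost and apply only its first step, as usual in model predictive control.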
CPU Resource Arbitrator
• CPU resource arbitrator
– Runs at server level
– Collects the CPU requirements of all VMs
– Decides CPU allocations of all VMs and DVFS level of the server
• The CPU resource a VM receives depends on:
– CPU allocation a(k)
• Example: give 20% CPU to VM1
– DVFS level f(k)
• Example: set the CPU frequency to 0.66 F_max
CPU Resource Arbitrator
• Two observations
– Performance depends on a(k) f(k)
– Keeping a(k) f(k) constant, a lower f(k) leads to less power consumption
• CPU resource arbitrator
– Use the lowest possible CPU frequency that meets the CPU requirements of the hosted VMs:
$$f_i(k) = \min\Big\{ \sum_{VM_{jl} \in S_i} c_{jl}(k) + c_0,\; F_{max} \Big\}$$
VM Consolidation for Power Optimization
• Problem formulation
– Minimize the total power consumption of all servers
– Meet all CPU resource requirements
• Power Model
$$p_i(k) = a_i f_i(k) + b_i$$
$$p_i(k) = a_i \min\Big\{ \sum_{VM_{jl} \in S_i} c_{jl}(k) + c_0,\; F_{max} \Big\} + b_i$$
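A minimal sketch of the two models above: the arbitrator picks the lowest frequency that covers the hosted VMs' CPU requirements, and server power follows the linear model p_i(k) = a_i f_i(k) + b_i. The constants c0, a, and b below are illustrative placeholders, not values from the paper.

```python
def server_frequency(vm_demands, c0=0.05, f_max=1.0):
    """f_i(k) = min{ sum of hosted VMs' CPU requirements + c0, F_max }:
    run at the lowest (normalized) frequency that still covers demand."""
    return min(sum(vm_demands) + c0, f_max)

def server_power(vm_demands, a=100.0, b=150.0, c0=0.05, f_max=1.0):
    """Linear power model p_i(k) = a_i * f_i(k) + b_i."""
    return a * server_frequency(vm_demands, c0, f_max) + b

# Example: a server hosting three VMs that require 20%, 30%, and 10% of the CPU
print(server_power([0.2, 0.3, 0.1]))   # 100 * 0.65 + 150 = 215.0
```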
Optimization Algorithm
1. Minimum slack problem for a single server
2. Power aware consolidation for a list of servers
3. Incremental Power Aware Consolidation (IPAC)
algorithm
4. Cost-aware VM migration
Minimum slack problem for a single server
Slack=1
Minimum Slack=1
(Figure: packing candidate VMs onto a single server)
Minimum slack problem for a single server
Slack=0.8
Minimum Slack=0.8
Minimum slack problem for a single server
Slack=0.2
Minimum Slack=0.2
Minimum slack problem for a single server
Slack=0.2
Minimum slack problem for a single server
Slack=0.5
Minimum Slack=0.2
Minimum slack problem for a single server
Slack=0.2
Minimum Slack=0.2
Minimum slack problem for a single server
Slack=0
Minimum Slack=0.2
• The algorithm stops if the minimum slack falls below a threshold
• Sounds like exhaustive search?
• Complexity: O(n^u), where u is the maximum number of VMs a server can host
• Fast in practice [Fleszar’02]
• Gives a better solution than FFD
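A minimal sketch of the minimum-slack search described above for a single server: recursively try subsets of the remaining VMs, keep the packing with the smallest unused capacity, and stop early once the slack drops below a threshold. All names and the threshold default are illustrative assumptions.

```python
def min_slack_pack(capacity, vm_sizes, threshold=0.0):
    """Return (slack, chosen_indices): the subset of VMs that fills the server
    with the least unused capacity (slack), found by recursive search."""
    best = (capacity, [])                     # (slack, chosen VM indices)

    def search(start, used, chosen):
        nonlocal best
        slack = capacity - used
        if slack < best[0]:
            best = (slack, list(chosen))
        if best[0] <= threshold:              # good enough: stop searching
            return
        for i in range(start, len(vm_sizes)):
            if used + vm_sizes[i] <= capacity:
                chosen.append(i)
                search(i + 1, used + vm_sizes[i], chosen)
                chosen.pop()

    search(0, 0.0, [])
    return best

# Example: capacity 1.0, VM demands 0.5, 0.25, 0.25, 0.125
print(min_slack_pack(1.0, [0.5, 0.25, 0.25, 0.125]))   # (0.0, [0, 1, 2])
```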
Consolidation Algorithm
• Power aware consolidation for a list of servers
– Begin from the most power efficient server
– Use minimum slack algorithm to fill the server with VMs
– Repeat with the next server until all the VMs are hosted
• Incremental Power Aware Consolidation (IPAC) algorithm
– Each time, only consider these VMs for consolidation:
• Selected VMs on overloaded servers
• The VMs on the least power efficient server
– Repeat until the number of servers does not decrease
• Cost-aware VM migration
– Consider the benefit and migration cost before each migration
• Benefit: power reduction estimated by the power model, etc.
• Cost: administrator-defined based on their policies
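Putting the pieces together, a minimal sketch of the power-aware consolidation loop above: visit servers from most to least power-efficient and fill each with the minimum-slack subset of the remaining VMs, reusing the hypothetical min_slack_pack helper from the previous sketch. The incremental (IPAC) and cost-aware migration refinements are omitted here.

```python
def power_aware_consolidate(servers, vms, threshold=0.02):
    """servers: [(name, capacity)] already sorted from most to least
    power-efficient; vms: [(name, cpu_demand)].
    Fill each server in turn with the minimum-slack subset of remaining VMs."""
    remaining = list(vms)
    placement = {}
    for name, capacity in servers:
        if not remaining:
            break                             # later servers can stay asleep
        sizes = [demand for _, demand in remaining]
        _, chosen = min_slack_pack(capacity, sizes, threshold)
        placement[name] = [remaining[i][0] for i in chosen]
        chosen_set = set(chosen)
        remaining = [vm for i, vm in enumerate(remaining) if i not in chosen_set]
    if remaining:
        raise RuntimeError("not enough server capacity for all VMs")
    return placement

# Example: two servers, most power-efficient first
servers = [("Server 1", 1.0), ("Server 2", 1.0)]
vms = [("VM1", 0.5), ("VM2", 0.25), ("VM3", 0.25), ("VM4", 0.125)]
print(power_aware_consolidate(servers, vms))
```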
System Implementation
• Testbed
– 4 servers, 16 VMs
– 8 applications (RUBBoS)
– Xen 3.2 with DVFS
• Simulator
– Use a CPU utilization trace file for 5415 servers to simulate 5415 VMs
– 400 physical servers with random power models
– 3 different types of CPUs
Response Time Control
(Figure: response time control results, our solution vs. the pMapper baseline; annotations mark a violation of performance requirements and the lower power consumption resulting from DVFS)
Server Consolidation
• 69.6% of the power is saved
• Response time is still guaranteed after the consolidation
Simulation Results
• IPAC saves more power than pMapper
– The algorithm gives better consolidation solutions
– Even more power is saved by DVFS
• IPAC runs even faster than pMapper
– Only a small number of VMs are considered in each period
Conclusion
• Performance-controlled power optimization solution for virtualized server clusters
– Application-level performance is guaranteed
– Power savings are provided by DVFS and server consolidation
• Compared with pMapper
– Provides performance guarantee
– Consumes less power
– Less computational overhead
Critiques to pMapper
• Too many components are only discussed without implementation
• Lacks hardware experiments
• Only provides small-scale simulations
Critiques to IPAC
• Real-world constraints are not shown in the experiments
– Network constraints, etc.
• Centralized solution
– Incurs a heavy communication overhead on the optimizer
• The settling time of the response time controller is too long
Comparisons of the Two Papers
                        pMapper    IPAC
Performance guarantee   No         Yes
DVFS                    No         Yes
Hardware experiments    No         Yes
Base algorithm          FFD        Minimum Slack