
Arindam Mallik
Jack Cosgrove
Robert P. Dick
Gokhan Memik
Peter Dinda
Northwestern University
Department of Electrical Engineering and Computer Science
Evanston, Illinois, USA
ASPLOS • March 3, 2008 • Seattle, Washington, USA
1



Traditional performance metrics do not
measure user-perceived performance well
Our performance metrics measure user-perceived performance better
PICSEL is a power management policy that
uses our metrics to achieve system power
improvements of up to 12.1% compared to
existing policies
2
[Diagram: the CPU redraws the screen through main memory to the display; PICSEL periodically captures screenshots, compares consecutive screenshots, and changes the CPU frequency]
3
“The ultimate goal of a computer system is to satisfy the user”
[Diagram: the same screenshot-compare-and-scale loop between the CPU, main memory, and the display]
4

Power problem
 DVFS

System performance
 Traditional vs. user-perceived

PICSEL
 How it works
 Results

Conclusions
5

Energy-hungry processors present three
major problems:
 Higher energy consumption
 Shorter battery life
 Higher temperatures
6

Dynamic voltage and frequency scaling
(DVFS) addresses all three problems
 Trades off processor frequency for energy savings
 Commonly used

Ideal DVFS policy: Find the lowest level of
performance acceptable to the user to
maximize power savings
7
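As background for why trading frequency for energy pays off (this model is not from the talk): under the classic CMOS dynamic-power relation P ~ C·V²·f, and assuming supply voltage can be lowered roughly in proportion to frequency, dynamic power falls roughly as f³ while a fixed workload's runtime only grows as 1/f, so dynamic energy scales as f². A minimal sketch under those assumptions:

```python
# Sketch of the classic CMOS dynamic-power model (illustrative, not from
# the talk): P_dyn ~ C * V^2 * f, with voltage scaled linearly with
# frequency, so P_dyn ~ f^3 while runtime grows as 1/f.

def relative_energy(f_scale: float) -> float:
    """Dynamic energy of a fixed workload at relative frequency f_scale,
    normalized to 1.0 at full frequency (voltage assumed to track f)."""
    power = f_scale ** 3      # P ~ V^2 * f with V ~ f
    runtime = 1.0 / f_scale   # fixed work takes longer at lower frequency
    return power * runtime    # energy = power * time ~ f_scale^2

print(relative_energy(1.0))  # 1.0
print(relative_energy(0.5))  # 0.25: halving f quarters dynamic energy
```

Under this simplified model, any frequency reduction the user does not notice is an energy win, which is exactly the slack an ideal DVFS policy exploits.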

Human in the loop is often the rate limiter
[Diagram: the user (Hz) interacts with the processor (GHz) through input devices and output devices (kHz)]
8

Traditional performance metrics focus on processor performance
 “Close to metal”
[Diagram: the processor, rated in instructions per second (IPS), sits behind the input and output devices that face the user]
9

User-perceived performance metrics focus on interface device performance
 “Close to flesh”
[Diagram: output devices (display, speakers) and input devices (mouse, keyboard) sit between the processor and the user]
10

Use change in pixel intensities as metric for
user-perceived performance
11
PICSEL: Perception Informed CPU-performance Scaling to Extend battery Life
12

Windows GDI Screenshot
 Capture contiguous area of screen
 Repeat periodically
 Compare RGB intensities across samples
[Diagram: the current screenshot's channels (Ri, Gi, Bi) minus the cached previous screenshot's channels (Ri-1, Gi-1, Bi-1) give the per-channel deltas:
  RΔ = Ri − Ri-1,  GΔ = Gi − Gi-1,  BΔ = Bi − Bi-1]
13

Average Pixel Change (APC)
 APC = (RΔ + GΔ + BΔ) / 3
 Averaged across all pixels
 Measures “slowness” of display

Rate of Average Pixel Change (APR)
 APR = (APCi – APCi-1)/(Ti – Ti-1)
 Measures “jitter” of display
14
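The two metrics above are simple enough to sketch directly. This is a hedged illustration: NumPy arrays stand in for the Windows GDI screenshots, and the function names are mine, not from the talk:

```python
import numpy as np

def average_pixel_change(prev: np.ndarray, curr: np.ndarray) -> float:
    """APC: (R_delta + G_delta + B_delta) / 3, averaged across all pixels.
    Arrays are HxWx3 RGB screenshots with intensities in [0, 255]."""
    # Widen to a signed type so the subtraction cannot wrap around.
    delta = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return float(delta.mean())  # mean over all pixels and the 3 channels

def apr(apc_curr: float, apc_prev: float, t_curr: float, t_prev: float) -> float:
    """APR: rate of change of APC between consecutive samples,
    (APC_i - APC_{i-1}) / (T_i - T_{i-1})."""
    return (apc_curr - apc_prev) / (t_curr - t_prev)

# Two 2x2 toy "screenshots": one pixel brightens by 30 in every channel.
a = np.zeros((2, 2, 3), dtype=np.uint8)
b = a.copy()
b[0, 0] = 30
print(average_pixel_change(a, b))  # 90 total change over 12 values -> 7.5
```

Intuitively, a low APC means the display is barely changing (“slowness”), while a large swing in APR means the display's rate of change is unstable (“jitter”).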


PICSEL itself uses <2% CPU utilization
The target applications use 50-100% CPU utilization
15
16
[Plot: APC and APR sampled over time, each with its own “no change” band; PICSEL makes a decision at periodic marks, increasing frequency when a metric leaves its band]
17
State Variables                      Adaptation Parameters
Processor frequency (f)              Hysteresis factor (α)
APC in the last interval (μAPC)      APC change threshold (ρ)
APR in the last interval (μAPR)      APR change threshold (γ)

IF (APCinit − μAPC) < ρ × (1−α) × APCinit
   OR |APRinit − μAPR| < γ × (1−α) × APRinit
  Reduce f by one level
  Reset α of the last level to 0.0
ELSE
  Increase f by one level
  Increment α by 0.1
18
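The decision rule above can be sketched as a single step of the control loop. This is an illustrative sketch: the function name, the frequency table, and the simplified single global hysteresis factor are my assumptions, not the authors' implementation (the slide tracks α per frequency level):

```python
# Hedged sketch of one PICSEL decision step. FREQ_LEVELS and the level
# bookkeeping are hypothetical; the thresholds rho/gamma match the slide's
# adaptation parameters (APC and APR change thresholds).
FREQ_LEVELS = [0.8, 1.2, 1.6, 2.0]  # hypothetical DVFS levels in GHz

def decide(level, alpha, apc_init, mu_apc, apr_init, mu_apr,
           rho=0.05, gamma=0.15):
    """If the display metrics changed little relative to the initial
    (full-speed) measurement, drop one frequency level and reset the
    hysteresis factor; otherwise raise the level and grow alpha so that
    future drops become harder. Returns (new_level, new_alpha)."""
    small_apc = (apc_init - mu_apc) < rho * (1 - alpha) * apc_init
    small_apr = abs(apr_init - mu_apr) < gamma * (1 - alpha) * apr_init
    if small_apc or small_apr:
        return max(level - 1, 0), 0.0                   # reduce f, reset alpha
    return min(level + 1, len(FREQ_LEVELS) - 1), alpha + 0.1  # increase f

# Display barely changed at full speed vs. now: step down from level 3.
print(decide(3, 0.0, apc_init=10.0, mu_apc=9.9, apr_init=1.0, mu_apr=0.99))
```

The (1 − α) factor shrinks the “no change” thresholds after each frequency increase, which is what makes the policy back off from levels that previously hurt the display.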
PICSEL Version                  Tinitialize (sec)  Tdecide (sec)  APC Change  APR Change  Hyst. Factor
Conservative PICSEL (cPICSEL)   10                 7              0.05        0.15        0.0
Aggressive PICSEL (aPICSEL)     10                 7              0.10        0.30        0.0



All values were chosen by the authors after testing with the target applications
An exhaustive search for ideal values would have taken too long (243 days)
The user evaluation “closed the loop”
19




20 users
Shockwave animation and DVD movie each play for 2 minutes
FIFA game plays for 3.5 minutes
Three randomly selected trials per application
 One double-blind DVFS policy for each trial
 User rates satisfaction from one (lowest) to five (highest) after each trial
20
[Plot: CPU frequency (GHz) vs. time (0-98 sec) under Windows DVFS, cPICSEL, and aPICSEL for one of the 2-minute applications]
21
[Plot: CPU frequency (GHz) vs. time (0-98 sec) under Windows DVFS, cPICSEL, and aPICSEL for another of the 2-minute applications]
22
[Plot: CPU frequency (GHz) vs. time (0-210 sec) under Windows DVFS, cPICSEL, and aPICSEL for the 3.5-minute trial]
23
DVFS Policy    System Power   Dynamic Power   CPU Peak Temp.   User Satisfaction
               Improvement    Improvement     Reduction        (out of five)
aPICSEL        12.1%          18.2%           4.3C             3.65*
cPICSEL        7.1%           9.1%            1.7C             3.80**
Windows DVFS   Control        Control         Control          3.68

* Not different with 95% confidence
** Different with 90% confidence
24
25
26
[Chart: perceived slowdown]
27

DVFS Policy    Total Thermal Emergencies during Game for All Users
aPICSEL        52
cPICSEL        51
Windows DVFS   59
User satisfaction is maximized by cPICSEL
 Frequency is high enough to deliver good
performance but not high enough to trigger thermal
emergencies
28

Display performance is a better metric for
controlling DVFS than processor
performance
 Existing processor performance-based DVFS
policies have slack that can be exploited
 Cost of monitoring the display output is low
 User satisfaction is the same or better
29

Based on GUI events
 Gurun, S. and Krintz, C. 2005. AutoDVS: an Automatic, General-purpose,
Dynamic Clock Scheduling System for Hand-held Devices. In Proc. of the 5th
ACM Int. Conf. on Embedded Software (EMSOFT’05), 218-226.

Based on application messages
 Flautner, K. and Mudge, T. 2002. Vertigo: Automatic Performance-Setting for
Linux. ACM SIGOPS Operating Systems Review 36, SI (Winter 2002), 105-116.
30
Check out “Empathic Computer Architectures
and Systems” at Wild and Crazy Ideas and
visit
empathicsystems.org
for more user-centered systems research
31