Chapter 3
Forecasting
Forecasting Demand
Why is demand forecasting important?
What is bad about poor forecasting?
What do these organizations forecast:
• Sony (consumer products division)
• Foley’s
• Dallas Area Rapid Transit (DART)
• UTA
Questions in Demand Forecasting
For a particular product or service:
• What exactly is to be forecasted?
• What will the forecasts be used for?
• What forecasting period is most useful?
• What time horizon in the future is to be
forecasted?
• How many periods of past data should be used?
• What patterns would you expect to see?
• How do you select a forecasting model?
Demand Management
Recognizing and planning for all sources of demand
Can demand be controlled or influenced?
• appointment schedules
– doctor’s office
– attorney
– SAM telephone registration
• sales promotions
– restaurant discounts before 6pm
– video rental store discounts on Tuesdays
– golf course discounts if you start playing after 4pm
– theater matinee movie discounts
Qualitative vs. Quantitative
Forecasting Methods
Some Qualitative Methods:
• Experienced guess/judgement
• Consensus of committee
• Survey of sales force
• Survey of all customers
• Historical analogy
– new products
• Market research
– survey a sample of customers
– test market a product
Steps for Quantitative Forecasting Methods
1. Collect past data—usually the more the better
2. Identify patterns in past data
3. Select one or more appropriate forecasting methods
4. Forecast part of past data with each method
   – Determine best parameters for each method
   – Compare forecasts with actual data
5. Select method that had smallest forecasting errors on past data
6. Forecast future time periods
7. Determine prediction interval (forecast range)
8. Monitor forecasting accuracy over time
   – Tracking signal
Types of Quantitative Forecasting Methods
Pattern Projection
– time series regression
– trend or seasonal models
Data Smoothing
– moving average
– exponential smoothing
Causal
– multiple regression
Data Pattern Components
[Five small plots of Sales vs. Time illustrating the components: LEVEL, TREND, SEASONALITY (December peaks marked), CYCLICALITY (multi-year swings, e.g. 1980–1986), and NOISE.]
Identifying Data Patterns for Time Series
Always Plot Data First
– After plotting data, patterns are often obvious.
Average or level
– Use mean of all data
Trend
– Use time series regression; the slope is the trend and the time period is the independent variable
Seasonality
– Deseasonalize the data
Cyclicality
– Similar to deseasonalizing
Random noise
– No pattern – try to eliminate in forecasts
Forecast Accuracy
E t  forecast error for period t
E t  D t  Ft
D t or At  actual demand
for period t
Ft  forecast for period t
n
Mean Absolute
Deviation
MAD 

Et
t 1
n
Forecast Accuracy
Mean Squared Error:
MSE = Σ(Et)² / n   (sum over t = 1 to n)

Mean Error (Bias):
ME = ΣEt / n   (sum over t = 1 to n)
Forecast Accuracy Example
Period    At    Ft    Et    |Et|    (Et)²
  1       32    30
  2       28    31
  3       31    33
  4       34    35
  5       34    33
  6       36    34
Totals:
Forecast Accuracy Example
Bias =
MAD =
MSE =
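The blanks above can be checked with a short script. This is a minimal sketch that applies the error measures defined on the preceding slides to the six periods of example data.

```python
# Forecast accuracy measures for the six-period example on the slide.
actual   = [32, 28, 31, 34, 34, 36]
forecast = [30, 31, 33, 35, 33, 34]

errors = [a - f for a, f in zip(actual, forecast)]   # Et = At - Ft
n = len(errors)

bias = sum(errors) / n                        # Mean Error (Bias) = sum(Et) / n
mad  = sum(abs(e) for e in errors) / n        # Mean Absolute Deviation
mse  = sum(e * e for e in errors) / n         # Mean Squared Error

print(f"Bias = {bias:.2f}, MAD = {mad:.2f}, MSE = {mse:.2f}")
```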
Quantity of Electric Irons Shipped by U.S. Mfgs.
[Chart: annual shipments, 1979–1988, in million units; vertical axis 0–14.]
Electric Irons Example -- Data
Year    Qty        Year    Qty
1979    12.079     1984    7.843
1980    11.478     1985    6.834
1981    11.013     1986    7.660
1982     6.616     1987    5.918
1983     7.279     1988    7.115

10-year average =
Last-7-year average =
Do time series regression analysis
Y = a + bX
Y = dependent variable (actual sales)
X = independent variable (time period in this case)
a = y-intercept (value of Y when X=0)
b = slope or trend
b = (N·ΣXY – ΣX·ΣY) / (N·ΣX² – (ΣX)²)

a = ΣY/N – b(ΣX/N)

where N = number of periods of data
Electric Irons Example
  X        Y        X²       XY
  4      6.616      16      26.46
  5      7.279      25      36.40
  6      7.843      36      47.06
  7      6.834      49      47.84
  8      7.660      64      61.28
  9      5.918      81      53.26
 10      7.115     100      71.15
===    =======     ===    ======
 49     49.265     371     343.45

b =
a =
Y =
Y11 =
Y12 =
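As a check on the hand calculation, here is a minimal sketch that fits the trend line to the last seven years of shipments (X = 4 to 10) using the slope and intercept formulas above, then projects periods 11 and 12.

```python
# Simple time series regression (trend line) for the electric irons example.
# X = time period (4..10), Y = shipments in million units.
X = [4, 5, 6, 7, 8, 9, 10]
Y = [6.616, 7.279, 7.843, 6.834, 7.660, 5.918, 7.115]

N = len(X)
sum_x, sum_y = sum(X), sum(Y)
sum_xy = sum(x * y for x, y in zip(X, Y))
sum_x2 = sum(x * x for x in X)

b = (N * sum_xy - sum_x * sum_y) / (N * sum_x2 - sum_x ** 2)   # slope (trend)
a = sum_y / N - b * (sum_x / N)                                # intercept

for period in (11, 12):                                        # project the trend
    print(f"Y{period} = {a + b * period:.3f}")
```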
Moving Company Sales

[Chart: number of trucks leased per quarter (Spring, Summer, Fall, Winter), 1988–1991; vertical axis 0–250.]
Overlay the Years
[Chart: number of trucks leased by season (Spring, Summer, Fall, Winter) with the years 1988–1991 overlaid; vertical axis 0–250.]
Seasonality and Trend Patterns (Seasonalized
Regression)
Steps:
1. Deseasonalize the data to remove seasonality
– divide by seasonal index (SI)
2. Use regression to model trend
3. Make initial forecasts to project trend
4. Seasonalize the forecast
– multiply by SI
Moving Company Example
          Spring   Summer   Fall    Winter
1988        90       160      70      120
1989       130       200      90      100
1990        80       170     130      140
1991       130       210      80      120
Total:     430       740     370      480
Avg:      107.5      185     92.5     120
SI:

Overall Avg. = 2020/16 = 126.25
Deseasonalize the Data
          Spring   Summer   Fall    Winter
1988      105.7*   109.2     95.5   126.3
1989      152.7    136.5    122.8   105.2
1990       94.0    116.0+   177.4   147.3
1991      152.7    143.3    109.2   126.3

*  Spring 1988: 90/.851 = 105.7
+  Summer 1990: 170/1.465 = 116.0
Perform Time Series Regression
  X        Y         X²        XY
  1      105.7         1      105.7
  2      109.2         4      218.4
  3       95.5         9      286.6
  :        :           :        :
 16      126.3       256     2020.0
===    =======     =====    ========
136    2,020.0     1,496    17,773.5   (Totals)
b = (N·ΣXY – ΣX·ΣY) / (N·ΣX² – (ΣX)²)
a = ΣY/N – b(ΣX/N)

b =
a =
Y =
Make initial forecasts:
Y17 =
Y18 =
Y19 =
Y20 =
Make final forecasts (seasonalize: F = Y × SI):
F17 =
F18 =
F19 =
F20 =
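To tie the four steps together, here is a minimal sketch of the seasonalized-regression procedure applied to the moving-company data above: compute seasonal indexes, deseasonalize, fit the trend line, project periods 17–20, and reseasonalize.

```python
# Seasonalized regression for the moving-company example (quarterly truck leases).
quarters = ["Spring", "Summer", "Fall", "Winter"]
data = {                       # rows = years 1988-1991, columns = quarters
    1988: [90, 160, 70, 120],
    1989: [130, 200, 90, 100],
    1990: [80, 170, 130, 140],
    1991: [130, 210, 80, 120],
}

# Step 1: seasonal indexes = quarter average / overall average, then deseasonalize.
all_values = [v for year in data.values() for v in year]
overall_avg = sum(all_values) / len(all_values)            # 2020/16 = 126.25
season_avg = [sum(data[y][q] for y in data) / len(data) for q in range(4)]
SI = [avg / overall_avg for avg in season_avg]             # ~.851, 1.465, .733, .950

deseason = [data[y][q] / SI[q] for y in sorted(data) for q in range(4)]

# Step 2: fit trend line Y = a + bX to the deseasonalized series (X = 1..16).
X = list(range(1, len(deseason) + 1))
N = len(X)
sum_x, sum_y = sum(X), sum(deseason)
sum_xy = sum(x * y for x, y in zip(X, deseason))
sum_x2 = sum(x * x for x in X)
b = (N * sum_xy - sum_x * sum_y) / (N * sum_x2 - sum_x ** 2)
a = sum_y / N - b * (sum_x / N)

# Steps 3-4: project the trend for periods 17-20, then reseasonalize (F = Y * SI).
for i, period in enumerate(range(17, 21)):
    trend = a + b * period
    print(f"F{period} ({quarters[i]} 1992) = {trend * SI[i]:.1f}")
```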
Gasoline Service Station Monthly Sales
[Chart: monthly sales in billion $, months 0–78; vertical axis roughly 6–13.]

Gasoline Service Station Monthly Sales
[Chart: the same sales plotted by month (Jan–Dec), one line per year, 1985–1990.]

Deseasonalized Sales
[Chart: deseasonalized monthly sales in billion $, months 0–78.]

Regression Line
[Chart: regression line fitted to the deseasonalized sales, billion $, months 0–78.]

Final Forecasts
[Chart: final forecasts, billion $, months 0–78.]

Actual Sales
[Chart: past sales, forecasts, and actuals, billion $, months 0–78.]
Forecast Ranging
Forecasts are rarely perfect!
A forecast range reflects the degree of confidence
that you have in your forecasts.
Forecast ranging allows you to estimate a
prediction interval for actual demand
“There is a ___% probability that actual demand
will be within the upper and lower limits of the
forecast range.”
Standard Error of the Forecast
(a measure of dispersion of the forecast errors)
syx = √[ (Σy² – aΣy – bΣxy) / (n – 2) ]
Upper Limit = Fi + t(syx)
Lower Limit = Fi - t(syx)
Need desired level of significance (α) and degrees
of freedom (df) to look up t in table
Forecast Confidence Intervals
(Forecast Ranging)
[Diagram: distribution of actual sales in the future centered on Ft; the lower and upper limits lie t(syx) on either side of Ft, enclosing probability (1 – α), with α/2 in each tail.]
t-statistic and degrees of freedom
For a confidence interval of 95%, α = .05 (.025 in each tail), and df = 16
From table, t =
Why does df = n-2 for simple regression?
If the forecast was from a multiple regression
model with 3 independent variables, what would
be the degrees of freedom? df = n - __
Example: Judy manages a large used car dealership that has
experienced a steady growth in sales during the last few years.
Using time series regression and sales data for the last 20 quarters,
Judy obtained a forecast of 800 car sales for next quarter. With her
model and the past data the standard error of the forecast was 50
cars. What are the limits for a 95% forecast range? for an 80%
forecast range?
Example: A manager’s forecast of next month’s sales of product Q was
1500 units using time series regression based on the last 24 months
of sales, which had a standard forecast error of 29 units. Her boss
asked how sure she was that actual sales would be within 50 units
of her forecast.
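A quick way to answer questions like Judy's is to pull the t value from a statistics library instead of a printed table. This is a minimal sketch for the used-car example (forecast of 800 cars, syx = 50, 20 quarters of data, so df = n – 2 = 18); it assumes SciPy is available.

```python
# Forecast range (prediction interval) using the t distribution.
from scipy import stats

forecast = 800     # next-quarter forecast of car sales
s_yx = 50          # standard error of the forecast
n = 20             # quarters of past data used in the regression
df = n - 2         # degrees of freedom for simple regression

for confidence in (0.95, 0.80):
    alpha = 1 - confidence
    t = stats.t.ppf(1 - alpha / 2, df)     # two-tailed critical value
    lower = forecast - t * s_yx
    upper = forecast + t * s_yx
    print(f"{confidence:.0%} range: {lower:.0f} to {upper:.0f} cars")
```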
Short Range Forecasting
• A few days to a few months
• Assumes there are no patterns in the data
• Random noise has a greater impact in the short
term
• These approaches try to eliminate some of the
random noise
• Random walk, moving average, weighted
moving average, exponential smoothing
Random Walk
The next forecast is equal to the last period’s
actual value
Period    Sales    Forecast
  1        21
  2        30
  3        27
  4         ?
Moving Average Method
The next forecast is equal to an average of the last
AP periods of actual data
Period    Sales    AP=4    AP=3    AP=2
  1        21
  2        28
  3        35
  4        30
  5         ?
Impulse Response – how fast the forecasts react to
changes in the data
The higher the value of AP, the less the forecast will react
to changes in the data, so the lower the impulse
response is.
Noise Dampening – how much the forecasts are smoothed
Noise dampening is the opposite of impulse response.
A moving average model with AP=1 has high impulse
response and low noise dampening characteristics.
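A minimal sketch computing the period-5 moving-average forecast from the sales in the table above for AP = 4, 3, and 2.

```python
# Moving average forecast: average of the last AP actual values.
sales = [21, 28, 35, 30]          # periods 1-4

def moving_average_forecast(history, ap):
    """Forecast the next period as the mean of the last `ap` actuals."""
    return sum(history[-ap:]) / ap

for ap in (4, 3, 2):
    print(f"AP={ap}: forecast for period 5 = {moving_average_forecast(sales, ap):.1f}")
```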
Weighted Moving Average method
Like the moving average method except that each of the
AP periods can have a different weight
Period    Actual Sales    Weight (AP=4)
  1            21              .1
  2            28              .15
  3            35              .25
  4            30              .5
  5             ?
Usually the recent periods have more weight
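A minimal sketch of the weighted moving average for period 5, using the AP = 4 weights from the table (the most recent period weighted .5).

```python
# Weighted moving average: weights applied oldest-to-newest, summing to 1.
sales   = [21, 28, 35, 30]        # periods 1-4
weights = [0.1, 0.15, 0.25, 0.5]  # AP=4; the most recent period gets the largest weight

forecast = sum(w * s for w, s in zip(weights, sales))
print(f"Forecast for period 5 = {forecast:.2f}")   # 30.05
```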
Exponential Smoothing
Most common short-term quantitative forecasting method
(especially for forecasting inventory levels)
Why?
– surprisingly accurate
– easy to understand
– simple to use
– very little data is stored
Need 3 pieces of data to make forecast
1. most recent forecast
2. actual sales for that period
3. smoothing constant (α)
Exponential Smoothing method
– gives a different weight to each period
Ft = Ft-1 + α(At-1 – Ft-1)
α is the smoothing parameter and is between 0 and 1
Interpretation: the next forecast equals last period’s
forecast plus a percentage of last period’s forecasting
error.
Alternative formula:
Ft = αAt-1 + (1 - α)Ft-1
(rearranging terms)
Example: assume α = 0.3
We must assume a forecast for an earlier period
Period    Sales    Forecast
  1        21
  2        24
  3        23
  4        19
  5        22
  6         ?
Find best value for α by trial and error
The larger α is, the more weight that is placed on
the more recent periods’ actual values, so the
higher the impulse and the lower the noise
dampening.
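A minimal sketch of simple exponential smoothing applied to the example above with α = 0.3. The slide only says some starting forecast must be assumed; here it is taken to be the period-1 actual, 21.

```python
# Simple exponential smoothing: Ft = F(t-1) + alpha * (A(t-1) - F(t-1)).
sales = [21, 24, 23, 19, 22]      # periods 1-5; we want the forecast for period 6
alpha = 0.3

forecast = sales[0]               # assumed forecast for period 1 (needed to start)
for t, actual in enumerate(sales, start=1):
    print(f"Period {t}: forecast = {forecast:.2f}, actual = {actual}")
    forecast = forecast + alpha * (actual - forecast)   # next period's forecast

print(f"Period 6: forecast = {forecast:.2f}")
```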
Tracking Signal
After a forecasting method has been selected,
tracking signal is used to monitor accuracy of
the method as time passes
Particularly good at identifying underforecasting or
overforecasting trends
Tracking Signal = Sum of Errors (ΣEt) / MAD
Ideal value for tracking signal is ___
Guidelines specify what action to take if the value exceeds specified limits
Example: Suppose exponential smoothing is used (α = .2)
If |TS| < 2.3 then do not change α
If |TS| > 2.3 then increase α by .1
If |TS| > 3.0 then increase α by .3
If |TS| > 3.6 then increase α by .5
After tracking signal goes back down, restore original value
of α or calculate new α
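A minimal sketch of the tracking-signal calculation: the running sum of forecast errors divided by the MAD. The example reuses the six periods from the forecast accuracy slide.

```python
# Tracking signal = running sum of errors / MAD.
def tracking_signal(actuals, forecasts):
    errors = [a - f for a, f in zip(actuals, forecasts)]
    mad = sum(abs(e) for e in errors) / len(errors)
    return sum(errors) / mad

actuals   = [32, 28, 31, 34, 34, 36]
forecasts = [30, 31, 33, 35, 33, 34]
print(f"Tracking signal = {tracking_signal(actuals, forecasts):.2f}")
```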
Double Exponential Smoothing
(Exponential Smoothing with Trend)
Two smoothing constants are used:
α smoothes out random variations
β smoothes out trends
An alternative to time series regression
Especially useful if there is much random variation
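The slide gives no equations, so here is a minimal sketch of the standard Holt formulation of double exponential smoothing, which keeps a smoothed level and a smoothed trend; the exact variant intended in the chapter may differ, and the sample data below is made up for illustration.

```python
# Double exponential smoothing (standard Holt formulation; one common variant).
def double_exponential_smoothing(actuals, alpha, beta):
    level, trend = actuals[0], actuals[1] - actuals[0]   # simple initialization
    forecasts = [None, level + trend]                    # first usable forecast: period 2
    for actual in actuals[1:]:
        prev_level = level
        level = alpha * actual + (1 - alpha) * (level + trend)    # smooth random variation
        trend = beta * (level - prev_level) + (1 - beta) * trend  # smooth the trend
        forecasts.append(level + trend)                           # forecast for next period
    return forecasts

print(double_exponential_smoothing([12, 15, 17, 20, 23], alpha=0.4, beta=0.3))
```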
Winter’s Exponential Smoothing
Accounts for trend and seasonality
Three smoothing constants are used
α smoothes out random variations
β smoothes out trends
γ smoothes out seasonality
There are many other variations of exponential
smoothing
Box-Jenkins Forecasting Approach
Relatively accurate, but complex and time
consuming to use
Needs at least 60 points
Good choice if there are not many time series to
forecast, and accuracy is very important
Works best when random variation is a small
component
Example: monthly automobile registrations in U.S.
Forecast = Dt + Dt-11 – Dt-12 – 0.21Et – 0.21Et-1
– 0.85Et-11 + 0.18Et-12 + 0.22Et-13
where
Dt = Actual demand for time period t
Et = Error term for time period t
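To make the lag notation concrete, this sketch evaluates the fitted equation above, assuming you already have lists of past monthly demands and error terms; the values below are placeholders, not real registration data.

```python
# Evaluate the fitted Box-Jenkins equation for next month's registrations.
# D[-1] is demand in the most recent month t, D[-12] is eleven months back (t-11), etc.
# The numbers below are placeholders, not real registration data.
D = [100 + i for i in range(24)]             # past monthly demand, oldest to newest
E = [((-1) ** i) * 0.5 for i in range(24)]   # past one-step forecast errors

forecast = (D[-1] + D[-12] - D[-13]
            - 0.21 * E[-1] - 0.21 * E[-2]
            - 0.85 * E[-12] + 0.18 * E[-13] + 0.22 * E[-14])
print(f"Forecast = {forecast:.2f}")
```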
Focus Forecasting
(Forecasting Simulation)
Bernard Smith at American Hardware Supply developed
this method to make forecasts for 100,000 items
Based on 2 principles:
– sophisticated methods don’t always work better
– no single method works best for all items
Buyers tended not to use the previous exponential
smoothing model because they did not trust or
understand it. Instead, they were making up their own
simple rule-of-thumb approaches.
Smith selected 7 forecasting methods to use, such as
1. sales = last month’s sales plus a percentage
2. sales = sales for same month last year plus a %
3. 2-month moving average
4. exponential smoothing
etc. (most were relatively simple)
All methods were used to forecast each product.
Whichever method worked best for the previous month,
that method was used to forecast the next month.
Approach worked very well, and people understood and
used it. Smith wrote a popular book describing his
approach and success.
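The selection rule is easy to express in code: forecast each item with every candidate method, score each method on the most recent month, and use the winner for next month. This is a minimal sketch with simple stand-in methods; Smith's actual seven rules are only partially listed above.

```python
# Focus forecasting: whichever simple method forecast last month best
# is used to forecast next month. The methods below are illustrative stand-ins.
methods = {
    "last month":             lambda h: h[-1],
    "same month last year":   lambda h: h[-12],
    "2-month moving average": lambda h: (h[-1] + h[-2]) / 2,
}

def focus_forecast(history):
    # Score each method by how well it would have forecast the most recent month.
    def error(method):
        return abs(method(history[:-1]) - history[-1])
    best_name, best_method = min(methods.items(), key=lambda kv: error(kv[1]))
    return best_name, best_method(history)

history = [120, 130, 125, 140, 150, 145, 160, 155, 170, 165, 180, 175,
           130, 140, 135, 150]          # made-up monthly demand, oldest to newest
name, forecast = focus_forecast(history)
print(f"Best method last month: {name}; forecast for next month = {forecast}")
```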
Multiple Regression Forecasting
Sales = f($advertising, #salespeople, $price)
Sales    Adv    People    Price
5200     350      18        53
5600     520      18        52
5100     400      15        54
3800     320      13        64
5200     410      16        51
4900     290      17        60
5200     390      17        54
5400     470      20        55
4700     450      14        61
5000     500      15        58
5100     470      18        60
SUMMARY OUTPUT
Regression Statistics
Multiple R              0.952
R Square                0.907
Adjusted R Square       0.867
Standard Error        170.988
Observations               11

ANOVA
              df            SS          MS        F    Significance F
Regression     3   1991704.974    663901.7   22.708           0.00055
Residual       7    204658.662    29236.95
Total         10   2196363.636

            Coefficients   Standard Error    t Stat   P-value
Intercept       5839.347         1236.003     4.724     0.002
Adv                1.742            0.765     2.277     0.057
People           100.207           30.723     3.262     0.014
Price            -56.478           14.999    -3.765     0.007
Multiple Regression Example
Suppose the manager wants to
forecast sales if $430 in advertising,
19 salespeople, and a price of $64
per unit are planned.
             Coefficients
Intercept       5839.347
Adv                1.742
People           100.207
Price            -56.478
Forecasting equation:
Sales = 5839.347 + 1.742(adv) + 100.207(people) – 56.478(price)
Sales =
Sales =
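A minimal sketch that plugs the planned values ($430 in advertising, 19 salespeople, a $64 price) into the fitted equation above, as a check on the hand calculation.

```python
# Forecast from the fitted multiple regression equation.
coefficients = {"Intercept": 5839.347, "Adv": 1.742, "People": 100.207, "Price": -56.478}
plan = {"Adv": 430, "People": 19, "Price": 64}

sales = coefficients["Intercept"] + sum(coefficients[k] * v for k, v in plan.items())
print(f"Forecast sales = {sales:.0f} units")   # about 4878 units
```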