Evaluating FVS-NI Basal Area Increment Model
Revisions under Structural Based Prediction
Robert Froese, Ph.D., R.P.F.
School of Forest Resources and Environmental Science
Michigan Technological University, Houghton MI 49931
This presentation has four parts
1. Introduction: the issue, the question, and the model formulations examined
2. Approach: the methods and the data sets
3. Performance: how does SBP affect model building and model application?
4. Relevance: how do data structure, assumptions and the methodology interact?
Competition variables have sampling error that varies in forestry problems
• Why?
  – Inventory plot size and density are far from standardized in forestry
  – As a stand becomes patchy or older, sampling variance increases
• Sampling errors attenuate regression coefficients towards zero, leading to type II errors in model development
• If the sampling variance of predictors differs between fitting and application, ordinary least squares (OLS) regression coefficients are not unbiased
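The attenuation effect described above can be demonstrated with a short simulation; this is a minimal sketch with invented numbers, not data from this study:

```python
import numpy as np

# Illustrative sketch (not from the presentation): sampling error in a
# predictor attenuates the OLS slope toward zero.
rng = np.random.default_rng(0)
n = 20000
beta = 0.8
x_true = rng.normal(0.0, 1.0, n)                 # true competition variable
y = beta * x_true + rng.normal(0.0, 0.3, n)      # response with equation error

sigma_u2 = 0.5                                   # sampling-error variance in x
x_obs = x_true + rng.normal(0.0, np.sqrt(sigma_u2), n)

# OLS slope on the error-contaminated predictor
b_ols = np.cov(x_obs, y, ddof=0)[0, 1] / np.var(x_obs)

# Expected attenuation factor: var(x) / (var(x) + sigma_u2) = 1 / 1.5
```

With half as much error variance again as signal variance, the fitted slope shrinks to roughly two thirds of the true coefficient, which is how real effects can fail significance tests during model development.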
Fuller (1987) derived an unbiased estimator for the underlying linear structural model

$$\hat{\beta} = \left(\hat{M}_{xx} - \Sigma_{uu}\right)^{-1}\left(\hat{M}_{xy} - \sigma_{uw}\right)$$

where

$$\hat{M}_{xy} = n^{-1}\sum_{t=1}^{n} X_t' Y_t \qquad \hat{M}_{xx} = n^{-1}\sum_{t=1}^{n} X_t' X_t$$

$\sigma_{uw}$ is a vector of covariances between errors in $Y$ and $X$, and $\Sigma_{uu}$ is a matrix of error variances and covariances for errors in $X$.
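Fuller's estimator can be sketched numerically as follows. The data, coefficients, and error variances here are invented for illustration, and $\Sigma_{uu}$ is treated as known, whereas SBP derives it from the sampling design:

```python
import numpy as np

# Sketch of Fuller's (1987) moment correction with simulated data.
rng = np.random.default_rng(1)
n, p = 20000, 2
beta = np.array([1.5, -0.7])
X = rng.normal(size=(n, p))                      # true (error-free) predictors
y = X @ beta + rng.normal(0.0, 0.2, n)           # equation error in y

Sigma_uu = np.diag([0.3, 0.1])                   # error variances for errors in X
W = X + rng.multivariate_normal(np.zeros(p), Sigma_uu, size=n)  # observed X

M_xx = W.T @ W / n                               # n^-1 sum of W_t' W_t
M_xy = W.T @ y / n                               # n^-1 sum of W_t' y_t
sigma_uw = np.zeros(p)                           # errors in X independent of y's error

beta_ols = np.linalg.solve(M_xx, M_xy)                            # attenuated
beta_fuller = np.linalg.solve(M_xx - Sigma_uu, M_xy - sigma_uw)   # corrected
```

The uncorrected moments give shrunken slopes; subtracting the error variance-covariance matrices recovers estimates close to the structural coefficients.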
Stage and Wykoff (1998) developed
Structural Based Prediction
1. Derive estimators for sampling variance of
competition variables
2. Estimate coefficients following Fuller’s (1987) logic
3. Revise coefficients during simulation to take into
account the current estimate of sampling variance
1
ˆ
   Mˆ xx  ˆ uutt   Mˆ xy  ˆ uw tt 
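One way to picture step 3 is the following sketch; this is my interpretation with invented moment matrices, not code from the presentation. Once corrected structural moments are in hand, coefficients can be re-solved against whatever sampling-error variance the current inventory implies:

```python
import numpy as np

# Sketch (assumed interpretation) of revising coefficients during simulation:
# re-attenuate the structural coefficients to match the sampling-error
# variance of the *current* inventory.
def revise_coefficients(M_xx_true, M_xy_true, Sigma_uu_now):
    """Coefficients appropriate for predictors whose current error
    variance-covariance matrix is Sigma_uu_now (errors assumed
    independent of the equation error)."""
    return np.linalg.solve(M_xx_true + Sigma_uu_now, M_xy_true)

# Invented structural moments and coefficients for illustration
beta_true = np.array([1.5, -0.7])
M_xx_true = np.array([[1.0, 0.2], [0.2, 1.0]])
M_xy_true = M_xx_true @ beta_true

# No sampling error at application time: the structural coefficients come back
b_exact = revise_coefficients(M_xx_true, M_xy_true, np.zeros((2, 2)))

# Larger current sampling error: coefficients shrink toward zero
b_noisy = revise_coefficients(M_xx_true, M_xy_true, np.diag([0.5, 0.5]))
```

The point of the revision is that predictions stay calibrated whether the application inventory is more or less precise than the fitting data.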
This study had two objectives
1. Wykoff (1997) and Froese (2003) tested revisions
using OLS and afterwards fit the model using SBP;
would they have reached the same conclusions if they
tested revisions using SBP?
2. SBP has not been tested on independent data. Does
SBP perform in practice according to theory; namely,
are predictions made using the DDS model fit using
SBP less biased than predictions made with the
model fit using OLS?
The Prognosis BAI model is a multiple linear regression on the logarithmic scale

Wykoff 1997:

$$\begin{aligned}
\ln(DDS) ={}& HAB + LOC + b_1 \ln(DBH) + b_2\,LOC{:}DBH\\
&+ b_3 \cos(ASP)\,SL + b_4 \sin(ASP)\,SL + b_5\,SL + b_6\,SL^2 + b_7\,EL + b_8\,EL^2\\
&+ b_9\,CR + b_{10}\,\frac{CR}{\ln(DBH+1)} + b_{11}\,\frac{SBA}{DBH^2} + b_{12}\,HAB{:}SBA + b_{13}\,(1-P90)\,PBA
\end{aligned}$$

Froese 2003:

$$\begin{aligned}
\ln(DDS) ={}& HAB + LITH + b_1 \ln(DBH) + b_2\,DBH\\
&+ b_3 \cos(ASP)\,SL + b_4 \sin(ASP)\,SL + b_5\,SL + b_6\,SL^2 + b_7\,ANP + b_8\,GSP + b_9\,GST\\
&+ b_{10}\,CR + b_{11}\,\frac{CR}{\ln(DBH+1)} + b_{12}\,\frac{SBA}{DBH} + b_{13}\,HAB{:}SBA + b_{14}\,(1-SPCT)\,PBA
\end{aligned}$$
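A toy evaluation of a few terms of the linear predictor may help fix the notation. Every coefficient value below is invented, only a subset of terms is shown, and the real model carries species-, habitat- and location-specific fits:

```python
import math

# Illustrative sketch only: a few terms of the Wykoff-style linear predictor
# on the log scale, with invented coefficient values.
def ln_dds(dbh, cr, sba, asp, sl, b):
    """Partial linear predictor for ln(DDS); asp in radians, cr a proportion."""
    return (b["intercept"]                       # stands in for HAB + LOC effects
            + b["b1"] * math.log(dbh)            # size term
            + b["b3"] * math.cos(asp) * sl       # aspect-slope interaction
            + b["b4"] * math.sin(asp) * sl
            + b["b9"] * cr                       # crown ratio
            + b["b10"] * cr / math.log(dbh + 1)  # crown ratio scaled by size
            + b["b11"] * sba / dbh ** 2)         # stand basal area competition

# Invented coefficients for demonstration only
b = {"intercept": 0.5, "b1": 0.9, "b3": -0.1, "b4": 0.05,
     "b9": 1.2, "b10": 0.4, "b11": -2.0}

# Backtransform from the log scale to DDS (periodic change in squared diameter)
dds = math.exp(ln_dds(dbh=25.0, cr=0.6, sba=30.0, asp=math.pi, sl=0.3, b=b))
```

Because the model is linear on the log scale, SBP's coefficient corrections translate directly into multiplicative adjustments of predicted DDS.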
The approach involves two parts

• Evaluating model revisions
  – Froese 2003 revisions form the basis
  – Repeat under SBP using FIA data
  – Compare RMSE of prediction residuals under OLS and SBP

• Testing on independent data
  – Use the Froese 2003 model formulation, fit using FIA data under OLS and SBP
  – Generate predictions for independent testing data
  – Compare bias and RMSE of prediction residuals under OLS and SBP
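The comparison criteria named above can be sketched as follows; the bias-corrected variant reflects my reading of the "PRMSE (bias corrected)" column in the results tables:

```python
import numpy as np

# Sketch of the comparison metrics: mean residual (bias) and root mean
# squared error of prediction residuals, plus a bias-corrected RMSE.
def bias_and_prmse(y_obs, y_pred):
    r = np.asarray(y_obs, dtype=float) - np.asarray(y_pred, dtype=float)
    bias = r.mean()
    prmse = np.sqrt(np.mean(r ** 2))
    prmse_corrected = np.sqrt(np.mean((r - bias) ** 2))  # residual SD about its mean
    return bias, prmse, prmse_corrected

# Toy example: predictions uniformly 0.5 too high
bias, prmse, prmse_c = bias_and_prmse([1.0, 2.0, 3.0, 4.0], [1.5, 2.5, 3.5, 4.5])
```

Separating bias from bias-corrected PRMSE is what lets the study report accuracy and precision effects of SBP independently.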
The fitting data came from FIA inventories in the Inland Empire
• FIA "map" design: 8,295 trees (20%)
• FIA "old" design: 32,754 trees (80%)
• All increment data from increment cores!
The testing data came from the USFS
Region 1 Permanent Plot Program
7,932 trees (44%) from control plots
10,659 trees (56%) from treated plots
• Installed in managed stands, mostly pre-commercial thinning
• Control plots were left untreated
• Geographically restricted to National Forests
– Coeur d’Alene, Flathead, Kaniksu, Kootenai, Lolo and St. Joe
• Diameter increment from successive re-measurements, not cores
Evaluating model revisions gave similar results for all species:
– When the change in precision due to revisions in the model formulation is assessed, the outcome of revisions is more favourable under SBP
– When the change in precision due to the model framework is assessed, SBP always results in a degradation in model performance, but the degradation is smaller for every species under the revised DDS model formulation developed in Chapter 4
– SBP increased RMSE by 1.0–4.8% for the Wykoff 1997 version, but only 0.7–3.6% for the Froese 2003 version
SBP usually reduced bias as expected when
applied to independent data
Species   OLS bias          SBP bias          PRMSE (bias corrected)
          Abs.    Rel. %    Abs.    Rel. %    OLS     SBP     Diff. %
ABGR      -0.02   -0.5      -0.11   -2.4      0.630   0.641    1.7
ABLA      -0.08   -1.8      -0.08   -1.7      0.604   0.579   -4.3
HARD      -0.45   -11.5     -0.44   -11.2     0.662   0.643   -3.0
HIEL       0.03    0.8       0.02    0.5      0.459   0.417   -10.1
LAOC      -0.20   -4.8      -0.16   -3.9      0.610   0.597   -2.2
PICO      -0.16   -3.7      -0.16   -3.8      0.620   0.588   -5.4
PIEN      -0.26   -6.0      -0.22   -5.0      0.616   0.595   -3.5
PIPO      -0.49   -10.5     -0.52   -11.3     0.678   0.667   -1.6
PSME      -0.24   -5.5      -0.24   -5.5      0.586   0.583   -0.5
THPL       0.09    2.1       0.05    1.2      0.677   0.695    2.6
Precision was improved more consistently
with SBP
Predictions are similar in magnitude under each method, with exceptions

Trends in residuals across stand basal area were slightly improved with SBP (results for Pseudotsuga menziesii)

Trends in residuals across PBAL were also slightly improved with SBP (results for Pseudotsuga menziesii)

SBP effects may be overwhelmed by poor model performance on these data (results for Pseudotsuga menziesii)
The effect of SBP is confounded with other issues in the test and the test data
• The test data differ in more ways than sampling design
• SBP would be enhanced by methodological revisions
  – Poisson model
  – Estimation algorithm
SBP produces stable results despite complexity and confounding influences
• Model testing is very encouraging
  – Bias was reduced for all species except those that have other problems
  – Precision actually improved for most species
• At minimum, these results suggest model users need not fear spurious results when using the DDS model implemented with SBP
Summary

Competition term: (1 − P90)·PBA (Wykoff 1997) vs. (1 − SPCT)·PBA (Froese 2003)
Larix occidentalis example: RMSE 1.7%, PRMSE 2.2%; OLS bias −4.8%, SBP bias −3.9%
• Model revision decisions are insensitive to regression methodology
• SBP increases RMSE but decreases PRMSE
• SBP reduces bias in most situations, as expected
• Methodological revisions are desirable