Understanding Your Data Set


Statistics are used to describe data sets

A statistic gives us a single metric in place of a graph.

What are some types of statistics used to describe data sets?

Average, range, variance, standard deviation, coefficient of variation, standard error

Table 1. Total length (cm) and average length of spotted gar collected from a local farm pond and from a local lake.

Number:       1    2    3    4    5    6    7    8    9   10    Average
Pond length:  34   78   48   24   64   58   34   66   22   44    47.2
Lake length:  38   82   58   76   60   70   99   40   68   91    68.2


Are the two samples equal?

What if the averages were 47.2 and 47.3 — would those samples be equal?

If we sampled all of the gar in each water body, would the average be different?

How different?

Would the lake fish average still be larger?

Range

Simply the distance between the smallest and largest value


Figure 1. Range of spotted gar length collected from a pond and a lake. The dashed line represents the overlap in range.

Does the difference in average length (47.2 vs. 68.2) seem as large as before?


Variance

An index of variability used to describe the dispersion among the measures of a population sample.

Need the distance between each sample point and the sample mean.

Figure 2. Length (cm) of each spotted gar collected from the pond, plotted by fish number. The horizontal solid line represents the sample mean length; the distance from each point to this line is that fish's distance from the sample mean.


We can easily put this new data set into a spreadsheet table. By adding up all of the differences, we can get a number that reflects how scattered the data points are.

The closer each number is to the mean, the smaller the total difference.

But after adding up all of the differences, we get zero. This is true of any data set: the deviations from the mean always sum to zero. What can we do to get rid of the negative values?

#     Length   Mean   Difference
1     34       47.2    -13.2
2     78       47.2     30.8
3     48       47.2      0.8
4     24       47.2    -23.2
5     64       47.2     16.8
6     58       47.2     10.8
7     34       47.2    -13.2
8     66       47.2     18.8
9     22       47.2    -25.2
10    44       47.2     -3.2
               Sum =      0

Sum of Squares

#     Length   Mean   Difference   Difference²
1     34       47.2    -13.2        174.24
2     78       47.2     30.8        948.64
3     48       47.2      0.8          0.64
4     24       47.2    -23.2        538.24
5     64       47.2     16.8        282.24
6     58       47.2     10.8        116.64
7     34       47.2    -13.2        174.24
8     66       47.2     18.8        353.44
9     22       47.2    -25.2        635.04
10    44       47.2     -3.2         10.24
               Sum =      0         3233.6

Now 3233.6 is a number we can use! This value is called the SUM OF SQUARES.

Back to Variance


Sum of Squares (SOS) will continue to increase as we increase our sample size.

A sample of 100 replicates that are not highly variable could still have a higher SOS than a sample of 10 replicates that are highly variable.

To account for sample size, we divide SOS by the number of samples minus one (n – 1).

We'll get to the reason it's (n – 1) instead of n later.

Calculate Variance (σ²)

σ² = S² = Σ(Xᵢ – X̄)² / (n – 1) = SOS / degrees of freedom

Variance for Pond = S² = 3233.6 / 9 = 359.29
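The sum of squares and variance above can be checked with a few lines of Python (a sketch using only the standard library; the variable names are my own):

```python
# Pond lengths (cm) from Table 1
pond = [34, 78, 48, 24, 64, 58, 34, 66, 22, 44]

n = len(pond)                          # 10 fish
mean = sum(pond) / n                   # 47.2
diffs = [x - mean for x in pond]       # deviations from the mean

# The raw deviations always sum to zero, which is why we square them.
sos = sum(d ** 2 for d in diffs)       # sum of squares = 3233.6
variance = sos / (n - 1)               # SOS / degrees of freedom ≈ 359.29
```

Dividing by the degrees of freedom (n – 1 = 9) rather than n is what turns the raw SOS into a variance that is comparable across sample sizes.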


More on Variance

Variance tends to increase as the sample mean increases

For our sample, the largest difference between any point and the mean was 30.8 cm. Imagine measuring a plot of cypress trees. How large a difference would you expect (if measured in cm)?

The variance for the lake sample = 400.18.

Standard Deviation


Calculated as the square root of the variance.

Variance is not a linear distance (we had to square the differences). Think about the difference in shape of a meter stick versus a square meter.

By taking the square root of the variance, we return our index of variability to something that can be placed on a number line.

Calculate SD

For our gar sample, the Variance was 359.29. The square root of 359.29 = 18.95.

Reported with the mean as: 47.2 ± 18.95 (mean ± SD).

Standard Deviation is often abbreviated as σ (sigma) or as SD.

SD is a unit of measurement that describes the scatter of our data set.

Also increases with the mean

Standard Error

Calculated as: SE = σ / √(n)

Indicates how close we are to estimating the true population mean

For our pond ex: SE = 18.95 / √10 = 5.993


Reported with the mean as 47.2 ± 5.993 (mean ± SE). Based on the formula, the SE decreases as sample size increases.

Why is this not a mathematical artifact, but a true reflection of the population we are studying?
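The SD and SE calculations above can be sketched the same way (stdlib Python; the slide's 5.993 comes from using the already-rounded SD of 18.95):

```python
import math

pond = [34, 78, 48, 24, 64, 58, 34, 66, 22, 44]
n = len(pond)
mean = sum(pond) / n                                      # 47.2
variance = sum((x - mean) ** 2 for x in pond) / (n - 1)   # ≈ 359.29

sd = math.sqrt(variance)        # standard deviation ≈ 18.95
se = sd / math.sqrt(n)          # standard error ≈ 5.99

print(f"{mean} ± {sd:.2f} (mean ± SD)")
print(f"{mean} ± {se:.2f} (mean ± SE)")
```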

Sample Size

The number of individuals within a population you measure/observe.

Usually impossible to measure the entire population

As sample size increases, we get closer to the true population mean.

Remember, when we take a sample we assume it is representative of the population.

Effect of Increasing Sample Size


I measured the length of 100 gar. I calculated SD and SE for the first 10, then included the next 10, and so on until all 100 individuals were included.

[Figures: raw length data, SD, and SE each plotted against sample size (0–100). SD = square root of the variance, where Var = Σ(Xᵢ – X̄)² / (n – 1); SE = SD / √n. As sample size grows, SD settles around a stable value while SE steadily shrinks.]

What is a population? Population: a data set representing the entire entity of interest.

Sample: a data set representing a portion of a population.

Population mean – the true mean for that population: a single number. Sample mean – the estimated population mean: a range of values (estimate ± 95% confidence interval).

As our sample size increases, we sample more and more of the population. Eventually, we will have sampled the entire population, and our sample distribution will be the population distribution.

Mean = x̄ = Σ xᵢ / N

Variance = Σ(xᵢ – x̄)² / (N – 1)

Standard Deviation = √[ Σ(xᵢ – x̄)² / (N – 1) ]

Standard Error = SD / √N

Individual   Weight   Mean    SOS = (Weight – Mean)²
1            26       28.17    4.7089
2            32       28.17   14.6689
3            25       28.17   10.0489
4            26       28.17    4.7089
5            30       28.17    3.3489
6            30       28.17    3.3489
Sum          169               40.8334

(N = 6, N – 1 = 5)

Mean = 169/6 = 28.17

Range = 25 – 32 SOS = 40.83

Variance = 40.83 / 5 = 8.16

Std. Dev. = √(40.83 / 5) = 2.86

Std. Err. = 2.86 / √6 = 1.17

Go to Excel.
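Instead of Excel, the same worked example can be reproduced in Python (a sketch; small differences from the slide, e.g. SOS 40.8333 vs. 40.8334, come from the slide rounding the mean to 28.17 first):

```python
import math

weights = [26, 32, 25, 26, 30, 30]
n = len(weights)                              # N = 6

mean = sum(weights) / n                       # 169 / 6 ≈ 28.17
rng = (min(weights), max(weights))            # range = 25 – 32
sos = sum((w - mean) ** 2 for w in weights)   # ≈ 40.83
variance = sos / (n - 1)                      # ≈ 8.17 (slide: 8.16, from rounding)
sd = math.sqrt(variance)                      # ≈ 2.86
se = sd / math.sqrt(n)                        # ≈ 1.17
```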

MEAN ± CONFIDENCE INTERVAL

When a population is sampled, a mean value is determined and serves as the point-estimate for that population.

However, we cannot expect our estimate to be the exact mean value for the population.

Instead of relying on a single point-estimate, we estimate a range of values, centered around the point-estimate, that probably includes the true population mean.

That range of values is called the confidence interval .

Confidence Interval

Confidence Interval : consists of two numbers (high and low) computed from a sample that identifies the range for an interval estimate of a parameter.

There is a 5% chance (95% confidence interval) that our interval does not include the true population mean.

ȳ ± t(0.05) × (s / √n)

28.17 ± 2.29

25.88 ≤ μ ≤ 30.45
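A sketch of the interval arithmetic in Python. Note an assumption I am making explicit: the slide's ±2.29 corresponds to a critical value of about 1.96 (the large-sample normal value); the exact t value for 5 degrees of freedom is 2.571, which would give a slightly wider interval.

```python
import math

weights = [26, 32, 25, 26, 30, 30]
n = len(weights)
mean = sum(weights) / n
sd = math.sqrt(sum((w - mean) ** 2 for w in weights) / (n - 1))
se = sd / math.sqrt(n)                 # ≈ 1.17

# 1.96 inferred from the slide's numbers (2.29 / 1.17 ≈ 1.96);
# t(0.05, df=5) = 2.571 would widen the interval to about ±3.0.
half_width = 1.96 * se                 # ≈ 2.29
ci_low, ci_high = mean - half_width, mean + half_width
print(f"{ci_low:.2f} <= mu <= {ci_high:.2f}")
```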

Hypothesis Testing

Null versus Alternative Hypothesis

Briefly:

Null Hypothesis: the two means are not different.

Alternative Hypothesis: the two means are different.

A test statistic based on a predetermined probability (usually 0.05) is used to reject or accept the null hypothesis.

If P < 0.05, then there is a significant difference.

If P > 0.05, then there is NO significant difference.

Are Two Populations The Same?


Boudreaux: 'My pond is better than your lake, cher!'

Alphonse: 'Mais non! I've got much bigger fish in my lake!'

How can the truth be determined?

Two Sample t-test

Simple comparison of a specific attribute between two populations

If the attributes between the two populations are equal, then the difference between the two should be zero

This is the underlying principle of a t-test

If P-value > 0.05, the means are not significantly different; if P < 0.05, the means are significantly different.
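To settle the pond-vs-lake argument, here is a hand-rolled pooled two-sample t statistic (stdlib only; a stats library routine such as scipy.stats.ttest_ind would give the same number):

```python
import math

pond = [34, 78, 48, 24, 64, 58, 34, 66, 22, 44]
lake = [38, 82, 58, 76, 60, 70, 99, 40, 68, 91]

def sample_mean_var(xs):
    """Sample mean and variance (SOS / (n - 1))."""
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return m, v

m1, v1 = sample_mean_var(pond)   # 47.2, ≈ 359.29
m2, v2 = sample_mean_var(lake)   # 68.2, ≈ 400.18
n1, n2 = len(pond), len(lake)

# Pooled variance, then the t statistic for the difference in means
sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))   # ≈ -2.41
```

With 18 degrees of freedom, the two-sided 0.05 critical value is about 2.10, so |t| ≈ 2.41 means the pond and lake means are significantly different: Alphonse wins.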

Analysis of Variance

Can compare two or more means

Compares means to determine whether the population distributions differ.

Uses means and confidence intervals much like a t-test

Test statistic used is called an F statistic (F-test), which is used to get the P value

If P-value > 0.05 the means are not significantly different; If P< 0.05 the means are significantly different

A post-hoc test separates out which means differ from the others.


Normal Distribution

Most characteristics follow a normal distribution

For example: height, length, speed, etc.

One of the assumptions of the ANOVA test is that the sample data is ‘normally distributed.’

Sample Distribution Approaches Normal Distribution With Sample Size

[Three figures: as sample size increases, the sample frequency distribution looks more and more like the population's normal distribution.]


ANOVA – Analysis of Variance

Calculate a SOS based on an overall mean (Total SOS).

Trtmnt   Replicate   Length   Overall Mean   SOS Total
Pond     1           34       57.7            561.69
Pond     2           78       57.7            412.09
Pond     3           48       57.7             94.09
Pond     4           24       57.7           1135.69
Pond     5           64       57.7             39.69
Pond     6           58       57.7              0.09
Pond     7           34       57.7            561.69
Pond     8           66       57.7             68.89
Pond     9           22       57.7           1274.49
Pond     10          44       57.7            187.69
Lake     1           38       57.7            388.09
Lake     2           82       57.7            590.49
Lake     3           58       57.7              0.09
Lake     4           76       57.7            334.89
Lake     5           60       57.7              5.29
Lake     6           70       57.7            151.29
Lake     7           99       57.7           1705.69
Lake     8           40       57.7            313.29
Lake     9           68       57.7            106.09
Lake     10          91       57.7           1108.89
                                   Sum =     9040.2

This provides a measure of the overall variance (Total SOS).

Calculate a SOS for each treatment (Treatment or Error SOS).

Trtmnt   Replicate   Length   Trtmnt Mean   SOS Error
Pond     1           34       47.2           174.24
Pond     2           78       47.2           948.64
Pond     3           48       47.2             0.64
Pond     4           24       47.2           538.24
Pond     5           64       47.2           282.24
Pond     6           58       47.2           116.64
Pond     7           34       47.2           174.24
Pond     8           66       47.2           353.44
Pond     9           22       47.2           635.04
Pond     10          44       47.2            10.24
Lake     1           38       68.2           912.04
Lake     2           82       68.2           190.44
Lake     3           58       68.2           104.04
Lake     4           76       68.2            60.84
Lake     5           60       68.2            67.24
Lake     6           70       68.2             3.24
Lake     7           99       68.2           948.64
Lake     8           40       68.2           795.24
Lake     9           68       68.2             0.04
Lake     10          91       68.2           519.84
                                  Sum =     6835.2

This provides a measure of the reduction of variance by measuring each treatment separately (Error SOS).

What happens to Error SOS when the variability within each treatment decreases?

Calculate a SOS for each predicted value vs. the overall mean (Model SOS).

[Figure: predicted pond mean (47.2), predicted lake mean (68.2), and the overall average (57.7).]

Trtmnt   Trtmnt Mean   Overall Mean   SOS Model (each of 10 replicates)
Pond     47.2          57.7           (47.2 – 57.7)² = 110.25
Lake     68.2          57.7           (68.2 – 57.7)² = 110.25

Sum = 20 × 110.25 = 2205

This provides a measure of the distance between the mean values (Model SOS). What happens to Model SOS when the two means are close together?

What if the means are equal?

Detecting a Difference Between Treatments

Model SOS gives us an index of how far apart the two means are from each other.

Bigger Model SOS = farther apart

Error SOS gives us an index of how scattered the data is for each treatment.

More variability = larger Error SOS = more possible overlap between treatments

Magic of the F-test

The ratio of Model SOS to Error SOS (Model SOS divided by Error SOS) gives us an overall index (the F statistic ) used to indicate the relative ‘distance’ and ‘overlap’ between two means.

A large Model SOS and small Error SOS = a large F statistic . Why does this indicate a significant difference?

A small Model SOS and a large Error SOS = a small F statistic . Why does this indicate no significant difference??

Based on sample size and alpha level (P-value), each F statistic has an associated P-value.

– P < 0.05 (large F statistic): there is a significant difference between the means.

– P ≥ 0.05 (small F statistic): there is NO significant difference.
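The whole ANOVA decomposition above can be sketched in a few lines of Python (stdlib only). For two groups, F is the square of the two-sample t statistic (5.81 ≈ 2.41²):

```python
pond = [34, 78, 48, 24, 64, 58, 34, 66, 22, 44]
lake = [38, 82, 58, 76, 60, 70, 99, 40, 68, 91]
both = pond + lake

overall = sum(both) / len(both)                       # 57.7
total_sos = sum((x - overall) ** 2 for x in both)     # 9040.2

# Error SOS: squared deviations from each treatment's own mean
error_sos = 0.0
for grp in (pond, lake):
    m = sum(grp) / len(grp)
    error_sos += sum((x - m) ** 2 for x in grp)       # 6835.2

model_sos = total_sos - error_sos                     # 2205

# Mean squares: model df = 2 - 1 = 1, error df = 20 - 2 = 18
f_stat = (model_sos / 1) / (error_sos / 18)           # ≈ 5.81
```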

Showing Results

[Figure: bar chart of three treatment means. Bars labeled with the same letter (A, B, A) are not significantly different from each other.]

Regression

• For the purposes of this class: – Does Y depend on X ?

– Does a change in X cause a change in Y ?

– Can Y be predicted from X ?

• Y = m X + b

[Figure: actual values plotted against the independent value, with the predicted values falling on the regression line.]

When analyzing a regression-type data set, the first step is to plot the data:

X: 35, 45, 55, 65, 75 (average = 55)
Y: 114, 120, 150, 140, 166 (average = 138)

[Figure: scatter plot of Y against the independent value X.]

The next step is to determine the line that 'best fits' these points. It appears this line would be sloped upward and linear (straight).

The line of best fit is the sample regression of Y on X , and its position is fixed by two results :

Y = 1.24(X) + 69.8   (slope m = 1.24, rise/run; Y-intercept b = 69.8)

[Figure: the regression line through the data, passing through the point (55, 138).]

1) The regression line passes through the point (X avg , Y avg ).

2) Its slope is at the rate of “m” units of Y per unit of X, where m = regression coefficient (slope; y=mx+b )
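Both results can be checked by computing the least-squares fit directly (a stdlib sketch; variable names are my own):

```python
xs = [35, 45, 55, 65, 75]
ys = [114, 120, 150, 140, 166]

x_avg = sum(xs) / len(xs)     # 55
y_avg = sum(ys) / len(ys)     # 138  -> the line passes through (55, 138)

# Least-squares slope and intercept for y = m*x + b
m = (sum((x - x_avg) * (y - y_avg) for x, y in zip(xs, ys))
     / sum((x - x_avg) ** 2 for x in xs))   # 1.24
b = y_avg - m * x_avg                       # 69.8
```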

Testing the Regression Line for Significance

• An F-test is used based on Model, Error, and Total SOS.
  – Very similar to ANOVA.
• Basically, we are testing whether the regression line has a significantly different slope than a line formed by using just Y_avg.
  – If there is no difference, then Y does not change as X changes (it stays around the average value).
• To begin, we must first find the regression line that has the smallest Error SOS.

Error SOS

The regression line should pass through the overall average with the slope that gives the smallest Error SOS (the summed squared distance between each point and the predicted line, an index of the variability of the data points around that line).

[Figure: candidate regression lines pivoting on the overall average point (55, 138); the best fit is the slope with the smallest Error SOS.]

For each X, we can predict Y: Y = 1.24(X) + 69.8

Error SOS is calculated as the sum of (Y Actual – Y Predicted ) 2 This gives us an index of how scattered the actual observations are around the predicted line. The more scattered the points, the larger the Error SOS will be. This is like analysis of variance, except we are using the predicted line instead of the mean value.

X     Y_Actual   Y_Pred   SOS Error
35    114        113.2      0.64
45    120        125.6     31.36
55    150        138       144
65    140        150.4    108.16
75    166        162.8     10.24
                  Sum =   294.4

Total SOS

• Calculated as the sum of (Y – Y_avg)².
• Gives us an index of how scattered our data set is around the overall Y average.

[Figure: the data points scattered around the overall Y average (regression line not shown).]

Total SOS gives us an index of how scattered the data points are around the overall average. This is calculated the same way as for a single treatment in ANOVA.

X     Y_Actual   Y Average   SOS Total
35    114        138         576
45    120        138         324
55    150        138         144
65    140        138           4
75    166        138         784
                  Sum =     1832

What happens to Total SOS when all of the points are close to the overall average? What happens when the points form a non-horizontal linear trend?

Model SOS

• Calculated as the sum of (Y_Predicted – Y_avg)².
• Gives us an index of how far all of the predicted values are from the overall average.

[Figure: the distance between each predicted Y on the regression line and the overall mean.]

Model SOS

• Gives us an index of how far away the predicted values are from the overall average value.

X     Y_Pred   Y Average   SOS Model
35    113.2    138         615.04
45    125.6    138         153.76
55    138      138           0
65    150.4    138         153.76
75    162.8    138         615.04

What happens to Model SOS when the predicted values are close to the average value?

All Together Now!!

X     Y_Actual   Y_Pred   Y_Avg   SOS Error   SOS Total   SOS Model
35    114        113.2    138       0.64       576         615.04
45    120        125.6    138      31.36       324         153.76
55    150        138      138     144          144           0
65    140        150.4    138     108.16         4         153.76
75    166        162.8    138      10.24       784         615.04
                           Sum =  294.4       1832        1537.6

SOS Error = Σ(Y_Actual – Y_Pred)²
SOS Total = Σ(Y_Actual – Y_Avg)²
SOS Model = Σ(Y_Pred – Y_Avg)²
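The three sums of squares can be computed together in Python (stdlib sketch), which also confirms the decomposition Total SOS = Error SOS + Model SOS:

```python
xs = [35, 45, 55, 65, 75]
ys = [114, 120, 150, 140, 166]
y_avg = sum(ys) / len(ys)                       # 138

# Predicted Y from the fitted line Y = 1.24(X) + 69.8
pred = [1.24 * x + 69.8 for x in xs]

sos_error = sum((ya - yp) ** 2 for ya, yp in zip(ys, pred))   # 294.4
sos_total = sum((ya - y_avg) ** 2 for ya in ys)               # 1832
sos_model = sum((yp - y_avg) ** 2 for yp in pred)             # 1537.6

# For a least-squares line: Total SOS = Error SOS + Model SOS
```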

Using SOS to Assess Regression Line

• Model SOS gives us an index of how 'different' the predicted values are from the average values.
  – Bigger Model SOS = more different.
  – Tells us how different a sloped line is from a line made up only of Y_avg.
  – Remember, the regression line will pass through the overall average point.
• Error SOS gives us an index of how different the predicted values are from the actual values.
  – More variability = larger Error SOS = larger distances between predicted and actual values.

Magic of the F-test

• The ratio of Model SOS to Error SOS (Model SOS divided by Error SOS) gives us an overall index (the F statistic) used to indicate the relative 'difference' between the regression line and a line with slope of zero (all values = Y_avg).

– A large Model SOS and small Error SOS = a large F statistic . Why does this indicate a significant difference?

– A small Model SOS and a large Error SOS = a small F statistic . Why does this indicate no significant difference??

• Based on sample size and alpha level (P-value), each F statistic has an associated P-value.

– P < 0.05 (large F statistic): there is a significant difference between the regression line and the Y_avg line.

– P ≥ 0.05 (small F statistic): there is NO significant difference between the regression line and the Y_avg line.

F = Mean Model SOS / Mean Error SOS

Basically, this is an index that tells us how different the regression line is from Y_avg and how scattered the data are around the predicted line.

[Figures: the regression line vs. the horizontal Y_avg line, each plotted against the independent value.]

Correlation (r): Another measure of the mutual linear relationship between two variables.

• 'r' is a pure number without units or dimensions.
• 'r' is always between –1 and 1.
• Positive values indicate that y increases when x does; negative values indicate that y decreases when x increases.

– What does r = 0 mean?

• ‘r’ is a measure of intensity of association observed between x and y.

– ‘r’ does not predict – only describes associations between variables

[Figures: three scatter plots against the independent variable, showing r > 0, r = 0, and r < 0.]

r is also called Pearson's correlation coefficient.

R-square

• If we square r, we get rid of the negative value (if it is negative), and we get an index of how close the data points are to the regression line.

• Allows us to decide how much confidence we have in making a prediction based on our model.

• Is calculated as Model SOS / Total SOS

r² = Model SOS / Total SOS
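This identity can be verified by computing Pearson's r from its usual sum-of-products formula and squaring it (stdlib sketch): r² comes out equal to Model SOS / Total SOS = 1537.6 / 1832.

```python
import math

xs = [35, 45, 55, 65, 75]
ys = [114, 120, 150, 140, 166]
x_avg, y_avg = sum(xs) / len(xs), sum(ys) / len(ys)

# Pearson's correlation coefficient: r = Sxy / sqrt(Sxx * Syy)
sxy = sum((x - x_avg) * (y - y_avg) for x, y in zip(xs, ys))
sxx = sum((x - x_avg) ** 2 for x in xs)
syy = sum((y - y_avg) ** 2 for y in ys)
r = sxy / math.sqrt(sxx * syy)       # ≈ 0.916

r_squared = r ** 2                   # ≈ 0.8393, same as Model SOS / Total SOS
```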

[Figures: the regression plotted twice, once highlighting the Model SOS distances and once the Total SOS distances. For this data set, R² = 0.8393.]

r² = Model SOS / Total SOS is a numerator over a denominator: a small numerator (Model SOS) and a big denominator (Total SOS) give a small r².

R-square and Prediction

[Figures: four scatter plots with R² = 0.0144, 0.5537, 0.7605, and 0.9683. As R² increases, the points lie closer to the regression line, and our confidence in predictions from the model increases.]

Finally…

• If we have a significant relationship (based on the p-value), we can use the r-square value to judge how sure we are in making a prediction.