
Chapter 3

Test Worthiness


Test Worthiness

Four cornerstones of test worthiness:
- Validity
- Reliability
- Practicality
- Cross-cultural fairness
But first, we must learn one statistical concept: the correlation coefficient.

Correlation Coefficient

Correlation: a statistical expression of the relationship between two sets of scores (or variables).
- Positive correlation: an increase in one variable is accompanied by an increase in the other. A "direct relationship" - the variables move in the same direction.
- Negative correlation: an increase in one variable is accompanied by a decrease in the other. An "inverse relationship" - the variables move in opposite directions.

Examples of Correlation Relationships

What is the relationship between:
- Gasoline prices and grocery prices?
- Grocery prices and good weather?
- Stress and depression?
- Depression and job productivity?
- Partying and grades?
- Study time and grades?


Correlation (Cont’d)

Correlation coefficient (r): a number between -1 and +1 that indicates the direction and strength of the relationship.
- As r approaches +1, strength increases in a direct (positive) relationship.
- As r approaches -1, strength increases in an inverse (negative) relationship.
- As r approaches 0, the relationship is weak or nonexistent (at zero).
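As a concrete illustration, here is a minimal sketch of computing r directly from its definition. The study-time and grade numbers are hypothetical, not from the slides:

```python
import math

def pearson_r(x, y):
    """Pearson r: the covariance of x and y divided by the
    product of their standard deviations."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    ss_x = sum((a - mean_x) ** 2 for a in x)
    ss_y = sum((b - mean_y) ** 2 for b in y)
    return cov / math.sqrt(ss_x * ss_y)

# Hypothetical data: more study time tends to go with higher grades,
# so r should be close to +1 (a direct relationship).
study_hours = [2, 5, 1, 8, 4, 6]
grades = [71, 80, 65, 92, 78, 88]
print(round(pearson_r(study_hours, grades), 2))
```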

Correlation (cont)

Correlation coefficient r: strength ranges (from -1 through 0 to +1)

Direct (positive)    Inverse (negative)    Strength
0 to +.3             0 to -.3              weak
+.4 to +.6           -.4 to -.6            medium
+.7 to +1.0          -.7 to -1.0           strong
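The slide's rule of thumb can be expressed as a small helper function; this is a sketch, using cutoffs at .4 and .7 to cover the gaps in the slide's ranges:

```python
def strength(r):
    """Classify a correlation coefficient using the slide's rule of thumb.
    The sign gives direction; the absolute value gives strength."""
    a = abs(r)
    if a >= 0.7:
        label = "strong"
    elif a >= 0.4:
        label = "medium"
    else:
        label = "weak"
    direction = "direct" if r > 0 else "inverse" if r < 0 else "none"
    return label, direction

print(strength(0.82))   # ('strong', 'direct')
print(strength(-0.35))  # ('weak', 'inverse')
```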

Correlation Examples

Person   SAT score   Coll. GPA   Missed Classes
1        930         3.0         3
2        750         2.9         5
3        1110        3.8         2
4        625         2.1         8
5        885         3.3         1
6        950         2.6         6
7        605         2.8         3
8        810         3.2         1
9        1045        3.0         3
10       910         3.5         0

SAT score and GPA: r = .35
Missed classes and GPA: r = -.67
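If you want to verify the direction of these relationships yourself, a quick numpy check on the transcribed values looks like this (the exact r values you get may differ from the slide's reported figures, which may rest on a larger data set):

```python
import numpy as np

sat = [930, 750, 1110, 625, 885, 950, 605, 810, 1045, 910]
gpa = [3.0, 2.9, 3.8, 2.1, 3.3, 2.6, 2.8, 3.2, 3.0, 3.5]
missed = [3, 5, 2, 8, 1, 6, 3, 1, 3, 0]

# corrcoef returns a 2x2 matrix; entry [0, 1] is the correlation of interest.
print(np.corrcoef(sat, gpa)[0, 1])     # positive: a direct relationship
print(np.corrcoef(missed, gpa)[0, 1])  # negative: an inverse relationship
```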

Correlation Scatterplots

How do we plot two sets of scores from the previous examples on a graph?
- Place person A's SAT score on the x-axis and his/her GPA on the y-axis.
- Continue this for persons B, C, D, etc.
This process forms a "scatterplot."
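A minimal matplotlib sketch of that process, using the SAT/GPA pairs from the earlier table:

```python
import matplotlib.pyplot as plt

sat = [930, 750, 1110, 625, 885, 950, 605, 810, 1045, 910]
gpa = [3.0, 2.9, 3.8, 2.1, 3.3, 2.6, 2.8, 3.2, 3.0, 3.5]

plt.scatter(sat, gpa)   # one point per person: (SAT on x, GPA on y)
plt.xlabel("SAT")
plt.ylabel("GPA")
plt.title("SAT & GPA Correlation")
plt.show()
```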

Examples of Scatterplots

[Two scatterplots of the earlier data: "SAT & GPA Correlation" (x-axis: SAT, 0 to 1200; y-axis: GPA, 0.0 to 4.0) and "Missed Classes & GPA" (x-axis: Missed classes, 0 to 9; y-axis: GPA, 0.0 to 4.0).]

Scatterplots (cont)

What correlation (r) do you think this graph has?

How about this correlation?

More Scatterplots

What might this correlation be?

This correlation?


More Scatterplots

This correlation?

Last one.

Coefficient of Determination (Shared Variance)

The square of the correlation (if r = .80, then r² = .64).
A statement about the factors that underlie the two variables and account for their relationship.
Example: the correlation between depression and anxiety = .85.
- Shared variance = .72.
- What factors might underlie both depression and anxiety?
[Venn diagram: shared trait variance between Depression and Anxiety.]
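The arithmetic behind the slide's numbers, as a short sketch:

```python
# Coefficient of determination: squaring r gives the proportion of
# variance the two variables share.
r = 0.85                 # correlation between depression and anxiety
shared = r ** 2          # 0.7225, i.e., about 72% shared variance
print(round(shared, 2))  # 0.72, matching the slide
```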

Validity

What is validity?

- The degree to which all accumulated evidence supports the intended interpretation of test scores for the intended purpose.
- Lay definition: does a test measure what it is supposed to measure?
Validity is a unitary concept; however, there are three general types of validity evidence:
1. Content validity
2. Criterion-related validity
3. Construct validity

Content Validity

Is the content of the test valid for the kind of test it is?
- Developers must show evidence that the domain was systematically analyzed and that the concepts are covered in the correct proportion.
Four-step process:
- Step 1: Survey the domain.
- Step 2: Content of the test matches the domain.
- Step 3: Specific test items match the content.
- Step 4: Analyze the relative importance of each objective (weighting).

Content Validity (cont)

[Diagram of the four steps: Step 1: Survey the domain → Step 2: Content matches domain → Step 3: Test items reflect content → Step 4: Items adjusted for relative importance (Item 1 x 3, Item 2 x 2, Item 3 x 1, Item 4 x 2, Item 5 x 2.5).]

Content Validity (cont)

Face validity:
- Not a real type of content validity.
- A quick look at the "face" value of the questions.
- Sometimes questions may not "seem" to measure the content, but do (e.g., the panic disorder example in the book, p. 48).
How might you show content validity for an instrument that measures depression?


Criterion-Related Validity: Concurrent and Predictive Validity

Criterion-related validity: the relationship between the test and a criterion the test should be related to.
Two types:
- Concurrent validity: does the instrument relate to another criterion "now" (in the present)?
- Predictive validity: does the instrument relate to another criterion in the future?

Criterion-Related Validity: Concurrent Validity

Example 1:
- 100 clients take the BDI.
- Correlate their scores with clinicians' ratings of depression for the same group of clients.
Example 2:
- 500 people take a test of alcoholism tendency.
- Correlate their scores with how significant others rate the amount of alcohol they drink.

Criterion-Related Validity: Predictive Validity

Examples:
- SAT scores correlated with how well students do in college.
- ASVAB scores correlated with success at jobs.
- GREs correlated with success in graduate school. (See Table 3.1, p. 50.)
Do Exercise 3.2, p. 49.

Construct Validity

Construct validity:
- The extent to which the instrument measures a theoretical or hypothetical trait.
- Many counseling and psychological constructs are complex, ambiguous, and not easily agreed upon:
  - Intelligence
  - Self-esteem
  - Empathy
  - Other personality characteristics

Construct Validity (cont)

Four methods of gathering evidence for construct validity:
1. Experimental design
2. Factor analysis
3. Convergence with other instruments
4. Discrimination with other measures

Construct Validity: Experimental Design

Creating hypotheses and research studies that show the instrument captures the correct concept.
Example:
- Hypothesis: The "Blank" depression test will discriminate between clinically depressed clients and "normals."
- Method:
  - Identify 100 clinically depressed clients.
  - Identify 100 "normals."
  - Show statistical analysis.
Second example: clinicians measure their depressed clients before, then after, 6 months of treatment.

Construct Validity: Factor Analysis

Factor analysis:
- Statistical relationship between the subscales of a test.
- How similar or different are the subscales?
Example:
- Develop a depression test that has three subscales: self-esteem, suicidal ideation, hopelessness.
- Correlate the subscales (see the sketch after this list):
  - Self-esteem and suicidal ideation: .35
  - Self-esteem and hopelessness: .25
  - Hopelessness and suicidal ideation: .82
- What implications might the above scores have for this test?
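A sketch of how such subscale intercorrelations might be computed from raw data. The per-person scores below are hypothetical, and a full factor analysis would go further (extracting factors rather than just inspecting correlations):

```python
import numpy as np

# Rows = people, columns = subscale totals:
# [self-esteem, suicidal ideation, hopelessness] (hypothetical scores)
scores = np.array([
    [12, 3, 4],
    [18, 9, 11],
    [9, 2, 3],
    [15, 8, 9],
    [11, 5, 6],
], dtype=float)

# rowvar=False treats each column as a variable; the result is the
# 3x3 matrix of subscale intercorrelations.
print(np.corrcoef(scores, rowvar=False).round(2))
```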

Construct Validity: Convergent Validity

Convergent evidence: comparing test scores to other, well-established tests.
Example:
- Correlate a new depression test against the BDI.
- Is there a good correlation between the two?
- Implications if the correlation is extremely high? (e.g., .96)
- Implications if the correlation is extremely low? (e.g., .21)

Construct Validity: Discriminant Validity

Discriminant evidence:
- Correlate test scores with other tests that are different.
- Hope to find a meager correlation.
Example:
- Compare a new depression test with an anxiety test.
- Implications if the correlation is extremely high? (e.g., .96)
- Implications if the correlation is extremely low? (e.g., .21)

Validity Recap

Three types of validity:
- Content
- Criterion
  - Concurrent
  - Predictive
- Construct
  - Experimental
  - Factor analysis
  - Convergent
  - Discriminant

Reliability

Accuracy or consistency of test scores.

Would you score the same if you took the test over, and over, and over again?

Reported as a reliability (correlation) coefficient. The closer to r = 1.0, the less error in the test.

Three Ways of Determining Reliability

1. Test-retest
2. Alternate, parallel, or equivalent forms
3. Internal consistency
   a. Coefficient alpha
   b. Kuder-Richardson
   c. Split-half or odd-even

Test-Retest Reliability

Give the test twice to the same group of people.
- E.g., take the first test in this class and, very soon after, take it again. Are the scores about the same?

Person:     1    2    3    4    5    others...
1st test:   35   42   43   34   38
2nd test:   36   44   41   34   37

Graphic: A ... (time) ... A
Problem: a person can look up answers between the 1st testing and the second.
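Using the five pairs of scores above, the test-retest reliability estimate is just their correlation; a quick sketch:

```python
import numpy as np

first = [35, 42, 43, 34, 38]   # scores from the 1st administration
second = [36, 44, 41, 34, 37]  # scores from the 2nd administration
# A high r means people kept roughly the same standing across time.
print(np.corrcoef(first, second)[0, 1])  # about .92 for these values
```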

Alternate, Parallel, or Equivalent Forms Reliability

Have two forms of the same test.
Give students the two forms at the same time.
Correlate scores on the first form with scores on the second form.
Graphic: A → B
Problem: are two "equivalent" forms ever really equivalent?


Internal Consistency Reliability

How do individual items relate to each other and to the test as a whole?
Internal consistency reliability goes "within" the test rather than using multiple administrations.
High-speed computers and the need for only one test administration have made internal consistency popular.
Three types:
- Split-half or odd-even
- Cronbach's coefficient alpha
- Kuder-Richardson

Split-half or Odd-even Reliability

Correlate one half of the test with the other half for all who took the test.
Example:
- Person 1 scores 16 on the first half of the test and 16 on the second half.
- Person 2 scores 14 on the first half and 18 on the second half.
- Also get scores for persons 3, 4, 5, etc.
- Correlate all persons' scores on the first half with their scores on the second half.
- The correlation = the reliability estimate.
- Use the Spearman-Brown formula to control for the shortness of the test.

Split-half or Odd-even Reliability (Internal Consistency)

Example continued:

Person   Score on 1st half   Score on 2nd half
1        16                  16
2        14                  18
3        12                  20
4        15                  17
And so forth...

Problem: are any two halves really equivalent?
Graphic: A
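In practice, the half-test correlation is "stepped up" with the Spearman-Brown formula, r_full = 2 × r_half / (1 + r_half), because each half is only half as long as the real test. A sketch with hypothetical half-scores:

```python
import numpy as np

first_half = [16, 14, 12, 15, 18]   # hypothetical half-test scores
second_half = [15, 13, 14, 16, 17]

r_half = np.corrcoef(first_half, second_half)[0, 1]  # about .78
r_full = 2 * r_half / (1 + r_half)                   # Spearman-Brown: about .87
print(round(r_half, 2), round(r_full, 2))
```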

Cronbach's Alpha and Kuder-Richardson (Internal Consistency)

Other types of internal consistency:
- Average correlation of all of the possible split-half reliabilities.
Two popular types:
- Cronbach's alpha
- Kuder-Richardson (KR-20, KR-21)
Graphic: A
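A sketch of Cronbach's alpha computed from its standard formula, alpha = k/(k-1) × (1 - sum of item variances / variance of total scores), using a hypothetical 4-item scale:

```python
import numpy as np

# Rows = people, columns = items (hypothetical 4-item scale).
items = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 4, 5, 4],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
], dtype=float)

k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)      # variance of each item
total_var = items.sum(axis=1).var(ddof=1)  # variance of the total scores
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(round(alpha, 2))  # about .94: these items hang together well
```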

Cross-Cultural Fairness

Issues of bias in testing did not get much attention until the civil rights movement of the 1960s.
A series of court decisions established that it was unfair to use tests to track students in schools.
- Black and Hispanic students were being unfairly compared to whites, not to their own norm group.


Cross-Cultural Fairness

Americans with Disabilities Act:
- Accommodations must be made for individuals taking tests for employment.
- Tests must be shown to be relevant to the job in question.
Buckley Amendment (FERPA):
- Right to access school records, including test records.
- Parents have the right to their child's records.

Cross-Cultural Fairness

Carl Perkins Act:
- Individuals with a disability have the right to vocational assessment, counseling, and placement.
Civil Rights Acts:
- A series of laws concerned with tests used in employment and promotion.
Freedom of Information Act:
- Assures access to federal records, including test records.
- Most states have expanded this law so that it also applies to state records.

Cross-Cultural Fairness

Griggs v. Duke Power Company:
- Tests for hiring and advancement must show the ability to predict job performance.
- Example: you can't give a test measuring intelligence to those who want a job as a road worker.
IDEIA and PL 94-142:
- Assure the right of students (ages 2-21) suspected of having a learning disability to be tested at the school's expense.
- Child study teams and IEPs are set up when necessary.

Cross-Cultural Fairness

Section 504 of the Rehabilitation Act:
- Relative to assessment, any instrument used to measure appropriateness for a program or service must measure the individual's ability, not be a reflection of his or her disability.
The Use of Intelligence Tests with Minorities: Confusion and Bedlam (see Insert 3.1, p. 58).

Disparities in Ability

Cognitive differences between people exist; however, they are clouded over by issues of SES, prejudice, stereotyping, etc.: are there real differences?
Why do differences exist, and what can be done to eliminate them?
Often seen as environmental (cf. No Child Left Behind).
Exercise 3.4, p. 58: why might there be differences among cultural groups in their ability scores?


Practicality

Several practical concerns:
- Time
- Cost
- Format (clarity of print, print size, sequencing of questions, and types of questions)
- Readability
- Ease of administration, scoring, and interpretation

Selecting & Administering Tests

Five steps:
1. Determine your client's goals.
2. Choose instruments to reach client goals.
3. Access information about possible instruments:
   - Source books, e.g., Buros Mental Measurement Yearbook and Tests in Print
   - Publisher resource catalogs
   - Journals in the field
   - Books on testing
   - Experts
   - The internet
4. Examine validity, reliability, cross-cultural fairness, and practicality.
5. Make a wise choice.