Reading Assessments for Elementary Schools


Tracey E. Hall, Center for Applied Special Technology; Marley W. Watkins, Pennsylvania State University; Frank C. Worrell, University of California, Berkeley

REVIEW: Major Concepts

• Nomothetic and Idiographic
• Samples
• Norms
• Standardized Administration
• Reliability
• Validity

Nomothetic

• Relating to the abstract, the universal, the general.

• Nomothetic assessment focuses on the group as a unit.

• Refers to finding principles that are applicable on a broad level.

• For example, boys report higher math self-concepts than girls; girls report more depressive symptoms than boys.

Idiographic

• Relating to the concrete, the individual, the unique.

• Idiographic assessment focuses on the individual student.

• For example: What type of phonemic awareness skills does Joe possess?

Populations and Samples I

• A population consists of all the representatives of a particular domain that you are interested in.

• The domain could be people, behavior, or curriculum (e.g., reading, math, spelling, ...).

Populations and Samples II

• A sample is a subgroup that you actually draw from the population of interest.

• Ideally, you want your sample to represent your population – people polled or examined, test content, manifestations of behavior.

Samples

• A random sample is one in which each member of the population had an equal and independent chance of being selected.

• Random samples are important because the idea is to have a sample that represents the population fairly; an unbiased sample.

• A sample can be used to represent the population.

• A probability sample is one in which elements are drawn according to some known probability structure.

• Probability samples are typically used in conjunction with subgroups (e.g., ethnicity, socioeconomic status, gender).
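The sampling ideas above can be sketched with Python's standard-library random module; the toy population, the gender subgroup labels, and the sample sizes here are invented for illustration:

```python
import random

# Sketch of simple random vs. stratified probability sampling; the
# population, subgroup labels, and sample sizes are invented examples.
random.seed(42)  # fixed seed so the illustration is reproducible

# Toy population: 1,000 students, half tagged "F" and half "M".
population = [{"id": i, "gender": "F" if i % 2 == 0 else "M"}
              for i in range(1000)]

# Simple random sample: every member has an equal, independent chance.
simple_sample = random.sample(population, 100)

# Stratified probability sample: draw from each subgroup in proportion
# to its share of the population, so the sample mirrors that structure.
females = [s for s in population if s["gender"] == "F"]
males = [s for s in population if s["gender"] == "M"]
stratified_sample = random.sample(females, 50) + random.sample(males, 50)

print(len(simple_sample), len(stratified_sample))  # 100 100
```

The stratified draw guarantees the 50/50 subgroup split in the sample, whereas the simple random sample only approximates it.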

Norms I

• Norms are examples of how the “average” individual performs.

• Many of the tests and rating scales that are used to compare children in the US are norm-referenced.

– An individual child’s performance is compared to the norms established using a representative sample.

Norms II

• For the score on a normed instrument to be valid, the person being assessed must belong to the population for which the test was normed.

• If we wish to apply the test to another group of people, we need to establish norms for the new group.

Norms III

• To create new norms, we need to do a number of things:
– Get a representative sample of the new population.
– Administer the instrument to the sample in a standardized fashion.
– Examine the reliability and validity of the instrument with that new sample.
– Determine how we are going to report scores and create the appropriate tables.
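The last step – turning sample scores into reportable tables – can be sketched as a raw-score-to-percentile-rank lookup; the sample scores below are invented for illustration:

```python
def build_norms_table(sample_scores):
    """Turn a standardization sample into a raw-score -> percentile-rank
    lookup table (percentage of the sample scoring at or below each raw
    score) - a sketch of the 'create the appropriate tables' step."""
    n = len(sample_scores)
    return {raw: 100 * sum(1 for s in sample_scores if s <= raw) / n
            for raw in sorted(set(sample_scores))}

# Invented sample of 10 raw scores from the new population.
sample = [12, 15, 15, 18, 20, 21, 21, 21, 24, 30]
norms = build_norms_table(sample)
print(norms[21])  # 80.0 -> a raw score of 21 is at the 80th percentile
```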

Standardized Administration

• All measurement has error.

• Standardized administration is one way to reduce error due to examiner/clinician effects.

• For example, consider the same question asked with different facial expressions and tones:
– Please define a noun for me :-)
– DEFINE a noun if you can? :-(

Normal Curve

• Many distributions of human traits form a normal curve.

• Most cases cluster near the middle, with fewer individuals at the extremes; the curve is symmetrical.

• We know how the population is distributed based on the normal curve.
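The familiar normal-curve proportions can be checked with the standard library's error function; this is a small verification sketch, not part of the slides:

```python
import math

def proportion_within(k):
    """Proportion of a normal distribution lying within +/-k standard
    deviations of the mean, from the Gaussian CDF (via math.erf)."""
    return math.erf(k / math.sqrt(2))

for k in (1, 2, 3):
    print(f"within +/-{k} SD: {proportion_within(k):.2%}")
# within +/-1 SD: 68.27%; +/-2 SD: 95.45%; +/-3 SD: 99.73%
```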

Ways of Reporting Scores

• Mean and standard deviation
• Distribution of scores – 68.26% within ±1 SD; 95.44% within ±2 SD; 99.72% within ±3 SD
• Stanines (1, 2, 3, 4, 5, 6, 7, 8, 9)
• Standard scores – linear transformations of scores, but easier to interpret
• Percentile ranks*
• Box and whisker plots*
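Two of these score transformations can be sketched directly: standard scores as a linear transformation of z-scores, and stanines as half-SD-wide bands. The mean-100/SD-15 standard-score scale is one common convention, assumed here:

```python
import math

def standard_score(raw, mean, sd, new_mean=100, new_sd=15):
    """Linear transformation of a raw score onto a standard-score scale.
    The mean-100/SD-15 default is one common convention, assumed here."""
    z = (raw - mean) / sd
    return new_mean + new_sd * z

def stanine(raw, mean, sd):
    """Map a raw score to a stanine (1-9): half-SD-wide bands, with
    stanine 5 centred on the mean and the extremes clipped to 1 and 9."""
    z = (raw - mean) / sd
    return max(1, min(9, math.floor(z * 2 + 5.5)))

print(standard_score(30, mean=25, sd=5))  # z = +1 -> 115.0
print(stanine(30, mean=25, sd=5))         # z = +1 -> stanine 7
```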

Percentiles

• A way of reporting where a person falls on a distribution.

• The percentile rank of a score tells you what percentage of people obtained a score equal to or lower than that score.

• Box and whisker plots are visual displays or graphic representations of the shape of a distribution using percentiles.

[Figure: Explanation of the Box Plot – Grade 2 students. A box plot is a picture of the distribution of scores on a measure; the figure marks the 10th, 25th, 50th, 75th, and 90th percentiles and individual outliers.]
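The percentile landmarks a box-and-whisker plot displays can be computed with statistics.quantiles; the grade-2 scores below are invented for illustration:

```python
from statistics import quantiles

def five_number_summary(scores):
    """The percentile landmarks a box-and-whisker plot displays: the
    10th, 25th, 50th, 75th, and 90th percentiles of a distribution."""
    deciles = quantiles(scores, n=10)      # nine cut points, P10..P90
    q1, median, q3 = quantiles(scores, n=4)
    return {"P10": deciles[0], "P25": q1, "P50": median,
            "P75": q3, "P90": deciles[-1]}

grade2 = [4, 6, 7, 8, 9, 10, 10, 11, 12, 15]  # invented grade-2 scores
print(five_number_summary(grade2))
```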

Correlation

• We need to understand the correlation coefficient to understand the manual.

• The correlation coefficient, r, quantifies the relationship between two sets of scores.

• A correlation coefficient can range from -1 to +1.
– Zero means the two sets of scores are not related.
– One means the two sets of scores are identical (a perfect correlation).

Correlation 2

• Correlations can be positive or negative.

• A positive correlation tells us that as one set of scores increases, the second set of scores also increases.

• A negative correlation tells us that as one set of scores increases, the other set decreases. Think of some examples of variables with negative r's.

• The absolute value of a correlation indicates the strength of the relationship. Thus .55 is equal in strength to -.55.
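Computing r from scratch makes these properties concrete; the reading/TV variables below are invented examples of positive and negative correlations:

```python
from statistics import mean, stdev

def pearson_r(x, y):
    """Pearson correlation coefficient r between two lists of scores:
    covariance divided by the product of the standard deviations."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

# Invented variables: fluency rises with reading time (positive r),
# TV hours fall with reading time (negative r).
hours_reading = [1, 2, 3, 4, 5]
fluency = [10, 12, 15, 18, 20]
tv_hours = [9, 8, 6, 4, 2]
print(round(pearson_r(hours_reading, fluency), 2))   # close to +1
print(round(pearson_r(hours_reading, tv_hours), 2))  # close to -1
```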

How would you describe the correlations shown by these charts?

[Figure: two example charts plotting paired scores, for class discussion.]

Reliability

• Reliability addresses the stability, consistency, or reproducibility of scores.

– Internal consistency: split-half, Cronbach's alpha
– Test-retest
– Parallel/alternate forms
– Inter-rater
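One of the internal-consistency indices above, Cronbach's alpha, follows directly from its definition, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores); the item scores below are invented:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha from item-score rows (each inner list holds one
    item's scores across all examinees):
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(items)
    item_var_sum = sum(pvariance(item) for item in items)
    totals = [sum(examinee) for examinee in zip(*items)]
    return (k / (k - 1)) * (1 - item_var_sum / pvariance(totals))

# Invented data: 3 items answered by 5 examinees.
items = [[3, 4, 5, 5, 2],
         [2, 4, 4, 5, 1],
         [3, 5, 4, 4, 2]]
print(round(cronbach_alpha(items), 2))  # 0.94
```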

Validity

• Validity addresses the accuracy or truthfulness of scores. Are they measuring what we want them to?

– Content
– Criterion: concurrent
– Criterion: predictive
– Construct
– Face
– (Cash)

Content Validity

• Is the assessment tool representative of the domain (behavior, curriculum) being measured?

• An assessment tool is scrutinized for its (a) completeness or representativeness, (b) appropriateness, (c) format, and (d) bias – E.g., MSPAS

Criterion-related Validity

• What is the correlation between our instrument, scale, or test and another variable that measures the same thing, or measures something that is very close to ours?

• In concurrent validity, we compare scores on the instrument we are validating to scores on another variable that are obtained at the same time.

• In predictive validity, we compare scores on the instrument we are validating to scores on another variable that are obtained at some future time.

Construct Validity

• Overarching construct: Is the instrument measuring what it is supposed to?

– Dependent on reliability, content and criterion-related validity.

• We also look at some other types of validity sometimes:
– Convergent validity: r with a similar construct
– Discriminant validity: r with an unrelated construct
– Structural validity: What is the structure of the scores on this instrument?

Elementary Normative Sample

• Stratified by educational region.

• Males and females represented equally.

• School, class, and individuals chosen at random.

• Final sample consists of 700 students (50% female).


Table 1.1

Normative Sample by Educational Region

Division               Population N   Percent   Sample N   Percent
St. George West           31,948       24.1       160       22.9
St. George East           14,255       16.5       116       16.6
St. Andrew/St. David       3,859        6.3        44        6.3
Caroni                    10,913       15.0       105       15.0
Nariva/Mayaro              2,287        3.7        29        4.1
Victoria                  26,197       19.0       133       19.0
St. Patrick               12,711       11.0        80       11.4
Tobago                     3,059        4.3        33        4.7
Total                    105,229      100.0       700      100.0

Table 1.2

Normative Sample by Age

Age   Number   Percent
 4      18       2.6
 5      88      12.6
 6      96      13.7
 7      98      14.0
 8      98      14.0
 9      99      14.1
10      95      13.6
11      89      12.7
12      14       2.0
13       4       0.6
14       1       0.1

Table 1.3

Normative Sample by Gender

Gender   Number   Percent
Male       350      50.0
Female     350      50.0


Table 1.4

Normative Sample by Grade

Grade        Number   Percent
1st Year       100      14.3
2nd Year       100      14.3
Standard 1     100      14.3
Standard 2     100      14.3
Standard 3     100      14.3
Standard 4     100      14.3
Standard 5     100      14.3

Table 1.5

Normative Sample by Ethnic Background

Ethnic Background   Number   Percent
African               269      38.4
East Indian           257      36.7
Mixed                 149      21.3
Other                  24       3.4
Not Reported            1       0.1

Table 1.6

Normative Sample by Parental Education Level

                        Mothers              Fathers
Highest level        Number  Percent     Number  Percent
Primary                209     29.9        216     30.9
Form 3                  49      7.0         31      4.4
Form 5                 203     29.0        169     24.1
Form 6                  15      2.1         16      2.3
University               5      0.7         10      1.4
Unknown                219     31.3        258     36.9

Measures

• First and Second Year/Infants 1 and 2
– Mountain Shadows Phonemic Awareness Scale (MS PAS) – group administered
– Individual Phonemic Analysis

• Second Year/Infant 2 to Standard 5
– Oral Reading Fluency

• Standards 1 and 2
– The Cloze Procedure

Assessment Instruction Cycle

Initial Evaluation
• Archival assessment
• Diagnostic assessments
• Formal standardized measures

Assessment
• Determine starting point
• Analyze errors
• Monitor progress
• Modify instruction

Instructional Design
• Determine content
• Select language of instruction
• Select examples
• Schedule scope and sequence
• Provide for cumulative review

Instructional Delivery
• Secure student attention
• Pace instruction appropriately
• Monitor student performance
• Provide feedback

Madigan, Hall, & Glang (1997)