ASSESSMENT CRITERIA
Jessie Johncock
Mod. 2 SPE 536
October 7, 2012
ASSESSMENT
 “Assessment in special education is the process used to determine a
student’s specific learning strengths and needs and to determine whether
or not that student is eligible for special education services” (Pierangelo &
Giuliani, 2008).
DESCRIPTIVE STATISTICS
 Summary statistics, such as percentages, averages, and measures of
variability, calculated for a specific group of individuals.
 Descriptive statistics are used to describe groups of numerical data,
such as test scores, the number of hours of instruction, or the number of
students enrolled in a particular classroom setting.
(Farlex, 2012)
DIMENSIONS OF
DESCRIPTIVE STATISTICS
 Variable (x) - A measurable characteristic. Individual measurements
of a variable are called varieties, observations, or cases.
 Population (X) - All subjects or objects possessing some common
specified characteristic. The population in a statistical investigation is
arbitrarily defined by naming its unique properties.
 Parameter - A measurable characteristic of a population; a
measurable quantity derived from a population, such as a population
mean or standard deviation.
 Sample - A smaller group of subjects or objects selected from a
larger group (population).
 Frequency graph - A picture depicting the number of times an event
occurred.
 Bar graph or histogram - A frequency graph with the number of
blocks or the length of a bar representing the frequency of occurrence.
 Frequency polygon - A modification of the bar graph, with lines
connecting the midpoints of the highest point on each bar.
 Frequency curve - A modification of a frequency polygon with the
sharp corners rounded. The areas under the connecting line of the bar
graph, frequency polygon, and frequency curve are equivalent and
represent frequency of occurrence.
 Mean (µ or x̄), the arithmetic mean - A number having an
intermediate value between the other numbers in the group from
which it was derived, expressing their average value. It is the simple
average formed by adding the numbers together and dividing by the
number of numbers in the group.
(Key, 1997)
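The mean and frequency-of-occurrence ideas above can be sketched in a few lines of code. The scores below are hypothetical data, chosen only for illustration:

```python
# Illustration of the descriptive statistics defined above,
# using a hypothetical set of test scores (assumed data).
from collections import Counter

scores = [72, 85, 85, 90, 78, 85, 90, 64]  # the variable x: one score per student

# Mean: add the numbers together and divide by how many there are.
mean = sum(scores) / len(scores)

# Frequency of occurrence: how many times each score appears --
# the counts a bar graph or histogram would display.
frequencies = Counter(scores)

print(mean)             # → 81.125
print(frequencies[85])  # → 3 (the score 85 occurs three times)
```

Here the full list of eight scores is treated as the population, so the computed mean plays the role of the parameter µ; computed on a sample, the same average would be written x̄.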
RELIABILITY
 Reliability is used to describe
the overall consistency of a
measure. A measure is said to
have a high reliability if it
produces similar results under
consistent conditions.
VALIDITY
 Validity is the degree to
which the evidence supports
the interpretation of test
scores.
(AERA, 1999)
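One common way the consistency described above is quantified is a test-retest correlation: the same instrument given twice under consistent conditions should rank examinees similarly. A minimal sketch, where the score pairs and the use of the Pearson correlation as the reliability index are illustrative assumptions, not from the source:

```python
import math

# Hypothetical scores for the same five students on two administrations
# of the same test (assumed data, for illustration only).
first = [70, 82, 90, 65, 88]
second = [72, 80, 91, 63, 90]

def pearson(x, y):
    """Pearson correlation: a common index of test-retest reliability."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(first, second)
print(round(r, 3))  # a value near 1.0 indicates highly consistent results
```

A coefficient near 1.0 means the measure produced similar results both times, i.e. high reliability in the sense defined above.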
NORM-REFERENCED TESTS (NRTS)
VS. CRITERION-REFERENCED TESTS (CRTS)
These two types of tests differ in their intended purposes, in how their
content is selected, and in the scoring process that defines how the test
results should be interpreted.
PURPOSE
NRT
 To rank each student with respect to the achievement of others in
broad areas of knowledge.
 To discriminate between high and low achievers.
CRT
 To determine whether each student has achieved specific skills or
concepts, and to establish students’ initial background knowledge on
specific content.
 The goal of these tests is to decide whether a student has
demonstrated mastery of a certain skill or set of skills.
(Huitt, 1996)
CONTENT
NRT
 Measures extensive skill areas drawn from a variety of textbooks,
syllabi, and the judgments of curriculum experts.
CRT
 Measures the particular skills that make up a designated curriculum;
these skills are identified by educators, and each skill is expressed as
an instructional objective.
(Huitt, 1996)
SCORE INTERPRETATION
NRT
 Each individual is compared to the other examinees and assigned a
score (typically expressed as a percentile or a grade-equivalent score).
CRT
 Each student is compared with a specific standard for acceptable
achievement; the performance of other examinees is not taken into
consideration.
 Student achievement is reported for individual skills.
(Huitt, 1996)
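The two score interpretations above can be contrasted in a short sketch. The scores, the student, and the 80-point mastery cutoff are hypothetical assumptions made for illustration:

```python
# Contrasting NRT and CRT score interpretation with hypothetical data.
# The comparison group, the student's score, and the mastery cutoff
# are all illustrative assumptions.

scores = [55, 60, 68, 75, 75, 82, 88, 94]  # the other examinees (NRT norm group)
student_score = 82

# NRT: compare the student to other examinees -> a percentile rank.
below = sum(1 for s in scores if s < student_score)
percentile = 100 * below / len(scores)

# CRT: compare the student to a fixed standard; other examinees are ignored.
MASTERY_CUTOFF = 80  # assumed criterion for "acceptable achievement"
mastered = student_score >= MASTERY_CUTOFF

print(percentile)  # → 62.5 (scored above 5 of the 8 examinees)
print(mastered)    # → True (meets the 80-point criterion)
```

Note that the NRT result would change if the norm group changed, while the CRT result depends only on the fixed standard, mirroring the distinction drawn above.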
CONCLUSION
 To properly assess a student’s abilities and comprehension, teachers
need to be aware of the different types of assessment, know how to
interpret the results of each, and determine the appropriate response
for the student based on the evidence obtained (e.g., whether the
student should receive special services).
REFERENCES

American Educational Research Association, American Psychological Association, & National Council on
Measurement in Education. (1999). Standards for Educational and Psychological Testing. Washington, DC: American
Educational Research Association.

Huitt, W. (1996). Measurement and evaluation: Criterion- versus norm-referenced testing. Educational
Psychology Interactive. Valdosta, GA: Valdosta State University. Retrieved October 1, 2012
from http://www.edpsycinteractive.org/topics/measeval/crnmref.html

Key, J. P. (1997). Descriptive Statistics. Research Design in Occupational Education. Oklahoma City, OK:
Oklahoma State University. Retrieved October 1, 2012 from
http://www.okstate.edu/ag/agedcm4h/academic/aged5980a/5980/newpage22.htm

Pierangelo, R., & Giuliani, G. A. (Eds.). (2008). Understanding Assessment in the Special Education Process:
A Step-by-Step Guide for Educators. Thousand Oaks, CA: Corwin Press. ISBN 13: 9781412917919

Popham, J. W. (1975). Educational evaluation. Englewood Cliffs, New Jersey: Prentice-Hall, Inc.