Critical Issues in Early Childhood Assessment and Accountability
Kathy Hebbeler
ECO at SRI International
Early Childhood Outcomes Meeting
Baltimore, Maryland, August 2007
Terminology
- Assessment = Single Tool
- Assessment = Process
What Is Assessment?
“Assessment is a generic term that refers to the process of gathering information for decision-making.” (McLean, Wolery, & Bailey, 2004)
What Is Assessment?
“Early childhood assessment is a flexible, collaborative decision-making process in which teams of parents and professionals repeatedly revise their judgments and reach consensus....” (Bagnato & Neisworth, 1991; quoted in DEC Recommended Practices, 2005)
Possible uses of assessment in EI/ECSE
- Eligibility determination
  - Norm-referenced test
  - % delay (see the sketch after this list)
- Individual program planning
  - Curriculum-based tools, e.g., LAP, Carolina
- Ongoing individualized progress monitoring
  - Curriculum-based tools?
- Accountability assessment/program improvement
  - E.g., Head Start reporting system
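Because the % delay criterion drives many eligibility decisions, a minimal sketch of the usual arithmetic may be useful. The 24- and 18-month values below are hypothetical, and cutoff percentages vary by state.

```python
def percent_delay(chronological_age_months: float, developmental_age_months: float) -> float:
    """Percent delay = (CA - DA) / CA * 100, a common eligibility metric."""
    return (chronological_age_months - developmental_age_months) / chronological_age_months * 100

# Illustrative values only: a hypothetical 24-month-old functioning at an
# 18-month level shows a 25% delay, which meets many states' cutoffs.
print(f"{percent_delay(24, 18):.0f}% delay")  # -> 25% delay
```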
Differences in these uses
- Individualized vs. aggregate
- Who uses the data
- Who derives benefit
- Who suffers consequences of assessment done poorly
Issue #1: Variation in assessment use by practitioners
- We do not know how many EI/ECSE programs routinely use assessment tools for anything other than eligibility
  - Presumed eligibility?
  - In some programs, the only formal assessment is for eligibility
  - There appears to be much state-to-state variation
Issue #2: Does each purpose require a different tool?
- Can an assessment (process) conducted for eligibility determination also provide information for
  - Program planning?
  - Accountability?
- Can the same assessment (process) be used to plan a program and monitor progress?
Accountability in particular
- Can assessments already being used by programs for other purposes (whatever they are) be used for accountability purposes?
- Does this apply to all tools or only to some categories of tools?
  - Screening tools for accountability?
Issue #3: Changing perspective on assessment in the general early childhood community
- Major changes in the last 15 years in how assessment of young children is viewed
- Old position: Do not test little kids
- New position: Ongoing assessment is part of a high-quality early childhood program
What changed
- New and different tools became available for general EC
  - Curriculum-based assessments were developed, e.g., Creative Curriculum, Work Sampling, etc.
  - Tools for ages 3-5 came first; tools for 0-3 are coming now
- Interesting sidebar: Curriculum-based assessments for programs serving children 0-5 with disabilities have been around for years
What changed
- The purpose of assessment was redefined
- Not about: sorting, labeling, using assessment to deny access
- Now about: getting a rich picture of what children can and cannot do, and using that information to help them acquire new skills
  - “progress monitoring”
What changed
- Assessment had always been seen as a process with multiple purposes
- Distinctions have been made between good and bad uses of assessment with young children
- Good uses are now promoted
- For more information: NAEYC web site (position statement on Curriculum, Assessment, and Evaluation)
Position Statement of the National Association for the Education of Young Children and the National Association of Early Childhood Specialists in State Departments of Education (2003)
Policymakers, early childhood professionals, and others have a shared responsibility to “make ethical, appropriate, valid, and reliable assessment a central part of all early childhood programs.”
Interesting Irony
- Even though the disability community had developed many curriculum-based assessment tools, currently [many? some?] programs do not practice ongoing assessment
- The push for ongoing assessment to monitor how a child is doing and to plan for instruction/intervention is coming from the general education community
Issue #4: Limitations of existing assessment tools
“Assessment of young children poses greater challenges than people generally realize…. Assessment results—in particular, standardized tests that reflect a given point in time—can easily misrepresent children’s learning… There is widespread dissatisfaction with traditional norm-referenced standardized tests, which are based on early 20th century psychological theory.” (National Research Council, 2001)
Problem: Nature of the young child
- Not well suited to a standardized testing situation
- Performance varies from day to day, place to place, person to person
- Don’t perform well for strangers or on demand
- Growth is sporadic and uneven
Problem: Response capabilities of children with disabilities
- Same issue as with school-age children: assessment assumes a child who can see, hear and understand spoken language, point, etc.
- Few assessments include accommodations, nor were children with disabilities included in the norming samples
- Very little data on the validity of accommodations with young children
Problem: Impact of disability/delay on development
- Typically developing children tend to develop in multiple areas simultaneously
  - Language, cognition, and motor skills march forward more or less together
  - Even though development has been divided into domains for assessment and research, much of development is intertwined
  - These interconnections present challenges for obtaining a “pure” domain score
Problem: Impact of disability/delay on development
- More difficult to accurately portray the development of children developing atypically with available assessments, especially children with language delays
  - Do they understand the directions?
  - Is the assessment tapping cognition or language?
  - Are other behavioral/attentional factors influencing performance?
Problem: Psychometric properties of existing instruments
- Some of the most common instruments are being used with limited or no reliability and validity data
- None have validity or reliability data reported for use in outcomes measurement and accountability
Response: New forms of assessment
- Growing recognition that the only way to get a valid picture of what a child can do/does is to look at performance across a variety of settings and people, including what the child does spontaneously with familiar adults and in familiar situations
- Can’t base conclusions about a child’s capabilities on elicited responses alone
- “Authentic assessment”
Position Statement: NAEYC and NAECS/SDE
“To assess young children’s strengths, progress, and needs, use assessment methods that are developmentally appropriate, culturally and linguistically responsive, tied to children’s daily activities, supported by professional development, inclusive of families, and connected to specific, beneficial purposes”
Response: Use multiple sources of information (best practice)
“A single test, person, or occasion is not a sufficient source of information. This means that we must gather information from several sources, instruments, settings and occasions to produce the most valid description of the child’s status or progress” (DEC Recommended Practices)
Issue #5: Strategies for synthesizing multiple sources of information
- And just how is that information supposed to be put together? (one possible approach is sketched below)
- Especially for aggregated data (accountability/program improvement)
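One hypothetical way to structure the synthesis question: collect each source's judgment on a common scale, summarize when the sources agree, and route disagreements to the team rather than averaging them away. The source names, scale, and decision rule below are all illustrative assumptions, not the ECO process.

```python
from statistics import mean

# Hypothetical judgments from three sources on a common 1-7 scale;
# the source names and the scale are illustrative assumptions.
sources = {
    "parent report": 5,
    "provider observation": 4,
    "curriculum-based tool": 3,
}

ratings = list(sources.values())
spread = max(ratings) - min(ratings)

if spread <= 1:
    # Sources broadly agree, so a simple summary is defensible.
    print(f"Consensus summary rating: {round(mean(ratings))}")
else:
    # Sources diverge: flag for team discussion rather than averaging,
    # since mechanical aggregation can hide meaningful disagreement.
    print(f"Sources disagree (spread = {spread}); refer to the team for consensus")
```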
Issue #6: Validity and Reliability
- Validity and reliability are not characteristics of an assessment per se
- Validity is context dependent: it depends on the use of the results
  - Individual vs. group decisions
- “Validity, the degree that an assessment measures what it purports to measure, relates to the use of the test, rather than the test itself.” (Score Reliability, p. 113)
Issue #6: Validity and Reliability
- Reliability is a characteristic of a set of scores, not of a test (a worked check is sketched below)
- “…reliability refers to the degree of consistency of the information obtained from an information gathering process”
- “…reliability of the scores provided by an instrument or procedure may fluctuate depending on how, when, and to whom the instrument or procedure is administered…” (Joint Committee on Standards for Educational Evaluation, 1994; quoted in Score Reliability, p. 95)
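Because reliability attaches to scores in a specific context, one concrete check a state could run is to have a second rater independently score a sample of children and quantify agreement. The sketch below computes exact agreement and Cohen's kappa for hypothetical paired ratings; the 1-7 scale and the data are invented for illustration.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' categorical scores."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical paired ratings of the same 10 children on a 1-7 scale.
primary = [5, 4, 6, 3, 5, 2, 7, 4, 5, 6]
second  = [5, 4, 5, 3, 5, 2, 7, 5, 5, 6]
agreement = sum(a == b for a, b in zip(primary, second)) / len(primary)
print(f"Exact agreement: {agreement:.0%}, kappa: {cohens_kappa(primary, second):.2f}")
# -> Exact agreement: 80%, kappa: 0.74
```

A low kappa on such a re-scored sample would signal that the scores, in this particular use, are not yet consistent enough to defend.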
Implications for State Outcome Data Collections
- You cannot assume that your state’s scores are valid and reliable because your state is using a tool/process that has demonstrated validity or reliability
- Validity and reliability need to be established for each use/context
Validity in an accountability system
- Validity question: Do the assessment results lead to the “right” decisions? (framework from the Council of Chief State School Officers)
- How does one assess validity in an accountability system?
- How should a state determine the validity of its child outcome data? Of the data being submitted to OSEP?
Issue #7: Validity vs. credibility dilemma for accountability
Strangers can’t elicit valid data on young children’s performance capabilities in a testing situation,
BUT
can data produced by those who know the child, and whose programs are being evaluated, be credible in an accountability system?
States have spoken…
- For child outcomes, states are collecting data through those familiar with the child
- Implications:
  - Data are subject to credibility challenges
  - Need to put safeguards in place so you can defend the credibility of your data
Issue #8: How to use current assessment tools to look at functional outcomes
- OSEP outcomes are functional and cut across domains
- Existing assessments provide scores for domains, not for the 3 outcomes (a naive crosswalk is sketched below to show why this mapping is nontrivial)
- Existing assessments vary in the extent to which they assess functioning vs. isolated skills
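To make the mismatch concrete, here is a deliberately naive sketch of a domain-to-outcome crosswalk. The domain mapping and scores are invented for illustration; averaging domain scores like this is exactly the kind of shortcut the slide cautions against, since functional outcomes cut across domains rather than decomposing neatly into them.

```python
# The three OSEP child outcomes are real; the domain mapping below and
# the plain-average weighting are illustrative assumptions only.
CROSSWALK = {
    "positive social-emotional skills": ["social-emotional"],
    "acquiring and using knowledge and skills": ["cognitive", "communication"],
    "taking action to meet needs": ["adaptive", "motor"],
}

domain_scores = {  # invented domain scores for one hypothetical child
    "social-emotional": 85,
    "cognitive": 78,
    "communication": 72,
    "adaptive": 90,
    "motor": 88,
}

for outcome, domains in CROSSWALK.items():
    avg = sum(domain_scores[d] for d in domains) / len(domains)
    print(f"{outcome}: {avg:.0f} (averaged from {', '.join(domains)})")
```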
Outcomes Are Functional
Functional outcomes:
- Refer to things that are meaningful to the child in the context of everyday living
- Refer to an integrated series of behaviors or skills that allow the child to achieve the important everyday goals
Question to ask
- Is the information provided by the assessment really functional?
Issue #9: Variation in provider knowledge of assessment
(Based on ECO work with states)
- Some practitioners are skilled in administering and interpreting multiple assessment tools, some in one, and some rarely use any.
- Many children are served in programs for typically developing children, where knowledge and use of assessment is limited.
- How will practitioners be trained and supervised?
Issue #10: Variation in provider knowledge of the child across settings
- Some practitioners only see children in clinic settings or for a very short period of time
- How can practitioners obtain more comprehensive information about children’s behavior and daily routines?
Issue #11: Role of families in the assessment process
- Families provide a unique perspective on the child’s functioning
- Not all assessment tools have good procedures for incorporating the family’s perspective
- Good tools/procedures are needed for learning about the child from the family
Role of families in the assessment process
- Programs vary in how much and how they share assessment data with families, especially with regard to communicating developmental ages or the extent of a child’s delay
- Some providers are “soft-pedaling” the assessment results
- Providers may need training in:
  - eliciting information about the child’s day-to-day functioning, and
  - sharing results with families
Issue #12: Multiple assessment systems
- Children 0-5 participating in IDEA programs will also be participating in the required OSEP reporting (approx. 1 million children)
- Some of these children may also be participating in other assessment systems
[Diagram: Participation in Multiple Accountability Systems?? (Child Care, Head Start, OSEP Reporting)]
Bottom Line
- What needs to happen to make sure assessments make a meaningful contribution to improved outcomes and program improvement?
- What can be done to ensure assessment data used for outcomes measurement are:
  - Meaningful
  - Valid
  - Reliable
  - Credible?
Responsibilities as State Leaders
- Ensure that:
  - Practitioners understand recommended practices with regard to assessment of young children
  - Practitioners have the skills necessary to engage in recommended practices
  - Practitioners actively and appropriately involve families in the assessment process
More Responsibilities
- Ensure that:
  - Practitioners have the skills to sensitively and accurately explain assessment results to families
  - Practitioners use ongoing assessment to monitor children’s progress AND to make adjustments to the child’s program based on the results
And More…
- Put mechanisms in place to promote quality assessment for all purposes, including accountability:
  - Supervisors, coaches
  - Data checks and verification (one such check is sketched after this list)
- Create a culture promoting data use, assessment data in particular
- Involve the entire state in using data for program improvement
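As one illustration of the "data checks and verification" bullet, here is a minimal anomaly screen a state might run over program-level ratings: it flags distributions that look implausible, such as every child rated at the ceiling. The thresholds, scale, and data are assumptions for illustration, not OSEP requirements.

```python
# Hypothetical program-level ratings on a 1-7 scale (illustrative data).
programs = {
    "Program A": [5, 6, 4, 5, 3, 6, 5],
    "Program B": [7, 7, 7, 7, 7, 7, 7],   # everyone at ceiling: suspicious
    "Program C": [2, 7, 2, 7, 2, 7, 2],
}

for name, scores in programs.items():
    at_ceiling = sum(s == 7 for s in scores) / len(scores)
    distinct = len(set(scores))
    flags = []
    if at_ceiling > 0.9:
        flags.append("nearly all children rated at ceiling")
    if distinct <= 2:
        flags.append("little variation in ratings")
    status = "; ".join(flags) if flags else "no flags"
    print(f"{name}: {status}")
```

Flagged programs would then get a follow-up such as double-scoring a sample of records, in the spirit of the reliability check sketched earlier.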
And More…
- Collaborate with:
  - Higher education, to ensure new practitioners are entering the field with the necessary knowledge and skills related to assessment
  - Other programs serving the same children, to learn their “message” and possible requirements related to assessment (Goal: families hear one message)
I Love Good Outcome Data
Hats off to you for leading the charge!