Survey of Super LEAs Evaluation Systems

Transcript

Survey of Super LEAs Evaluation Systems
Performance Evaluation Advisory Council
July 16th, 2010
1
Update
• Surveyed 7 of 13 Super LEAs
• We are scheduled to talk to one additional district
• We will continue to reach out to districts and intend to
reach all of them
• Super LEAs range in size from very small (fewer than 500
students) to 27,000 students
• Schools involved in the process are high schools
• Spoke to the Superintendent or a designee, including HR
Directors, Assistant Superintendents, and Principals
• Still receiving data back from districts
Objective
2
List of Super LEAs
3
Elements of an Evaluation System
The survey tool used was developed in partnership with The New Teacher
Project and addresses three elements of the evaluation system:
Practice - An effective evaluation system must have the right criteria or
structure
Process - The evaluation process must be completed with fidelity (e.g. the number
of observations and summative evaluations)
Performance - Districts need to use the information for human capital
decisions, and various stakeholders need to be held accountable for the
integrity of the tool (this includes demonstrating a link between teacher
ratings and student performance).
4
Questions
Broadly, the questions asked for the following information:
1. What is your current process for evaluating teachers and principals?
2. Are you satisfied with your existing system?
3. How do you measure student performance?
More detailed questions included:
1. Does the district use an evaluation tool that includes three or more summative performance rating categories
for teachers?
2. Does the district use an established and well-documented rubric to evaluate teacher performance (e.g.
Charlotte Danielson's Framework for Teaching)?
3. Does the district evaluation tool identify areas of strength for the teacher that could be leveraged for students
and teachers?
4. Does the district evaluation tool identify areas of weakness that can inform the professional development plan,
student assignment, and overall scheduling?
5. Do individuals who perform teacher evaluations receive training in objective observation and in identifying
strong practice?
6. Are tenured teachers who receive unsatisfactory ratings subject to remediation, and if there is no improvement,
dismissal?
7. Is there a system in place that allows for teachers' evaluations to be electronically recorded?
8. Does this system provide a facility to allow aggregation of teachers' evaluations across dimensions such as
school, grade, subject, or the entire district? (A sketch of this kind of aggregation follows this slide.)
5
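Note: a minimal sketch of the capability question 8 asks about, i.e. aggregating electronically recorded teacher evaluations by school, grade, or subject, or across the entire district. The school names, record fields, and ratings below are invented for illustration and are not drawn from any surveyed district's system.

```python
# Hypothetical illustration only: aggregate summative ratings (1-4 scale)
# by school, grade, subject, or across the entire district.
from collections import defaultdict
from statistics import mean

# Each record: (school, grade, subject, summative_rating) -- made-up data
records = [
    ("North HS", 9, "Math", 3),
    ("North HS", 10, "English", 4),
    ("South HS", 9, "Math", 2),
    ("South HS", 11, "Science", 3),
]

def aggregate(records, dimension):
    """Average summative ratings grouped by one dimension
    (0 = school, 1 = grade, 2 = subject)."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec[dimension]].append(rec[3])
    return {key: round(mean(vals), 2) for key, vals in groups.items()}

print(aggregate(records, 0))                    # by school
print(aggregate(records, 2))                    # by subject
print(round(mean(r[3] for r in records), 2))    # entire district
```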
Practice
• Districts have a wide range of sophistication in how they measure
practice:
– A number of districts had observation tools without evaluation rubrics
– 3 districts that we surveyed are currently using modified versions of the
Danielson Framework
– Methods for developing a summative grade include an average score, a
point-scoring system, and professional judgment (see the sketch after this slide)
• Districts were open to replacing or modifying their tools based on the state
model
– 4 districts already have joint committees that would allow them to
move quickly towards updating or modifying their tools
• Some districts had models for both teachers and support staff (these districts
tended to use modified versions of Danielson)
6
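Note: a minimal sketch of two of the summative-grade methods districts described, a simple average of component rubric scores and a point-scoring system with cutoffs. The component names, scores, and cutoff values are invented for illustration and do not come from any surveyed district or from the Danielson Framework.

```python
# Hypothetical illustration only: deriving a summative grade from
# component rubric scores (1-4 scale) in two different ways.
from statistics import mean

components = {"Planning": 3, "Environment": 4, "Instruction": 3, "Professionalism": 2}

# Method 1: simple average of the component scores
average_score = mean(components.values())

# Method 2: point-scoring system -- sum the points, then apply cutoffs
points = sum(components.values())
if points >= 14:
    rating = "Distinguished"
elif points >= 11:
    rating = "Proficient"
elif points >= 8:
    rating = "Basic"
else:
    rating = "Unsatisfactory"

print(average_score, points, rating)    # 3.0 12 Proficient
```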
Process
• Most districts used a standard process: non-tenured teachers were
evaluated annually and tenured teachers every other year
– Non-tenured teachers were often evaluated twice a year; the
number of observations ranged from 2 to 6
– Tenured teachers were rarely evaluated more than once every 2
years
– A number of districts had “needs improvement” ratings
• These ratings led to professional growth plans and triggered an
additional evaluation
• One district created a professional growth plan for years in which
the tenured teacher was not evaluated
– Districts generally believed that goals developed in the plan
should focus more on student performance
7
Performance
• No districts are using student data for evaluation or goal setting
• Districts are not correlating evaluations with student data
• Some districts are experimenting with student portfolios
• Variety of interim and formative assessments:
• Less use of interim assessments at the high school level
• Cost of assessment is a driver in deciding which tool to use
– AIMS Web
– ThinkLink (4 districts)
– Star Reading and Math
• Looking for the state to provide leadership on the collection and use of data
– Need to be cognizant of policy implications; districts must give teachers notice
in March but don't necessarily have the data by then
• Should avoid using formative data as part of evaluation
8
Other Key Findings
• Language matters
– One district had modified the Danielson levels of Basic,
Proficient, and Distinguished, replacing Basic with Meets
Standard
– They found this led to grade inflation, in that experienced
teachers with basic skills were still perceived as satisfactory
• Need to collect tools from districts and leverage their work on
practice
• None of the districts are validating the reliability of evaluators
• None of the districts are collecting data at the granular level of
observation to identify district weaknesses
• Districts had varying degrees of training; one district used
stimulus funds to bring in Danielson trainers
9
Possible Next Steps
• Bring districts in based on topics
– Professional growth planning
– Teacher practice
– Evaluator training
– Evaluation of service personnel
• Reach out to joint committees and include them in
PEAC conversations
10