
Educational Evaluation: Some basic principles

Evaluation and Testing in Language Education

Session 2 Dr Kia Karavas

Evaluation: when did it all begin?

• The rise in systematic evaluation activity began in the field of education in the late forties in the US and later in the UK.

• Disappointed with the unprincipled and ad hoc approach to curriculum development in the US, Ralph Tyler, with the publication of his 1949 book “The Basic Principles of Curriculum and Instruction”, proposed a systematic and simple approach to curriculum planning. In this book, Tyler gave evaluation a prominent place in the curriculum development process.

Tyler’s model was extremely influential and was adopted in the US and the UK in the 1950s and 1960s.


Tyler’s objectives model

• Objectives
• Content
• Organisation
• Evaluation

Criticisms of the Tylerian model of evaluation

• The results of large-scale evaluations in the 60s which adopted the Tylerian approach were quite disappointing.

• Stenhouse in 1975, as Director of the Humanities Curriculum Project in the UK, expressed his concerns with the Tylerian model: “The objectives model of evaluation may give us an indication of whether objectives have been achieved but gives us no indication of how these objectives were achieved. It pays no attention to the processes by which these objectives were achieved; as such it does little or nothing to improve the quality of teaching and learning”.

Move from product to process…

• Stenhouse’s critique gave rise to a new wave in the evaluation scene and led to the development of an alternative approach to curriculum evaluation which focused more on the process of curriculum development and relied on description and interpretation.

• At the same time (late 60s to late 70s) a plethora of models or approaches to evaluation were developed...

Evaluation in language teaching

• Within language education, the first evaluation studies to be carried out were the so-called methods comparison studies, which set out to compare the effectiveness of language teaching methods following an experimental approach much along the lines set by the Tylerian tradition.

• As Alderson (1992:283) points out: A common evaluation paradigm for language education in the 1960s and 1970s was to identify a suitable set of groups of learners, to match them with appropriate control students, to administer a treatment to the experimental group and compare the results of such an experiment with the outcomes of the control group.

Move from product to process in language program evaluation

• The results of such methods comparison studies were largely discouraging. With the development of alternative models of evaluation, language program evaluation moved towards the investigation of program processes and not only program outcomes.

• As Lynch (1996: 39) asserts: The history of program evaluation in applied linguistics can be seen, thus far, as a move away from a concern with tightly controlled experiments focusing on the analysis of product, or student achievement, to a concern for describing and analysing the process of programs as well. This move has paralleled the paradigm dialog in educational evaluation… Specifically, the concern for investigating process as a part of program evaluation has encouraged the use of naturalistic methods.

Large scale ELT project evaluation

• The implementation of fairly large-scale ELT projects funded by government and other agencies (British Council, Overseas Development Agency) from the 1980s onwards gave rise to the need to identify the impact that such projects had and whether they were “value for money”.

TWO BASIC TRUTHS ABOUT EVALUATION 1: There is no best way of conducting an evaluation.

• The models of evaluation that have been developed reflect the very many choices available to evaluators and should not act as blueprints or straitjackets. How one decides to conduct an evaluation depends on the purposes of the evaluation, the nature of the program or project being evaluated, the individuals involved and their interrelationships, and the timescales and resources available. What is essential is that evaluation should be explicit, principled and systematic.

TWO BASIC TRUTHS ABOUT EVALUATION

2: Evaluation plays a pivotal role in each and every stage of the curriculum development process and is no longer confined to the last stage of the curriculum design process, when decisions have been made and implemented.

Why all this fuss about evaluation?

“We have a social and moral responsibility towards our students and towards society at large to state as clearly as we can what it is that we do for them and why what we do is valuable.”

Purpose of evaluation

How you decide to carry out an evaluation depends, to a great extent, on the purposes of your evaluation, i.e. why you are evaluating.

• There are three general evaluation purposes: A. evaluation for accountability purposes, B. evaluation for purposes of curriculum development and C. evaluation for purposes of self-development of teachers (Rea-Dickins and Germaine 1992).

Evaluation for accountability purposes

• Governments and funding bodies need evidence on the basis of which they can stop, cut, continue or extend funding of projects and programmes. They need to determine whether there has been value for money and whether a particular project has been effective and efficient.

• Accountability “refers to the answerability of staff to others for the quality of their work… ‘Others’ in this case could be bureaucrats, employers, senior school staff, parents, students, the community or the taxpayer” (Weir and Roberts 1994:4).

Evaluation for accountability purposes

• Evaluations for purposes of accountability take place at the end of a project or after the project has been running for some time; they aim to report on a product and give an evaluative judgement. The information derived is not used in any major way to improve the functioning of the curriculum or classroom practice. It is used to decide whether or not to continue funding the project.

Summative evaluation

Assesses a project’s success and impact, focuses on program outcomes or end-products, uses readily observed or measurable phenomena as criteria for success and seeks answers to questions like:

• Was the project successful?

• To what extent did the project meet the overall goals?

• What components were the most effective?

• Were the results worth the project’s cost?

• Is the project replicable?

Decisions resulting from summative evaluations will be fairly large scale and may result in sweeping changes.

The problem with summative evaluation

• “When evaluation is conducted only at the end of programmes or projects, it frequently means that crucial information for the evaluation is no longer available” (Weir and Roberts 1994: 15).

• Since in this type of evaluation data on how a program was implemented is not collected, there is no way of knowing whether it was the programme that produced the results or whether the programme was implemented at all.

Evaluation for purposes of curriculum development

• Seeks to improve the educational quality of a programme and is carried out while a programme is in progress.

• Involves information from teachers and other relevant professionals and is usually carried out by an external evaluator and insiders or by insiders only.

• “Evaluation for development should be guided by the concerns of insiders; by the identification of strengths which can be built upon (for example, parts of the course, materials or teaching which are working well) as well as by the identification of obstacles to progress and the introduction of more effective means to achieve desired objectives” (Weir and Roberts 1994: 7).

Formative evaluation

• Is ongoing and takes place during the life of a project.

• Answers questions such as: what are the strengths and weaknesses of the program? How can it be improved and become more effective?

• Main goal is to check, monitor and improve: to see if activities are being conducted and components are progressing toward project goals.

• Involves the evaluation of all aspects of a programme and collects information which is largely descriptive and qualitative and need not entail tests and measurement.

• Is designed to provide information that may be used for future planning and action, i.e. to modify or improve products, programs or activities.

Summative vs formative evaluation

“… when the cook tastes the soup it is formative evaluation and when the guest tastes the soup it is summative. The key is not so much when as why…Both lead to decision making but towards different decisions”.

Hopkins (1989:16)

Both are necessary….

• Formative and summative evaluations represent two ends of a continuum and ideally any evaluation should involve both formative and summative dimensions.

• Both dimensions are important: knowing that goals were reached without knowing whether a program was implemented or not does not tell you why a program was successful. On the other hand, looking only at the implementation of a program without assessing whether original aims were achieved cannot tell you whether a program was effective.

Evaluation for purposes of teacher self-development

• Evaluation is an intrinsic part of teaching and can provide a wealth of information for the improvement of classroom practice.

By evaluating their own classrooms, teachers gain a deeper understanding of what actually happens in their classrooms (as opposed to what is supposed to happen), become more aware of the parameters within which they are working, can confirm the validity of their teaching practices and become more knowledgeable about the processes that lead to successful teaching and learning. They can also become more aware of the need for change, of when and how change can take place, and can develop their confidence and skills in exploring and presenting issues of professional concern.

• It is only by involving teachers in the curriculum renewal and evaluation process that we can close the gap between curriculum intentions and practice (Stenhouse 1975).

Illuminative evaluation

• In this approach to evaluation the stress is on description of the process, i.e. on describing and seeking to find out how different aspects of the programme are working.

• Illuminative evaluation includes three characteristic stages: a) observation, b) further enquiry and c) explanation.

• “…illuminative evaluation concentrates on the information gathering rather than the decision-making component of the evaluation. The task is to provide a comprehensive understanding of the complex reality (or realities) surrounding the programme – in short to illuminate” Hopkins (1989: 24)

The Evaluation Process

Steps in conducting an evaluation (six phases):

1. Initiation and planning: setting the boundaries of the evaluation and establishing the evaluation framework
2. Designing the evaluation instruments; collecting the data
3. Processing and interpreting data
4. Reporting findings
5. Using findings to make decisions about further action
6. Taking action
7. Returning to step 1

Based on Alderson 1992, Hopkins 1989, Lynch 1996, Rea-Dickins and Germaine 1992, Taylor Fitz-Gibbon and Morris 1987.

Step 1: Planning the evaluation (evaluation framework)

• Why are we carrying out the evaluation and who is the information for?

• What aspect(s) of the teaching and learning curriculum will be evaluated?

• What are the criteria to be used in the evaluation? What are your evaluation questions?

• When will the evaluation take place?

• What procedures will be used to collect information?

• Who are we going to get the information from?

• Who will be involved in the evaluation?

• How are we going to manage the evaluation?

• What are we going to do with the information we get?

• What are the constraints and problems that need to be taken into account when planning and carrying out the evaluation?

What aspect(s) of teaching and learning will be evaluated?

• Curriculum design: how effective was the planning and organisation of the programme? How feasible were its objectives?

• Classroom processes: to what extent was the programme implemented?

• Materials of instruction: did materials aid student learning?

• Monitoring of pupil progress: what are students learning?

• Teacher practices: are teachers aiding learners in achieving the objectives of the programme?

• Learning environment: is the learning environment responsive to student needs?

• Staff development: is staff provided with opportunities for further development?

• Decision making: are decision-making structures effective in achieving the goals of a programme?

(Based on Sanders 1992)

What are your evaluation questions?

• Identify key stakeholders and audiences early to help shape questions.

• Formulate potential evaluation questions of interest considering stakeholders and audiences.

• Define outcomes in measurable terms, including criteria for success.

• Determine feasibility; prioritize and eliminate questions.

Examples:

• Are teachers using the suggested method in the classroom?

• Do the materials reflect the principles of the curriculum?

• Do the materials respond to student needs?

• Are assessment methods compatible with syllabus objectives?

What procedures will be used to collect information?

• Depending on the purpose of the evaluation and the evaluation questions, appropriate methods of data collection need to be used:

• Diary studies
• Tests (proficiency, achievement etc.)
• Documentary analysis
• Field notes
• Records analysis
• Classroom observation
• Interaction analysis
• Individual/group interviews
• Opinion surveys
• Questionnaires

Who will evaluate?

Evaluation experts (outsiders):
• They have evaluation expertise
• They have no vested interest in the object of the evaluation
• They are less likely to be biased towards particular outcomes
• They can offer a “fresh” perspective on results
• They can address issues that insiders find uncomfortable/controversial
• They have greater credibility

Insiders:
• They know the situation well and can better explain or offer different interpretations of phenomena
• They can help trace developments in the project
• They have better recollections of decisions taken
• They understand human relationships/predispositions better
• Insider evaluation helps participants become more aware and critical of their practices
• It raises awareness of evaluation practices

Conditions to be met

• Information gathered is not perceived as valuable or useful (wrong questions asked)

• Information gathered is not seen as credible or convincing (wrong techniques used)

• The report is late or not understandable (does not contribute to the decision-making process)

Features of good evaluation:

• Based on clearly stated objectives
• Findings contribute to decision making
• Planned, not ad hoc
• Objective and systematic
• A continuous and integral part of the teaching–learning process