Training in Experimental Design (TED):
Developing Scalable and Adaptive Computer-based Science Instruction (Year 2)
Stephanie Siler, Mari Strand Cary, Cressida Magaro, Kevin Willows, and David Klahr
Background
The ability to design and evaluate experiments is critical both in science classes and in the real
world. Although elementary and middle school students often lack these skills, they
can be taught with relatively brief direct instruction (e.g., Chen & Klahr, 1999; Klahr &
Nigam, 2004). However, such instruction is less successful in more challenging schools.
Year 2 Activity and Results
Our activities in Year 2 built on important results of our Year 1 work:
• Learning of CVS (the Control of Variables Strategy) was found to be highly correlated with reading ability. Thus, in Version 2.1 we sought to decrease reading and processing demands to help lower-reading/lower-knowledge students. Specifically, we presented the procedural rules before providing the reasoning for those rules.
• Students often misinterpreted the goal of the instructional activities, thinking that the primary instructional goal was to learn about the specific domain rather than the more abstract, domain-general goal of learning how to design good experiments. Thus, in Version 3.2, we will use variables with “nonsense” names as choices to help prevent these goal misconceptions.
• Identification of successful instructional techniques (e.g., the “step-by-step” remedial instruction we will use in Version 3.3).
Instructional goals:
• Significantly increase elementary and middle school children’s understanding of
scientific experimentation, including procedural (how to design good experiments) and
conceptual (why experiments are/are not informative) understandings. Students will
develop general schemas for designing and interpreting experiments that will enable
them to correctly answer achievement test questions on experimental design.
• Close the achievement gap between high- and low-SES students in this specific, yet broadly applicable, area of science education.
Was Version 2.1 an improvement over Version 1?
• Immediate Ramps post-test (near-transfer) scores suggest the answer is “no.” Although there was a trend for V2.1 > V1, the difference was not significant.
• Delayed TED post-test (far-transfer) scores also suggest the answer is “no”:
• There was a Version x Pre-test x Reading interaction:
• V1 > V2.1 for high-reading, low-pre-test students
• V1 > V2.1 for low-reading, medium-pre-test students
• V1 = V2.1 for low-reading, low-pre-test students
• These results suggested that a sequential focus on procedural before conceptual knowledge was not beneficial to the students we most hope to help.
Design goal: Develop a computer-based intelligent tutor for Training in Experimental Design
(TED) that will provide adaptive instruction based on individuals’ reading level, knowledge
state and mastery in real time across a variety of tasks and science content domains.
Participants: Fifth through seventh graders from five local Catholic schools serving a diverse
student population in terms of SES, reading ability, and experience in experimental design.
Version 2.2 Focus and Implementation
• Based on the comparisons between V2.1 and V1, we returned to the traditional Klahr lab approach
(Chen & Klahr, 1999) of focusing on conceptual knowledge (from which the procedural rules emerge).
We retained, however, the discussion emphasis present since V1.
• To simulate our final Intelligent (knowledge-tracing) Tutor that tailors instruction to the individual
student, we:
• Used human tutors to diagnose students’ initial knowledge state from their pre-test response
patterns (i.e., which CVS rules and misconceptions students held). Members of the research team
tutored students one-to-one or in small groups (where members shared a common diagnosis).
• Used a fully navigable V2.2 interface and a “database” of questions (practice and remedial), each corresponding to one or more CVS rules. Tutors selected questions for individual students from this database based on their assessments of the students’ knowledge states (a selection sketch follows this list).
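To make the simulated-tutor workflow concrete, the following is a minimal Python sketch of selecting practice or remedial items from a rule-indexed question bank, given a diagnosed knowledge state. The CVS rule labels, question IDs, and function are illustrative assumptions, not the actual V2.2 database or materials.

# Illustrative sketch only: rule labels, question IDs, and data structures are
# hypothetical, not the actual TED / V2.2 implementation.

# Hypothetical CVS rules a tutor might diagnose from pre-test response patterns.
CVS_RULES = ["vary_target_variable", "control_other_variables", "single_contrast"]

# Hypothetical "database" of practice and remedial questions, keyed by the
# CVS rule each question is meant to exercise.
QUESTION_BANK = {
    "vary_target_variable":    {"practice": ["ramps-P1", "springs-P2"], "remedial": ["ramps-R1"]},
    "control_other_variables": {"practice": ["drinks-P3"],              "remedial": ["ramps-R2", "study-R3"]},
    "single_contrast":         {"practice": ["plants-P4"],              "remedial": ["study-R4"]},
}

def select_questions(knowledge_state, per_rule=1):
    """Pick remedial items for rules the student has not mastered and
    practice items for rules the student has mastered."""
    selected = []
    for rule, mastered in knowledge_state.items():
        pool = QUESTION_BANK[rule]["practice" if mastered else "remedial"]
        selected.extend(pool[:per_rule])
    return selected

# Example: a student who controls other variables but does not yet vary the target.
diagnosis = {"vary_target_variable": False,
             "control_other_variables": True,
             "single_contrast": False}
print(select_questions(diagnosis))   # ['ramps-R1', 'drinks-P3', 'study-R4']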
Instructional components of TED:
• TED pre/post-tests (6 questions): evaluate and design experiments in three domains.
• Ramps pre/post-test (4 questions): design experiments for different ramps variables.
• Introduction: purpose of the lesson, term definitions, etc.
• Explicit CVS instruction (e.g., Chen & Klahr, 1999).
• Remedial CVS instruction (e.g., a “step-by-step” walk-through of setting up informative experiments) for students showing difficulty understanding the explicit instruction.
• CVS practice: students apply their CVS knowledge to problems selected based on their knowledge states.
Refer to the “TED Tutor Evolution” figure below for instructional components by TED version.
Acknowledgments
This research was supported by the Department of Education, Institute of Education Sciences (# R305H060034).
Was Version 2.2 an improvement over Version 1?
• Delayed TED post-test scores suggest the answer is “no” (V1 = V2.2).
• There were no significant interactions among pre-test scores, reading achievement scores, and Version.
• That is, there were no differential effects of instruction based on reading or prior knowledge.
• The relationship between reading achievement and post-test scores was positive and highly significant (p < .001); the relationship between pre-test and post-test scores was weaker (p = .09).
How can we help low-reading students and students with goal misconceptions?
• Version 3.2 (in development) will address this question by incorporating explicit instruction on CVS
concepts prior to any other tutor interactions.
• We adopted this approach in V3.2 because the heavy emphasis on discussion may have been confusing to low-reading students.
• To minimize elicitation of domain knowledge and, in turn, reduce goal misconceptions, two of the four variables used in the explicit instruction have “mystery” values (i.e., nonsense names are used).
• We will compare students’ CVS learning in V3.2 to analogous face-to-face learning outcomes (e.g., Klahr & Nigam, 2004; Strand-Cary & Klahr, to appear) as well as to earlier versions of TED instruction.
Are computerized TED pre- and post-tests at least as good as paper/pencil tests (Version 3.1)?
• Online assessment of student knowledge prior to instruction will allow automatic scoring and diagnosis of student knowledge, which will contribute to the tutor’s knowledge-tracing capabilities (a scoring sketch follows this list).
• Computerized and paper/pencil test scores were not significantly different; thus, we are confident we can incorporate the computerized tests into all future TED instruction.
• The scoring trend favored the computerized version.
• All students who took the computerized test reported they preferred it to a paper version.
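As one illustration of what automatic scoring of a designed experiment could look like, the sketch below (in Python, like the other sketches here) checks whether a student’s two-condition setup varies only the target variable, i.e., whether it is an informative, CVS-consistent contrast. The variable names and diagnosis fields are assumptions rather than the actual TED scoring rubric.

# Illustrative sketch only: the variable names and scoring logic are assumptions,
# not the actual TED scoring rubric.

def score_design(setup_a, setup_b, target_variable):
    """Return a simple diagnosis of a two-condition ramps experiment.

    setup_a, setup_b: dicts mapping variable name -> chosen value.
    target_variable: the variable the student was asked to test.
    """
    differing = [v for v in setup_a if setup_a[v] != setup_b[v]]
    diagnosis = {
        "varied_target": target_variable in differing,
        "controlled_others": all(v == target_variable for v in differing),
    }
    # A CVS-consistent design varies the target variable and nothing else.
    diagnosis["informative"] = diagnosis["varied_target"] and diagnosis["controlled_others"]
    return diagnosis

# Example: the student varied ramp steepness but also (incorrectly) the ball type.
ramp_1 = {"steepness": "high", "surface": "smooth", "ball": "golf"}
ramp_2 = {"steepness": "low",  "surface": "smooth", "ball": "rubber"}
print(score_design(ramp_1, ramp_2, target_variable="steepness"))
# {'varied_target': True, 'controlled_others': False, 'informative': False}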
Future work (Year 3 goals):
• Develop and test Wizard of Oz tutor (Version 3.3):
• The purpose of V3.3 is to test the TED architecture and to provide further information about the pedagogical actions to take given a student’s current knowledge state.
• It will be a full TED tutor with computerized pre- and post-tests, introduction, CVS instruction, remediation, and problem-solving; the artificial intelligence (AI) component will not yet be present.
• Through a computer interface, a human tutor (acting as the AI component) will observe a student as he or she works through TED instruction. Based on the student’s current actions and knowledge state (visible to the tutor), the computer and human tutor will jointly determine instruction.
• Develop the final intelligent TED tutor (Version 4), which will be identical in format to the final Wizard of Oz version but will have the AI component in place of the human tutor (refer below for architecture; a knowledge-tracing sketch follows this list).
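The poster does not specify the model behind the tutor’s knowledge-tracing capability. Purely as an illustration, the sketch below applies a standard Bayesian Knowledge Tracing update to a per-rule mastery estimate; the parameter values and the rule being traced are placeholders, not TED’s actual model.

# Illustrative sketch only: a standard Bayesian Knowledge Tracing update, shown as
# one way a per-user knowledge-tracing component could revise mastery estimates.
# Parameter values and the traced rule are placeholders, not TED's actual model.

def bkt_update(p_known, correct, p_slip=0.10, p_guess=0.20, p_learn=0.15):
    """Update P(rule is known) after one observed response."""
    if correct:
        evidence = p_known * (1 - p_slip) / (p_known * (1 - p_slip) + (1 - p_known) * p_guess)
    else:
        evidence = p_known * p_slip / (p_known * p_slip + (1 - p_known) * (1 - p_guess))
    # Allow for learning from the instructional opportunity itself.
    return evidence + (1 - evidence) * p_learn

p = 0.30  # prior estimate that the student knows "control other variables"
for response in [False, True, True]:   # one error, then two correct answers
    p = bkt_update(p, response)
    print(round(p, 2))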
TED Tutor Evolution

Year 1
Version 1: Teacher-delivered, whole-classroom, discussion-based instruction (no computerization). Inflexible (no differentiation).
• TED pre-test
• Ramps pre-test
• CVS instruction: discussion of 3 ramps set-ups via:
• Ask students whether the experiment is good and why/why not.
• Students generate and justify ideas for improving the experiment.
• Ramps post-test
• TED post-test

Year 2
Version 2.1: Teacher-delivered, whole-classroom, discussion-based instruction (computer-assisted). Limited flexibility (specified points for differentiation).
• TED pre-test
• Sequential CVS instruction via:
• Introduction
• Procedural rules (study habits)
• Conceptual (ramps figures and tables)
• Ramps post-test
• TED post-test

Version 2.2: Human tutor-delivered, small-group or one-to-one, discussion-based instruction (computer-assisted). Adaptive: groups divided based on pre-test diagnoses (rules/misconceptions); tutoring decisions adapted to students.
• TED pre-test
• Introduction (e.g., definitions)
• Conceptual/procedural instruction (ramps, study)
• CVS practice (many domains)
• Remedial: focus on individual “rules” of CVS (many domains)
• TED post-test

Version 3.1: Computerized TED pre/post-tests.
• TED pre/post-test: “design” questions and “explain” prompts added to increase diagnostic information (below).

Version 3.2: Computer-delivered “explicit instruction.”
• Ramps pre-test
• Computerized explicit CVS instruction
• Ramps post-test

Year 3
Version 3.3: Wizard of Oz (computer-delivered, supplemented by a human). Flexible (multiple branching paths).
• TED pre-test
• Version 3.2 explicit instruction, followed by either:
• “Step-by-step” instruction on designing a good experiment if NO CVS mastery following explicit instruction, or
• Scaffolded CVS practice on problems (including problems similar to those on standardized achievement tests) if CVS mastery.
• TED post-test

Version 4: Brief teacher-delivered instruction; individual computer use; ongoing teacher facilitation. Adaptive / “intelligent” (“web” of branches).
Data Streams Overview / Server Architecture
[Figure: server architecture diagram. Student UIs and a teacher UI connect to the server over sockets; per-user threads carry SAI events, input, automation, state-event, and log streams, with encryption, an audit mux, and NTP time-stamping; the server side includes per-user knowledge tracing, tutor state, automation commands, a production system, a playback engine, and logging.]
• Server logging of student responses, including diagnostic information.
• Logging of user actions (during instruction and problem-solving) is in place.
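As a purely illustrative sketch of the per-user event logging this architecture implies, the snippet below timestamps student-interface events and appends them to a per-user log stream. The event fields, file layout, and names are assumptions, not the TED server’s actual format.

# Illustrative sketch only: event fields, file layout, and function names are
# assumptions, not the actual TED server implementation.
import json
import time
from pathlib import Path

LOG_DIR = Path("logs")  # hypothetical location for per-user log streams

def log_event(user_id, event_type, payload):
    """Append one timestamped event (e.g., a selection-action-input event
    from the student UI) to that user's log stream."""
    LOG_DIR.mkdir(exist_ok=True)
    record = {
        "t": time.time(),        # server timestamp (NTP-synchronized in the real system)
        "user": user_id,
        "type": event_type,      # e.g., "sai_event", "state_event", "automation"
        "payload": payload,
    }
    with open(LOG_DIR / f"{user_id}.log", "a") as stream:
        stream.write(json.dumps(record) + "\n")

# Example: a student sets the steepness of ramp 1 in the problem-solving interface.
log_event("student_042", "sai_event",
          {"selection": "ramp1_steepness", "action": "set", "input": "high"})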
Tutor Logic Overview
[Figure: tutor-logic flowchart. After the ramps discussion, students who “get it” proceed directly to a sequence of holistic questions (Holistic 1 ... Holistic Qn); students who “don’t get it” receive step-by-step instruction, with step repetition possible. An error on a holistic question (Holistic Qj, Qj+1, ...) triggers alternate remediation; practice is scaffolded, with fading scaffolding and further alternate remediation if necessary, through Holistic Qn.]
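To make the branching concrete, here is a minimal, hypothetical Python sketch of the flow the diagram suggests: a mastery check after the ramps discussion, step-by-step instruction for students who do not get it, and alternate remediation whenever a holistic question is missed. The mastery check and student responses are simulated stubs, not TED’s tutor logic.

# Illustrative sketch only: the control flow mirrors the flowchart above; the
# mastery check and answers are simulated stubs.

def run_tutor(gets_it, answers):
    """Walk a simulated student through the tutor logic sketched in the figure.

    gets_it: whether the student showed mastery after the ramps discussion.
    answers: simulated correctness for Holistic 1 ... Holistic Qn.
    """
    if not gets_it:
        print("step-by-step instruction (repeated as needed)")
    for i, correct in enumerate(answers, start=1):
        if correct:
            print(f"Holistic {i}: correct -> continue, fading scaffolding")
        else:
            print(f"Holistic {i}: error -> alternate remediation, scaffolded retry")

# Example: a student who did not "get it" initially and misses the second question.
run_tutor(gets_it=False, answers=[True, False, True])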