Learned non-use: Myth or reality? (Preliminary)
Optimal Therapy After Stroke:
Insights from a Computational
Model
Cheol Han
June 12, 2007
What is rehabilitation?
Behavioral Compensation
Use the non-paralyzed arm instead of the paralyzed arm
Develop an alternative strategy
Neural Recovery
Use the paralyzed arm so that movement becomes as close as possible to a healthy person's
Harness neuroplasticity so that peri-lesion neurons take over the lost function
Learned non-use
Dr. Taub (1966):
"Right after a stroke, a limb is paralyzed,"
“Whenever the person tries to move an arm, it simply doesn't
work."
“Even when all the cells that represent the arm in the brain are not
dead, the patient, expecting failure, stops trying to move it.”
"We call it learned non-use,"
(from http://www.mult-sclerosis.org/news/Aug2001/RehabTherapy.html)
The first question:
Is “learned non-use” a myth, or reality?
Hypothesis (learned non-use is a myth): the arm is used less because of lower performance after stroke, but use is more or less proportional to performance.
Alternative hypothesis (learned non-use is reality): the % of spontaneous hand use is very small even when performance is non-zero.
Learned non-use
[Plot: % of spontaneous hand use vs. motor performance, contrasting a learned non-use curve with another possible explanation from Dr. Gordon and Dr. Winstein]
How to define or measure motor performance?
The second question: How to find
optimal schedule of rehabilitation?
Rehabilitation programs are expensive
The optimal duration of rehabilitation may differ when:
The speed of learning is different
The size of the stroke is different
And so on.
One-size-fits-all rehabilitation is not cost-efficient.
→ Optimal therapy tailored to the individual.
Approach
Find “optimal therapy schedule”
using a SIMPLE computational model that has
TWO components:
1. Motor cortex for arm reaching
Motor learning and re-learning
Motor lesion due to stroke
Error-based learning
2. Adaptive spontaneous arm use
“Action chooser”
Reward-based learning
[Model diagram: the desired initial direction drives the left and right motor cortex modules, which learn from the difference between desired and executed initial direction (error-based learning); an action choice module with a reward function selects which arm to use (reward-based learning).]
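A minimal code skeleton of these two components (the class names and sizes are hypothetical, for illustration only; the details follow on later slides):

```python
import numpy as np

class MotorCortex:
    """One arm's population of direction-tuned cells (error-based learning)."""
    def __init__(self, n_cells=500, seed=0):
        rng = np.random.default_rng(seed)
        # Preferred directions start spread around the circle.
        self.pref = rng.uniform(-np.pi, np.pi, n_cells)

class ActionChooser:
    """Action values used to choose between the two arms (reward-based learning)."""
    def __init__(self, n_targets=8):
        # One value per (target, arm); columns: left arm, right arm.
        self.values = np.zeros((n_targets, 2))
```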
Error-driven learning vs. Reward-driven learning
Reward-driven learning
(Reinforcement learning)
Error-driven learning
(Supervised learning)
Specific
Error
“Therapist”: Your initial direction is off by 20 degrees to the left, and your final hand position is 5 cm to the left of the target.
Specifies how much, and in which direction, the patient should correct.
“Therapist”: Your movement was better than it was before! Great, you are making progress.
Tells the patient only whether the movement was good or not.
Experimental Setup for simulation
Each hand starts at the same
position.
Reach to a randomly selected target (all targets at equal distance)
Two conditions after stroke (sketched in code below):
Free choice (no rehabilitation)
Rehabilitation: forced use of the affected arm in all directions
(“constraint-induced therapy”)
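A rough sketch of the simulated protocol under these two conditions; `choose_arm`, `reach`, and `learn` are hypothetical placeholders for the motor cortex and action chooser components described on the following slides:

```python
import numpy as np

rng = np.random.default_rng(0)
targets = np.linspace(-np.pi, np.pi, 8, endpoint=False)  # assumed: 8 equidistant targets

def run_condition(model, n_trials, constraint_induced):
    """Free choice: the model picks an arm; CIT: the affected arm is forced."""
    for _ in range(n_trials):
        target = rng.choice(targets)              # random target each trial
        if constraint_induced:
            arm = "affected"                      # therapy forces use of the paretic arm
        else:
            arm = model.choose_arm(target)        # spontaneous, reward-driven choice
        error = model.reach(arm, target)          # execute; returns the directional error
        model.learn(arm, target, error)           # error-based + reward-based updates
```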
Motor cortex model:
simplifying assumptions
Assumption 1: The motor cortex has directional-coding neurons with signal-dependent noise (Georgopoulos et al., 1982; Reinkensmeyer, 2003).
Todorov (2000) showed with a simple model that directional
coding is correlated with muscle movements.
Assumption 2: A stroke lesions part of the preferred-direction coding.
Based on Beer et al.'s (2004) findings.
Assumption 3: Rehabilitation retunes preferred directions of
remaining cells.
Li et al. (2001)'s data showed that the directional tuning of muscle EMG is retuned during motor training.
Based on Todorov (2000)’s idea above, retuning in
directional tuning of muscle EMG (Li et al., 2001) may be
interpreted as retuning in directional tuning of the motor
cortex neurons.
Each neuron in the motor cortex
has directional coding
a = b + k · cos(θ_d − θ_p)
θ_d: desired direction
θ_p: preferred direction of a neuron
(Georgopoulos et al., 1982)
Population coding is a vector sum
of each neuron’s activation
(Georgopoulos et al., 1986)
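A small sketch of this coding and decoding scheme (the baseline b, gain k, and cell count are illustrative values):

```python
import numpy as np

def activation(theta_d, pref, b=1.0, k=1.0):
    """Cosine tuning: a = b + k*cos(theta_d - theta_p) for every cell."""
    return b + k * np.cos(theta_d - pref)

def population_vector(theta_d, pref, b=1.0, k=1.0):
    """Decode the movement direction as the activation-weighted vector sum
    of the cells' preferred directions."""
    a = activation(theta_d, pref, b, k)
    x = np.sum(a * np.cos(pref))
    y = np.sum(a * np.sin(pref))
    return np.arctan2(y, x)

# With many uniformly spread preferred directions, the decoded direction
# closely matches the desired one:
pref = np.random.default_rng(0).uniform(-np.pi, np.pi, 500)
print(np.degrees(population_vector(np.radians(30.0), pref)))  # close to 30
```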
Stroke deteriorates part of
movements
Thin line: unaffected arm, Solid line: affected arm
RF Beer, JPA Dewald, ML Dawson, WZ Rymer (2004, Exp Brain Res)
Motor Learning induces change in
directional tuning of muscle EMG
(Li et al., Neuron, 2001)
Motor Cortex model
Cosine coding extended with signal-dependent noise (Reinkensmeyer, 2003)
Each cell has its own preferred direction.
Same activation rule as in Georgopoulos et al.
A stroke lesions preferred directions with an equal (uniform) distribution.
The number of surviving cells affects the motor variance.
a = b + k · cos(θ_d − θ_p)
θ_d: desired direction
θ_p: preferred direction of a neuron
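A sketch of the lesion of a range of preferred directions and of the signal-dependent noise (the lesioned range, the noise model, and its scale are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def lesion(pref, lo_deg=30.0, hi_deg=90.0):
    """Remove cells whose preferred direction falls inside the lesioned range."""
    lo, hi = np.radians(lo_deg), np.radians(hi_deg)
    return pref[(pref < lo) | (pref > hi)]        # survivors only

def noisy_reach(theta_d, pref, b=1.0, k=1.0, noise=0.2):
    """Population vector with signal-dependent noise: each cell's noise
    standard deviation grows with its activation."""
    a = b + k * np.cos(theta_d - pref)
    a = a + rng.normal(0.0, noise * np.abs(a))    # signal-dependent noise
    x, y = np.sum(a * np.cos(pref)), np.sum(a * np.sin(pref))
    return np.arctan2(y, x)                       # executed direction

# With fewer surviving cells, the executed direction becomes more variable.
```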
Supervised learning in the motor cortex
We extended the model with a different simulation of the stroke and of the learning process:
A stroke lesions preferred directions with an unequal distribution
Rehabilitation retunes the preferred directions of the remaining cells
How are the preferred directions retuned?
Activation Rule
a = b + k · cos(θ_d − θ_p)
θ_d: desired direction
θ_p: preferred direction of a neuron
Error-driven (Supervised) Learning
θ_p ← θ_p + α · (θ_d − θ_r) · a
θ_p: preferred direction of a neuron
θ_d: desired direction
θ_r: executed movement direction
α: learning rate; a: the neuron's activation
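A sketch of this error-driven update applied to the whole population of preferred directions (the learning rate alpha is an illustrative value):

```python
import numpy as np

def supervised_update(pref, theta_d, theta_r, b=1.0, k=1.0, alpha=0.01):
    """Shift each cell's preferred direction by alpha * (theta_d - theta_r) * a,
    so the most active cells move their tuning the most."""
    a = b + k * np.cos(theta_d - pref)            # activation of each cell
    new_pref = pref + alpha * (theta_d - theta_r) * a
    # Keep preferred directions wrapped to [-pi, pi).
    return (new_pref + np.pi) % (2 * np.pi) - np.pi
```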
Action Chooser: Action value
Action value
“Action value” is the expected cumulative sum of rewards obtained by performing a specific action.
Here, for each target, we have two action values: one for the left arm, V_L(θ), and one for the right arm, V_R(θ).
The arm selected will be the one that corresponds to the higher value.
Three types of rewards (sketched in code after the plot below):
1. Directional reward (a transformation of the directional error)
2. Reward for workspace efficiency
Use of the right arm in the right-hand-side workspace is rewarded
Use of the left arm in the left-hand-side workspace is rewarded
3. Possible learned non-use negative rewards (punishments).
[Plot: reward as a function of the directional error, from −20° to +20°]
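A sketch of the three reward terms and a simple running update of the action value; the exact curve of the directional reward, the workspace convention (0° taken as straight to the right), and the delta-rule update are assumptions, since the slides do not give their precise forms:

```python
import numpy as np

def directional_reward(error_deg, width_deg=10.0):
    """Reward decays with the absolute directional error (Gaussian shape assumed)."""
    return float(np.exp(-(error_deg / width_deg) ** 2))

def workspace_reward(target_deg, arm):
    """Reward using the right arm for right-side targets, and vice versa."""
    target_on_right = -90.0 < target_deg < 90.0
    return 1.0 if (arm == "right") == target_on_right else 0.0

def update_action_value(v, reward, alpha=0.1):
    """Running estimate of the expected reward (simple delta rule, assumed)."""
    return v + alpha * (reward - v)
```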
Action Chooser: Probabilistic
selection
Based on the action values, the arm used to generate the movement is selected probabilistically.
The probabilistic formulation implies competition between the two arms.
P(R) = 1 / (1 + exp(−g · (V(R) − V(L))))
V(R): action value of the right arm
V(L): action value of the left arm
g: a parameter for sharpness
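A direct translation of this selection rule (the sharpness g = 5 is an illustrative value):

```python
import numpy as np

def p_right(v_right, v_left, g=5.0):
    """P(R) = 1 / (1 + exp(-g * (V(R) - V(L))))."""
    return 1.0 / (1.0 + np.exp(-g * (v_right - v_left)))

def choose_arm(v_right, v_left, rng, g=5.0):
    """Sample which arm is used on this trial."""
    return "right" if rng.random() < p_right(v_right, v_left, g) else "left"
```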
Spontaneous hand use (probability of choosing the right hand)
[Plot: spontaneous hand use of the right hand as a function of the action-value difference V(R) − V(L), ranging from −1 to 1]
Results
CIT (constraint-induced therapy) retunes preferred directions
Spontaneous hand use improves
[Figure: redistribution of preferred directions across the affected range, shown initially, after stroke, and after rehabilitation]
[Figures: free-choice condition vs. rehabilitation condition; efficacy and efficiency]
Future work
Model “learned non-use” by modeling “expected failures”
(add negative rewards).
Motor cortex model
More realistic lesions
Unsupervised learning to account for spontaneous recovery
Mapping the direction coding to the muscle coding
Experiments with stroke subjects
using the new VR system
Updating the model parameters based on real data
Acknowledgements
Dr. Arbib
Dr. Schweighofer
Dr. Winstein
Jimmy Bonaiuto