Engineering Learning in Unicellular Organisms

Design for a Genetic System Capable of Hebbian Learning
Chrisantha Fernando
Systems Biology Centre
Birmingham University
January 2006
Hebbian Learning

Donald Hebb (1949): “Let us assume that the persistence or repetition of a reverberatory activity (or "trace") tends to induce lasting cellular changes that add to its stability. When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased.”
Long-term potentiation (LTP) in neurons was subsequently demonstrated, along with long-term depression (LTD).
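In rate-based form the rule is simply Δw = η · pre · post, usually with a decay term so the weight stays bounded. A minimal sketch in Python (the learning rate, decay value and activity levels are illustrative assumptions, not numbers from the talk):

```python
def hebbian_update(w, pre, post, eta=0.01, decay=0.001):
    """One Hebbian step: the weight grows when pre- and post-synaptic
    activity coincide; a small decay keeps it from growing without bound."""
    return w + eta * pre * post - decay * w

# Pairing pre- and post-synaptic activity strengthens the connection.
w = 0.1
for _ in range(100):
    w = hebbian_update(w, pre=1.0, post=1.0)
print(round(w, 3))  # noticeably larger than the initial 0.1
```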
Hebbian Learning can implement:
 Associative learning, e.g. LTP in the hippocampus.
 Auto-associative memory, e.g. cerebellar motor memories (see the sketch after this list).
 Self-organized map formation, e.g. ocular dominance columns.
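For the auto-associative case, the classic construction is a Hopfield-style network whose weights are set by the Hebbian outer-product rule; a small sketch for illustration (the stored pattern and cue are arbitrary, not from the talk):

```python
import numpy as np

# Store one +/-1 pattern with the Hebbian outer-product rule.
pattern = np.array([1, -1, 1, 1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

# Recall from a corrupted cue by repeated thresholding.
cue = np.array([1, 1, 1, 1, -1])   # one bit flipped
state = cue.copy()
for _ in range(5):
    state = np.sign(W @ state)
print(state)                        # recovers the stored pattern
```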
Examples of Hebbian Learning
Associative learning:
Classical Conditioning
UCS = Collision
CS = Proximity
Hebbian Learning in Rabbits
Self-Organized Maps
http://web.bryant.edu/~bblais/pdf/job_talk/sld033.htm
How is Hebbian Learning Implemented in Brains?
[Figure: a presynaptic (Pre) and postsynaptic (Post) neuron joined by a modifiable (+/-) synapse.]
What about Learning in Single Cells?
 No conclusive evidence that unicellular organisms can do associative learning on their own.
 Bacteria can habituate and be sensitized, but they have not been shown to undertake associative learning.
[Figure labels: Shock, Vibration.]
Slower-Replicating Eukaryotes in Complex Environments Might Benefit from Lifetime Learning
 Fine-tuning of obstacle avoidance/motion.
 Learn that chemical A (or a spatio-temporal pattern of A) is associated with food or harm, rather than chemical B or C or D…
 Achieve robust development in the face of lifetime noise due to external perturbations and internal (mutational) perturbations.
Intra-cellular Molecular Networks

Dennis Bray (Nature 2003): Neural networks have a remarkable ability to learn different patterns of inputs by changing the strengths of their connections (10). They are widely used in a variety of tasks of machine recognition. From the standpoint of a living cell, the closest approximation to a neural network is probably found in the pathways of intracellular signals (11). Multiple receptors on the outside of a cell receive sets of stimuli from the environment and relay these through cascades of coupled molecular events to one or a number of target molecules (associated with DNA, for example, or the cytoskeleton). Because of the directed and highly interconnected nature of these reactions, the ensemble as a whole should perform many of the functions commonly seen in neural networks. Thus, in aggregate, the signaling pathways of a cell are capable of recognizing sets of inputs and responding appropriately, with their connection "strengths" having been selected during evolution.
Eric Winfree and Hopfield view gene transcription networks as continuous-time recurrent neural networks (CTRNNs); see the next slide.
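In the CTRNN picture, each gene product y_i relaxes toward a sigmoidal function of its weighted regulatory inputs: dy/dt = (-y + σ(Wy + I)) / τ. A sketch of that reading with Euler integration (the weights, inputs and time constants below are illustrative assumptions, not values from Winfree and Hopfield):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ctrnn_step(y, W, I, tau, dt=0.01):
    """Euler step of dy/dt = (-y + sigmoid(W @ y + I)) / tau,
    reading y as gene product levels and W as regulatory strengths."""
    return y + dt * (-y + sigmoid(W @ y + I)) / tau

# Two mutually repressing genes (a toggle-switch-like motif).
W = np.array([[0.0, -5.0],
              [-5.0, 0.0]])
I = np.array([3.0, 2.0])        # external/basal drive
tau = np.array([1.0, 1.0])
y = np.array([0.5, 0.5])
for _ in range(2000):
    y = ctrnn_step(y, W, I, tau)
print(y.round(3))               # gene 1 wins, gene 2 is repressed
```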
Incomplete promoter region
The sigmoidal activation function is sequence-programmable but hard-wired.
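That hard-wired sigmoid is conventionally modelled as a Hill function of transcription-factor concentration, with the threshold K and Hill coefficient n fixed by the promoter sequence rather than by any learning process (the particular values below are illustrative):

```python
def hill_activation(tf, K=1.0, n=2):
    """Sigmoidal promoter activation: fraction of maximal transcription
    as a function of transcription-factor concentration. K and n are
    set by the promoter/TF pair, i.e. 'hard-wired'."""
    return tf**n / (K**n + tf**n)

for tf in (0.1, 0.5, 1.0, 2.0, 10.0):
    print(tf, round(hill_activation(tf), 3))
```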
The New RNA World
 John Mattick, Sean Eddy and others are investigating RNA networks in eukaryotes.
 98% of the RNA in human cells is intronic or otherwise non-coding.
 On chromosomes 21 and 22, roughly 10x more RNA is non-coding than exonic.
 Many types of novel RNA capable of regulating gene expression have been discovered.
 This may eventually explain the C-value paradox.
What if RNA networks are really like little neural networks?
 Current neural-network metaphors for cellular networks have not included plasticity.
 How difficult would it be to design (or evolve) a network capable of Hebbian learning?
 What sorts of tasks could an intra-cellular Hebbian learning mechanism solve?
[Reaction-scheme diagram: a gene with its promoter produces the RNA outputs RNAu1 and RNAu2; transcription factors u1 and u2 bind activated weight species W1* and W2* (u1 + W1* ⇌ u1W1*, u2 + W2* ⇌ u2W2*); an inactive weight pool W is interconverted with W1*/W2* with rate constants k1 and k2, in reactions also involving the RNA output and u1 or u2 (RNA + u1, RNA + u2). Numerical parameters shown: 10, 10, 0.005.]
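A toy simulation of the idea the diagram conveys: each TF u_i drives its RNA output in proportion to an activated weight species W_i*, and weight activation requires coincidence of the TF with the RNA output (the Hebbian step). All rate constants and the saturation term below are illustrative assumptions, not the talk's actual kinetics:

```python
import numpy as np

def step(state, u1, u2, dt=0.01):
    """Euler step of a toy weight/transcription model.
    W1a, W2a: activated fractions of the weight species gating how strongly
    u1, u2 drive transcription of RNA1, RNA2. Activation is driven by
    coincidence of the TF with RNA output (a pre*post term) and decays
    slowly; the (1 - W) factor keeps the fraction in [0, 1]."""
    W1a, W2a, R1, R2 = state
    dW1a = 0.05 * u1 * (R1 + R2) * (1 - W1a) - 0.005 * W1a
    dW2a = 0.05 * u2 * (R1 + R2) * (1 - W2a) - 0.005 * W2a
    dR1 = 1.0 * u1 * W1a - 0.1 * R1
    dR2 = 1.0 * u2 * W2a - 0.1 * R2
    return state + dt * np.array([dW1a, dW2a, dR1, dR2])

state = np.array([1.0, 0.1, 0.0, 0.0])  # W1* strong, W2* weak at start
for _ in range(5000):                   # paired presentation of u1 and u2
    state = step(state, u1=1.0, u2=1.0)
print(state.round(3))                   # W2* has grown toward W1*
```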
Experiment 1: Sensitization
[Plot: concentrations of TF1, TF2, RNATF1 and RNATF2 against time, with annotations "Stimulate with unit TF1" and "Stimulate with unit TF2".]
‘Weight’ Change in Experiment 1
[Plot: concentrations of W1* and W2* against time during Experiment 1.]
Associative Learning
 The UCS (e.g. glucose) stimulates TF1, which binds strongly to the promoter and produces RNA (innately).
 The CS (e.g. potassium permanganate, NO, or something else) stimulates TF2, but these TFs bind only very weakly to the promoter.
 Paired exposure to the UCS and CS results in strengthening of TF2 binding to the promoter, and hence a response to the ‘smell’ associated with glucose (sketched below).
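A behaviour-level sketch of that protocol; in the genetic design the ‘weight’ is the concentration of activated W2* controlling how strongly TF2 acts at the promoter, while the update rule and saturation used here are illustrative assumptions:

```python
def respond(w_ucs, w_cs, ucs, cs):
    """Response (e.g. RNA output) driven through innate and learned weights."""
    return w_ucs * ucs + w_cs * cs

# Innately, only the UCS (glucose -> TF1) drives a response.
w_ucs, w_cs = 1.0, 0.0
print("CS alone, before pairing:", respond(w_ucs, w_cs, ucs=0.0, cs=1.0))

# Paired UCS+CS trials strengthen the CS weight: the CS input is active
# while the UCS is already producing the response (a Hebbian coincidence).
eta = 0.1
for _ in range(30):
    r = respond(w_ucs, w_cs, ucs=1.0, cs=1.0)
    w_cs = min(w_cs + eta * 1.0 * r, 1.0)   # cs activity * response, capped

print("CS alone, after pairing:", respond(w_ucs, w_cs, ucs=0.0, cs=1.0))
```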
Self-Organized Maps
 Two TFs act on two genes.
 Now add lateral inhibition.
 Each gene's ‘receptive field’ changes.
 Gene A represents the TF1 level.
 Gene B represents the TF2 level (see the sketch below).
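A competitive-learning sketch of what lateral inhibition buys: two genes read two TFs through a small weight matrix, inhibit each other, and the winner's weights drift toward the current input, so each gene's ‘receptive field’ specialises on one TF. The settling loop, learning rate and initial weights are illustrative assumptions, not the talk's model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
W = rng.uniform(0.4, 0.6, size=(2, 2))   # gene x TF weights
inhibition = np.array([[0.0, -2.0],
                       [-2.0, 0.0]])     # mutual (lateral) inhibition
eta = 0.1

patterns = [np.array([1.0, 0.0]),        # mostly TF1
            np.array([0.0, 1.0])]        # mostly TF2
for _ in range(200):
    x = patterns[rng.integers(2)]
    a = sigmoid(W @ x)
    for _ in range(10):                  # settle under lateral inhibition
        a = sigmoid(W @ x + inhibition @ a)
    winner = np.argmax(a)
    W[winner] += eta * (x - W[winner])   # winner's receptive field tracks x

print(W.round(2))                        # each gene ends up tuned to one TF
```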
Supervised Learning: Training a Gene Perceptron
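A conventional perceptron with a delta-rule update, as a stand-in for the idea of supervising a gene's response: transcription switches on when the weighted TF input crosses a threshold, and a teacher signal nudges the weights. The AND task and the training rule are illustrative, not the talk's chemistry:

```python
import numpy as np

def output(w, b, x):
    """'Gene' output: transcription on iff weighted TF input crosses threshold."""
    return 1.0 if w @ x + b > 0 else 0.0

# Train the gene to respond only when TF1 AND TF2 are both present.
data = [(np.array([0., 0.]), 0.0), (np.array([0., 1.]), 0.0),
        (np.array([1., 0.]), 0.0), (np.array([1., 1.]), 1.0)]
w, b, eta = np.zeros(2), 0.0, 0.1
for _ in range(50):
    for x, target in data:
        err = target - output(w, b, x)
        w += eta * err * x
        b += eta * err

print([output(w, b, x) for x, _ in data])  # [0.0, 0.0, 0.0, 1.0]
```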
Applications
 Engineering cells as biosensors in complex environments: put the cell in, then measure the W* levels to read out the recorded weights.
 Training a cell to produce a protein under the required conditions, without having to hard-wire the promoter-TF interaction perfectly.
Thanks to
 Dov Stekel
 Jon Rowe
 Bruce Shapiro
 Sally Milwidsky