Integrate and Fire Neural Network

Conceptual Chip Design
Second draft March 27, 2006
Computational Approaches to Cortical Functions
The Banbury Center
Robert Shapiro: Cape Visions and Global 360
Al Davis: School of Computing, University of Utah
Objective
• Explore the possibility of using custom VLSI* chip assembly to aid in simulating large Integrate and Fire Neural Networks
– between 10^5 and 10^6 neurons
– Current and conductance models
• Possibly exponential models
– Setup and analysis controlled by a conventional digital workstation
* Very Large Scale Integrated Circuit
Challenge
• Custom chips offer significant speed and
capacity benefits but sacrifice flexibility
– Can the user community determine the modeling requirements now: current-based, conductance-based, exponential, etc.?
– Can we determine accuracy requirements?
– Can we determine ???
• Expected lifetime ???
Design Decisions
• Maximize parallelism and circuit homogeneity
– parallelism both intra- and inter-chip
– Building blocks: two chips (first design)
• System needs to be extensible
– # of chips determines the size of the system
• Ideal performance
– roughly scale linearly with # of chips
• will be sublinear
– higher-stage ripple for the final neural excitation contribution
– longer wires in the array will lead to longer axon excitation delay
• This is a digital system for doing IAF simulations, not an
analog chip for simulating neurons.
– Speed improved by a factor of 10^3 over conventional digital
One NPU and one SPU
[Diagram: dataflow from the synapses contributing to a single neuron, through one SPU into one NPU, and out on the output axon]

100 NPUs and 100 x 100 array of SPUs
[Diagram: 100 NPUs fed by a 100 x 100 array of SPUs]
Computation for a single Neuron in NPU
[Flow chart; node labels: start trigger; compute gex, gin; compute Vinf, V; compute spike; refractory period?; broadcast spike; start Sw; start SSw; done with spike; done with all spikes; spikes collector; increment time. Annotations: time step size??; how many different time constants??]
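The per-neuron control flow on this slide can be expressed as a small simulation loop. A minimal Python sketch follows; all parameter values (dt, time constants, thresholds, refractory length) are illustrative assumptions, not part of the design, and the membrane update uses the current-based form for brevity.

```python
import math

# Illustrative assumptions -- the slides leave these as open questions.
dt = 0.1                           # time step size (ms)
tau = 20.0                         # membrane time constant (ms)
v_rest, v_thresh, v_reset = -70.0, -54.0, -70.0
refractory_steps = int(5.0 / dt)   # 5 ms refractory period, assumed

def neuron_step(v, g_ex, g_in, ssw_ex, ssw_in, refr):
    """One time step: decay conductances, update V, return a spike flag."""
    # 'Compute gex, gin' (see the Excitatory Synapse Effect slide)
    g_ex = (g_ex + ssw_ex) * math.exp(-dt / 5.0)    # AMPA tau assumed 5 ms
    g_in = (g_in + ssw_in) * math.exp(-dt / 10.0)   # GABA tau assumed 10 ms
    if refr > 0:                   # 'Refractory period?' -- hold at reset
        return v_reset, g_ex, g_in, refr - 1, False
    # 'Compute Vinf, V' -- current-based form for brevity
    v_inf = v_rest + (g_ex - g_in)
    v = v_inf + (v - v_inf) * math.exp(-dt / tau)
    if v >= v_thresh:              # 'Compute spike' -> broadcast on output axon
        return v_reset, g_ex, g_in, refractory_steps, True
    return v, g_ex, g_in, 0, False
```

Driving this step with a steady excitatory input makes the neuron charge toward threshold, fire, and re-enter the loop after its refractory period.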
Computation for a single Synapse in SPU
[Diagram: from action potential spike to synapse. A synapse cell holds C (clock pulse), W (weight) and S (synapse) registers, receives the axon input A and the Start Sw signal from the NPU after a transmission delay, and emits Wout. Annotation: synaptic plasticity (habituation, modulation and learning).]

Synapse Column
[Diagram: a column of C/W/S synapse cells on shared axon lines. Selected weights (w) are shifted along the column (Sw, SSw signals) into the NPU, which computes gex, gin, dG and V from the parameters (q, p) after the start signal.]
Major Missing Pieces
• How should inputs be handled?
– Membrane Potential of specific neurons set at specific
times???
• How should outputs be handled?
– Spike trains must be output: each spike in a time step
characterized by neuron identifier
– What about spikes from/to other boards or devices?
• What software is required on digital workstation
controlling the IAF engine?
– Wiring and initialization
– Input generation
– Output analysis and display
Questions (so far)
• How important is it to make initialization
fast?
• What is the range of weight values at the SPUs?
– integer, fixed point, floating point?
• What is the range of firing thresholds?
– is it potentially different for each neuron?
• And many, many more
Candidate for Study
• Model of the Lateral Geniculate Nucleus
and Primary Visual Cortex
– Much is known about the wiring, allowing
study of neural network dynamics with
random connections replaced by anatomical
data.
What do you Think?
• What would this device need to do for it to be useful in your work?
Acknowledgements
• Larry Abbott
– General guidance, direction and references
• Tim Vogels
– Simulation specifics, intro to neural network
models, suggestions for this presentation
• Stefano Fusi
– Discussions about time step size, synaptic
plasticity, propagation delays
Comparison with Blue Brain
• Objective
– IAF simulator, not ‘brain’
• Approach
– Design optimized to task, not a general
purpose digital computer
Blue Brain Quotes
• For now, Markram sees the BlueGene
architecture as the best tool for modeling the
brain. Blue Brain has some 8,000 processors,
and by mapping one or two simulated brain
neurons to each processor, the computer will
become a silicon replica of 10,000 neurons.
"Then we'll interconnect them with the rules [in
software] that we've worked out about how the
brain functions," says Markram.
Synapse Processing Unit
• Actions
– initialization
• set connected and weight registers
– axon spike
• if it sees a spike and is connected, the weight is placed in the shift register; else it acts as a shunt
• forwards left inputs to right inputs
– intra-SPU add phase
• acts as a shift register element or shunt
• shifts up on shift enable
• Contents
– registers: inhibit weight, excitation weight, connected flag, shunt
• I/O's
– TBD – need to think about the programming model and need to know the range of weight values
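The SPU actions above can be sketched in software. The following Python model of one SPU cell is a hypothetical illustration: the register names mirror the slide, but the method interface and the column-summing usage are invented, not part of the design.

```python
class SPU:
    """Software sketch of one Synapse Processing Unit cell (illustrative only)."""
    def __init__(self):
        self.connected = False   # connected flag register
        self.weight = 0          # weight register
        self.shift_reg = 0       # value held for the intra-SPU add phase
        self.shunt = True        # a shunting cell just passes values through

    def initialize(self, connected, weight):
        # initialization: set connected and weight registers
        self.connected, self.weight = connected, weight

    def axon_spike(self, spike):
        # axon spike: if connected and spiking, the weight enters the
        # shift register; otherwise the cell acts as a shunt
        if spike and self.connected:
            self.shift_reg, self.shunt = self.weight, False
        else:
            self.shift_reg, self.shunt = 0, True

    def shift_up(self, value_from_below):
        # intra-SPU add phase: act as a shift-register element or shunt
        out = value_from_below if self.shunt else self.shift_reg
        self.shift_reg = value_from_below
        return out
```

Shifting a column of such cells repeatedly delivers exactly the connected, spiking weights to the NPU at the top, where they are summed.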
Neuron Processing Unit
• Actions
– initialization
• set parameters
– start cycle
– intra-NPU add phase
• adds up shifted values and places the result in the NPU-total register
• shifts up on shift enable
– inter-NPU communication
• signals completion of synapse summing and recognizes when all neurons are finished with spikes
– intra-NPU update phase
• the membrane potential is updated with the new value in the NPU-total register, and if it exceeds the firing threshold, the spike flag is set on the output axon
• this involves 2 multiplies
• when all neurons are finished with spike info, broadcast new spikes
• update time
• Contents
– registers: fire threshold, membrane potential, inhibit multiplicand and current sum, excite multiplicand and current sum
– adder, comparator, multiplier (table for exponential calculation in the conductance model)
• TBD
– number of I/O's
– use one multiplier per chip applied to the neuron row sequentially vs. a parallel multiplier per TSU with the neuron flag set
Project Time Line
ToDo
• Lots to figure out
– how many SUs will fit within area, power and I/O constraints
• circuitry is pretty trivial
• the balance of parallelism vs. a faster adder will be the trickier tradeoff
– vertical forwarding mechanism
• synapses will be sparsely connected
• take advantage of this to minimize shift register length
– figure out "done"
• vertical shift registers will vary in length in each NPU
• all NPUs must be done to move to the inter-NPU add phase
– inject done values on unconnected bottom lines
– ripple neuron TSU done values
• intra-NPU done will be sped up if early completion can be detected
– maybe an extra register on the bottom SU to issue a done token for the intra-NPU add
– ripple TSU values to emit the NPU done signal?
Excitatory Synapse Effect
gex = (gex + SSw) x expFacEx
where
expFacEx = exp(-dt / tAMPAParam)
expFacIn = exp(-dt / tGABAParam)
What about NMDA??
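This decay-and-accumulate update is easy to check numerically. A minimal Python sketch; the dt and time-constant values are illustrative assumptions, not taken from the slides.

```python
import math

dt = 0.1            # time step (ms) -- assumed
tAMPAParam = 5.0    # excitatory time constant (ms) -- assumed
tGABAParam = 10.0   # inhibitory time constant (ms) -- assumed

expFacEx = math.exp(-dt / tAMPAParam)
expFacIn = math.exp(-dt / tGABAParam)

def update_gex(gex, SSw):
    """Add the summed synaptic weights SSw, then decay exponentially."""
    return (gex + SSw) * expFacEx

# With no input, gex decays by a factor of expFacEx each step,
# so after n steps it equals exp(-n * dt / tAMPAParam).
gex = 1.0
for _ in range(10):
    gex = update_gex(gex, 0.0)
```

The same update with expFacIn gives the inhibitory conductance gin.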
Membrane Potential Change
Current model:
VInf = VRest + gain x (gEx - gIn + input + Theta)
Conductance model:
gTot = gLeak + gEx + gIn
VInf = (gLeak x VRest + gEx x EAMPA + gIn x EGABA + iMag x iExt) / gTot
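The two forms of VInf can be compared side by side. A Python sketch; every numeric value used below is a placeholder for illustration.

```python
def v_inf_current(VRest, gain, gEx, gIn, input_, Theta):
    # current-based model: conductances act as injected currents
    return VRest + gain * (gEx - gIn + input_ + Theta)

def v_inf_conductance(gLeak, VRest, gEx, EAMPA, gIn, EGABA, iMag, iExt):
    # conductance-based model: a conductance-weighted average of the
    # reversal potentials, plus the scaled external current
    gTot = gLeak + gEx + gIn
    return (gLeak * VRest + gEx * EAMPA + gIn * EGABA + iMag * iExt) / gTot
```

With gEx = gIn = 0 and no external current, the conductance form reduces to VRest, a quick sanity check on the formula.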
New Membrane Potential
V = VInf + (V - VInf) x expV, where expV = exp(-dt / tau)
For the conductance model:
V = VInf + (V - VInf) x exp((-dt / tau) x gTot)
Since gTot is not a constant, this exponential must be computed: use table lookup to approximate.
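The proposed table lookup can be prototyped in software: precompute the exponential at fixed gTot sample points and interpolate between them. The table size, gTot range, and dt/tau values below are all assumptions for illustration.

```python
import math

dt, tau = 0.1, 20.0
G_MAX, N = 64.0, 1024   # assumed: table covers gTot in [0, G_MAX] with N intervals
TABLE = [math.exp((-dt / tau) * (i * G_MAX / N)) for i in range(N + 1)]

def exp_lookup(gTot):
    """Approximate exp((-dt/tau) * gTot) by linear interpolation in TABLE."""
    x = gTot * N / G_MAX
    i = min(int(x), N - 1)
    frac = x - i
    return TABLE[i] * (1 - frac) + TABLE[i + 1] * frac

def new_v(V, VInf, gTot):
    """Conductance-model membrane update using the table approximation."""
    return VInf + (V - VInf) * exp_lookup(gTot)
```

With this table spacing the interpolation error is far below the likely precision of the fixed-point hardware, so a much smaller table would also serve.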
Computation for a single Synapse in SPU
[Diagram: a synapse cell with inputs C (clock pulse), W (weight), S (synapse) and A (axon), and outputs Wout and Swout feeding the shift-and-add chain]
Wout = (S == 1 and A == 1) ? W : 0
Sources
• Books
– The Computational Brain, Churchland and Sejnowski,
1992
– Essentials of Neural Science and Behavior, edited by
Kandel, Schwartz and Jessell, 1995
– Pulsed Neural Networks, edited by Maass and
Bishop, 1999
– Principles of Neural Science, edited by Kandel,
Schwartz and Jessell, 2000
– Theoretical Neuroscience, Dayan and Abbott, 2001
– Spiking Neuron Models, Gerstner and Kistler, 2002
Sources
• Articles
– Pyramidal cell communication within local networks in layer 2/3 of rat neocortex; Holmgren, Harkany, Svennenfors and Zilberter; J Physiol (2003), 551.1, pp. 139–153
– Activity dynamics and propagation of synchronous spiking in locally connected random networks; Mehring, Hehl, Kubo, Diesmann and Aertsen; Biol. Cybern. 88, 395–408, 2003
– Mexican hats and pinwheels in visual cortex; Kang, Shelley and Sompolinsky; PNAS, March 4, 2003, vol. 100, no. 5, pp. 2848–2853
– An egalitarian network model for the emergence of simple and complex cells in visual cortex; Tao, Shelley, McLaughlin and Shapley; PNAS, January 6, 2004, vol. 101, no. 1, pp. 366–371
– Distributed High-Connectivity Network Simulation; A. Morrison, C. Mehring, T. Geisel, A. Aertsen and M. Diesmann; Neural Computation 17, 1776–1801, 2005
– Adaptive Exponential Integrate-and-Fire Model as an Effective Description of Neuronal Activity; Romain Brette and Wulfram Gerstner; J Neurophysiol 94: 3637–3642, 2005
– Neural Network Dynamics; Tim P. Vogels, Kanaka Rajan and L.F. Abbott; 2005
– Signal Propagation and Logic Gating in Networks of Integrate-and-Fire Neurons; Vogels and Abbott; Journal of Neuroscience, 2005
– Geometric and functional organization of cortical circuits; Shepherd, Stepanyants, Bureau, Chklovskii and Svoboda; Nature Neuroscience, June 2005
– Highly Nonrandom Features of Synaptic Connectivity in Local Cortical Circuits; Song, Sjostrom, Reigl, Nelson and Chklovskii; PLoS Biology, March 2005, Volume 3, Issue 3, e68
– Excitatory cortical neurons form fine-scale functional networks; Yoshimura, Dantzker and Callaway; Nature, Vol 433, 24 February 2005