Diplomarbeit
Mathematisch-Naturwissenschaftliche Fakultät I
Institut für Biologie

Diploma thesis for the acquisition of the academic degree Diplom-Biophysiker

Automated Optimization of a
Reduced Layer 5 Pyramidal Cell Model
Based on Experimental Data

Submitted by: Armin Bahl
Matriculation number: 504502
Born 3 June 1983 in Berlin
Status of the work: 30 August 2009

1st reviewer: Prof. Dr. Andreas Herz¹
2nd reviewer: Prof. Dr. Hanspeter Herzel²
1st supervisor: Dr. Arnd Roth³
2nd supervisor: Dr. Martin Stemmler¹

¹ BCCN Munich, Ludwig-Maximilians-Univ. Munich, Munich D-80539, Germany
² Humboldt-Univ. Berlin, Institute for Theoretical Biology, Invalidenstraße 43, 10115 Berlin, Germany
³ Wolfson Institute for Biomedical Research, University College London, London WC1E 6BT, UK
Deutsche Zusammenfassung

Accurate models of neocortical pyramidal cells are needed to perform realistic simulations of information processing in cortical circuits, yet suitable models are lacking in the literature. Common biophysical models of neurons are either too complex, their parameters are difficult to constrain, and they therefore make only qualitatively correct statements; or they are too simple and address only one specific biophysical question.

This thesis describes a systematic approach for automatically constructing a reduced compartmental model of a layer 5 neocortical pyramidal cell. The model is optimized on the basis of experimental data, and the resulting model reproduces the behaviour of pyramidal cells with quantitative accuracy.

To find a suitable geometry for our model, we first present a novel method for simplifying a realistic morphology while changing the passive response properties of the cell as little as possible. To estimate the ionic conductances in the next step, we had previously proposed using the data of a recently published study (Bekkers & Häusser, 2007), in which the dendrites were physically occluded ("pinching"), as target data for a stepwise "parameter peeling" (Roth & Bahl, 2009). Here we present a comparable approach, but use a multi-objective optimization strategy (Deb et al., 2002; Druckmann et al., 2007) to estimate the 18 free active and passive model parameters in a single optimization run.

Our model reproduces several of the experimental recordings and generalizes. We then use the automatically generated model to investigate the influence of the dendritic conductances on the resting potential and on the shape of backpropagating action potentials. We also find that backpropagating action potentials can modulate the somatic afterhyperpolarization and that our model reproduces the rapid somatic action potential onset. These results agree well with experimental studies. To our knowledge, the resulting conductance-based compartmental model of a pyramidal cell is the first in which several cellular regions have been fitted to experimental data simultaneously and automatically.
Abstract
Accurate models of pyramidal neurons are urgently needed to perform network simulations of cortical information processing, but models of this type are currently lacking in the literature. Existing models are either too complex and hard to constrain, giving only qualitative results, or too simple, focusing on one specific biophysical question. In this study we present a systematic approach to automatically create a reduced model of a layer 5 pyramidal neuron based on experimental data. The model reproduces the response properties of pyramidal neurons in a quantitatively accurate manner.
To obtain a reasonable geometry for our model, we first present a novel approach to simplify a realistic morphology while maintaining the neuron's passive response properties. To estimate the ionic conductances in our model, we have previously suggested using the data from a recent publication (Bekkers & Häusser, 2007), in which the dendrites were physically occluded ("pinching"), as a target data set for a stepwise "parameter-peeling" optimization (Roth & Bahl, 2009). Here we present a comparable approach, but we use a multi-objective optimization strategy (Deb et al., 2002; Druckmann et al., 2007) to estimate the 18 free active and passive model parameters in a single optimization run.
Our model is able to reproduce several experimental recordings and also generalizes. We use the automatically constrained neuron model to study the influence of the dendritic conductances on the resting potential and the shape of backpropagating action potentials. We show how backpropagating action potentials modulate the somatic afterhyperpolarization, and that the model can reproduce the sharp somatic action potential onset. These modelling results are in acceptable agreement with experimental findings. To our knowledge, the resulting conductance-based multi-compartment model of a pyramidal neuron is the first neuron model that has been automatically optimized to experimental data in several cellular regions at once.
Acknowledgments
The work described in this diploma thesis was done in two different laboratories. I started with the first part in the group of Michael Häusser (UCL, London) and finished my work in the group of Andreas Herz (LMU, Munich). Both places and groups were very stimulating, and it was very interesting to compare how neuroscientific questions are asked by experimentalists and by theoreticians.
First and foremost I want to thank Andreas Herz, who, almost three years ago, offered me the opportunity to join his group when he was still in Berlin. I thank him for letting me explore the field of neuroscience and for showing me how joyful scientific work can be. I want to thank Martin Stemmler for the many important questions and suggestions that helped me adjust my focus.
During my time in Berlin I developed a special interest in the detailed biophysics of neuronal information processing and decided to go to London to get closer to experimental data. The following months were highly productive, and I am thankful to Arnd Roth for making that exchange possible, for his many brilliant ideas and for his supervision.
I am grateful to John Bekkers for providing the experimental data, to Shaul Druckmann, Idan Segev, Michael London and Hermann Cuntz for discussions in the very early
phase of this project and to Arnd Roth and Martin Stemmler for many very helpful comments on this manuscript.
I want to thank Andreas Herz and Arnd Roth for organizing the financial support
during the whole period.
Finally I want to thank all the many colleagues, friends and my family, who enriched my time whenever I was not thinking about this thesis.
Contents

Contents
List of Figures
List of Tables
List of Abbreviations

1 Introduction
  1.1 Layer 5 Pyramidal Neurons
  1.2 Compartmental Modelling of Neurons
      1.2.1 Membrane Properties
      1.2.2 Single-Compartment Models
      1.2.3 Multi-Compartment Models
  1.3 Geometry Reduction
  1.4 Problems Constraining Neuron Models
      1.4.1 Experimental Uncertainties
      1.4.2 Constraining Parameters by Hand
  1.5 Automatic Fitting Strategies
      1.5.1 A Brief Review of Earlier Studies

2 Methods
  2.1 Experiment
  2.2 High-Resolution Alignment of APs
  2.3 Modelling in NEURON
      2.3.1 Using Python to Control NEURON
  2.4 Multi-Objective Optimization using EAs
      2.4.1 Evolutionary Algorithms
      2.4.2 Multi-Objective Sorting
      2.4.3 Parallelization

3 The Cell Model
  3.1 Neuronal Morphology
      3.1.1 Geometry Reduction
      3.1.2 Axon Geometry
      3.1.3 Segmentation
  3.2 Ion Channel Kinetics and Distribution
      3.2.1 Hyperpolarization-Activated Cation Channel
      3.2.2 Transient Sodium Channel
      3.2.3 Fast Potassium Channel
      3.2.4 Slow Potassium Channel
      3.2.5 Persistent Sodium Channel
      3.2.6 Muscarinic Potassium Channel
  3.3 Defining the Static and the Free Parameters

4 Results
  4.1 Experimental Data
  4.2 Fitting Strategy
      4.2.1 Checking Response Properties
      4.2.2 Distance Functions
      4.2.3 Combining Intact and Pinching Data
      4.2.4 Selection of the Optimal Solution
  4.3 Fitting Results
      4.3.1 Surrogate Data Optimization
      4.3.2 Experimental Data Optimization
      4.3.3 Generalization for Other Input Currents
  4.4 Model Evaluation
      4.4.1 Resting Potential
      4.4.2 AP Backpropagation
      4.4.3 Currents Shaping the Somatic AP Waveform

5 Discussion
  5.1 Neuronal Geometry
      5.1.1 Geometry Reduction
      5.1.2 Passive Influence of the Basal Dendrite
      5.1.3 Axonal Geometry
      5.1.4 Segmentation
  5.2 Ion Channel Composition
      5.2.1 Choice of Ion Channel Models
      5.2.2 Ion Channel Distribution
  5.3 Fitting Results
      5.3.1 Choosing the Free Parameters
      5.3.2 Surrogate Data Optimization
      5.3.3 Experimental Data Optimization
      5.3.4 AP Initiation
      5.3.5 Effects of Pinching
  5.4 Model Evaluation
      5.4.1 Resting Potential
      5.4.2 Rapid AP Onset
      5.4.3 AP Backpropagation
      5.4.4 Currents Shaping the Somatic AP Waveform
  5.5 Outlook

References
List of Figures

1.1  Golgi Staining and Some of Cajal's Drawings
1.2  The Neuron as an RC-Circuit
1.3  An RC-Circuit with Hodgkin-Huxley Like Voltage-Dependent Conductances
1.4  Circuit Representation of a Three-Compartment Model
1.5  Comparison of the Squared-Distance Measure with LeMasson and Maex's Distance Measure
2.1  High-Resolution Alignment of APs
2.2  Example NEURON-Python Simulation Results
2.3  Illustration of a Simple Two-Objective Optimization Problem
2.4  Flowchart of the Working Principle of an EA
2.5  Visualization of the Crossover and Mutation Operators
2.6  Illustration of the Ranking Concept using Pareto Fronts and the Crowding Distance
3.1  Morphology and Passive Properties for the Complex and the Reduced Model
3.2  Comparison of the Voltage Traces in the Complex and Reduced Model in Response to Noisy Input Current
3.3  Geometry of the Axon for the Reduced Model
3.4  Ion Channel Gating Particles Used in This Study
4.1  Experimental Recordings before and after Pinching
4.2  Best Solution of the Initial Random Population before the Surrogate Data Optimization, Trial 1
4.3  Best Solution after the Surrogate Data Optimization, Trial 1
4.4  Evolution of the Four Objective-Distance Functions and of the Total-Error Value during the Surrogate Data Optimization, Trial 1
4.5  Best Solution after the Surrogate Data Optimization, Trial 2
4.6  Best Solution of the Initial Random Population before the Experimental Data Optimization, Trial 1
4.7  Best Solution after the Experimental Data Optimization, Trial 1
4.8  Evolution of the Four Objective-Distance Functions and of the Total-Error Value during the Experimental Data Optimization, Trial 1
4.9  Best Solution after the Experimental Data Optimization, Trial 2
4.10 Best Solution after the Experimental Data Optimization, Trial 3
4.11 Model Prediction of Firing Frequency
4.12 Model Prediction of Detailed AP Shape and Spiketrain in Response to Another Input Current
4.13 Resting Potential and the Ionic Conductances as a Function of Distance to the Soma
4.14 Analysis of BAPs
4.15 Currents Shaping the Somatic AP Waveform before and after Pinching
List of Tables

3.1  Optimal Geometrical and Passive Parameters for the Reduced Model after Simplification
3.2  Free Parameters in the Reduced Model
4.1  Target Parameters and Best Parameter Combinations after the Surrogate Data Optimizations
4.2  Best Parameter Combinations after the Experimental Data Optimizations
List of Abbreviations

τmem           membrane time constant
τp             time constant for gating particle p
gbarx          maximal specific ionic conductance for ion channel x
Cm             effective membrane capacitance
cm             specific membrane capacitance
Ex             reversal potential for ion x
gpas           specific leak conductance
p∞             steady-state value for gating particle p
ra             intracellular resistivity
Rm             effective membrane resistance
rm             specific membrane resistance
tadj           temperature adjustment factor
AHP            afterhyperpolarization
AP             action potential
BAP            backpropagating action potential
EA             evolutionary algorithm
fMRI           functional magnetic resonance imaging
HCN-channel    hyperpolarization-activated cyclic nucleotide-gated cation channel
iseg           axon initial segment
Kfast-channel  fast potassium channel
Km-channel     muscarinic potassium channel
Kslow-channel  slow potassium channel
MOO            multi-objective optimization
Nap-channel    persistent sodium channel
Nat-channel    transient sodium channel
PF             Pareto front
CHAPTER 1
Introduction
Today we know that the elementary units of our brain are single metabolically distinct cells and that our brain is the main information processing center for everything we do. With these cells, the brain receives sensory information; computes, stores and accesses memory; and directs our muscles to move to accomplish our wishes. We know that the autonomic nervous system maintains the functions of our organs, although we are not consciously aware of it. All this appears natural to us today, but it was not always so clear.
Around 350 B.C., Aristotle developed his Cardiocentric Hypothesis, which states that the brain exists solely to cool the blood of the human body and that the heart is the central organ of mind and emotion. Plato and other philosophers, on the other hand, argued that the mind must reside in the brain. But only with the pioneering work of Herophilus and Erasistratus (around 300 B.C.) did detailed anatomical and functional studies reveal that mind and brain are linked. Another milestone was set by Galen (around 180 A.D.). He wrote that the brain "receives all sensations, produces images and understands thoughts", and that only with the help of rigorous anatomical methods would one be able to prove the Encephalocentric Hypothesis (Crivellato & Ribatti, 2007). Influenced by the work of the early Greek philosophers and anatomists, subsequent generations of scientists accumulated an enormous amount of knowledge about the function, dysfunction and anatomy of the brain and its parts, but it still remained unclear what the intrinsic biological mechanisms of brain function were.
At the end of the 18th century, Luigi Galvani studied the electricity in the nervous
system in dissected frog legs and thereby laid the foundation for a new science, electrophysiology (Piccolino, 1997). Emil Heinrich Du Bois-Reymond further developed the
technique. Around 1850 he found that potentials in the tissue can change rapidly when
a nerve stimulus elicits an “action” in the muscle, and thereby formed the idea of the
action potential (AP) (Pearce, 2001).
At the end of the 19th century, with advanced microscopy techniques and Camillo Golgi's new silver staining method, the Spanish physician Santiago Ramón y Cajal started to study the microscopic structure of the brain. Cajal, who originally wanted to become an artist, created hundreds of fine drawings of the stained material that are still admired today (Fig. 1.1). During his rigorous work on different brain tissues, he realized that the structures he was drawing were separate units interconnected within the large network of the nervous system.

Figure 1.1: Golgi Staining and Some of Cajal's Drawings. a) Photomicrographs from Cajal's preparations of the cerebral cortex showing neurons impregnated by the Golgi stain. b) Cajal's drawing of the superficial layers of the human frontal cortex shows the characteristic morphology of pyramidal neurons. c) Drawing of the cerebellar cortex with a detailed view of Purkinje cells (A).¹

This result was the proof of the "neuron doctrine" formulated by Wilhelm Waldeyer in 1891, and the single units were finally named neurons (Greek: string, wire). Moreover, Cajal formulated "The Law of Dynamic Polarization", stating that neurons are polarized, receiving information on their cell bodies and dendrites (Greek: tree) and transmitting information to other distant neurons through axons (Greek: axis).¹
Finally, Julius Bernstein realized that the electrical phenomena in all living tissue are a property of membrane currents, and that APs, too, have an ionic basis and can be explained by a rapid change in membrane conductance (Bernstein, 1906).
¹ Life and Discoveries of Santiago Ramón y Cajal: http://nobelprize.org/nobel_prizes/medicine/articles/cajal/index.html

In the 1950s, Alan Lloyd Hodgkin and Andrew Fielding Huxley quantitatively measured the membrane currents during APs in the giant axon of the squid Loligo. With a new technique called space clamp, they kept the potential along the entire axon spatially uniform. By voltage clamping the membrane at different potentials and using pharmacological agents to block various currents, Hodgkin and Huxley were able to dissect the membrane current into its constitutive components. The detailed analysis of these currents led Hodgkin and Huxley to a simple mathematical model with fictive "gating particles" that explained the generation of APs with remarkable accuracy (Hodgkin & Huxley, 1952a,b).
In the 1970s, Erwin Neher and Bert Sakmann were able to isolate the gating mechanisms predicted by Hodgkin and Huxley. They carried out further voltage-clamp studies with fine electrodes on small patches of membrane. Interestingly, the currents in these membrane patches were not constant for a given voltage; their amplitude was random and quantized. The results of these patch-clamp experiments led Neher and Sakmann to the conclusion that in tiny pieces of membrane of only a few µm² there were only a few pores that were either open or closed, which proved the existence of stochastic ion channels (Sakmann & Neher, 1984).
Today we know that the nervous system is formed of more than 10¹⁰ densely packed neurons connected in an intricate network. We can admire the geometry and the beauty of neurons in a detail Cajal could only dream of. Thanks to atomic-resolution crystal structures and fluorescence distance measurements, we have understood the molecular details of ion channels: how they conduct ions, and why they are selective and voltage-dependent. We know that dendritic spines can appear and disappear during learning and that synaptic connections can become stronger or weaker. We know that the Hodgkin-Huxley model can be used to explain currents in different parts of the neuron and in different brain areas, also in mammals. We can also observe the activity of larger brain areas with advanced imaging techniques, and with functional magnetic resonance imaging (fMRI) even the activity of the whole brain during human behaviour.
All these wonderful experimental discoveries of the last century have led to a very detailed picture of our brain and the neuron, but still we have only vague ideas about the details of neural computation. It is the task of theoreticians and of today's philosophers to put this knowledge together and to extract the fundamental principles of how our brain works!

One way to understand brain function is a bottom-up approach: understanding and modelling the elementary unit of our brain, the single neuron. If we can describe the function of a single neuron with a mathematical model, we can create artificial networks of neurons and observe the network behaviour in a detail we could never achieve in an experiment.
1.1. Layer 5 Pyramidal Neurons
Pyramidal neurons are among the best studied cells in the brain and are present in virtually all mammals (Spruston, 2008). They are found primarily in structures that are associated with advanced cognitive functions, such as the hippocampus and the cerebral cortex. Here we focus on cortical layer 5 pyramidal neurons, but the general geometrical and many response properties are similar for the other pyramidal neuron types.

Pyramidal neurons have an elaborate dendritic tree and axonal structure. The relatively short basal dendrites connect directly to the soma, whereas the large apical dendrite connects the soma with the distal tuft. The oblique dendrites emanate proximally from the main apical dendrite. The extent of the dendritic tree of layer 5 pyramidal neurons is around 1 mm in the adult rat (Fig. 3.1).

These large neurons can therefore extend their dendritic structures into different layers of the cortex to receive thousands of synaptic connections. Some inhibitory inputs specifically target the soma and axon, while most of the excitatory synaptic drive arrives via the dendrites. Distal dendritic regions receive inputs from higher cortical areas, whereas local sources of synaptic input project to the proximity of the soma.

It is thought that the geometry of pyramidal neurons might be designed for coincidence detection of inputs from the tuft and the proximal dendrites (Cauller & Connors, 1994). Alternative hypotheses suggest that synapses at the tuft could control the responsiveness to more proximal inputs (Larkum et al., 2004).

Like the soma and the axon, the dendrites are enriched with a variety of voltage-gated ion channels that have an important influence on the integration of synaptic input (Johnston et al., 1996). It was shown that for clustered inputs the dendritic tree can initiate local dendritic spikes (Ariav et al., 2003; Golding & Spruston, 1998), which could eliminate the problem of distance-dependent synaptic efficacy (Katz et al., 2009).

For mild or distributed input currents, it is thought that AP initiation occurs in the axon initial segment (iseg) and that APs can then propagate actively in two directions: down the axon, targeting other neurons, but also antidromically through the soma into the dendritic tree (Stuart & Sakmann, 1994). These backpropagating action potentials (BAPs) give rise to an interesting backward-forward "ping-pong" communication between the axonal spike initiation zone and dendritic post-synaptic locations and are thought to be an important mechanism for synaptic plasticity (Segev & London, 2000).

In order to better understand these mechanisms, their interactions and their functional network implications, one approach is to create detailed or reduced compartmental models of pyramidal neurons.
1.2. Compartmental Modelling of Neurons
In compartmental models of neurons, the membrane and its intrinsic conductances are regarded as the central elements of the model. Depending on the level of complexity and the purpose of the model, it can consist of only a single compartment or involve more than a thousand compartments.
1.2.1. Membrane Properties
The membrane of the neuron consists of a densely packed, ≈ 4 nm thick bilayer of phospholipids with embedded proteins, which makes it almost impermeable to water, big molecules or any charged ion. However, some of the membrane proteins act as "gates" or "channels" that open and close depending on voltage or upon modification by some ligand. We can define the membrane potential V as the difference between the intracellular and the extracellular potentials:

V = Vi − Ve .   (1.1)
Early experiments have shown that the membrane potential of neurons at rest (Vrest) is almost always negative and lies between −90 mV and −30 mV (Koch, 2004, p. 6). The origin of the negative resting potential lies in the differential distribution of open ion channels across the membrane and the unequal concentration of differently charged ions inside and outside the cell at rest. We can summarize all ionic conductances per membrane area at rest into a single specific leak conductance (gpas) or specific membrane resistance (rm) with:

gpas = 1 / rm .   (1.2)

Units for gpas and rm are mostly given in S/cm² and Ω·cm², respectively.
The insulating membrane also keeps charges apart and therefore acts as a capacitor. Thus the membrane potential allows the capacitance to build up a specific charge (qm) on both sides of the membrane:

qm = cm · V .   (1.3)

The specific membrane capacitance (cm) describes the capacitance per membrane area and should only depend on the biochemical composition of the membrane. Its units are mostly reported in µF/cm². Frequently used values for cm and rm lie around 1 µF/cm² (Gentet et al., 2000) and 15000 Ω·cm², respectively.
1.2.2. Single-Compartment Models
In 1907, inspired by the idea that the neural membrane can be described by simple electrical RC-circuit elements, Louis Lapicque modeled the neuron using a capacitance in parallel with a resistor and a battery to account for the resting potential Vrest (Abbott, 1999; Lapicque, 2007). Let us neglect the spatial structure of the neuron, so that we can model it as an isopotential membrane sphere with surface area A. The effective membrane resistance (Rm) and effective membrane capacitance (Cm) are then simply given by

Rm = rm / A   (1.4)

Cm = cm · A .   (1.5)
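These conversions are easy to check numerically. The sketch below computes the effective quantities for an isopotential sphere using the typical specific values quoted above; the cell diameter is an arbitrary illustrative choice, not a value from this thesis:

```python
import math

cm = 1e-6      # specific membrane capacitance [F/cm^2] (1 uF/cm^2)
rm = 15000.0   # specific membrane resistance [Ohm*cm^2]
diam = 20e-4   # sphere diameter [cm] (20 um; illustrative assumption)

A = math.pi * diam ** 2  # surface area of the sphere [cm^2]
Rm = rm / A              # effective membrane resistance [Ohm]
Cm = cm * A              # effective membrane capacitance [F]

# The membrane time constant tau_mem = Rm*Cm = rm*cm does not depend
# on the membrane area A:
tau_mem = Rm * Cm        # ~0.015 s, i.e. 15 ms
```

Note that while Rm and Cm both scale with the area, their product does not, which is why τmem is a property of the membrane itself rather than of the cell size.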
Any input current Iinj to the cell can simply be represented as a current flowing directly into the circuit (Fig. 1.2).

Figure 1.2: The Neuron as an RC-Circuit. The single compartment is modeled using elementary electrical circuit elements: a capacitance Cm, a resistor Rm and a battery Vrest. Resistance and capacitance depend on the membrane area. The capacitance determines how much charge can accumulate on both sides of the membrane for a given voltage, the resistor defines the leak conductance of the membrane at rest, while the battery maintains the resting potential. The membrane potential is defined as the difference between the potential inside (Vi) and outside (Ve) the neuron. A current Iinj can be represented as a current flowing directly into the circuit.

Using Ohm's law we can calculate the current through the resistor and the battery:
Vi = Ve − Rm · IR + Vrest   (1.6)

⇒ V = −Rm · IR + Vrest   (1.7)

⇒ IR = (Vrest − V) / Rm .   (1.8)
Whenever the membrane voltage changes, a capacitive current will flow:

IC = dQm/dt = Cm · dV/dt .   (1.9)
Due to Kirchhoff’s law of current conservation we can write down:
IC = IR + Iinj   (1.10)
and hence

Cm · dV/dt = (Vrest − V)/Rm + Iinj   (1.11)

⇒ Cm · Rm · dV/dt = −V + Vrest + Rm · Iinj .   (1.12)

Introducing the membrane time constant τmem = Rm · Cm leads to the membrane equation:

τmem · dV(t)/dt = −V(t) + Vrest + Rm · Iinj(t) .   (1.13)

This is an inhomogeneous ordinary differential equation that can easily be solved for a general input current.
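Eq. (1.13) can also be integrated numerically, which is essentially what simulators such as NEURON do for every compartment (NEURON uses implicit methods; the forward-Euler sketch below is only for illustration, and all parameter values are illustrative assumptions in the ranges quoted above):

```python
import numpy as np

# Passive single-compartment parameters (illustrative assumptions)
rm = 15000.0       # specific membrane resistance [Ohm*cm^2]
cm = 1e-6          # specific membrane capacitance [F/cm^2]
A = 1e-4           # membrane area [cm^2]
Rm = rm / A        # effective membrane resistance [Ohm]
Cm = cm * A        # effective membrane capacitance [F]
tau_mem = Rm * Cm  # membrane time constant [s]; rm*cm = 15 ms
V_rest = -70e-3    # resting potential [V]

dt = 1e-5                    # integration time step [s]
t = np.arange(0.0, 0.1, dt)  # 100 ms of simulated time
# 0.1 nA current step between 20 ms and 70 ms
I_inj = np.where((t >= 0.02) & (t < 0.07), 1e-10, 0.0)

# Forward-Euler integration of Eq. (1.13):
# tau_mem * dV/dt = -V + V_rest + Rm * I_inj
V = np.empty_like(t)
V[0] = V_rest
for i in range(1, len(t)):
    dVdt = (-V[i - 1] + V_rest + Rm * I_inj[i - 1]) / tau_mem
    V[i] = V[i - 1] + dt * dVdt

# During the step the voltage charges exponentially towards
# V_rest + Rm*I_inj (here a 15 mV depolarization) with time constant tau_mem.
```

For the step current, the numerical solution converges to the analytical exponential charging curve as dt → 0, which makes this a convenient sanity check for any integrator.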
Hodgkin-Huxley Model
Hodgkin & Huxley (1952b) extended the simple RC-circuit by introducing further resistors to describe the conductivity of the membrane for various currents (Fig. 1.3).

Figure 1.3: An RC-Circuit with Hodgkin-Huxley Like Voltage-Dependent Conductances. Hodgkin and Huxley added two resistors to the circuit to account for the sodium (Gna = 1/Rna) and potassium (Gk = 1/Rk) conductances they found in the squid axonal membrane. These conductances are voltage-dependent and are therefore symbolized by variable resistors. The reversal potential of each conductance (Ena and Ek) as well as the resting potential (Vrest) are represented as batteries in the circuit. Hodgkin and Huxley developed a general framework to mathematically describe the voltage dependencies of the currents. Today many other ionic currents can be added to the circuit in the same fashion.

They found that the membrane of the squid axon conducts mainly sodium and potassium
ions and that the passive leak current is mainly due to chloride ions. Therefore the total
current flowing through the membrane is
Iion = Ina + Ik + Ipas .   (1.14)
They also realized that the sodium and potassium currents are variable and time- and
voltage-dependent and can be modeled using a relatively simple formalism:
CHAPTER 1. INTRODUCTION
A membrane current Ix was described with the following differential equations:
dp/dt = (1 − p) · αp(V) − p · βp(V)   (1.15)
dq/dt = (1 − q) · αq(V) − q · βq(V)   (1.16)
Gx = Gbarx · p^i · q^j   (1.17)
Ix = Gx · (V − Ex)   (1.18)
where Gbarx = 1/Rm^max is the maximal membrane conductance for ion x. To describe the
dynamics of the conductance Gx , Hodgkin and Huxley introduced several fictive gating
particles p and q. Each gating particle can be in one of two possible states, open or closed.
In order for the current to flow, all gating particles need to be open simultaneously. The
transition between open and closed for each gating particle was described with the help of
the voltage-dependent transition functions α(V ), β (V ). The general form and parameters
of these functions could be estimated by directly fitting the conductance changes during
several voltage steps. To obtain better fits Hodgkin and Huxley also included the exponents i and j, which today can be related to the oligomeric nature of ion channels. Finally,
the current Ix depends on the driving force V − Ex . Ex is the reversal potential and depends
on the ratio of the intra- and extracellular concentration of the ion.
Hodgkin and Huxley modeled the sodium conductance Gna with two kinds of gating
particles m and h. m was thought to be the activation particle while h was responsible for
the inactivation of the current. On the other hand the potassium conductance was modeled
using a single activation particle n and the leak current was voltage-independent. For
simplification of the mathematics and for illustration of the kinetics of a gating particle p
it is often useful to calculate its steady state value (p∞ ) and time constant (τ p ):
p∞(V) = αp(V) / (αp(V) + βp(V))   (1.19)
τp(V) = 1 / (αp(V) + βp(V))   (1.20)
dp/dt = (p∞ − p) / τp   (1.21)
which is equivalent to Eqn. 1.15.
Based on these detailed descriptions of the currents, we can rewrite the membrane
equation:
Cm · dV/dt = −Gbarna · m^3 · h · (V − Ena)   (1.22)
− Gbark · n^4 · (V − Ek) − Gpas · (V − Vrest) + Iinj(t)   (1.23)
where each of the gating particles m, h and n is described with a differential equation of
the form 1.15 and its own voltage-dependent transition functions α(V ) and β (V ). The
exponents of the gating particles were found to produce the best fit to the experimental
data.
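Equations 1.15 and 1.22 can be integrated directly. The sketch below uses the standard squid-axon parameter set and rate functions in the modern voltage convention (rest near −65 mV); the stimulus amplitude and the forward-Euler settings are illustrative choices, not values from this work:

```python
import math

# Standard Hodgkin-Huxley squid-axon parameters; stimulus amplitude and
# integration settings below are illustrative choices.
C_m = 1.0                    # membrane capacitance (uF/cm^2)
G_na, E_na = 120.0, 50.0     # maximal conductance (mS/cm^2), reversal (mV)
G_k, E_k = 36.0, -77.0
G_l, E_l = 0.3, -54.4

def vtrap(x, y):
    """x / (1 - exp(-x/y)) with the removable singularity at x = 0 handled."""
    return y if abs(x / y) < 1e-7 else x / (1.0 - math.exp(-x / y))

def rates(v):
    a_m = 0.1 * vtrap(v + 40.0, 10.0)
    b_m = 4.0 * math.exp(-(v + 65.0) / 18.0)
    a_h = 0.07 * math.exp(-(v + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
    a_n = 0.01 * vtrap(v + 55.0, 10.0)
    b_n = 0.125 * math.exp(-(v + 65.0) / 80.0)
    return a_m, b_m, a_h, b_h, a_n, b_n

def simulate(i_inj=10.0, t_end=50.0, dt=0.01):
    """Forward-Euler integration of Eqns. 1.15 and 1.22; returns V(t) in mV."""
    v = -65.0
    a_m, b_m, a_h, b_h, a_n, b_n = rates(v)
    m, h, n = a_m / (a_m + b_m), a_h / (a_h + b_h), a_n / (a_n + b_n)
    trace = [v]
    for _ in range(int(t_end / dt)):
        a_m, b_m, a_h, b_h, a_n, b_n = rates(v)
        m += dt * ((1.0 - m) * a_m - m * b_m)   # Eqn. 1.15 per particle
        h += dt * ((1.0 - h) * a_h - h * b_h)
        n += dt * ((1.0 - n) * a_n - n * b_n)
        i_ion = (G_na * m ** 3 * h * (v - E_na)
                 + G_k * n ** 4 * (v - E_k) + G_l * (v - E_l))
        v += dt / C_m * (i_inj - i_ion)          # Eqn. 1.22
        trace.append(v)
    return trace

trace = simulate()
spikes = sum(1 for a, b in zip(trace, trace[1:]) if a < 0.0 <= b)
print("APs in 50 ms:", spikes)
```

With the chosen suprathreshold current step the model fires repetitively; the spike count is obtained here simply from upward zero crossings of the voltage trace.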
With that extended single-compartment model, Hodgkin and Huxley were able to elucidate the fundamental principles of AP generation based on a realistic description of
ionic conductances. The Hodgkin-Huxley model is widely regarded as the cornerstone of
quantitative modelling of nerve cell excitability and is seen as one of the greatest achievements of 20th-century biophysics. It should be remembered that at the time the model was
suggested, the existence of stochastic ion channels was not known yet.
The formalism developed by Hodgkin and Huxley was later generalized and used to
create a variety of kinetic models for many different ionic conductances in other neuron
types. The models of ionic conductances in pyramidal neurons that we are using in this
study are explained in detail in sec. 3.2. In addition, there are recent approaches to develop new kinetic schemes based on detailed kinetic transitions (Baranauskas & Martina,
2006; Gurkiewicz & Korngreen, 2007).
In addition to the description of AP initiation and propagation, single-compartment
Hodgkin-Huxley-type models have led to an understanding of many fundamental dynamics of spiking neurons. For example, they were used to explore the role of calcium currents and calcium dynamics in bursting neurons (Amini et al., 1999) or they were used to
suggest mechanisms for spike frequency adaptation (Benda & Herz, 2003). Furthermore,
based on a separation of time-scales, it was possible to reduce the four-dimensional single-compartment Hodgkin-Huxley model to models with only two dimensions where phase-plane analysis or bifurcation theory could be used to study neuronal excitability (Fitzhugh,
1961; Morris & Lecar, 1981; Nagumo et al., 1962). There is still a vigorous and controversial debate over whether the Hodgkin-Huxley model is sufficient for explaining the
sharp AP onset in pyramidal neurons (McCormick et al., 2007; Naundorf et al., 2006;
Yu et al., 2008).
1.2.3. Multi-Compartment Models
Most neurons cannot be represented as a single compartment, as they have a complex
morphology with different membrane and ion channel properties in different regions of
the cell. However, we can discretize a neuron into multiple small pieces of membrane
cylinders and each of these compartments can then be represented as a simple RC-circuit.
The membrane pieces are connected via the intracellular solution and current can
flow from one piece to the other. Here the electrolyte solution in the cable acts as a
resistance. Normalizing this resistance to the length and the diameter of the cylinder we
can define the intracellular resistivity (ra ). A common value used for pyramidal neuron
models lies around ra = 100 Ω · cm.
The compartments need not necessarily have the same diameter d and length L, hence
each compartment will have its own effective membrane resistance Rm and capacitance
Cm and the effective axial resistance Ra between two compartments will depend on the
geometry of both:
Rm^i = rm / (π · di · Li)   (1.24)
Cm^i = cm · π · di · Li   (1.25)
Ra^ij = (ra · Li/2) / (π · (di/2)²) + (ra · Lj/2) / (π · (dj/2)²) .   (1.26)
A model with three compartments of different sizes and its circuit representation are illustrated in Fig. 1.4. By means of compartmentalization even a complex morphology can
Figure 1.4: CIRCUIT REPRESENTATION OF A THREE-COMPARTMENT MODEL. a) A fictive simplified neuronal geometry is shown. The neuron model consists of three cylindrical compartments, each with different diameter d and length L. b) The compartments are represented as
RC-circuits. The effective resistance Rm^i and capacitance Cm^i of each unit depend on the surface
area of the cylinder, while the battery is the same in each circuit. The connection between two compartments can be described with an axial resistance Ra^ij. In each case the axial resistance depends
on the diameter and length of the two adjacent cylinders.
be approximated by a set of membrane equations coupled through axial resistive currents:
Cm^i · dV^i(t)/dt = (Vrest − V^i(t)) / Rm^i + Iinj^i(t) + Σ_j (V^j(t) − V^i(t)) / Ra^ij .   (1.27)
In most cases this system must be solved numerically. Choosing the right spatial discretization
is a recurring practical problem in neuronal modelling and the size of one compartment
will obviously depend on the morphological complexity of the neuron and of course on the
question that is being asked. Finding the resting membrane potential in a simple passive
geometry, for example, will require only a rather coarse discretization, while the analysis of
burst firing might demand much finer spatial resolutions.
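As a sketch of Eqn. 1.27, the following toy script integrates a passive chain of three compartments with forward Euler; all parameter values are hypothetical and only illustrate the attenuation of an injected current along the chain:

```python
# Forward-Euler integration of Eqn. 1.27 for a passive three-compartment chain.
# All parameter values are hypothetical and chosen only for illustration.
V_rest = -70.0                       # mV
R_m = [200.0, 400.0, 400.0]          # effective membrane resistances (MOhm)
C_m = [0.1, 0.05, 0.05]              # effective membrane capacitances (nF)
R_a = {(0, 1): 50.0, (1, 2): 50.0}   # effective axial resistances (MOhm)

def neighbours(i):
    return [j for (a, b) in R_a for j in (a, b) if i in (a, b) and j != i]

def axial(i, j):
    return R_a[(i, j)] if (i, j) in R_a else R_a[(j, i)]

def simulate(i_inj, t_end=200.0, dt=0.01):
    """i_inj: list of constant injected currents (nA), one per compartment."""
    v = [V_rest] * 3
    for _ in range(int(t_end / dt)):
        dv = []
        for i in range(3):
            coupling = sum((v[j] - v[i]) / axial(i, j) for j in neighbours(i))
            dv.append(dt / C_m[i] * ((V_rest - v[i]) / R_m[i]
                                     + i_inj[i] + coupling))
        v = [vi + d for vi, d in zip(v, dv)]
    return v

# Inject 0.05 nA into compartment 0: the voltage attenuates along the chain
v = simulate([0.05, 0.0, 0.0])
print([round(x, 2) for x in v])
```

After a long current step the steady-state depolarization is largest in the injected compartment and decays along the chain, a minimal picture of passive voltage attenuation.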
Reduced Models
Reduced multi-compartment models consist of a few compartments only but their complexity is sufficient to gain insights into the complicated spatiotemporal interactions in
the neuron. A two-compartment model was used, for example, to investigate the interplay of fast somatic sodium and potassium conductances with the dendritic slow calcium
currents and their role in bursting behaviour (Pinsky & Rinzel, 1994). It was recently
also possible to create reduced models that could explain BAPs in pyramidal neurons
(Keren et al., 2009). In addition to their immense value for understanding such mechanisms, reduced multi-compartment models are also computationally efficient. They
were therefore used in large-scale network studies with more than 3000 cells and demonstrated, for example, that gap junctions are instrumental for cortical gamma oscillations
(Traub et al., 2005).
Detailed Models
Detailed multi-compartment models are based on exact anatomical reconstructions. To
represent an entire morphology with sufficient discretization, often more than 1000 compartments are necessary, connected via axial resistors. In each compartment several ion channels can be modeled to account for the local conductances. This might
lead to a very complex system of more than 10000 coupled differential equations which
rules out the chance of any analytical solution and even makes “by-hand” numerical
simulation a daunting task. Therefore tools are needed that keep track of the neuronal
properties and can create and efficiently solve the large number of equations automatically. Several of these tools have been developed during the last decades, for example
GENESIS (Bower & Beeman, 1998), NEURON (Carnevale & Hines, 2006) or very recently MOOSE (Ray & Bhalla, 2008).
The level of detail used for these models can lead to insights into the biophysical
mechanisms and the role of the neuron's spatial structure in neuronal information processing that today's experimental methods cannot provide. The computer model in turn
might suggest a new experimentally testable hypothesis. The new experimental
result can then be used to further optimize the model parameters and mechanisms and
then, with the improved model, suggest other experiments. Hence, detailed models combined with experiments are powerful tools for the exploration of the complex biophysics
in neurons.
Detailed models were used for the study of axonal AP initiation (Kole et al., 2008)
or voltage attenuation in dendrites (Stuart & Spruston, 1998). They were used to explore
how the dendritic geometry determines somatic firing patterns (Mainen et al., 1995). Several models were used to examine the computational capabilities of dendrites (reviewed
by London & Hausser, 2005; Segev & London, 2000). For example, it was suggested that
mechanisms in the dendritic tree could explain translation-invariant tuning of visual complex cells (Mel et al., 1998). Another study analyzed the integration of synaptic input in
a detailed model of a CA1 pyramidal cell and suggested that the neuron could be represented as a two-layer cascade (Poirazi et al., 2003). Even dendritic spines can be modeled,
but often this is simply done by increasing the dendritic leak conductance and the capacitance by a certain spine factor to account for the additional spine membrane area. It was
recently found experimentally that spine size is scaled along the dendrites. These findings
were combined with a detailed model of a CA1 pyramidal neuron which led to further
evidence for a two-layer integration of dendritic input (Katz et al., 2009).
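The spine correction mentioned above amounts to a simple scaling of the area-specific passive parameters; the factor of 1.5 in this sketch is a hypothetical example, not a measured value:

```python
# Spine correction: scale the leak conductance and capacitance of a dendritic
# compartment by a spine factor (1.5 here is a hypothetical example value).
def add_spines(g_pas, c_m, spine_factor):
    """Return spine-corrected (g_pas, c_m); both scale with membrane area."""
    return g_pas * spine_factor, c_m * spine_factor

g_pas, c_m = 5e-5, 1.0   # S/cm^2 and uF/cm^2 (illustrative values)
g_corr, c_corr = add_spines(g_pas, c_m, 1.5)
print(g_corr, c_corr)
```

Since both quantities scale with the added membrane area, the passive membrane time constant is left unchanged by this correction.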
Although detailed models are computationally expensive they are widely considered
to be useful for large-scale network simulations. The hope is to model a part of the brain
as realistically as possible to understand brain function and dysfunction through detailed
computer simulations and possibly to suggest new pharmaceutical treatments. The Blue
Brain Project, for example, attempts to create a model of a neocortical column (Markram,
2006). Other studies are currently creating detailed models of a thalamocortical region
(Izhikevich & Edelman, 2008) whereas others perform detailed network simulations of
the primary visual cortex, in software as well as in silico.1
1.3. Geometry Reduction
When a reduced multi-compartment model is to be created, it is not clear what the simplified
geometry should look like and which diameters, lengths and membrane parameters should
be used for the cylinders. To get an idea about the structure of the model it is therefore
reasonable to start with a detailed reconstruction and to simplify its dendritic geometry
while maintaining the neuron’s passive response properties. For homogeneous passive
cables the Linear Cable Theory was developed (Koch, 2004; Segev, 1994) which made it
possible to study the voltage distribution in long cables. It was also shown that a small
subset of neuronal morphologies can be collapsed into an equivalent single cylinder (Rall,
1962) which allowed the application of the Linear Cable Theory to study the voltage
spread in complex dendritic geometries.
Rall’s theory is based on the following assumptions for the dendritic tree: First, the
membrane resistance Rm and the axial resistance Ra are the same in all branches of the
dendritic tree. Second, the electrotonic distance from the soma to each dendritic terminal
should be equivalent. Third, the branch points must follow the 3/2 power rule, meaning
that for the diameter d0 of a parent branch and the diameters d1 , d2 of its daughter branches
1 COLAMN project:
http://gow.epsrc.ac.uk/ViewGrant.aspx?GrantRef=EP/C010841/1
the following condition must hold:
d0^(3/2) = d1^(3/2) + d2^(3/2) .   (1.28)
To overcome these constraints several authors suggested alternative methods to construct
simple structures from arbitrarily branched dendritic trees (Bush & Sejnowski, 1993;
Lindsay et al., 2003). A simple and intuitive way was suggested by Destexhe (2001). He
divided the dendritic tree of a layer 6 pyramidal neuron into several functional subunits,
namely the soma with proximal dendrites, the basal and the distal dendrites. Each of
these functional subunits was represented by a single cylinder in the simplified model.
The length of the equivalent compartment was chosen to be similar to the typical physical
length of its associated functional region. The diameter of the cylinder was adjusted
such that its total membrane area was the same as the subset of dendrites it represents.
Then the intracellular resistivities were fitted so that the simplified model showed similar
voltage attenuation to the complex model.
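Destexhe's area-preserving step can be sketched as follows; the list of basal dendrites is hypothetical, and the subsequent fitting of the intracellular resistivities is not shown:

```python
import math

def equivalent_cylinder(dendrites, L_eq):
    """Collapse a set of (diameter, length) cylinders into a single cylinder
    of length L_eq whose membrane area equals the summed area."""
    total_area = sum(math.pi * d * L for d, L in dendrites)
    return total_area / (math.pi * L_eq)   # diameter of the equivalent cylinder

# Hypothetical basal dendrites, (diameter, length) in cm
basal = [(1.0e-4, 120e-4), (0.8e-4, 90e-4), (1.2e-4, 150e-4)]
L_eq = 150e-4   # chosen near the typical physical length of the region
d_eq = equivalent_cylinder(basal, L_eq)
print(d_eq)
```

Because the length is fixed by the physical extent of the functional region, matching the total membrane area A = π · d · L leaves only the diameter free, which the function computes directly.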
1.4. Problems Constraining Neuron Models
Neuron models, in particular multi-compartment conductance-based models, normally
come with a large number of free parameters. Many of these parameters cannot be directly determined experimentally with the techniques available today. Even if experimental
data exists, we must be very careful using it without a detailed evaluation of the experimental protocol as there are several uncertainties that arise during electrophysiological
recordings.
1.4.1. Experimental Uncertainties
For example, the measured membrane potential is often shifted with respect to the real value,
sometimes by more than 10 mV (Barry, 1994; Barry & Diamond, 1970). This is due to
an insufficient compensation for the Liquid Junction Potential which occurs when two
solutions of different concentration are in contact with each other. Yet many electrophysiological studies neglect this. Thus these measurements not only result in a wrong
estimate of the absolute membrane potential, but also lead to failures when modelling
ion channel kinetics, since the fraction of open channels depends on the absolute voltage.
Ion channel densities are mostly estimated using the cell-attached patch clamp configuration. All ion channels but the one of interest are blocked by application of various
blocking agents into the extra- or intracellular solution (for a review see Catterall, 1995).
Then a fine glass electrode is pressed against the cell membrane until a high-resistance
seal (a gigaseal) is established. Then the neuron is voltage clamped and isolated currents
can be measured. By measuring the exact size of the pipette tip, it is possible to calculate
the ion channel conductance per area.
Ion channel densities for several ion channel types have been suggested by this
method and many ion channel models were published.2 However, one might question
the quality of the results. For example, it is not clear whether the blocking agents really did
block all ion channels except the single one we are interested in; the assumed
pure current might therefore be a mixture of several currents. Next, it was shown that cell-attached
patch clamp recordings might underestimate ion channel density per se. For example,
there is experimental evidence that sodium channels in the axon initial segment are
anchored via the cytoskeleton to the inner neuronal membrane. Thus a pipette attached to
the outer surface cannot record these currents and the effective conductance appears to be
low (Kole et al., 2008). However, qualitative statements about relative channel densities
or the density distribution within a single neuron can be made and are very useful for
modelling (for example Keren et al., 2009; Kole et al., 2006).
To build a detailed model, the neuron is often filled with Biocytin after the recording
and the slice is fixed. The neuronal morphology is then reconstructed for example via
the manual reconstruction software Neurolucida (MicroBrightField, Williston, VT) under
a microscope. There are current efforts to automate the procedure based on live-imaging data (for example Losavio et al., 2008). Most of the detailed reconstructions
available today are not accurate enough. This is due to human error during the
reconstruction procedure and due to tissue shrinkage of approximately 10 % during the
fixation (Weaver & Wearne, 2008).
In addition to these uncertainties about the experimental procedures themselves, we
cannot obtain a full set of parameters for one neuron based on experiments. It is obviously
not possible to obtain data for all ion channel densities and kinetics while the others are
blocked, and biocytin filling is sometimes not possible after long-lasting experimental
procedures. Furthermore patch clamping is not easy and only a few successful patches
can be obtained per neuron. It is particularly hard to obtain good recordings from the
dendritic tree. Therefore the results from many different neurons are combined and hence
all detailed models are based on dozens of experiments in different parts of the neuron,
different cells, animals and even species, temperatures and recording conditions.
We also need to make assumptions about the regions that are not accessible experimentally at all. It is often assumed, for example, that ion channel kinetics once obtained
from somatic recordings are the same everywhere in the neuron. But all kinds of different subunits are expressed in different parts and ion channels can change their properties
due to local modification. It was shown, for example, that the sodium activation and
inactivation curves in the axon are shifted ≈ -7 mV to more hyperpolarized potentials
(Colbert & Pan, 2002; Kole & Stuart, 2008).
2 ModelDB:
http://senselab.med.yale.edu/ModelDB
1.4.2. Constraining Parameters by Hand
In summary, the list of uncertainties about the parameters of multi-compartment
conductance-based neuron models is endless. But if we cannot properly constrain these
parameters via experiments, we need to find other strategies. One common way is to
only consider a subset of all possible parameters and to set the remaining parameters to
some standard values. The free parameters are then adjusted by hand via trial and error
to minimize the distance between some model response and an experimental data set,
like the somatic voltage. This procedure, however, is tedious for the modeller and requires
extensive experience. The parameters have highly nonlinear interactions and a slight
modification of one parameter might require a change of the others with unpredictable
amplitudes. Nevertheless there are many detailed models that were tuned by hand which
are very successful in explaining qualitatively many observations in single neurons (see
above).
But even if the model reproduces the experimental data qualitatively well, one might
doubt that the adjusted parameters represent biological reality, and possibly many distinct
parameter combinations will lead to similarly good results. For example Prinz et al. (2004)
have shown that similar network dynamics can arise with distinct circuit and neuron parameters.
Moreover an extension of the model with some further mechanisms might require
a completely new parameter search. Therefore it is necessary to automate the search
strategy.
1.5. Automatic Fitting Strategies
A good optimization consists of three parts: First, we need a model that could in principle
fit the experimental data. Second, we need a good distance function between model and
data. Third, we need an efficient optimization algorithm that uses the distance function
to evaluate different parameter sets for the model and thereby finds better solutions, until an
optimal parameter set is reached.
The first point is the hardest and requires biophysical knowledge about the complex
mechanisms responsible for information processing in single neurons. Even if the distance function and the optimization algorithm work well, the algorithm will definitely
never find a good solution if the model is not able to represent the experimental data.
For example, the data might show spike frequency adaptation, but the model might not
have such a mechanism implemented. To circumvent the search for the right model for
a given experimental data set, the optimization algorithm and the distance functions are
often evaluated by fitting the model to so-called surrogate data that were generated by some
parameter set of the same model. In this case a perfect solution exists for sure and it is
only a question of the search strategy whether the original parameter set can be found
again.
However, it is not guaranteed that the optimization algorithm will also fit experimental
data even if it has performed well in the model-to-model fit (Druckmann et al., 2008). This
is neglected by most studies developing strategies to fit a model to surrogate data, and
it could be that these algorithms cannot be directly used to fit experimental data. Nevertheless,
such studies are useful to develop advanced optimization algorithms that can eventually
be applied successfully to experimental data.
1.5.1. A Brief Review of Earlier Studies
Let us briefly review some automatic parameter constraining procedures that were developed during the last decade (detailed reviews were published by Druckmann et al., 2008;
Van Geit et al., 2008).
One of the first studies to automatically fit neuronal models was published by
Vanier & Bower (1999). They realized the need for a rigorous comparison of different
search strategies to constrain a large parameter set. Different strategies were evaluated,
namely gradient descent, genetic algorithms, simulated annealing and stochastic search to
fit models of different complexities. Interestingly they found out that for simple models
simulated annealing showed the highest performance, while detailed models with a large
number of parameters were best fitted by genetic or evolutionary algorithms. However
the comparison was only performed on surrogate data and thus their conclusions might
not directly transfer to strategies fitting experimental data.
Prinz et al. (2003) were able to constrain an 8-dimensional single-compartment model
of a lobster stomatogastric neuron to experimental data. They did not directly fit the model
to a single data set, but first created a database of many possible model responses. For
each parameter 6 discrete values between a lower and an upper boundary were allowed
and the parameters were varied individually in a grid-like manner and the corresponding
model response was evaluated and stored. Once the database was created, it was possible
to filter out those data sets that mimicked an experimental target data set best. One may also
use the data for many statistical studies, like the analysis of the role of a certain parameter in producing some specific spiking behaviour. However the grid-search strategy will
become too costly when more parameters are involved, as in multi-compartment models,
especially if the grid resolution becomes higher.
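The database idea can be illustrated with a toy grid search. The two-parameter "model" below is a stand-in for the actual conductance-based simulation, which would be run once per grid point:

```python
import itertools

# Toy stand-in "model": maps two parameters to a feature vector. In the real
# study a conductance-based neuron was simulated once per grid point.
def model_features(g1, g2):
    return (g1 + 2.0 * g2, g1 * g2)   # e.g. (firing rate, burst measure)

grid = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]   # 6 discrete values per parameter
database = {(g1, g2): model_features(g1, g2)
            for g1, g2 in itertools.product(grid, repeat=2)}

def closest(target):
    """Filter the database for the parameter set whose response is closest."""
    def dist(feats):
        return sum((f - t) ** 2 for f, t in zip(feats, target))
    return min(database, key=lambda p: dist(database[p]))

target = model_features(0.4, 0.8)   # surrogate target generated on the grid
print(closest(target))              # -> (0.4, 0.8)
```

With 6 values per parameter, the real 8-parameter study already required 6^8 ≈ 1.7 million simulations, which illustrates why the approach becomes too costly for larger parameter spaces.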
Achard & De Schutter (2006) created a framework to fit an entire detailed model of
a Purkinje neuron (De Schutter & Bower, 1994) to surrogate data. The model consists of
1600 compartments and 24 ion channel densities were set as free parameters. Fitting was
performed using a distance measure based on LeMasson and Maex’ phase-plane analysis
to overcome a problem with spike-timing (LeMasson & Maex, 2001). The problem with
a direct mean-square comparison of a target and test spiking trace is that the resulting
error value is strongly dependent on spike-timing. If, for example, target and test traces
are almost similar, but the test solution shows slightly faster spike frequency adaptation,
then the final error value will be huge anyhow. Moreover the error value will be smaller
when the test solution is not spiking at all (Fig. 1.5a,b). Therefore such a distance measure does not represent the quality of the model. In contrast, the phase-plane distance
measure is independent of the precise spike-timing. For the target and test spiketrain the
voltage derivative dV/dt is calculated. Then the matrix M(V, dV/dt) is binned and
for each bin the number of points is determined. The differences of points in each bin
for the target and test histogram are then summed to the final error value (Fig. 1.5c,d).
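A sketch of this phase-plane distance measure (the bin widths and the synthetic traces are illustrative choices):

```python
import math
from collections import Counter

def phase_plane_histogram(trace, dt, v_bin=2.0, dvdt_bin=10.0):
    """Bin the trajectory (V, dV/dt) of a voltage trace (V in mV, dt in ms)."""
    hist = Counter()
    for v0, v1 in zip(trace, trace[1:]):
        dvdt = (v1 - v0) / dt
        hist[(int(v0 // v_bin), int(dvdt // dvdt_bin))] += 1
    return hist

def phase_plane_distance(target, test, dt):
    """Sum of absolute point-count differences over all bins; the result is
    independent of the precise spike timing."""
    h1 = phase_plane_histogram(target, dt)
    h2 = phase_plane_histogram(test, dt)
    return sum(abs(h1[b] - h2[b]) for b in set(h1) | set(h2))

# A time-shifted trace occupies (almost) the same phase-plane bins,
# while a flat subthreshold trace does not.
dt = 0.1
target = [-65.0 + 30.0 * math.sin(0.05 * i) for i in range(1000)]
shifted = [-65.0 + 30.0 * math.sin(0.05 * i + 1.0) for i in range(1000)]
flat = [-65.0] * 1000
print(phase_plane_distance(target, shifted, dt),
      phase_plane_distance(target, flat, dt))
```

A time-shifted copy of a trace yields nearly the same histogram as the original, so its distance stays small, while a non-spiking trace concentrates all its points in one bin and is penalized heavily, exactly the behaviour the measure was designed for.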
Achard & De Schutter (2006) were able to obtain good fits of the entire Purkinje neuron model
Figure 1.5: COMPARISON OF THE SQUARED-DISTANCE MEASURE WITH LEMASSON AND
MAEX' DISTANCE MEASURE. We chose a target parameter set to calculate the target spiketrain
(black) for a neuron model. To compare the squared distance measure with LeMasson and Maex'
distance measure, we chose two test parameter sets and determined their spiketrains (red). The
first test parameter set was similar to the target parameter set, but each parameter was changed by
1%, leading to a similar spiketrain with small differences in spike timing (a). The second parameter
set was also like the target parameter set but with much fewer sodium channels to disable spiking
(b). a,b) The squared distance measure has a large value (39492 mV2 · ms) for the first test solution
while the second test solution is given a smaller value (23583 mV2 · ms). The second test solution
would therefore be preferred. c,d) Using LeMasson and Maex' phase-plane distance measure
the first test solution has more data points in the bins of the target solution than the second test
solution. Therefore the first test solution obtains the smaller distance value and will be preferred
over the second.
reproducing even tiny details of the complex firing patterns. Interestingly the resulting
parameter sets leading to similar firing patterns were distinct. Two important conclusions
were made by the authors: First, the originally hand-tuned model is only one of many
good models and therefore the channel densities cannot be regarded as representing the
original channel distribution in the real neuron. Second, if similar firing patterns can be
reproduced with different ion channel densities, then there might also be some homeostasis during development adjusting the neuron’s parameters in order to reproduce the
complex firing patterns of real Purkinje neurons. Unfortunately this study has not yet
been able to fit experimental data, and it appears that the phase-plane distance measure is
not sufficient for this task as it overestimates the errors in spiking traces below threshold
(Druckmann et al., 2008).
Another distance measure which is independent of spike-timing was used by
Weaver & Wearne (2006). The model was a single-compartment model with several
currents and calcium dynamics. For one parameter set they calculated the target
spiketrain and simulated annealing was used to fit the model to this data. As for the error
function, each AP of the target trace was aligned with its corresponding AP of the test
trace and a mean-square distance was calculated. This value was combined with an error
in firing rate. They found good fits for the whole spiketrain. Although the model might
be too simple to fit experimental data, this study shows that the spike shape contains a lot
of information about the ion channel distribution involved in AP generation.
To constrain the parameters of a reduced multi-compartment model of a layer 5 pyramidal neuron Keren et al. (2005) tested different error functions and a genetic algorithm
to fit surrogate data. They found out that one single error function is not sufficient if only
one somatic voltage recording is available, but that several error functions need to be combined. In particular for constraining an entire neuron, they suggested that several voltage
recordings from different locations in the neuron are needed. Although these suggestions
were based on surrogate data, Keren et al. (2009) performed a subsequent study aiming
to fit the ion channel distribution in the soma and apical dendrite of a simplified neuron
model to experimental data. Interestingly the combined error function of a somatic and
apical dendritic recording initially failed to constrain the model. Therefore they modified
the functions describing the decay or growth of the ion channel density in the dendrite
which finally led to a successful fitting of the neuron model. Besides the fact that this
study was one of the first that managed to automatically fit a model to experimental data,
they also introduced a “parameter-peeling” strategy to reduce the number of free parameters per optimization step. This means that the passive parameters are fitted first. These
parameters are then fixed and the remaining active parameters are fitted in following steps
using different ion channel blockers. They also realized that passive and active parameters
are not completely independent and suggested a way to estimate both sets of parameters
in spite of these difficult dependencies.
Druckmann et al. (2007) introduced a multi-objective optimization (MOO) strategy
using Evolutionary Algorithms (EAs) (Deb, 2001) into computational neuroscience. Unlike previous studies that used EAs to minimize a single error function (which might be a
combination of several weighted separate error functions) this approach can minimize
multiple error functions independently. Therefore it was possible to extract several meaningful features from a given spiketrain, like the spike height, rate or width, and to optimize
the model parameters with respect to each feature without an arbitrary weighting. This
approach was robust and overcame the problem that the model might be insufficient to
perfectly fit experimental data in all features. Druckmann et al. (2007) could fit the somatic conductances and the passive membrane parameters of a detailed model of a nest
basket cell to experimentally recorded spiketrains.
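The feature extraction underlying such multi-objective errors can be sketched as follows; the threshold convention and the toy trace are illustrative, not the feature definitions of the cited study:

```python
def spike_features(trace, dt, threshold=0.0):
    """Extract spike count, mean peak height (mV) and mean width at the
    threshold crossing (ms) from a voltage trace sampled at interval dt."""
    ups = [i for i in range(1, len(trace))
           if trace[i - 1] < threshold <= trace[i]]
    downs = [i for i in range(1, len(trace))
             if trace[i - 1] >= threshold > trace[i]]
    spikes = list(zip(ups, downs))
    if not spikes:
        return {"count": 0, "height": None, "width": None}
    heights = [max(trace[a:b]) for a, b in spikes]
    widths = [(b - a) * dt for a, b in spikes]
    return {"count": len(spikes),
            "height": sum(heights) / len(heights),
            "width": sum(widths) / len(widths)}

# Toy trace: two triangular "spikes" on a -65 mV baseline, sampled at 0.1 ms
dt = 0.1
spike = [-65.0 + 100.0 * (1.0 - abs(t - 10) / 10.0) for t in range(21)]
trace = [-65.0] * 50 + spike + [-65.0] * 50 + spike + [-65.0] * 50
f = spike_features(trace, dt)
print(f)
```

Each scalar feature can then be turned into its own error function against the corresponding experimental feature, which is what allows the objectives to be minimized independently.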
Summarizing, we observe constant progress in the field of automatic fitting algorithms
for neuron models, and we see that a key to success is a good distance measure combined with a
powerful search strategy. We have also seen that it is useful to separate the parameter
space and to fit subsets of parameters independently of each other. As the computing
power available to researchers is constantly increasing, further elaborate studies with
even more parameters and error functions will become possible.
CHAPTER 2
Methods
2.1. Experiment
Experiments were performed by Bekkers & Häusser (2007). Briefly, Sprague-Dawley or
Wistar (17- to 25-days-old) rats were anesthetized with Isoflurane and rapidly decapitated.
Slices (300 µm thick) were prepared from the somatosensory cortex and maintained at
32 ◦ C to 34 ◦ C. A MultiClamp 700A amplifier (Molecular Devices, Union City, CA) was
used to obtain whole-cell recordings from the somata of visually identified large layer 5
pyramidal neurons. Recordings were low-pass filtered at 10 kHz and sampled at 50 kHz.
In current-clamp recordings pyramidal neurons were allowed to remain at their resting potential (≈ -67 mV). Voltages were not corrected for the liquid junction potential (≈ -7 mV). Electrophysiological recordings were performed under two conditions: first in the intact neuron; then the apical dendrite and the soma were separated by a method called pinching, and the recording protocol was repeated. Pinching was performed by attaching two pincer pipettes to the proximal part of the apical dendrite (≈ 20 µm from the soma) and moving them slowly against each other. In successful experiments it was verified that the initial properties of the cell membrane were restored upon releasing the pinch, to ensure that pinching had not destroyed the cell. Pincer pipettes resembled sharp intracellular electrodes with a shallow taper and very fine tips.
2.2. High-Resolution Alignment of APs
In electrophysiological experiments the voltage is normally recorded at less than 100 kHz. Higher recording frequencies would require more elaborate equipment and lead to larger data files. In most experiments, recording frequencies of 10 kHz or less are sufficient to observe the properties of interest. APs in cortical pyramidal neurons have a rise time of approximately 0.2 ms, hence even at a recording frequency of 100 kHz we obtain only about 20 data points between onset and peak. Additionally, due to different sources of noise, each recorded AP differs slightly from the others.
Therefore a single recorded AP can only give an approximate estimate of the AP shape.
Thus, to obtain a high-resolution AP it is necessary to average data from many APs.
[Figure 2.1, panels a and b: traces of V, dV/dt and d²V/dt²; scale bars: 30 mV, 5 V/ms, 200 V/s², 0.3 ms]
Figure 2.1: High-Resolution Alignment of APs. a) The data from three APs from the same spike train, together with their first (dV/dt) and second (d²V/dt²) derivatives, are shown (black dots). The dashed lines are cubic spline interpolations fitted to V(t) (but not to the derivatives) of each AP onset. The position of the peak of each interpolated AP is slightly different. The first and second derivatives of the interpolations are even more variable and only barely follow their corresponding data points. b) The interpolated APs were peak-aligned. Each interpolated AP is different, notably in the first and second derivatives. Only the average of the peak-aligned interpolated APs provides a detailed picture of the AP onset shape, including its derivatives (red lines).
It was shown that a precise, high-resolution alignment of APs can be achieved without high recording frequencies (Wheeler & Smith, 1988): For each low-resolution AP the position of the sampled peak is determined. This peak is not the real peak of that AP but lies somewhere nearby. A cubic spline interpolation is then applied around that position, which allows a prediction of the real AP shape. This is done for every AP. The interpolated APs are then peak-aligned and averaged. The result is a high-resolution average of many low-resolution APs (Fig. 2.1).
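The procedure can be sketched as follows (a minimal illustration using SciPy's CubicSpline; the function name, search window and upsampling factor are our own choices, not taken from the original implementation):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def align_and_average(aps, dt, window=0.3, upsample=50):
    """Peak-align low-resolution APs via cubic-spline interpolation and average.
    aps: list of 1-D voltage arrays sampled at interval dt (ms)."""
    fine_t = np.arange(-window, window, dt / upsample)  # time axis relative to peak
    aligned = []
    for v in aps:
        t = np.arange(len(v)) * dt
        k = np.argmax(v)                     # sampled peak: only near the true peak
        spline = CubicSpline(t, v)
        # locate the interpolated peak on a finely resolved grid around the sample peak
        search = np.linspace(t[k] - dt, t[k] + dt, 1000)
        t_peak = search[np.argmax(spline(search))]
        aligned.append(spline(t_peak + fine_t))  # resample relative to the true peak
    return fine_t, np.mean(aligned, axis=0)
```

Averaging many traces aligned this way recovers the AP shape at a resolution well above the sampling rate, as in Fig. 2.1b.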
The experimental data we use in this study were recorded at 50 kHz. We downsampled the recordings to various lower frequencies and tested whether the method could still recover the detailed shape of the AP. For sampling frequencies below 25 kHz we no longer observed a prominent biphasic AP onset, as seen in the averaged first and second derivatives in Fig. 2.1. We therefore conclude that recording frequencies of at least 25 kHz are necessary to analyze detailed AP onset shapes.
2.3. Modelling in NEURON
The creation and analysis of detailed neuronal morphologies with many compartments and ion channels is straightforward in NEURON (Carnevale & Hines, 2006). A neuron is described by a set of sections that are connected to each other. A section is a continuous length of unbranched cylindrical cable with its own anatomical and biophysical properties. Each section is automatically divided into several compartments, and point mechanisms such as synapses or current electrodes can easily be introduced into each compartment separately. Ion channel equations can be set up externally in so-called .mod files. These files need to be compiled via nrnivmodl (Linux) before the mechanisms can be inserted into the sections. Based on the resulting properties of each compartment, NEURON automatically creates the system of differential equations representing the neuron and applies efficient solving strategies. The neuron model can also be split into several components to be calculated on multi-core architectures (Eichner et al., 2009; Hines et al., 2008). During the calculation it is possible to visualize or record any variable in the cell. NEURON has its own programming language (hoc) to describe, analyze and control models, but it also offers a graphical user interface.
To give an example let us create a “simple” model with 25 compartments:
// example.hoc

nrn_load_dll("../../results/channels/x86_64/.libs/libnrnmech.so")
load_file("stdrun.hoc")

celsius = 37

create soma
create dend

soma {
    diam = 20
    L = 20
    nseg = 5
    insert nat
    insert kfast
    ena = 50
    ek = -80
    gbar_nat = 3000
    gbar_kfast = 500
}

dend {
    diam = 5
    L = 500
    nseg = 20
}

forall {
    insert pas
    e_pas = -70
    g_pas = 1./15000
}

connect soma(1), dend(0)

objref stim
soma stim = new IClamp(0.5)
stim.del = 100
stim.dur = 5
stim.amp = 0.5
At the beginning we must load the compiled ion channel mechanisms libnrnmech.so
which are located elsewhere. We also need to load the main hoc-routines for NEURON
(stdrun.hoc) before we can specify the model. We set the overall temperature to 37 ◦ C
as ion channel kinetics are highly sensitive to temperature and our goal is to model in vivo
conditions. We create a somatic (soma) and a dendritic (dend) section. The somatic
cylinder is 20 µm long and has a diameter of 20 µm. The dendrite is 500 µm long and
has a diameter of 5 µm. The soma is divided into 5 and the dendrite into 20 compartments. The somatic section contains transient sodium (nat) and fast potassium channels
(kfast) (for detailed ion channel description see sec. 3.2). We set the reversal potential
for sodium Ena = 50 mV and for potassium Ek = -80 mV. A passive leak conductance
(pas) is inserted in all compartments. The reversal potential for the leak current is set
to Epas = -70 mV and the specific membrane resistance is set to rm = 15000 Ω · cm2 . A
current electrode (IClamp) is placed in the middle of the soma. A constant current of
0.5 nA is switched on after 100 ms for 5 ms.
Implementing even this simple model in MATLAB or C++, for example, would have required a large amount of work, and the source code would have been error-prone, especially if one later decided to change something in the model, such as the number of compartments of the dendrite. NEURON greatly simplifies the modelling of complex neuron geometries and biophysics.
2.3.1. Using Python to Control NEURON
Recent versions of NEURON provide an interface to Python1 (Hines et al., 2009). This offers the opportunity to control the cell models and the NEURON simulation from Python, and to do all data analysis and specific model modifications in Python. Let us continue our “simple” example:
# example.py

from neuron import h
import pylab as P

h.load_file("example.hoc")

h("""
v_init = -70
tstop = 150

objref time, rec1, rec2
time = new Vector()
rec1 = new Vector()
rec2 = new Vector()

time.record(&t)
rec1.record(&soma.v(0.5))
rec2.record(&dend.v(0.9))
""")

h.init()
h.run()

P.plot(h.time, h.rec1, 'b-')
P.plot(h.time, h.rec2, 'b--')
P.show()

1 The Python Programming Language: www.python.org
We import the neuron module as well as the Python plotting package pylab, and then load the cell model example.hoc. Next we prepare the simulation using the hoc interface: we set the initial voltage v_init = -70 mV and the total simulation time tstop = 150 ms, and create three vectors, one to record the time and the other two to record the somatic and dendritic voltages. We initialize the model and finally run the simulation. NEURON evaluates the model and records the variables. These variables are immediately available in Python and can be plotted (Fig. 2.2) or analyzed further using any scientific Python package.
Figure 2.2: Example NEURON-Python Simulation Results. The simple model of a soma and dendrite was loaded into NEURON and controlled with Python. We recorded the voltages in the middle of the soma (soma.v(0.5)) and at the end of the dendrite (dend.v(0.9)). The model initiates a somatic AP in response to the short current step. The voltage spreads and decays into the dendrite. [Scale bars: 10 mV, 1 ms; resting potential -70 mV.]
2.4. Multi-Objective Optimization using EAs
Most real-world problems involve more than a single objective, and therefore several error values need to be minimized during an optimization procedure. Classical optimization algorithms minimize only a single error function; if multiple error functions are taken into account, they are weighted and summed into a single one. But the choice of weights, especially for objectives with different units, is highly arbitrary. Furthermore, conflicts between the different objectives are neglected, and the final optimal solution might not represent the desired trade-off. To overcome these problems we use a multi-objective optimization (MOO) strategy that optimizes several
objectives simultaneously (Deb et al., 2002; Druckmann et al., 2007). This leads to a set of trade-off solutions that includes solutions optimal in each objective separately as well as mixed solutions. An illustration of a two-objective optimization problem is shown in Fig. 2.3.
[Figure 2.3: feasible region in objective space; axes: Production Cost (0–1) vs. Harmful Gases Produced (0–1)]
Figure 2.3: Illustration of a Simple Two-Objective Optimization Problem. Consider a car factory that creates emissions while producing cars. Most production strategies lead to solutions that are bad in both objectives (gray shaded area). The owners of the factory want to reduce the cost of car production, and at the same time it is desired to minimize the amount of harmful gases produced. But minimizing the amount of produced gases leads to high production costs, and minimizing the production costs strongly increases the amount of harmful gases produced; hence both objectives cannot be minimized at the same time. A multi-objective optimization strategy can find the optimal trade-off solutions (black line), whereas classical single-error-function minimization finds only one particular solution (red dot).
Minimization is often done with simple gradient-based methods: A random starting point in the search space defines an initial parameter set. The model is evaluated around this point, and the parameters are changed in the direction of maximal improvement. This works very well when the error surface is smooth, but most real-world problems cannot be described by smooth error functions, and such a strategy gets stuck in local minima. Gradient-based methods can be improved by introducing random parameter fluctuations (simulated annealing), but as the search space becomes more complex, the randomization of parameters is no longer sufficient to lead the optimization algorithm to the global minimum. Here evolutionary strategies have proven to be very effective.
In this study, we have implemented a very general Python framework for the MOO strategy using EAs, which can easily be applied to any optimization problem.
2.4.1. Evolutionary Algorithms
EAs mimic natural evolutionary principles like selection, crossover and mutation. One parameter set is called an individual, and all individuals form the population. Each individual is associated with a fitness value that describes its ranking in the population, and only good individuals are transferred into a new generation. Parameter combinations are exchanged among individuals during evolution, and the whole population is guided towards better solutions (Deb, 2001, chap. 4). EAs can be extended to multi-objective problems and can easily be run on a parallel machine. In the following we describe the essential steps of this optimization strategy (Fig. 2.4).
Figure 2.4: Flowchart of the Working Principle of an EA. A random population is created and the selection, mutation and crossover operators are applied. Then the population is evaluated and each individual is associated with a unique fitness value. If several objectives are considered, the sorting is done by the multi-objective sorting strategy. After sorting, only a subset of all individuals is transferred into the new generation. The evolution is repeated until a stop criterion is fulfilled. [Flowchart nodes: Begin → Create Population (gen = 0) → Selection → Crossover → Mutation → Evaluate Individuals → Multi-Objective Sorting → Set Fitness for Individuals → stop condition? If no: gen = gen + 1 and repeat; if yes: Stop.]
Create Population
The search space is filled with a set of random solutions that form the initial population
of size N. To simplify and generalize the following steps, each parameter is normalized
between 0 and 1. The individuals are evaluated based on the chosen error functions to
determine their fitness.
Selection
In the selection step a so called mating pool is formed. The population is shuffled and individuals are pairwise compared and only the individual with the higher fitness is copied
into the mating pool. This step is done twice, so that the mating pool contains N individuals again. Thus, no matter what the order of individuals was, better individuals will have
a higher chance to be transferred. The best individual will be definitely found twice in
the mating pool as it is always compared with a worse one and the worst individual will
definitely not reach the mating pool as it can only be compared with a better one. Hence
the mating pool accumulates good solutions that are ready to “mate” in order to exchange
their parameters in the next step.
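This binary tournament can be sketched in a few lines (our own illustration; the population is simply a list of items with precomputed fitness values, and N is assumed to be even):

```python
import random

def binary_tournament(population, fitness):
    """Form a mating pool of len(population) individuals by two passes of
    pairwise comparison on a shuffled population (higher fitness wins)."""
    pool = []
    for _ in range(2):                         # two passes -> pool of size N
        order = list(range(len(population)))
        random.shuffle(order)
        for i, j in zip(order[::2], order[1::2]):
            winner = i if fitness[i] >= fitness[j] else j
            pool.append(population[winner])
    return pool
```

Note that, as described above, the best individual wins its pairing in both passes and thus appears exactly twice, while the worst individual can never win and is excluded.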
Crossover
Two individuals (parents) are randomly selected from the mating pool and the crossover
operator is applied to produce two offspring. We have chosen the Simulated Binary
Crossover, but there are other possible operators (Deb, 2001, chap. 4). The crossover
operator is applied to each parameter independently: First a random number ui between 0
and 1 is drawn from a uniform distribution. Then the value

    βqi = (2 · ui)^(1/(ηc+1))                 if ui ≤ 0.5 ;
    βqi = (1 / (2 · (1 − ui)))^(1/(ηc+1))     otherwise        (2.1)
is calculated. After obtaining βqi from the above probability distribution, the offspring
parameters are calculated as follows:
    y1i = 0.5 · [(1 + βqi) · xi1 + (1 − βqi) · xi2]        (2.2)
    y2i = 0.5 · [(1 − βqi) · xi1 + (1 + βqi) · xi2] .      (2.3)
Here y1i is the ith parameter of the first offspring and y2i that of the second; xi1 and xi2 are the ith parameters of the two parents. A large value of ηc gives a higher probability of creating “near-parent” solutions, while a small value of ηc allows distant solutions to be created as offspring. In this study we use ηc = 10. If an offspring parameter was obtained that lay outside the allowed parameter range, the value was repositioned:

    yi → 0   if yi < 0 ;
    yi → 1   if yi > 1 .        (2.4)
The crossover operator is applied until the combined size of the parent and offspring populations reaches a certain capacity C. Offspring and parent populations are combined into a new temporary population. The effect of the crossover operator is shown in Fig. 2.5a. After crossover, many individuals contain parameters that lie between the parent parameters, but no entirely new parameter values have been created yet. This is done in the next step.
Mutation
To increase the diversity of individuals we apply the mutation operator to the temporary population. We use Polynomial Mutation, but again many other operators are available (Deb, 2001, chap. 4). Each parameter of each individual is mutated independently. To determine the strength of mutation we first draw a random variable ri from a uniform distribution between 0 and 1. Then the value δi is calculated:

    δi = (2 · ri)^(1/(ηm+1)) − 1              if ri < 0.5 ;
    δi = 1 − [2 · (1 − ri)]^(1/(ηm+1))        otherwise .      (2.5)
[Figure 2.5, panels: a) Crossover (ηc = 10, ηc = 50), b) Mutation (ηm = 20, ηm = 100); axes: Normalized Parameter (0–1) vs. Occurrence in %]
Figure 2.5: Visualization of the Crossover and Mutation Operators. To visualize the effects of the crossover and mutation operators we created a large population of N = 10000 individuals, each with one single parameter x, and observed the parameter distribution after applying the operators. a) Half of the population parameters were set to x = 0.4 and the other half to x = 0.6 (red dots). The crossover operator was applied until a capacity of C = 20000 was reached; then the histogram of the resulting parameter distribution was calculated. The majority of solutions lies near one of the parents. A small ηc = 10 (black line) leads to a large variability of parameters, while a large ηc = 50 (blue line) causes almost no change in the parameter distribution. b) All parameters were set to x = 0.5 (red dot) and the mutation operator was applied to the population; only 20 % of the individuals were mutated. The histogram of the resulting parameter distribution shows that the mutation operator leads to a broad distribution for a small ηm = 20, while the distribution remains relatively narrow for a large ηm = 100.
The ith parameter is then changed by δi:

    xi → xi + δi .        (2.6)
The value of ηm defines the strength of the mutation: a small ηm is likely to produce “distant” mutations, while a large ηm has only minor effects. In this study, we use ηm = 20. We also defined a mutation probability pm = 0.2/k, where k is the number of parameters of the optimization problem. This means that on average one parameter per individual will be changed in 20 % of all individuals. If a parameter change led to a value outside the allowed parameter range, it was repositioned:

    xi → 0   if xi < 0 ;
    xi → 1   if xi > 1 .        (2.7)
The effect of the mutation operator is shown in Fig. 2.5b.
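Polynomial Mutation (Eqs. 2.5–2.7) can be sketched as follows (our own illustration; each parameter of the vector is mutated independently with probability p_mut/k, matching the per-parameter probability 0.2/k described above):

```python
import random

def polynomial_mutation(x, eta_m=20.0, p_mut=0.2):
    """Polynomial Mutation (Eqs. 2.5-2.7) on a normalized parameter vector;
    each of the k parameters is mutated with probability p_mut / k."""
    y = list(x)
    k = len(x)
    for i in range(k):
        if random.random() >= p_mut / k:
            continue                            # this parameter stays unchanged
        r = random.random()
        if r < 0.5:                             # Eq. 2.5
            delta = (2.0 * r) ** (1.0 / (eta_m + 1.0)) - 1.0
        else:
            delta = 1.0 - (2.0 * (1.0 - r)) ** (1.0 / (eta_m + 1.0))
        y[i] = min(max(y[i] + delta, 0.0), 1.0)  # Eq. 2.6, repositioned per Eq. 2.7
    return y
```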
After applying the selection, crossover and mutation operators, we are left with a diverse temporary population that has accumulated the parameter combinations leading to good solutions, but also contains entirely new parameter sets. After these randomization procedures the population needs to be sorted so that each individual obtains its ranking. This is done in the next two steps.
Evaluation
For each individual the model is evaluated based on the given parameter set and the result
is compared with the target data. If M objectives are considered, this leads to a set of M
error values that are associated with the individual.
Set Fitness
Finally the individuals are given a fitness value that describes their ranking in the population. In the case of a single error function this can be done by sorting all individuals with respect to their associated error values. If several independent error functions are considered, a multi-objective sorting concept can be used, which is explained below.
New Generation
The temporary population of size C is sorted by the fitness of its individuals, and only the first N individuals are transferred into the new generation. The generation counter is incremented and the optimization loop starts again. Any stop criterion can be defined; in this study, we simply stop after a certain number of generations.
2.4.2. Multi-Objective Sorting
If several independent error functions are to be minimized, the sorting procedure can be replaced by the concept of nondomination. One solution x1 dominates another solution x2 if it is not worse in any objective (fj) and better in at least one (fk):

    fj(x1) ≤ fj(x2)   for all j = 1 … M ;              (2.8)
    fk(x1) < fk(x2)   for at least one k = 1 … M .     (2.9)
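The domination test of Eqs. 2.8–2.9 is a one-liner; a sketch for error vectors where lower is better (the function name is our own):

```python
def dominates(e1, e2):
    """True if error vector e1 dominates e2 (Eqs. 2.8-2.9): e1 is nowhere
    worse than e2 and strictly better in at least one objective."""
    return (all(a <= b for a, b in zip(e1, e2))
            and any(a < b for a, b in zip(e1, e2)))
```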
Analyzing the domination relation in the whole population yields the set of solutions that are not dominated by any other solution (the first Pareto front (PF), or nondomination front). The first PF of the remaining solutions is ranked second, and so on (Fig. 2.6a). In this study, we use an efficient algorithm (NSGA-II) developed by Deb et al. (2002) to calculate the PFs. The PFs give the individuals a first ranking in the population. However, within such a front it is not possible to call one individual better than another without further specification.
Nevertheless, the individuals within a PF must also be ranked. The solutions with the smallest error in one of the objectives are immediately ranked best in the front. The remaining solutions are ranked using the crowding distance measure (Deb, 2001, p. 248). For a solution i, the crowding distance d_m^i with respect to objective m is defined as the distance between its direct neighbors in the same PF:
    d_m^i = f_m^(i+1) − f_m^(i−1)        (2.10)

where f_m is the objective value for objective m. The total crowding distance d^i for solution i is the weighted sum of the crowding distances over all objectives:

    d^i = Σ_m d_m^i / (f_m^max − f_m^min) .        (2.11)
Therefore very “lonely” solutions obtain higher values, whereas solutions in very “crowded” regions of the error space obtain smaller values (see Fig. 2.6b). In this way each individual in a PF obtains a further ranking, and hence a unique fitness can be determined for every individual in the population.
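The crowding distance calculation of Eqs. 2.10–2.11 can be sketched as follows (a minimal illustration, our own function name; a front is given as a list of error tuples, and boundary solutions are assigned infinite distance to give them the best fitness, as described for Fig. 2.6):

```python
def crowding_distances(front):
    """Crowding distances (Eqs. 2.10-2.11) for one Pareto front given as a
    list of M-tuples of error values; boundary solutions get infinity."""
    n, m = len(front), len(front[0])
    d = [0.0] * n
    for obj in range(m):
        order = sorted(range(n), key=lambda i: front[i][obj])
        f_min, f_max = front[order[0]][obj], front[order[-1]][obj]
        d[order[0]] = d[order[-1]] = float("inf")   # boundary: best fitness
        if f_max == f_min:
            continue                                 # objective is degenerate
        for a in range(1, n - 1):                    # interior solutions
            i = order[a]
            d[i] += (front[order[a + 1]][obj]
                     - front[order[a - 1]][obj]) / (f_max - f_min)
    return d
```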
[Figure 2.6, panels a and b: objective space f1 vs. f2; a) Pareto fronts PF 1, PF 2, PF 3; b) crowding distances d1^i, d2^i for a solution i and its neighbors i − 1 and i + 1]
Figure 2.6: Illustration of the Ranking Concept Using Pareto Fronts and the Crowding Distance. For two parameters x, y ∈ [−10, 10] we calculated two objective functions f1 = 0.125 · (20 + x + y)² and f2 = 100 + x · y. The formulas show that the two objectives are conflicting and cannot be minimized at the same time; MOO helps to find the best trade-off solutions for this problem. a) The objectives of the initial random population (N = 50) are shown (black dots). The first, second and third Pareto fronts (PFs) were calculated (colored dots). The PFs give the individuals a first ranking in the population. Black crosses symbolize the first PF after an evolution of 10 generations and show the desired trade-off solutions. b) A magnification of the first PF of the initial population (red dots). Using the crowding distance measure each individual obtains a further ranking. The measure is illustrated for a solution i (black dot), which has the highest crowding distance. The crowding distance d^i of i is calculated from the distance between its two nearest neighbors i − 1 and i + 1 with respect to each objective (dashed lines). As solution i is located in a very “lonely” region of the first PF it obtains a very good fitness (boundary individuals are given the best fitness).
Using both concepts, the Pareto-optimal ranking and the crowding distance measure, it is ensured that the optimization minimizes all error functions at the same time and that it directs individuals into unexplored regions of the error space to obtain a good spread of solutions.
2.4.3. Parallelization
As an EA is based on the evaluation of independent individuals, the population can easily be evaluated on multiple processors. Parallelization in Python is very simple via the package mpi4py (Dalcin et al., 2008) and needs only a few lines of code. All evolutionary steps are done on the master node. The master node then splits the population into equally sized packages and sends the lists of individuals to the slave nodes. The slave nodes start the evaluation once they receive their package of individuals and submit the resulting error values back to the master node. For the optimization of complex models the evaluation of one parameter set is the most time-consuming step, and parallelization can reduce the time of population evaluation by a factor of up to the number of processors used. In this study, we use a Beowulf cluster with 44 processors.2
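The master's package splitting can be sketched as follows (our own illustration of the partitioning logic only; the actual MPI send/receive calls via mpi4py are omitted):

```python
def split_population(population, n_workers):
    """Split a population into n_workers packages of (nearly) equal size,
    e.g. for distribution to worker nodes before a parallel evaluation."""
    q, r = divmod(len(population), n_workers)
    packages, start = [], 0
    for w in range(n_workers):
        size = q + (1 if w < r else 0)   # spread the remainder over the first workers
        packages.append(population[start:start + size])
        start += size
    return packages
```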
2 Cluster access was kindly provided by the Colamn project: http://gow.epsrc.ac.uk/ViewGrant.aspx?GrantRef=EP/C010841/1
CHAPTER 3
The Cell Model
3.1. Neuronal Morphology
We created a reduced morphology of a cortical layer 5 pyramidal neuron. In order to
obtain reasonable parameters for the geometry, we started with a detailed reconstruction
and simplified the geometry while maintaining the neuron’s passive response properties.
After this was done, we appended an axon. The final morphology consists of 7 functional
sections and is described with a 39-compartment model.
3.1.1. Geometry Reduction
Bekkers & Häusser (2007) did not reconstruct the cells after the experiments. We therefore chose a detailed morphological reconstruction of another cortical layer 5 pyramidal neuron from a rat of similar age (Stuart & Spruston, 1998).
We removed all ion channels, including the HCN channels, from the complex model and set the passive membrane parameters constant over the whole morphology (rm = 15000 Ω · cm², cm = 1 µF/cm², Epas = -70 mV, ra = 100 Ω · cm). We divided the complex neuron model into four functional sections (soma, basal dendrites, apical dendrite, tuft) (Fig. 3.1a) and determined the membrane area of each: Asoma = 1682 µm², Abasal = 7060 µm², Aapical = 9312 µm² and Atuft = 9434 µm².
In order to obtain a simplified geometry with passive response properties similar to those of the complex model, we modified the simplification strategy suggested by Destexhe (2001): For a given length Lx of a reduced section x, the diameter dx of that section was always adjusted such that the membrane area of the cylinder matched the area Ax of the subset of dendrites it represents:

    dx = Ax / (π · Lx) .        (3.1)
For the somatic cylinder we set:

    dsoma = Lsoma = √(Asoma / π) .        (3.2)
Besides the uncertainty about the precise length of such a reduced section, it is unclear what the magnitude of its intracellular resistivity (ra) should be after simplification. The specific passive membrane parameters (rm, cm) and the leak reversal potential Epas, however, were set to the same values as in the complex model. Hence, the entire reduced passive neuron model can be described with 8 geometrical parameters, 4 parameters for the intracellular resistivities and 3 membrane parameters, but only 7 parameters were free and used to optimize the model (Tab. 3.1).
Parameter       Result     LS Bound   US Bound   Unit
ra soma           82.0       80.0       200.0    Ω · cm
Lbasal           257.0      170.0       280.0    µm
ra basal         734.0      700.0      2000.0    Ω · cm
Lapical          500.0      500.0       800.0    µm
ra apical        261.0      150.0       300.0    Ω · cm
Ltuft            499.0      400.0       600.0    µm
ra tuft          527.0      500.0      1200.0    Ω · cm
dsoma             23.1        -           -      µm
Lsoma             23.1        -           -      µm
dbasal             8.7        -           -      µm
dapical            5.9        -           -      µm
dtuft              6.0        -           -      µm
rm global      15000.0        -           -      Ω · cm²
Epas global      -70.0        -           -      mV
cm global          1.0        -           -      µF/cm²
Table 3.1: Optimal Geometrical and Passive Parameters for the Reduced Model after Simplification. The final set of parameters for the reduced neuron model, obtained after optimizing its passive response properties. Only 7 parameters were free and used to constrain the model: the length L and ra of each cylindrical section, as well as ra of the soma. The lower search bound (LS Bound) and the upper search bound (US Bound) define the allowed region in parameter space for each parameter during the search; the parameters of the initial random population were uniformly distributed in that region. The section diameters were not free and were always adjusted using Eqn. 3.1. The diameter and the length of the soma were calculated using Eqn. 3.2. The remaining membrane parameters were the same as in the complex model and were not changed. All values are rounded.
It was already shown in earlier studies that an adequate estimate of the passive parameters can be found by optimizing the neuron's input impedance and phase-shift (Borst & Haag, 1996). Injection of an oscillating input current with a certain frequency f leads to an oscillation of the membrane potential at the same frequency. We describe the relative amplitudes of the sinusoidal input current and the membrane voltage with an impedance Z(f), and the shift of the oscillation phase with a phase-shift θ(f). The membrane time constant determines the time necessary to charge the membrane. For fast input oscillations the membrane does not have enough time to accumulate charge before the next oscillation phase begins; the input current therefore starts to hyperpolarize the membrane before an equilibrium potential is reached. The higher the input frequency, the stronger this effect and the lower the membrane oscillation amplitude. This analysis was only done with injections and recordings at the soma. Nevertheless, the impedance and phase-shift curves depend on the dendritic geometry and can therefore help us to optimize the passive properties of the entire neuron.
To obtain additional information about the voltage distribution in the neuron, we also
injected a constant input current into the soma and measured the steady state voltage
distribution.
Distance Functions
We optimized the simplified geometry such that the four objectives, somatic steady-state voltage (Vs(0)), voltage attenuation (Vs(x)), somatic input impedance (Zsoma(f)) and somatic phase-shift (θsoma(f)), mimicked those of the complex model. To obtain a better resolution for the steady-state voltage distribution, we divided each of the sections into 20 compartments during the optimization. For the calculation of the four error functions (e1, e2, e3, e4) we used the squared distance measure:

    e1 = 0.5 · (Vs^complex(0) − Vs^reduced(0))²                        (3.3)
    e2 = 0.5 · Σ_{x∈X} (Vs^complex(x) − Vs^reduced(x))²                (3.4)
    e3 = 0.5 · Σ_{f∈F} (Zsoma^complex(f) − Zsoma^reduced(f))²          (3.5)
    e4 = 0.5 · Σ_{f∈F} (θsoma^complex(f) − θsoma^reduced(f))²          (3.6)
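These four error functions share one form, half the summed squared distance between corresponding response curves, and can be sketched generically (our own illustration; the dictionary keys and the numbers in the example are hypothetical, not fitted values from the thesis):

```python
import numpy as np

def squared_distance_errors(target, model):
    """Eqs. 3.3-3.6: 0.5 * summed squared distance between corresponding
    response curves of the complex (target) and reduced (model) model.
    target/model: dicts mapping objective names to arrays of samples."""
    return {key: 0.5 * np.sum((np.asarray(target[key]) - np.asarray(model[key])) ** 2)
            for key in target}

# Illustrative values only: steady-state voltage, attenuation profile,
# impedance and phase-shift samples for a "complex" and a "reduced" model.
errors = squared_distance_errors(
    {"V_ss": [-120.0], "V_atten": [-100.0, -110.0], "Z": [60.0, 30.0], "theta": [-0.2, -0.8]},
    {"V_ss": [-118.0], "V_atten": [-101.0, -108.0], "Z": [58.0, 31.0], "theta": [-0.25, -0.7]},
)
```

Each entry of `errors` is then one independent objective for the MOO strategy of sec. 2.4.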
where X is the vector of distances at different locations in the neuron and F the vector of chosen frequencies. In order to minimize all four error functions independently, we applied the multi-objective optimization (MOO) strategy (sec. 2.4) to automatically and systematically explore the parameter space for good solutions. The population size was N = 400, the population capacity was C = 800 and the evolution was performed for 100 generations. Mutation and crossover parameters were as described (p. 27). Search boundaries and the final parameter results are given in Tab. 3.1. Voltage attenuation, somatic input impedance and phase-shift for the complex and the optimized simplified model are compared in Fig. 3.1. The passive response properties of the simplified neuron
model match well with those of the complex neuron model.
[Figure 3.1, panels: a) complex morphology (scale bar 100 µm) with tuft, apical, basal and soma sections; b) reduced model; c) steady-state voltage (mV) vs. distance to soma (µm); d) soma impedance (MΩ) vs. f (Hz); e) soma phase-shift (rad) vs. f (Hz); black: complex model, red: reduced model]
Figure 3.1: Morphology and Passive Properties of the Complex and the Reduced Model. a) The detailed reconstruction of a layer 5 pyramidal neuron taken from Stuart & Spruston (1998). We divided the complex morphology into 4 functional sections: the soma, the basal dendrites, the apical dendrite and the tuft. The oblique dendrites are taken to be part of the apical dendrite. b) An illustration of the reduced model (not to scale). The model consists of 4 cylinders that represent the sections described in (a). Each cylinder was divided into 20 compartments to increase the number of data points for plotting and geometrical optimization. c) A constant current (-1 nA) was injected into the somata of both neurons and the steady-state voltage distribution was determined for the complex model (black dots) and the optimized reduced model (red dots). d) An oscillating input current of low amplitude was injected into the somata. The amplitude of the resulting membrane potential oscillation was used to calculate the somatic input impedance for the complex model (black line) and the optimized reduced model (red line). e) The somatic phase-shift between the input current and the membrane potential oscillation for the complex model (black line) and the optimized reduced model (red line).
Noise Test
However, it should also be tested whether the reduced model really captures the passive properties of the complex model. To do this, we injected white-noise input current into the soma of the complex neuron model and the same noisy input into the somatic cylinder of the simplified model. For both models we recorded the voltage at the soma and at a distal location in the apical dendrite (Fig. 3.2). The voltage traces in the
soma and in the distal dendrite are very similar for both models and we therefore conclude
that we have found an adequate simplification of the complex dendritic geometry that we
can safely use in the following steps.
Figure 3.2: COMPARISON OF THE VOLTAGE TRACES IN THE COMPLEX AND REDUCED MODEL IN RESPONSE TO NOISY INPUT CURRENT. To test whether the reduced model is a good approximation of the complex model, we analyzed the response to white-noise current injection in both models. The same random current was injected into the somata of the models (green trace). For both models, the somatic voltage trace as well as the voltage trace at a distal location (≈ 425 µm from the soma) were recorded. The traces of the complex model (black) and of the reduced model (red dashed line) are almost indistinguishable.
3.1.2. Axon Geometry
To address the question of AP initiation, we appended an axon to the simplified morphology. The axonal geometry is based on a detailed reconstruction of a cortical layer 5 pyramidal neuron (Zhu, 2000). We represented the axon by three sections with different starting diameters and lengths (Fig. 3.3). The axonal parameters ra, rm, cm, Epas were the same as in the soma. We did not model axonal nodes of Ranvier or segments of myelin.
Figure 3.3: GEOMETRY OF THE AXON FOR THE REDUCED MODEL. An illustration of the axonal geometry (not to scale). The axon hillock is directly connected to the soma and starts with a diameter of 3.5 µm. Within 20 µm the hillock tapers to 2 µm, where the axon initial segment begins. The axon initial segment has a length of 25 µm and its diameter tapers to 1.5 µm. The rest of the axon has a uniform diameter of 1.5 µm and a length of 500 µm.
3.1.3. Segmentation
The reduced model should be able to show correct AP initiation and propagation. When
APs are travelling in multi-compartment models, it is important to use a sufficient high
number of compartments within one segment (Carnevale & Hines, 2006). In our model
the axon initial segment and the axon hillock consist of 5 compartments each. The apical
dendrite was divided into 16 and the tuft into 10 compartments. The soma, the basal
dendrite and the axon were represented by a single compartment each. Thus the simplified
model consists of 39 compartments in total.
3.2. Ion Channel Kinetics and Distribution
All ionic currents in this study are modeled in the standard Hodgkin-Huxley style (for a general overview see sec. 1.2.2) and based on published ion channel models1 . We use the unit pS/µm2 for all specific ionic conductances, whereas Hodgkin and Huxley reported their values in mS/cm2 (Hodgkin & Huxley, 1952b). As NEURON requires the unit mA/cm2 for the specific ionic current and the membrane potential is given in mV, we need to include the factor 10^−4 in the equation for the specific ionic currents (Eqn. 1.18). It is assumed that all open ion channels of the same type have the same single-channel conductance γ; fast sodium channels, for example, were reported to have γ ≈ 14 pS (Koch, 2004, p. 197). We use deterministic ion channel models to describe the conductance of an ensemble of ion channels. Thus, for an ion channel type x, the maximal specific ionic conductance (gbar_x) is proportional to the ion channel density (η_x); both terms are used interchangeably.
Ion channel gating is normally faster, and channel conductance larger, at higher temperatures. We use a temperature of celsius = 37 °C in our modelling study, but the ion channel models were created under different experimental conditions. To properly adjust the rate constants and the peak conductances, a temperature adjustment factor (tadj) was introduced (Hodgkin & Huxley, 1952b).
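The temperature scaling and the unit bookkeeping can be made concrete in a short sketch. The Q10 value of 2.3 and the recording temperature of 23 °C are example values (individual channel models below use their own); the factor 10^−4 converts gbar in pS/µm² times a driving force in mV into a specific current in mA/cm².

```python
# Temperature adjustment factor and unit conversion, as described above.
# q10 and temp_exp are example values; each channel model has its own.

def tadj(celsius, q10=2.3, temp_exp=23.0):
    """Scaling factor between the recording temperature and the simulation temperature."""
    return q10 ** ((celsius - temp_exp) / 10.0)

def specific_current(gbar_pS_per_um2, open_frac, v_mV, e_rev_mV):
    """Specific ionic current in mA/cm^2 (cf. Eqn. 1.18).

    pS/um^2 = 1e-4 S/cm^2, and multiplying by mV (1e-3 V) gives
    1e-7 A/cm^2 = 1e-4 mA/cm^2, hence the factor 1e-4.
    """
    return 1e-4 * gbar_pS_per_um2 * open_frac * (v_mV - e_rev_mV)
```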
The steady-state values and time constants for the gating variables used in this study are illustrated in Fig. 3.4 and explained in the following sections. We did not include calcium channels or any other calcium-related mechanisms in our model.
3.2.1. Hyperpolarization-Activated Cation Channel
The hyperpolarization-activated cyclic nucleotide-gated cation channel (HCN-channel)
gives rise to the h-current. The kinetic scheme and parameters for the channel were taken
1 ModelDB: http://senselab.med.yale.edu/ModelDB
Figure 3.4: ION CHANNEL GATING PARTICLES USED IN THIS STUDY. The voltage-dependent steady-state values and time constants of the gating particles used to describe the ion channel kinetics in this study are shown for the HCN, Nat, Kfast, Kslow, Nap and Km channels. The first column shows the steady-state values p∞ of the gating particles. The second column shows the corresponding time constants on a logarithmic scale; the time constants span a huge range, from less than 0.1 ms to more than 3 s. Only physiologically relevant voltages are shown.
from Kole et al. (2006):
αq(V) = 0.001 · 6.43 · (V + 154.9)/(exp((V + 154.9)/11.9) − 1)      (3.7)
βq(V) = 0.001 · 193 · exp(V/33.1)                                   (3.8)
τq(V) = 1/(αq(V) + βq(V))                                           (3.9)
q∞(V) = αq(V)/(αq(V) + βq(V))                                       (3.10)
dq/dt = (q∞ − q)/τq                                                 (3.11)
Ih = 10^−4 · gbar_HCN · q · (V − Eh) .                              (3.12)
Only one gating particle q is required to describe the time- and voltage-dependent
activation of the channel. The reversal potential was set to Eh = -47 mV. It was shown
experimentally that HCN-channels in pyramidal neurons are mainly located in the tuft
(Berger et al., 2001; Kole et al., 2006), but our data suggests that this channel type is also
found in the soma. We therefore inserted the HCN-channels into the somatic and into the
tuft section with a homogeneous density, but not into the apical dendrite.
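The rate functions of Eqs. 3.7-3.10 translate directly into code; the following sketch computes the steady state and time constant of the gating particle q at a given voltage.

```python
import math

# Steady state and time constant of the HCN gating particle q,
# from the rate functions in Eqs. 3.7-3.10 (Kole et al., 2006).

def hcn_alpha(v):
    return 0.001 * 6.43 * (v + 154.9) / (math.exp((v + 154.9) / 11.9) - 1.0)

def hcn_beta(v):
    return 0.001 * 193.0 * math.exp(v / 33.1)

def hcn_inf_tau(v):
    a, b = hcn_alpha(v), hcn_beta(v)
    return a / (a + b), 1.0 / (a + b)   # q_inf (dimensionless), tau_q (ms)
```

As expected for a hyperpolarization-activated channel, q∞ is large at hyperpolarized and small at depolarized potentials.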
3.2.2. Transient Sodium Channel
We used a recent model of the transient sodium channel (Nat-channel) (Kole et al., 2006).
The model equations are the following:
Ṽ = V − vshift − vshift2                                            (3.13)
αm(Ṽ) = 0.182 · (Ṽ + 28)/(1 − exp(−(Ṽ + 28)/9))                    (3.14)
βm(Ṽ) = −0.124 · (Ṽ + 28)/(1 − exp((Ṽ + 28)/9))                    (3.15)
αh(Ṽ) = 0.024 · (Ṽ + 50)/(1 − exp(−(Ṽ + 50)/5))                    (3.16)
βh(Ṽ) = −0.0091 · (Ṽ + 50)/(1 − exp((Ṽ + 50)/5))                   (3.17)
tadj = 2.3^((celsius−23)/10)                                        (3.18)
τm(Ṽ) = 1/(tadj · (αm(Ṽ) + βm(Ṽ)))                                 (3.19)
m∞(Ṽ) = αm(Ṽ)/(αm(Ṽ) + βm(Ṽ))                                      (3.20)
τh(Ṽ) = 1/(tadj · (αh(Ṽ) + βh(Ṽ)))                                 (3.21)
h∞(Ṽ) = 1/(1 + exp((Ṽ + 55)/6.2))                                  (3.22)
dh/dt = (h∞ − h)/τh                                                 (3.23)
INat = 10^−4 · gbar_Nat · m³ · h · (V − ENa) .                      (3.24)
Here, an activation m and an inactivation h gating variable are required to describe
the channel’s opening state. The differential equations for the gating variables depend on a shifted membrane potential Ṽ. The first shift, vshift, was included because of uncertainties about the absolute voltage when the channel kinetics were optimized against experimental recordings (sec. 1.4). The second shift, vshift2, was added to allow an additional voltage shift to be specified for a single compartment. As in previous studies (Keren et al., 2009; Mainen et al., 1995), Nat-channels were distributed in the soma, in the axon hillock and axon initial segment, as well as in the apical dendrite and tuft. The sodium channel density in the apical dendrite and tuft is described by a linear decay:
gbar_Nat(x) = f(gbar_Nat^soma − decay_Nat · x)                      (3.25)
f(x) = { x, if x ≥ 0 ;  0, otherwise } .                            (3.26)
Here, x is the distance to the soma (in µm) and decay_Nat describes the slope of the decay. The rectifying function f(x) ensures that channel densities cannot become negative for any parameter combination. The sodium reversal potential was set to ENa = 55 mV for the entire neuron.
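The linearly decaying density with the rectifier (Eqs. 3.25-3.26) is a one-liner; the parameter values below are only illustrative defaults (taken from the target set in Tab. 4.1), not fitted results.

```python
# Linearly decaying Nat density along the apical dendrite,
# rectified so that it can never become negative (Eqs. 3.25-3.26).
# Default parameter values are illustrative only.

def gbar_nat(x, gbar_soma=450.0, decay=0.5):
    """Nat density (pS/um^2) at distance x (um) from the soma."""
    g = gbar_soma - decay * x
    return g if g >= 0.0 else 0.0
```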
3.2.3. Fast Potassium Channel
The kinetics of the fast potassium channel (Kfast-channel) (Kole et al., 2006) are described by the following equations:
αn (V ) = 0.02 · (V − 25)/(1 − exp(−(V − 25)/9))
(3.27)
βn (V ) = −0.002 · (V − 25)/(1 − exp((V − 25)/9))
(3.28)
tadj = 2.1(celsius−23)/10
(3.29)
τn (V ) = 1/ tadj · (αn (V ) + βn (V ))
(3.30)
n∞ (V ) = αn (V )/(αn (V ) + βn (V ))
dn
= (n∞ − n)/τn
dt
IKfast = 10−4 · gbarKfast · n · (V − Ek ) .
(3.31)
(3.32)
(3.33)
As in previous studies, this channel type was distributed in the soma and in the apical dendritic tree (Keren et al., 2009). The channel density in the apical dendritic tree is described by an exponential function of the distance x (in µm) to the soma:

gbar_Kfast(x) = gbar_Kfast^soma · exp(−x/decay_Kfast) .             (3.34)

decay_Kfast is the distance within which the channel density decays to a fraction of 1/e. The reversal potential for potassium is set to EK = -80 mV.
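The exponential gradient of Eq. 3.34 can be sketched as follows; the defaults are again illustrative values from Tab. 4.1. At x = decay_Kfast the density has dropped to exactly 1/e of its somatic value, which is the defining property of the decay parameter.

```python
import math

# Exponentially decaying Kfast density (Eq. 3.34).
# decay is the e-fold distance; default values are illustrative only.

def gbar_kfast(x, gbar_soma=45.0, decay=66.0):
    """Kfast density (pS/um^2) at distance x (um) from the soma."""
    return gbar_soma * math.exp(-x / decay)
```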
3.2.4. Slow Potassium Channel
The kinetics of the slow potassium channel (Kslow-channel) (Korngreen & Sakmann,
2000) depend on three gating particles a, b and b1. The activation a is described by
standard Hodgkin-Huxley kinetics:
αa (V ) = 0.0052 · (V − 11.1)/(1 − exp(−(V − 11.1)/13.1))
(3.35)
βa (V ) = 0.01938 · exp(−(V + 1.27)/71) − 0.0053
(3.36)
τa (V ) = 1/(αa + βa )
(3.37)
a∞ (V ) = αa /(α + βa )
da
= (a∞ − a)/τa .
dt
(3.38)
(3.39)
The channel inactivation, however, is bi-exponential; therefore two inactivation particles were used:
τb (V ) = 360 + (1010 + 23.7 · (V + 54)) · exp(−((V + 75)/48)2 )
(3.40)
τb1 (V ) = 2350 + 1380 · exp(−0.01118 ·V ) − 210 · exp(−0.0306 ·V )
(3.41)
b∞ (V ) = 1/ (1 + exp ((V + 58) /11))
db
= (b∞ − b)/τb
dt
db1
= (b∞ − b1)/τb1 .
dt
(3.42)
(3.43)
(3.44)
These gating particles are equally weighted in the final current equation:
tadj = 2.3^((celsius−21)/10)                                        (3.45)
IKslow = 10^−4 · tadj · gbar_Kslow · a² · (0.5 · b + 0.5 · b1) · (V − EK) .   (3.46)
The slow potassium channel was inserted into the soma and into the apical dendritic tree. The channel density decays with distance x (in µm) from the soma:

gbar_Kslow(x) = gbar_Kslow^soma · exp(−x/decay_Kslow) .             (3.47)

decay_Kslow is the distance within which the channel density decays to a fraction of 1/e.
3.2.5. Persistent Sodium Channel
The persistent sodium channel (Nap-channel) was taken from Traub et al. (2003). This
sodium channel activates very rapidly but does not show inactivation. It can, therefore,
alter the overall excitability of the neuron. The kinetic equations are the following:
m∞ (V ) = 1/(1 + exp(−(V + 48)/10))

0.025 + 0.14 · exp((V + 40)/10)
if v < −40 ;
τm (V ) =
0.02 + 0.145 · exp(−(V + 40)/10) otherwise
dm
= (m∞ − m)/τm
dt
INap = 10−4 · gbarNap · m · (V − Ena ) .
(3.48)
(3.49)
(3.50)
(3.51)
The Nap-channel was only present in the soma in our model.
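The piecewise time constant of Eq. 3.49 joins continuously at V = −40 mV (both branches give 0.165 ms there); the following sketch makes that explicit.

```python
import math

# Nap-channel activation (Eqs. 3.48-3.49, Traub et al., 2003).
# The two branches of tau_m meet at V = -40 mV.

def nap_minf(v):
    return 1.0 / (1.0 + math.exp(-(v + 48.0) / 10.0))

def nap_taum(v):
    if v < -40.0:
        return 0.025 + 0.14 * math.exp((v + 40.0) / 10.0)
    return 0.02 + 0.145 * math.exp(-(v + 40.0) / 10.0)
```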
3.2.6. Muscarinic Potassium Channel
The muscarinic potassium channel (Km-channel) gives rise to the m-current. It is a non-inactivating, voltage-dependent, very slow potassium channel. As it reaches its steady-state opening on a timescale of seconds, it can lead to spike frequency adaptation. Kinetics were taken from Winograd et al. (2008):
tadj = 2.3^((celsius−36)/10)                                        (3.52)
m∞(V) = 1/(1 + exp(−(V + 35)/10))                                   (3.53)
τm(V) = 1000/(tadj · (3.3 · exp((V + 35)/20) + exp(−(V + 35)/20)))  (3.54)
dm/dt = (m∞ − m)/τm                                                 (3.55)
Im = 10^−4 · gbar_Km · m · (V − EK) .                               (3.56)
We used Km-channels only in the soma.
3.3. Defining the Static and the Free Parameters
We have found a reasonable geometry for the reduced model, we have defined a set of ion channels that need to be present, and we have determined that approximately 39 compartments are needed to describe the interactions between dendrites, soma and axon. The model therefore has a very long list of parameters that could be modified. In order to obtain reasonable solutions, we need to fix those parameters that are less important or that can safely be set to an experimentally secured value. The remaining parameters are those that are crucial for the neuronal response properties but cannot be deduced directly from experimental data. In our reduced model we set the following as static: the optimized reduced morphology and the intracellular resistivities, as well as all ion channel kinetics. The parameters that remain uncertain and need to be constrained are summarized in Tab. 3.2.
Table 3.2: FREE PARAMETERS IN THE REDUCED MODEL. Eighteen parameters in our reduced model cannot be constrained directly by experiments but are crucial for the neuron's active and passive response properties. We are uncertain about the leak reversal potential Epas, but we assume that it is uniform in the whole model. The specific membrane resistance (rm) and capacitance (cm) are only adjusted directly in the axosomatic region (soma, basal, axon initial segment, hillock, axon). The spinefactor describes the extension of the membrane area due to dendritic spines; rm and cm for the apical dendrite and the tuft are therefore given indirectly through rm^apical,tuft = rm^axosomatic / spinefactor and cm^apical,tuft = cm^axosomatic · spinefactor. We are also uncertain about the ion channel densities in the soma as well as about the functions describing their gradients in the dendrites. The ion channel densities in the axon hillock and axon initial segment are also unknown and therefore parameterized. Moreover, we introduced two parameters describing the shift of the activation and inactivation curves for the transient sodium channel. The parameter vshift is applied globally as a general parameter for the kinetic model. The parameter vshift2 is only applied to the axon initial segment in order to introduce an additional local voltage shift as found experimentally (Colbert & Pan, 2002).
Parameter             Unit
Epas^global           mV
rm^axosomatic         Ω·cm²
cm^axosomatic         µF/cm²
spinefactor           1
gbar_Nat^soma         pS/µm²
gbar_Kfast^soma       pS/µm²
gbar_Kslow^soma       pS/µm²
gbar_Nap^soma         pS/µm²
gbar_Km^soma          pS/µm²
gbar_HCN^soma         pS/µm²
gbar_HCN^tuft         pS/µm²
decay_Nat             pS/µm³
decay_Kfast           µm
decay_Kslow           µm
gbar_Nat^hillock      pS/µm²
gbar_Nat^iseg         pS/µm²
vshift_Nat^global     mV
vshift2_Nat^iseg      mV
CHAPTER 4
Results
4.1. Experimental Data
The somatic voltage traces of layer 5 pyramidal neurons were recorded before and after pinching in response to the same current stimulus protocol. The protocol consisted of current steps with different amplitudes. The current was switched on for 1000 ms after a delay of 100 ms. The amplitudes were increased from -0.1 nA to 1 nA in 0.05 nA steps. When we compare the data before and after pinching, several significant changes become obvious. Most notably, the input resistance increases upon pinching, leading to increased firing rates. The spike onset shape, the spike height and the strength of the AHP also change after pinching (Fig. 4.1).
4.2. Fitting Strategy
As outlined in chap. 3, we have created a model with a reduced morphology of 39 compartments and defined a biologically realistic channel composition based on experimental data. We have chosen a subset of 18 parameters that determine the neuron's response properties but that are not fully constrained by experimental evidence. The experimental recordings before and after pinching allow us to observe the neuron's behaviour under two different conditions, with and without an apical dendrite, which might give us enough information to constrain the free parameters indirectly. Once we have constrained our model, we should be able to quantitatively explain the effects of pinching and use the model to explore other mechanisms.
To optimize our model to the given data we apply the multi-objective optimization (MOO) strategy (sec. 2.4). To obtain reasonable parameter combinations during the stochastic search, we introduced search boundaries for each of the 18 parameters. These boundaries are set relatively wide, in order to put the least amount of prior knowledge into the values, but remain in the biologically realistic range (Tab. 4.1). In the following optimization steps we will use a population size of N = 500, a population capacity of C = 1000 and the
Figure 4.1: EXPERIMENTAL RECORDINGS BEFORE AND AFTER PINCHING. The experimental recordings before (black lines) and after pinching (blue lines) are shown. Several significant changes are obvious. a) The AP onset and the repolarizing phase (offset) are overlaid before and after pinching. The voltage threshold for AP initiation is shifted to more hyperpolarized levels and the afterhyperpolarization (AHP) is enhanced after pinching. The spike height (measured from onset to peak) also increases. b) The first 600 ms of the spiketrains under both conditions are compared. The firing frequency increases after pinching. c) Four traces in response to subthreshold current injections before and after pinching are shown. From the stronger voltage responses it can be concluded that the input resistance is increased after pinching. We also see that the resting potential drops slightly but significantly, and that there is still a sag response after pinching. d) The current amplitudes used for the stimulation. The same current protocol was used before and after pinching (green lines).
evolution will be performed for 1000 generations. Crossover and mutation parameters are as explained (p. 27). On our cluster, one complete evolution took about two days and produced approximately 100 MB of data.
To lead any optimization algorithm to good solutions, we need to define the distance between the model response and the target data. However, it is not straightforward to define a single reasonable distance function for spiking traces that reflects the quality of the model responses (see Fig. 1.5). Our approach to defining useful distance functions is explained in the following.
4.2.1. Checking Response Properties
Pyramidal neurons operate in at least two different modes. Below a certain current injection amplitude they show only passive subthreshold responses. If the current amplitude is increased, pyramidal neurons start to elicit APs. For many parameter combinations, however, the neuron model will not spike for any current injection, or it will spontaneously elicit APs under current injections where only a passive response is expected. Therefore we first need to check whether a given parameter set leads to a model that shows “good” spiking behaviour.
An AP time is defined as the time at which the voltage crosses a threshold value of θ = -20 mV from below. A parameter set is defined as “good” when the following conditions hold:
• The model neuron should not elicit an AP when the target data shows only passive responses, and the model response should show APs when the target data does. If spiking is expected, the following further conditions must hold to ensure that the given voltage trace is a regular spiketrain without bursts:
  – The model should elicit at least 6 spikes.
  – The spike width, defined as the time between the two points where the voltage crosses the threshold θ = -20 mV from below and from above, should not exceed 3 ms.
  – The absolute spike heights from the third to the next-to-last spike should not change by more than 20 %.
  – The voltage minimum between the third and the fourth spike should not differ by more than 10 % from the voltage minimum between the next-to-last and the last spike.
  – There should not be any interspike interval below 15 ms.
If one of these conditions is not fulfilled, the neuron’s response properties are considered
as “bad”. Hence, the responsible model parameter set must be “punished” by associating
it with large error values. This will tell the optimization algorithm to avoid this position
in parameter space during the subsequent search.
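The threshold-crossing definition and two of the criteria above can be sketched as follows. The trace arrays are hypothetical, and only the spike-count and minimum-ISI conditions are implemented; the width, height and AHP checks would be added analogously.

```python
# Sketch of the "good spiketrain" test: AP times are crossings of
# theta = -20 mV from below; only two of the listed criteria are shown.

THETA = -20.0  # mV

def ap_times(t, v, theta=THETA):
    """Times at which v crosses theta from below."""
    return [t[k] for k in range(1, len(v)) if v[k - 1] < theta <= v[k]]

def is_good_spiketrain(spike_times, min_spikes=6, min_isi=15.0):
    """Check the spike-count and minimum-ISI conditions."""
    if len(spike_times) < min_spikes:
        return False
    isis = [b - a for a, b in zip(spike_times, spike_times[1:])]
    return all(isi >= min_isi for isi in isis)
```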
4.2.2. Distance Functions
If the conditions defining a “good” spiketrain are fulfilled, we can specify more detailed objectives that are elementary for quantitatively describing the spiking behaviour of pyramidal neurons. We have chosen four objectives that we consider good representations of the neuron's response properties and that correlate with our set of free parameters. These objectives were used to calculate four error functions that define the distance between the response of the reduced model and the target data. All time-dependent functions were interpolated such that the step size between two time points (dt) was always the same; the integrals described below are therefore sums over discrete time points. The objective functions are illustrated in Fig. 4.2. The error functions are explained in the following:
49
CHAPTER 4. RESULTS
1. The passive subthreshold response traces should help us to constrain the passive membrane parameters of the neuron as well as the HCN-channel density. For each passive trace i, the squared distance between the model response V_{i,passive}^model(t) and the target data V_{i,passive}^target(t) is calculated only in the time window between 50 ms and 400 ms. The contributions of four non-spiking traces were summed into the passive-error value:

   ẽpassive = 0.5 · Σ_{i=1}^{4} ∫_{50}^{400} (V_{i,passive}^target(t) − V_{i,passive}^model(t))² dt .   (4.1)
2. The voltage trace average of an AP (Vspike(t)) was determined by a high-resolution alignment (sec. 2.2). The first two spikes and the last spike of a trace were excluded from averaging to circumvent conflicts with adaptive currents and with spike shape changes when the stimulating current is switched off. The time axis of the mean spike was shifted such that at Vspike(0) the first derivative dVspike/dt was at its maximum, which occurs slightly before the actual AP peak. In this way, the averaged APs obtained from the target data and from the model response could be aligned at t = 0. We found that choosing the peak of the first derivative as the alignment time was a better choice than the actual voltage peak of the AP: the shape of the peak was variable in different data sets, and some of these peaks could not be reproduced with our model.
   The detailed spike onset contains information about the sodium channel distribution in the soma and axon. The squared distance between the averaged model and target AP onsets was calculated from t = −0.5 ms to t = 0.1 ms. This value was combined with the squared distance between the first derivatives in the same time window to obtain the final onset-error value:

   ẽonset = 0.5 · ∫_{−0.5}^{0.1} [ (V_spike^target(t) − V_spike^model(t))²                (4.2)
            + 0.01 · (dV_spike^target/dt(t) − dV_spike^model/dt(t))² ] dt .               (4.3)
3. The detailed spike repolarization (offset) should tell us about the potassium channel composition in the soma and about the sodium and potassium channel gradients in the apical dendrite. We therefore calculated the squared distance between the model and target APs in the time window t = 0.1 ms to t = 14 ms. We also included the distance between the first derivatives and obtained the offset-error value:

   ẽoffset = 0.5 · ∫_{0.1}^{14} [ (V_spike^target(t) − V_spike^model(t))²                 (4.4)
             + 0.01 · (dV_spike^target/dt(t) − dV_spike^model/dt(t))² ] dt .              (4.5)
4. Finally, the interspike intervals constrain the parameters responsible for the adaptive currents and for the overall excitability of the cell. The difference between the ith interspike interval of the model and target spiketrains was also calculated using the squared distance measure. All interspike interval distances were then summed and normalized by n, leading to the isis-error. Here, n is the number of interspike intervals of the spiketrain with fewer interspike intervals:

   ẽisis = (0.5/n) · Σ_i (ISIs^target(i) − ISIs^model(i))² .   (4.6)
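The isis-error of Eq. 4.6 is the simplest of the four to write down; this sketch normalizes over the intervals of the shorter spiketrain, as described above.

```python
# Sketch of the isis-error (Eq. 4.6): squared ISI differences summed
# over the n intervals of the shorter spiketrain, normalized by n.

def isi_error(isis_target, isis_model):
    n = min(len(isis_target), len(isis_model))
    return (0.5 / n) * sum((isis_target[i] - isis_model[i]) ** 2
                           for i in range(n))
```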
4.2.3. Combining Intact and Pinching Data
All four error functions were calculated for the target data and the model responses under two conditions: first in the intact neuron (ẽ_x^intact) and then in the neuron with an occluded apical dendrite (ẽ_x^pinch). These values were then combined into the four final error functions:
epassive = 2 · ẽpassive^intact + ẽpassive^pinch                     (4.7)
eonset = ẽonset^intact + ẽonset^pinch                               (4.8)
eoffset = ẽoffset^intact + ẽoffset^pinch                            (4.9)
eisis = ẽisis^intact + ẽisis^pinch .                                (4.10)
The passive-error of the intact neuron was weighted more strongly to equalize the influence of the errors before and after pinching. Pinching in the model neuron was simulated by increasing the intracellular resistivity and decreasing the diameter of the most proximal compartment of the apical dendrite (ra^pinch = 10^6 Ω·cm, d^pinch = 0.1 µm).
We think that these four error functions are a good representation of the response properties of pyramidal neurons. They are determined from a combination of 8 passive responses, the detailed AP shape and precise interspike intervals from 2 spiketrains recorded under different conditions in the same neuron. Thus, these error functions put heavy constraints on our reduced model. We also think that introducing these distance functions will help us to find a good trade-off solution for our reduced model approximating the experimental data, although it might not be possible to optimize all error functions
at the same time. During our studies we have seen that the nonlinear interactions between the different parameters are stronger than expected. For example, the passive responses also depend on the sodium channel density, and conversely the spike shape also depends on the passive membrane parameters. It is therefore necessary to minimize all four error functions simultaneously.
4.2.4. Selection of the Optimal Solution
During the optimization we save each of the 1000 generations in order to make a final selection of a single optimal solution after the evolution. Thus we obtain a solution matrix M(i, j) of size 1000 × 500. A solution is referred to by S(i, j) and its corresponding error values by ex(i, j), where i is the generation number, j the index of the individual in generation i, and x the name of the distance function. While during the search we do not put any weight on the four distance functions, we must introduce a weighting at the final step of selection. But how should such a weighting look without being arbitrary?
As we expect from our knowledge of the optimization algorithm, the minimal distance value for each independent objective decays as the evolution proceeds, and only rarely is a single objective-optimal solution lost due to a mutation (Figs. 4.4, 4.8). The minimal distances in the last generation can be used to introduce a reasonable weighting:
wpassive = 1 / min_j (epassive(1000, j))                            (4.11)
wonset = 1 / min_j (eonset(1000, j))                                (4.12)
woffset = 1 / min_j (eoffset(1000, j))                              (4.13)
wisis = 1 / min_j (eisis(1000, j)) .                                (4.14)
We can now normalize the distance functions by these weights. Thereby we not only cancel the units, but also equalize the magnitudes of the error values. Summing the weighted distance functions leads to a total-error value for each generation and individual:

etotal(i, j) = wpassive · epassive(i, j) + wonset · eonset(i, j)    (4.15)
             + woffset · eoffset(i, j) + wisis · eisis(i, j) .      (4.16)
The best total-error value did not always decay over the generations. This can be explained by the fact that the weighting is a posteriori: the minimization was never performed on this combined error function (Figs. 4.4, 4.8). To select a single optimal solution, we therefore decided not to take the best individual from the last generation, but instead to choose that individual
associated with the lowest total-error value ever found during the evolution:

(i′, j′) = argmin_{(i, j)} etotal(i, j)                             (4.17)
optimal_solution = S(i′, j′) .                                      (4.18)
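The a posteriori weighting and the final selection (Eqs. 4.11-4.18) can be sketched compactly. The nested structure `errors[x][i][j]` holding objective x for generation i and individual j is a hypothetical stand-in for the saved solution matrix.

```python
# Sketch of Eqs. 4.11-4.18: weights are the inverse of the best value
# of each objective in the last generation; the selected solution is
# the (generation, individual) pair with the lowest weighted total error.

def select_optimal(errors):
    objectives = list(errors.keys())
    last = len(next(iter(errors.values()))) - 1
    # Eqs. 4.11-4.14: one weight per objective, from the last generation
    w = {x: 1.0 / min(errors[x][last]) for x in objectives}
    best, best_idx = float("inf"), None
    for i in range(last + 1):
        for j in range(len(errors[objectives[0]][i])):
            # Eqs. 4.15-4.16: weighted total error of individual (i, j)
            total = sum(w[x] * errors[x][i][j] for x in objectives)
            if total < best:                 # Eq. 4.17: global argmin
                best, best_idx = total, (i, j)
    return best_idx, best
```

Note that the weighted total is evaluated over all generations, not just the last one, which is exactly why the selected solution can stem from an earlier generation.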
4.3. Fitting Results
4.3.1. Surrogate Data Optimization
In order to test the described fitting strategy, we first analyzed the performance of the algorithm on data generated by the model itself (surrogate data). This model-to-model fit guarantees that a perfect solution exists, namely the solution with exactly the same parameters as the target. Thus we circumvent the problem that the defined model might not be adequate for representing experimental data.
Our choice of target parameters is shown in Tab. 4.1. With these parameters the target data were created, and the search algorithm was started to find parameters that reproduce these data well. We performed the search twice under exactly the same conditions, but with different random initial populations (Trial 1 and Trial 2). The two resulting best parameter sets are also given in the table. The corresponding optimal model responses before and after pinching can be compared with the target data in Fig. 4.3 and Fig. 4.5. For the first optimization trial we also plotted the evolution of the minimum of each of the four objective-distance functions together with the best total-error value (Fig. 4.4). In order to visualize the improvement achieved with the MOO strategy, we also plotted the best solution of the initial random population, before any evolution has been performed (Fig. 4.2).
Both optimization trials lead to very good model solutions that mimic the surrogate data very well and provide good fits for all objectives. We also see that a significant improvement is made during the evolution from the best initial random solution to the final optimal one.
Parameter             Target      Result 1    Result 2    LS Bound    US Bound    Unit
Epas^global           -78.0       -76.0       -74.0       -85.0       -60.0       mV
rm^axosomatic         12000.0     19521.0     15446.2     10000.0     30000.0     Ω·cm²
cm^axosomatic         1.5         1.6         1.7         0.6         3.0         µF/cm²
spinefactor           1.0         1.5         1.7         0.8         3.0         1
gbar_Nat^soma         450.0       591.1       1043.8      0.0         1500.0      pS/µm²
gbar_Kfast^soma       45.0        48.4        56.5        0.0         300.0       pS/µm²
gbar_Kslow^soma       370.0       392.2       339.4       0.0         1000.0      pS/µm²
gbar_Nap^soma         2.9         1.2         1.0         0.0         5.0         pS/µm²
gbar_Km^soma          14.0        14.2        12.3        0.0         15.0        pS/µm²
gbar_HCN^soma         39.0        40.7        31.6        0.0         50.0        pS/µm²
gbar_HCN^tuft         65.0        36.3        43.0        0.0         150.0       pS/µm²
decay_Nat             0.5         0.8         1.9         0.0         2.0         pS/µm³
decay_Kfast           66.0        77.4        78.5        1.0         100.0       µm
decay_Kslow           27.0        23.2        26.3        1.0         100.0       µm
gbar_Nat^hillock      17000.0     17555.2     13459.7     5000.0      20000.0     pS/µm²
gbar_Nat^iseg         16000.0     19479.0     18820.3     5000.0      20000.0     pS/µm²
vshift_Nat^global     7.0         7.7         4.2         0.0         10.0        mV
vshift2_Nat^iseg      -5.0        -6.2        -1.4        -15.0       0.0         mV
Table 4.1: TARGET PARAMETERS AND BEST PARAMETER COMBINATIONS AFTER THE SURROGATE DATA OPTIMIZATIONS. The target parameters (Target) used for creating the target surrogate data as well as the optimal parameter sets found after two separate searches (Result 1, Result 2) are shown. The lower search bound (LS Bound) and the upper search bound (US Bound) defined the region in parameter space that was explored for good solutions during the optimization. The parameters for the initial random population were uniformly distributed in that region. The two optimal solutions differ from the target parameters and from each other. All reported values were rounded.
4.3. FITTING RESULTS
[Figure: four panels (a-d) comparing target data and model traces, before and after pinching; scale bars 30 mV / 0.2 ms / 5 ms (a), 30 mV (b), 5 mV (c), 0.4 nA (d); time axis 0-600 ms.]
Figure 4.2: BEST SOLUTION OF THE INITIAL RANDOM POPULATION BEFORE THE SURROGATE DATA OPTIMIZATION, TRIAL 1. All objectives used for the optimization are illustrated and shown for the target data and the model solution before pinching (black and red lines) and after pinching (blue and orange lines): a) The detailed shape of the AP onsets and offsets used to determine the objectives onset and offset. b) Only the first 600 ms of the spiketrains resulting from a current injection of 0.4 nA are shown to better visualize the interspike intervals. The interspike intervals of the entire spiketrains were used to calculate the objective isis. c) The 4 passive subthreshold traces used to calculate the objective passive. d) The 5 different current amplitudes used for stimulation (-0.10 nA, -0.05 nA, 0.00 nA, 0.05 nA, 0.4 nA) (green lines).
[Figure: same panel layout as Fig. 4.2.]
Figure 4.3: BEST SOLUTION AFTER THE SURROGATE DATA OPTIMIZATION, TRIAL 1.
[Figure: normalized error (log scale, 1 down to 0.0001) of the objectives passive, onset, offset, isis and of the total error, plotted against generation 0-1000.]
Figure 4.4: EVOLUTION OF THE FOUR OBJECTIVE-DISTANCE FUNCTIONS AND OF THE TOTAL-ERROR VALUE DURING THE SURROGATE DATA OPTIMIZATION, TRIAL 1. All values are shown on a logarithmic scale and were normalized by their maximal value, so the relative improvement during the optimization can be seen. The distance describing the passive response error epassive (long-dashed line) was minimized by a factor of more than 1400 by our algorithm. The distance for the objective onset eonset (medium-dashed line) could be optimized more than 100×. The distance for the objective offset eoffset (short-dashed line) as well as the error for the objective isis eisis (long-dash-short-dashed line) were optimized by a factor of more than 2500. However, the total-error value etotal (red line) could only be minimized about 50× during the optimization procedure. The optimal solution we selected at the end of the evolution was found in generation 854 in this optimization trial.
[Figure: same panel layout as Fig. 4.2.]
Figure 4.5: BEST SOLUTION AFTER THE SURROGATE DATA OPTIMIZATION, TRIAL 2.
4.3.2. Experimental Data Optimization
After the surrogate data optimization had shown that the fitting strategy is able to constrain our model to several separate active and passive target traces, we replaced the surrogate data with real experimental recordings.
We performed the search three times (Trial 1, Trial 2 and Trial 3). The three resulting optimal parameter sets can be seen in Tab. 4.2. The corresponding optimal model responses before and after pinching can be compared with the experimental target data in Figs. 4.7, 4.9 and 4.10. In order to show that the optimization algorithm leads to better solutions than the selection of the best solution of a random population, we also plotted the evolution of the minimal error values during the optimization (Fig. 4.8). An illustration of the best individual from the initial population, before any optimization had been performed, is shown in Fig. 4.6.
Parameter            Result 1   Result 2   Result 3   LS Bound   US Bound   Unit
E_pas^global         -78.4      -77.8      -75.8      -85.0      -60.0      mV
r_m^axosomatic       20494.5    11609.7    19975.8    10000.0    30000.0    Ω·cm²
c_m^axosomatic       2.0        1.8        2.1        0.6        3.0        µF/cm²
spinefactor          1.2        0.8        1.1        0.8        3.0        1
gbar_Nat^soma        780.6      380.8      796.7      0.0        1500.0     pS/µm²
gbar_Kfast^soma      47.7       50.0       56.3       0.0        300.0      pS/µm²
gbar_Kslow^soma      396.8      400.5      392.7      0.0        1000.0     pS/µm²
gbar_Nap^soma        1.3        2.8        0.8        0.0        5.0        pS/µm²
gbar_Km^soma         11.8       15.0       12.3       0.0        15.0       pS/µm²
gbar_HCN^soma        32.0       23.1       31.0       0.0        50.0       pS/µm²
gbar_HCN^tuft        53.0       60.2       31.9       0.0        150.0      pS/µm²
decay_Nat            1.4        0.5        1.4        0.0        2.0        pS/µm³
decay_Kfast          82.7       76.0       52.9       1.0        100.0      µm
decay_Kslow          10.7       1.6        63.5       1.0        100.0      µm
gbar_Nat^hillock     7574.9     12561.8    11199.6    5000.0     20000.0    pS/µm²
gbar_Nat^iseg        12727.9    16152.3    15932.1    5000.0     20000.0    pS/µm²
vshift_Nat^global    8.2        8.5        5.9        0.0        10.0       mV
vshift2_Nat^iseg     -7.9       -7.5       -5.8       -15.0      0.0        mV
Table 4.2: BEST PARAMETER COMBINATIONS AFTER THE EXPERIMENTAL DATA OPTIMIZATIONS. The optimal parameter sets found after three separate searches (Result 1, Result 2, Result 3) are shown. The lower search bound (LS Bound) and the upper search bound (US Bound) defined the region in parameter space that was explored for good solutions during the optimization. The parameters for the initial random population were uniformly distributed in that region. The three optimal solutions differ from one another. All reported values were rounded.
The MOO-strategy was able to constrain our model to the experimental data. The spike onset and offset shapes as well as the interspike intervals and the subthreshold responses are reproduced well for the intact and pinched conditions in all three trials. Only the spike onset and the height of the AP after pinching are not reproduced satisfactorily.
The solutions found are trade-off solutions between the optima of the four objectives. We have seen that the model can perform better in each objective alone, but such a solution is obtained only at the cost of the other objectives (data not shown).
We see that the best solution of the initial random population does not provide good fits. Although the decay of the distance functions is less pronounced than in the surrogate data optimization (Fig. 4.4), the results show that the optimization algorithm works well and constrains the parameters better than a random choice does.
[Figure: same panel layout as Fig. 4.2, with the experimental recordings as target data.]
Figure 4.6: BEST SOLUTION OF THE INITIAL RANDOM POPULATION BEFORE THE EXPERIMENTAL DATA OPTIMIZATION, TRIAL 1.
[Figure: same panel layout as Fig. 4.2.]
Figure 4.7: BEST SOLUTION AFTER THE EXPERIMENTAL DATA OPTIMIZATION, TRIAL 1.
[Figure: normalized error (log scale) of the objectives passive, onset, offset, isis and of the total error, plotted against generation 0-1000.]
Figure 4.8: EVOLUTION OF THE FOUR OBJECTIVE-DISTANCE FUNCTIONS AND OF THE TOTAL-ERROR VALUE DURING THE EXPERIMENTAL DATA OPTIMIZATION, TRIAL 1. All values are shown on a logarithmic scale and were normalized by their maximal value, so the relative improvement during the optimization can be seen. The distance describing the passive response error epassive (long-dashed line) was minimized by a factor of more than 50 by our algorithm. The distance for the objective onset eonset (medium-dashed line) could be optimized only about 3×. The distance for the objective offset eoffset (short-dashed line) as well as the error for the objective isis eisis (long-dash-short-dashed line) were optimized by a factor of approximately 20. The total-distance value etotal (red line) could be minimized about 25× during the optimization procedure. The optimal solution we selected at the end of the evolution was found in generation 741 in this optimization trial.
[Figure: same panel layout as Fig. 4.2.]
Figure 4.9: BEST SOLUTION AFTER THE EXPERIMENTAL DATA OPTIMIZATION, TRIAL 2.
[Figure: same panel layout as Fig. 4.2.]
Figure 4.10: BEST SOLUTION AFTER THE EXPERIMENTAL DATA OPTIMIZATION, TRIAL 3.
4.3.3. Generalization for Other Input Currents
After having found several model solutions that represent the experimental data well in all objectives, we also need to test whether a selected model is able to reproduce experimental data that have not been used for the optimization, that is, whether the model generalizes. The optimization was performed with 4 subthreshold currents and one suprathreshold current with an amplitude of 0.4 nA. In order to test whether our model generalizes we can change the amplitudes of the suprathreshold current injections and check whether we are able to predict the experimental spiking responses.
We first tested how well our model predicts the experimental mean firing frequency in response to current injections between 0 nA and 1.1 nA under intact and pinched conditions (Fig. 4.11). The model predicts the firing frequency of the experimental data well for many currents. Moreover, the current thresholds for spike initiation under both conditions are predicted well by our model. Only for current injections above 0.8 nA does the prediction become worse.
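The mean firing frequency used for such an f-I comparison can be obtained by counting upward threshold crossings in the voltage trace. A minimal sketch; the 0 mV threshold and the synthetic trace are illustrative assumptions, not the analysis settings of this work:

```python
def mean_firing_frequency(trace_mv, stim_dur_ms, threshold_mv=0.0):
    """Count upward threshold crossings and convert to a rate in Hz."""
    n_spikes = sum(
        1 for v_prev, v in zip(trace_mv, trace_mv[1:])
        if v_prev < threshold_mv <= v
    )
    return n_spikes / (stim_dur_ms * 1e-3)  # spikes per second

# Synthetic trace sampled at 0.1 ms: 5 "spikes" within a 1000 ms stimulus.
trace = [-70.0] * 10000
for i in (1000, 3000, 5000, 7000, 9000):  # sample indices of the spike peaks
    trace[i] = 30.0
print(mean_firing_frequency(trace, 1000.0))  # -> 5.0
```

Repeating this for every injected current amplitude yields the two f-I curves compared in Fig. 4.11.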
[Figure: mean firing frequency (0-60 Hz) versus input current (0-1.1 nA) for data and model, before and after pinching.]
Figure 4.11: MODEL PREDICTION OF FIRING FREQUENCY. The experimentally measured and the predicted firing frequencies in response to current injections from 0 nA to 1.1 nA are shown for the two conditions before pinching (black and red dots) and after pinching (blue and orange dots).
Furthermore, we challenge the model not only to predict the firing frequencies, but also to reproduce detailed spiketrains that were not used for the optimization. We changed the suprathreshold current injection from 0.4 nA to 0.6 nA and compared the predicted spiking responses with the experimentally measured ones (Fig. 4.12). The stronger current injection leads to a change in spike rate of ≈ 10 Hz for the experimental data, so predicting these spiketrains is a demanding test. Nevertheless, the spiketrains and the detailed spike shapes before and after pinching are predicted well by our model. Remarkably, our model also predicts the first experimentally measured interspike intervals, which are much shorter than the following ones. Thus, in conclusion, the model generalizes well.
[Figure: same panel layout as Fig. 4.2, with a 0.6 nA suprathreshold current injection.]
Figure 4.12: MODEL PREDICTION OF DETAILED AP SHAPE AND SPIKETRAIN IN RESPONSE TO ANOTHER INPUT CURRENT. We checked whether our optimized reduced model is able to predict the voltage trace of a spiketrain in response to a higher current injection (0.6 nA, (d)) that was not used for the optimization. The objectives ((a), (b), (c)) can be compared for the model prediction and the experimental recordings before (black and red lines) and after pinching (blue and orange lines).
4.4. Model Evaluation
We have selected the best solution after the experimental data optimization, Trial 1, for all following analyses. This model showed very good fit results and good generalization for other input currents, and thus we will use it to study mechanisms that would be hard or impossible to explore experimentally. The model will be referred to simply as "our model" in the following.
4.4.1. Resting Potential
We measured the resting potential in the dendritic tree as a function of distance to the soma and observed a change of about 4 mV in our model (Fig. 4.13a). HCN-channels are open at rest, and therefore the resting potential at a specific location depends on the local HCN-channel density. The resting potential could also be altered by sodium and potassium channels, which are slightly open at rest as well; however, larger densities would be needed for these channels to have a significant effect on the resting potential. As it is not completely clear which conductances lead to the observed change of the resting potential, we plotted the steady-state conductance of each ion channel present in the apical dendritic tree as a function of distance to the soma (Fig. 4.13b,c,d,e). It can be seen that the effect of sodium channels is negligible, but that potassium and HCN-channels indeed have a significant influence on the resting potential. The potassium channel density decays quickly with distance to the soma. Thus the hyperpolarizing potassium conductance is only present in the somatic region, and hence the distal dendrite is more depolarized than the soma at rest. On the other hand, we find a higher HCN-channel density in the apical tuft than in the soma. These channels therefore introduce a depolarizing conductance that is stronger in the tuft than in the soma, so the distal depolarization is enhanced even further.
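The steady-state conductance of a channel at rest is its peak density scaled by its steady-state activation. A minimal sketch with Boltzmann activation curves; the half-activation voltages and slopes below are illustrative assumptions, not the fitted channel kinetics, while the densities are the soma values from Tab. 4.1:

```python
import math

def boltzmann(v_mv, v_half_mv, k_mv):
    """Steady-state activation; a negative slope k models channels (like HCN)
    that are activated by hyperpolarization."""
    return 1.0 / (1.0 + math.exp((v_half_mv - v_mv) / k_mv))

def conductance_at_rest(gbar_ps_um2, v_rest_mv, v_half_mv, k_mv):
    """Effective steady-state conductance gbar * m_inf(V_rest) in pS/um^2."""
    return gbar_ps_um2 * boltzmann(v_rest_mv, v_half_mv, k_mv)

v_rest = -70.0
g_hcn = conductance_at_rest(39.0, v_rest, -82.0, -7.0)   # soma HCN density, Tab. 4.1
g_nat = conductance_at_rest(450.0, v_rest, -30.0, 6.0)   # soma Nat density, Tab. 4.1
print(g_hcn, g_nat)  # HCN is substantially open at rest, Nat almost fully closed
```

Even with a 10× larger peak density, the sodium channel contributes far less conductance at rest than HCN, which is the qualitative point made above.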
4.4.2. AP Backpropagation
We were also interested in whether the automatically created reduced model would show AP backpropagation. Thus we recorded the voltage at different locations in the neuron when a single spike was observed in the soma in response to a short 20 ms current stimulation. We could see that the AP is generated in the axon initial segment and then travels antidromically through the soma into the dendrites (Fig. 4.14a). To further quantify the AP shape at different locations we calculated its absolute peak (Fig. 4.14b) and determined its half-width (Fig. 4.14c). The half-width was defined as the width at halfway from -60 mV to the AP peak. It can be seen that the AP peak decays slowly in the apical dendrite and quickly when it reaches the tuft (500 µm). The half-width increases linearly with distance to the soma; when the AP reaches the tuft, the slope of this change is enhanced.
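The two measures used here can be computed directly from a voltage trace. A minimal sketch of the half-width definition given above; the triangular AP is a synthetic stand-in for a simulated trace:

```python
def ap_peak_and_halfwidth(trace_mv, dt_ms, base_mv=-60.0):
    """Absolute AP peak, and width at the level halfway from base_mv to that peak."""
    peak = max(trace_mv)
    half_level = base_mv + 0.5 * (peak - base_mv)
    above = [i for i, v in enumerate(trace_mv) if v >= half_level]
    half_width_ms = (above[-1] - above[0]) * dt_ms
    return peak, half_width_ms

# Synthetic triangular AP rising from -70 mV to +30 mV and back, dt = 0.025 ms.
rise = [-70.0 + i for i in range(101)]   # -70 ... +30 mV
fall = rise[::-1][1:]                    # +29 ... -70 mV
peak, hw = ap_peak_and_halfwidth(rise + fall, 0.025)
print(peak, hw)
```

Applying this to the traces recorded at each dendritic location yields the peak and half-width profiles of Fig. 4.14b,c.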
[Figure: five panels: a) resting potential (-70 to -65 mV) versus distance to soma (0-1000 µm); b-e) steady-state conductances gh, gNat, gKfast, gKslow (fS/µm²) versus distance to soma.]
Figure 4.13: RESTING POTENTIAL AND THE IONIC CONDUCTANCES AS A FUNCTION OF DISTANCE TO THE SOMA. All conductance units are reported in fS/µm². a) The resting potential is more depolarized in the distal regions than in the proximity of the soma. b) The conductance of the HCN-channels is stronger in the tuft than in the soma. As we did not insert HCN-channels in the apical dendrite, the conductance is zero in the apical region. c) The negligible sodium conductance decays linearly with distance. d) The fast potassium channel introduces a hyperpolarizing conductance at rest mainly in the soma, as the ion channel density decays exponentially with distance. e) The slow potassium channel density decays quickly with distance and thus the conductance at rest introduces a strong hyperpolarization only in the soma.
In order to understand this behaviour we determined which ionic mechanisms are involved in AP backpropagation. We measured the conductances during an AP peak at different locations along the apical dendrite and tuft (Fig. 4.14d,e,f). We can see that the sodium conductance near the soma has large values; hence this conductance appears to be important for the local AP generation. However, the sodium conductance drops distally, which explains the decay of the voltage peak. Both potassium conductances also decay; hence the repolarizing current is missing in the regions with little or no potassium conductance, and therefore the AP becomes wider. It appears, however, that active backpropagation in our model only occurs until the AP reaches the tuft. From then on neither sodium nor potassium channels are active and the voltage spreads only passively.
4.4.3. Currents Shaping the Somatic AP Waveform
We stimulated the soma with a short current pulse of 20 ms to elicit a single AP and analyzed the influence of different currents on its shape. We recorded the sodium and potassium currents as well as the axial currents from the axon and from the apical dendrite. We performed this analysis before and after pinching (Fig. 4.15). Axial currents were determined by the voltage difference between two adjacent compartments and the local effective resistance (Eqn. 1.26).
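Such an axial current estimate can be sketched as follows. The geometry and axial resistivity values are hypothetical, and in general the effective resistance between two compartment centers combines two such half-compartment terms:

```python
import math

def axial_resistance_megohm(ra_ohm_cm, length_um, diam_um):
    """Axial resistance of a cylindrical compartment, converted to MOhm."""
    area_cm2 = math.pi * (diam_um * 1e-4 / 2.0) ** 2  # cross-section in cm^2
    return ra_ohm_cm * (length_um * 1e-4) / area_cm2 * 1e-6

def axial_current_na(v1_mv, v2_mv, r_ax_megohm):
    """Axial current flowing from compartment 1 towards compartment 2 (nA)."""
    return (v1_mv - v2_mv) / r_ax_megohm  # mV / MOhm = nA

# Hypothetical values: Ra = 150 Ohm*cm, a 50 um long and 2 um thick
# compartment, and a 10 mV difference between the two compartments.
r_ax = axial_resistance_megohm(150.0, 50.0, 2.0)
print(r_ax, axial_current_na(-50.0, -60.0, r_ax))
```

The unit choice (mV, MOhm) conveniently yields currents directly in nA, matching the scale bars of Fig. 4.15.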
If the apical dendrite is accessible, a large part of the axonal and somatic sodium current quickly escapes into the apical dendrite, where it leads to a dendritic depolarization and BAPs (see above). Then the somatic potassium conductances set in and the repolarization of the AP is initiated. However, due to the BAPs the dendritic tree has undergone a massive depolarization. Current can now flow back into the soma and introduce a long-lasting depolarizing current that reduces the effect of the local potassium channels and therefore leads to a reduced AHP.
After pinching, the AP becomes slightly higher: the axonal and somatic sodium currents can no longer escape into the apical dendrite and instead enhance the local onset depolarization. Moreover, since the dendritic tree is no longer available to deliver current that compensates the hyperpolarization produced by the somatic potassium conductances, the AHP becomes stronger.
[Figure: six panels: a) AP voltage traces at 6 locations (Iseg, soma, apical dendrite at 161, 295 and 495 µm, tuft at 686 µm) over 317-324 ms; b) peak voltage and c) half-width versus distance to soma (0-1000 µm); d-f) gNat, gKfast, gKslow (pS/µm²) at the AP peak versus distance to soma.]
Figure 4.14: ANALYSIS OF BAPS. We analyzed the shape transformation and the underlying ionic conductances of BAPs in our model. a) APs are shown at 6 locations in the neuron: in the initial segment (Iseg, dark red line), in the soma (red line), and in the apical dendrite and tuft (light red lines). The absolute peak voltage of the AP decays with distance to the soma, while the half-width (width at halfway from -60 mV to the AP peak) increases. The initiation of the AP occurs in the initial segment. It can also be seen that the voltage threshold for spike initiation decays with distance to the spike initiation zone. b,c) Quantification of the absolute voltage peak and the half-width of the AP as a function of distance to the soma. d) The decaying sodium conductance at the AP peak is shown. e,f) The conductances of the fast and slow potassium channels decay with distance to the soma.
[Figure: a) somatic AP voltage traces before and after pinching (scale bars 30 mV, 1 ms); b) somatic currents IaxHillock, IaxApical, INat, IKfast, IKslow during the APs (scale bar 5 nA).]
Figure 4.15: CURRENTS SHAPING THE SOMATIC AP WAVEFORM BEFORE AND AFTER PINCHING. a) The voltage traces of the APs before (red line) and after pinching (orange line) are shown. It can be seen that the height of the AP is slightly increased after pinching and that the AHP without an apical dendrite is strongly enhanced. b) The somatic currents during the APs before and after pinching are overlaid: the axial current from the axon (black line), the axial current from the apical dendrite (black dashed line) as well as the ionic currents of sodium (blue line) and potassium (green and cyan lines) are shown before and after pinching. Negative currents are somatic inward currents.
CHAPTER 5
Discussion
We have presented a systematic strategy to automatically construct a reduced compartmental model of a cortical layer 5 pyramidal neuron. First, we developed an approach to reduce a detailed pyramidal neuron morphology to a simpler one, and found that our simplification strategy preserves the neuronal passive response properties very well. Then we selected a set of ion channel models as well as their spatial distribution in our model and defined its set of 18 free parameters. These parameters were then fitted with a multi-objective optimization (MOO) strategy (sec. 2.4). To test the optimization procedure, we used target data that had been generated by the model itself. We found solutions that fit these surrogate data very well, but we did not find a unique parameter set. We then repeated the optimization but replaced the target data with experimental recordings from pyramidal neurons. The different optimal parameter sets we obtained suggest general trends for, and homeostatic adjustments of, conductance densities in pyramidal neurons. We used the optimized model to investigate the conductances underlying the resting potential and BAPs as well as the currents shaping the somatic AP. These results have helped to quantitatively explain the effects of pinching.
5.1. Neuronal Geometry
5.1.1. Geometry Reduction
In order to simplify a detailed morphological reconstruction of a layer 5 pyramidal neuron
by Stuart & Spruston (1998), we were not interested in an elaborate mathematical analysis
of how a reduction could be performed theoretically (for example Lindsay et al., 2003),
but we simply aimed to obtain approximate parameter values for the reduced model that
preserve the passive response properties well.
We considered the suggestion by Destexhe (2001) to be well suited for our purpose. His reduced model consisted of several cylinders, each representing a subset of the dendritic geometry of the detailed model. He then fitted the intracellular resistivities to optimize the neuron's voltage attenuation and passive responses. In addition to maintaining these objectives, we also aimed to optimize the somatic impedance and phase-shift curves. Therefore we fitted not only the intracellular resistivity values in the neuron, but at the same time the dendritic geometry (Tab. 3.1). Using the MOO-strategy we obtained good fits (Fig. 3.1). As done by Destexhe (2001), we challenged the method by injecting noisy current into the somata of both neuron models. To our astonishment, the voltage responses in the soma and distal dendrite were almost indistinguishable (Fig. 3.2). Our simplification strategy is straightforward, intuitive and precise, and we therefore consider it more useful than those currently found in the literature.
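For intuition, the impedance amplitude and phase-shift curves of a single isopotential RC compartment can be written in closed form. A minimal sketch with illustrative values; the curves actually fitted in this work were those of the full detailed morphology, not of a single compartment:

```python
import cmath
import math

def membrane_impedance(freq_hz, r_megohm, c_nf):
    """Complex input impedance Z = R / (1 + i*omega*R*C) of a parallel RC patch."""
    omega = 2.0 * math.pi * freq_hz
    tau_s = r_megohm * 1e6 * c_nf * 1e-9           # RC time constant in seconds
    z = r_megohm / (1.0 + 1j * omega * tau_s)      # impedance in MOhm
    return abs(z), cmath.phase(z)                  # amplitude and phase shift

amp_dc, ph_dc = membrane_impedance(0.0, 50.0, 1.0)      # DC: |Z| = R, zero phase
amp_hi, ph_hi = membrane_impedance(1000.0, 50.0, 1.0)   # high f: attenuated, lagging
print(amp_dc, ph_dc, amp_hi, ph_hi)
```

Evaluating such amplitude and phase values over a range of frequencies gives the kind of impedance and phase-shift curves that served as optimization targets.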
There is a large heterogeneity among neurons in the cortex, and the size of the neuronal morphology we chose might differ from the size of the neuron our electrophysiological data came from. However, we assume that the general proportions between different sections are similar in all layer 5 pyramidal neurons in rats of similar age. The overall size of the neuron then plays a lesser role, as channel densities can compensate for variations in size.
It was relatively easy to obtain good solutions for the chosen free parameter set with only a few generations of the evolutionary optimization method. Unlike our evaluation of the fitting procedure during the optimization of the ionic conductances (sec. 4.3), in this first step of model creation we have not yet tested how the solution quality evolves or whether different optimal solutions can be obtained in other optimization runs. Therefore further investigation of the principles underlying neuronal geometry reduction is needed to explore why these results are so convincing and so easy to obtain.
Application to Experiments
In this study a detailed model produced the target data for the optimization of our reduced model. But the strategy also transfers directly to the creation of a simplified geometry based on direct experimental recordings. Voltage recordings with high time resolution are currently only possible with patch clamp experiments. Such recordings, however, were only rarely successful in three locations simultaneously (for example Larkum et al., 2001) and are still not possible in more locations of the dendritic tree. However, the steady-state voltage distribution can be obtained over the entire dendritic geometry via voltage-sensitive dyes (Loew, 1996) that have time constants on the order of ms. In combination with a detailed dendritic reconstruction these data would give a perfect target for our simplification strategy. It would be very interesting to check whether the dendritic specific membrane leak conductance and capacitance need to be higher than in the soma in order to fit the data. This would give us an idea about the membrane extension due to dendritic spines and could therefore be used to estimate the spine density in the dendritic tree.
5.1.2. Passive Influence of the Basal Dendrite
Our reduced model contains a single-compartment passive basal dendrite connected to the soma. It could be argued that this compartment is not necessary and might be collapsed into the soma. We tested a model without a basal dendrite but with a larger soma, and failed to obtain good results with biologically realistic parameters. To produce realistic spiking, a certain match between the ionic conductances and the capacitance is needed in the soma. However, optimizing these conductances led to combinations of the capacitance and the membrane leak that produced time constants that did not match those of the experimentally measured passive responses. Furthermore, the value for the specific capacitance was lower than standard values. Therefore we think that it is helpful to attach an additional passive capacitor to the soma that has only minor effects on the spiking behaviour, but can introduce an additional capacitive current to adjust the time constants of the passive responses.
5.1.3. Axonal Geometry
The geometry of our axon (Fig. 3.3) was adjusted by hand using only approximate geometrical estimates based on a detailed reconstruction (Zhu, 2000). We observed that the diameters and lengths of the hillock and axon initial segment had an influence on the AP shape. The influence of the axon geometry on our fitting results should be investigated further.
5.1.4. Segmentation
In order to obtain a sufficiently high resolution for the segmentation of a neuron model, it has been suggested to start with a certain number of compartments and to determine the spiking responses. Then the number of compartments is increased: if the response properties do not change, the model can be accepted, whereas if a change occurs, the resolution needs to be refined (Carnevale & Hines, 2006).
For our model we had started with 10 compartments in the apical dendrite and 5 in the tuft. We fitted the free model parameters to the experimental data, obtained good results and could also study BAPs. We then tested whether the responses would remain the same if we increased the number of compartments in the optimized model, and observed small changes in the AHP. Thus we increased the number of compartments in the apical dendrite to 15 and in the tuft to 10 and repeated the optimization. However, we have not yet tested whether in the new optimized model a further increase in the number of compartments would alter the spiking responses again. This should be done to ensure that the compartmentalization is now sufficient. It should also be tested whether the number of compartments used in the other sections of our reduced model is sufficient.
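This refinement protocol amounts to a simple convergence loop. The sketch below applies it to a passive cable in dimensionless units, solved directly with the Thomas algorithm as a stand-in for the NEURON simulation; the tolerance and cable parameters are illustrative:

```python
def cable_end_voltage(n_comp, length=2.0, i_inj=1.0):
    """Steady-state voltage at the sealed far end of a passive cable injected
    at the near end, discretized into n_comp compartments (dimensionless units:
    axial and membrane conductance per unit length are both 1)."""
    dx = length / n_comp
    ga, gm = 1.0 / dx, dx  # axial coupling and membrane conductance per node
    # Tridiagonal current-balance system A*v = rhs, solved by the Thomas algorithm.
    lower = [ga] * n_comp
    diag = [-(ga + gm)] + [-(2.0 * ga + gm)] * (n_comp - 2) + [-(ga + gm)]
    upper = [ga] * n_comp
    rhs = [-i_inj] + [0.0] * (n_comp - 1)
    for i in range(1, n_comp):                      # forward elimination
        w = lower[i] / diag[i - 1]
        diag[i] -= w * upper[i - 1]
        rhs[i] -= w * rhs[i - 1]
    v = [0.0] * n_comp
    v[-1] = rhs[-1] / diag[-1]
    for i in range(n_comp - 2, -1, -1):             # back substitution
        v[i] = (rhs[i] - upper[i] * v[i + 1]) / diag[i]
    return v[-1]

# Refinement protocol: double the compartment count until the response
# stops changing, then accept the resolution as sufficient.
n, tol = 4, 1e-4
prev = cable_end_voltage(n)
while True:
    n *= 2
    cur = cable_end_voltage(n)
    if abs(cur - prev) < tol:
        break
    prev = cur
print(n, cur)
```

In the full model the scalar response would be replaced by the spiking responses (for example the AHP shape), but the accept/refine logic is the same.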
5.2. Ion Channel Composition
5.2.1. Choice of Ion Channel Models
Many ion channel models are available in databases and are ready to be implemented in a neuron model.1 However, the kinetics of the available ion channels were measured under different conditions and often for a specific modelling purpose. It was therefore not easy, and one of the most time-consuming parts of this study, to select the right ion channels to be combined in our model. We tested a variety of different sodium and potassium channel models, and we also used different HCN-channels and calcium channels. It was especially interesting to see that realistic AP onsets could only be obtained with the model of the Nat-channel by Kole et al. (2006), although this model is based on kinetic measurements that were performed around 20 years ago (Hamill et al., 1991; Huguenard et al., 1988), while recent models of Nat-channels (for example Baranauskas & Martina, 2006; Gurkiewicz & Korngreen, 2007) failed. As for the repolarizing phase, we tested several potassium channel combinations, but only the choice of the Kfast-channel (Kole et al., 2006) and Kslow-channel (Korngreen & Sakmann, 2000) led to satisfying results. Furthermore, this was the only potassium channel combination with which we were able to produce BAPs. Differences in the results due to the other HCN-channel models tested were not significant.
We also introduced calcium channels, calcium dynamics and calcium-dependent potassium channels. These conductances, however, did not significantly improve the fitting results, but led to bursting behaviour for larger current inputs, which was not seen in the experimental recordings. Therefore we decided not to model calcium channels and related mechanisms, although they are present in pyramidal neurons (Schiller et al., 1997).
It was very useful to have a fitting algorithm available in order to automatically test
whether the insertion or modification of an ion channel model improves or worsens the
fitting results. In a subsequent study we might build a framework to easily quantify the
performance of any ion channel combination in order to rank them. This would surely
help many modelling studies to choose the best ion channel composition for their purpose.
5.2.2. Ion Channel Distribution
We inserted ion channel models in the axon initial segment, hillock, soma, apical dendrite
and tuft. The basal dendrite and the rest of the axon were left passive.
1 ModelDB: http://senselab.med.yale.edu/ModelDB
Potassium Channels in the Axon
The hillock and axon initial segment contained only Nat-channels. We did not insert potassium channels, although they are present in this region and their influence on shaping the axonal AP has been studied recently. The experimentally determined repolarization in the axon initial segment (Kole et al., 2007, Fig. 1A) is faster than in our model (Fig. 4.14a). We tested the influence of potassium channels in the axon, but they did not significantly improve the fitting results. We think that somatic potassium channels can take over the role of the missing axonal channels in optimizing the somatic AP repolarization.
Active Conductances in the Basal Dendrites
We did not insert sodium and potassium channels in the basal dendrites of our model, although it is known that active propagation of APs occurs in these dendrites (Nevian et al., 2007; Polsky et al., 2004). It would indeed be very interesting to model AP propagation in basal dendrites, but due to their small diameters reliable experimental data are rare. We think that modelling sodium and potassium channels in the basal dendrites could lead to a mechanism similar to that of BAPs in the apical dendrite; the basal dendrites might then also act as a current source after the somatic AP and thus influence the repolarization.
Apical Sodium and Potassium Gradients
Regarding the ion channel density gradients in the apical dendrite and tuft, we first tested constant densities but did not obtain good fitting results. Only with a linear decay of the Nat-channel density and exponential decays of the Kfast- and Kslow-channel densities, as suggested by Keren et al. (2009), could we obtain satisfactory results.
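As a minimal sketch, these two profile types can be written as simple functions of the path distance from the soma. All parameter values below are illustrative, not the optimized values of this study:

```python
import math

def nat_density(x, gbar_soma=500.0, slope=0.5):
    """Linear decay of the Nat-channel density with path distance x (um)
    from the soma, clipped at zero (illustrative parameters)."""
    return max(gbar_soma - slope * x, 0.0)

def k_density(x, gbar_soma=100.0, decay_length=150.0):
    """Exponential decay used for the Kfast- and Kslow-channel densities
    (illustrative parameters)."""
    return gbar_soma * math.exp(-x / decay_length)

# Example: densities 300 um out on the apical dendrite.
print(nat_density(300.0))          # 350.0
print(round(k_density(300.0), 1))  # 13.5
```

In NEURON, such functions would typically be evaluated per segment to set the corresponding density-mechanism range variables.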
HCN-Channel Distribution
Keren et al. (2009) have also described the HCN-channel density in the apical dendrite with a function that depends on the distance from the soma and increases sigmoidally, but this distribution did not lead to good fitting results for the passive neuron responses in our study. We attribute this failure to our need for a certain number of HCN-channels in the soma, as we observe a sag-response in the experimental data after pinching as well (Fig. 4.1). With the described function the HCN-channel density increased immediately along the apical dendrite, and hence the total number of proximal HCN-channels was high. This produced a sag-response much stronger than the experimentally measured one before pinching. We therefore decided not to model HCN-channels in the apical region but only in the soma and tuft.
It would be interesting to test another scenario with two sigmoidal density functions, one increasing towards the distal part of the basal dendrite and one increasing towards the distal part of the apical dendrite. If such a distribution is reasonable, the somatic density could now be zero and the proximal apical dendrite would contain fewer HCN-channels, producing a good sag-response for the intact neuron, while the basal HCN-channels would produce the right sag-response after pinching. Such a study would, however, require several additional free parameters and more elaborate distance functions, but it could exploit the pinching data to study the HCN-channel distribution in the basal dendrite.
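A sketch of this two-sigmoid scenario, written over a signed distance coordinate (negative towards the basal tree, positive towards the apical tree); all amplitudes and shape parameters are hypothetical:

```python
import math

def sigmoid(x, x_half, steepness):
    """Standard logistic function rising around x_half."""
    return 1.0 / (1.0 + math.exp(-(x - x_half) / steepness))

def hcn_density(d, gmax_apical=20.0, gmax_basal=5.0,
                d_half=200.0, steepness=50.0):
    """HCN-channel density at signed distance d (um) from the soma:
    d > 0 towards the apical, d < 0 towards the basal tree. Each sigmoid
    rises towards its distal end, so the density near the soma (d = 0)
    stays close to zero. All parameter values are hypothetical."""
    apical = gmax_apical * sigmoid(d, d_half, steepness)
    basal = gmax_basal * sigmoid(-d, d_half, steepness)
    return apical + basal

print(round(hcn_density(0.0), 2))     # small somatic density
print(round(hcn_density(600.0), 2))   # ~20: distal apical plateau
print(round(hcn_density(-600.0), 2))  # ~5: distal basal plateau
```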
Further Somatic Conductances
We inserted the Nap- and Km-channel only in the soma. These channels were needed to adjust the interspike intervals of our model neuron, as the experimental data show slight spike frequency adaptation. The Km-channel activation reaches its steady-state value only after seconds (Fig. 3.4) and is therefore well suited for modulating the spike frequency. However, it was not possible to obtain the right interspike intervals with this channel alone; we needed to include the Nap-channel as a counterpart to increase the overall excitability, which was reduced by the Km-channel.
AP Propagation in the Axon
In the axon, only the hillock and the axon initial segment were able to initiate APs; the rest of the axon was passive and homogeneous. This is obviously not realistic, as axons of pyramidal neurons contain nodes of Ranvier and segments of myelin. It would be important to check whether the existence of nodes of Ranvier with high sodium channel densities would alter the somatic response properties or the shape of the somatic AP. Nevertheless, in order to use our reduced model neuron for creating realistic synaptic network connections, the axon does not necessarily need to reproduce propagating APs, as postsynaptic locations can instead be activated with a certain time delay.
5.3. Fitting Results
5.3.1. Choosing the Free Parameters
Choosing the free parameters of a complex model is difficult, as it is not clear which of all possible model parameters need to be free in order to fit the target data. Obviously, the more parameters are free, the easier it will be for the model to reproduce the data, but at the same time it becomes harder for any algorithm to find the optimal point in parameter space. There thus exists a set of free parameters that offers a good trade-off between the ability to fit the target data and identifiability with the chosen fitting strategy.
For a given list of free parameters we checked whether we would be able to fit our
experimental data well enough and whether the optimized model would be able to generalize. Depending on the result we resized the list of free parameters and repeated the optimization. Thus the choice of the 18 free parameters we are using in this study (Tab. 3.2) is
the result of a long trial and error process. We consider these parameters to be important
for shaping the neuronal response properties. At the same time we are uncertain about
their precise values as it is hard to constrain them via direct experiments.
5.3.2. Surrogate Data Optimization
A first test for any optimization strategy is to challenge the algorithm with data that have
been generated by the model itself (surrogate data) and to check whether it can find the
original target parameters again (Druckmann et al., 2008). We have performed that test
with our algorithm.
For a target parameter set we chose one of our experimental data fitting results and
slightly modified the parameter values (Tab. 4.1). With these parameters we generated
the target data and ran the optimization twice. The surrogate data fitting results were very good (Figs. 4.3, 4.5) but not perfect, and the underlying parameter combinations differ from the target values and from each other (Tab. 4.1). This shows that our fitting strategy is not able to find the global minimum in the total-error space.

We think that this is due to our choice of distance functions and the small population size. With a small population the evolutionary algorithm cannot explore the whole error space and gets stuck in local minima. We think that our population size of N = 500 is too small, but due to our limited computational resources we could not test whether a larger population would improve the results. Moreover, the four distance functions appear not to emphasize the important differences between the final optimal solutions well enough. We tried to include a fifth distance function, optimizing the model with respect to the interspike interval trace, but could not test whether it significantly improved our fitting results, as this would have required an even larger population.
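For reference, selection in such a MOO setting operates on whole error vectors rather than a single summed error; a minimal sketch of the Pareto-dominance test underlying NSGA-II-style selection (Deb et al., 2002), with hypothetical four-component error vectors:

```python
def dominates(a, b):
    """True if error vector a Pareto-dominates b: a is no worse than b in
    every objective and strictly better in at least one (all objectives
    are errors to be minimized)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

# Hypothetical error vectors, one component per distance function.
a = (0.2, 1.0, 0.5, 0.3)
b = (0.4, 1.0, 0.9, 0.3)
c = (0.1, 2.0, 0.5, 0.3)

print(dominates(a, b))  # True: a is at least as good everywhere, better twice
print(dominates(a, c))  # False: a and c trade off against each other
```

Adding a fifth distance function enlarges these vectors by one component, which typically enlarges the Pareto front and hence calls for the larger population mentioned above.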
We also observed that the best initial solution (Fig. 4.2) is better than we had expected, which shows that, with the chosen free parameters and the constraints defining a "good" model, it appears to be easy to obtain realistic spiking behaviour.
It seems that 18 free parameters are simply too many, so that the resulting error space contains too many local minima. One way to reduce the number of free parameters per optimization run is a "parameter-peeling" approach, fitting the model successively to multiple traces in which several ion channels were blocked (Keren et al., 2009; Roth & Bahl, 2009). Together with a MOO-strategy this should be an interesting approach to uniquely constrain an entire neuron model in one optimization run.
However, combining this with the pinching-strategy would require more distance functions and larger population sizes, which surpasses our current computational resources, and we are not sure whether it would be feasible experimentally.

In any case it might not make sense to put effort into creating strategies that fit surrogate data perfectly and recover the target parameter sets, as it is not guaranteed that these methods transfer directly to experimental target data (Druckmann et al., 2008). It might therefore be better to further develop the algorithms that can already fit experimental data but still do not find unique solutions for a given set of experimental recordings.
5.3.3. Experimental Data Optimization
We were able to fit our model to experimental data. Only a small number of studies have succeeded in doing so (for example Druckmann et al., 2007; Keren et al., 2009; Prinz et al., 2003). Those studies, however, either used a single-compartment model or focused only on a specific region of the neuron. Furthermore, none of them have shown how the optimized model generalizes to current injections that were not used for the optimization. We, in contrast, have presented a method that is able to constrain
an entire neuron model to several experimental data traces simultaneously (Figs. 4.7, 4.9,
4.10) and have shown that our optimized model predicts the spike frequency (Fig. 4.11)
and even the detailed spiketrain for other current injections (Fig. 4.12).
Optimal Parameters
The final parameter sets obtained after the three evolutions are different, but all three solutions give good fits to the experimental data. Comparing the values in Tab. 4.2, we see interesting tendencies for some of them. It would be fascinating to investigate the suggested parameter correlations further, as they could also reflect homeostatic adjustments in real neurons. The following hypotheses are, however, speculative; more optimization runs would have to be performed for a convincing statistical analysis:
E_pas^global: The estimated global leak reversal potential is more negative than the resting potential (≈ -71 mV) for all three results. This is expected, as we know that HCN-channels push the membrane potential towards their reversal potential (E_h ≈ -47 mV) at rest. In order to remain at the resting potential, a compensating leak conductance with a reversal potential below the resting potential is needed, balancing the depolarizing h-current with a hyperpolarizing leak current.
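This balance can be made explicit: at rest the net membrane current vanishes, g_pas(V_rest - E_pas) + g_h(V_rest - E_h) = 0, which fixes the required leak reversal potential. A small sketch with illustrative conductance values (not the fitted ones):

```python
# At rest the leak current and the h-current cancel:
#   g_pas * (V_rest - E_pas) + g_h * (V_rest - E_h) = 0
# => E_pas = V_rest + (g_h / g_pas) * (V_rest - E_h)
# Conductance values below are illustrative, not the fitted ones.
g_pas, g_h = 5e-5, 1e-5     # S/cm^2
v_rest, e_h = -71.0, -47.0  # mV

e_pas = v_rest + (g_h / g_pas) * (v_rest - e_h)
print(round(e_pas, 1))  # -75.8: below the resting potential, as estimated
```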
r_m^axosomatic: The specific membrane resistance for the second result is estimated to be about 50 % smaller than for the other solutions, so the leak conductance in this model is larger. This reduces the excitability of the neuron, which can be compensated by increasing the values of gbar_Nap^soma, gbar_Nat^hillock and gbar_Nat^iseg.
spinefactor: The estimates for this value were low in all trials and even below 1 for Trial 2. Other modelling studies use spinefactors around 2 (Holmes, 1989). Our result might reflect an error in choosing the proportions between the segments of our model, but it could also suggest lower spine densities in pyramidal neurons than previously assumed.
gbar_Nat^soma: This value appears to be correlated with decay_Nat. The sodium channel density in the apical dendrite needs to be adjusted properly to produce BAPs that deliver the right amount of dendritic source current to optimize the AP repolarization. If high values for gbar_Nat^soma are found, the decay needs to be fast (Trial 1, Trial 3), while if gbar_Nat^soma is estimated to be low, the decay must be slower (Trial 2).
gbar_Nat^hillock, gbar_Nat^iseg: These values are high for all solutions. This was not due to the search boundaries; we tried the optimization with the lower search boundary set to zero and also obtained high values. There has been a long-standing debate about the correct sodium channel density in the hillock and axon initial segment. Modelling studies suggested high sodium channel densities (Mainen et al., 1995) in order to initiate APs in the axon, while experimental patch-clamp recordings measured small sodium currents in axonal patches (Colbert & Pan, 2002). Only recently have other experimental studies agreed with high densities, suggesting a ratio of around 50 between the axon initial segment and somatic sodium channel density (Kole et al., 2008). Consistent with these findings, our ratios are 16, 42 and 20.
vshift2_Nat^iseg: All three trials find that the voltage dependence of activation and inactivation of the sodium channel in the axon initial segment should be shifted to more negative potentials. Based on experimental recordings it was suggested that such a shift can explain the initiation of APs in the axon initial segment without assuming high sodium channel densities (Colbert & Pan, 2002). Our findings (-7.9 mV, -7.5 mV, -5.8 mV) are very close to the experimentally determined value (-7 mV), providing further theoretical evidence that AP initiation in the axon initial segment is due to this axonal modification.
5.3.4. AP Initiation
The optimal solution of every optimization trial showed AP initiation in the axon initial segment. If the AP was initiated in the soma, the AP onset had a shape very different from the experimentally determined one; hence the underlying parameter set was associated with a large onset-error value and was disfavoured during evolution.
5.3.5. Effects of Pinching
With the help of our optimized model, we can now quantitatively explain the effects of
pinching (Fig. 4.1):
Input Resistance
The somatic input resistance increases; hence depolarization and firing rate increase in response to the same stimuli. This is because the neuron loses a large amount of its accessible leaky surface area during pinching, and therefore the same input current can charge the membrane more effectively.
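The effect can be illustrated in the isopotential limit, where the input resistance is simply the inverse of the leak conductance times the accessible membrane area; the numbers below are illustrative, not measured:

```python
# Isopotential estimate: R_in = 1 / (g_pas * A). Pinching removes
# dendritic surface area A, so R_in rises proportionally.
# All numbers are illustrative, not measured values.
g_pas = 5e-5           # S/cm^2, leak conductance density
area_intact = 3.0e-4   # cm^2, whole accessible membrane
area_pinched = 1.0e-4  # cm^2, after losing the apical surface

r_intact = 1.0 / (g_pas * area_intact)    # ~67 MOhm
r_pinched = 1.0 / (g_pas * area_pinched)  # ~200 MOhm
print(round(r_pinched / r_intact, 2))  # 3.0: same current, 3x depolarization
```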
Resting Potential
The resting potential is shifted to a more hyperpolarized level (≈ -1 mV). HCN-channels
are open at rest and introduce a conductance that pushes the membrane potential towards
their reversal potential (Eh ≈ -47 mV). As the density of these channels is highest in
the distal regions of the apical dendritic tree, a large amount of HCN-channels is not
accessible after pinching and therefore the resting potential is lowered.
Voltage Threshold for AP Initiation
The voltage threshold for spike initiation is shifted to a more hyperpolarized level (≈
-2 mV in the experiment, but only ≈ -0.5 mV in our model (not shown)). AP initiation occurs in the axon initial segment. In the intact neuron a significant amount of the activated axonal sodium current can escape into the dendrites and charge the dendritic capacitance, which acts as a current sink. Therefore more sodium channels need to be activated before the positive feedback between depolarization and sodium activation can initiate a spike. After pinching, less sodium current can escape into the dendrites and fewer sodium channels are needed to produce a sufficient local depolarization; hence the voltage threshold for AP initiation is reduced.
We could not further minimize the threshold-difference between the experiment and our model. Better results were possible only if the sodium channel distribution was adjusted such that AP initiation occurred in the soma. We explain this discrepancy by the axon initial segment being too far away from the capacitive load of the apical dendrite. This might be solvable by modifying the axonal geometry and membrane parameters, or by modifying the intracellular resistivity. We tried several of these ideas, but none led to satisfying results.
AHP Differences
The AHP is increased by about 10 mV after pinching. The dendritic tree also contains sodium and potassium channels. When a somatic AP occurs, it can actively backpropagate into the dendritic tree (Fig. 4.14), leading to a massive dendritic depolarization; dendritic current therefore flows back into the soma after a somatic spike. After pinching this depolarizing dendritic current is no longer available, and hence the AHP is stronger (Fig. 4.15).
AP Height
The total spike height (measured from the onset of an AP to its peak) is increased after pinching. This is expected as the depolarizing somatic currents cannot escape into
the dendritic tree but instead lead to an enhanced somatic depolarization during an AP
(Fig. 4.15).
5.4. Model Evaluation
5.4.1. Resting Potential
We have studied the value of the resting potential at different locations along the apical dendrite (Fig. 4.13) and found that the resting potential is shifted in the distal part of the neuron by about 4 mV. This is explained by the higher HCN-channel density in the tuft and the hyperpolarizing effect of the proximal potassium conductances. Experimental measurements (Stuart et al., 1997, Fig. 2B) and recent modelling studies (Keren et al., 2009) have also shown this modulation of the resting potential along the apical dendrite, but suggested a larger change (≈ 9 mV). We think that this discrepancy is due to the missing HCN-channels along the apical dendrite in our model (see above).
5.4.2. Rapid AP Onset
There has been a vigorous debate regarding the origin of the fast AP onset, also called "the kink", in pyramidal neurons. This kink cannot be explained with standard Hodgkin-Huxley single-compartment models. Based on theoretical studies, Naundorf et al. (2006) suggested a sodium channel cooperativity mechanism as a possible explanation, although there are no direct experimental findings supporting this idea. On the other hand, it was argued that Hodgkin-Huxley models can explain the kink if the sudden influx of axonal lateral current is considered (McCormick et al., 2007), and a combined experimental and modelling study was published in support of this idea (Yu et al., 2008). However, it was claimed that the amount of lateral current is insufficient to explain the rapid onset if the axonal morphology and membrane parameters are realistic.2
We have created a realistic somatic and axonal morphology (sec. 3.1). We use standard Hodgkin-Huxley models for the sodium current, and our optimization algorithm has found densities and kinetics that are consistent with recent experimental findings (see above). The AP is initiated in the axon initial segment and travels antidromically through the soma into the dendrites (Fig. 4.14a). With these biologically realistic settings we obtain very good fits for the experimentally determined AP kink (Fig. 4.7a). We do therefore support the lateral-current hypothesis and think that we have presented the first quantitative modelling study to support that idea.
2 SfN-Conference 2008, Washington DC, USA: Baranauskas et al., "Why action potentials in the soma have sharp onset?"
5.4.3. AP Backpropagation
Our reduced model shows BAPs after all three optimization trials, although we did not explicitly demand this in the distance functions. We have analyzed BAPs in detail only for Trial 1, but the other solutions led to similar results.
The peak amplitude of BAPs in our model decays sigmoidally with distance from the
soma (Fig. 4.14b). This result is different from that of Keren et al. (2009, Fig. 10A) and
does not match the experimental finding that the peak decays exponentially (Stuart et al.,
1997, Fig. 2A). We find that the half-width of proximal BAPs increases linearly with
distance (Fig. 4.14c). A linear increase with similar slope was also found experimentally
(Stuart et al., 1997, Fig. 2C). The half-width slope in the tuft region of our model was
higher but the experimental measurements are not precise enough in that region to judge
this result. The underlying conductances during BAPs predicted by us (Fig. 4.14d,e,f)
were similar to those predicted by Keren et al. (2009, Fig. 10B,C,D).
We think that the mismatch in the peak amplitude is due to the missing HCN-channel conductance in the apical dendrite of our model. With an increasing HCN-channel density the leak conductance also increases, and we therefore expect the local sodium currents to have a weaker depolarizing effect. As described above, we have already suggested how apical HCN-channels could be introduced into our model.
5.4.4. Currents Shaping the Somatic AP Waveform
We have used our model to determine the main currents involved during a somatic AP
(Fig. 4.15). We can see that the somatic AP onset is due to a sudden current influx from
the axon that leads to the “kink” (see above). Then with a delay somatic sodium channels
open and further enhance the depolarization. These two separate inward currents lead to
the typical biphasic AP onset as seen in phase-plots of voltage traces of pyramidal neurons
(This study, Fig. 1.5; Yu et al., 2008, Fig. 1C). The repolarization of the AP is shaped by a complex interplay of sodium, potassium and lateral currents from the axon and the apical dendrite. It is interesting that with the described set of conductances we were able to reproduce the experimental repolarization well, although we did not model further conductances. It was shown, for example, that the interplay of calcium channels and calcium-dependent potassium channels has strong effects on the AHP in CA1 pyramidal neurons (Golomb et al., 2007). Thus it seems that our ion channel composition can compensate for the missing conductances.
5.5. Outlook
This thesis led to a model of a layer 5 pyramidal neuron that precisely reproduces experimental data, generalizes to other types of input, and that we have used to explain several biophysical mechanisms relevant to the function of pyramidal neurons. Significant differences compared to the experiment remain, however, for example in the resting potential and the shape of BAPs. One of the next steps will therefore be a modification of the HCN-channel distribution to get closer to the experimental findings. We will further improve the fitting strategy by increasing the number of individuals and by including additional distance functions. With these improvements we will then perform more optimization runs on the experimental data in order to carry out a rigorous statistical analysis of the homeostatic adjustments we have hypothesized. For this we will need to acquire larger computational resources. We hope that this will finally lead to an accurate and efficient model of a layer 5 pyramidal neuron that will be beneficial for large-scale simulations of the cortex.
References
Abbott, L. (1999). Lapicque’s introduction of the integrate-and-fire model neuron (1907).
Brain Research Bulletin, 50(5-6), 303–304.
Achard, P. & De Schutter, E. (2006). Complex parameter landscape for a complex neuron
model. PLoS Comput Biol, 2(7), e94.
Amini, B., Clark, J. W., & Canavier, C. C. (1999). Calcium dynamics underlying
pacemaker-like and burst firing oscillations in midbrain dopaminergic neurons: a computational study. Journal of Neurophysiology, 82(5), 2249–61.
Ariav, G., Polsky, A., & Schiller, J. (2003). Submillisecond precision of the input-output
transformation function mediated by fast sodium dendritic spikes in basal dendrites of
ca1 pyramidal neurons. Journal of Neuroscience, 23(21), 7750–7758.
Baranauskas, G. & Martina, M. (2006). Sodium currents activate without a Hodgkin-and-Huxley-type delay in central mammalian neurons. J Neurosci, 26(2), 671–84.
Barry, P. H. (1994). Jpcalc, a software package for calculating liquid junction potential
corrections in patch-clamp, intracellular, epithelial and bilayer measurements and for
correcting junction potential measurements. J Neurosci Methods, 51(1), 107–16.
Barry, P. H. & Diamond, J. M. (1970). Junction potentials, electrode standard potentials, and other problems in interpreting electrical properties of membranes. Journal of
Membrane Biology, 3(1), 93–122.
Bekkers, J. M. & Häusser, M. (2007). Targeted dendrotomy reveals active and passive
contributions of the dendritic tree to synaptic integration and neuronal output. Proc
Natl Acad Sci USA, 104(27), 11447–52.
Benda, J. & Herz, A. V. M. (2003). A universal model for spike-frequency adaptation.
Neural Computation, 15(11), 2523–64.
Berger, T., Larkum, M. E., & Lüscher, H. R. (2001). High i(h) channel density in the
distal apical dendrite of layer v pyramidal cells increases bidirectional attenuation of
epsps. Journal of Neurophysiology, 85(2), 855–68.
Bernstein, J. (1906). Untersuchungen zur Thermodynamik der bioelektrischen Ströme.
Pflügers Archiv European Journal of Physiology, 112(9), 439–521.
Borst, A. & Haag, J. (1996). The intrinsic electrophysiological characteristics of fly lobula
plate tangential cells: I. passive membrane properties. J Comput Neurosci, 3(4), 313–
36.
Bower, J. M. & Beeman, D. (1998). The book of GENESIS (2nd ed.): exploring realistic
neural models with the GEneral NEural SImulation System. New York, NY, USA:
Springer-Verlag New York, Inc.
Bush, P. C. & Sejnowski, T. J. (1993). Reduced compartmental models of neocortical
pyramidal cells. J Neurosci Methods, 46(2), 159–66.
Carnevale, N. T. & Hines, M. L. (2006). The NEURON Book. New York, NY, USA:
Cambridge University Press.
Catterall, W. A. (1995). Structure and function of voltage-gated ion channels. Annu Rev
Biochem, 64, 493–531.
Cauller, L. J. & Connors, B. W. (1994). Synaptic physiology of horizontal afferents to
layer i in slices of rat si neocortex. J Neurosci, 14(2), 751–62.
Colbert, C. M. & Pan, E. (2002). Ion channel properties underlying axonal action potential
initiation in pyramidal neurons. Nat Neurosci, 5(6), 533–8.
Crivellato, E. & Ribatti, D. (2007). Soul, mind, brain: Greek philosophy and the birth of
neuroscience. Brain Res Bull, 71(4), 327–36.
Dalcin, L., Paz, R., Storti, M., & D’Elia, J. (2008). Mpi for python: Performance improvements and mpi-2 extensions. Journal of Parallel and Distributed Computing,
68(5), 655–662.
De Schutter, E. & Bower, J. M. (1994). An active membrane model of the cerebellar
purkinje cell. i. simulation of current clamps in slice. Journal of Neurophysiology,
71(1), 375–400.
Deb, K. (2001). Multi-Objective Optimization Using Evolutionary Algorithms. Wiley.
Deb, K., Pratap, A., Agarwal, S., & Meyarivan, T. (2002). A fast and elitist multiobjective
genetic algorithm: Nsga-ii. Ieee T Evolut Comput, 6(2), 182–197.
Destexhe, A. (2001). Simplified models of neocortical pyramidal cells preserving somatodendritic voltage attenuation. Neurocomputing, 38, 167–173.
Druckmann, S., Banitt, Y., Gidon, A., Schürmann, F., Markram, H., & Segev, I. (2007).
A novel multiple objective optimization framework for constraining conductance-based
neuron models by experimental data. Frontiers in neuroscience, 1(1), 7–18.
Druckmann, S., Berger, T. K., Hill, S., Schürmann, F., Markram, H., & Segev, I. (2008).
Evaluating automated parameter constraining procedures of neuron models by experimental and surrogate data. Biol Cybern, 99(4-5), 371–9.
Eichner, H., Klug, T., & Borst, A. (2009). Neural simulations on multi-core architectures.
Frontiers in Neuroinformatics, 3, 21.
Fitzhugh, R. (1961). Impulses and physiological states in theoretical models of nerve
membrane. Biophys J, 1(6), 445–466.
Gentet, L., Stuart, G., & Clements, J. (2000). Direct measurement of specific membrane
capacitance in neurons. Biophys J, 79(1), 314–320.
Golding, N. L. & Spruston, N. (1998). Dendritic sodium spikes are variable triggers of
axonal action potentials in hippocampal ca1 pyramidal neurons. Neuron, 21(5), 1189–
200.
Golomb, D., Donner, K., Shacham, L., Shlosberg, D., Amitai, Y., & Hansel, D. (2007).
Mechanisms of firing patterns in fast-spiking cortical interneurons. PLoS Comput Biol,
3(8), e156.
Gurkiewicz, M. & Korngreen, A. (2007). A numerical approach to ion channel modelling
using whole-cell voltage-clamp recordings and a genetic algorithm. PLoS Comput Biol,
3(8), e169.
Hamill, O. P., Huguenard, J. R., & Prince, D. A. (1991). Patch-clamp studies of voltage-gated currents in identified neurons of the rat cerebral cortex. Cereb Cortex, 1(1),
48–61.
Hines, M. L., Davison, A. P., & Muller, E. (2009). Neuron and python. Frontiers in
Neuroinformatics, 3, 1.
Hines, M. L., Eichner, H., & Schürmann, F. (2008). Neuron splitting in compute-bound
parallel network simulations enables runtime scaling with twice as many processors. J
Comput Neurosci, 25(1), 203–210.
Hodgkin, A. L. & Huxley, A. F. (1952a). Currents carried by sodium and potassium ions
through the membrane of the giant axon of loligo. The Journal of Physiology, 116(4),
449–72.
Hodgkin, A. L. & Huxley, A. F. (1952b). A quantitative description of membrane current
and its application to conduction and excitation in nerve. The Journal of Physiology,
117(4), 500–44.
Holmes, W. R. (1989). The role of dendritic diameters in maximizing the effectiveness of
synaptic inputs. Brain Res, 478(1), 127–37.
Huguenard, J. R., Hamill, O. P., & Prince, D. A. (1988). Developmental changes in na+
conductances in rat neocortical neurons: appearance of a slowly inactivating component. Journal of Neurophysiology, 59(3), 778–95.
Izhikevich, E. M. & Edelman, G. M. (2008). Large-scale model of mammalian thalamocortical systems. Proc Natl Acad Sci USA, 105(9), 3593–8.
Johnston, D., Magee, J., Colbert, C., & Christie, B. (1996). Active properties of neuronal
dendrites. Annu. Rev. Neurosci., 19, 165–186.
Katz, Y., Menon, V., Nicholson, D. A., Geinisman, Y., Kath, W. L., & Spruston, N.
(2009). Synapse distribution suggests a two-stage model of dendritic integration in ca1
pyramidal neurons. Neuron, 63(2), 171–7.
Keren, N., Bar-Yehuda, D., & Korngreen, A. (2009). Experimentally guided modelling of
dendritic excitability in rat neocortical pyramidal neurones. The Journal of Physiology,
587(Pt 7), 1413–37.
Keren, N., Peled, N., & Korngreen, A. (2005). Constraining compartmental models using
multiple voltage recordings and genetic algorithms. Journal of Neurophysiology, 94(6),
3730–42.
Koch, C. (2004). Biophysics of Computation: Information Processing in Single Neurons
(Computational Neuroscience). Oxford University Press, USA.
Kole, M. & Stuart, G. (2008). Is action potential threshold lowest in the axon? Nat Neurosci.
Kole, M. H. P., Hallermann, S., & Stuart, G. J. (2006). Single ih channels in pyramidal
neuron dendrites: properties, distribution, and impact on action potential output. J
Neurosci, 26(6), 1677–87.
Kole, M. H. P., Ilschner, S. U., Kampa, B. M., Williams, S. R., Ruben, P. C., & Stuart,
G. J. (2008). Action potential generation requires a high sodium channel density in the
axon initial segment. Nat Neurosci, 11(2), 178–86.
Kole, M. H. P., Letzkus, J. J., & Stuart, G. J. (2007). Axon initial segment kv1 channels
control axonal action potential waveform and synaptic efficacy. Neuron, 55(4), 633–47.
Korngreen, A. & Sakmann, B. (2000). Voltage-gated k+ channels in layer 5 neocortical
pyramidal neurones from young rats: subtypes and gradients. The Journal of Physiology, 525 Pt 3, 621–39.
Lapicque, L. (2007). Quantitative investigations of electrical nerve excitation treated as
polarization. 1907. Biol Cybern, 97(5-6), 341–9.
Larkum, M. E., Senn, W., & Lüscher, H.-R. (2004). Top-down dendritic input increases
the gain of layer 5 pyramidal neurons. Cereb Cortex, 14(10), 1059–70.
Larkum, M. E., Zhu, J. J., & Sakmann, B. (2001). Dendritic mechanisms underlying the
coupling of the dendritic with the axonal action potential initiation zone of adult rat
layer 5 pyramidal neurons. The Journal of Physiology, 533(Pt 2), 447–66.
LeMasson, G. & Maex, R. (2001). Introduction to equation solving and parameter fitting. In E. De Schutter (Ed.), Computational Neuroscience: Realistic Modeling for
Experimentalists (pp. 1–21). CRC Press.
Lindsay, K. A., Rosenberg, J. R., & Tucker, G. (2003). Analytical and numerical construction of equivalent cables. Mathematical Biosciences, 184(2), 137–64.
Loew, L. M. (1996). Potentiometric dyes: Imaging electrical activity of cell membranes.
Pure Appl Chem, 68(7), 1405–1409.
London, M. & Häusser, M. (2005). Dendritic computation. Annu. Rev. Neurosci., 28(1),
503–532.
Losavio, B. E., Liang, Y., Santamaría-Pang, A., Kakadiaris, I. A., Colbert, C. M., & Saggau, P. (2008). Live neuron morphology automatically reconstructed from multiphoton
and confocal imaging data. Journal of Neurophysiology, 100(4), 2422–9.
Mainen, Z. F., Joerges, J., Huguenard, J. R., & Sejnowski, T. J. (1995). A model of spike
initiation in neocortical pyramidal neurons. Neuron, 15(6), 1427–39.
Markram, H. (2006). The Blue Brain Project. Nat Rev Neurosci, 7(2), 153–60.
McCormick, D. A., Shu, Y., & Yu, Y. (2007). Neurophysiology: Hodgkin and Huxley
model – still standing? Nature, 445(7123), E1–2; discussion E2–3.
Mel, B. W., Ruderman, D. L., & Archie, K. A. (1998). Translation-invariant orientation tuning in visual "complex" cells could derive from intradendritic computations. J
Neurosci, 18(11), 4325–34.
Morris, C. & Lecar, H. (1981). Voltage oscillations in the barnacle giant muscle fiber.
Biophys J, 35(1), 193–213.
Nagumo, J., Arimoto, S., & Yoshizawa, S. (1962). An active pulse transmission line
simulating nerve axon. Proceedings of the IRE, 50(10), 2061–2070.
Naundorf, B., Wolf, F., & Volgushev, M. (2006). Unique features of action potential
initiation in cortical neurons. Nature, 440(7087), 1060–3.
Nevian, T., Larkum, M. E., Polsky, A., & Schiller, J. (2007). Properties of basal dendrites
of layer 5 pyramidal neurons: a direct patch-clamp recording study. Nat Neurosci,
10(2), 206–214.
Pearce, J. M. (2001). Emil Heinrich du Bois-Reymond (1818–96). J Neurol Neurosurg
Psychiatr, 71(5), 620.
Piccolino, M. (1997). Luigi Galvani and animal electricity: two centuries after the foundation of electrophysiology. Trends Neurosci, 20(10), 443–8.
Pinsky, P. F. & Rinzel, J. (1994). Intrinsic and network rhythmogenesis in a reduced Traub
model for CA3 neurons. J Comput Neurosci, 1(1-2), 39–60.
Poirazi, P., Brannon, T., & Mel, B. W. (2003). Pyramidal neuron as two-layer neural
network. Neuron, 37(6), 989–99.
Polsky, A., Mel, B. W., & Schiller, J. (2004). Computational subunits in thin dendrites of
pyramidal cells. Nat Neurosci, 7(6), 621–627.
Prinz, A. A., Billimoria, C. P., & Marder, E. (2003). Alternative to hand-tuning
conductance-based models: construction and analysis of databases of model neurons.
Journal of Neurophysiology, 90(6), 3998–4015.
Prinz, A. A., Bucher, D., & Marder, E. (2004). Similar network activity from disparate
circuit parameters. Nat Neurosci, 7(12), 1345–52.
Rall, W. (1962). Theory of physiological properties of dendrites. Ann N Y Acad Sci, 96,
1071–92.
Ray, S. & Bhalla, U. S. (2008). PyMOOSE: interoperable scripting in Python for MOOSE.
Frontiers in Neuroinformatics, 2, 6.
Roth, A. & Bahl, A. (2009). Divide et impera: optimizing compartmental models of
neurons step by step. The Journal of Physiology, 587(Pt 7), 1369–70.
Sakmann, B. & Neher, E. (1984). Patch clamp techniques for studying ionic channels in
excitable membranes. Annu. Rev. Physiol., 46, 455–72.
Schiller, J., Schiller, Y., Stuart, G., & Sakmann, B. (1997). Calcium action potentials
restricted to distal apical dendrites of rat neocortical pyramidal neurons. The Journal
of Physiology.
Segev, I. (1994). The Theoretical Foundations of Dendritic Function: The Collected
Papers of Wilfrid Rall with Commentaries (Computational Neuroscience). The MIT
Press.
Segev, I. & London, M. (2000). Untangling dendrites with quantitative models. Science,
290(5492), 744–50.
Spruston, N. (2008). Pyramidal neurons: dendritic structure and synaptic integration. Nat
Rev Neurosci, 9(3), 206–21.
Stuart, G., Schiller, J., & Sakmann, B. (1997). Action potential initiation and propagation
in rat neocortical pyramidal neurons. The Journal of Physiology, 505 ( Pt 3), 617–32.
Stuart, G. & Spruston, N. (1998). Determinants of voltage attenuation in neocortical
pyramidal neuron dendrites. J Neurosci, 18(10), 3501–10.
Stuart, G. J. & Sakmann, B. (1994). Active propagation of somatic action potentials into
neocortical pyramidal cell dendrites. Nature, 367(6458), 69–72.
Traub, R. D., Buhl, E. H., Gloveli, T., & Whittington, M. A. (2003). Fast rhythmic
bursting can be induced in layer 2/3 cortical neurons by enhancing persistent Na+ conductance or by blocking BK channels. Journal of Neurophysiology, 89(2), 909–21.
Traub, R. D., Contreras, D., Cunningham, M. O., Murray, H., LeBeau, F. E. N., Roopun,
A., Bibbig, A., Wilent, W. B., Higley, M. J., & Whittington, M. A. (2005). Single-column thalamocortical network model exhibiting gamma oscillations, sleep spindles,
and epileptogenic bursts. Journal of Neurophysiology, 93(4), 2194–232.
Van Geit, W., De Schutter, E., & Achard, P. (2008). Automated neuron model optimization techniques: a review. Biol Cybern, 99(4-5), 241–51.
Vanier, M. C. & Bower, J. M. (1999). A comparative survey of automated parameter-search methods for compartmental neural models. J Comput Neurosci, 7(2), 149–71.
Weaver, C. M. & Wearne, S. L. (2006). The role of action potential shape and parameter constraints in optimization of compartment models. Neurocomputing, 69(10-12),
1053–1057.
Weaver, C. M. & Wearne, S. L. (2008). Neuronal firing sensitivity to morphologic and
active membrane parameters. PLoS Comput Biol, 4(1), e11.
Wheeler, B. C. & Smith, S. R. (1988). High-resolution alignment of action potential
waveforms using cubic spline interpolation. Journal of biomedical engineering, 10(1),
47–53.
Winograd, M., Destexhe, A., & Sanchez-Vives, M. V. (2008). Hyperpolarization-activated graded persistent activity in the prefrontal cortex. Proc Natl Acad Sci USA,
105(20), 7298–303.
Yu, Y., Shu, Y., & McCormick, D. A. (2008). Cortical action potential backpropagation
explains spike threshold variability and rapid-onset kinetics. J Neurosci, 28(29), 7260–
72.
Zhu, J. J. (2000). Maturation of layer 5 neocortical pyramidal neurons: amplifying salient
layer 1 and layer 4 inputs by Ca2+ action potentials in adult rat tuft dendrites. The
Journal of Physiology, 526(Pt 3), 571–87.
Declaration of Originality (Eigenständigkeitserklärung)
I hereby declare that I have written this Diplomarbeit independently and have used no
sources or aids other than those indicated.
Munich, . . . . . . . . . . . .
Signature: . . . . . . . . . . . . . . . . . .