Function follows dynamics: state-dependency of directed functional influences

Demian Battaglia

Max Planck Institute for Dynamics and Self-Organization and Bernstein Center for Computational Neuroscience, Am Faßberg 17, D-37077 Göttingen. On leave to: Aix-Marseille University, Institute for Systems Neuroscience, INSERM UMR 1106, 27, Boulevard Jean Moulin, F-13005 Marseille. e-mail: [email protected].
Abstract Brain function requires the control of inter-circuit interactions on timescales faster than synaptic changes. In particular, the strength and direction of causal influences between neural populations (described by the so-called directed functional connectivity) must be reconfigurable even when the underlying structural connectivity is fixed. Such influences can be quantified through causal analysis of time-series of neural activity with tools like Transfer Entropy. But how can manifold functional networks stem from fixed structures? Considering model systems at different scales, like neuronal cultures or cortical multi-areal motifs, we show that "function and information follow dynamics", rather than structure. Different dynamic states of the same structural network, characterized by different synchronization properties, are indeed associated with different directed functional networks, corresponding to alternative information flow patterns. Here we discuss how suitable generalizations of Transfer Entropy, taking into account switching between collective states of the analyzed circuits, can provide a picture of directed functional interactions in agreement with a "ground-truth" description at the dynamical systems level.
1 Introduction
Even before unveiling how neuronal activity represents information, it is crucial to
understand how this information, independently of the encoding used, is routed across the complex multi-scale circuits of the brain. Flexible exchange of information lies at the core of brain function. A daunting number of computations must
be performed in a way that depends on external context and internal brain states. But how can information be rerouted "on demand", given that anatomical inter-areal connections can be considered fixed on timescales relevant for behavior?
In systems neuroscience, a distinction is made between structural and directed
functional connectivities [32, 33]. Structural connectivity describes actual synaptic
connections. On the other hand, directed functional connectivity is estimated from
time-series of simultaneous neural recordings using causal analysis [20, 36, 41],
to quantify, beyond correlation, directed influences between brain areas. While the anatomical structure of brain circuits certainly constrains the functional interactions that these circuits can support (see e.g. [42]), it is however not sufficient to specify them fully. Indeed, a given structural network might give rise to multiple possible collective dynamical states, and such different states could lead to different information flow patterns. It has been suggested, for instance, that multi-stability of neural
circuits underlies switching between different perceptions or behaviors [21, 40, 48].
In this view, transitions between alternative attractors of the neural dynamics would
occur under the combined influence of structured “brain noise” [47] and of the bias
exerted by sensory or cognitive driving [16, 17, 18].
Due to a possibly nontrivial attractor dynamics, the interrelation between structural and functional connectivity becomes inherently complex. Therefore, dependencies on the analyzed dynamical regime have to be taken into account explicitly when designing metrics of directed interactions.
Dynamic multi-stability can give rise, in particular, to transitions between different oscillatory states of brain dynamics [28]. This is highly relevant in this context, because long-range oscillatory coherence [58, 63] —in particular in the beta or gamma frequency bands [6, 8, 22, 24, 29, 30, 50, 63]— is believed to play a central role in inter-areal communication. According to the "communication-through-coherence" hypothesis [29], information exchange between two neuronal populations is enhanced when their coherent oscillatory activities are phase-locked with a suitable phase-relation. Therefore the efficiency and the directionality of information transmission between neuronal populations are affected by changes in their synchronization pattern, as also advocated by modeling studies [12, 4]. More generally, the correct timing of exchanged signals is arguably crucial for a correct relay of information, and a natural device to achieve temporal coordination might be self-organized synchronization. Beyond tightly [64] or sparsely-synchronized [9, 10, 11] periodic-like oscillations, synchronization in networks of spiking neurons can arise in other forms, including low-dimensional chaotic rhythms [2, 3] or avalanche-like bursting [1, 5, 45, 46], which are both highly temporally irregular and yet able to support modulation of information flow.
This chapter will concentrate on the directed functional connectivity analysis of simulated neural dynamics, rather than of actual experiments. It will focus in particular on two representative systems at different spatial scales, both described as large networks of hundreds or thousands of model spiking neurons. The analysis will first delve into cultures of dissociated neurons, which, after a certain critical maturation age, are known to spontaneously develop an episodic synchronous bursting activity [14, 25, 61]. Then, mesoscopic circuits of a few interconnected oscillating brain areas will be considered, stressing how even simple structural motifs
can give rise to a rich repertoire of phase-locked configurations. Emphasis on simulated systems allows disentangling the role of collective dynamics in mediating the
link between a given structural connectivity and the emergent directed functional interactions. In analogous experimental systems, the ground-truth connectivity or the
actual ongoing dynamics would not be known with precision. In contrast, for in silico neural circuits, the structural topology can be freely chosen and its impact on
network dynamics thoroughly explored, showing in a direct way that a correspondence exists between supported dynamical regimes and inferred directed functional
connectivities.
Two phenomena will be highlighted: on one side, functional multiplicity, arising when multiple functional topologies stem from a system with a given structural topology (supporting multiple dynamics); on the other side, structural degeneracy, arising when systems with different structural topologies (but similar dynamics) give rise to equivalent functional topologies.
2 State-conditioned Transfer Entropy
In this contribution, directed functional connectivity —used with the meaning of
causal connectivity or exploratory data-driven effective connectivity, as commented
in [7]— is characterized in terms of a generalized version of Transfer Entropy (TE)
[51], an information-theoretic implementation of the well-known notion of Wiener-Granger causality [37, 65]. The notion of Transfer Entropy is extensively discussed
in other chapters of this book. A specific generalization used for the analyses of
next sections will be presented here in a bivariate setting, although a multi-variate
extension is straightforward.
Let us consider a pair of continuous time-series describing the dynamics of two different neural circuit elements $x$ and $y$, e.g. LFPs from different brain areas, or calcium imaging recordings of single neuron activity in a neuronal culture. These time-series are quantized into $B$ discrete amplitude levels $\ell_1, \ldots, \ell_B$ (equal-sized for simplicity) and are thus converted into (discretely-sampled) sequences $X(t)$ and $Y(t)$ of symbols from a small alphabet.
Usually, two transition probability matrices are sampled as normalized histograms over very long symbolic sequences:
$$[P_{Y|XY}(\tau)]_{ijk} = P[\,Y(t) = \ell_i \,|\, Y(t-\tau) = \ell_j,\; X(t-\tau) = \ell_k\,]$$
$$[P_{Y|Y}(\tau)]_{ij} = P[\,Y(t) = \ell_i \,|\, Y(t-\tau) = \ell_j\,]$$
where the lag $\tau$ is an arbitrary temporal scale on which causal interactions are probed. The causal influence $\mathrm{TE}_{x \to y}(\tau)$ of circuit element $x$ on circuit element $y$ is then operatively defined as the functional:

$$\mathrm{TE}_{x \to y}(\tau) = \sum P_{Y|XY}(\tau) \,\log_2 \frac{P_{Y|XY}(\tau)}{P_{Y|Y}(\tau)} \qquad (1)$$

where the sum runs over all three indices $i$, $j$ and $k$ of the transition matrices.
Higher Markov order descriptions of the time-series evolution can also be adopted for the modeling of the source and target time-series [51]. In general, the conditioning on the single past values $X(t-\tau)$ and $Y(t-\tau)$ appearing in the definition of the matrices $P_{Y|XY}(\tau)$ and $P_{Y|Y}(\tau)$ is replaced by conditioning on vectors of several past values $Y_r^p = [Y(t-r\tau), Y(t-(r+1)\tau), \ldots, Y(t-(p-1)\tau), Y(t-p\tau)]$ and $X_s^q = [X(t-s\tau), X(t-(s+1)\tau), \ldots, X(t-(q-1)\tau), X(t-q\tau)]$. Here $p$ and $q$ correspond to the Markov orders taken for the target and source time-series $Y(t)$ and $X(t)$, respectively. The parameters $r, s < p, q$ are standardly set to $r, s = 1$, but might assume different values for specific applications (see later). A general Markov order Transfer Entropy $\mathrm{TE}_{x \to y}(\tau; r, s, p, q)$ can then be written straightforwardly.
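As a concrete illustration of these definitions, the following minimal Python sketch (illustrative names, not the exact estimator used in the studies discussed below) computes a plug-in estimate of $\mathrm{TE}_{x \to y}(\tau; r, s, p, q)$ for already quantized symbol sequences; the optional `valid_mask` argument anticipates the state conditioning introduced next, by restricting which time points are sampled.

```python
import numpy as np
from collections import Counter

def quantize(signal, n_bins):
    """Quantize a continuous signal into n_bins equal-sized amplitude levels."""
    edges = np.linspace(signal.min(), signal.max(), n_bins + 1)
    return np.clip(np.digitize(signal, edges[1:-1]), 0, n_bins - 1)

def transfer_entropy(x, y, tau=1, r=1, s=1, p=1, q=1, valid_mask=None):
    """Plug-in estimate of TE_{x->y}(tau; r, s, p, q) for symbol sequences x, y.

    Target past: y(t - r*tau), ..., y(t - p*tau); source past: x(t - s*tau), ..., x(t - q*tau).
    Setting s = 0 includes the "same-bin" value x(t) among the predictors.
    valid_mask (boolean array over time) restricts sampling to a set C of
    time points, implementing the state conditioning of Eq. (2).
    """
    x, y = np.asarray(x), np.asarray(y)
    T = min(len(x), len(y))
    times = np.arange(max(p, q) * tau, T)
    if valid_mask is not None:
        times = times[np.asarray(valid_mask)[times]]

    joint, target_pair, xy_past, y_past_counts = Counter(), Counter(), Counter(), Counter()
    for t in times:
        y_now = int(y[t])
        y_past = tuple(int(y[t - k * tau]) for k in range(r, p + 1))
        x_past = tuple(int(x[t - k * tau]) for k in range(s, q + 1))
        joint[(y_now, y_past, x_past)] += 1
        target_pair[(y_now, y_past)] += 1
        xy_past[(y_past, x_past)] += 1
        y_past_counts[y_past] += 1

    n, te = len(times), 0.0
    for (y_now, y_past, x_past), c in joint.items():
        p_y_given_xy = c / xy_past[(y_past, x_past)]
        p_y_given_y = target_pair[(y_now, y_past)] / y_past_counts[y_past]
        te += (c / n) * np.log2(p_y_given_xy / p_y_given_y)
    return te
```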
More importantly, to characterize the dependency of directed functional interactions on dynamical states, a further state conditioning is introduced. Let $S(t)$ be a vector describing the history of the entire system —i.e. not only the two considered circuit elements $x$ and $y$ but the whole neural circuit to which they belong— over the time-interval $[t-T, t]$. We then define a "state selection filter", i.e. a set of time instants $\mathcal{C}$ for which the system history $S(t)$ satisfies some arbitrary set of constraints. The definition of $\mathcal{C}$ is deliberately left very general and will have to be instantiated depending on the specific concrete application. It is then possible to introduce a state-conditioned Transfer Entropy (of arbitrary Markov orders):
$$\mathrm{TE}^{\mathcal{C}}_{x \to y}(\tau; r, s, p, q) = \sum P_{Y|XY;\mathcal{C}}(\tau; r, s, p, q) \,\log_2 \frac{P_{Y|XY;\mathcal{C}}(\tau; r, s, p, q)}{P_{Y|Y;\mathcal{C}}(\tau; r, s)} \qquad (2)$$
where the sum runs over all the possible values of $Y$, $Y_r^p$ and $X_s^q$, and the transition probability matrices $P_{Y|XY;\mathcal{C}}(\tau; r, s, p, q) = P[\,Y(t)\,|\,Y_r^p(t), X_s^q(t);\; t \in \mathcal{C}\,]$ and $P_{Y|Y;\mathcal{C}}(\tau; r, s) = P[\,Y(t)\,|\,Y_r^p(t);\; t \in \mathcal{C}\,]$ are restrictedly sampled over time epochs in which the ongoing collective dynamics is compliant with the imposed constraints.
Although such a general definition may appear hermetic, it becomes fairly natural once specific constraints are chosen. Simple constraints might for instance be based
on the dynamic range of the instantaneously sampled activity. A possible state selection filter might therefore be: “The activity of every node of the network must be
below a given threshold value”. As a consequence, the overall sampled time-series
would be inspected, and time-epochs in which some network node has an activity
with an amplitude above the threshold level would be discarded and not sampled
for the evaluation of $P_{Y|XY;\mathcal{C}}$ and $P_{Y|Y;\mathcal{C}}$. Other simple constraints might be defined
based on the spectral properties of the considered time-series. For instance, the state
selection filter could be: "The power in the theta frequency range of the average network activity must have been above a given threshold for at least the last 500 milliseconds." In this way, only transients (each one longer than 500 ms) in
which the system displayed collectively a substantial theta oscillatory activity would
be sampled for the evaluation of $P_{Y|XY;\mathcal{C}}$ and $P_{Y|Y;\mathcal{C}}$. Even more specifically, additional constraints might be imposed by requiring specific phase-relations between two network nodes to be fulfilled. Once again, the result of imposing a constraint would be to restrict the set of time-instants $\mathcal{C}$ over which the transition matrices $P_{Y|XY;\mathcal{C}}$ and $P_{Y|Y;\mathcal{C}}$ are sampled for the evaluation of $\mathrm{TE}^{\mathcal{C}}_{x \to y}$.
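As a hedged illustration of the spectral constraint just described (all names are placeholders and the exact filtering used in practice may differ), the following sketch builds a boolean mask that is true only at times when the theta-band power of the average network activity has stayed above a threshold throughout the preceding 500 ms; such a mask can be passed as the `valid_mask` argument of the estimator sketched above.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def theta_power_mask(avg_activity, fs, threshold, band=(4.0, 8.0), window_s=0.5):
    """Boolean mask over time points defining a theta-power state-selection filter."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    theta = filtfilt(b, a, avg_activity)          # theta-band component of the signal
    power = np.abs(hilbert(theta)) ** 2           # instantaneous band power
    above = power > threshold
    w = int(round(window_s * fs))                 # number of samples in 500 ms
    mask = np.zeros_like(above)
    for t in range(w, len(above)):
        # keep t only if the power stayed above threshold for the whole last window
        mask[t] = above[t - w:t + 1].all()
    return mask
```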
Therefore, state-conditioned Transfer Entropy provides a measure of the directed functional interactions associated to some definite dynamical regime,
specified through an ad hoc set of state-selection filtering constraints.
3 Directed functional interactions in bursting cultures
Neuronal cultures provide simple, yet versatile model systems [23] exhibiting a rich
repertoire of spontaneous activity [14, 61]. These aspects make cultures of dissociated neurons particularly appealing for studying the interplay between activity
and connectivity. The activity of hundreds to thousands of cells in in vitro cultured
neuronal networks can be simultaneously monitored using calcium fluorescence
imaging techniques [39, 53] (cf. Figure 1A). Calcium imaging can be applied both in vitro and in vivo and can potentially be combined with interventional techniques like optogenetic stimulation [68]. A major drawback of this technique, however, is that the typical frame rate during acquisition is slower than the cell's firing dynamics by an order of magnitude. Furthermore, the poor signal-to-noise ratio makes the detection of elementary firing events hard.
The experimental possibility of following in parallel the activity of most nodes
of a large network provides ideal datasets for the extraction of directed functional
connectivity. In particular, model-free information theory-based metrics [34, 43, 55]
can be applied, since recordings can remain stable over several hours [53]. A proper understanding of the state-dependency of directed functional connectivity then allows restricting the analysis to regimes in which directed functional connectivity and
structural connectivity are expected to have a good match, thus opening the way
to the algorithmic reconstruction of the connectivity of an entire neuronal network
in vitro. Such understanding can be built by the systematic analysis of semi-realistic
synthetic data from simulated neuronal cultures, in which the ground-truth structural connectivity is known and can be arbitrarily tuned to observe its impact on the
resulting dynamics and functional interactions.
Fig. 1 Bursting neuronal cultures in vitro and in silico. A Bright field image (left panel) of a
region of a neuronal culture at day in vitro 12, together with its corresponding fluorescence image
(right panel), integrated over 200 frames. Round objects are cell bodies of neurons. B Examples of
real (left) and simulated (right) calcium fluorescence time series for different individual neurons.
C Corresponding averages over the whole population of neurons. Synchronous network bursts
are clearly visible from these average traces. D Distribution of population averaged fluorescence
amplitudes, for a real network (left) and a simulated one (right). These distributions are strongly
right skewed, with a right tail corresponding to the strong average fluorescence during bursting
events. Figure adapted from [55]. (Copyright: Stetter et al. 2012, Creative Commons licence).
3.1 Neuronal cultures “in silico”
A neuronal culture is modeled as a random network of N leaky integrate-and-fire
neurons. Synapses provide post-synaptic currents with a difference-of-exponentials
time-course [15]. For simplicity, all synapses are excitatory, to mimic common experimental conditions in which inhibitory synaptic transmission is pharmacologically blocked [53]. Neurons in culture show a rich spontaneous activity that originates from both fluctuations in the membrane potential and small noise currents
in the pre-synaptic terminals [14]. To reproduce spontaneous firing, each neuron is
driven by statistically independent Poisson spike sources with a small rate, in addition to recurrent synaptic inputs.
A key feature required for the reproduction of network bursting is the introduction of synaptic short-term depression, described through classic Tsodyks-Markram
equations [57], which take into account the limited availability of neurotransmitter resources for synaptic release and the finite time needed to recharge a depleted synaptic terminal. Dynamics comparable with experiments [23] are obtained
by setting the synaptic weights of internal connections so as to obtain a network bursting rate of 0.10 ± 0.01 Hz. To achieve this target rate, an automated conductance adjustment procedure is used [55] for every considered topology.
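Purely as an illustration of the short-term depression mechanism described above, the following is a minimal sketch of depression-only Tsodyks-Markram resource dynamics; parameter names and values are placeholders and not necessarily those of [55, 57].

```python
import numpy as np

def update_resources(x, spiked, dt, U=0.3, tau_D=0.5):
    """Advance per-synapse resource variables x (in [0, 1]) by one step dt (s).

    spiked is a boolean array marking presynaptic spikes in this step. The
    released fraction U * x sets the relative efficacy used to scale the
    post-synaptic conductance; resources are then depleted and slowly recover.
    """
    efficacy = U * x * spiked                   # neurotransmitter actually released
    x = np.where(spiked, x - U * x, x)          # depletion upon each presynaptic spike
    x = x + dt * (1.0 - x) / tau_D              # recovery of the depleted terminal
    return np.clip(x, 0.0, 1.0), efficacy
```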
Concerning the structural topologies used in more detail, connectivity is always sparse. The probability of connection is "frozen" so as to yield an average degree of about 100 neighbor neurons, compatible with average degrees reported previously for neuronal cultures in vitro of the mimicked age (days in vitro, DIV) and density [44, 53].
Two general types of networks are then considered: (i) a locally-clustered ensemble,
where the probability of connection decays with the planar distance between two
neurons and connections tend therefore to be locally clustered; and (ii) a non-locally
clustered ensemble, with connections first randomly drawn and, then, rewired to
reach a specified target degree of clustering.
Finally, surrogate calcium fluorescence signals are generated based on the spiking dynamics of the simulated cultured network. A common fluorescence model introduced in [59] gives rise to an initial fast increase of fluorescence after activation, followed by a decay with a slow time-constant $\tau_{\mathrm{Ca}} = 1$ s. Such a model describes the intra-cellular concentration of calcium bound to the fluorescent probe. The concentration changes rapidly for each action potential locally elicited in a time bin corresponding to the acquisition frame. The net fluorescence level $F_i$ associated with the activity of a neuron $i$ is finally obtained by further feeding the calcium concentration into a saturating static non-linearity, and by adding Gaussian-distributed noise. Example surrogate calcium fluorescence time-series, together
with actual recordings for comparison, can be seen in Figure 1B.
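A minimal sketch of such a surrogate fluorescence generator is given below (fast per-spike calcium increment, slow decay with $\tau_{\mathrm{Ca}} = 1$ s, saturating non-linearity, additive Gaussian noise); all parameter values are placeholders, and the full model of [55, 59] additionally includes artifacts such as light scattering.

```python
import numpy as np

def surrogate_fluorescence(spike_counts, dt, tau_ca=1.0, a_ca=50.0,
                           k_sat=300.0, noise_std=0.03, rng=None):
    """spike_counts: (n_neurons, n_frames) spikes per acquisition frame of length dt (s)."""
    rng = np.random.default_rng() if rng is None else rng
    n_neurons, n_frames = spike_counts.shape
    ca = np.zeros(n_neurons)                       # bound-calcium concentration proxy
    fluo = np.empty((n_neurons, n_frames))
    for t in range(n_frames):
        ca = ca * np.exp(-dt / tau_ca)             # slow decay between frames
        ca = ca + a_ca * spike_counts[:, t]        # fast increase for each elicited spike
        fluo[:, t] = ca / (ca + k_sat)             # saturating static non-linearity
    fluo += rng.normal(0.0, noise_std, size=fluo.shape)   # additive Gaussian noise
    return fluo
```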
All the details and the parameters of the used neuronal and network models and
calcium surrogate signals —including the modeling of systematic artifacts like light
scattering for an increased realism— can be found in the original publication by
Olav Stetter et al. [55]. With the selected parameters, the simulated neuronal cultures display temporally irregular network bursting, as highlighted by Figure 1C, reporting fluorescence averaged over the entire network, and Figure 1D, showing the right-skewed distribution of average fluorescence, with its right tail associated with the high fluorescence during network bursts.
3.2 Extraction of directed functional networks
A generalized TE score is calculated for every possible directed pair of nodes in the analyzed simulated culture. The adjacency matrix of a directed functional network is then obtained by applying a threshold to the TE values at an arbitrary level. Only links whose TE value rises above this threshold are retained in the reconstructed digraph. Selecting a threshold for the inclusion of links corresponds to setting the average degree of the reconstructed network. An expectation about the average degree in the culture thus directly translates into a specific threshold number of links to include.
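In code, the link-inclusion criterion can be sketched as follows (illustrative names): the TE scores are ranked and only as many top links are kept as needed to reach the expected average degree.

```python
import numpy as np

def threshold_te(te_matrix, target_avg_degree):
    """te_matrix[i, j] = TE score for the directed link i -> j (diagonal ignored)."""
    n = te_matrix.shape[0]
    scores = te_matrix.astype(float).copy()
    np.fill_diagonal(scores, -np.inf)             # exclude self-links
    n_links = int(round(target_avg_degree * n))   # number of directed links to keep
    top = np.argsort(scores, axis=None)[::-1][:n_links]
    adjacency = np.zeros_like(scores, dtype=bool)
    adjacency[np.unravel_index(top, scores.shape)] = True
    return adjacency
```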
The estimation problem for TE scores themselves is, in this context, less severe
than usual. Indeed time-series generated by models are less noisy than real experimental recordings. Furthermore they can be generated to be as long as required for
proper estimation. Yet, the length of simulated calcium fluorescence time-series is
restricted in [55] to a duration achievable in actual experiments. It is important to mention that, for network reconstruction, it is not required to correctly estimate the values of individual TE scores: only their relative ranking matters. Since firing and connectivity are homogeneous across the simulated network, biases are not expected to vary strongly across different edges. Moreover, the problem of assessing statistical significance is also irrelevant, since the threshold used for deciding link inclusion is based on an extrinsic criterion (i.e. achieving a specific target average degree compatible with experimental knowledge), not dependent on TE estimation itself. Thus, even rough plug-in estimates of generalized TE can be adopted (we have verified, in particular, that bootstrap corrections would not alter the obtained results).
3.3 Zero-lag causal interactions for slow-rate calcium imaging
The original formulation of Transfer Entropy was meant to detect the causal influence of events in the past on events at a later time. However, since the acquisition frame of calcium imaging techniques is an order of magnitude longer than the actual synaptic and integration delays of neurons in the culture, it is conceivable that many "cause" and "effect" spike pairs may occur within the same acquisition frame. A practical trick to avoid completely ignoring such causally-relevant correlation events is to include "same-bin" interactions in the evaluation of (state-conditioned) Transfer Entropy [55]. In practice, referring to the parameter labels of Equation 2, this amounts to setting $r = 1$ but $s = 0$, i.e. to conditioning the probability of transitions from past to present values of the time-series $Y(t)$ on present values of the (putative cause) time-series $X(t)$. When not otherwise specified, Transfer Entropy analyses of calcium fluorescence time-series from neuronal cultures will be performed taking $(r = 1, s = 0, p = 2, q = 1)$.
Note that a similar approach is adopted in this volume’s chapter by Luca Faes, to
cope with volume conduction in a Granger Causality analysis of EEG signals.
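With the hypothetical estimator sketched in Section 2, the same-bin correction is just a matter of starting the source past at lag zero:

```python
# Zero-lag ("same-bin") interactions: source past starts at lag 0 (s = 0), target
# modeled at Markov order 2 (p = 2), matching the settings (r = 1, s = 0, p = 2, q = 1)
# quoted above; x_sym and y_sym are quantized fluorescence traces (placeholder names).
te_zero_lag = transfer_entropy(x_sym, y_sym, tau=1, r=1, s=0, p=2, q=1)
```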
Fig. 2 Functional multiplicity in simulated cultures. A Three ranges of amplitude are highlighted in the distribution of network-averaged fluorescence G(t). Directed functional interactions associated with different dynamical regimes are assessed by conditioning the analysis on these specific amplitude ranges. Range I corresponds to low-amplitude noise, range II to fluorescence levels typical of sparse inter-burst activity, and range III to the high average fluorescence during network bursts. B Visual representation of the reconstructed functional network topologies in the three considered dynamical regimes (only the top 10% of TE-score links are shown). Qualitative topological differences between the three extracted networks are evident. C ROC analysis of the correspondence between inferred functional networks and the ground-truth structural network. Overlap is random for the noise-dominated range I, is marked for the inter-burst regime II, and is only partial for the bursting regime III.
3.4 State-selection constraints for neuronal cultures
Neuronal cultures in vitro and in silico display stochastic-like switching between
relatively quiet inter-burst periods, characterized by low-rate and essentially asynchronous firing of a few neurons at a time, and bursting events, characterized by an exponentially fast rise of the number of recruited synchronously firing neurons. In
general, there is no reason to expect that these two regimes should be associated with identical directed functional connectivity networks. As a matter of fact, the firing of a
neuron during an inter-burst period is facilitated by firing of pre-synaptic neurons.
As a consequence, it is reasonable to expect that directed functional connectivity associated to inter-burst epochs has a large overlap with the underlying structural connectivity of the culture. On the contrary, during a bursting event and its advanced
buildup phase, the network is over-excitable and the firing of a single neuron can
easily cause the firing within a very short time of many other neurons not necessarily connected to it. For this reason, intuition suggests that the directed functional
connectivity during bursting events is dominated by collective behavior, rather than
by synaptic coupling.
To confirm these expectations, it is necessary to extract directed functional interactions from calcium fluorescence time-series separately for each dynamical regime.
This can be achieved by defining an appropriate set of filtering constraints for the
evaluation of state-conditioned Transfer Entropy. A fast way to implement these
constraints is to track variations of the average fluorescence $G(t) = \frac{1}{N}\sum_{i=1}^{N} F_i(t)$ of the entire network. Fully developed network bursts will be associated with anomalously high average network fluorescence $G(t)$ (fluorescence range denoted as III in Figure 2A). Conversely, inter-burst epochs will be associated with weaker network fluorescence (fluorescence range denoted as II in Figure 2A). Too low a network fluorescence would be indistinguishable from mere baseline noise (fluorescence range
denoted as I in Figure 2A).
A straightforward way to define a “state” based on average fluorescence might
thus be to restrict sampling to acquisition frames t in which the network-averaged
fluorescence G(t) falls within a prescribed range:
$$\mathcal{C} = \{\, t \;|\; G_{\mathrm{bottom}} < G(t) \le G_{\mathrm{top}} \,\} \qquad (3)$$
Different ranges of fluorescence will identify different dynamical regimes, to which
the evaluation of state-conditioned Transfer Entropy will be particularized.
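Concretely, the filter of Equation 3 reduces to a boolean mask over acquisition frames (placeholder thresholds; the optimized values used in [55] differ), which can be combined with the estimator sketched earlier:

```python
# Sketch of the state-selection filter of Eq. (3): keep only acquisition frames whose
# network-averaged fluorescence G(t) falls in a given range, and evaluate the
# state-conditioned TE of Eq. (2) on those frames only, reusing the hypothetical
# estimator and the zero-lag settings sketched above.
G = fluo.mean(axis=0)                        # network-averaged fluorescence G(t)
G_bottom, G_top = 0.05, 0.15                 # placeholder range delimiting one regime
in_range = (G > G_bottom) & (G <= G_top)     # boolean mask defining the set C
te_state = transfer_entropy(x_sym, y_sym, tau=1, r=1, s=0, p=2, q=1,
                            valid_mask=in_range)
```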
3.5 Functional multiplicity in simulated cultures
The state dependency of directed functional connectivity is illustrated by generating
a random network from the local clustering ensemble and by simulating its dynamics. The resulting distribution of network-averaged fluorescence and the three
dynamical ranges we focus on in detail are highlighted in Figure 2A.
For simulated data, the inferred connectivity can be directly compared to the
ground truth. A standard Receiver Operating Characteristic (ROC) analysis is used
to quantify the quality of reconstruction. ROC curves are generated by gradually
moving a threshold level from the lowest to the highest TE value, and by plotting
at each point the fraction of included true positive links against the corresponding
fraction of included false positive links. The functional networks extracted in the
three dynamical ranges I, II and III and their relation with structural connectivity are
shown, respectively in Figures 2B and 2C. For a fair comparison, an equal number
of samples is used to estimate TE in the three fluorescence ranges.
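The ROC construction can be sketched as follows (illustrative implementation, excluding self-connections): the inclusion threshold is swept from the highest to the lowest TE score while the fractions of recovered true and false positive links are accumulated.

```python
import numpy as np

def roc_curve(te_matrix, true_adjacency):
    """Fractions of false and true positive links as the TE threshold is swept."""
    n = te_matrix.shape[0]
    off_diag = ~np.eye(n, dtype=bool)
    scores = te_matrix[off_diag]
    truth = true_adjacency[off_diag].astype(bool)
    order = np.argsort(scores)[::-1]                  # from highest to lowest TE
    tp = np.cumsum(truth[order]) / truth.sum()        # fraction of true positives
    fp = np.cumsum(~truth[order]) / (~truth).sum()    # fraction of false positives
    return fp, tp
```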
The lowest range I corresponds to a regime in which spiking-related signals are
buried in noise. Correspondingly, the associated functional connectivity is indistinguishable from random, as indicated by a ROC curve close to the diagonal. Note,
however, that a more extensive sampling (i.e. using all the available observation
samples) would show that some information about structural topology is still conveyed by the activity in this regime [55].
At the other extreme, represented by range III —associated with fully developed synchronous bursts— the functional connectivity also has a poor overlap with the
underlying structural network. The extracted functional networks are characterized
by the existence of hub nodes with an elevated out- and in-degree. The spatio-temporal organization of bursting can be described in terms of these functional connectivity hubs, since nodes within the neighborhood of the same functional hub experience a stronger mutual synchronization than arbitrary pairs of nodes across the network [55]. In particular, Figure 2B displays three visually evident communities of "bursting-together" neurons.
The best agreement between functional and excitatory structural connectivity is
obtained for the middle range II, corresponding to above-baseline-noise activity during inter-burst epochs and the early build-up phases of synchronous bursts.
Thus, the retrieved TE-based functional networks confirm intuitive expectations
outlined in the previous section. Note that the state-dependency of functional connectivity is not limited to synthetic data. Very similar patterns of state-dependency
are observed also in real data from neuronal cultures. In particular, in both simulated
and real cultures, the functional connectivity associated to the buildup of bursts displays a stronger clustering level than during inter-burst periods [55].
The existence of such different topologies of functional interactions, stemming from different dynamical ranges of the same structural network, constitutes a perfect example of the notion of functional multiplicity outlined in the introduction. Certainly, it is possible to define "right" ranges for structural network reconstruction, which is important for practical applications in connectomics. However, this statement should not be over-interpreted to claim that the directed functional connectivity inferred in a regime like the one associated with range III is "wrong". On the contrary, the extracted functional connectivity correctly captures the topology of causal
influences in such a collective state, in which firing of a single neuron is likely to
trigger firing of a whole dense community of nodes.
3.6 Structural connectivity from directed functional connectivity
A more refined analysis of function-to-structure overlap suggests that the best matching is achieved for a range including fluorescence levels just to the right of the Gaussian-like peak in the histogram of Fig. 2A [55]. Characterizing state-dependency thus allows defining the best TE-conditioning range for reconstruction of the structural
Fig. 3 From functional to structural connectivity in simulated cultures. Good matching between
structural and inferred directed functional connectivity is achieved in simulated neuronal cultures
by optimizing the state-conditioning of TE and by correcting for slow acquisition rate of calcium
imaging. A ROC curves for a network reconstruction with generalized TE with fluorescence data
optimally conditioned at $G < G_{\mathrm{top}} = 0.112$. The area surrounded by dashed lines depicts the ROC fluctuation interval, based on the analysis of 6 networks. The black ROC curve refers to reconstruction
performed with TE using (r = 1, s = 0, p = 2, q = 1), i.e. introducing zero-lag causal interactions.
The gray curve is for (r = s = 1, p = q = 1), i.e. always Markov Order 2, but not correcting for
slow acquisition rate. B Clustering of inferred directed functional connectivity as a function of
ground-truth structural clustering. In TE-based reconstructions, functional and structural clustering are linearly correlated, in contrast with cross-correlation-based reconstructions, which overestimate clustering. Figure adapted from [55]. (Copyright: Stetter et al. 2012, Creative Commons licence).
connectivity of the culture. This range should exclude regimes of highly synchronized activity (like range III) while keeping most of the data points for the analysis. More
details are provided in the original study by Stetter et al. [55], showing that very
good reconstruction performance is achieved on simulated data, by implementing a
state-selection filter with an optimized threshold $G_{\mathrm{top}}$ close to the upper limit of range II and no lower threshold $G_{\mathrm{bottom}}$. ROCs corresponding to this choice can be seen
in Figure 3A. Good reconstruction is possible for a vast spectrum of topologies, as
denoted by a good correlation between ground-truth structural clustering coefficient
and reconstructed functional clustering level. Note that a cross-correlation analysis performed over the same state-conditioned set of simulated observations would
systematically overestimate the level of clustering (Figure 3B, cf. [55]).
3.7 Structural degeneracy in simulated cultures
Different dynamical regimes of a structural network can give rise to multiple functional networks. At the same time, functional networks associated to comparable
dynamical regimes are similar. Therefore, since comparable dynamical regimes can
Fig. 4 Structural degeneracy in simulated cultures. A Examples of spike raster plots for three
simulated cultures with different structural clustering coefficients (non-local clustering ensemble,
structural clustering coefficient equal, respectively from left to right, to 0.1, 0.3 and 0.7). B As revealed by the histograms of inter-burst intervals, the temporally-irregular network bursting dynamics of these strongly different cultures are very similar. Vertical lines indicate the mean of each distribution. C Panels below the IBI distributions graphically illustrate the amount of clustering in the actual structural network and in the directed functional network reconstructed from fluorescence range III (bursting regime), as defined in Figure 2. To very different degrees of structural clustering correspond equivalent, elevated levels of functional clustering, due to the common bursting
statistics. Figure adapted from [55]. (Copyright: Stetter et al. 2012, Creative Commons licence).
be generated by very different networks, the same functional connectivity topology can be generated by multiple structural topologies.
Figure 4 illustrates the dynamics of three simulated cultures with different clustering coefficients (but the same total number of links). The synaptic strength is adjusted in each network using an automated procedure to obtain comparable bursting and firing rates (see Stetter et al. 2012 [55] for details on the procedure and on the models). The simulated spiking dynamics of the three cultures in silico are shown in the raster plots of Figure 4A. These three networks indeed display very similar bursting dynamics, not only in terms of the mean bursting rate, but also in terms of the entire inter-burst interval (IBI) distribution, shown in Figure 4B.
Based on these bursting dynamics, directed functional connectivity is extracted
for the three differently clustered structural networks, but by state-conditioning TE
on the same dynamic range, matching range III in Figure 2, i.e. the fully-developed burst regime. The extracted functional networks always have an elevated clustering level (close to 0.7), in contrast with the actual structural clusterings, varying in a
broad range between 0.1 and 0.5 (see Figure 4C).
The automatic procedure for the generation of networks with similar bursting
dynamics is not guaranteed to converge for such a wide range of clustering coefficients. Thus, the illustrative simulations of Figure 4 genuinely confirm that the relation between network dynamics and network structure is not trivially "one-to-one",
manifesting the phenomenon of structural degeneracy, outlined in the introduction.
4 Directed functional interactions in motifs of oscillating areas
Ongoing local oscillatory activity rhythmically modulates neuronal excitability in the cortex [60]. As also reviewed in Andre Bastos' contribution to this volume, the communication-through-coherence hypothesis [29] states that neuronal groups oscillating in a suitable phase coherence relation —such as to align their respective
“communication windows”— are likely to interact more efficiently than neuronal
groups which are not synchronized. Similar mechanisms are believed to be involved
in selective attention and top-down modulation [6, 24, 31, 38].
To cast light on the role of self-organized collective dynamics in establishing
flexible patterns of communication-through-coherence, it is possible to introduce
simple models of generic motifs of interacting brain areas (Figure 5A), each one
undergoing locally generated coherent oscillations (Figure 5B). Simple mesoscopic
circuits involving a small number of local areas, mutually coupled by long-range
excitatory projections (Figure 5C) are in particular considered. As analyzed also
with mean-field developments in [2, 4], phase-locking between the oscillations of
different local areas develops naturally in such structural motifs. Phase-relations
between the oscillations of different areas depend non-trivially on the delays of local and long-range interactions and on the actual strength of local inhibition. When local inhibition becomes sufficiently strong, phase-locking tends to occur in an out-of-phase fashion, in which phase-leading and phase-lagging areas emerge, despite the
symmetry of their mutual long-range excitatory coupling [2, 4].
Through large-scale simulations of networks of spiking neurons representing cortical structural motifs [54], directed functional connectivity between the different
local areas involved is extracted through state-conditioned TE analyses of simulated
local-field-potential (LFP) signals. Once again, it is found that "causality follows dynamics", in the sense that different phase-locked patterns of collective oscillations are mapped to different directed functional connectivity motifs [4]. The in silico approach used also allows investigating how information encoded at the level of the detailed spiking activity of thousands of neurons is routed between the modeled areas, depending on the active directed functional connectivity. As a matter of fact, TE-based directed functional connectivity reflects the collective activity of populations of neurons, while neuronal representations of information are carried by spiking activity. TE-based analyses of "macroscopic" signals, like LFPs or EEGs, are therefore not guaranteed a priori to describe information transmission at
the level of "microscopic" spiking activity.
Fig. 5 Model oscillating areas. A A local area is modeled as a random network of conductance-based excitatory and inhibitory neurons. A moderate fraction of them is transduced with Channelrhodopsin (ChOP) conductances [68], allowing optogenetic perturbation. B Sparsely-synchronized oscillations develop, in which Poisson-like firing of single neurons and strongly oscillating LFPs coexist. C Two local areas mutually coupled by long-range excitation.
Complementary analyses are thus required to capture actual flows of represented information, in the conventional sense of neuronal information processing.
The spiking of individual neurons can be very irregular even when the collective
rate oscillations are regular (cf. Figure 5B). Therefore, even local rhythms in which the firing rate is modulated in a very stereotyped way might correspond to irregular (highly entropic) sequences of codewords encoding information in a digital-like
fashion (e.g. by the firing —“1”— or missed firing —“0”— of specific spikes at
a given cycle [56]). In such a framework, oscillations would not directly represent
information, but would rather act as a carrier of "data-packets" associated with the spike patterns of synchronously active cell assemblies. By quantifying through a Mutual Information (MI) analysis the maximum amount of information potentially encoded in the spiking activity of a local area, and by evaluating how much of this information is actually transferred to distant interconnected areas, it is possible to demonstrate that different directed functional connectivity configurations lead to different
modalities of information routing. Therefore, the pathways along which information
propagates can be reconfigured within the time of a few reference oscillation cycles,
by switching to a different effective connectivity motif, for instance by means of a
spatially and temporally precise optogenetic stimulation [4, 66].
4.1 Oscillating local areas “in silico”
Each local area is represented by a random network of $N_E = 4000$ excitatory and $N_I = 4000$ inhibitory Wang-Buzsáki-type conductance-based neurons [62]. The
Wang-Buzsáki model is described by a single compartment endowed with sodium
and potassium currents. Each neuron receives an external noisy driving current due
to background Poisson synaptic bombardment, representing cortical noise. Other
inputs are due to recurrent interactions with other neurons in the network. Excita-
16
Demian Battaglia
tory synapses are of the AMPA type and inhibitory synapses of the GABA$_A$ type
and are modeled as time-dependent conductances with difference-of-exponential
time-course [15]. LFP signals $\Lambda(t) = \langle V(t) \rangle$ are defined as the average membrane potential over the $N_E + N_I$ cells in each area.
Connectivity is random. Short-range connections within a local area are both excitatory and inhibitory. Excitatory neurons are also allowed to establish long-range connections toward distant areas. For the parameters used, each area develops a sparsely synchronized collective oscillation with a collective frequency in the 40–60 Hz range. The firing of individual neurons remains, on average, at about one spike every 6 LFP oscillation cycles. A complete description of the model can be found
in [4]. For simplicity, only fully connected structural motifs involving a few areas
(K = 2, 3) are studied. Note however that the used approach might be extended to
other structural motifs [54] or even to large-scale thalamocortical networks [35, 42].
4.2 State-selection constraints for motifs of oscillating areas
The dynamical regimes generated by motifs of interconnected areas are phase-locked oscillatory configurations. Therefore a natural way of defining state-selection constraints is to restrict the analysis to epochs with consistent phase-relations between the oscillations of different areas. Phases are extracted from LFP time-series with spectral analysis techniques like the Hilbert transform. Considering then the instantaneous phase-differences $\Delta\Phi_{ab}(t) = (\Phi[\Lambda_a(t)] - \Phi[\Lambda_b(t)]) \bmod 2\pi$ (between pairs of areas $a$ and $b$) and the stable values $\phi_{ab}$ around which they fluctuate in a given locking mode, state selection constraints can be written as:

$$\mathcal{C} = \{\, t \;|\; \forall (a,b),\; (\phi_{ab} - \delta) < \Delta\Phi_{ab}(t) < (\phi_{ab} + \delta) \,\} \qquad (4)$$
In the more realistic case in which coherent oscillations and phase-locking arise only
transiently [58], unlike in the model of [4] in which oscillations are stationary and
stable, additional constraints might be added, guaranteeing that the instantaneous
power of the LFP time-series integrated over a specified frequency band (e.g. the gamma
band) exceeds a given minimum threshold.
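A minimal sketch of the phase-based filter of Equation 4 for a single pair of areas, using the Hilbert transform as mentioned above (names are placeholders), is:

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_mask(lfp_a, lfp_b, phi_ab, delta):
    """Boolean mask: True at times when Delta Phi_ab(t) stays within +/- delta of phi_ab."""
    phase_a = np.angle(hilbert(lfp_a))                     # instantaneous phases
    phase_b = np.angle(hilbert(lfp_b))
    dphi = np.mod(phase_a - phase_b, 2.0 * np.pi)          # Delta Phi_ab(t) in [0, 2*pi)
    dist = np.abs(np.angle(np.exp(1j * (dphi - phi_ab))))  # circular distance to phi_ab
    return dist < delta
```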
Since the sampling rate of the electrophysiological recordings simulated by the computational model is high, there is no need to incorporate zero-lag causal interactions. Therefore, the standard settings $(r = s = 1, p = q = 1)$ are used.
Confidence intervals and the statistical significance of causal interaction strengths are assessed by comparison with TE estimates from surrogate time-series, randomly resampled through a geometric bootstrap procedure [49], which preserves the autocorrelation structure of individual time-series and is therefore compliant with their oscillatory nature. Details can be found in [4].
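For illustration, a stationary ("geometric") block bootstrap in the spirit of the cited procedure can be sketched as follows; blocks start at random positions and have geometrically distributed lengths, which approximately preserves the autocorrelation structure of the series (the exact surrogate-generation procedure of [4, 49] may differ in its details).

```python
import numpy as np

def stationary_bootstrap(x, mean_block_len=50, rng=None):
    """Surrogate series built from wrapped blocks of geometrically distributed length."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x)
    n = len(x)
    surrogate = np.empty(n, dtype=x.dtype)
    i = 0
    while i < n:
        start = rng.integers(n)                        # random block start
        length = rng.geometric(1.0 / mean_block_len)   # geometric block length
        block = np.take(x, np.arange(start, start + length), mode="wrap")
        surrogate[i:i + length] = block[:n - i]        # truncate the last block if needed
        i += length
    return surrogate
```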
Fig. 6 Functional multiplicity in motifs of oscillating areas. Dynamical states and resulting directed functional connectivities generated by structural motifs of K = 2, 3 mutually and symmetrically connected brain areas. A–C Simulated "LFPs" and spike trains of the two populations of a K = 2 motif for three different strengths of the symmetric inter-areal coupling, leading to more or less regular phase-locked states. D–F Transfer Entropies for the two possible directions of functional interaction, associated with the dynamic states in panels A–C. A grey band indicates the threshold for statistical significance. Below the TE plots: graphic depiction of the functional interactions between the two areas, as captured by Transfer Entropy. Only arrows corresponding to significant causal interactions are shown. Arrow thickness reflects TE strength. G Analogous directed functional connectivity motifs generated by a K = 3 symmetric structural motif. Multiplier factors indicate multistability between motifs with the same topology but different directions. Figure adapted from [4]. (Copyright: Battaglia et al. 2012, Creative Commons licence).
4.3 Functional multiplicity in motifs of oscillating areas
Different dynamical states —characterized by oscillations with different phase-locking relations and degrees of periodicity— arise from simple symmetric structural topological motifs [2, 4]. Changes in the strength of local inhibition, of long-range excitation or of the delays of local and long-range connections can lead to phase transitions between qualitatively distinct dynamical states (Figure 6A–C). Moreover, within broad ranges of parameters, multi-stabilities between different phase-locking patterns take place even without changes in connection strength or delay.
Multivariate time-series of simulated “LFPs” are generated for different dynamical states of the model structural motifs and TEs for all the possible directed pairwise interactions are calculated. The resulting directed connectivities are depicted
in diagrammatic form by drawing an arrow for each statistically significant causal
interaction, with the thickness of each arrow encoding the strength of the corresponding interaction (Figure 6D–F). These graphical representations thus make it apparent that many directed functional connectivity motifs emerge from the same structural motif. Such functional motifs are organized into families. Motifs within the same family cor-
respond to dynamical states which are multi-stable for a given choice of parameters,
while different families of motifs are obtained for different ranges of parameters
leading to different ensembles of dynamical states.
A first family of functional motifs occurs for weak inter-areal coupling. In this
case, neuronal activity oscillates in a roughly periodic fashion (Figure 6A). When
local inhibition is strong, the local oscillations generated within different areas lock
in an out-of-phase fashion. It is therefore possible to identify a leader area whose
oscillations lead in phase over the oscillations of laggard areas [2]. In this family, causal interactions are statistically significant only for pairwise interactions proceeding from a phase-leading area to a phase-lagging area, as shown by the box-plots of Figure 6D (unidirectional driving). The anisotropy of functional influences in the leader-to-laggard and laggard-to-leader directions can be understood
in terms of the communication-through-coherence theory. Indeed the longer latency
from the oscillations of the laggard area to the oscillations of the leader area reduces
the likelihood that rate fluctuations originating locally within a laggard area trigger
correlated rate fluctuations within a leading area [67].
A second family of functional motifs occurs for intermediate inter-areal coupling.
In this case, the periodicity of the “LFP” oscillations is disrupted by the emergence
of large correlated fluctuations in oscillation cycle amplitudes and durations. Phase-locking between "LFPs" becomes only approximate, even if still out-of-phase on
average. The rhythm of the laggard area is now more irregular than the rhythm in
the leader area (Figure 6B). Fluctuations in cycle length do occasionally shorten
the laggard-to-leader latencies, enhancing non-linearly and transiently the influence
of laggard areas on the leader activity. Correspondingly, TEs in leader-to-laggard
directions continue to be larger, but TEs in laggard-to-leader directions are now also
statistically significant (Figure 6E). The associated effective motifs are no longer unidirectional, but continue to display a dominant direction (leaky driving).
A third family of effective motifs occurs for stronger inter-areal coupling. In
this case the rhythms of all the areas become equally irregular, characterized by
an analogous level of fluctuations in cycle amplitudes and durations. During brief
transients, leader areas can still be identified, but these transients do not lead to
a stable dynamic behavior and different areas in the structural motif continually
exchange their leadership role (Figure 6C). As a result of the instability of phase-leadership relations, only average TEs can be evaluated, yielding equally large
TE values for all pairwise directed interactions (Figure 6F, mutual driving).
Analogous unidirectional, leaky or mutual driving motifs of functional interaction can be found in larger motifs with K = 3 areas, as shown by Figure 6G [4].
4.4 Control of information flow directionality
The considered structural motifs are left unchanged after a permutation of interconnected areas. However, while anti-phase or in-phase locking configurations would
share this permutation symmetry with the full system, this is not true for the out-of-
Fig. 7 Switching information flow in motifs of oscillating areas. A A precisely-phased optogenetic or electric stimulation pulse can trigger switching between alternative phase-locking modes of a structural motif of oscillating areas (here shown: a switch from black-preceding-gray to gray-preceding-black out-of-phase locking). For a given perturbation intensity, the probability that a pulse triggers attractor switching concentrates within a narrow interval of application phases. B–C Actual information transmission efficiency is quantified by the Mutual Information (MI) between spike trains of pairs of source and target cells connected by a unidirectional transmission-line (TL) synapse, normalized by the entropy (H) of the source cell. Boxplots show values of MI/H for different groups of cell pairs and directed functional motifs. Black and pale gray arrows below boxplots indicate pairs of cells interconnected by the TL marked with the corresponding color. A dot indicates control pairs of cells interconnected by ordinary weak synapses. The dominant directionality of the active functional motif is also shown. B Unidirectional driving functional motif family. Communication efficiency is enhanced only along the TL aligned to the directionality of the active functional motif, while it is indistinguishable from control along the other TL. C Leaky driving functional motif family. Communication efficiency is enhanced along both TLs, but more along the TL aligned to the dominant directionality of the active functional motif. Figure adapted from [4]. (Copyright: Battaglia et al. 2012, Creative Commons licence).
phase-locking configurations stable for strong local inhibition (cf. Figure 6A–B). In general, one speaks of spontaneous symmetry breaking whenever a system
with specific symmetry properties assumes dynamic configurations whose degree
of symmetry is reduced with respect to the full symmetry of the system. However,
due to the overall structural symmetry, configurations in which the areas exchange
their leader or laggard roles must also be stable, i.e. the complete set of dynamical
attractors continues to be symmetric, even if individual attractors are asymmetric.
Exploiting multi-stability, fast reconfiguration of directed functional influences
can be obtained just by inducing switching between alternative multi-stable attractors, associated with functional motifs of the same family but with different directionality. As elaborated in [4], an efficient way to trigger "jumps" between phase-locked
configurations is to perturb locally the dynamics of ongoing oscillations with precisely phased stimulation pulses. Such an external perturbation can be provided for
instance by optogenetic stimulation, if a sufficient fraction of cells in the target area
has been transduced with light-activated conductances. Simulation studies [66] suggest that even transduction rates as low as 5-10% might be sufficient to optogenetically induce functional motif switching, if the pulse perturbations are properly
phased with respect to the ongoing rhythm (Figure 7A), as predicted also by a mean-field theory [4]. But what is the impact of functional motif switching on the actual
flow of information encoded at the microscopic level of detailed spiking patterns?
In the studied model, rate fluctuations can encode only a limited amount of information, because firing rate oscillations are rather stereotyped. Higher amounts of
information can be carried by spiking patterns, since the spiking activity of single
neurons during sparsely synchronized oscillations remains very irregular and thus
characterized by a potentially large entropy. To quantify information exchanged by
interacting areas, a reference code is considered, in which a “1” or a “0” symbol
denote respectively firing or missed firing of a spike by a specific neuron at each
given oscillation cycle. Based on such an encoding, the neural activity of a group
of neurons is mapped to digital-like streams, "clocked" by the network rhythm, in which a different "word" is broadcast at each oscillation cycle (such a code is introduced here purely as a theoretical construct grounding a rigorous analysis of information transmission, without any claim that it is actually being used by the brain).
Focusing on a fully symmetric structural motif of K = 2 areas, the network
is modified by embedding into it transmission lines (TLs), i.e. mono-directional
fiber tracts dedicated to inter-areal communication. In more detail, selected sub-populations of source excitatory neurons within each area establish synaptic contacts with matching target excitatory or inhibitory cells in the other area, in a one-to-one cell arrangement. Synapses in a TL are strengthened with respect to usual
synapses, in the attempt to enhance communication capacity, but not too much, in
order not to alter phase-relations between the collective oscillations of the two areas
(for more details, see [4]). The information transmission efficiency of each TL, for
the case of different effective motifs, is assessed by quantifying Mutual Information
(MI) [56] between the “digitized” spike trains of pairs of source and target cells.
Since a source cell spikes on average every five or six oscillation cycles, the firing
of a single neuron conveys $H \simeq 0.7$ bits of information per oscillation cycle. MI normalized by the source entropy $H$ indicates the fraction of this information reaching the target cell. Due to the possibility of generating very long simulated recordings in stationary conditions, straight plug-in estimates of MI and $H$ already provide reasonable levels of accuracy (in the sense that taking into account finite-sampling corrections [56] would not change the described phenomenology [4]).
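A hedged sketch of this spike-code analysis (placeholder names): spike trains are digitized into one binary symbol per oscillation cycle, and the per-cycle Mutual Information between source and target cells is obtained by plug-in estimation and normalized by the source entropy.

```python
import numpy as np

def plugin_entropy(symbols):
    """Plug-in entropy (bits) of a discrete symbol sequence."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def normalized_mi(source_bits, target_bits):
    """source_bits, target_bits: binary arrays, one symbol per oscillation cycle."""
    joint = source_bits.astype(int) * 2 + target_bits.astype(int)   # joint symbol in {0,..,3}
    h_src = plugin_entropy(source_bits)
    mi = h_src + plugin_entropy(target_bits) - plugin_entropy(joint)
    return mi / h_src            # fraction of source information reaching the target
```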
As shown by Figure 7B–C, the communication efficiency of embedded TLs depends strongly on the active functional motif. Preparing the structural motif in a
unidirectional driving functional motif (Figure 7B), communication is nearly optimal along the TL aligned with the functional motif itself. The misaligned TL, however, shows no enhancement with respect to control (i.e. pairs of connected cells
not belonging to a TL). In the case of leaky driving functional motifs (Figure 7C),
communication efficiency is boosted for both TLs, but more for the TL aligned with
the dominant functional influence direction. For both families of functional motifs,
communication efficiencies of the two embedded TLs can be “swapped” within one
or two oscillation cycles only, by reversing the dominant functional influence direction through a suitable perturbation inducing attractor switching.
² Such a code is introduced here purely as a theoretical construct grounding a rigorous analysis of information transmission, without any claim that it is actually used in the brain.
In conclusion, the parallelism between TE analyses of directed functional connectivity and MI analyses of information transmission is manifest. In the simulated structural motifs, indeed, the information flow quantified by spike-based MI closely follows, in both direction and strength, the functional topology inferred by LFP-based TE.
5 Function from structure, via dynamics
The architect Louis Sullivan popularized the celebrated tag-line that “form follows function”. The two model frameworks reviewed here, cultures of dissociated neurons and motifs of interacting oscillating areas, disclose on the contrary that function does not follow structure, or at least not in any trivial sense. Both functional multiplicity and structural degeneracy can be understood by considering the primacy of dynamics in determining the emergent functional interactions. Function, therefore, seems to follow dynamics rather than structure.
State-conditioning is the key methodological device allowing generalized Transfer Entropy to portray, in an intuitively appealing way, the causal influences and information-routing modalities enabled by different dynamical regimes.
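A minimal sketch of such state-conditioning is given below, assuming that each time point has already been assigned a discrete collective-state label (e.g. bursting vs. non-bursting, or locked vs. unlocked epochs). It uses a coarse signal discretization and a first-order plug-in estimator purely for brevity; it is not the implementation used in the reviewed studies [4, 55], which involves additional choices of lags, history embeddings and conditioning levels.

```python
import numpy as np
from collections import Counter

def state_conditioned_te(x, y, state_labels, which_state, n_bins=4):
    """Sketch of a state-conditioned ("generalized") Transfer Entropy
    TE(x -> y), restricted to time points whose collective-state label equals
    `which_state`. Signals are coarsely discretized and a first-order
    (single-sample history) plug-in estimator is used."""
    def quantize(signal):
        # discretize a continuous signal into n_bins equally populated levels
        edges = np.quantile(signal, np.linspace(0.0, 1.0, n_bins + 1))[1:-1]
        return np.digitize(signal, edges)

    xq, yq = quantize(x), quantize(y)
    # keep only the samples belonging to the selected dynamical state
    idx = [t for t in range(len(xq) - 1) if state_labels[t] == which_state]

    triples = Counter((yq[t + 1], yq[t], xq[t]) for t in idx)   # (y_next, y, x)
    pairs_yy = Counter((yq[t + 1], yq[t]) for t in idx)         # (y_next, y)
    pairs_yx = Counter((yq[t], xq[t]) for t in idx)             # (y, x)
    singles_y = Counter(yq[t] for t in idx)                     # (y)
    n = float(len(idx))

    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_y1_given_yx = c / pairs_yx[(y0, x0)]
        p_y1_given_y = pairs_yy[(y1, y0)] / singles_y[y0]
        te += p_joint * np.log2(p_y1_given_yx / p_y1_given_y)
    return te
```

Computing this quantity separately for each state label, and for both directions between each pair of recorded signals, yields one directed functional network per dynamical state, which is the essence of the state-resolved analysis discussed above.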
Nevertheless, functional connectivity patterns are known to be strongly determined
by structure. A clear example is provided by resting-state functional connectivity
[26], which can largely be understood in terms of noise-driven fluctuations of the
spontaneous dynamics of thalamocortical macroscale structures [18, 35, 42]. In the
examples here considered, structure was fixed a priori. However, in nature (or in
the dish) networks are shaped by spontaneous growth, learning and, on longer time-scales, evolution. Which optimization goal is this self-organized design then trying to achieve? A possible answer is the maximization of functional multiplicity, guaranteeing a high degree of functional flexibility by generating as rich a repertoire of possible dynamics as possible [18, 35]. Thus, it might well be that Louis Sullivan's motto applies to the description of brain circuits as well, even if the structure-to-function relation is only indirect and can be understood only through a detour involving nonlinear dynamics. As a matter of fact, for evolution or development, the problem of engineering a circuit implementing a given set of functions could be nothing else than the design of structural networks acting as emergent “functional collectivities” [27] endowed with suitable dynamical regimes.
An advantageous feature allowing a dynamical network to transit fluently between dynamical regimes would be criticality [13]. Switching would indeed be highly facilitated for a system tuned to lie close to the edge between multiple dynamic attractors. This is indeed the case for neuronal cultures, which undergo spontaneous switching to bursting due to their proximity to a rate instability (compensated for by synaptic resource depletion). Beyond that, networks at the edge of synchrony might undergo noise-induced switching between a baseline of essentially asynchronous activity and phase-locked transients with elevated local and inter-areal oscillatory coherence. In networks critically tuned to sit at the edge of synchrony, specific patterns of directed functional interactions associated with a latent
phase-locked attractor (which would become manifest only for fully developed synchrony) might be “switched on” just through the application of weak biasing inputs stabilizing its metastable strong-noise “ghost” [19].
Acknowledgements The research framework reviewed here would not have been developed without the invaluable contributions of many colleagues and students. Credit for these and other related results must be shared with (in alphabetical order): Ahmed El Hady, Theo Geisel, Christoph Kirst,
Erik Martens, Andreas Neef, Agostina Palmigiano, Javier Orlandi, Jordi Soriano, Olav Stetter,
Marc Timme, Annette Witt, Fred Wolf. I am also grateful to Dante Chialvo, Gustavo Deco and
Viktor Jirsa for inspiring discussions.
References
1. de Arcangelis L, Perrone-Capano C, Herrmann HJ (2006) Self-organized criticality model for
brain plasticity. Phys Rev Lett 96:028107
2. Battaglia D, Brunel N, Hansel D (2007) Temporal decorrelation of collective oscillations in
neural networks with local inhibition and long-range excitation. Phys Rev Lett 99:238106
3. Battaglia D, Hansel D (2011) Synchronous chaos and broad band gamma rhythm in a minimal
multi-layer model of primary visual cortex. PLoS Comp Biol 7:e1002176
4. Battaglia D, Witt A, Wolf F, Geisel T (2012) Dynamic effective connectivity of inter-areal
brain circuits. PLoS Comp Biol 8:e1002438
5. Beggs J, Plenz D (2003) Neuronal avalanches in neocortical circuits. Journal of Neuroscience
23:11167–11177
6. Bosman CA, Schoffelen J-M, Brunet N, Oostenveld R, Bastos AM et al. (2012) Attentional
stimulus selection through selective synchronization between monkey visual areas. Neuron
75:875–888
7. Bressler SL, Seth AK (2011) Wiener-Granger causality: a well established methodology. NeuroImage 58:323–329
8. Brovelli A, Ding M, Ledberg A, Chen Y, Nakamura R, Bressler SL (2004) Beta oscillations
in a large-scale sensorimotor cortical network: directional influences revealed by Granger
causality. Proc Natl Acad Sci USA 101:9849–9854
9. Brunel N, Wang XJ (2003) What determines the frequency of fast network oscillations with
irregular neural discharges? J Neurophysiol 90: 415–430
10. Brunel N, Hansel D (2006) How noise affects the synchronization properties of recurrent
networks of inhibitory neurons. Neural Comput 18: 1066–1110
11. Brunel N, Hakim V (2008) Sparsely synchronized neuronal oscillations. Chaos 18: 015113
12. Buehlmann A, Deco G (2010) Optimal information transfer in the cortex through synchronization. PLoS Comput Biol 6(9): e1000934
13. Chialvo DR (2010) Emergent complex neural dynamics. Nat Phys 6:744–750
14. Cohen E, Ivenshitz M, Amor-Baroukh V, Greenberger V, Segal M (2008) Determinants of
spontaneous activity in networks of cultured hippocampus. Brain Res 1235: 21–30
15. Dayan P, Abbott L (2001) Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. Cambridge, MA: MIT Press
16. Deco G, Romo R (2008) The role of fluctuations in perception. Trends Neurosci 31: 591–8
17. Deco G, Rolls ET, Romo R (2009) Stochastic dynamics as a principle of brain function. Prog
Neurobiol 88: 1–16
18. Deco G, Jirsa VK, McIntosh R (2011) Emerging concepts for the dynamical organization of
resting-state activity in the brain. Nat Rev Neurosci 12: 43–56
19. Deco G, Jirsa VK (2012) Ongoing cortical activity at rest: criticality, multistability, and ghost
attractors. Journal of Neuroscience 32:3366–3375
20. Ding M, Chen Y, Bressler SL (2006) Granger causality: basic theory and application to neuroscience. In: Schelter B, Winterhalder M and Timmer J (eds) Handbook of time series analysis. Wiley, New York
21. Ditzinger T, Haken H (1989) Oscillations in the perception of ambiguous patterns: a model
based on synergetics. Biol Cybern 61: 279–287
22. Eckhorn R, Bauer R, Jordan W, Brosch M, Kruse W, Munk M, Reitboeck HJ (1988) Coherent oscillations: a mechanism of feature linking in the visual cortex? Multiple electrode and
correlation analyses in the cat. Biol Cybern 60:121–130
23. Eckmann JP, Feinerman O, Gruendlinger L, Moses E, Soriano J, et al. (2007) The physics of
living neural networks. Physics Reports 449: 54–76
24. Engel AK, Fries P, Singer W (2001) Dynamic predictions: oscillations and synchrony in top-down processing. Nat Rev Neurosci 2: 704–716
25. Eytan D, Marom S (2006) Dynamics and effective topology underlying synchronization in
networks of cortical neurons. J Neurosci 26: 8465–8476
26. Fox MD, Snyder AZ, Vincent JL, Corbetta M, Van Essen DC et al. (2005) The human brain is
intrinsically organized into dynamic, anticorrelated functional networks. Proc Natl Acad Sci
USA 102:9673–9678
27. Fraiman D, Balenzuela P, Foss J, Chialvo DR (2009) Ising-like dynamics in large-scale functional brain networks. Phys Rev E Stat Nonlin Soft Matter Phys 79:061922
28. Freyer F, Roberts JA, Becker R, Robinson PA, Ritter P et al. (2011) Biophysical mechanisms
of multistability in resting-state cortical rhythms. J Neurosci 31: 6353–6361
29. Fries P (2005) A mechanism for cognitive dynamics: neuronal communication through neuronal coherence. Trends Cogn Sci 9: 474–480
30. Fries P, Nikolić D, Singer W (2007) The gamma cycle. Trends Neurosci 30: 309-16
31. Fries P, Womelsdorf T, Oostenveld R, Desimone R (2008) The effects of visual stimulation
and selective visual attention on rhythmic neuronal synchronization in macaque area V4. J
Neurosci 28: 4823–4835
32. Friston KJ (1994) Functional and Effective Connectivity in Neuroimaging: A Synthesis. Human Brain Mapping 2:56–78
33. Friston KJ (2011) Functional and Effective Connectivity: A Review. Brain Connectivity 1:13–
36
34. Garofalo M, Nieus T, Massobrio P, Martinoia S (2009) Evaluation of the performance of information theory-based methods and cross-correlation to estimate the functional connectivity
in cortical networks. PLoS One 4: e6482
35. Ghosh A, Rho Y, McIntosh AR, Kötter R, Jirsa VK (2008) Noise during rest enables the
exploration of the brain’s dynamic repertoire. PLoS Comp Biol 4: e1000196.
36. Gourévitch B, Bouquin-Jeannès RL, Faucon G (2006) Linear and nonlinear causality between
signals: methods, examples and neurophysiological applications. Biol Cybern 95:349–369
37. Granger CWJ (1969) Investigating causal relations by econometric models and cross-spectral
methods. Econometrica 37: 424–438
38. Gregoriou GG, Gotts SJ, Zhou H, Desimone R (2009) High-frequency, long-range coupling
between prefrontal and visual cortex during attention. Science 324: 1207–1210
39. Grienberger C, Konnerth A (2012) Imaging Calcium in Neurons. Neuron 73: 862–885
40. Haken H, Kelso JA, Bunz H (1985) A theoretical model of phase transitions in human hand
movements. Biol Cybern 51: 347–56
41. Hlavackova-Schindler K, Palus M, Vejmelka M, Bhattacharya J (2007) Causality detection
based on information-theoretic approaches in time series analysis. Phys Rep 441:1–46
42. Honey CJ, Kötter R, Breakspear M, Sporns O (2007) Network structure of cerebral cortex
shapes functional connectivity on multiple time scales. Proc Natl Acad Sci USA 104:10240–
10245
43. Ito S, Hansen ME, Heiland R, Lumsdaine A, Litke AM, Beggs JM (2011) Extending transfer
entropy improves identification of effective connectivity in a spiking cortical network model.
PLoS ONE 6:e27431
44. Jacobi S, Soriano J, Segal M, Moses E (2009) BDNF and NT-3 increase excitatory input
connectivity in rat hippocampal cultures. Eur J Neurosci 30: 998–1010
45. Levina A, Herrmann JM, Geisel T (2007) Dynamical synapses causing self-organized criticality in neural networks. Nat Phys 3:857–860
46. Levina A, Herrmann JM, Geisel T (2009) Phase Transitions towards Criticality in a Neural
System with Adaptive Interactions. Phys Rev Lett 102:118110
47. Misic B, Mills T, Taylor MJ, McIntosh AR (2010) Brain noise is task-dependent and region
specific. J Neurophysiol 104: 2667–2676
48. Moreno-Bote R, Rinzel J, Rubin N (2007) Noise-induced alternations in an attractor network
model of perceptual bistability. J Neurophysiol 98: 1125–39
49. Politis DN, Romano JP (1994) Limit theorems for weakly dependent Hilbert space valued
random variables with applications to the stationary bootstrap. Statistica Sinica 4: 461–476
50. Salazar RF, Dotson NM, Bressler SL, Gray CM (2012) Content-specific fronto-parietal synchronization during visual working memory. Science 338:1097–1100
51. Schreiber T (2000) Measuring information transfer. Phys Rev Lett 85: 461–464
52. Seamans JK, Yang CR (2004) The principal features and mechanisms of dopamine modulation in the prefrontal cortex. Prog Neurobiol 74: 1–58
53. Soriano J, Martinez MR, Tlusty T, Moses E (2008) Development of input connections in
neural cultures. Proc Natl Acad Sci USA 105: 13758–13763
54. Sporns O, Kötter R (2004) Motifs in brain networks. PLoS Biol 2: e369
55. Stetter O, Battaglia D, Soriano J, Geisel T (2012) Model-free reconstruction of excitatory
neuronal connectivity from calcium imaging signals. PLoS Comp Biol 8:e1002653
56. Strong SP, Koberle R, de Ruyter van Steveninck RR, Bialek W (1998) Entropy and information in neural spike trains. Phys Rev Lett 80: 197–200
57. Tsodyks M, Uziel A, Markram H (2000) Synchrony generation in recurrent networks with
frequency-dependent synapses. J Neurosci 20: 1–5
58. Varela F, Lachaux JP, Rodriguez E, Martinerie J (2001) The brainweb: Phase synchronization
and large-scale integration. Nat Rev Neurosci 2: 229–239
59. Vogelstein JT, Watson BO, Packer AM, Yuste R, Jedynak B, et al. (2009) Spike inference
from calcium imaging using sequential Monte Carlo methods. Biophys J 97: 636–655
60. Volgushev M, Chistiakova M, Singer W (1998) Modification of discharge patterns of neocortical neurons by induced oscillations of the membrane potential. Neuroscience 83: 15–25
61. Wagenaar DA, Pine J, Potter SM (2006) An extremely rich repertoire of bursting patterns
during the development of cortical cultures. BMC Neuroscience 7: 1–18
62. Wang XJ, Buzsáki G (1996) Gamma oscillation by synaptic inhibition in a hippocampal interneuronal network model. J Neurosci 16: 6402–6413
63. Wang XJ (2010) Neurophysiological and computational principles of cortical rhythms in cognition. Physiol Rev 90: 1195–1268
64. Whittington MA, Traub RD, Kopell N, Ermentrout B, Buhl EH (2000) Inhibition-based
rhythms: experimental and mathematical observations on network dynamics. Int J Psychophysiol 38:315–336
65. Wiener N (1956) The theory of prediction. In: Beckenbach E (ed), Modern Mathematics for
Engineers. McGraw-Hill, New York
66. Witt A, Neef A, El Hady A, Wolf F, Battaglia D (2013) Controlling oscillation phase through
precisely timed closed-loop optogenetic stimulation: a computational study, in revision
67. Womelsdorf T, Lima B, Vinck M, Oostenveld R, Singer W et al. (2012) Orientation selectivity
and noise correlation in awake monkey area V1 are modulated by the γ cycle. Proc Natl Acad
Sci USA 109:4302–4307
68. Yizhar O, Fenno LE, Davidson TJ, Mogri M, Deisseroth K (2011) Optogenetics in neural
systems. Neuron 71:9–34