Title: A Generalized Rate Model for Neuronal Ensembles (fulltext)
Author(s): HASEGAWA, Hideo
Citation: Bulletin of Tokyo Gakugei University, Natural Sciences (東京学芸大学紀要. 自然科学系), 58: 75-93
Issue Date: 2006-09-00
URL: http://hdl.handle.net/2309/63441
Publisher: Committee of Publication of Bulletin of Tokyo Gakugei University
Bulletin of Tokyo Gakugei University, Natural Sciences, 58, pp.75∼93, 2006
A Generalized Rate Model for Neuronal Ensembles
Hideo HASEGAWA*
Department of Physics
(Received for Publication; May 26, 2006)
HASEGAWA, H.: A Generalized Rate Model for Neuronal Ensembles. Bull. Tokyo Gakugei Univ. Natur. Sci., 58: 75–93 (2006)
ISSN 1880–4330
Abstract
There has been a long-standing controversy over whether information in neuronal networks is carried by a firing-rate code or by a firing temporal code. The current status of the rivalry between the two codes is briefly reviewed in the light of recent experimental and theoretical studies. We then propose a generalized rate model based on a finite N-unit Langevin model subjected to additive and/or multiplicative noises, in order to understand the firing property of a cluster containing N neurons. The stationary property of the rate model has been studied with the use of the Fokker-Planck equation (FPE) method. Our rate model is shown to yield various kinds of stationary distributions, such as interspike-interval distributions expressed by non-Gaussians including gamma, inverse-Gaussian-like and log-normal-like distributions.
The dynamical property of the generalized rate model has been studied with the use of the augmented moment method (AMM), which was developed by the author [H. Hasegawa, J. Phys. Soc. Jpn. 75 (2006) 033001]. From the macroscopic point of view of the AMM, the property of the N-unit neuron cluster is expressed in terms of three quantities: µ, the mean of the spiking rate R = (1/N)∑i ri, where ri denotes the firing rate of a neuron i in the cluster; γ, the averaged fluctuations in the local variables (ri); and ρ, the fluctuations in the global variable (R). We obtain equations of motion for the three quantities, which show ρ ∼ γ/N for weak couplings. This implies that the population rate code is generally more reliable than the single-neuron rate code. Dynamical responses of the neuron cluster to pulse and sinusoidal inputs have been calculated.
Our rate model is extended and applied to an ensemble containing multiple neuron clusters. In particular, we have studied the property of a generalized Wilson-Cowan model for an ensemble consisting of excitatory and inhibitory clusters.
Key words: neuron ensemble, rate code, temporal code, rate model, Wilson-Cowan model
Department of Physics, Tokyo Gakugei University, Koganei-shi, Tokyo 184-8501, Japan.
1 Introduction
The human brain contains more than 10^10 neurons [1]. Neurons communicate information by emitting short voltage pulses called spikes, which propagate through axons and dendrites to neurons at the next stage. It has been a long-standing controversy how neurons communicate information by firings or spikes [2]-[6]. The issue of the neural code is whether information is encoded in the rate of firings of neurons (rate code) or in the more precise firing times (temporal code). The rate code was first recognized by Adrian [7], who noted that neural firing was increased with increasing stimulus intensity.
* Electronic address: [email protected]
The firing rate r(t) of a neuron is defined by

(1)

where tw denotes the time window, n(t, t + tw) the number of firings between t and t + tw, and tk the kth firing time. It has been widely reported that firing activities of motor and sensory neurons vary in response to applied stimuli. In the temporal code, on the contrary, temporal information like the inter-spike interval (ISI) defined by

Tk = tk+1 − tk , (2)

and its distribution π(T), the ISI histogram (ISIH), are considered to play an important role in the neural activity. More sophisticated methods such as the auto- and cross-correlograms and the joint peri-stimulus time histogram (JPSTH) have also been employed for the analysis of correlated firings based on the temporal-code hypothesis.
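Equation (1) is presumably the conventional time-windowed rate,
\[ r(t) = \frac{n(t,\, t + t_w)}{t_w}, \]
where n(t, t + t_w) counts the spikes t_k falling in the window [t, t + t_w].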
There have been accumulated experimental evidences which seem to indicate a use of the temporal code in the brain [2][8][10]. A fly can react to new stimuli and change the direction of flight within 30-40 ms [2]. In primary visual cortices of mouse and cat, repetitions of spikes with millisecond accuracy have been observed [8]. Humans can recognize visual scenes within a few hundred milliseconds, even though recognition involves several processing steps [9]. Humans can detect images in a sequence of unrelated pictures even if each image is shown for only 14-100 milliseconds [10].

The other issue on the neural code is whether information is encoded in the activity of a single (or very few) neuron or in that of a large number of neurons (population code) [11][12]. The classical population code is employed in a number of systems such as motor neurons [13], the place cells in the hippocampus [14] and neurons in middle temporal (MT) areas [15]. Neurons in the motor cortex of the monkey, for example, encode the direction of reaching movements of the arm. Information on the direction of the arm movement is decoded from a set of neuronal firing rates by summing the preferred direction vectors weighted by the firing rate of each neuron in the neural population [13]. Recently, a probabilistic population code based on the Bayesian statistics has been proposed [16]. It has been considered that the rate code needs a time window of a few hundred milliseconds to accumulate information on the firing rate, and then its information capacity and speed are limited compared to the temporal code. This is, however, not true when the rate is averaged over the ensemble (population rate code). The population average promotes the response, suppressing the effects of neuronal variability [17][18]. The population rate code is expected to be a useful coding method in many areas of the brain. Indeed, in recent brain-machine interfaces (BMI) [19][20][21], the rate-code information on arm position and velocity of a monkey is collected by multi-electrodes and processed in the computer. It is surprising that such systems successfully manipulate the movement of an artificial hand. Feedback signals in the form of pressure on the animal's skin may be sent back to the brain. The success of BMI strongly suggests that the population rate code is employed in sensory and motor neurons, while it is still not clear which code is adopted in higher-level cortical neurons.

The microscopic, conductance-based mechanism of firings of neurons is fairly well understood. The dynamics of the membrane potential Vi of the neuron i in a neuron cluster is expressed by the Hodgkin-Huxley-type model given by [22]

(3)

Here C expresses the capacitance of a cell; Vr is the recovery voltage; gn(Vi, αn) denotes the conductance for the n ion channel (n = Na, K, etc.), which depends on Vi and on αn, the gate function for the channel n; Ii is the input arising from couplings with other neurons and an external input. The dynamics of αn is expressed by a first-order differential equation. Since it is difficult to solve the nonlinear differential equations given by Eq. (3), reduced, simplified neuron models such as the integrate-and-fire (IF), the FitzHugh-Nagumo (FN) and the Hindmarsh-Rose (HR) models have been employed. In the simplest IF model with a constant conductance, Eq. (3) reduces to a linear differential equation, which requires an artificial reset of Vi at the threshold θ.

It is not easy to analytically obtain the rate r(t) or the ISI T(t) from the spiking neuron model given by Eq. (3). There are two approaches for extracting the rate from a spiking neuron model. In the first approach, the Fokker-Planck equation (FPE) is applied to the IF model to calculate the probability p(V, t), from which the rate is given by r(t) = p(θ, t). In order to avoid the difficulty of the range of the variable, −∞ < V ≤ θ, in the original IF model, sophisticated quadratic and exponential IF models, in which the range of the variable is −∞ < V ≤ ∞, have been
proposed [23]. In the second approach, the rate model is derived from the conductance-based model with synapses by using the f–I relation between the applied dc current I and the frequency f of autonomous firings [24][25][26]. It has been shown that the conductance-based model may lead to the rate model if the network state does not possess a high degree of synchrony [26].

In the rate-code hypothesis, a neuron is regarded as a black box receiving and emitting signals expressed by the firing rate. The dynamics of the rate ri(t) of the neuron i in a cluster is expressed by

(4)

where τ denotes the relaxation time, wij the coupling between neurons i and j, and Ii an external input. The gain function K(x) is usually given by a sigmoid function or by the f–I relation mentioned above. One of the disadvantages of the rate model is that its mechanism is not well supported biologically. Nevertheless, the rate model has been adopted for studies of many subjects in the brain. The typical rate model is the Wilson-Cowan (WC) model, with which the stability of a cluster consisting of excitatory and inhibitory neurons is investigated [27][28]. The rate model given by Eq. (4) with K(x) = x is the Hopfield model [29], which has been extensively employed for studies of memory in the brain, incorporating the plasticity of synapses into wij.

It is well known that neurons in brains are subjected to various kinds of noises, though their precise origins are not well understood. The response of neurons to stimuli is modified by noises in various ways. Although firings of a single in vitro neuron are reported to be precise and reliable [30], those of in vivo neurons are quite unreliable, which is expected to be due to the noisy environment. The strong criticism against the temporal code is that it is vulnerable to noises, while the rate code is robust against them.

Noises may, however, be beneficial for signal transmission in the brain, contrary to our intuition. The most famous phenomenon is stochastic resonance [31], in which the signal-to-noise ratio of subthreshold signals is improved by noises. It has been shown that noise is essential for a rate-code signal to propagate through multilayers described by the IF model: otherwise firings tend to synchronize, by which the rate-code signal is deteriorated [32][33]. A recent study using the HH model has shown that firing-rate signals propagate through the multilayer with synchrony [34].

It is theoretically supposed that there are two types of noises: additive and multiplicative noises. The magnitude of the former is independent of the state of the variable, while that of the latter depends on its state. Interesting phenomena caused by the two noises have been investigated [35]. It has been realized that the property of multiplicative noises is different from that of additive noises in some respects. (1) Multiplicative noises induce a phase transition, creating an ordered state, while additive noises act against the ordering [36][37]. (2) Although the probability distribution in stochastic systems subjected to additive Gaussian noise follows the Gaussian, this is not the case for multiplicative Gaussian noises, which generally yield non-Gaussian distributions [38]-[43]. (3) The scaling relation of the effective strength for additive noise, β(N) = β(1)/√N, is not applicable to that for multiplicative noise: α(N) ≠ α(1)/√N, where α(N) and β(N) denote the effective strengths of multiplicative and additive noises, respectively, in the N-unit system [44]. A naive approximation of the scaling relation for multiplicative noise, α(N) = α(1)/√N, as adopted in Ref. [36], yields a result which does not agree with that of direct simulation (DS) [44].

Formally, noise can be introduced to the spiking and rate models by adding a fluctuating noise term on the right-hand side of Eqs. (3) and (4), respectively. ISIHs obtained from neuronal systems have been analyzed by using various methods (for a recent review, see Refs. [45, 46]). It has been tried to fit experimental data by a superposition of some known probability densities such as the gamma, inverse-Gaussian and log-normal distributions. The gamma distribution with parameters λ and µ is given by

(5)

which is derived from a simple stochastic integrate-and-fire (IF) model with additive noises for Poisson inputs [47], Γ(x) being the gamma function. The inverse Gaussian distribution with parameters λ and µ, given by

(6)

is obtained from a stochastic IF model in which the membrane potential is represented as a random walk with drift [48]. The log-normal distribution with parameters µ and σ, given by

(7)
is adopted when the log of ISI is assumed to follow the
Gaussian [49].
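For reference, the standard forms of these three densities are shown below (a conventional parametrization is assumed here; Eqs. (5)-(7) may arrange λ, µ and σ slightly differently):
\[ \pi_{\Gamma}(T) = \frac{\lambda^{\mu}\, T^{\mu-1}\, e^{-\lambda T}}{\Gamma(\mu)}, \qquad
\pi_{\mathrm{IG}}(T) = \sqrt{\frac{\lambda}{2\pi T^{3}}}\; \exp\!\left[-\frac{\lambda (T-\mu)^2}{2\mu^{2} T}\right], \qquad
\pi_{\mathrm{LN}}(T) = \frac{1}{\sqrt{2\pi}\,\sigma T}\; \exp\!\left[-\frac{(\ln T - \mu)^2}{2\sigma^{2}}\right]. \]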
Much study has been made on spiking neuron models for coupled ensembles with the use of two approaches: direct simulations (DSs) and analytical approaches such as the FPE and the moment method. DSs have been performed for large-scale networks mostly consisting of IF neurons. Since the time to simulate networks by conventional methods grows as N^2 with N, the size of the network, it is rather difficult to simulate realistic neuron clusters. In the FPE, the dynamics of neuron ensembles is described by the population activity. Although the FPE is a powerful method which is formally applicable to arbitrary N, actual calculations have been made for N = ∞ with the mean-field and diffusion approximations. Quite recently, the population density method has been developed as a tool for modeling large-scale neuronal clusters [50][51]. As a useful semi-analytical method for stochastic neural models, the moment method was proposed [52]. For example, when the moment method is applied to the N-unit FN model, the original 2N-dimensional stochastic equations are transformed to
N(2N + 3)-dimensional deterministic equations. This figure becomes 230, 20300 and 2 003 000 for N = 10, 100 and 1000, respectively.

In many areas of the brain, neurons are organized into groups of cells such as columns in the visual cortex [53]. Experimental findings have shown that within small clusters consisting of a finite number of neurons (∼ 10 - 1000), there exist many cells with very nearly identical responses to identical stimuli [53]. The analytical, statistical methods developed so far are mean-field-type theories which may deal with infinite-size systems, but not with finite-size ones. Based on a macroscopic point of view, Hasegawa [54] has proposed the augmented moment method (AMM), which emphasizes not the property of individual neurons but rather that of ensemble neurons. In the AMM, the state of a finite N-unit stochastic ensemble is described by a fairly small number of variables: averages and fluctuations of local and global variables. For N-unit FN neuron ensembles, for example, the number of deterministic equations in the AMM becomes eight, independent of N. This figure of eight is much smaller than those in the moment method mentioned above. The AMM has been successfully applied to studies on the dynamics of the Langevin model and of stochastic spiking models such as FN and HH models, with global, local or small-world couplings (with and without transmission delays) [55]-[59].

The AMM in Ref. [54] was originally developed by expanding variables around their stable mean values in order to obtain the second-order moments both for local and global variables in stochastic systems. In recent papers [44][60], we have reformulated the AMM with the use of the FPE to discuss stochastic systems subjected to multiplicative noise: the FPE is adopted to avoid the difficulty due to the Ito versus Stratonovich calculus inherent to multiplicative noise [61].

Figure 1: Response of a 10-unit FN neuron cluster subjected to additive and multiplicative noises; (a) input signal I(t), (b) a local membrane potential [v(t)] and (c) a global membrane potential [V(t) = (1/N)∑i vi(t)] obtained by direct simulation (DS) with a single trial: (d) a global membrane potential [V(t)] calculated by DS with 100 trials: (e) the result of the AMM [µ(t)].

An example of calculations with the use of the AMM for an FN neuron cluster subjected to additive and multiplicative noises is presented in Figs.1(a)-(e) [62]. When the input pulses shown in Fig.1(a) are applied to the 10-unit FN neuron cluster, the membrane potential vi(t) of a given neuron depicted in Fig.1(b) is obtained by DS with a single trial. It has much irregularity because of the added noises. When we take the ensemble-averaged potential given by V(t) = (1/N)∑i vi(t), the irregularity is reduced as shown in Fig.1(c). This is one of the advantages of the population code [17]. The results shown in Figs.1(b) and 1(c) are obtained by a single trial. When we have repeated the DS and taken the average over 100 trials, the irregularity in V(t) is further reduced, as shown in Fig.1(d). The result of the AMM, µ(t), plotted in Fig.1(e) is in good agreement with that of DS in Fig.1(d).

The purpose of the present paper is twofold: to propose a generalized rate model for neuron ensembles and to study its property with the use of the FPE and the AMM. The paper is
organized as follows. In Sec.2, we discuss the generalized rate model for a single cluster containing N neurons, investigating its stationary and dynamical properties. In Sec.3, our rate model is extended and applied to an ensemble containing multiple neuron clusters. In particular, we study the two-cluster ensemble consisting of excitatory and inhibitory clusters. The final Sec.4 is devoted to discussion and conclusion.

2 Single neuron clusters

2. 1 Generalized rate model

We consider a neuronal cluster consisting of N neurons. The dynamics of the firing rate ri (≥ 0) of a neuron i is assumed to be described by the Langevin model given by

(8)

with

(9)

where F(x), G(x) and H(x) are arbitrary functions of x; Z (= N − 1) denotes the coordination number; w is the coupling strength; Ii(t) expresses an input from external sources; α and β are the strengths of multiplicative and additive noises, respectively, given by ηi(t) and ξi(t) expressing zero-mean Gaussian white noises with correlations given by

⟨ηi(t) ηj(t′)⟩ = δij δ(t − t′), (10)

⟨ξi(t) ξj(t′)⟩ = δij δ(t − t′), (11)

⟨ηi(t) ξj(t′)⟩ = 0. (12)
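A concrete form consistent with this description (an assumption here, modeled on the author's related Langevin-model papers [44][60] rather than quoted from Eqs. (8) and (9)) is
\[ \frac{dr_i}{dt} = F(r_i) + H(u_i) + \alpha\, G(r_i)\, \eta_i(t) + \beta\, \xi_i(t), \qquad
u_i = \frac{w}{Z} \sum_{j (\neq i)} r_j + I_i(t). \]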
The rate models proposed so far have employed F(x) = −λx and G(x) = 0 (no multiplicative noises). In this paper, we will adopt several functional forms for F(x) and G(x). As for the gain function H(x), two types of expressions have been adopted. In the first category, a sigmoid function such as H(x) = tanh(x), 1/(1 + e^{−x}), arctan(x), etc. has been adopted. In the second category, H(x) is given by the f–I function as H(x) = (x − xc)Θ(x − xc), which expresses the frequency f of autonomous oscillation of a neuron for an applied dc current I, xc denoting the critical value and Θ(x) the Heaviside function: Θ(x) = 1 for x ≥ 0 and 0 otherwise. It has been theoretically shown in Ref. [63] that when spike inputs with the mean ISI (Ti) are applied to an HH neuron, the mean ISI of output signals (To) is To = Ti for Ti ≲ 15 ms and To ∼ 15 ms for Ti > 15 ms. This is consistent with a recent calculation for HH neuron multilayers, which shows a nearly linear relationship between the input (ri) and output rates (ro) for ri < 60 Hz [34]. It is interesting that the ri–ro relation is continuous despite the fact that the HH neuron has the discontinuous (type-II) f–I relation. We will adopt, in this paper, a simple expression given by [64]

(13)

although our result is valid for an arbitrary form of H(x). The nonlinear, saturating behavior in H(x) arises from the property of the refractory period (τr), where spike outputs are prevented for tf < t < tf + τr after firing at t = tf.

2. 2 Stationary property

2. 2. 1 Distribution of r

The Fokker-Planck equation for the distribution p({ri}, t) is given by [65]

(14)

where G′(x) = dG(x)/dx, and φ = 1 and 0 in the Stratonovich and Ito representations, respectively.

The stationary distribution p(x) for w = 0 and Ii(b) + Ii(ex)(t) = I is given by

(15)

with

(16)

(17)

Hereafter we mainly adopt the Stratonovich representation.

Case I: F(x) = −λx and G(x) = x

For the linear Langevin model, we get

(18)

with

(19)
where H = H(I). In the case of H = Y(r) = 0, we get the q-Gaussian given by [41, 42]
(20)
with
(21)
(22)
We examine some limiting cases of Eq. (20) as follows.
(A) For α = 0 and β ≠ 0, Eq. (20) becomes
(23)
(B) For β = 0 and α ≠ 0, Eq. (20) becomes
(24)
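As a sketch of where Eqs. (15)-(24) come from, assume the generic zero-flux stationary solution of the one-dimensional FPE for the model above,
\[ p(r) \propto \big[\alpha^{2} G(r)^{2} + \beta^{2}\big]^{\phi/2 - 1} \exp\!\left[ \int^{r} \frac{2\,\{F(y) + H\}}{\alpha^{2} G(y)^{2} + \beta^{2}}\; dy \right]. \]
For Case I (F(x) = −λx, G(x) = x, H = 0) in the Stratonovich representation (φ = 1), this integrates to the q-Gaussian
\[ p(r) \propto \left[ 1 + \frac{\alpha^{2}}{\beta^{2}}\, r^{2} \right]^{-(\lambda/\alpha^{2} + 1/2)}, \]
which reduces to the Gaussian p(r) ∝ exp(−λr²/β²) for α → 0 and to the power law p(r) ∝ r^{−(2λ/α² + 1)} for β → 0, consistent with the limiting cases (A) and (B) above.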
Distributions p(x) calculated with the use of Eqs. (20)-(24) are plotted in Figs.2(a)-2(c). The distribution p(x) for α = 0.0 (without multiplicative noises) in Fig.2(a) shows the Gaussian distribution, which is shifted by the applied input I = 0.1. When multiplicative noises are added (α ≠ 0), the form of p(x) changes to the q-Gaussian given by Eq. (20). Figure 2(b) shows that when the magnitude of additive noises β is increased, the width of p(r) is increased. Figure 2(c) shows that when the magnitude of the external input I is increased, p(x) is much shifted and widely spread. Note that for α = 0.0 (no multiplicative noises), p(r) is simply shifted without a change in its shape when I is increased.

Figure 2: (a) Distributions p(r) of the (local) firing rate r for various α with λ = 1.0, β = 0.1, I = 0.1 and w = 0.0, (b) p(x) for various β with λ = 1.0, α = 1.0, I = 0.1 and w = 0.0, and (c) p(x) for various I with λ = 1.0, α = 0.5, β = 0.1 and w = 0.0.

Case II: F(x) = −λx^a and G(x) = x^b (a, b ≥ 0)

The special case of a = 1 and b = 1 has been discussed in the preceding Case I [Eqs. (20)-(24)]. For arbitrary a ≥ 0 and b ≥ 0, the probability distribution p(r) given by Eqs. (15)-(17) becomes

(25)

with

(26)

(27)

where F(a, b, c; z) is the hypergeometric function. Some limiting cases of Eqs. (25)-(27) are shown in the following.

(a) The case of H = Y(r) = 0 was previously studied in Ref. [42].

(b) For α = 0 and β ≠ 0, we get

(28)

(29)

(c) For β = 0 and α ≠ 0, we get

(30)

(31)

(32)
(33)

(d) In the case of a = 1 and b = 1/2, we get

(34)

which reduces, in the limit of α = 0, to

(35)

Case III: F(x) = −λ ln x and G(x) = x^{1/2} (x > 0)

We get

(36)

Figure 3: (a) Distributions p(r) of the (local) firing rate r and (b) π(T) of the ISI T for a = 0.8 (chain curves), a = 1.0 (solid curves), a = 1.5 (dotted curves) and a = 2.0 (dashed curves) with b = 1.0, λ = 1.0, α = 1.0, β = 0.0 and I = 0.1.

Figure 4: (a) Distributions p(r) of the (local) firing rate r and (b) π(T) of the ISI T for b = 0.5 (dashed curves), b = 1.0 (solid curves), b = 1.5 (dotted curves) and b = 2.0 (chain curves) with a = 1.0, λ = 1.0, α = 1.0, β = 0.0 and I = 0.1: results for b = 1.5 and b = 2 should be multiplied by factors of 2 and 5, respectively.

Figure 3(a) shows distributions p(r) for various a with fixed values of I = 0.1, b = 1.0, α = 1.0 and β = 0.0 (multiplicative noise only). With decreasing a, the peak of p(r) at r ∼ 0.1 becomes sharper. Figure 4(a) shows distributions p(r) for various b with fixed values of I = 0.1, a = 1.0, α = 1.0 and β = 0.0 (multiplicative noise only). We note that a change in the b value yields considerable changes in the shape of p(r). Figures 3(b) and 4(b) will be discussed shortly.

2. 2. 2 Distribution of T

When the temporal ISI T is simply defined by T = 1/r, its distribution π(T) is given by

(37)

For F(x) = −λx and G(x) = x and β = 0, Eq. (31) yields

(38)

which expresses the gamma distribution [see Eq. (5)] [40, 47]. For a = 2 and b = 1, Eq. (30) yields

(39)

which is similar to the inverse Gaussian distribution [see Eq. (6)] [48]. Equation (36) leads to

(40)

which is similar to the log-normal distribution [see Eq. (7)] [49]. Figures 3(b) and 4(b) show distributions of T, π(T), which are obtained from p(r) shown in Figs. 3(a) and 4(a),
respectively, by the change of variable [Eq. (37)]. Figure 3(b) shows that with increasing a, the peak of π(T) becomes sharper and moves left. We note in Fig.4(b) that the form of π(T) is significantly varied by changing b in G(x) = x^b.
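Since T = 1/r, the change of variable in Eq. (37) is presumably the standard one,
\[ \pi(T) = p(r)\left|\frac{dr}{dT}\right| = \frac{1}{T^{2}}\; p\!\left(\frac{1}{T}\right), \]
so that each p(r) in Figs. 3(a) and 4(a) maps directly onto the corresponding π(T) in Figs. 3(b) and 4(b).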
2. 2. 3 Distribution of R

When we consider the global variable R(t) defined by

(41)

the distribution P(R, t) for R is given by

(42)

Analytic expressions of P(R) are obtainable only for limited cases.

(a) For β ≠ 0 and α = 0, P(R) is given by

(43)

where H = H(I).

(b) For H = 0, we get [60]

(44)

with

(45)

where φ(k) is the characteristic function for p(x) given by [66]

(46)

(47)

with

(48)

(49)

Kν(x) expressing the modified Bessel function.

Figure 5: Distributions P(R) of the (global) firing rate R for (a) α = 0.0 and (b) α = 0.5, with N = 1, 10 and 100: λ = 1.0, β = 0.1, w = 0.0 and I = 0.1.

Figure 6: Distributions p(r) (dashed curves) and P(R) (solid curves) for (a) α = 0.0 and (b) α = 0.5 with I = 0.1 and I = 0.2: N = 10, λ = 1.0, β = 0.1 and w = 0.0.

Figure 7: Distributions p(r) (dashed curves) and P(R) (solid curves) for (a) α = 0.0 and (b) α = 0.5 with w = 0.0 and w = 0.5: N = 10, λ = 1.0, β = 0.1 and I = 0.1.
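In an assumed notation for Eqs. (42), (44) and (45) (not quoted from the paper), the global distribution and its characteristic-function representation for w = 0 read
\[ P(R,t) = \int \prod_{i=1}^{N} dr_i\; p(\{r_i\},t)\; \delta\!\Big(R - \frac{1}{N}\sum_{i} r_i\Big), \qquad
P(R) = \frac{N}{2\pi} \int dk\; e^{iNkR}\, [\phi(k)]^{N}, \]
with the single-neuron characteristic function φ(k) = ∫ dr p(r) e^{−ikr}.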
Some numerical examples of P(R) are plotted in Figs. 5, 6 and 7 [67]. Figures 5(a) and 5(b) show P(R) for α = 0.0 and α = 0.5, respectively, when N is changed. For α = 0.0, P(R) is the Gaussian whose width is narrowed by a factor of 1/√N with increasing N. In contrast, P(R) for α = 0.5 is non-Gaussian, whose shape seems to approach the Gaussian with increasing N. These results are consistent with the central-limit theorem.

Effects of an external input I on p(r) and P(R) are examined in Figs. 6(a) and 6(b). Figure 6(a) shows that in the case of α
= 0.0 (additive noise only), p(r) and P(R) are simply shifted by a change in I. This is not the case for α ≠ 0.0, for which p(r) and P(R) are shifted and widened with increasing I, as shown in Fig.6(b).

Figures 7(a) and 7(b) show the effects of the coupling w on p(r) and P(R). For α = 0.0, p(r) and P(R) are little changed with increasing w. On the contrary, for α = 0.5, the introduction of the coupling significantly modifies p(r) and P(R), as shown in Fig.7(b).
2. 3 Dynamical property

2. 3. 1 AMM

Next we will discuss the dynamical property of the rate model by using the AMM. Moments of local variables are defined by

(50)

Equations of motion of means, variances and covariances of local variables (ri) are given by [44]

(51)

(52)

Equations of motion of the mean, variance and covariance of global variables (R) are obtainable by using Eqs. (41), (51) and (52):

(53)

(54)

In the AMM [54], we define µ, γ and ρ given by

(55)

(56)

(57)

where µ expresses the mean, γ the averaged fluctuations in the local variables (ri), and ρ the fluctuations in the global variable (R).
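Written out explicitly (assuming the notation of the author's AMM papers [44][54] for Eqs. (55)-(57)), the three macroscopic quantities are
\[ \mu(t) = \langle R(t) \rangle, \qquad
\gamma(t) = \frac{1}{N} \sum_{i=1}^{N} \big\langle (r_i - \mu)^2 \big\rangle, \qquad
\rho(t) = \big\langle (R - \mu)^2 \big\rangle, \]
with the global variable R(t) = (1/N)∑i ri(t) of Eq. (41).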
Expanding ri in Eqs. (51)-(54) around the average value µ as

(58)

and retaining terms up to the order of ⟨δri δrj⟩, we get equations of motion for µ, γ and ρ given by

(59)

(60)

(61)

where

(62)

(63)

(64)

(65)

The original N-dimensional stochastic equations given by Eqs. (8), (9) and (13) are thus transformed to the three-dimensional deterministic equations given by Eqs. (59)-(61).

Before discussing the dynamical property, we study the stationary property of Eqs. (59)-(61). In order to make numerical calculations, we have adopted

(66)

(67)

where λ stands for the relaxation rate. Equations (59)-(61) are then expressed in the Stratonovich representation by

(68)

(69)

(70)

where h2 = −(3u/2)/(u + 1)^{5/2}. The stability of the stationary solution given by Eqs. (68)-(70) may be examined by calculating the eigenvalues of the Jacobian matrix, although the actual calculations are tedious.

Figure 8 shows the N dependence of γ and ρ in the stationary state for four sets of parameters: (α, β, w) = (0.0, 0.1, 0.0) (solid curves), (0.5, 0.1, 0.0) (dashed curves), (0.0, 0.1, 0.5) (chain curves) and (0.5, 0.1, 0.5) (double-chain curves), with β = 0.1, λ = 1.0 and I = 0.1. We note that for all the cases, ρ
is proportional to N⁻¹, which is easily seen from Eq. (70). In contrast, γ shows a weak N dependence for small N (< 10).

Figure 8: The N dependence of γ and ρ in the stationary states for four sets of parameters: (α, β, w) = (0.0, 0.1, 0.0) (solid curves), (0.5, 0.1, 0.0) (dashed curves), (0.0, 0.1, 0.5) (chain curves) and (0.5, 0.1, 0.5) (double-chain curves): λ = 1.0, N = 10 and I = 0.1.
2. 3. 2 Response to pulse inputs
We have studied the dynamical property of the rate model,
by applying a pulse input given by
(71)
with A = 0.5, t1 = 40, t2 = 50 and I (b) = 0.1 expressing the
background input, where Θ (x) denotes the Heaviside
function: Θ(x) = 1 for x ≥ 0 and 0 otherwise.
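The pulse described here presumably has the form
\[ I(t) = A\,\Theta(t - t_1)\,\Theta(t_2 - t) + I^{(b)}, \]
with A = 0.5, t_1 = 40, t_2 = 50 and the background input I^{(b)} = 0.1.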
Figure 9: Time courses of (a) µ(t), (b) γ(t), (c) ρ(t) and (d) S(t) for a pulse input I(t) given by Eq. (71) with λ = 1.0, α = 0.5, β = 0.1, N = 10 and w = 0.5, solid and chain curves denoting the results of the AMM and dashed curves expressing those of DS with 1000 trials.

Figures 9(a), 9(b) and 9(c) show the time dependence of µ, γ and ρ when the input pulse I(t) given by Eq. (71) is applied: solid and dashed curves show the results of the AMM and of DS averaged over 1000 trials, respectively, with α = 0.5, β = 1.0, N = 10 and w = 0.5 [67]. Figures 9(b) and 9(c) show that an applied input pulse induces changes in γ and ρ. This may be understood from the 2α² terms in Eqs. (69) and (70). The results of the AMM shown by solid curves in Figs.9(a)-(c) are in good agreement with the DS results shown by dashed curves. Figure 9(d) will be discussed in the following.

It is possible to discuss the synchrony in a neuronal cluster with the use of γ and ρ defined by Eqs. (56) and (57) [54]. In order to quantitatively discuss the synchronization, we first consider the quantity given by

(72)

When all neurons are in the completely synchronous state, we get ri(t) = R(t) for all i, and then P(t) = 0 in Eq. (72). On the contrary, we get P(t) = 2(1 − 1/N)γ ≡ P0(t) in the asynchronous state where ρ = γ/N [54]. We may define the synchronization ratio given by [54]

(73)

which is 0 and 1 for the completely asynchronous (P = P0) and synchronous states (P = 0), respectively. Figure 9(d) shows the synchronization ratio S(t) for γ(t) and ρ(t) plotted in Figs.9(b) and 9(c), respectively, with α = 0.5, β = 1.0, N = 10 and w = 0.5. The synchronization at t < 40 and t > 60 is 0.15, but it is decreased to 0.03 at 40 < t < 50 by the applied pulse. This is because γ is increased more than ρ by the applied pulse. The synchronization ratio vanishes for w = 0, and it is increased with increasing coupling strength [54].
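The synchrony measure and ratio described above are consistent with (and assumed here to be)
\[ P(t) = \frac{1}{N^{2}} \sum_{i,j} \big\langle (r_i - r_j)^2 \big\rangle = 2\,[\gamma(t) - \rho(t)], \qquad
S(t) = 1 - \frac{P(t)}{P_0(t)} = \frac{\rho/\gamma - 1/N}{1 - 1/N}, \]
which indeed gives S = 0 in the asynchronous state (ρ = γ/N) and S = 1 in the completely synchronous state (ρ = γ).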
Next we show some results when the indices a and b in F(x) = −λx^a and G(x) = x^b are changed. Figure 10(a) shows the time dependence of µ for (a, b) = (1, 1) (solid curve) and (a, b) =
(2, 1) (dashed curve) with α = 0.0, β = 0.1, N = 10 and w = 0.0. The saturated magnitude of µ for α = 0.5 is larger than that for α = 0.0. Solid and dashed curves in Fig.10(b) show µ for (a, b) = (1, 1) and (1, 0.5), respectively, with α = 0.5, β = 0.001, N = 10 and w = 0.0. Both results show similar responses to an applied pulse, although µ for a background input of I(b) = 0.1 for (a, b) = (1, 0.5) is a little larger than that for (a, b) = (1, 1).

Figure 10: (a) Response of µ(t) to the input pulse I(t) given by Eq. (71) for (a, b) = (1, 1) (solid curve) and (a, b) = (2, 1) (dashed curve) with α = 0.0, β = 0.1, N = 10 and λ = 1.0. (b) Response of µ(t) to the input pulse I(t) for (a, b) = (1, 1) (solid curve) and (a, b) = (1, 0.5) (dashed curve) with α = 0.5, β = 0.001, N = 10, λ = 1.0 and w = 0.0.

Figure 11: Response of µ(t) (solid curves) to the sinusoidal input I(t) (dashed curves) given by Eq. (74) for (a) Tp = 20 and (b) Tp = 10 with A = 0.5, λ = 1.0, α = 0.5, β = 0.1 and N = 10 (a = 1 and b = 1).

2. 3. 3 Response to sinusoidal inputs

We have also applied a sinusoidal input given by

(74)

with A = 0.2, I(b) = 0.1, and Tp = 10 and 20. The time dependences of µ for Tp = 20 and Tp = 10 are plotted in Figs.11(a) and 11(b), respectively, with α = 0.5, β = 1.0 and N = 10, solid and dashed curves denoting µ and I, respectively. The delay time of µ against the input I(t) is about τd ∼ 1.0, independent of Tp. The magnitude of µ for Tp = 10 is smaller than that for Tp = 20.
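The sinusoidal input of Eq. (74) presumably has the form
\[ I(t) = A \sin\!\left( \frac{2\pi t}{T_p} \right) + I^{(b)}, \]
with amplitude A, period T_p and background input I^{(b)}.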
3 Multiple neuron clusters

3. 1 AMM

We consider a neuronal ensemble consisting of M clusters, whose mth cluster includes Nm neurons. The dynamics of the firing rate rmi (≥ 0) of a neuron i in the cluster m is assumed to be described by the Langevin model given by

(75)

with

(76)

where F(x) and G(x) are arbitrary functions of x; H(x) is given by Eq. (13); Zm (= Nm − 1) denotes the coordination number; Im(t) expresses an external input to the cluster m; αm and βm are the strengths of multiplicative and additive noises, respectively, in the cluster m, given by ηmi(t) and ξmi(t) expressing zero-mean Gaussian white noises with correlations given by

⟨ηni(t) ηmj(t′)⟩ = δnm δij δ(t − t′), (77)

⟨ξni(t) ξmj(t′)⟩ = δnm δij δ(t − t′), (78)

⟨ηni(t) ξmj(t′)⟩ = 0. (79)

In the AMM [54], we define the means, variances and covariances, µm, γm and ρmn, given by
(80)

(81)

(82)

where the global variable Rm is defined by

(83)

Details of the derivation of the equations of motion for µm, γm and ρmn are given in the Appendix [Eqs. (A10)-(A12)].

3. 2 Stationary property

Now we consider an E-I ensemble (M = 2) consisting of excitatory (E) and inhibitory (I) neuron clusters, for which we get equations of motion for µm, γm and ρmn from Eqs. (A10)-(A12):

(84)

(85)

(86)

(87)

(88)

(89)

(90)

Here we set −wII ≤ 0 and −wEI ≤ 0 by convention. Equations (84)-(90) correspond to a generalized Wilson-Cowan (WC) model, because they reduce to the WC model [27] if we adopt F(x) = −λx and G(x) = x (see below), neglecting all fluctuations of γη and ρηη′ (η, η′ = E, I).

It is difficult to obtain the stability condition for the stationary solution of Eqs. (84)-(90) because they are seven-dimensional nonlinear equations. We find that the equations of motion of µE and µI are decoupled from the rest of the variables in the cases of G(x) = x and G(x) = x^{1/2} with F(x) = −λx, for which fη,2 = 0 and (gη,1 gη,2 + gη,0 gη,3) = 0 (η = E, I) in Eqs. (84) and (85). We will discuss the stationary solutions for these two cases in the following.

(A) F(x) = −λx and G(x) = x

Equations of motion for µE and µI become

(91)

(92)
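A minimal sketch of Eqs. (91) and (92), assuming the Wilson-Cowan-type structure implied by the text (a relaxation term plus the gain H acting on the net coupling input, with the Stratonovich noise-induced drift and with fluctuation terms of order γ, ρ dropped):
\[ \frac{d\mu_\eta}{dt} = F(\mu_\eta) + H(u_\eta) + \frac{\alpha_\eta^{2}}{2}\, G(\mu_\eta)\, G'(\mu_\eta), \qquad
u_E = w_{EE}\mu_E - w_{EI}\mu_I + I_E, \quad u_I = w_{IE}\mu_E - w_{II}\mu_I + I_I, \]
with η = E, I. For G(x) = x the noise term is α_η²µ_η/2, whereas for G(x) = x^{1/2} it is the constant α_η²/4, which is why multiplicative noise acts like an effective input in case (B) below.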
The stationary solution is given by

(93)

(94)

The stability condition for the stationary solutions is given by

(95)

(96)

where T and D denote the trace and determinant, respectively, of the Jacobian matrix of Eqs. (91) and (92).

By solving Eqs. (91) and (92), we get the stationary solutions of µE and µI. Figures 12(a) and 12(b) show µE and µI, respectively, as functions of wEE for various α (≡ αE = αI) with λE = λI = 1.0, wEI = wIE = wII = 1, IE = II = 0 and NE = NI = 10. In the case of α = 0, µE and µI are zero for wEE < wc but become finite for wEE > wc, where wc (= 1.5) denotes the critical coupling for the ordered state. With increasing α, the critical value of wc is decreased and the magnitudes of µE and µI in the ordered state are increased. This shows that multiplicative noise works to create the ordered state [37].

(B) F(x) = −λx and G(x) = x^{1/2}

Equations of motion for µE and µI become
(97)
(98)
Figure 12: Stationary values of (a) µE and (b) µI in an E-I ensemble as a function of wEE for various values of α (= αE = αI) for the case of G(x) = x with λE = λI = 1.0, wEI = wIE = wII = 1.0, IE = II = 0 and NE = NI = 10.

Figure 13: Stationary values of (a) µE and (b) µI in an E-I ensemble as a function of wEE for various values of α (= αE = αI) for the case of G(x) = x^{1/2} with λE = λI = 1.0, wEI = wIE = wII = 1.0, IE = II = 0 and NE = NI = 10.

The stationary solution is given by

(99)

(100)

where T and D express the trace and determinant, respectively, of the Jacobian matrix of Eqs. (99) and (100). The stability condition for the stationary solution is given by

(101)

(102)

Figures 13(a) and 13(b) show the wEE dependences of µE and µI, respectively, for various α (= αE = αI) with λE = λI = 1.0, wEI = wIE = wII = 1.0 and NE = NI = 10. Equations (97) and (98) show that multiplicative noise plays the role of an input, yielding finite µE and µI even for no external inputs (IE = II = 0). With increasing α, the magnitudes of µE and µI are increased and the critical value of wc for the ordered state is decreased. It is interesting to note that the behavior of µE and µI at wEE ≳ wc changes gradually with increasing α.

3. 3 Dynamical property

We have studied the dynamical property of the rate model by applying pulse inputs given by

(103)

with AE = 0.5, AI = 0.3, IE(b) = 0.1, II(b) = 0.05, t1 = 40 and t2 = 50. The time courses of µη, γη and ρηη′ (η, η′ = E, I) are plotted in Figs.14(a)-14(c), respectively, where solid, chain and dashed curves show the results of the AMM and dotted curves those of DS averaged over 1000 trials for αE = αI = 0.5, βE = βI = 0.1, and wEE = wEI = wIE = wII = 1.0. The results of the AMM are in good agreement with the DS results.

Responses of µE and µI for various sets of couplings are plotted in Figs.15(a)-15(f): w1001 in Fig.15(b), for example, means that (wEE, wEI, wIE, wII) = (1,0,0,1). Figure 15(a) shows the result for no couplings (wEE = wEI = wIE = wII = 0). When the intra-cluster couplings wEE = 1.0 and wII = 1.0 are introduced, µI in the inhibitory cluster is much suppressed, while the excitatory cluster is in the ordered state with µE = 0.73 for no input pulses, as shown in Fig.15(b). Figure 15(c) shows that when only the inter-cluster coupling wEI is introduced, the magnitude of µE is decreased compared to that in Fig.15(a). In contrast, Fig.15(d) shows that an addition of only wIE enhances the magnitude of µI compared to that in Fig.15(a). We note in Fig.15(e) that when both inter-cluster couplings wEI and wIE are included, the magnitude of µE is considerably reduced while that of µI is slightly increased. When all couplings are
added, the magnitudes of both µE and µI are increased compared to those for no couplings shown in Fig.15(a). Figures 15(a)-15(f) clearly show that the responses of µE and µI to inputs significantly depend on the couplings.

Figure 14: Time courses of (a) µE and µI, (b) γE and γI, and (c) ρEE, ρII and ρEI, for pulse inputs given by Eq. (103) with αE = αI (= α) = 0.5, βE = βI (= β) = 0.1, NE = NI = 10 and wEE = wEI = wIE = wII = 1.0: solid, dashed and chain curves denote the results of the AMM and dotted curves express those of DS with 1000 trials.

Figure 15: Responses of µE (solid curves) and µI (dashed curves) of an E-I ensemble to pulse inputs given by Eq. (103); (a) (wEE, wEI, wIE, wII) = (0,0,0,0), (b) (1,0,0,1), (c) (0,1,0,0), (d) (0,0,1,0), (e) (0,1,1,0) and (f) (1,1,1,1): w1001, for example, expresses the case of (b): αE = αI = 0.5, βE = βI = 0.1, AE = 0.5, AI = 0.3, IE(b) = 0.1, II(b) = 0.05 and NE = NI = 10.

Figures 16(a) and 16(b) show the synchronization ratios SE and SI, respectively, defined by [see Eqs. (72) and (73)]

(104)

for various sets of couplings when the inputs given by Eq. (103) are applied (AE = 0.5, AI = 0.3, IE(b) = 0.1, II(b) = 0.05, αE = αI = 0.5, βE = βI = 0.1, and NE = NI = 10). First we discuss the synchronization ratio in the period t < 40 and t > 60, where the pulse input is not relevant. With no intra- and inter-cluster couplings (wEE = wEI = wIE = wII = 0), we get SE = SI = 0. When only the intra-cluster couplings wEE = 1 and wII = 1 are introduced, we get SE = 0.15 and SI = −0.67, as shown by the dashed curve in Fig.16(b). When the inter-cluster coupling wEI = 1 is included, the synchronization in the excitatory cluster is decreased to SE = 0.08 (dotted curve in Fig.16(a)).
Figure 16: Synchronization ratios (a) SE and (b) SI of an E-I ensemble for pulse inputs given by Eq. (103) with AE = 0.5, AI = 0.3, IE(b) = 0.1, II(b) = 0.05, αE = αI = 0.5, βE = βI = 0.1, and NE = NI = 10: w1001, for example, denotes (wEE, wEI, wIE, wII) = (1,0,0,1).
In contrast, when the inter-cluster coupling wIE = 1 is introduced, the synchrony in the inhibitory cluster is increased to SI = 0.06, as shown by the dotted curve in Fig.16(b). When the inter-cluster couplings wEI = wIE = 1 are included, the synchronization ratios almost vanish (chain curves). Solid curves show that when both intra- and inter-cluster couplings are included, we get SE = 0.24 and SI = 0.04. It is noted that the responses of the synchronizations to a pulse applied at 40 ≤ t < 50 are rather complicated. When an input pulse is applied at t = 40, the synchronization ratios are generally decreased, while SE with w0110 is increased: SE with w0100 is once decreased and then increased. When the applied pulse disappears at t = 50, the synchronizations are increased in the refractory period, though the synchronization ratios for w0100 and w0110 are decreased.

Figure 17: Responses of an E-I ensemble; (a) input signal IE(t), (b) local rates rη(t) and (c) global rates Rη(t) (η = E and I) obtained by direct simulation (DS) with a single trial: (d) global rates Rη(t) calculated by DS with 100 trials: (e) the result of the AMM µη(t): αE = αI = 0.5, βE = βI = 0.1, wEE = wEI = wIE = wII = 1.0, AE = 0.5, AI = 0.0, IE(b) = 0.1, II(b) = 0.05 and NE = NI = 10. Solid and dashed curves in (b)-(e) denote the results for the excitatory (E) and inhibitory (I) clusters, respectively.

Figures 17(a)-17(e) show the responses of an E-I ensemble with NE = NI = 10 when the input pulse shown in Fig.17(a) is applied only to the excitatory cluster. Responses of the local rate rη of single neurons in the excitatory (η = E) and inhibitory (η = I) clusters are shown by solid and dashed curves, respectively, in Fig.17(b). The local rates in Fig.17(b), which are obtained by DS with a single trial, have much irregularity induced by the additive and multiplicative noises with αE = αI = 0.1 and βE = βI = 0.1. Figure 17(c) shows the population-averaged rates Rη(t) obtained by DS with a single trial. The irregularity shown in Fig.17(c) is reduced compared to that of rη(t) in Fig.17(b), which demonstrates the advantage of the population code. When DS is repeated and the global variable is averaged over 100 trials, we get the result for Rη(t) whose irregularity is much reduced by the average over trials, as shown in Fig.17(d). The AMM results for µη(t) shown in Fig.17(e) are in good agreement with those shown in Fig.17(d).

4 Discussion and conclusion

Figure 18: Stationary global distributions in the E [PE(R)] and I clusters [PI(R)] for (a) (wEE, wEI, wIE, wII) = (0,0,0,0), (b) (0,1,1,0) and (c) (1,1,1,1) with IE(b) = 0.1, II(b) = 0.05, αE = αI (= α) = 0.5, βE = βI (= β) = 0.1 and NE = NI = 10, solid and dashed curves denoting PE(R) and PI(R), respectively.

We may calculate the stationary distributions in the E-I ensemble. Figures 18(a)-18(c) show the global distributions PE(R) and PI(R), which are averaged within the excitatory and inhibitory clusters, respectively. We have studied how the distributions are varied when couplings are changed: w0110 in Fig.18(b), for example, means (wEE, wEI, wIE, wII) = (0, 1, 1, 0). Figure 18(a) shows the results without couplings, for which the distributions PE(R) and PI(R) have peaks at R ∼ 0.1
and 0.05, respectively, because of the applied background inputs (IE(b) = 0.1 and II(b) = 0.05). When the inter-cluster couplings wEI = 1.0 and wIE = 1.0 are introduced, the peak of PE(R) locates at R ∼ 0.02, while that of PI(R) is at R ∼ 0.08: their relative positions are interchanged compared to the case of no couplings shown in Fig.18(a). When all couplings with wEE = wEI = wIE = wII = 1.0 are included, we get the distributions shown in Fig.18(c), where both PE(R) and PI(R) have wider distributions than those with no couplings shown in Fig.18(a). Our calculations show that the distributions are much influenced by the magnitudes of the couplings.

We have proposed the rate model given by Eqs. (8) and (9), in which the relaxation process is given by a single F(x). Instead, when the relaxation process consists of two terms:

(105)

with c1 + c2 = 1, the distribution becomes

(106)

where pk(r) (k = 1, 2) denotes the distribution with only F(x) = F1(x) or F(x) = F2(x). In contrast, when multiplicative noises arise from two independent origins:

(107)

the distribution for β = H = 0 becomes

(108)

Similarly, when additive noises arise from two independent origins:

(109)

the distribution for α = H = 0 becomes

(110)

Equations (106), (108) and (110) are quite different from the form given by

(111)

which has been conventionally adopted for fitting theoretical distributions to those obtained by experiments [45][46].

It is an interesting subject to estimate the input I from the response R of the cluster. By using the Bayesian statistics, the probability of I for a given R, P(I | R), is expressed by

(112)

where P(R | I) is the probability of R for a given I and P(I) the likelihood of I. Since P(R | I) is obtainable with the use of our rate model, as shown in Figs.6(a) and 6(b), we may estimate P(I | R) by Eq. (112) if P(I) is provided. Estimating the value of I which yields the maximum of P(I | R) is known as the maximum a posteriori (MAP) estimate.
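Equation (112) is presumably Bayes' theorem,
\[ P(I \mid R) = \frac{P(R \mid I)\, P(I)}{P(R)}, \qquad P(R) = \int dI\; P(R \mid I)\, P(I), \]
so that the MAP estimate maximizes P(R | I) P(I) over I.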
In the AMM, the relation between I(t) and the average of R(t), µ(t), is given by Eq. (68). From Eqs. (13) and (68), I(t) is expressed in terms of µ(t) and dµ(t)/dt as

(113)

(114)

The dµ/dt term plays an important role in decoding the dynamics of µ(t). We note that I(t) given by Eq. (113) corresponds to the center of gravity of P(I | R), which is expected to be nearly the same as the maximum obtained by the MAP estimate. In BMI, information on hand position, velocity, etc. at time t is modeled as a weighted linear combination of neuronal firing rates collected by multi-electrodes as [20]

(115)

where y(t) denotes a vector expressing position, velocity, etc., r(t − u) expresses a vector of firing rates at time t and time lag u, a(u) stands for a vector of weights at the time lag u, and b means a vector of y-intercepts. Equation (115) is transformed to a matrix form, from which the vectors a(u) are obtained by a matrix inversion. It has been reported that an artificial hand is successfully manipulated by such a decoding method for the firing-rate signals r(t − u) [19][20].
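The decoder of Eq. (115) presumably takes the linear (Wiener-filter) form used in Refs. [19][20] (the lag cutoff U below is an illustrative symbol, not taken from the paper),
\[ \mathbf{y}(t) = \mathbf{b} + \sum_{u=0}^{U} \mathbf{a}(u)\, \mathbf{r}(t - u), \]
where the weights a(u) over the considered time lags and the intercept b are fitted by linear least squares, i.e. by the matrix inversion mentioned above.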
The structures of neuronal networks have been discussed based on the theory of complex networks [68][69]. The neural network of the nematode worm C. elegans is reported to be a small-world network. It has been recently observed by functional magnetic resonance imaging (fMRI) that the functional connectivity in the human brain shows the behavior of a scale-free [70] or small-world network [71][72]. Most theoretical studies have assumed local or all-to-all couplings in neuron networks, as we have done in our study. In real neural networks, however, the couplings are neither purely local nor global. A new approach has been proposed to take into account couplings ranging from local to global and/or from regular to random, extending the AMM [58]. It
has been shown that the synchronization in small-world networks is worse than that in regular networks, due to the randomness introduced in the small-world networks.

In recent years, it has become increasingly popular to study distributed information processing by using cultured neuronal networks, which are cultivated in an artificial way [73]. It is possible that every cell in a cultured network may be observed, monitored, stimulated and manipulated with high temporal and spatial resolutions. The observed ISI distributions of cultured networks with 50-10^6 neurons are reported to obey a scale-free distribution [74]. Although our AMM study in Sec.3 has been made for an E-I (M = 2) cluster, it would be interesting to investigate the dynamics of larger ensembles modeling complex networks and cultured networks, which is left for our future study.

To summarize, we have discussed the stationary and dynamical properties of the generalized rate model by using the FPE and the AMM. The proposed rate model is a phenomenological one and has no biological basis. Nevertheless, the generalized rate model is expected to be useful in discussing various properties of neuronal ensembles. Indeed, the proposed rate model has an interesting property, yielding various types of stationary non-Gaussian distributions such as gamma, inverse-Gaussian and log-normal distributions, which have been experimentally observed [45][46]. The stationary distribution and dynamical responses of neuronal clusters have been shown to depend considerably on the model parameters such as the strengths of noises and couplings. A disadvantage of our AMM is that its applicability is limited to weak-noise cases. On the contrary, an advantage of the AMM is that we can easily discuss the dynamical property of an N-unit neuronal cluster. In DS and FPE, we have to solve the N-dimensional stochastic Langevin equations and the N-dimensional partial differential equations, respectively, which are more laborious than the three-dimensional ordinary differential equations in the AMM. We hope that the proposed rate model in the AMM is adopted for a wide class of studies on neuronal ensembles.
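To illustrate the direct-simulation procedure contrasted with the AMM above (note [67] specifies the Heun method with a time step of 0.0001), the following is a minimal Python sketch. The drift/diffusion form, the saturating gain, the non-negativity clamp and all parameter values are assumptions made for illustration, not the paper's exact settings.

import numpy as np

# Minimal sketch of a direct simulation (DS) of an N-unit Langevin rate model,
# integrated with the stochastic Heun (predictor-corrector) scheme of note [67].
N, w, lam = 10, 0.5, 1.0            # cluster size, coupling, relaxation rate
alpha, beta = 0.5, 0.1              # multiplicative / additive noise strengths
dt, T = 1e-4, 100.0                 # time step (as in note [67]) and total time

def H(x):                           # assumed saturating gain, stand-in for Eq. (13)
    x = np.maximum(x, 0.0)
    return x / (1.0 + x)

def G(x):                           # multiplicative-noise coupling; Case I choice G(x) = x
    return x

def I_ext(t):                       # pulse input of the type used in Sec. 2.3.2
    return 0.1 + (0.5 if 40.0 <= t < 50.0 else 0.0)

def drift(r, t):
    u = w * (r.sum() - r) / (N - 1) + I_ext(t)   # coupling term u_i
    return -lam * r + H(u)

rng = np.random.default_rng(0)
r = np.zeros(N)
for step in range(int(T / dt)):
    t = step * dt
    dW_m = rng.normal(0.0, np.sqrt(dt), N)       # Wiener increments (multiplicative)
    dW_a = rng.normal(0.0, np.sqrt(dt), N)       # Wiener increments (additive)
    # Heun predictor (Euler step), then corrector averaging drift and diffusion
    # at both ends with the same increments; this converges to Stratonovich.
    r_pred = r + drift(r, t) * dt + alpha * G(r) * dW_m + beta * dW_a
    r_new = r + 0.5 * (drift(r, t) + drift(r_pred, t + dt)) * dt \
              + 0.5 * alpha * (G(r) + G(r_pred)) * dW_m + beta * dW_a
    r = np.maximum(r_new, 0.0)                   # crude clamp: rates stay non-negative

print("global rate R at final time:", r.mean())

Averaging such runs over many trials gives the DS curves compared with the AMM in the figures, whereas the AMM itself only requires integrating three ordinary differential equations.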
Acknowledgements

This work is partly supported by a Grant-in-Aid for Scientific Research from the Japanese Ministry of Education, Culture, Sports, Science and Technology.

APPENDIX: AMM for multiple clusters

We present the details of an application of the AMM to the rate model describing multiple clusters given by Eqs. (75) and (76). The Fokker-Planck equation for the distribution p({rmi}, t) is given by [65]

(A1)

where G′(x) = dG(x)/dx, and φ = 1 and 0 in the Stratonovich and Ito representations, respectively.

When we consider the global variables of the cluster m given by

(A2)

the distribution P(Rm, t) for Rm is given by

(A3)

Variances and covariances of local variables are defined by

(A4)

Equations of motion of means, variances and covariances of local variables (rmi) are given by

(A5)

(A6)

Equations of motion of the mean, variance and covariance of global variables (Rm) are obtainable by using Eqs. (A2), (A5) and (A6):

(A7)

(A8)

The means, variances and covariances µm, γm and ρmn are given by
Eqs. (80)-(82). Expanding rmi in Eqs. (A5)-(A8) around the average value µm as

rmi = µm + δrmi, (A9)

and retaining terms up to the order of ⟨δrmi δrmj⟩, we get equations of motion for µm, γm and ρmn given by

(A10)

(A11)

(A12)

where

(A13)

(A14)

(A15)

(A16)

For a two-cluster (M = 2) ensemble consisting of excitatory and inhibitory clusters, the equations of motion given by Eqs. (A10)-(A12) reduce to Eqs. (84)-(90).

References
[1] Kandel, E. R.; Schwartz, J. H.; Jessell, T. M. Essentials of Neural Science and Behavior, Appleton & Lange, Stanford, 1995.
[2] Rieke, F.; Warland, D.; de Ruyter van Steveninck, R.; Bialek, W. Spikes: Exploring the Neural Code; MIT Press, Cambridge, 1996.
[3] deCharms, R. C. Proc. Natl. Acad. Sci. USA 1998, 95, 15166-15168.
[4] Eggermont, J. J. Neurosci. Biobehav. Rev. 1998, 80, 2743-2764.
[5] Ursey, W. M.; Reid, R. C. Annu. Rev. Physiol. 1999, 61, 435-456.
[6] deCharms, R. C.; Zador, A. Ann. Rev. Neurosci. 2000, 23, 613-647.
[7] Adrian, E. D. J. Physiol. (London) 1926, 61, 49-72.
[8] Ikegaya, Y.; Aaron, G.; Cossart, R.; Aronov, D.; Lampl, I.; Ferster, D.; Yuste, R. Science 2004, 23, 559-564.
[9] Thorpe, S.; Fize, D.; Marlot, C. Nature 1996, 381, 520-522.
[10] Keysers, C.; Xiao, D. K.; Foldiak, P.; Perrett, D. I. J. Cognitive Neurosci. 2001, 13, 90-101.
[11] Abbott, L.; Sejnowski, T. J. Neural Codes and Distributed Representations, MIT Press, Cambridge, 1998.
[12] Pouget, A.; Dayan, P.; Zemel, R. Nat. Rev. Neurosci. 2000, 1, 125-132.
[13] Georgopoulos, A. P.; Schwartz, A. B.; Kettner, R. E. Science 1986, 233, 1416-1419.
[14] O'Keefe, J. The hippocampal cognitive map and navigational strategies, in Brain and Space, Paillard, J., Ed., Oxford Univ. Press, Oxford, 1991, pp. 273-295.
[15] Treue, S.; Hol, K.; Rauber, H. J. Nature Neurosci. 2000, 3, 270-276.
[16] Zemel, R. S.; Dayan, P.; Pouget, A. Neural Comput. 1998, 10, 403-430.
[17] Shadlen, M. N.; Newsome, W. T. Current Opinion in Neurobiology 1994, 4, 569-579.
[18] Abbott, L. F.; Dayan, P. Neural Comput. 1999, 11, 91-101.
[19] Chapin, J. K.; Moxon, K. A.; Markowitz, R. S.; Nicolelis, M. A. L. Nature Neurosci. 1999, 2, 664-670.
[20] Carmena, J. M.; Lebedev, M. A.; Crist, R. E.; O'Doherty, J. E.; Santucci, D. M.; Dimitrov, D. F.; Patil, P. G.; Henriquez, C. S.; Nicolelis, M. A. L. PLoS Biology 2003, 1, 1-16.
[21] Anderson, R. A.; Musallam, S.; Pesaran, B. Curr. Opinion Neurobiol. 2004, 14, 720-726.
[22] Gerstner, W.; Kistler, W. M. Spiking Neuron Models, Cambridge Univ. Press, 2002.
[23] Fourcaud-Trocmé, N.; Brunel, N. J. Comput. Neurosci. 2005, 18, 311-321.
[24] Amit, D. J.; Tsodyks, M. V. Network 1991, 2, 259-273.
[25] Ermentrout, B. Neural Comput. 1994, 6, 679-695.
[26] Shriki, O.; Hansel, D.; Sompolinsky, H. Neural Comput. 2003, 15, 1809-1841.
[27] Wilson, H. R.; Cowan, J. D. Biophys. J. 1972, 12, 1-24.
[28] Amari, S. IEEE Trans. 1972, SMC-2, 643-657.
[29] Hopfield, J. J. Proc. Natl. Acad. Sci. (USA) 1982, 79, 2554-2558.
[30] Mainen, Z. F.; Sejnowski, T. J. Science 1995, 268, 1503-1506.
[31] Gammaitoni, L.; Hänggi, P.; Jung, P.; Marchesoni, F. Rev. Mod. Phys. 1998, 70, 223-287.
[32] Masuda, N.; Aihara, K. Phys. Rev. Lett. 2002, 88, 248101.
[33] van Rossum, M. C. W.; Turrigiano, G. G.; Nelson, S. B. J. Neurosci. 2002, 22, 1956-1966.
[34] Wang, S. W.; Wang, W.; Liu, F. Phys. Rev. Lett. 2006, 96, 018103.
[35] Muñoz, M. A. Advances in Condensed Matter and Statistical Mechanics, Korutcheva, E.; Cuerno, R., Eds.; Nova Science Publishers, New York, 2004, p. 34.
[36] Muñoz, M. A.; Colaiori, F.; Castellano, C. Phys. Rev. E 2005, 72, 056102.
[37] van den Broeck, C.; Parrondo, J. M. R.; Toral, R. Phys. Rev. Lett. 1994, 73, 3395-3398.
[38] Tsallis, C. J. Stat. Phys. 1988, 52, 479-487.
[39] Tsallis, C.; Mendes, R. S.; Plastino, A. R. Physica A 1998, 261, 534-554.
[40] Wilk, G.; Wlodarczyk, Z. Phys. Rev. Lett. 2000, 84, 2770-2773.
[41] Sakaguchi, H. J. Phys. Soc. Jpn. 2001, 70, 3247-3250.
[42] Anteneodo, C.; Tsallis, C. J. Math. Phys. 2003, 44, 5194-5203.
[43] Hasegawa, H. Physica A 2006, 365, 383-401.
[44] Hasegawa, H. J. Phys. Soc. Jpn. 2006, 75, 033001.
[45] Kass, R. E.; Ventura, V.; Brown, E. N. J. Neurophysiol. 2005, 94, 8-25.
[46] Vogels, T. P.; Rajan, K.; Abbott, L. F. Annu. Rev. Neurosci. 2005, 28, 357-376.
[47] Tuckwell, H. C. Introduction to Theoretical Neurobiology and Stochastic Theories, Cambridge, New York, 1988.
[48] Gerstein, G. L.; Mandelbrot, B. Biophys. J. 1964, 4, 41-68.
[49] McKeegan, D. E. F. Brain Res. 2002, 929, 48-58.
[50] Omurtag, A.; Knight, B. W.; Sirovich, L. J. Comput. Neurosci. 2000, 8, 51-63.
[51] Nykamp, D. Q.; Tranchina, D. J. Comput. Neurosci. 2000, 8, 19-50.
[52] Rodriguez, R.; Tuckwell, H. C. Phys. Rev. E 1996, 54, 5585-5590.
[53] Mountcastle, V. B. J. Neurophysiol. 1957, 20, 408-434.
[54] Hasegawa, H. Phys. Rev. E 2003, 67, 041903.
[55] Hasegawa, H. Phys. Rev. E 2003, 68, 041909.
[56] Hasegawa, H. Phys. Rev. E 2004, 70, 021911.
[57] Hasegawa, H. Phys. Rev. E 2004, 70, 021912.
[58] Hasegawa, H. Phys. Rev. E 2004, 70, 066107.
[59] Hasegawa, H. Phys. Rev. E 2005, 72, 056139.
[60] Hasegawa, H. cond-mat/0603115.
[61] Risken, H. The Fokker-Planck Equation: Methods of Solution and Applications, Springer Series in Synergetics, Vol. 18, Springer-Verlag, 1992.
[62] Hasegawa, H. cond-mat/0601358.
[63] Hasegawa, H. Phys. Rev. E 2000, 61, 718-726.
[64] Monteiro, L. H. A.; Bussab, M. A.; Berlinck, J. G. C. J. Theor. Biol. 2002, 219, 83-91.
[65] Haken, H. Advanced Synergetics, Springer-Verlag, Berlin, 1983.
[66] Abe, S.; Rajagopal, A. K. cond-mat/0003303.
[67] Direct simulations for the rate model given by Eqs. (8) and (9) [Eqs. (75) and (76)] have been performed by using the Heun method with a time step of 0.0001. Results shown in the paper are averages of 100 trials unless otherwise noted.
[68] Watts, D. J.; Strogatz, S. H. Nature 1998, 393, 440-442.
[69] Barabási, A.; Albert, R. Science 1999, 286, 509-512.
[70] Chialvo, C. R. Physica A 2004, 340, 756-765.
[71] Stam, C. J. Neurosci. Lett. 2004, 355, 25-28.
[72] Achard, S.; Salvador, R.; Whitcher, B.; Suckling, J.; Bullmore, E. J. Neurosci. 2006, 26, 63-72.
[73] Potter, S. M. Advances in Neural Population Coding, Ed. Nicolelis, M. A. L., Progress in Brain Research, 2001, 130, 49, Elsevier Science.
[74] Volman, V.; Baruchi, I.; Persi, E.; Ben-Jacob, E. Physica A 2004, 335, 249-278.