Abstract
When inhibitory neurons constitute about 40% of the total number of neurons they could have an important antinociceptive role, as they would easily regulate the level of activity of other neurons. We consider a simple network of cortical spiking neurons with axonal conduction delays and spike-timing-dependent plasticity, representative of a cortical column or hypercolumn with a large proportion of inhibitory neurons. Each neuron fires following Hodgkin-Huxley-like dynamics and is randomly interconnected to other neurons. The network dynamics is investigated by estimating the Bandt and Pompe probability distribution function associated with the interspike intervals, for different degrees of interconnectivity across neurons. More specifically, we take into account the fine temporal "structures" of the complex neuronal signals not just through the probability distributions associated with the interspike intervals, but through much more subtle measures that account for their causal information: the Shannon permutation entropy, the Fisher permutation information and the permutation statistical complexity. This allows us to investigate how the information of the system might saturate to a finite value as the degree of interconnectivity across neurons grows, and to infer the emergent dynamical properties of the system.
1 Introduction

EEG and fMRI data are two important measures of brain activity. EEG reflects the brain's electrical activity, such as post-synaptic potentials, while fMRI detects hemodynamic changes, that is, variations in blood flow.
The action potential (or spike) is an explosion of electrical activity created by a depolarizing current. Single-unit recordings measure the spiking responses of a single neuron, and constitute an important methodology for understanding mechanisms and functions within the cerebral cortex. The development of micro-electrode arrays allowed recording from multiple units at the same time, and constitutes an important method for studying functional interactions among neuronal populations. In experimental neurophysiology, one of the magnitudes most commonly used for the study of neuronal network dynamics is the variance of the interspike intervals (ISIs) [1,2,3,4].
Bandt and Pompe [5] proposed a robust approach to time series analysis based on counting ordinal patterns, introducing the concept of permutation entropy to quantify the complexity of the system behind a time series. Thus, the ordinal structures of the time series, instead of the values themselves, are considered [6]. This methodology has been applied to the investigation of EEG and fMRI signals [7,8,9,10,11,12,13,14,15,16,17,18].
The nonlinear dynamic effects of the spiking activity of in vivo neuronal networks cannot be fully captured by simple calculations of ISIs. More specifically, estimating the variance of the ISIs may not always provide reliable information about the system dynamics. Importantly, the application of the Bandt and Pompe methodology [5] to spiking neural data has not been widely investigated.
In the current paper we consider the effective statistical complexity measure (SCM) introduced by Lamberti et al. [19] (also called the MPR complexity), which allows us to detect essential details of the dynamics of a neural network.
The nervous system can extract only a finite amount of information about sensory stimuli and, in subsequent stages of processing, the amount of information cannot exceed the amount extracted; therefore information should saturate as the number of interconnected neurons becomes large enough. That is, the correlations must behave in such a way that information saturates, or reaches a maximum, as the number of interconnected neurons increases [26].
In this paper we use the ISIs of a very simple spiking model [27] of a population of neurons to test the performance of the variance of the ISIs against more subtle measures accounting for the nonlinear dynamic effects of the temporal signal: the Shannon entropy [28,29], the MPR statistical complexity [28,29] and the Fisher information measure [30,31]. We show that simple estimations of the variance of the ISIs do not capture the saturation properties of the information as the number of interconnected neurons increases. Our proposal therefore is to use the Bandt-Pompe permutation methodology for the evaluation of the probability distribution function (PDF) associated with a time series [5] in order to characterize the dynamics of the spiking neural activity when simulating a cortical network. By estimating Fisher information versus MPR statistical complexity/Shannon entropy [32,33,34] of the ISI signals, we show that it is possible to quantify the optimal number of interconnected neurons that maximizes information transmission within a simple model that we use as "representative of a cortical hypercolumn with a large proportion of inhibitory neurons".
Based on the quantification of the ordinal "structures" present in the ISIs and their local influence on the associated probability distribution, we incorporate the time series' own temporal causality through an algorithm that is easy to implement and compute. We show that estimating the variance of the ISIs does not help to understand the dynamics of the system, and thus statistical measures taking into account the time causality of the signal are needed. Our approach allows us to estimate the saturation properties of the neuronal network, quantifying the causality of the signal and inferring the emergent dynamical properties of the system as the number of interconnected neurons increases.
Neurons fire spikes when they are near a bifurcation from resting to spiking activity, and it is the delicate balance between noise, dynamic currents and initial conditions that determines the phase diagram of neural activity. While there is a huge number of possible ionic mechanisms of excitability and spike generation, there are just four bifurcation mechanisms that can result in such a transition. That is, there are many ionic mechanisms of spike generation, but only four generic bifurcations of equilibrium. These bifurcations divide neurons into four categories: integrators or resonators, monostable or bistable [27].
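Each neuron follows the simple spiking model of [20]; in its standard form, which we assume corresponds to Eqs. (1)-(3) referenced below, the membrane potential v and the recovery variable u evolve according to

$$\dot{v} = 0.04\, v^{2} + 5\, v + 140 - u + I, \tag{1}$$

$$\dot{u} = a\, (b\, v - u), \tag{2}$$

with the auxiliary after-spike reset

$$\text{if } v \geq 30 \text{ mV, then } v \leftarrow c \text{ and } u \leftarrow u + d. \tag{3}$$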
When the membrane potential reaches its apex of 30 mV (the spike threshold), the membrane voltage and the recovery variable are reset according to Eq. (3). The variable I accounts for the inputs to the neurons [20].
Regular spiking (RS) neurons [20] are the major class of excitatory neurons (b > 0), and the most typical neurons in the cortex. In our current simple model the excitatory neurons take the fixed values a = 0.02 and b = 0.2, while c = −65 + 15 (rd_e)² and d = 8 − 6 (rd_e)², where rd_e is a random vector (with entries between zero and one) of the size of the number of excitatory neurons.
Fast spiking (FS) neurons fire periodic trains of action potentials without any adaptation. FS neurons are inhibitory neurons with a = 0.1 and b = 0.25 − 0.05 rd_i, where rd_i is a random vector (with entries between zero and one) of the size of the number of inhibitory neurons; furthermore, c = −65 mV and d = 2. Positive synaptic weights are taken for excitatory neurons and negative ones for the inhibitory neurons. This simple model reproduces some biologically plausible phenomena, such as patterns of spontaneous cortical activity including brain-wave-like oscillations.
The network simulation takes into account cortical spiking neurons with axonal conduction delays and spike-timing-dependent plasticity (STDP). The magnitude of the synaptic weight between pre- and postsynaptic neurons depends on the timing of the spikes according to the STDP rule. That is, the weight of the synaptic connection from the pre- to the postsynaptic neuron is potentiated by 0.1 · exp(−t/t₀) if the postsynaptic neuron fires a time t after the presynaptic spike, and depressed by 0.12 · exp(−t/t₀) if the order is reversed, where t₀ = 20 ms [22]. Importantly, the interplay between conduction delays and STDP helps the spiking neurons to produce stable firing patterns that would not be possible without these assumptions. Each neuron in the network is described by the simple model of spiking neurons of [20], described above in Eqs. (1) to (3).
In this simple model we choose to account for STDP because its interplay with conduction delays helps the spiking neurons to spontaneously self-organize into groups with patterns of time-locked activity. Thus, STDP produces stable firing patterns emulating spontaneous brain activity that would not be possible without this assumption. Although the connectivity patterns are random, reflecting connectivity within the cortex, one could implement more sophisticated anatomy; the sparse networks shown in this section are representative of a cortical hypercolumn [22].
To illustrate the simple model that we are using, we consider three network cases. (a) First, a network of n = 1000 neurons, with ne = 550 of excitatory regular spiking (RS) type and the remaining ni = 450 of inhibitory fast spiking (FS) type (n = ne + ni). (b) Second, a network of n = 900 neurons, with ne = 500 of excitatory (RS) type and the remaining ni = 400 of inhibitory (FS) type. (c) Finally, a network of n = 800 neurons, with ne = 450 of excitatory (RS) type and the remaining ni = 350 of inhibitory (FS) type. In all the cases considered, each excitatory regular spiking neuron, as well as each inhibitory fast spiking neuron, is connected to m = 2, 4, 6, 8, 10, 20, 30, 40, 60, 80, 100 or 120 random neurons, so that the probability of connection is m/n. In the case of inhibitory FS neurons, the connections are with excitatory neurons only. In all cases the synaptic connections among neurons have fixed conduction delays, which are random integers between 1 ms and 10 ms.
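A minimal sketch of this connectivity scheme for network case (a); the synaptic weight magnitudes are illustrative assumptions, not values taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
n, ne = 1000, 550                     # total and excitatory neurons, case (a)
m = 20                                # out-degree: connection probability m/n

post = np.empty((n, m), dtype=int)    # postsynaptic targets of each neuron
for i in range(n):
    pool = n if i < ne else ne        # FS neurons project to excitatory only
    post[i] = rng.choice(pool, size=m, replace=False)

delay = rng.integers(1, 11, size=(n, m))   # fixed conduction delays, 1-10 ms
# Positive weights for excitatory sources, negative for inhibitory ones
# (the magnitudes 6.0 and -5.0 are our illustrative choices).
weight = np.where(np.arange(n) < ne, 6.0, -5.0)[:, None] * np.ones((n, m))
```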
In the next sections we show that understanding how neural information saturates as the number of neurons increases requires an appropriate mathematical framework accounting for the ordinal "structures" present in the time series. We will also show how to quantify the causal information of the ISIs. For the sake of simplicity, in this paper we study a simple raw signal of spontaneous neural activity. We investigate the effect of increasing the network connectivity of a simulated cortical hypercolumn with a large percentage of inhibitory neurons, by quantifying the degree of correlation with and without considering the causality information present in the ISIs.
3 Information Theory quantifiers
3.1 Shannon Entropy, Fisher Information Measure and MPR Statistical Complexity
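In its standard form, which the discussion below assumes, Shannon's logarithmic information measure for a discrete probability distribution $P = \{p_j;\, j = 1, \ldots, N\}$, with $\sum_{j=1}^{N} p_j = 1$, reads

$$S[P] = -\sum_{j=1}^{N} p_j \ln p_j .$$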
This functional is equal to zero when we are able to predict with full certainty which of the possible outcomes j, whose probabilities are given by pj, will actually take place. Our knowledge of the underlying process described by the probability distribution is maximal in this instance. In contrast, this knowledge is minimal for the uniform distribution Pe = {pj = 1/N, ∀j = 1, ..., N}.
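For a continuous probability density f(x), the Fisher information measure in its gradient form is usually written

$$F[f] = \int \frac{|\vec{\nabla} f(x)|^{2}}{f(x)}\, dx .$$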
The gradient operator significantly influences the contribution of minute local f-variations to the Fisher information value, so that the quantifier is called a "local" one. Note that the Shannon entropy decreases for a skewed distribution, while the Fisher information increases in such a case. Local sensitivity is useful in scenarios whose description necessitates an appeal to a notion of "order" [32,33,34]. The concomitant problem of loss of information due to discretization has been thoroughly studied (see, for instance, [39,40,41] and references therein); in particular, discretization entails the loss of Fisher's shift-invariance, which is of no importance for our present purposes.
For a population of ℵ neurons encoding a stimulus variable x, the Fisher information can be expressed as

$$\frac{1}{F} = \left\langle \sum_{i=1}^{\aleph} \left[ \frac{\partial \ln P(n_{sp} \mid x, \Delta T)}{\partial x_i} \right]^{2} \right\rangle^{-1}, \tag{6}$$

where its inverse is the Cramér-Rao lower bound, $E[\epsilon^{2}] \geq 1/F$ [42,43,31]. Note that $\epsilon^{2} = \epsilon_{1}^{2} + \cdots + \epsilon_{\aleph}^{2}$ is the square error in a single trial, and $n_{sp}$ is the number of spikes. Considering the general case in which the probability distribution is a function of the mean firing rate and the time window, these conditions are sufficient to show that the Fisher information takes the form given in Eq. (7) [43],
where σ is the variance (or tuning width, which can take any positive real value), Kφ(f, ∆T, ℵ) denotes the dependence on the ℵ-dimensional space of the encoded variable, and η is a density factor proportional to the number of inactive neurons.
In the discrete case, the Fisher information measure can be written as

$$F[P] = \frac{1}{4} \sum_{i=1}^{N-1} \frac{(p_{i+1} - p_i)^{2}}{(p_{i+1} + p_i)/2}. \tag{8}$$
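A direct implementation of this discrete estimator (assuming the reconstruction of Eq. (8) above; the function name is ours and the overall constant inherits that reading):

```python
import numpy as np

def fisher_information(p):
    """Discrete Fisher information following Eq. (8) as reconstructed above."""
    p = np.asarray(p, dtype=float)
    num = (p[1:] - p[:-1]) ** 2
    den = (p[1:] + p[:-1]) / 2.0
    ok = den > 0                  # skip bins where both probabilities vanish
    return 0.25 * float(np.sum(num[ok] / den[ok]))
```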
When the system under study is in a very ordered state, represented by a very narrow PDF, we have a Shannon entropy S ∼ 0 and a Fisher information
measure F ∼ Fmax . On the other hand, when the system under study lies
in a very disordered state one gets an almost flat PDF and S ∼ Smax , while
F ∼ 0. Of course, Smax and Fmax are, respectively, the maximum values for
the Shannon entropy and Fisher information measure. One can state that the
general behavior of the Fisher information measure is opposite to that of the
Shannon entropy [44].
It is well known, however, that the ordinal structures present in a process are not quantified by randomness measures; consequently, measures of statistical or structural complexity are necessary for a better understanding (characterization) of the system dynamics represented by their time series [45]. The opposite extremes of perfect order (i.e., a periodic sequence) and maximal randomness (i.e., a fair coin toss) are very simple to describe because they do not have any structure; the complexity should be zero in these cases. At a given distance from these extremes, a wide range of possible ordinal structures exists. A complexity measure allows one to quantify this array of behavior [46]. We consider the MPR complexity [19], as it is able to quantify critical details of the dynamical processes underlying the data set.
Based on the seminal notion advanced by López-Ruiz et al. [47], this statistical complexity measure is defined through the product

$$C_{JS}[P] = Q_J[P, P_e] \cdot H_S[P] \tag{11}$$

of the normalized Shannon entropy $H_S[P] = S[P]/S_{max}$ and the disequilibrium $Q_J[P, P_e] = Q_0\, J[P, P_e]$, with

$$J[P, P_e] = S[(P + P_e)/2] - S[P]/2 - S[P_e]/2 \tag{12}$$

the above-mentioned Jensen-Shannon divergence and $Q_0$ a normalization constant ($0 \leq Q_J \leq 1$) equal to the inverse of the maximum possible value of $J[P, P_e]$. This value is obtained when one of the components of P, say $p_m$, is equal to one and the remaining $p_j$ are equal to zero. The Jensen-Shannon divergence, which quantifies the difference between two (or more) probability distributions, is especially useful for comparing the symbolic composition of different sequences [48]. Note that the SCM defined above depends on two different probability distributions: the one associated with the system under analysis, P, and the uniform distribution, Pe. Furthermore, it was shown that for a given value of HS, the range of possible CJS values varies between
a minimum Cmin and a maximum Cmax , restricting the possible values of the
SCM in a given complexity-entropy plane [49]. Thus, it is clear that impor-
tant additional information related to the correlational structure between the
components of the physical system is provided by evaluating the statistical
complexity measure.
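A minimal sketch of the resulting quantifier, using the standard closed form of the normalization constant Q0 (attained at the maximum of J[P, Pe] described above; the function name is ours):

```python
import numpy as np

def mpr_complexity(p):
    """MPR statistical complexity C_JS = Q_J[P, Pe] * H_S[P], cf. Eqs. (11)-(12)."""
    p = np.asarray(p, dtype=float)
    n = len(p)
    pe = np.full(n, 1.0 / n)                       # uniform distribution Pe
    def s(q):                                      # Shannon entropy S[q]
        q = q[q > 0]
        return -np.sum(q * np.log(q))
    hs = s(p) / np.log(n)                          # normalized entropy H_S
    jsd = s((p + pe) / 2) - s(p) / 2 - s(pe) / 2   # Jensen-Shannon divergence
    # Q0: inverse of the maximum of J, reached when one p_m = 1 (see text)
    q0 = -2.0 / (((n + 1.0) / n) * np.log(n + 1) - 2 * np.log(2 * n) + np.log(n))
    return q0 * jsd * hs
```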
The pertinent symbolic data are (i) created by ranking the values of the se-
ries and (ii) defined by reordering the embedded data in ascending order,
which is tantamount to a phase space reconstruction with embedding dimen-
sion (pattern length) D and time lag τ . In this way it is possible to quantify
the diversity of the ordering symbols (patterns) derived from a scalar time
series. Note that the appropriate symbol sequence arises naturally from the
time series and no model-based assumptions are needed. In fact, the necessary
“partitions” are devised by comparing the order of neighboring relative val-
ues rather than by apportioning amplitudes according to different levels. This
technique, as opposed to most of those in current practice, takes into account
the temporal structure of the time series generated by the physical process
under study. This feature allows us to uncover important details concerning
the ordinal structure of the time series [34,50,51] and can also yield informa-
tion about temporal correlation [28,29]. It is clear that this type of analysis
of time series entails losing some details of the original series’ amplitude in-
formation. Nevertheless, by just referring to the series’ intrinsic structure, a
meaningful difficulty reduction has indeed been achieved by Bandt and Pompe
with regard to the description of complex systems. The symbolic representa-
tion of time series by recourse to a comparison of consecutive (τ = 1) or
nonconsecutive (τ > 1) values allows for an accurate empirical reconstruction
of the underlying phase-space, even in the presence of weak (observational
and dynamic) noise [5]. Furthermore, the ordinal patterns associated with
the PDF are invariant with respect to nonlinear monotonous transformations.
Accordingly, nonlinear drifts or scaling artificially introduced by a measure-
ment device will not modify the estimation of quantifiers, a nice property if
one deals with experimental data (see, e.g., [52]). These advantages make the
Bandt and Pompe methodology more convenient than conventional methods
based on range partitioning (i.e., PDF based on histograms).
Additional advantages of the method reside in (i) its simplicity (we need only two parameters: the pattern length/embedding dimension D and the embedding delay τ) and (ii) the extremely fast nature of the pertinent calculation process [53]. The BP methodology can be applied not only to time series representative
of low dimensional dynamical systems, but also to any type of time series
(regular, chaotic, noisy, or reality based). In fact, the existence of an attractor
in the D-dimensional phase space is not assumed. The only condition for
the applicability of the Bandt-Pompe methodology is a very weak stationary
assumption (that is, for k ≤ D, the probability for xt < xt+k should not
depend on t [5]).
To use the Bandt and Pompe [5] methodology for evaluating the PDF, P, associated with the time series (dynamical system) under study, one starts by considering partitions of the pertinent D-dimensional space that will hopefully "reveal" relevant details of the ordinal structure of a given one-dimensional time series $\mathcal{X}(t) = \{x_t;\, t = 1, \ldots, M\}$ with embedding dimension D > 1 (D ∈ N) and embedding time delay τ (τ ∈ N). We are interested in "ordinal patterns" of order (length) D generated by

$$s \mapsto \left( x_{s-(D-1)\tau},\, x_{s-(D-2)\tau},\, \ldots,\, x_{s-\tau},\, x_{s} \right),$$

which assigns to each time s the D-dimensional vector of values at times s, s − τ, ..., s − (D − 1)τ. Clearly, the greater the value of D, the more information about the past is incorporated into these vectors. By the "ordinal pattern" related to the time s we mean the permutation $\pi = (r_0, r_1, \ldots, r_{D-1})$ of $[0, 1, \ldots, D-1]$ defined by

$$x_{s-r_{D-1}\tau} \leq x_{s-r_{D-2}\tau} \leq \cdots \leq x_{s-r_{1}\tau} \leq x_{s-r_{0}\tau}.$$

In order to get a unique result we set $r_i < r_{i-1}$ if $x_{s-r_{i}\tau} = x_{s-r_{i-1}\tau}$. This is justified if the values of $x_t$ have a continuous distribution, so that equal values are very unusual. Thus, for all the D! possible permutations π of order D, the associated relative frequencies can be naturally computed as the number of times a particular order sequence is found in the time series, divided by the total number of sequences.
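A minimal sketch of this pattern-counting procedure (function names are ours; ties are broken by the stable order of argsort, consistent with the convention above):

```python
import numpy as np
from itertools import permutations
from math import factorial

def bandt_pompe_pdf(x, D=6, tau=1):
    """PDF over the D! ordinal patterns of a scalar time series."""
    x = np.asarray(x, dtype=float)
    index = {pi: k for k, pi in enumerate(permutations(range(D)))}
    counts = np.zeros(factorial(D))
    for s in range(len(x) - (D - 1) * tau):
        window = x[s : s + (D - 1) * tau + 1 : tau]   # D values, delay tau
        counts[index[tuple(np.argsort(window))]] += 1
    return counts / counts.sum()

def normalized_shannon(p):
    """H_S = S[P]/S_max, with S_max = ln(D!) for the full pattern alphabet."""
    nz = p[p > 0]
    return float(-np.sum(nz * np.log(nz)) / np.log(len(p)))
```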
Regarding the selection of the other parameters, Bandt and Pompe suggested working with 4 ≤ D ≤ 6 and specifically considered an embedding delay τ = 1 in their cornerstone paper [5]. Nevertheless, it is clear that other values of τ could provide additional information. It has recently been shown that this parameter is strongly related, when relevant, to the intrinsic time scales of the system under analysis [54,55,56].
The question is, which arrangement should one regard as the "proper" ordering? The answer is straightforward in some cases, the histogram-based PDF constituting a conspicuous example. For such a procedure one first divides the interval [a, b] (with a and b the minimum and maximum values in the time series) into a finite number of nonoverlapping subintervals (bins). The division of the interval [a, b] thus provides the natural order sequence for the evaluation of the PDF gradient involved in the Fisher information measure.
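For completeness, a sketch of such a histogram-based PDF (the number of bins is an illustrative choice):

```python
import numpy as np

def histogram_pdf(x, nbins=50):
    """Amplitude-based PDF over [a, b] via nonoverlapping bins."""
    counts, _ = np.histogram(np.asarray(x, dtype=float), bins=nbins)
    return counts / counts.sum()
```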
Among the many applications of this methodology we can mention: distinguishing noise from chaos in deterministic systems [50,34]; visualization and characterization of different dynamical regimes when the system parameters vary [32,33,34]; time dynamic evolution [57]; identifying periodicities in natural time series [58]; identification of deterministic dynamics contaminated with noise [59,60]; and estimation of the intrinsic time scales of delayed systems [54,55,56] (see [6] and references therein).
In order to illustrate our method we run the simulation for 20000 ms with a time resolution of 1 ms (time window ∆T = 1 ms), which is enough to guarantee that the condition M ≫ D! is satisfied. Figures 1 A, C and E show the spike rasters for total numbers of neurons n = 1000, 900 and 800, respectively, over a time window of 1000 ms. Figures 1 B, D and F show the interspike intervals for n = 1000, 900 and 800, respectively, over the entire simulation time. Note that these plots only confirm that the neurons within the model fire with millisecond precision; we cannot gain any significant information about the dynamics from them.
Figures 2 A, C and E show the normalized dispersion $R_p = \sqrt{\mathrm{Var}(t_p)}/\langle t_p \rangle$ of the ISIs for the three different simulated systems. Note that an increasing degree of interconnectivity m implies greater data dispersion. Thus, this kind of analysis suggests that information does not saturate as the degree of interconnectivity becomes higher. More specifically, Eq. (7) implies that the Fisher information increases as the network connectivity becomes higher when considering a large population of neurons of a cortical hypercolumn with a large proportion of inhibitory neurons. This analysis, performed through a classical tool not accounting for the
"structures" present in the ISIs, is far from the expected neurophysiological behavior, which suggests that the Fisher information should reach a maximum as the number of interconnected neurons grows [26]. The expected neurophysiological result is a consequence of noise correlations, which cause the amount of information to reach a maximum as the number of interconnected neurons becomes higher [26,61].
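A minimal sketch of this dispersion estimate (the function name is ours):

```python
import numpy as np

def isi_dispersion(spike_times_ms):
    """R_p = sqrt(Var(t_p)) / <t_p> over the interspike intervals t_p."""
    t = np.diff(np.sort(np.asarray(spike_times_ms, dtype=float)))
    return float(np.sqrt(t.var()) / t.mean())
```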
In this section we use the Bandt and Pompe [5] methodology for evaluating the PDF, P, associated with the time series, considering an embedding dimension D = 6 (with τ = 1). This embedding dimension is enough to efficiently capture the causality information of the ordinal structure of the time series [5]. Figures 2 B, D and F show the informational causal plane of entropy versus complexity, H × C. Note that the MPR statistical complexity grows as the normalized entropy becomes higher, and not much information can be gained about the system dynamics.
We showed that accounting for the ordinal structures present in the ISIs and their local influence on the associated probability distribution allows us to estimate how information saturates when the number of interconnected neurons increases. These findings are in agreement with the hypotheses of Shamir [61] and Averbeck [26]. That is, as the number of interconnected neurons increases, correlations must be such that information reaches a maximum [61,26,63]. The amount of information that can be processed by the nervous system is finite, and it cannot be larger than the amount of extracted information. As data become available for very large numbers of neurons, our theoretical approach could provide an important mathematical tool to investigate how the information of the system saturates to a finite value as the system size grows. Further investigations should exhaustively account for the problem of finite-size effects in the lattice and for different external stimuli. This will allow us to investigate how quickly information saturates as the number of neurons increases, and whether or not it saturates at a level well below the amount of information available in the input.
The informational causal plane of Fisher information versus MPR statistical complexity, C × F (or of Fisher information versus Shannon entropy, H × F), quantifies the local versus global features of the dynamics of the system under study. Consequently, when considering a raw signal of spontaneous ISI activity, building the causal plane C × F (or the causal plane H × F) provides a useful tool to detect and quantify the optimal biophysical values that allow the neural network to transmit information more efficiently. On the other hand, the causal informational plane of complexity versus entropy, H × C, quantifies global versus global characteristics of the distribution, failing to provide, therefore, information on the local features of the causal information.
We have simulated a cortical hypercolumn with a large proportion of inhibitory neurons, accounting for STDP, as its interplay with conduction delays helps the spiking neurons to spontaneously self-organize into groups with patterns of time-locked activity. Thus, it produces stable firing patterns emulating spontaneous brain activity that would not be possible without this assumption. We have shown that an increasing degree of interconnectivity m implies greater data dispersion. In subsection 3.1 we explicitly showed that the variance does not reach a maximum at some given value of m as the interconnectivity becomes higher. Thus, this kind of classical analysis would suggest that information does not saturate as the degree of interconnectivity becomes higher.
Eq. (7) implies that the Fisher information increases as the network connectivity becomes higher when considering a large population of neurons. As estimations through the variance do not account for the ordinal "structures" present in the ISIs, they are far from the expected neurophysiological behavior. In other words, noise correlations cause the amount of information in a population of neurons to saturate as the number of interconnected neurons grows [61,26]. Our approach allows us to capture such behavior by introducing a simple and robust method that takes into account the time causality of the ISIs.
Future work will extend this analysis to more realistic neural networks.
Acknowledgments
References
[1] T. W. Anderson, The statistical analysis of time series, Wiley Classics Library,
J. Wiley, New York, 1994.
[4] A.-M. M. Oswald, B. Doiron, and L. Maler, Interval Coding (I): Burst Interspike Intervals as Indicators of Stimulus Intensity. J. Neurophysiol. 97 (2007) 2731–2743.
[6] M. Zanin, L. Zunino, O. A. Rosso, D. Papo. Permutation Entropy and Its Main
Biomedical and Econophysics Applications: A Review. Entropy 14 (2012) 1553–
1577.
[10] G. Ouyang, C. Dang, D. A. Richards, X. Li, Ordinal pattern based similarity
analysis for EEG recordings. Clin. Neurophysiol. 121 (2010) 694–703.
[21] E. M. Izhikevich, Which Model to Use for Cortical Spiking Neurons? IEEE
Transactions on Neural Networks 15 (2004) 1063–1070.
[22] E. M. Izhikevich, Polychronization: Computation with spikes. Neural
Computation 18 (2006) 245–282.
[24] K. Yuan, J. Y. Shih, J. A. Winer, and C. E. Schreiner, Functional Networks of Parvalbumin-Immunoreactive Neurons in Cat Auditory Cortex. The Journal of Neuroscience 31 (2011) 13333–13342.
[34] F. Olivares, A. Plastino and O. A. Rosso, Contrasting chaos with noise via local versus global information quantifiers. Phys. Lett. A 376 (2012) 1577–1583.
[40] L. Pardo, D. Morales, K. Ferentinos and K. Zografos, Discretization problems
on generalized entropies and R-divergences. Kybernetika 30 (1994) 445–460.
[42] N. Brunel, J-P. Nadal, Mutual information, Fisher information and population coding. Neural Computation 10 (1998) 1731–1757.
[53] K. Keller, M. Sinn. Ordinal Analysis of Time Series, Physica A 356 (2005),
114–120.
[55] M. C. Soriano, L. Zunino, O. A. Rosso, I. Fischer, and C. R. Mirasso, Time Scales of a Chaotic Semiconductor Laser With Optical Feedback Under the Lens of a Permutation Information Analysis. IEEE J. Quantum Electron. 47 (2011) 252–261.
[58] C. Bandt, Ordinal time series analysis. Ecol. Modell. 182 (2005) 229–238.
[62] S. Seung and H. Sompolinsky, Simple models for reading neuronal population
codes. Proc. Natl. Acad. Sci. USA 90 (1993) 10749–10753.
[69] G. Dölen, E. Osterweil, B. S. Rao, G. B. Smith, B. D. Auerbach, S. Chattarji, M. F. Bear, Correction of fragile X syndrome in mice. Neuron 56 (2007) 955–962.
[73] http://www.keithschwarz.com/interesting/code/factoradic-permutation/FactoradicPermutation
[74] T. Schreiber, Measuring information transfer. Phys. Rev. Lett. 85 (2000) 461–
464.
Fig. 1. A, C, E: spike rasters (neuron index versus time, over a 1000 ms window) for the networks with n = 1000, 900 and 800 neurons (n = ne + ni), respectively. B, D, F: the corresponding ISI distributions (number of spikes versus time, 0–15 ms).
Fig. 2. A, C, E: variance of the ISIs versus the interconnectivity degree m for the networks (n = 1000, ne = 550, ni = 450), (n = 900, ne = 500, ni = 400) and (n = 800, ne = 450, ni = 350), respectively. B, D, F: MPR complexity versus normalized Shannon entropy for the same networks, with points running from m = 2 to m = 120.
Fig. 3. Fisher information versus MPR complexity (left column) and Fisher information versus normalized Shannon entropy (right column), with points running from m = 40 to m = 120.