
Efficiency characterization of a large neuronal

network: a causal information approach

Fernando Montani a,b,∗, Emilia B. Deleglise a,b, Osvaldo A. Rosso d,c

arXiv:1304.0399v2 [q-bio.NC] 26 Jan 2014


a Departamento de Física, Facultad de Ciencias Exactas,
UNLP, Calle 49 y 115, C.C. 67 (1900), La Plata, Argentina.
b IFLYSIB, Universidad Nacional de La Plata, La Plata, Argentina
c Laboratorio de Sistemas Complejos, Facultad de Ingeniería,
Universidad de Buenos Aires (UBA).
(1063) Av. Paseo Colón 840, Ciudad Autónoma de Buenos Aires, Argentina.
d Instituto de Física, Universidade Federal de Alagoas (UFAL).
BR 104 Norte km 97, 57072-970 Maceió, Alagoas, Brazil.

∗ Corresponding author. Email address: fmontani@gmail.com (Fernando Montani).

Abstract

When inhibitory neurons constitute about 40% of a neuronal population they could have an important antinociceptive role, as they would easily regulate the level of activity of other neurons. We consider a simple network of cortical spiking neurons with axonal conduction delays and spike-timing-dependent plasticity, representative of a cortical column or hypercolumn with a large proportion of inhibitory neurons. Each neuron fires following Hodgkin-Huxley-like dynamics and is randomly interconnected with other neurons. The network dynamics is investigated by estimating the Bandt and Pompe probability distribution function associated with the interspike intervals, taking different degrees of inter-connectivity across neurons. More specifically, we take into account the fine temporal “structures” of the complex neuronal signals not just by using the probability distributions associated with the interspike intervals, but by considering much more subtle measures accounting for their causal information: the Shannon permutation entropy, the Fisher permutation information and the permutation statistical complexity. This allows us to investigate how the information of the system might saturate to a finite value as the degree of inter-connectivity across neurons grows, inferring the emergent dynamical properties of the system.

Keywords: Neural dynamics; Permutation Entropy; Complexity


PACS: 02.50.-r; 05.45.Tp; 87.19.La

Preprint submitted to Elsevier 28 January 2014


1 Introduction

The central assumption of theoretical neuroscience is that the brain computes. That is, it is generally accepted that the brain is a complex dynamical system whose state variables encode information about the outside world. Computation can thus be understood as coding plus dynamics. The detection of subtle changes in brain dynamics is therefore important for investigating the dynamics of functional interactions across neurons.

EEG and fMRI data are two important measures of brain activity. EEG reflects the brain's electrical activity, such as post-synaptic potentials, while fMRI detects changes in blood flow associated with neural activity. The action potential (or spike) is an explosion of electrical activity created by a depolarizing current. Single-unit recordings measure the spike responses of a single neuron, and constitute an important methodology for understanding mechanisms and functions within the cerebral cortex. The development of micro-electrode arrays allowed recording from multiple units at the same time, and constitutes an important method for studying functional interactions among neuronal populations. In the field of experimental neurophysiology, one of the most commonly used magnitudes for the study of neuronal network dynamics is the variance of the interspike intervals (ISIs) [1,2,3,4].

Detection of dynamic changes in complex systems is one of the most rele-


vant issues in neuroscience. Due to the occurrence of noise and artifacts in
various forms, it is often not easy to get reliable information from a series of
measurements. That is, brain activity measures are characterized by a variety
of dynamic variables which are noisy, nonstationary, nonlinear and rife with
temporal discontinuities.

Bandt and Pompe [5] have proposed a robust approach to time series analysis
on the basis of counting ordinal patterns by introducing the concept of permu-
tation entropy for quantifying the complexity of a system behind a time series.
Thus, the ordinal structures of the time series instead of the values themselves
are considered [6]. This methodology has been applied for investigating EEG
and fMRI signals [7,8,9,10,11,12,13,14,15,16,17,18].

The nonlinear dynamic effects of the spiking activity of in vivo neuronal net-
works cannot be fully captured by simple calculations of ISIs. More specifically,
estimating the variance of the ISIs may not always provide reliable informa-
tion about the system dynamics. Importantly, the application of the Bandt
and Pompe methodology [5] to spiking neural data has not been widely investigated. In the current paper we consider the effective statistical complexity
measure (SCM) introduced by Lamberti et al. [19], (also called MPR com-
plexity) that allows us to detect essential details of the dynamics of a neural
network.

One of the simplest computational models of neurons is the Simple Model of


Spiking Neurons by Izhikevich [20,21]. This model just uses a two-dimensional
(2-D) system of ordinary differential equations and four parameters to gener-
ate several different types of neural dynamics. In order to reproduce some of
the dynamic properties of a population of neurons, we use a neural network
capable of emulating several of the biological characteristics of brain oscilla-
tion patterns and spontaneous brain activity [21,22]. We consider a population
of neurons with a large number of inhibitory neurons, as this helps to rapidly phase-lock neural populations, induces synchronization at small time windows and produces stable firing patterns [23]. In particular, large numbers of inhibitory
neurons can have a crucial role in regulating the level of activity of other
neurons [24]. Thus, inhibitory neurons have an important antinociceptive role
when they constitute about 40% of the population [25].

The nervous system can extract only a finite amount of information about
sensory stimuli, and, in subsequent stages of processing, the amount of infor-
mation cannot exceed the amount extracted and therefore information should
saturate as the number of interconnected neurons becomes large enough. That is, the correlations must behave in such a way that information saturates, or reaches a maximum, as the number of interconnected neurons increases [26].

In this paper we use the ISIs of a very simple spiking model [27] of a population
of neurons to test the performance of the variance of the ISIs in comparison
with more subtle measures accounting for the nonlinear dynamic effects of the
temporal signal: the Shannon entropy [28,29], the MPR statistical complexity
[28,29] and the Fisher information measure [30,31]. We show that simple esti-
mations of the variance of ISIs do not capture the saturation properties of the
information as the number of interconnected neurons increases. Our proposal
therefore is to use the Bandt-Pompe permutation methodology for the evalu-
ation of probability distribution function (PDF) associated with a time series
[5] in order to characterize the dynamics of the spiking neural activity when
simulating a cortical network. By estimating Fisher information versus MPR
statistical complexity/Shannon entropy [32,33,34] of the ISIs signals, we show
that it is possible to quantify the optimal amount of interconnected neurons
that maximizes information transmission within a simple model that we use as
“representative of a cortical hypercolumn with large proportion of inhibitory
neurons”.

Based on the quantification of the ordinal “structures” present in the ISIs and
their local influence on the associated probability distribution, we incorporate

the time series' own temporal causality through an algorithm that is simple to implement and compute. We show that estimating the variance of
the ISIs does not help to understand the dynamics of the system, and thus
statistical measures taking into account the time causality of the signal are
needed. Our approach allows us to estimate the saturation properties of the
neuronal network, quantifying the causality of the signal, and inferring the
emergent dynamical properties of the system as the number of interconnected
neurons increases.

2 Neuronal network simulation model

Neurons fire spikes when they are near a bifurcation from resting to spiking
activity, and it is the delicate balance between noise, dynamic currents and initial conditions that determines the phase diagram of neural activity. While
there are a huge number of possible ionic mechanisms of excitability and spike
generation, there are just four bifurcation mechanisms that can result in such
a transition. That is, there are many ionic mechanisms of spike generation,
but only four generic bifurcations of equilibrium. These bifurcations divide
neurons into four categories: integrators or resonators, monostable or bistable
[27].

Bifurcation methodologies [27] allow us to accurately reproduce the biophys-


ical properties of Hodgkin-Huxley neuronal models by just taking a two-
dimensional system of ordinary differential equations and four different pa-
rameters. This model is named the simple model of spiking neurons [20], and the
system of ordinary differential equations reads:
dv/dt = 0.04 v² + 5 v + 140 − u + I                                   (1)

du/dt = a (b v − u)                                                    (2)

with the auxiliary after-spike resetting

if v ≥ +30 mV, then  v ← c  and  u ← u + d.                           (3)

Here, v is the membrane potential of the neuron and u is a membrane


recovery variable, which accounts for the activation of K+ ionic currents and
inactivation of Na+ ionic currents, and gives negative feedback to v. Thus we
are just considering a two-dimensional (2-D) system of ordinary differential
equations of two variables u and v, and all the known types of neurons can
be reproduced by taking different values of the four parameters a, b, c and d.
After the spike reaches its apex at +30 mV (not to be confused with the firing

threshold) the membrane voltage and the recovery variable are reset according
to Eq. (3). The variable I accounts for the inputs to the neurons [20].

Summarizing, each neuron can be described by a simple spiking model that


allows us to reproduce several of the most fundamental neurocomputational
features of biological neurons [21]. Here, we consider a network simulation
model in which the number of interconnected neurons is a parameter under
control, and each neuron can be interconnected randomly with two or more
neurons [20,21,22]. The inhibitory inputs hyperpolarize the potential and move
it away from the threshold. In contrast, excitatory inputs depolarize the mem-
brane potential (i.e., they bring it closer to the “firing threshold”).

Regular spiking (RS) [20] neurons are the major class of excitatory neurons
(b > 0). RS neurons are the most typical neurons in the cortex. In our current
simple model for the excitatory neurons we take a = 0.02 and b = 0.2 with
fixed values, while c = −65 + 15 · (rd_e)² and d = 8 − 6 · (rd_e)², where rd_e is
a random vector (with entries between zero and one) of the size of the number of excitatory neurons.

Fast spiking (FS) neurons are the ones that fire periodic trains of action po-
tentials without any adaptation. FS neurons are inhibitory neurons that have
a = 0.1 and b = 0.25 − 0.05 · rd_i, where rd_i is a random vector (with entries between zero and one) of the size of the number of inhibitory neurons. Furthermore,
c = −65 mV and d = 2. Positive synaptic weights are taken for excitatory
neurons and negative, for the inhibitory ones. This simple model reproduces
some biologically plausible phenomena such as patterns of spontaneous corti-
cal activity including brain wave like oscillations.
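As an illustration of how this parameter heterogeneity can be drawn, the sketch below builds the (a, b, c, d) vectors for the excitatory (RS) and inhibitory (FS) populations following the prescriptions above; the uniform draws r_e and r_i play the role of the random vectors rd_e and rd_i, and the population sizes are those of the first network case described below.

import numpy as np

rng = np.random.default_rng(0)
ne, ni = 550, 450                 # excitatory (RS) and inhibitory (FS) counts

r_e = rng.random(ne)              # plays the role of rd_e
r_i = rng.random(ni)              # plays the role of rd_i

# Excitatory regular-spiking (RS) parameters: a, b fixed; c, d heterogeneous
a_e, b_e = 0.02 * np.ones(ne), 0.20 * np.ones(ne)
c_e, d_e = -65.0 + 15.0 * r_e**2, 8.0 - 6.0 * r_e**2

# Inhibitory fast-spiking (FS) parameters
a_i, b_i = 0.10 * np.ones(ni), 0.25 - 0.05 * r_i
c_i, d_i = -65.0 * np.ones(ni), 2.0 * np.ones(ni)

# One population of n = ne + ni neurons: excitatory first, then inhibitory
a, b = np.concatenate([a_e, a_i]), np.concatenate([b_e, b_i])
c, d = np.concatenate([c_e, c_i]), np.concatenate([d_e, d_i])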

The network simulation takes into account cortical spiking neurons with ax-
onal conduction delays and spike-timing-dependent plasticity (STDP). The
magnitude of the synaptic weight between pre- and postsynaptic neurons de-
pends on the timing of the spikes according to the STDP rules. That is, the
weight of the synaptic connection from the pre- to postsynaptic neurons grows
as 0.12 · exp(t/t0) if the postsynaptic neuron fires after the presynaptic spike, and if
the order is reversed, it decreases as 0.1 · exp (−t/t0 ), where t0 = 20 ms [22].
Importantly, the interplay between conduction delays and STDP helps the
spiking neurons to produce stable firing patterns that would not be possible
without these assumptions. Each neuron in the network is described by the
simple model of spiking neurons of [20], which has been described above in
Eq. (1) to Eq. (3).
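A minimal sketch of the STDP rule just described, with the amplitudes 0.12 and 0.1 and t0 = 20 ms taken from the text; the sign convention for the pre/post time difference and the exact spike-pairing scheme are our own simplifying assumptions.

import numpy as np

T0 = 20.0   # STDP time constant t0 (ms)

def stdp_delta_w(t_pre, t_post):
    """Synaptic weight change for a single pre/post spike pair (times in ms).

    Potentiation when the postsynaptic spike follows the presynaptic one,
    depression when the order is reversed; both decay with the time lag.
    """
    dt = t_post - t_pre
    if dt >= 0:
        return 0.12 * np.exp(-dt / T0)    # post after pre: weight grows
    return -0.10 * np.exp(dt / T0)        # pre after post: weight decreases

print(stdp_delta_w(100.0, 105.0))   # ~ +0.093 (potentiation)
print(stdp_delta_w(105.0, 100.0))   # ~ -0.078 (depression)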

In this simple model we choose to account for STDP because its interplay
with conduction delays helps the spiking neurons to spontaneously self-
organize into groups with patterns of time-locked activity. Thus, the STDP
produces stable firing patterns emulating spontaneous brain activity that

would not be possible without this assumption. Although the patterns are
random, reflecting connectivity within the cortex, one could implement so-
phisticated anatomy and the two sparse networks shown in this section are
representative of a cortical hypercolumn [22].

Since we cannot simulate an infinite-dimensional system on a finite-dimensional


lattice, we choose a large network with a finite-dimensional approximation tak-
ing a time resolution of 1 ms. The main idea of our methodology is to use
this simple model to emulate the spontaneous activity when a large number of
neurons are taken in order to show the advantage of using causal quantifiers
when considering the ISIs.

To illustrate the simple model that we are using, (a) as a first network case we
consider a network that consists of n = 1000 neurons, with ne = 550 being of
excitatory regular spiking (RS) type, and the remaining ni = 450 of inhibitory
fast spiking (FS) type (n = ne + ni ). (b) As a second network, we take a
network that consists of n = 900 neurons, with ne = 500 being of excitatory
(RS) type, and the remaining ni = 400 of inhibitory (FS) type (n = ne + ni );
(c) Finally, as a third network, we consider a network that consists of n = 800
neurons, with ne = 450 being of excitatory (RS) type, and the remaining
ni = 350 of inhibitory (FS) type (n = ne + ni ). In all the cases considered,
each excitatory regular spiking type neuron, as well as each inhibitory fast
spiking neuron, is connected to m = 2, 4, 6, 8, 10, 20, 30, 40, 60, 80, 100 and
120 random neurons, so that the probability of connection is m/n. In the case
of inhibitory FS neurons, the connections are with excitatory neurons only. In
all cases synaptic connections among neurons have fixed conduction delays,
which are random integers between 1 ms and 10 ms.
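The connectivity just described can be generated along the following lines: each neuron projects to m randomly chosen targets (inhibitory neurons project to excitatory neurons only), weights are positive for excitatory and negative for inhibitory sources, and every connection carries an integer conduction delay between 1 and 10 ms. The weight magnitudes used here are placeholders for illustration.

import numpy as np

def build_network(ne=550, ni=450, m=40, seed=0):
    """Random connectivity with m postsynaptic targets per neuron.

    Neurons 0..ne-1 are excitatory, ne..ne+ni-1 are inhibitory. Returns
    (targets, weights, delays), each of shape (ne + ni, m).
    """
    rng = np.random.default_rng(seed)
    n = ne + ni
    targets = np.empty((n, m), dtype=int)
    weights = np.empty((n, m))
    delays = rng.integers(1, 11, size=(n, m))      # conduction delays: 1..10 ms
    for i in range(n):
        if i < ne:                                  # excitatory: any neuron as target
            targets[i] = rng.choice(n, size=m, replace=False)
            weights[i] = 6.0                        # placeholder positive weight
        else:                                       # inhibitory: excitatory targets only
            targets[i] = rng.choice(ne, size=m, replace=False)
            weights[i] = -5.0                       # placeholder negative weight
    return targets, weights, delays

targets, weights, delays = build_network(ne=550, ni=450, m=40)
print(targets.shape, delays.min(), delays.max())   # (1000, 40) 1 10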

In the next sections we show that understanding how neural information sat-
urates as the number of neurons increases requires the development of an
appropriate mathematical framework accounting for the ordinal “structures”
present in the time series. We will also show how to quantify the causal in-
formation of the ISIs. For the sake of simplicity, in this paper we will study a
simple raw signal of spontaneous neural activity. We will investigate the effect
of increasing the network connectivity of a simulated cortical hypercolumn,
with a large percentage of inhibitory neurons, by quantifying the degree of cor-
relation with and without considering the causality information present in the
ISIs.

3 Information Theory quantifiers

3.1 Shannon Entropy, Fisher Information Measure and MPR Statistical Com-
plexity

Sequences of measurements (or observations) constitute the basic elements


for the study of natural phenomena. In particular, from these sequences, com-
monly called time series, one should judiciously extract information on the
dynamical systems under study. We can define an Information Theory quanti-
fier as a measure that is able to characterize some property of the probability
distribution function associated with these time series of a given raw signal
(i.e., ISIs). Entropy, regarded as a measure of uncertainty, is the most paradig-
matic example of these quantifiers.

Given a time series X (t) ≡ {xt ; t = 1, · · · , M}, a set of M measures of the


observable X, and the associated PDF given by P ≡ {p_j ; j = 1, · · · , N} with Σ_{j=1}^{N} p_j = 1 and N the number of possible states of the system under study, Shannon's logarithmic information measure [35] is defined by

S[P] = − Σ_{j=1}^{N} p_j ln(p_j) .                                     (4)

This functional is equal to zero when we are able to predict with full certainty
which of the possible outcomes j, whose probabilities are given by pj , will ac-
tually take place. Our knowledge of the underlying process, described by the
probability distribution, is maximal in this instance. In contrast, this knowl-
edge is commonly minimal for a uniform distribution Pe = {pj = 1/N, ∀j =
1, · · · , N}.
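A direct translation of Eq. (4), together with the normalized entropy H_S = S/S_max that will be used in Eq. (10) below; treating zero-probability states as contributing nothing to the sum is our own numerical convention.

import numpy as np

def shannon_entropy(p):
    """S[P] = -sum_j p_j ln(p_j), Eq. (4); zero-probability states contribute 0."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz]))

def normalized_entropy(p):
    """H_S[P] = S[P] / ln(N), cf. Eq. (10)."""
    return shannon_entropy(p) / np.log(len(p))

print(normalized_entropy(np.ones(6) / 6))        # 1.0: uniform PDF, maximal ignorance
print(normalized_entropy([1.0, 0, 0, 0, 0, 0]))  # 0.0: fully predictable outcome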

The Shannon entropy S is a measure of “global character” that is not too


sensitive to strong changes in the PDF taking place in a small region. Such is
not the case with the Fisher information measure [36,37]
F[f] = ∫ |∇f(x)|² / f(x) dx ,                                          (5)

which constitutes a measure of the gradient content of the distribution f (con-


tinuous PDF), thus being quite sensitive even to tiny localized perturbations.

The Fisher information measure can be variously interpreted as a measure of


the ability to estimate a parameter, as the amount of information that can be
extracted from a set of measurements, and also as a measure of the state of
disorder of a system or phenomenon [37,38], its most important property being
the so-called Cramer-Rao bound. It is important to remark that the gradient

operator significantly influences the contribution of minute local f -variations
to the Fisher information value, so that the quantifier is called a “local” one.
Note that Shannon entropy decreases for a skewed distribution, while Fisher
information increases in such a case. Local sensitivity is useful in scenarios
whose description necessitates an appeal to a notion of “order” [32,33,34].
The concomitant problem of loss of information due to the discretization has
been thoroughly studied (see, for instance, [39,40,41] and references therein)
and, in particular, it entails the loss of Fisher’s shift-invariance, which is of no
importance for our present purposes.

Let us now consider the activity of a population of neurons represented by an


ℵ-dimensional vector x_ℵ = (x_1 ; x_2 ; · · · ; x_ℵ), where the randomness of the spiking activity produces an inherent inaccuracy due to the trial-to-trial fluctuations. Fisher information F provides a measure of the encoding accuracy

1/F = ⟨ [ Σ_{i=1}^{ℵ} ∂ ln P(n_sp | x, ∆T) / ∂x_i ]² ⟩^{−1}            (6)

where its inverse is the Cramer-Rao lower bound E[ε²] ≥ 1/F [42,43,31]. Note
that ε = ε_1 + . . . + ε_ℵ is the square error in a single trial, and n_sp is the number
of spikes. Considering the general case in which the probability distribution is
a function of the mean firing rate and the time windows, then these conditions
are sufficient to show that [43]:

F = η σ^{ℵ−2} K_φ(f, ∆T, ℵ)                                            (7)

where σ is the variance (or tuning width that can take any positive real value)
and Kφ (f, ∆T, ℵ) denotes the dependence on the ℵ-dimensional space of the
encoded variable, and η is a density factor proportional to the number of
inactive neurons.

Fisher information should saturate as the number of neurons, or the network


interconnectivity m, becomes very large [26]. Note that one would expect,
therefore, that the variance should not diverge as the interconnectivity across
neurons increases. That is, the variance should reach a maximum for some
given value of m as the interconnectivity becomes higher.

For Fisher information measure computation (discrete PDF) we follow the


proposal of Ferri and coworkers [30] (among others)

F[P] = (1/4) Σ_{i=1}^{N−1} (p_{i+1} − p_i)² / [(p_{i+1} + p_i)/2] .    (8)
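A short sketch of the discrete Fisher information measure as written in Eq. (8); skipping pairs of consecutive bins whose probabilities are both zero is our own numerical convention.

import numpy as np

def fisher_information(p):
    """Discrete Fisher information measure, Eq. (8); values lie in [0, 1]."""
    p = np.asarray(p, dtype=float)
    num = (p[1:] - p[:-1]) ** 2
    den = (p[1:] + p[:-1]) / 2.0
    ok = den > 0                            # skip pairs of empty bins
    return 0.25 * np.sum(num[ok] / den[ok])

print(fisher_information(np.ones(6) / 6))        # 0.0: flat, disordered PDF
print(fisher_information([0, 0, 1.0, 0, 0, 0]))  # 1.0: narrow, ordered PDF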

If our system is in a very ordered state and thus is represented by a very

narrow PDF, we have a Shannon entropy S ∼ 0 and a Fisher information
measure F ∼ Fmax . On the other hand, when the system under study lies
in a very disordered state one gets an almost flat PDF and S ∼ Smax , while
F ∼ 0. Of course, Smax and Fmax are, respectively, the maximum values for
the Shannon entropy and Fisher information measure. One can state that the
general behavior of the Fisher information measure is opposite to that of the
Shannon entropy [44].

It is well known, however, that the ordinal structures present in a process are not
quantified by randomness measures and, consequently, measures of statistical
or structural complexity are necessary for a better understanding (character-
ization) of the system dynamics represented by their time series [45]. The
opposite extremes of perfect order (i.e., a periodic sequence) and maximal
randomness (i.e., a fair coin toss) are very simple to describe because they do
not have any structure. The complexity should be zero in these cases. At a
given distance from these extremes, a wide range of possible ordinal structures
exists. The complexity measure allows one to quantify this array of behavior
[46]. We consider the MPR complexity [19] as it is able to quantify critical details
of dynamical processes underlying the data set.

Based on the seminal notion advanced by López-Ruiz et al. [47], this statistical
complexity measure is defined through the product

CJS [P ] = QJ [P, Pe ] · HS [P ] (9)

of the normalized Shannon entropy

HS [P ] = S[P ]/Smax (10)

with Smax = S[Pe ] = ln N, (0 ≤ HS ≤ 1) and the disequilibrium QJ defined


in terms of the Jensen-Shannon divergence. That is,

Q_J[P, P_e] = Q_0 · J[P, P_e]                                          (11)

with
J [P, Pe ] = S[(P + Pe )/2] − S[P ]/2 − S[Pe ]/2 (12)
the above-mentioned Jensen-Shannon divergence, and Q_0 a normalization constant (0 ≤ Q_J ≤ 1) equal to the inverse of the maximum possible value of J[P, P_e]. This value is obtained when one of the components of P , say p_m ,
is equal to one and the remaining pj are equal to zero. The Jensen-Shannon
divergence, which quantifies the difference between two (or more) probability
distributions, is especially useful to compare the symbolic composition be-
tween different sequences [48]. Note that the above introduced SCM depends
on two different probability distributions, the one associated with the system
under analysis, P , and the uniform distribution, Pe . Furthermore, it was shown
that for a given value of HS , the range of possible CJS values varies between

a minimum Cmin and a maximum Cmax , restricting the possible values of the
SCM in a given complexity-entropy plane [49]. Thus, it is clear that impor-
tant additional information related to the correlational structure between the
components of the physical system is provided by evaluating the statistical
complexity measure.
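The sketch below assembles the MPR statistical complexity from Eqs. (9)-(12): the Jensen-Shannon divergence between P and the uniform distribution P_e, normalized by its maximum value (obtained here numerically from a delta-like PDF), times the normalized Shannon entropy. It is a minimal self-contained illustration of the definitions above.

import numpy as np

def shannon(p):
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz]))

def mpr_complexity(p):
    """MPR statistical complexity C_JS[P] = Q_J[P, P_e] * H_S[P], Eqs. (9)-(12)."""
    p = np.asarray(p, dtype=float)
    n = len(p)
    pe = np.ones(n) / n                                             # uniform distribution P_e
    hs = shannon(p) / np.log(n)                                     # Eq. (10)
    js = shannon((p + pe) / 2) - shannon(p) / 2 - shannon(pe) / 2   # Eq. (12)
    delta = np.zeros(n); delta[0] = 1.0                             # extremal PDF giving J_max
    js_max = shannon((delta + pe) / 2) - shannon(pe) / 2
    return (js / js_max) * hs                                       # Eq. (9), Q_0 = 1 / J_max

print(mpr_complexity(np.ones(6) / 6))                    # 0: perfect disorder
print(mpr_complexity([1.0, 0, 0, 0, 0, 0]))              # 0: perfect order
print(mpr_complexity([0.5, 0.2, 0.1, 0.1, 0.05, 0.05]))  # > 0: intermediate structure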

3.2 The Bandt-Pompe approach to the PDF determination

The study and characterization of time series X (t) by recourse to Information


Theory tools assumes that the underlying PDF is given a priori. In practice, part of the concomitant analysis involves extracting the PDF from the data, and there is no univocal procedure with which everyone agrees. Almost ten
years ago Bandt and Pompe (BP) introduced a successful methodology for
the evaluation of the PDF associated with scalar time series data using a
symbolization technique [5]. For a didactic description of the approach, as
well as, its main biomedical and econophysics applications, see [6].

The pertinent symbolic data are (i) created by ranking the values of the se-
ries and (ii) defined by reordering the embedded data in ascending order,
which is tantamount to a phase space reconstruction with embedding dimen-
sion (pattern length) D and time lag τ . In this way it is possible to quantify
the diversity of the ordering symbols (patterns) derived from a scalar time
series. Note that the appropriate symbol sequence arises naturally from the
time series and no model-based assumptions are needed. In fact, the necessary
“partitions” are devised by comparing the order of neighboring relative val-
ues rather than by apportioning amplitudes according to different levels. This
technique, as opposed to most of those in current practice, takes into account
the temporal structure of the time series generated by the physical process
under study. This feature allows us to uncover important details concerning
the ordinal structure of the time series [34,50,51] and can also yield informa-
tion about temporal correlation [28,29]. It is clear that this type of analysis
of time series entails losing some details of the original series’ amplitude in-
formation. Nevertheless, by just referring to the series’ intrinsic structure, a
meaningful difficulty reduction has indeed been achieved by Bandt and Pompe
with regard to the description of complex systems. The symbolic representa-
tion of time series by recourse to a comparison of consecutive (τ = 1) or
nonconsecutive (τ > 1) values allows for an accurate empirical reconstruction
of the underlying phase-space, even in the presence of weak (observational
and dynamic) noise [5]. Furthermore, the ordinal patterns associated with
the PDF are invariant with respect to nonlinear monotonous transformations.
Accordingly, nonlinear drifts or scaling artificially introduced by a measure-
ment device will not modify the estimation of quantifiers, a nice property if
one deals with experimental data (see, e.g., [52]). These advantages make the

Bandt and Pompe methodology more convenient than conventional methods
based on range partitioning (i.e., PDF based on histograms).

Additional advantages of the method reside in (i) its simplicity (we need only two
parameters: the pattern length/embedding dimension D and the embedding
delay τ) and (ii) the extremely fast nature of the pertinent calculation process
[53]. The BP methodology can be applied not only to time series representative
of low dimensional dynamical systems, but also to any type of time series
(regular, chaotic, noisy, or reality based). In fact, the existence of an attractor
in the D-dimensional phase space is not assumed. The only condition for
the applicability of the Bandt-Pompe methodology is a very weak stationary
assumption (that is, for k ≤ D, the probability for xt < xt+k should not
depend on t [5]).

To use the Bandt and Pompe [5] methodology for evaluating the PDF, P ,
associated with the time series (dynamical system) under study, one starts by
considering partitions of the pertinent D-dimensional space that will hopefully
“reveal” relevant details of the ordinal structure of a given one-dimensional
time series X (t) = {xt ; t = 1, · · · , M} with embedding dimension D > 1 (D ∈
N) and embedding time delay τ (τ ∈ N). We are interested in “ordinal patterns” of order (length) D generated by (s) ↦ (x_{s−(D−1)τ}, x_{s−(D−2)τ}, · · · , x_{s−τ}, x_s), which assigns to each time s the D-dimensional vector of values at times s, s − τ, · · · , s − (D − 1)τ. Clearly, the greater the value of D, the more information on the past is incorporated into our vectors. By “ordinal pattern” related to the time (s) we mean the permutation π = (r_0, r_1, · · · , r_{D−1}) of [0, 1, · · · , D − 1] defined by x_{s−r_{D−1}τ} ≤ x_{s−r_{D−2}τ} ≤ · · · ≤ x_{s−r_1 τ} ≤ x_{s−r_0 τ}. In order to get a unique result we set r_i < r_{i−1} if x_{s−r_i} = x_{s−r_{i−1}}. This is justi-
fied if the values of xt have a continuous distribution so that equal values are
very unusual. Thus, for all the D! possible permutations π of order D, their
associated relative frequencies can be naturally computed by the number of
times this particular order sequence is found in the time series divided by the
total number of sequences.
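A compact Python sketch of the procedure just described: each window of D values (with delay τ) is mapped to the permutation that sorts it, and the relative frequencies of the D! possible permutations form the Bandt-Pompe PDF. Breaking ties by order of occurrence (via a stable sort) is a simplification of the rule stated above, and the white-noise input series is only a stand-in for the ISI sequences analyzed in this work.

import numpy as np
from itertools import permutations
from collections import Counter

def bandt_pompe_pdf(x, D=6, tau=1):
    """Ordinal-pattern (Bandt-Pompe) PDF of a 1-D series x, order D, delay tau."""
    x = np.asarray(x, dtype=float)
    n_windows = len(x) - (D - 1) * tau
    counts = Counter()
    for s in range(n_windows):
        window = x[s : s + D * tau : tau]                   # D values spaced by tau
        pattern = tuple(np.argsort(window, kind="stable").tolist())
        counts[pattern] += 1
    # Relative frequencies over all D! patterns (unobserved patterns get 0)
    return np.array([counts[perm] / n_windows for perm in permutations(range(D))])

rng = np.random.default_rng(1)
p = bandt_pompe_pdf(rng.random(20000), D=6, tau=1)
print(len(p), round(p.sum(), 6))    # 720 patterns, probabilities summing to 1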

Consequently, it is possible to quantify the diversity of the ordering symbols


(patterns of length D) derived from a scalar time series, by evaluating the so-
called permutation entropy, the permutation statistical complexity and Fisher
permutation information measure. Of course, the embedding dimension D
plays an important role in the evaluation of the appropriate probability dis-
tribution because D determines the number of accessible states D! and also
conditions the minimum acceptable length M ≫ D! of the time series that
one needs in order to work with reliable statistics [50].

Regarding the selection of the other parameters, Bandt and Pompe sug-
gested working with 4 ≤ D ≤ 6 and specifically considered an embedding
delay τ = 1 in their cornerstone paper [5]. Nevertheless, it is clear that other

values of τ could provide additional information. It has been recently shown
that this parameter is strongly related, if it is relevant, to the intrinsic time
scales of the system under analysis [54,55,56].

The Bandt and Pompe proposal for associating probability distributions to


time series (of an underlying symbolic nature) constitutes a significant advance in the study of nonlinear dynamical systems [5]. The method provides a univocal prescription for ordinary, global entropic quantifiers of the Shannon kind. However, as was shown by Rosso and coworkers [33,34], ambiguities arise
in applying the Bandt and Pompe technique with reference to the permutation
of ordinal patterns. This happens if one wishes to employ the BP-probability
density to construct local entropic quantifiers, like Fisher information measure,
that would characterize time series generated by nonlinear dynamical systems.
The local sensitivity of Fisher information measure for discrete-PDFs is re-
flected in the fact that the specific “i-ordering” of the discrete values pi must
be seriously taken into account in evaluating the sum in Eq. (8). The pertinent
numerator can be regarded as a kind of “distance” between two contiguous
probabilities. Thus, a different ordering of the pertinent summands would lead
to a different Fisher information value. In fact, if we have a discrete PDF given
by P = {pi , i = 1, · · · , N} we will have N! possibilities.

The question is, which is the arrangement that one could regard as the “proper”
ordering? The answer is straightforward in some cases, histogram-based PDF
constituting a conspicuous example. For such a procedure one first divides the
interval [a, b] (with a and b the minimum and maximum values in the time
series) into a finite number on nonoverlapping sub-intervals (bins). Thus, the
division procedure of the interval [a, b] provides the natural order-sequence for
the evaluation of the PDF gradient involved in Fisher information measure.

3.3 Causal information planes

The Information Theory quantifiers H_S, F and C_JS, evaluated using the Bandt and Pompe PDF, allow us to define three causal information planes: H × C, H × F and C × F. The causality Shannon–complexity
plane, H × C, is based only on global characteristics of the associated time se-
ries Bandt and Pompe PDF (both quantities are defined in terms of Shannon
entropies), while the causality Shannon-Fisher plane, H ×F , and the causality
complexity–Fisher, C × F , are based on global and local characteristics of the
PDF. In the case of H × C the variation range is [0, 1] × [Cmin , Cmax ] (with
Cmin and Cmax the minimum and maximum statistical complexity values, re-
spectively, for a given HS value [49]), while in the causality planes H × F and
C × F the range is [0, 1] × [0, 1] in both cases. These causal information planes
have been profitably used to separate and differentiate amongst chaotic and

deterministic systems [50,34]; visualization and characterization of different
dynamical regimes when the system parameters vary [32,33,34]; time dynamic
evolution [57]; identifying periodicities in natural time series [58]; identification
of deterministic dynamics contaminated with noise [59,60]; and estimating intrinsic time scales of delayed systems [54,55,56]; among other applications (see
[6] and references therein).

4 Results and Discussion

The cerebral cortex contains many inhibitory neurons of distinct types. Under-


standing their connectivity and dynamics within the cortex is essential to
gain more knowledge of cortical information processing. Recent studies em-
phasize the essential role inhibitory neurons have in the development of proper
circuitry in the cortex [25]. In particular, when inhibitory interneurons consti-
tute about 40% of neurons they have an important antinociceptive role, and
are much more likely to have a role in regulating the level of activity of other
neurons [25]. We choose a network with a large proportion of inhibitory neurons
(about 40% of the neurons) as this helps the spiking neurons to spontaneously
self-organize into groups with patterns of time-locked activity, producing sta-
ble firing patterns. The objective is to use an information-theory-based ap-
proach to investigate the ordinal “structures” present in the time series of
the ISIs of a model that emulates some of the characteristics of a cortical
hypercolumn, considering a large proportion of inhibitory neurons.

In order to illustrate our method we run the simulation for 20000 ms with
a time resolution of 1 ms (time window ∆T = 1 ms), which is enough to
guarantee that the condition M ≫ D! is satisfied. Figures 1 A, C and E show
the spike rasters considered for the total number of neurons N = 1000, 900,
and 800 respectively, for a time window of 1000 ms. Figures 1 B, D and F show
the interspike intervals considering neuronal total numbers N = 1000, 900
and 800 respectively, for the entire time we ran the simulation. Note that
the previous plots only confirm that the neurons within the model fire with
millisecond precision and we cannot gain any significant information about
the dynamics.
Figures 2 A, C and E show the variance R_p = √Var(t_p) / ⟨t_p⟩ of the ISIs for
three different simulated systems. Note that an increasing degree of interconnectivity m implies greater data dispersion. Thus, this kind of analysis suggests
that information does not saturate as the degree of interconnectivity becomes
higher. More specifically, Eq. (7) implies that Fisher information increases as
of neurons of a cortical hypercolumn with large proportion of inhibitory neu-
rons. This analysis performed through a classical tool not accounting for the

13
“structures” present in ISIs is far from the expected neurophysiologic behav-
ior, which suggests that Fisher information should reach a maximum as the
number of interconnected neurons grows [26]. The expected neurophysiologi-
cal result is a consequence of the noise correlation, which causes the amount
of information to have a maximum as the number of interconnected neurons becomes
higher [26,61].
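For completeness, the dispersion measure R_p used in Figure 2 can be obtained directly from the ISI sequence; the jittered periodic spike train below is only a stand-in for the output of the network simulation.

import numpy as np

def isi_dispersion(spike_times_ms):
    """R_p = sqrt(Var(t_p)) / <t_p>, computed over the interspike intervals t_p."""
    isi = np.diff(np.sort(np.asarray(spike_times_ms, dtype=float)))
    return np.sqrt(np.var(isi)) / np.mean(isi)

rng = np.random.default_rng(2)
spikes = np.cumsum(50.0 + 10.0 * rng.standard_normal(400))   # ~50 ms mean ISI
print(isi_dispersion(spikes))                                # ~0.2 for this train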

In this section we use the Bandt and Pompe [5] methodology for evaluating
the PDF, P , associated with the time series, considering an embedding dimension D = 6 (with τ = 1). This embedding dimension is enough to
efficiently capture the causal information of the ordinal structure of the time
series [5]. Figures 2 B, D and F show the informational causal plane of entropy
versus complexity, H × C. Note that the MPR statistical complexity grows
as the normalized entropy becomes higher and not much information can be
gained about the system dynamics.

Figures 3 A, C and E show Fisher permutation information, Eq.(8), versus the


permutation MPR statistical complexity Eq.(11), i.e., the causal information
plane F × C, for the same biophysical parameters used above. Notice that
Figures 3 A, C and E present a maximum at m = 40, m = 30 and m = 20, 30,
respectively, for n = 1000, n = 900 and n = 800. A similar behavior can be
observed in Figures 3 B, D and F, which show Fisher permutation information,
Eq.(8), versus permutation entropy, Eq.(10), i.e., the causal plane F × H.
Our estimations using information quantifiers in the causal plane C × F (or
causal plane H × F ) are in agreement with the idea that Fisher information
saturates as the size of the system grows. Overall, our findings show that Fisher
information reaches a maximum as the number of interconnected neurons grows.

If we considered instead the case of uncorrelated neurons, then Fisher information would increase linearly with the number of neurons in the population
[62]. When accounting for the efficiency of coding information in the second-order statistics of the population responses, it has been theoretically argued
that the Fisher information of such a system would grow linearly with its size [61].
Figure 2 A, C, and E show that the variance grows as the number of intercon-
nected neurons becomes higher. Thus Eq. (7) implies that Fisher information increases as well, provided the dimension of the encoded variable is strictly larger than
two [43]. This would lead to a very odd and unexpected neurophysiological
behavior since, when evaluating the computational capacity of cortical populations
in sensory areas, the total information should saturate at a level below the
amount of information available in the input [26]. Importantly, as positive cor-
relations vary smoothly with space they drastically suppress the information
in the mean responses, and Fisher information of the system should saturate
to a finite value as the system size grows [61,26].

We showed that accounting for the ordinal structures present in the ISIs and

their local influence on the associated probability distribution allows us to es-
timate how information saturates when the number of interconnected neurons
increases. These findings are in agreement with the hypothesis of Shamir [61]
and Averbeck [26]. That is, as the number of interconnected neurons increases,
correlations must be such that information reaches a maximum [61,26,63]. The
amount of information that can be processed by the nervous system is finite,
and it cannot be larger than the amount of extracted information. As data
become available for a very large number of neurons, our theoretical approach
could provide an important mathematical tool to investigate how information
of the system saturates to a finite value as the system size grows. Further in-
vestigations should exhaustively account for the problem of finite size effects
in the lattice and different external stimuli. This will allow us to investigate
how quickly information saturates as the number of neurons increases, and
whether it saturates or not at a level well below the amount of information
available in the input.

Understanding of the mechanisms of cortical processing of sensory informa-


tion requires the investigation of the dynamics between different excitatory
and inhibitory cortical circuitry [64]. When large numbers of inhibitory neu-
rons are considered, typically about 40% of the population, they are likely to
have differing roles in inhibiting pain or itch [65]. Here, we use an alterna-
tive methodology when considering a large population of neurons, we showed
that by building the phase space of Fisher information versus MPR statistical
complexity/permutation entropy it is possible to characterize the dynamics of
the system, and to find the optimal values that would maximize information
transmission within a neuronal network. Our approach could be an useful tool
to investigate how the loss of inhibitory neurons marks the dynamics of autism
in mouse, and to study the essential role that inhibitory neurons could play
on the development of proper circuitry in the cortex [66,67,68,69,70]. More-
over it can be of help to examine neuronal avalanches as the balance between
excitation and inhibition controls its temporal dynamics [71].

Shannon developed Information Theory as a measure of the uncertainty in a message, essentially inventing what became known as the dominant form of “information theory” [72]. If we consider a time series X(t) ≡ {x_t} in which x_t attains a finite number D of values, the classical Shannon entropy measures the mean conditional uncertainty of the future x_{t+1} given the whole past x_{t−D}, . . . , x_t. Thus, we have 0 ≤ S ≤ ln(D), with S = 0 if the series is perfectly predictable from the past and S = ln(D) if and only if all values are
independent and uniformly distributed. Fisher information, F [P ], constitutes
a very useful local measure to detect changes in the dynamic behavior. By
applying causal Fisher information it is possible to quantify changes in the
dynamics of a system, thanks to the sensitivity of this measure to alterations
in the associated probability distribution.

The informational causal plane of Fisher information versus MPR statistical
complexity, C × F (or Fisher versus Shannon entropy H × F ), quantifies the
local versus global features of the dynamics in the system under study. Con-
sequently, when considering a raw signal of spontaneous ISI activity, building
the causal plane C × F (or causal plane H × F ) provides us with a useful
tool to detect and quantify the optimal biophysical values that allow the neu-
ral network to transmit information more efficiently. On the other hand, the
causal informational plane complexity versus entropy , H ×C, quantifies global
versus global characteristics of the distribution failing to prove, therefore, in-
formation of the local features of the causal information.

Regarding the potential limitations of the methodology, note that in order


to avoid the bias deviation problem and to have reliable statistics it has been
established that the condition M ≫ D! must be satisfied [5,50]. We have
chosen an embedding delay τ = 1, but other values of τ might also provide
further information [54,55,56]. The Bandt and Pompe procedure does not
specify which order for generating ordinal patterns should be privileged. That
is, how to specify the ordering of the index series i and, consequently, for a pattern
length D, we have (D!)! possible i-sequences. We face therefore an ambiguity
when local information measures are evaluated. The issue does not affect at all
the evaluation of global entropic quantifiers (like Shannon entropy). Different
orders would lead, though, to different “local” information contents. As in the
case of the PDF-histogram, we can reduce drastically the number of available
(D!)! possibilities if we proceed to form patterns of length D starting from
those of length D − 1. However, some lack of precise definition remains in
the assignation of the pattern indices i. In our current paper, we used the
lexicographic ordering given by the algorithm of Lehmer [73], amongst other
possibilities, because it provides the best distinction between different dynamics in
the Fisher versus Shannon plane (see [33,34]).

5 Conclusions and perspectives

Recording the spiking activity of a very large number of neurons is extremely


difficult, and no one has yet recorded the activity of 1000 (or even 500)
neurons simultaneously. Consequently, when considering a large number of
neurons the computational models developed by Izhikevich are the relevant
ones [20,21,22,27]. Our paper presents a methodology to characterize the ef-
ficiency of large ensembles of neurons. It is important to point out that the
interplay between conduction delays and STDP helps the spiking neurons to
produce stable firing patterns that would not be possible without these as-
sumptions. Each neuron in the network is described by the simple model of
spiking neurons of [20] which has been described above in Eq. (1) to Eq. (3).
In this simple model, with a large proportion of inhibitory neurons, we choose to account for the STDP as its interplay with conduction delays helps the
spiking neurons to spontaneously self-organize into groups with patterns of
time-locked activity. Thus, it produces stable firing patterns emulating spon-
taneous brain activity that would not be possible without this assumption. We
have shown that an increasing degree of interconnectivity m implies greater
data dispersion. In subsection 3.1 we have explicitly shown that the variance
does not reach a maximum for some given value of m as the interconnectivity
becomes higher. Thus, this kind of classical analysis would suggests that in-
formation does not saturate as the degree of interconnectivy becomes higher.
Eq. (7) implies that Fisher information increases as the network connectivity
becomes higher when considering a large population of neurons. As the esti-
mations through the variance are not accounting for the ordinal “structures”
present in ISIs, they are far from the expected neurophysiological behavior.
In other words, noise correlation causes the amount of information in a popu-
lation of neurons to saturate as the number of interconnected neurons grows
[61,26]. Our approach allows us to capture such behavior by introducing a
simple and robust method that takes into account the time causality of the
ISIs.

In this paper, we show an application of complexity measures based on order


statistics to simulation results that resemble neuronal activity when consid-
ering a very large number of neurons. This approach provides us with detailed
insights into the dynamics of the neuronal networks. The choices of the dif-
ferent lattice sizes of N = 800, N = 900 and N = 1000 are made to resemble
cortical hypercolumns having a percentage of about 40% of inhibitory neurons.
The dynamics of the network are then investigated by choosing different degrees of
network interconnectivity.

Summing up, the current approach provides us with a novel methodology to investigate the causal information structure of the ISIs when considering a raw signal
of spontaneous activity of a very simple neuronal network. We are currently
working to include different network structures in the inputs to account for
external stimuli within the simulated data, and to investigate the finite size
effects considering different grid sizes. We also plan to extend the current ap-
proach to cases in which external stimuli such as different discrete conductivity
values are considered. This would require extending the current approach, currently designed for entropy-like quantities, to transfer entropy quantities [74].
As non-causal mutual information fails to distinguish information that is ac-
tually exchanged from shared information due to common history and input
signals, the current approach could be very useful when extended to transfer
entropy-like quantities. This is not only important from a theoretical point of
view; it might also help to determine which areas of the cortex could have a higher
level of information, and to evaluate how causal interactions in neural dy-
namics would be modulated by behavior. We believe that this will become
an important tool for future research on the encoding capacity of biologically

realistic neural networks.

Acknowledgments

Research supported by PIP 0255/11 CONICET, Argentina (FM). F. Montani


and O. A. Rosso acknowledge support by CONICET, Argentina. O. A. Rosso
gratefully acknowledges support from a FAPEAL fellowship, Brazil.

References

[1] T. W. Anderson, The statistical analysis of time series, Wiley Classics Library,
J. Wiley, New York, 1994.

[2] G. Yadid and A. Friedman, Dynamics of the dopaminergic system as a key


component to the understanding of depression, Progress in Brain Research 172
(2008) 265–286.

[3] B. N. Lundstrom and A. L. Fairhall, Decoding Stimulus Variance from a


Distributional Neural Code of Interspike Intervals. J. Neurosci. 26 (2006)
9030–9037.

[4] A.-M. M. Oswald, B. Doiron, and L. Maler, Interval Coding (I): Burst Interspike
Intervals as Indicators of Stimulus Intensity. J. Neurophysiol 97 (2007) 2731–
2743.

[5] C. Bandt and B. Pompe, Permutation entropy: a natural complexity measure


for time series. Phys. Rev. Lett. 88 (2002) 174102.

[6] M. Zanin, L. Zunino, O. A. Rosso, D. Papo. Permutation Entropy and Its Main
Biomedical and Econophysics Applications: A Review. Entropy 14 (2012) 1553–
1577.

[7] K. Schindler, H. Gast, L. Stieglitz, A. Stibal, M. Hauf, R. Wiest, L. Mariani,


C. Rummel, Forbidden ordinal patterns of periictal intracranial EEG indicate
deterministic dynamics in human epileptic seizures. Epilepsia 52 (2011) 1771–
1780.

[8] I. Veisi, N. Pariz, A. Karimpour, Fast and Robust Detection of Epilepsy in


Noisy EEG Signals Using Permutation Entropy. In Proceedings of the 7th IEEE
International Conference on Bioinformatics and Bioengineering, Boston, MA,
USA, 14–17 October 2007; pp. 200–203.

[9] Y. Cao, W. Tung, J. B. Gao, V. A. Protopopescu, L. M. Hively, Detecting


dynamical changes in time series using the permutation entropy. Phys. Rev. E
70 (2004) 046217.

[10] G. Ouyang, C. Dang, D. A. Richards, X. Li, Ordinal pattern based similarity
analysis for EEG recordings. Clin. Neurophysiol. 121 (2010) 694–703.

[11] A. A. Bruzzo, B. Gesierich, M. Santi, C. Tassinari, N. Birbaumer, G. Rubboli,


Permutation entropy to detect vigilance changes and preictal states from scalp
EEG in epileptic patients: A preliminary study. Neurol. Sci. 29 (2008) 3–9.

[12] X. Li, S. M. E. Cui, L. J. Voss, Using permutation entropy to measure the


electroencephalographic effect of sevoflurane. Anesthesiology 109 (2007) 448–
456.
[13] E. Olofsen, J. W. Sleigh, A. Dahan, Permutation entropy of the
electroencephalogram: A measure of anaesthetic drug effect. Br. J. Anaesth.
101 (2008) 810–821.

[14] D. Jordan, G. Stockmanns, E. F. Kochs, S. Pilge, G. Schneider,


Electroencephalographic order pattern analysis for the separation of
consciousness and unconsciousness: An analysis of approximate entropy,
permutation entropy, recurrence rate, and phase coupling of order recurrence
plots. Anesthesiology 109 (2008) 1014–1022.
[15] N. Nicolaou, J. Georgiou, Detection of epileptic electroencephalogram based
on Permutation, Entropy and Support Vector Machines. Expert Syst. Appl. 39
(2012) 202–209.

[16] S. E. Robinson, A. J. Mandell, and R. Coppola, Spatiotemporal imaging of


complexity. Frontiers in Computational Neurosci. 6, 101 (2013) 1–14.
[17] M. S. Schröter, V. I. Spoormaker, A. Schorer, A. Wohlschläger, M. Czisch,
E. F. Kochs, C. Zimmer, B. Hemmer, G. Schneider, D. Jordan, and R.
Ilg. Spatiotemporal Reconfiguration of Large-Scale Brain Functional Networks
during Propofol-Induced Loss of Consciousness. J. Neurosci. 32 (2012) 12832–
12840.
[18] C. Rummel, E. Abela, M. Hauf, R. Wiest, K. Schindler, Ordinal patterns in
epileptic brains: Analysis of intracranial EEG and simultaneous EEG-fMRI.
The European Physical Journal Special Topics 222 (2013) 569–585

[19] P. W. Lamberti, M. T. Martı́n, A. Plastino, and O. A. Rosso, Intensive entropic


non-triviality measure. Physica A 334 (2004) 119–131.
[20] E. M. Izhikevich. Simple Model of Spiking Neurons, IEEE Transactions on
Neural Networks 14 (2003) 1569–1572.

[21] E. M. Izhikevich, Which Model to Use for Cortical Spiking Neurons? IEEE
Transactions on Neural Networks 15 (2004) 1063–1070.
[22] E. M. Izhikevich, Polychronization: Computation with spikes. Neural
Computation 18 (2006) 245–282.

[23] W. W. Lytton, T. J. Sejnowski, Inhibitory Interneurons can Rapidly Phase-Lock


Neural Populations. Induced Rhythms in the Brain, Brain Dynamics 357–366.
Springer Science + Business Media, New York, 1992.

[24] K. Yuan, J. Y. Shih, J. A. Winer, and C. E. Schreiner, Functional Networks of
Parvalbumin-Immunoreactive Neurons in Cat Auditory Cortex The Journal of
Neuroscience 31 (2011) 13333–13342.

[25] S. Y. X. Tiong, E. Polgár, J.C. van Kralingen, M. Watanabe and A. J.


Todd, Galanin-immunoreactivity identifies a distinct population of inhibitory
interneurons in laminae I-III of the rat spinal cord. Molecular Pain 7 (2011) 36.

[26] B. B. Averbeck, P. E. Latham, and A. Pouget, Neural correlations, population


coding and computation. Nat. Rev. Neurosci. 7 (2006) 358–66.

[27] E. M. Izhikevich, Dynamical Systems in Neuroscience: The Geometry of


Excitability and Bursting. The MIT press, Cambridge, Massachusetts, 2007.

[28] O. A. Rosso and C. Masoller, Detecting and quantifying stochastic and


coherence resonances via information-theory complexity measurements. Phys.
Rev. E 79 (2009) 040106(R) .

[29] O. A. Rosso and C. Masoller, Detecting and quantifying temporal correlations


in stochastic resonance via information theory measures. European Phys. J. B
69 (2009) 37–43.

[30] G. I. Ferri, F. Pennini and A. Plastino. LMC-complexity and various chaotic


regimes. Phys. Lett. A 373 (2009) 2210–2214.

[31] R. Quian Quiroga and S. Panzeri, Extracting information from neural


populations: Information theory and decoding approaches. Nature Reviews
Neuroscience 10 (2009) 173–185.

[32] O. A. Rosso, L. De Micco. A. Plastino and H. Larrondo, Info-quantifiers’ map-


characterization revisited. Physica A 389 (2010) 249–262.

[33] F. Olivares, A. Plastino and O. A. Rosso, Ambiguities in the Bandt-Pompe’s


methodology for local entropic quantifiers. Physica A 391 (2012) 2518–2526.

[34] F. Olivares, A. Plastino and O. A. Rosso, Contrasting chaos with noise via local
versus global information quantifiers, Phys. Lett A 376 (2012) 1577–1583.

[35] C. Shannon and W. Weaver, The Mathematical theory of communication,


University of Illinois Press, Champaign, Illinois, 1949.

[36] R. A. Fisher, On the mathematical foundations of theoretical statistics. Philos.


Trans. R. Soc. Lond. Ser. A 222 (1922) 309–368.

[37] B. Roy Frieden, Science from Fisher information: A Unification. Cambridge


University Press, Cambridge, 2004.

[38] A. L. Mayer, C. W. Pawlowski and H. Cabezas, Fisher Information and dinamic


regime changes in ecological systems. Ecological Modelling 195 (2006) 72–82.

[39] K. Zografos, K. Ferentinos, T. Papaioannou, Discrete approximations to the


Csiszár, Renyi, and Fisher measures of information. Canad. J. Stat. 14 (1986)
355–366.

[40] L. Pardo, D. Morales, K. Ferentinos and K. Zografos, Discretization problems
on generalized entropies and R-divergences. Kybernetika 30 (1994) 445–460.

[41] M. Madiman, O. Johnson and I. Kontoyiannis, Fisher Information, compound


Poisson approximation, and the Poisson channel. IEEE Int. Symp. Inform. Th.,
Nice, June 2007.

[42] N. Brunel, J-P. Nadal. Mutual information, Fisher information and population
coding. Neural Computation 10 (1998) 1731–57.

[43] K. Zhang, T. J. Sejnowski. Neuronal Tuning: To Sharpen or Broaden? Neural


Computation 11 (1999) 75–84.

[44] F. Pennini, A. Plastino, Reciprocity relations between ordinary temperature


and the Frieden-Soffer Fisher temperature. Phys. Rev. E 71 (2005) 047102.

[45] D. P. Feldman and J. P. Crutchfield, Measures of Statistical Complexity: Why?


Phys. Lett. A 238 (1998) 244–252.

[46] D. P. Feldman, C. S. McTague, and J. P. Crutchfield, The organization


of intrinsic computation: Complexity-entropy diagrams and the diversity of
natural information processing. Chaos 18 (2008) 043106.

[47] R. López-Ruiz, H. L.Mancini, and X. Calbet, A statistical measure of


complexity. Phys. Lett. A 209 (1995) 321–326.

[48] I. Grosse, P. Bernaola-Galván, P. Carpena, R. Román-Roldán, J. Oliver,


and H. E. Stanley, Analysis of symbolic sequences using the Jensen-Shannon
divergence. Phys. Rev. E 65 (2002) 041905.

[49] M. T. Martı́n, A. Plastino, and O. A. Rosso. Generalized statistical complexity


measures: Geometrical and analytical properties. Physica A 369 (2006) 439–462.

[50] O. A. Rosso, H. A. Larrondo, M. T. Martı́n, A. Plastino, M. A. Fuentes,


Distinguishing noise from chaos. Phys. Rev. Lett. 99 (2007) 154102.

[51] O. A. Rosso, F. Olivares, L.Zunino, L. De Micco, A. L. L. Aquino, A. Plastino,


and H. A. Larrondo, Characterization of chaotic maps using the permutation
Bandt-Pompe probability distribution. European Physics Journal B 86 (2012)
116.

[52] P. M. Saco, L. C. Carpi, A. Figliola, E. Serrano and O. A. Rosso, Entropy


analysis of the dynamics of El Niño/Southern Oscillation during the Holocene.
Physica A 389 (2010) 5022–5027.

[53] K. Keller, M. Sinn. Ordinal Analysis of Time Series, Physica A 356 (2005),
114–120.

[54] L. Zunino, M. C. Soriano, I. Fischer, O. A. Rosso, and C. R. Mirasso,


Permutation-information-theory approach to unveil delay dynamics from time-
series analysis. Physical Review E, 82 (2010) 046212.

[55] M. C. Soriano, L. Zunino, O. A. Rosso, I. Fischer, and C. R. Mirasso. Time
Scales of a Chaotic Semiconductor Laser With Optical Feedback Under the
Lens of a Permutation Information Analysis. IEEE J. Quantum Electron 47
(2011) 252–261.

[56] L. Zunino, M. C. Soriano and O. A. Rosso, Distinguishing chaotic and stochastic


dynamics from time series by using a multiscale symbolic approach. Phys. Rev.
E 86 (2012) 046210.

[57] A. Kowalski, M. T. Martín, A. Plastino, O. A. Rosso, Bandt-Pompe approach to the classical-quantum transition. Physica D 233 (2007) 21–31.

[58] C. Bandt, Ordinal time series analysis. Ecol. Modell. 182 (2005) 229–238.

[59] O. A. Rosso, L. C. Carpi, P. M. Saco, M. G. Ravetti, A. Plastino and H. Larrondo, Causality and the entropy-complexity plane: Robustness and missing ordinal patterns. Physica A 391 (2012) 42–55.

[60] O. A. Rosso, L. C. Carpi, P. M. Saco, M. G. Ravetti, H. Larrondo and A. Plastino, The Amigó paradigm of forbidden/missing patterns: a detailed analysis. Eur. Phys. J. B 85 (2012) 419.

[61] M. Shamir and H. Sompolinsky, Correlation codes in neuronal populations. In: Advances in Neural Information Processing Systems 14, 277–284. MIT Press, 2001.

[62] S. Seung and H. Sompolinsky, Simple models for reading neuronal population
codes. Proc. Natl. Acad. Sci. USA 90 (1993) 10749–10753.

[63] F. Montani, E. Phoka, M. Portesi, S. R. Schultz, Statistical modelling of higher-order correlations in pools of neural activity. Physica A 392 (2013) 3066–3086.

[64] X. Xu and E. M. Callaway, Laminar Specificity of Functional Input to Distinct Types of Inhibitory Cortical Neurons. J. Neurosci. 29 (2009) 70–85.

[65] E. Polgár, C. Durrieux, D. I. Hughes, A. J. Todd, A Quantitative Study of Inhibitory Interneurons in Laminae I-III of the Mouse Spinal Dorsal Horn. PLoS ONE 8 (2013) e78309.

[66] D. W. Meechan, E. S. Tucker, T. M. Maynard, A. S. LaMantia, Diminished dosage of 22q11 genes disrupts neurogenesis and cortical development in a mouse model of 22q11 deletion/DiGeorge syndrome. Proc. Natl. Acad. Sci. USA 106 (2009) 16434–16445.

[67] Y. Gonchar, Q. Wang, A. Burkhalter, Multiple distinct subtypes of GABAergic neurons in mouse visual cortex identified by triple immunostaining. Frontiers in Neuroanatomy 1 (2007) 3.

[68] M. A. Just, V. L. Cherkassky, T. A. Keller, N. J. Minshew, Cortical activation and synchronization during sentence comprehension in high-functioning autism: evidence of underconnectivity. Brain 127 (2004) 1811–1821.

[69] G. Dölen, E. Osterweil, B. S. Rao, G. B. Smith, B. D. Auerbach, S. Chattarji, M. F. Bear, Correction of fragile X syndrome in mice. Neuron 56 (2007) 955–962.

[70] Y. Gonchar, Q. Wang, A. Burkhalter, Multiple distinct subtypes of GABAergic neurons in mouse visual cortex identified by triple immunostaining. Frontiers in Neuroanatomy 1 (2008) 3.

[71] F. Lombardi, H. J. Herrmann, C. Perrone-Capano, D. Plenz, L. de Arcangelis, The balance between excitation and inhibition controls the temporal organization of neuronal avalanches. Phys. Rev. Lett. 108 (2012) 228703.

[72] C. E. Shannon, A mathematical theory of communication. Bell System Technical Journal 27 (1948) 379–423.

[73] http://www.keithschwarz.com/interesting/code/factoradic-permutation/FactoradicPermutation

[74] T. Schreiber, Measuring information transfer. Phys. Rev. Lett. 85 (2000) 461–
464.

[Figure 1 appears here. Panels A, C, E: spike rasters (neuron index, n = ne + ni, versus time in ms, 0–1000 ms). Panels B, D, F: interspike-interval histograms (# of spikes versus time in ms, 0–15 ms).]

Fig. 1. A) Spike rasters, n = 1000, ne = 550, ni = 450, m = 40, taken in time windows of 100 ms. B) Histogram of the interspike intervals (ISI) for n = 1000, ne = 550, ni = 450 and m = 40. C) Same as in A but considering n = 900, ne = 500, ni = 400 and m = 30. D) Same as in B but considering n = 900, ne = 500, ni = 400 and m = 30. E) Same as in A but considering n = 800, ne = 450, ni = 350 and m = 20. F) Same as in B but considering n = 800, ne = 450, ni = 350 and m = 20.
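
For concreteness, the histograms of panels B, D and F can be reproduced from the simulated spike trains along the following lines. This is only an illustrative sketch, not the authors' code: it assumes a hypothetical container spike_times whose k-th entry holds the spike times (in ms) of neuron k, that the intervals are pooled over the whole population, and a 0.5 ms bin width.

import numpy as np

def pooled_isi_histogram(spike_times, bin_ms=0.5, t_max=15.0):
    # Interspike intervals of each neuron, pooled over the whole population
    # (pooling is an assumption of this sketch).
    isis = np.concatenate([np.diff(np.sort(t)) for t in spike_times if len(t) > 1])
    # Histogram of the pooled ISIs on a fixed grid, as displayed in panels B, D, F.
    edges = np.arange(0.0, t_max + bin_ms, bin_ms)
    counts, edges = np.histogram(isis, bins=edges)
    return counts, edges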

[Figure 2 appears here. Panels A, C, E: variance of the ISIs versus the interconnectivity m (0–120). Panels B, D, F: MPR complexity versus normalized Shannon entropy, with the extreme connectivities m = 2 and m = 120 labelled.]

Fig. 2. A) Variance of the ISIs versus the number of interconnected neurons m, considering n = 1000, ne = 550, ni = 450. B) Causal MPR complexity versus normalized Shannon entropy (H × C plane), embedding dimension D = 6, n = 1000, ne = 550, ni = 450. C) Same as in A but considering n = 900, ne = 500, ni = 400. D) Same as in B but considering n = 900, ne = 500, ni = 400. E) Same as in A but considering n = 800, ne = 450, ni = 350. F) Same as in B but considering n = 800, ne = 450, ni = 350. The grey triangles correspond to the different interconnectivities m = 2, 4, 6, 8, 10, 20, 30, 40, 60, 80, 100 and 120.
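
Each grey triangle in panels B, D and F is obtained by mapping the ISI sequence of a given connectivity m onto the Bandt-Pompe ordinal-pattern distribution (D = 6) and evaluating on it the normalized Shannon entropy H and the MPR statistical complexity C. The sketch below is a minimal illustration of that mapping, assuming a unit time lag and the standard Jensen-Shannon form of the complexity with its usual normalization constant; the names isi_series, bandt_pompe_pdf and entropy_complexity are ours, not taken from the paper.

import numpy as np
from itertools import permutations

def bandt_pompe_pdf(x, d=6, tau=1):
    # Relative frequency of each of the d! ordinal patterns in the series x.
    x = np.asarray(x, dtype=float)
    counts = {p: 0 for p in permutations(range(d))}
    for i in range(len(x) - (d - 1) * tau):
        window = x[i:i + d * tau:tau]
        counts[tuple(np.argsort(window, kind="stable").tolist())] += 1
    p = np.array(list(counts.values()), dtype=float)
    return p / p.sum()

def shannon(p):
    # Shannon entropy (natural logarithm) of a probability vector.
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def entropy_complexity(p):
    # Normalized permutation entropy H and MPR complexity C = Q_J[P, P_e] * H,
    # where Q_J is the Jensen-Shannon divergence between P and the uniform
    # distribution P_e, normalized by its maximum value.
    n = len(p)
    h = shannon(p) / np.log(n)
    pe = np.full(n, 1.0 / n)
    jsd = shannon(0.5 * (p + pe)) - 0.5 * shannon(p) - 0.5 * shannon(pe)
    q0 = -2.0 / (((n + 1.0) / n) * np.log(n + 1.0) - 2.0 * np.log(2.0 * n) + np.log(n))
    return h, q0 * jsd * h

# Usage (isi_series is a hypothetical 1-D array of interspike intervals):
# h, c = entropy_complexity(bandt_pompe_pdf(isi_series, d=6))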

[Figure 3 appears here. Panels A, C, E: Fisher information versus MPR complexity. Panels B, D, F: Fisher information versus normalized Shannon entropy. The connectivities m = 2, m = 120 and the value of m at the maximum are labelled in each panel.]

Fig. 3. A) Causal Fisher information versus MPR complexity (F × C plane), embedding dimension D = 6, n = 1000, ne = 550, ni = 450. The maximum value occurs at m = 40. B) Causal Fisher information versus normalized Shannon permutation entropy (F × H plane), embedding dimension D = 6, n = 1000, ne = 550, ni = 450. The maximum value occurs at m = 40. C) Same as in A but considering n = 900, ne = 500, ni = 400. The maximum value occurs at m = 30. D) Same as in B but considering n = 900, ne = 500, ni = 400. The maximum value occurs at m = 30. E) Same as in A but considering n = 800, ne = 450, ni = 350. The maximum values occur at m = 20 and 30. F) Same as in B but considering n = 800, ne = 450, ni = 350. The maximum values occur at m = 20 and 30. The grey triangles correspond to the different interconnectivities m = 2, 4, 6, 8, 10, 20, 30, 40, 60, 80, 100 and 120. When m is raised, the MPR complexity and the entropy increase.
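
The Fisher information shown here is evaluated on the same ordinal-pattern distribution as in Fig. 2. A common discrete form in permutation-based studies is F[P] = F0 * sum_i (sqrt(p_{i+1}) - sqrt(p_i))^2, and the sketch below implements that expression; the normalization F0 and the ordering of the patterns (on which this discretization depends) are assumptions of the illustration rather than details quoted from the text.

import numpy as np

def permutation_fisher(p):
    # Discrete Fisher information of an ordinal-pattern distribution, using the
    # square-root discretization F[P] = F0 * sum_i (sqrt(p[i+1]) - sqrt(p[i]))^2.
    p = np.asarray(p, dtype=float)
    f = np.sum((np.sqrt(p[1:]) - np.sqrt(p[:-1])) ** 2)
    # F0 maps F into [0, 1]: it equals 1 when all probability sits on the first
    # or last pattern of the chosen ordering, and 1/2 otherwise.
    f0 = 1.0 if (p[0] == 1.0 or p[-1] == 1.0) else 0.5
    return f0 * f

# Usage, reusing bandt_pompe_pdf from the sketch following Fig. 2 and a
# hypothetical ISI sequence isi_series:
# fisher = permutation_fisher(bandt_pompe_pdf(isi_series, d=6))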

