
Literature Study

Methods for quantifying effective nonlinear connectivity in
the sensorimotor network

Bekir Guliyev
bekirdec@gmail.com

February 11, 2016

Contents

1 Introduction
   1.1 Sensorimotor network
   1.2 Sensorimotor dysfunction: Stroke
   1.3 Functional network
   1.4 Construction of functional network
   1.5 Linear vs nonlinear connectivity measures
   1.6 Problem definition
   1.7 Aim and outline of this report

2 Effective nonlinear connectivity measures
   2.1 Notation
   2.2 Phase synchronization and MSPC
       2.2.1 The background
       2.2.2 Multispectral phase coherence (MSPC)
   2.3 Granger causality: from the definition to the nonlinear extension
       2.3.1 Linear Granger causality
       2.3.2 Nonlinear Granger causality
   2.4 Error reduction ratio causality (ERRC)
   2.5 Dynamic Causal Modeling

3 Discussion
   3.1 Parametric vs nonparametric techniques
   3.2 Exogenous inputs considerations
   3.3 Nonstationarity considerations
   3.4 Multivariate modeling considerations
   3.5 Time delay and effective connectivity
   3.6 Application of effective nonlinear methods in the functional and sensorimotor networks

4 Literature search methodology

5 Conclusion

1 Introduction
In this introductory chapter, the background and concepts that provide the basis for the review of connectivity methods presented in this report are described. The chapter is concluded by the aims of the literature review and an outline of this report.

1.1 Sensorimotor network

The control of movement (motor control) involves our central nervous system (CNS), muscles, skeleton and proprioceptors. The CNS, consisting of the spinal cord and brain, sends motor commands via motor neurons (efferent pathways) to the muscles. In response, the muscles contract and generate force, which results in a change in joint position. Sensors in the muscles and tendons (proprioceptors) sense changes in muscle stretch, stretch velocity and force. The sensory feedback signals are then sent back to the CNS via the afferent pathways, where they close the loop and adjust the motor commands. Motor control (MC) thus relies on a circular flow of information along afferent and efferent pathways and forms a so-called closed-loop feedback system (see Figure 1).

Several parts of the CNS are involved in motor control, and they form multiple nested and interacting loops which define the sensorimotor network [42]. Figure 1 depicts the parts of the CNS that are involved in the formation of the sensorimotor network (SMN). These parts are hierarchically organized into three main levels: the lower level entails the spinal cord, the middle level involves brain stem regions such as the reticular formation (RF) and vestibular nuclei (VN), and the highest level of control is provided by the cerebral cortex [42].

Figure 1: Brain regions involved in the formation of the sensorimotor network. At the lowest hierarchical level there is the spinal cord; at a higher level there are brain stem regions; the cortex provides the highest level of motor control, with multiple cortical areas and sub-cortical neurons interacting. Red lines represent the efferent pathways, blue lines the afferent pathways, green lines the visual pathways, and black lines the local cortical and subcortical pathways. Abbreviations: RF: reticular formation, VN: vestibular nuclei, M1: primary motor cortex, S1: primary sensory cortex, 5: parietal cortex area 5, dPM: dorsal premotor cortex, SMA: supplementary motor area, PF: prefrontal cortex, V1: primary visual cortex, 7: posterior parietal cortex area 7, BG: basal ganglia, RN: red nucleus, C: cerebellum. Adapted from Nature Reviews Neuroscience (Scott, 2004), copyright 2004.

1.2 Sensorimotor dysfunction: Stroke

A stroke is a sudden death of brain tissue due to reduced blood supply, caused either by an obstructed blood vessel (ischemic stroke) or by bleeding. As a consequence, brain tissue in the affected vascular territories becomes dysfunctional and ultimately necrotic. This dysfunction of brain areas causes an impairment of the anatomical (and functional) connections between neural assemblies. As a result, the sensorimotor network loses its normal functionality.

One prominent example of motor impairment is upper-limb paralysis, which is the major form of disability in more than 80% of patients [5]. Current rehabilitation procedures aim to restore motor function; however, it is still unknown why certain patients show limited motor recovery while others regain full functionality. With existing rehabilitation therapies, 30% of these patients regain some dexterity of the upper limb, leaving the rest impaired [27]. To increase our understanding of motor recovery, it is therefore essential to understand the neural mechanisms enabling recovery of motor function after stroke. It has been shown that both sensory and motor areas of the brain have an intrinsic ability to compensate for structural damage through reorganization (rewiring) of the surviving network during functional recovery after stroke [20, 39]. Thus, monitoring the efferent and afferent pathways during recovery can provide valuable insight into how reorganized network structures are linked to the functioning of the sensorimotor network. In general, connectivity can give greater insight into functional network rewiring during the early and late post-stroke recovery period and can improve prediction of clinical outcome by employing predictive algorithms and creating a valid predictive theoretical model [7].

1.3 Functional network

Anatomical connections are physically hardwired connections formed between areas of the CNS. Functional connections represent the actual information exchange across these anatomical connections [14]. A group of neurons linked by such connections forms a network, which can be defined as a neural assembly (Figure 2). It is believed that communication among neural assemblies underlies the corresponding cognitive behavior or sensorimotor activity [47, 14].

Figure 2: Schematic representation of distributed neural assemblies. Groups of neurons and the connections established between them are the main attributes of a functional network that emerges from the flow of information. Adapted from Nature Reviews Neuroscience (Varela et al., 2001), copyright 2001.

An important concept for understanding communication in brain networks is synchronization. Synchronous oscillations between neural assemblies are thought to play a crucial role in the formation of the functional network. For example, neural assemblies involved in a certain motor task synchronize their oscillations, promoting the transmission of information between the neurons within the closed-loop motor control system [47, 46, 44, 14]. The formation of functional connections during motor control has been shown in [26, 40].

Analysis of the functional network can be performed on datasets recorded with functional imaging techniques such as fMRI (functional magnetic resonance imaging), MEG (magnetoencephalography) and EEG (electroencephalography). The main advantage of EEG/MEG over fMRI is the ability to capture the temporal dynamics of neural assemblies, which occur at typical time scales on the order of tens of milliseconds. fMRI, however, can provide a superior spatial resolution of 1-3 mm (Box 1).

Box 1: Functional imaging techniques

Hemodynamic measurement
fMRI: Functional magnetic resonance imaging (fMRI) detects differences in local magnetic field inhomogeneities caused by changes in blood flow (hemodynamics) and deoxyhemoglobin content in activated neural tissue (metabolic consumption). Its main advantage is its high spatial resolution (1-3 mm). However, because the hemodynamic response is slow compared with electrical neural activity, its temporal resolution is limited to approximately 1 s. In addition, fMRI provides only an indirect relationship between hemodynamics and neural activity.

Electrophysiological measurement
EEG and MEG: Electroencephalography (EEG) and magnetoencephalography (MEG) are two complementary techniques that record, respectively, the scalp electric potentials produced by electric activity in neural assemblies and the magnetic induction outside the head. They measure electric brain activity directly and offer superior temporal resolution on the order of milliseconds, allowing studies of the dynamics of functional networks.

1.4 Construction of functional network

Connectivity between neural assemblies can be comprehensively described using the mathematical framework known as graph theory. In this framework, a network is defined by a collection of nodes linked by connections and is mathematically described as a graph. In our context, the nodes represent neural assemblies, while the edges represent functional connections between them. Graphs can be undirected or directed.

Consequently, there are two types of connectivity that can be captured in a functional network:

1. Functional connectivity - captures statistical interdependency between the distributed states of a set of nodes of the functional network. Statistical dependencies can be thought of as undirected relationships between the nodes (Figure 3). Functional connectivity is 'model-free', meaning that it is not required to specify a model a priori [16, 15].

2. Effective connectivity - defines the causal influence that one node exerts on another (Figure 3) [16]. Such measures can be either 'model-based' or 'model-free'.

Figure 3: Graph model showing three different forms of connectivity between the nodes (A, B, C) of a functional network, namely functional (bidirectional solid line), effective (unidirectional solid line) and indirect (dotted line). The nodes correspond to neural assemblies, while the edges represent connectivity between them. Whereas there is an undirected connection between two of the nodes, one node exerts an indirect influence on another node (indirect connectivity) and a direct influence on a third node (effective connectivity).

The notions of functional connectivity and effective connectivity can be defined, respectively, in terms of coupling (the presence of interactions) and of causality (the presence of a cause-and-effect relation). In this report, the terms functional connectivity and effective connectivity will be used.

1.5 Linear vs nonlinear connectivity measures

In neuroscience, many studies investigating brain connectivity have focused on linear dependency between nodes at the same frequency (within-frequency coupling) [2, 18]. In this linear framework, connectivity is often assessed using a multivariate autoregressive (MVAR) representation of the recorded fMRI/EEG/EMG dataset. Linear functional connectivity is traditionally investigated by means of correlation in the time domain and coherence in the frequency domain. The most widely applied technique in experimental studies on connectivity in the context of sensorimotor control is corticomuscular connectivity, a measure of cortico-spinal synchronization (i.e., synchronization between cortical and spinal neural assemblies) between recorded EEG and EMG signals [46]. Generally, corticomuscular connectivity is measured by the coherence between cortical activity (EEG) and muscle activity (EMG): corticomuscular coherence (CMC) [4, 21, 36].

When it comes to the detection of effective connectivity, there are three main reasons why scientists favor linear over nonlinear effective connectivity measures. First, almost all linear methods used in multivariate analysis of neurophysiological data readily provide frequency-domain information [25]. Second, they are generally much simpler than nonlinear techniques. Third, they have been extensively studied and have a solid theoretical framework [13]. Nevertheless, the components of a sensorimotor network, as well as the physiological processes between neural assemblies, rarely display linear interdependencies and are highly nonlinear [10, 9, 43]. Identification and quantification of nonlinear relationships between the nodes of a functional network (i.e., neural assemblies) therefore provides crucial insight into the dynamics of the sensorimotor network.

In general, it is worth mentioning that nonlinear approaches should not be considered a replacement for linear ones. The sensorimotor network often has regions whose behavior can be considered linear, so that a good linear approximation can reliably detect the interaction [6]. However, by using only linear methods one can fail to detect the hidden dynamics of the network [38, 30]. (The dynamics are called 'hidden' because we usually record only a set of observations, e.g., EEG and MEG, and do not have direct access to the dynamics of the underlying biophysical process.)

1.6 Problem definition

Many studies investigating effective connectivity have focused only on the linear interactions between the components of functional and sensorimotor networks. However, recent evidence suggests that the interactions among the areas of a network can be highly nonlinear. As a consequence, there is a need to examine existing nonlinear methods that enable the detection of effective connectivity in the sensorimotor network and provide information that remains hidden to linear approaches. In the present review I aim to examine and compare measures for quantifying effective nonlinear connectivity. Before starting the comparative analysis, let us formulate the criteria that effective nonlinear connectivity measures should meet. I split them into two classes: 1) an 'essential' criterion and 2) 'good to have' criteria.

The essential criterion:

- The measure should account for exogenous or experimental input. Because pure effective connectivity in the sensorimotor network can only be captured using the exogenous (experimental) input, a measure of effective connectivity should explicitly account for the exogenous input signal.

Good-to-have criteria:

- The measure should be able to detect the time delay between the measurements, since effective connectivity in the nervous system is usually associated with a time delay, which describes the dynamic relationship between the nodes of the functional network.
- The measure should allow multivariate modeling. Usually several (more than two) neurophysiological signals are recorded simultaneously, and assessing the interdependence between those signals can give new insight into the functioning of the systems that produce them.
- The measure should not require an a priori definition of the type of interaction. This is important since the complete model structure of a real system is often unknown.
- The measure should be able to detect time-varying structure, because signals sampled from the real world are rarely stationary.

1.7 Aim and outline of this report

Most of the connections in the sensorimotor network have a causal and nonlinear nature. Thus, the main focus of this literature study is to review recently introduced measures in order to infer the most suitable effective nonlinear connectivity method for electrophysiological data. The literature question can therefore be formulated as follows:

What is the most suitable method to detect effective nonlinear connectivity in the sensorimotor network?

The report is organized as follows. In the next section, I review effective nonlinear connectivity measures: the phase synchronization concept and its causal extension; the Kernelized Granger causality (KGC) and Nonlinear AutoRegressive with eXogenous inputs (NARX) algorithms, nonlinear Granger causality methods based on nonlinear AR models; transfer entropy (TE), a nonlinear Granger causality measure based on a probabilistic approach; the error-reduction-ratio causality approach (ERRC); and the dynamic causal modeling (DCM) technique. Merits, limitations and underlying assumptions of these techniques are discussed in Section 3. Section 4 describes the literature search methodology. Finally, the measures most suitable for analyzing effective nonlinear connectivity in the sensorimotor network are summarized.

2 Effective nonlinear connectivity measures

Developing methods to efficiently and accurately quantify brain connectivity has been, and still remains, a challenging problem. In this section, I provide an overview of the most widely used and promising techniques to estimate effective connectivity. In-depth technical details of each method are provided in the relevant references.

2.1 Notation
Electrophysiological signals can be represented as a multivariate random process, but for simplicity let us consider the bivariate case. Let X = (x_1, x_2, ..., x_N) and Y = (y_1, y_2, ..., y_N) be two N-dimensional signals whose time observations, denoted x_t and y_t, are observed from nodes X and Y, respectively. These two time series give access to only a limited number of variables, which usually do not have a one-to-one correspondence with the system variables we are interested in. Nevertheless, causality hypotheses are formulated in terms of the underlying systems rather than of the signals being measured. The procedure of embedding allows us to overcome this issue by approximately reconstructing the full state space of a dynamical system from the x_t and y_t observations. With embedding, one time series (or a few simultaneous time series) is converted into a sequence of vectors in an m-dimensional embedding space. Two different embedding procedures exist: (i) time-delay embedding and (ii) spatial embedding. In this report the time-delay embedding method is used to map our scalar time series into trajectories in a state space. The mapping uses s consecutive values of the time series as the s coordinates of a vector; by repeating this procedure for the next s values of the time series we obtain a series of vectors, or points, in the state space of the system.

The state vector x^{s,τ}_t corresponding to the observation x_t is formed according to

x^{s,τ}_t = (x_t, x_{t−τ}, x_{t−2τ}, ..., x_{t−(s−1)τ})^T    (1)

where s is the embedding dimension, τ is the time lag between successive elements of the vector, and t = 1, 2, ..., N is the discrete-valued time index. Further in this report we take τ = 1 and write

x^s_t = x^{s,1}_t    (2)

Similarly, we can form the delay embedding vector for the y_t time series as

y^{s,τ}_t = (y_t, y_{t−τ}, y_{t−2τ}, ..., y_{t−(s−1)τ})^T    (3)

Note that the state vectors x^s_t and y^s_t represent the total activity of nodes X and Y, respectively.
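To make the time-delay embedding of Eq. (1) concrete, the following Python sketch (illustrative only; the function name and defaults are my own, not from the report) builds the sequence of delay vectors from a scalar series:

```python
import numpy as np

def delay_embed(x, s, tau=1):
    """Return the delay vectors of Eq. (1): rows are
    (x_t, x_{t-tau}, ..., x_{t-(s-1)tau}) for t = (s-1)*tau, ..., N-1."""
    x = np.asarray(x)
    start = (s - 1) * tau
    return np.array([x[t - np.arange(s) * tau] for t in range(start, len(x))])

# Example: embed a noisy sinusoid with dimension s = 3 and lag tau = 1
x = np.sin(np.linspace(0, 20, 500)) + 0.1 * np.random.randn(500)
X = delay_embed(x, s=3)          # shape (498, 3)
```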

2.2 Phase synchronization and MSPC

2.2.1 The background

A growing body of literature suggests that, in contrast to the amplitude of node oscillations, the phase contains all the information about the temporal structure (i.e., relative timing) of functional connections [47]. This suggests that phase synchronization plays a crucial role in coordinating communication between the nodes of the functional network. The classical notion of phase synchronization is defined as a locking of the phases of two time series at two frequencies (i.e., ideally the phase difference stays constant):

|Δφ| = |n φ(f_m) − m φ(f_n)| = const    (4)

where φ(f_m) and φ(f_n) are the phases of the two time series at frequencies f_m and f_n, with the relation n f_m = m f_n, and Δφ is the phase difference between them.

Most studies investigating neural synchronization have focused on the linear interaction between nodes using linear connectivity estimators such as coherence, which refers to neural synchronization at the same frequency (e.g., f_1 → f_1). However, there is growing evidence that neural connectivity can be highly nonlinear [10, 9], which manifests as synchronization between harmonic (e.g., f_1 → 3 f_1) and/or intermodulation (e.g., {f_2, f_3} → 2 f_2 − f_3) frequencies.

Many studies have shown nonlinear phase synchronization beyond the second order by capturing neural responses to periodic stimuli at higher-order (>2) harmonic and intermodulation frequencies, indicating the presence of higher-order nonlinear interactions [22, 37].

In the general case of a nonlinear system with a nonlinearity of the d-th order, such as y = x^d, applying a multisine input signal with R different frequencies f_1, f_2, ..., f_R can produce d-th order nonlinear phase synchronization between the set of input frequencies and a single output frequency. The latter is an intermodulation frequency and can be expressed as f = Σ_{r=1}^{R} a_r f_r, where the a_r are the weights of the input frequencies in the output frequency and Σ_{r=1}^{R} |a_r| = d. As a result, a generalized notion of higher-order nonlinear phase synchronization can be written as:

|Δφ| = | Σ_{r=1}^{R} a_r φ(f_r) − φ(f) | = const    (5)

This definition is applicable to all possible harmonic and intermodulation synchronization at any order of nonlinearity.
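As a concrete illustration of the classical phase-locking condition in Eq. (4), an n:m phase-locking index can be estimated from two narrow-band signals via the Hilbert transform. The following Python sketch uses the standard phase-locking value; it is an assumed, generic estimator rather than a method defined in this report:

```python
import numpy as np
from scipy.signal import hilbert

def nm_phase_locking(x, y, n=1, m=1):
    """n:m phase-locking value |<exp(j(n*phi_x - m*phi_y))>|:
    1 means a constant phase difference (Eq. 4), 0 means no phase relation.
    x and y are assumed to be narrow-band (band-pass filtered) signals."""
    phi_x = np.angle(hilbert(x))      # instantaneous phase of x
    phi_y = np.angle(hilbert(y))      # instantaneous phase of y
    return np.abs(np.mean(np.exp(1j * (n * phi_x - m * phi_y))))

# Example: a 10 Hz signal and its phase-locked second harmonic at 20 Hz
t = np.arange(0, 5, 1e-3)
x = np.sin(2 * np.pi * 10 * t)
y = np.sin(2 * np.pi * 20 * t + 0.3)
print(nm_phase_locking(x, y, n=2, m=1))   # close to 1
```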

2.2.2 Multispectral phase coherence (MSPC)

For two recorded time series in the frequency domain, X(f) and Y(f), the MSPC [50] at the d-th order is defined as the magnitude of the multispectral phase coherency Ψ_XY:

Ψ_XY(f_1, f_2, ..., f_R; a_1, a_2, ..., a_R)_d = (1/K) Σ_{k=1}^{K} exp( j ( Σ_{r=1}^{R} a_r φ_{X_k}(f_r) − φ_{Y_k}(f) ) )    (6)

where K is the total number of signal segments, and φ_{X_k}(f_r) and φ_{Y_k}(f) are the phases of X(f_r) and Y(f) in the k-th segment, respectively. Computational details of the MSPC can be found in Y. Yang et al. [50].

Thus, MSPC (= |Ψ_XY|) reflects the consistency of the nonlinear cross-frequency phase difference over the K segments, i.e., the phase relationship between the two signals independently of their amplitudes. The value of MSPC varies between 0 and 1, where 1 indicates perfectly consistent nonlinear synchronization over the segments and 0 indicates complete randomness of the nonlinear phase interaction.

It is worth noting that MSPC can also indicate the direction of interaction by estimating the time delay (nonzero phase lag) between the two signals. Therefore, Ψ_XY can be used to quantify nonlinear effective connectivity between two nodes of a functional network [50].
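A minimal sketch of how Eq. (6) could be evaluated from segmented data is given below (Python; segment handling, frequency resolution and the exact conventions of [50] are simplified, and all names are illustrative):

```python
import numpy as np

def mspc(x_segs, y_segs, fs, in_freqs, weights):
    """Magnitude of the multispectral phase coherency of Eq. (6).
    x_segs, y_segs : (K, L) arrays with K segments of length L.
    in_freqs       : input frequencies f_1..f_R in Hz.
    weights        : integer weights a_1..a_R; output frequency f = sum(a_r f_r)."""
    K, L = x_segs.shape
    fft_freqs = np.fft.rfftfreq(L, d=1.0 / fs)
    phase_x = np.angle(np.fft.rfft(x_segs, axis=1))   # (K, n_freqs)
    phase_y = np.angle(np.fft.rfft(y_segs, axis=1))
    idx_in = [int(np.argmin(np.abs(fft_freqs - f))) for f in in_freqs]
    idx_out = int(np.argmin(np.abs(fft_freqs - np.dot(weights, in_freqs))))
    # Weighted input-phase sum minus output phase, per segment (the exponent of Eq. 6)
    dphi = phase_x[:, idx_in] @ np.asarray(weights) - phase_y[:, idx_out]
    return np.abs(np.mean(np.exp(1j * dphi)))
```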

2.3 Granger causality: from the definition to the nonlinear extension

2.3.1 Linear Granger causality

To understand nonlinear approaches to Granger causality (GC) as generalizations of the linear case, we first review the linear case briefly. Granger causality is an approach that measures causal association and effective connectivity and can provide information about dynamics and directionality in both EEG and fMRI data. The main idea behind GC is to examine whether the prediction of one time series can be improved by incorporating information about the past of another. This idea was originally introduced by Norbert Wiener (1956). However, he lacked a practical implementation of his idea, and such an implementation was later proposed by Clive Granger (1969) in the context of linear autoregressive (AR) models of stochastic processes. He stated that if the variance of the prediction error for the current value of y_t is significantly reduced by incorporating past observations of x_t, then it can be said that x_t causes y_t.

Linear Granger causality (LGC) can be inferred using various methods, but the most widely used techniques are based on autoregressive (AR) and autoregressive with exogenous input (ARX) models [19]. Consider the following models predicting the value y_t as a weighted sum of lagged (past) observations of y and x:

y_t = A y^s_{t−1} + ε_y    (7)

y_t = A y^s_{t−1} + B x^s_{t−1} + ε_{y|x}    (8)

where A and B contain model coefficients to be estimated, and ε_y and ε_{y|x} are the 'prediction errors' of the models. The magnitude of the prediction errors can be assessed by their variances. If var(ε_{y|x}) < var(ε_y), then there is an improvement in the prediction of y_t due to x_t, and x_t is said to have a causal influence on y_t. The level of linear causality from X to Y as defined by Granger can then be evaluated as

LGC_{X→Y} = ln( var(ε_y) / var(ε_{y|x}) )    (9)

where var(ε_y) denotes the variance of the prediction error when only the past of Y is used (Eq. 7), and var(ε_{y|x}) denotes the variance of the prediction error when the pasts of Y and X are used together (Eq. 8).

Other widely used multivariate causality measures based on the notion of Granger causality include partial directed coherence (PDC) [3] and the directed transfer function (DTF) [25]. Both DTF and PDC are spectral (frequency-domain) implementations of GC based on an AR model fitted to the EEG signals.
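As a reference point for the nonlinear extensions below, a minimal least-squares implementation of Eqs. (7)-(9) might look as follows (Python sketch; model-order selection and significance testing are omitted, and the function name is my own):

```python
import numpy as np

def linear_gc(x, y, p=5):
    """Linear Granger causality from x to y (Eq. 9): compare the residual
    variance of an order-p AR model of y (Eq. 7) with that of a model
    that also uses the past of x (Eq. 8)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    N = len(y)
    # Lag matrices: column k holds the (k+1)-step past of the series
    Y_past = np.column_stack([y[p - k - 1:N - k - 1] for k in range(p)])
    X_past = np.column_stack([x[p - k - 1:N - k - 1] for k in range(p)])
    target = y[p:]
    beta_r, *_ = np.linalg.lstsq(Y_past, target, rcond=None)
    full = np.hstack([Y_past, X_past])
    beta_f, *_ = np.linalg.lstsq(full, target, rcond=None)
    var_r = np.var(target - Y_past @ beta_r)     # var(eps_y)
    var_f = np.var(target - full @ beta_f)       # var(eps_y|x)
    return np.log(var_r / var_f)
```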

2.3.2 Nonlinear Granger causality

In its standard application, GC is limited to modeling only linear effects; the absence of nonlinear terms in the AR and ARX models means that they fail to detect nonlinear causal interactions. More generally, however, GC is not tied to linear AR or ARX models, and its possible nonlinear extensions are presented in the following sections.

Kernelized Granger causality (KGC). Recently, a specific nonlinear extension of GC based on kernel methods has been suggested to be suitable for capturing nonlinearities in EEG data and has shown good statistical power [33, 1, 34]. Kernel algorithms work by embedding the data into a kernel Hilbert space and searching for linear relations in that space. The Hilbert spaces here are spaces spanned by kernel functions, where these functions (e.g., Gaussian or polynomial) can be thought of as correlation or covariance functions.

A straightforward extension of Eq. (8) to evaluate nonlinear Granger causality can be written as follows:

y_t = A^T Φ(y^s_{t−1}) + B^T Ψ(x^s_{t−1}) + ε_{y|x}    (10)

where Φ and Ψ are P-dimensional real vectors whose elements are nonlinear kernel functions centered at y^s_p (1 ≤ p ≤ P). A Gaussian kernel with a priori fixed variance σ² can be defined as:

G(ỹ_t, σ²) = (1 / (σ√(2π))) exp( −|ỹ_t|² / (2σ²) )    (11)

Thus the nonlinearity of the regression model can be controlled by choosing the Gaussian kernel function. Nonlinear prediction in the original vector space is equivalent to performing a linear prediction (GC) in the feature space of the kernel function, which in (11) is the Gaussian kernel space. This implies that we can handle the nonlinear prediction using convenient tools from linear algebra and functional analysis.
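The idea of performing linear GC in a kernel feature space can be sketched with off-the-shelf kernel ridge regression, as below. This is a simplified illustration in the spirit of KGC, not the exact algorithm of [33, 34]; in-sample residuals are used for brevity, whereas a proper analysis would use cross-validation and significance testing:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def kernel_gc(x, y, p=5, gamma=1.0, alpha=1e-2):
    """Granger-style index in a Gaussian (RBF) kernel feature space:
    ln of the ratio of residual variances of y predicted from its own past
    versus from the joint past of y and x (cf. Eqs. 8-10)."""
    N = len(y)
    Y_past = np.column_stack([y[p - k - 1:N - k - 1] for k in range(p)])
    X_past = np.column_stack([x[p - k - 1:N - k - 1] for k in range(p)])
    target = y[p:]
    restricted = KernelRidge(kernel="rbf", gamma=gamma, alpha=alpha)
    var_r = np.var(target - restricted.fit(Y_past, target).predict(Y_past))
    joint = np.hstack([Y_past, X_past])
    full = KernelRidge(kernel="rbf", gamma=gamma, alpha=alpha)
    var_f = np.var(target - full.fit(joint, target).predict(joint))
    return np.log(var_r / var_f)
```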

NARX-based Granger causality (NARXGC). Another nonlinear extension of GC is based on the Nonlinear AutoRegressive with eXogenous inputs (NARX) model, which can describe a wide range of nonlinear processes [53]. A NARX model for y_t can be expressed as

y_t = b_1 y^s_{t−1} + b_2 x^q_{t−1} + b_3 y^s_{t−1} y^s_{t−1} + b_4 x^q_{t−1} x^q_{t−1} + b_5 x^q_{t−1} y^s_{t−1} + e_y    (12)

where s and q are the NARX model orders; the coefficient b_1 is associated with the linear causal influence of Y on itself; b_2 with the linear causal influence from X to Y; b_3 with the nonlinear causal influence of Y on itself; b_4 and b_5 with the nonlinear causal influence from X to Y; and e_y is a zero-mean white noise sequence. For simplicity, only second-degree nonlinear terms are shown.

Let Y_n denote the set of nonlinear terms formed from the past information of Y, X_n the set of nonlinear terms formed from the past information of X, and (XY)_n the set of nonlinear terms coupling the past information of X and Y. The nonlinear causal influence of X on Y can then be measured by the index [53]

NARXGC^n_{X→Y} = ln( var(ε_{Y|Y_l,Y_n,X_l}) / var(ε_{Y|Y_l,Y_n,X_l,X_n,(XY)_n}) )    (13)

where var(ε) is the variance of the prediction error.

Before modeling, an initial set of candidate terms has to be chosen, and it is important to include as many correct terms as possible. The ordinary least-squares algorithm may fail to produce reliable parameter estimates. For most nonlinear system identification problems, only a relatively small number of terms is required in the final regression model. Therefore, an efficient model selection algorithm, the adaptive-forward orthogonal least-squares (OLS) method, has been proposed to detect and select the most significant regressors in Eq. (12). The working principle of OLS is given in Section 2.4.

Thus, using Eq. (13) we can detect nonlinear effective connectivity (i.e., information flow) between the nodes of a functional network. This model can easily be extended to the time-varying and multivariate case [53], considering more than two signals and time-varying interactions between them. In particular, since DTF and PDC are spectral measures based on an AR model, their nonlinear extensions will be based on a nonlinear AR or ARX model; thus, nonlinear DTF and PDC can be considered spectral implementations of NARXGC.
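To make Eqs. (12)-(13) concrete, the sketch below fits a fixed second-degree polynomial NARX model by ordinary least squares and compares the residual variances of the reduced model (without the nonlinear X and cross terms) and the full model. The adaptive term selection of [53] is deliberately omitted; names and orders are illustrative:

```python
import numpy as np

def narx_gc(x, y, p=2):
    """Nonlinear (NARX-based) Granger causality index in the spirit of Eq. (13),
    using a fixed second-degree model of order p and least squares."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    N = len(y)
    Yl = np.column_stack([y[p - k - 1:N - k - 1] for k in range(p)])   # linear Y terms
    Xl = np.column_stack([x[p - k - 1:N - k - 1] for k in range(p)])   # linear X terms
    target = y[p:]

    def quad(A):
        # all second-degree products of the columns of A
        return np.column_stack([A[:, i] * A[:, j]
                                for i in range(A.shape[1])
                                for j in range(i, A.shape[1])])

    def resid_var(design):
        beta, *_ = np.linalg.lstsq(design, target, rcond=None)
        return np.var(target - design @ beta)

    cross = np.column_stack([Yl[:, i] * Xl[:, j]
                             for i in range(p) for j in range(p)])     # (XY)_n terms
    reduced = np.hstack([Yl, quad(Yl), Xl])                 # Y_l, Y_n, X_l
    full = np.hstack([reduced, quad(Xl), cross])            # + X_n, (XY)_n
    return np.log(resid_var(reduced) / resid_var(full))
```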

Transfer entropy (TE). Transfer entropy is a reformulation of Wiener and Granger's principle in the framework of information theory (IT) [41]. It comes in a real-valued and a phase variant.

Real-valued transfer entropy (rTE). Real-valued transfer entropy (rTE) is a measure of effective connectivity between two nodes and can be seen as a specific version of mutual information (MI) for conditional probabilities. Whereas MI is a symmetric measure which cannot detect the directional flow of information, rTE assumes a Markov process and is designed to detect the directed exchange of information between two systems [30].

First, consider a Markov process of order s. The conditional probability of the state x_{t+1} given x^s_t = (x_t, ..., x_{t−(s−1)})^T is p(x_{t+1} | x^s_t), where x^s_t is the delay embedding vector. The entropy rate is then given by h_X = −Σ p(x_{t+1}, x^s_t) log p(x_{t+1} | x^s_t) = H_{X^{(s+1)}} − H_{X^{(s)}}, i.e., h_X measures the number of additional bits required to specify x_{t+1} given x^s_t [41].

TE is a generalization of the entropy rate to two signals X and Y. If Y has no influence on X, the transition probability satisfies

p(x_{t+1} | x^s_t) = p(x_{t+1} | x^s_t, y^l_t)    (14)

If the deviation from this generalized Markov property is small, then we can assume that Y has no (or little) relevance for the transition probabilities of the states of X; otherwise, the assumption is not valid. The deviation from the assumption can be quantified by the transfer entropy, formulated as the Kullback-Leibler divergence between the transition probabilities p(x_{t+1} | x^s_t) and p(x_{t+1} | x^s_t, y^l_t), which provides a measure of the influence of the state of Y on the transition probabilities of the state of X:

T_{Y→X} = Σ p(x_{t+1}, x^s_t, y^l_t) log( p(x_{t+1} | x^s_t, y^l_t) / p(x_{t+1} | x^s_t) )    (15)

where p(x_{t+1}, x^s_t, y^l_t) is the joint probability used to compute the conditional probabilities.
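A simple histogram (binned) estimator of Eq. (15) is sketched below, using the decomposition of the transfer entropy into four joint entropies (the same decomposition that appears for pTE in Eq. (19)). Embedding dimensions of one are used and bias correction is omitted; all names are illustrative:

```python
import numpy as np

def transfer_entropy(x, y, bins=8, lag=1):
    """Histogram estimate of T_{Y->X} (Eq. 15) with 1-dimensional embeddings:
    T = H(x+, x) + H(x, y) - H(x) - H(x+, x, y), in bits."""
    xd = np.digitize(x, np.histogram_bin_edges(x, bins))
    yd = np.digitize(y, np.histogram_bin_edges(y, bins))
    x_next, x_past, y_past = xd[lag:], xd[:-lag], yd[:-lag]

    def joint_entropy(*cols):
        # Shannon entropy of the joint distribution of the discrete columns
        states = np.stack(cols, axis=1)
        _, counts = np.unique(states, axis=0, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    return (joint_entropy(x_next, x_past) + joint_entropy(x_past, y_past)
            - joint_entropy(x_past) - joint_entropy(x_next, x_past, y_past))
```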

Phase transfer entropy (pTE). In Section 2.2 we already introduced the MSPC method, which is able to detect effective connectivity between two nodes using only the phases of the signals. There is another phase-based method, phase transfer entropy (pTE), which is based on entropy rather than synchronization. It was recently introduced as a phase-based variant of rTE, aiming at quantifying the transfer entropy between phase time series observed from the functional network [29]. In the frequency domain, the time-domain signals x_t and y_t can be written in complex form as:

x^s_{t,f} = A_t exp( i (f t + θ^{x,s}_t) )    (16)

y^l_{t,f} = A_t exp( i (f t + θ^{y,l}_t) )    (17)

where θ^{x,s}_t and θ^{y,l}_t are the s- and l-dimensional instantaneous phase delay vectors of the corresponding x^s_{t,f} and y^l_{t,f} delay vectors in the frequency domain. The pTE for the observed phase time series can be written as:

pTE_{Y→X} = Σ p(θ^x_{t+v}, θ^{x,s}_t, θ^{y,l}_t) log( p(θ^x_{t+v} | θ^{x,s}_t, θ^{y,l}_t) / p(θ^x_{t+v} | θ^{x,s}_t) )    (18)

where v denotes the prediction time and θ^x_{t+v} is the future state of the instantaneous phase to be predicted by pTE. In order to estimate the phase transfer entropy from Y to X, we can rewrite Eq. (18) as the sum of four entropies:

pTE_{Y→X} = H(θ^x_{t+v}, θ^{x,s}_t) + H(θ^{x,s}_t, θ^{y,l}_t) − H(θ^{x,s}_t) − H(θ^x_{t+v}, θ^{x,s}_t, θ^{y,l}_t)    (19)

Thus, the estimation problem involves computing the different joint and marginal probability distributions implicated in Eq. (19). There are many ways to estimate such probabilities, and their performance depends strongly on the characteristics of the data to be analyzed. See Hlavackova-Schindler et al. [24] for an in-depth review of possible estimation techniques.
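Conceptually, pTE applies the same estimator to instantaneous phase series instead of amplitudes. A minimal sketch (reusing the transfer_entropy function from the rTE example above, and ignoring the circular nature of phases and the prediction-time parameter):

```python
import numpy as np
from scipy.signal import hilbert

def phase_transfer_entropy(x, y, bins=8, lag=1):
    """Sketch of pTE: transfer entropy computed on instantaneous phases
    obtained from the analytic signal (x, y assumed band-pass filtered)."""
    phi_x = np.angle(hilbert(x))
    phi_y = np.angle(hilbert(y))
    return transfer_entropy(phi_x, phi_y, bins=bins, lag=lag)
```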


2.4 Error reduction ratio causality (ERRC)

The error-reduction-ratio causality (ERRC) test is a causality detection method which can detect time-varying linear and nonlinear causalities between two nodes without identifying a full model [52, 51]; this distinguishes it from the methods introduced in Section 2.3.

Let us assume that the relationship between x(t) and y(t) can be represented by an l-th order Volterra model with lagged inputs:

y_t = Σ b_1(k_1) x(t−k_1) + Σ b_2(k_1, k_2) x(t−k_1) x(t−k_2) + ... + Σ b_l(k_1, ..., k_l) x(t−k_1) ... x(t−k_l)    (20)

where b_l(k_1, ..., k_l) is the l-th order Volterra kernel; the Volterra series is one of the most popular nonlinear model classes. The linear-in-parameters form of Eq. (20) can be expressed as a linear regression:

y_t = Σ_{m=1}^{M} θ_m φ_m(t)    (21)

where the θ_m are unknown parameters to be estimated, M is the total number of potential model terms (i.e., regressors), and the φ_m(t) are model terms generated from the past information of x_t:

φ(t) = (x_t, x_{t−1}, ..., x_{t−d})^T    (22)

As with NARXGC, the adaptive-forward orthogonal least squares (OLS) algorithm is a central part of ERRC: it searches through all possible candidate model terms to select the most significant ones, which are then included in the model term by term. Eq. (21) can be written in matrix form as Y = P Θ, where Y collects the N output samples, Θ = (θ_1, ..., θ_M)^T, and the rows of P are P(t) = (φ_1(t), ..., φ_M(t)). The significance of each selected model term is measured by an index called the error reduction ratio (ERR), which indicates how much (in percentage) of the output variance is accounted for as each new model term is added.

To get better insight into the working principle behind ERRC, let us go through the mathematics behind the OLS algorithm introduced in the NARXGC section. The matrix P can be decomposed as P = W A, where

W = [w_1, w_2, ..., w_M],  with mutually orthogonal columns w_i = (w_i(1), ..., w_i(N))^T    (23)

and A is an upper triangular matrix with unity diagonal elements and entries a_ik (i < k) above the diagonal.    (24)

Therefore, Y = P Θ can be rewritten as Y = W G, where G = A Θ = (g_1, ..., g_M)^T. The estimates of the original parameters can be recovered by back-substitution:

θ_M = g_M,   θ_i = g_i − Σ_{k=i+1}^{M} a_ik θ_k,  i = M−1, ..., 1    (25)

The estimation of the parameters is thus based on a search procedure that determines the number of significant terms. To monitor the regressor search, the penalized error-to-signal ratio (PESR) index is used:

PESR_n = (1 / (1 − n/N)²) (1 − Σ_{i=1}^{n} [ERR]_i)    (26)

where n denotes the number of selected terms, N the total number of data samples, and ERR indicates the significance of each regressor term in terms of the reduction of the total mean squared error, defined as

ERR_i = g_i² E[w_i²(t)] / E[y²(t)]    (27)

The larger the value of ERR, the more significant the term will be in the final model. The final regressors are selected in this order of significance, and the search procedure stops when PESR_n reaches a minimum.

Assume that the φ_i (i = 1, ..., N_l) are the selected linear terms and the φ_j (j = N_l+1, ..., N_l+N_n) are the selected nonlinear terms, where N_l and N_n denote the number of linear and nonlinear terms, respectively. The linear and nonlinear ERR causality (ERRC) from X to Y can then be defined as

ERRC^(l)_{X→Y} = Σ_{i=1}^{N_l} ERR_i    (28)

ERRC^(n)_{X→Y} = Σ_{i=N_l+1}^{N_l+N_n} ERR_i    (29)

and the total ERRC from X to Y is defined as

ERRC^(t)_{X→Y} = Σ_{i=1}^{N_l+N_n} ERR_i    (30)
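The core computation behind Eqs. (23)-(27), obtaining ERR values for a set of candidate regressors via Gram-Schmidt orthogonalization, can be sketched as follows. This is a single fixed-order pass; the full adaptive-forward OLS of [52, 51] instead selects terms greedily by largest ERR and stops when the PESR index of Eq. (26) reaches its minimum. The standard squared form of the ERR numerator is assumed:

```python
import numpy as np

def err_values(y, candidates):
    """Error reduction ratio (ERR, Eq. 27) of each candidate regressor.
    candidates : (N, M) matrix whose columns are the candidate model terms."""
    y = np.asarray(y, float)
    P = np.asarray(candidates, float)
    W = np.zeros_like(P)
    err = np.zeros(P.shape[1])
    for i in range(P.shape[1]):
        w = P[:, i].copy()
        for k in range(i):
            # Gram-Schmidt: remove the components along previous orthogonal terms
            w -= (W[:, k] @ P[:, i]) / (W[:, k] @ W[:, k]) * W[:, k]
        W[:, i] = w
        g = (w @ y) / (w @ w)              # coefficient in the orthogonal basis
        err[i] = g**2 * (w @ w) / (y @ y)  # fraction of output energy explained
    return err
```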

2.5 Dynamic Causal Modeling

DCM is based on so-called generative models, i.e., sets of equations that quantify the mechanisms by which the observed data (e.g., fMRI, EEG/MEG) arising from a network of nodes are generated. The core idea behind DCM is to treat the functional network as a state-space model in which effective connectivity is mediated by hidden (unobservable) states of the nodes [15]. (The states are called hidden because we usually record only a set of observations, e.g., EEG and MEG, and do not have direct access to the dynamics of the underlying biophysical process.) This is also the main distinguishing feature of DCM compared with other methods.

The DCM framework has two main components: biophysical modeling and statistical data analysis. Biophysical modeling, in turn, encompasses a state equation and an observation equation [12].

The state equation describes how the hidden states x of the nodes are influenced by the exogenous input u, using an ODE, i.e., it describes the temporal evolution of the hidden nodal states:

ẋ = f(x, u, θ)    (31)

where ẋ is the rate of change (evolution) of the system's states x, f is the evolution function describing the mapping from the exogenous input to the nodal states, and θ is a set of unknown state parameters. The structure of f determines the presence or absence of edges between the nodes.

The observation equation specifies how the nodal states x produce the observable data y:

y = z(x, λ)    (32)

where z is the mapping from system states to observations and λ is a set of unknown observation parameters. We will refer to Eqs. (31) and (32) as the modeling component of DCM.

The statistical analysis entails model inversion in a Bayesian framework. Combining the likelihood function p(y|θ), which describes how likely it is to observe a particular set of observations y given the parameters θ = (θ, λ), with the priors p(θ) on the model parameters, which reflect knowledge about their likely range of values, we can derive both the marginal likelihood of the model (model evidence):

p(y) = ∫ p(y|θ) p(θ) dθ    (33)

and an estimator of the model parameters, using the posterior probability density p(θ|y):

θ̂ = ∫ θ p(θ|y) dθ,    p(θ|y) = p(y|θ) p(θ) / p(y)    (34)

where the estimator θ̂ is the mean of the posterior probability density. (This particular estimator, the minimum mean square error estimator, minimizes the mean square error E[(θ − θ̂)²]; other estimators, e.g., the maximum a posteriori estimator, are based on other decision-theoretic cost functions.) The model evidence is used for model comparison (e.g., different network structures embedded in different evolution functions f). The posterior density is used for inference on the model parameters (e.g., context-dependent modulation of effective connectivity). Eqs. (33) and (34) correspond to the statistical (model inversion) part of DCM, allowing comparison of different models and enabling inferences about the parameters of the best model.

Thus, the key outputs of the DCM framework are the evidence for different models and the posterior parameter estimates of the most plausible (best) model, in particular those describing the effective connectivity among the nodes of the functional network. For details on DCM, see [17], and for a critical analysis of DCM, see [12].
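To give a feel for the generative (forward) part of DCM, the sketch below integrates the bilinear neural state equation used in DCM for fMRI (Friston et al., 2003 [17]), dx/dt = (A + Σ_j u_j B_j) x + C u, with a simple Euler scheme. Real DCM inverts such a model within the Bayesian framework of Eqs. (33)-(34) rather than simulating it; the hemodynamic observation model is omitted and all names are illustrative:

```python
import numpy as np

def simulate_bilinear_states(A, B, C, u, dt=0.01):
    """Euler integration of dx/dt = (A + sum_j u_j B_j) x + C u (cf. Eq. 31).
    A : (n, n) fixed connectivity;  B : list of (n, n) modulatory matrices, one per input;
    C : (n, m) driving-input weights;  u : (T, m) exogenous inputs."""
    n = A.shape[0]
    x = np.zeros(n)
    states = np.zeros((u.shape[0], n))
    for t, u_t in enumerate(u):
        J = A + sum(u_t[j] * B[j] for j in range(len(B)))  # input-dependent coupling
        x = x + dt * (J @ x + C @ u_t)
        states[t] = x
    return states
```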

3 Discussion
This section illustrates the different underlying assumptions and limitations of each family of methods based on the criteria listed in the introduction. It should help the reader decide upon the most suitable methods for capturing effective nonlinear connectivity in the sensorimotor network.

3.1 Parametric vs nonparametric techniques

The different underlying assumptions of parametric and nonparametric techniques need to be considered when selecting one of these models for detecting effective nonlinear connectivity in the sensorimotor network. Parametric (model-based) techniques such as KGC, NARXGC and DCM depend heavily on full knowledge or estimation of a complete system model, which is often unknown for real systems. This can lead to complex modeling procedures. DCM, for example, is based on well-defined biophysical models of neural dynamics [17]. In this case, we should choose the best model (or set of competing models) and experiment with a large number of different parameters in order to test the hypothesis at hand. In the case of nonlinear GC methods (e.g., KGC and NARXGC), the main challenges are: 1) fixing the order of the nonlinear autoregressive model and 2) dealing adequately with overfitting of the parameters on short data lengths, so that the introduction of more parameters due to nonlinearity results in only a minimal loss of statistical power.

On the other hand, nonparametric methods such as MSPC, rTE and pTE do not need any assumption regarding the form of a model and are completely data driven. As a consequence, nonparametric methods generally tend to require longer data sets, or averaging over multiple realizations, to mitigate the effect of noise, whereas parametric methods typically work well for much shorter data lengths. In general, however, it is preferable to avoid having to assume an a priori complete model structure and full parameter estimation.

The ERRC method requires special attention when it comes to specifying the model. It is a parametric method, but at the same time it does not suffer from the main drawbacks of parametric modeling described above. For example, in comparison with traditional Granger causality methods, it expresses causality in terms of ranked ERR values without depending on full knowledge or estimation of a complete and unbiased system model. This is a significant advantage in situations where full nonlinear models, including noise models, would normally be required and are complex to fit. ERRC is therefore in between parametric and nonparametric methods, and this makes it a more powerful and robust effective connectivity detection method in comparison with other model-based methods [51].

3.2 Exogenous inputs considerations

Since we aim to detect effective connectivity in the sensorimotor network, it is important that connectivity measures take into account the exogenous or experimental input (i.e., the stimulus functions in conventional neuroimaging experiments). As we have seen in Section 2, measures such as MSPC, NARXGC, ERRC and DCM explicitly take the external input into account, while KGC and the TE-based methods (rTE and pTE) neglect the exogenous input [33, 6]. Thus, whereas KGC and TE are able to infer nonlinear effective connectivity between the nodes of a functional network, they are not suitable for quantifying effective connectivity in the sensorimotor network.

3.3 Nonstationarity considerations

In general, most of the methods presented assume stationarity, which means that the mean, variance and autocorrelation structure do not change over time. An EEG distribution is commonly modeled as a multivariate Gaussian process, even though its mean and covariance properties change from segment to segment; thus an EEG signal can be considered stationary only within short intervals (i.e., quasi-stationary). Signals sampled from the real world are rarely stationary and well behaved, and effective connectivity may become weaker or stronger over time. This is especially true during mental and physical activities, where the state of the brain can change in alertness and wakefulness. Among all the analyzed measures, only ERRC and NARXGC can track time-varying nonlinear effective connectivity between two signals [52, 53]. On the other hand, in conditions in which the stationarity assumption is violated, a stationarity-independent measure such as MSPC can also be used. In addition, a promising technique capable of decomposing a multivariate time series into its stationary and nonstationary parts, known as stationary subspace analysis, can be utilized to overcome the implicit stationarity constraints [49].

3.4 Multivariate modeling considerations

Most of the methods discussed so far are defined for two signals only: a functional relationship is obtained by pairwise analysis of bivariate signals. However, an increasing number of experiments are being carried out in which several neurophysiological signals are recorded simultaneously, and the assessment of the interdependence between signals can give new insights into the functioning of the systems that produce them. In addition, applying a bivariate method to each pair of signals from a multichannel set does not account for all the covariance structure in the full data set [38]. Therefore, it is necessary to make use of multivariate analysis.

Most of the techniques can easily be extended to the multivariate case, such as KGC, NARXGC, ERRC, DCM and TE. However, MSPC can detect effective connectivity only in the bivariate case, which means that further research is required to extend the algorithm to the multivariate case.

3.5 Time delay and effective connectivity

Effective connectivity in the nervous system is usually associated with a time delay, which describes the dynamic relationship between the nodes of the functional network. Among the measures presented, only MSPC and ERRC are able to provide quantitative insight into the time shift between two nodes. Previous studies have shown that a time delay between two signals can be an indication of effective connectivity between those signals [50]. However, F. S. Matias et al. [35] showed that the relative time delay does not always indicate the direction of causal influence, i.e., effective connectivity: they argued that it is possible to detect GC even when the time delay or relative phase lag is negative. This statement contradicts the original definition given by Granger, in which the cause should temporally precede the effect. Thus, further research is needed to better understand the relationship between time or phase lag and effective connectivity.


3.6 Application of effective nonlinear methods in the functional and sensorimotor networks
In this section I present studies providing empirical evidence for effective connectivity in the functional and sensorimotor networks obtained with the connectivity methods discussed in this paper. As will become evident below, there is no single optimal method for assessing effective connectivity; efficiency greatly depends on the application and on the underlying assumptions of each connectivity method.

Nonlinear Granger causality methods such as KGC and NARXGC are parametric methods and have mostly been applied to EEG data to infer effective connectivity between neural signals. Y. Zhao et al. demonstrated the effectiveness of NARXGC in predicting the onset of epileptic seizures from EEG data [53]. The practical usefulness of NARXGC in detecting nonlinear effective connectivity in fMRI data was shown in [28]. However, there have been almost no studies showing the reliability of these methods in detecting nonlinear effective connectivity in the sensorimotor network, where connectivity is inferred between EEG and EMG data. It should be noted that the main difference between KGC and NARXGC is that KGC assumes stationarity of the signals and neglects the experimental input causing the neural responses of the network, whereas NARXGC can track time-varying effective connectivity and explicitly takes the experimental design into account.

DCM is probably the most popular parametric method for detecting and mapping effective connectivity. It requires a priori knowledge of the exogenous input to the system. Strong experimental evidence for the practical reliability of applying DCM to identify effective connectivity in the human motor system was provided in [23, 11, 10]. Of course, there are also several limitations regarding the practical implementation of DCM, such as setting up an experimental design optimized for a DCM study and choosing the best model based on Bayesian model comparison. In contrast to KGC and NARXGC, DCM aims at identifying effective connectivity among the hidden (unobservable) neural states generating the induced responses. In other words, apart from detecting effective connectivity, DCM can reconstruct the underlying neurophysiological influence between the nodes from the observed data [17]. This is a key feature of DCM. For more on the similarities and differences between GC and DCM, see [45, 15].

Unlike the previous parametric methods, TE-based methods are completely data driven, and their practical usefulness in detecting effective nonlinear connectivity was empirically shown in [24, 8, 29]. In the previous section it was stated that TE assumes that the experimental input is unknown and therefore cannot provide a truly causal description of the functional and sensorimotor networks. However, the usefulness of TE in detecting effective connectivity in the sensorimotor network was justified in [48, 32, 31]. The main reason for these surprising results is that the practical usefulness of TE-based methods in the sensorimotor network depends strongly on the design of the experimental setup and on the selection of the observables and the state space [32].

MSPC was recently introduced by Y. Yang et al. [50] as a novel phase synchronization method. They demonstrated the application of MSPC to quantifying nonlinear connectivity between the periphery (the sensory stimulus and muscular activity) and the central nervous system (the brain activity). It was concluded that MSPC is an effective and reliable method for detecting and mapping the nonlinear effective connectivity from a periodic stimulus to the brain along the afferent and efferent pathways. So far, the main limitation of this method is that it cannot detect true effective connectivity in multichannel data.

ERRC is another novel method, which is in between parametric and nonparametric approaches. The application of ERRC to EEG data recorded from epilepsy patients was described in [52, 51]. Nevertheless, no attempt has been made to validate this model for identifying effective connectivity in the sensorimotor network. However, considering the main advantages of this method, which fully satisfy the requirements stated in the introduction, it is promising for providing reliable results in the sensorimotor network.

Table 1 summarizes the seven reviewed methods together with the criteria applied to them.


Effective nonlinear connectivity methods

Criteria                                MSPC  KGC  NARXGC  rTE  pTE  ERRC  DCM
Essential
  Takes into account exogenous input     X     -     X      -    -    X     X
Good to have
  Multivariate                           -     X     X      X    X    X     X
  Can detect time delay                  X     -     -      -    -    X     -
  Non-parametric                         X     -     -      X    X    -     -
  Can track time-varying structure       -     -     X      -    -    X     -

Table 1: Comparison of effective nonlinear connectivity methods based on the defined criteria. ERRC is actually in between parametric and non-parametric approaches, but since it is originally a model-based method it is assigned here to the parametric group. '-' represents the answer NO and 'X' represents the answer YES.

MSPC: multispectral phase coherence;
KGC: kernelized Granger causality;
NARXGC: nonlinear autoregressive with exogenous input Granger causality;
rTE: real-valued transfer entropy;
pTE: phase transfer entropy;
ERRC: error reduction ratio causality;
DCM: dynamic causal modeling.

4 Literature search methodology

In order to find the most relevant papers in this field, the following approach was applied. The literature question was divided into two main parts, namely "Nonlinear effective connectivity" and "Sensorimotor network". To extract more search queries from the literature question, several synonymous terms were created for each part (Table 2).

Nonlinear effective connectivity    Sensorimotor network
Nonlinear connectivity              motor task
Nonlinear causality                 task induced
Nonlinear brain connectivity        task specific
Nonlinear coupling                  experimental input
Effective connectivity              task dependent

Table 2: Search queries for the "Nonlinear effective connectivity" and "Sensorimotor network" parts of the literature question. Items related to the same part were connected with OR and items related to different parts were connected with AND.

I created multiple variations of the terms shown in Table 2 (e.g., "task induced nonlinear connectivity") and performed searches aiming to minimize the chance of missing relevant or important papers. If the outcome of a search was not satisfactory, the search terms were refined and the whole process started again.

The following criteria were applied to filter out irrelevant papers:

- Papers related mainly to EEG or electrophysiological studies in general.
- Papers related to nonlinear measures.
- Papers related to effective connectivity (i.e., causality) measures.

These three main criteria were applied to find the most important papers related to the topic, e.g., the oldest and newest research/review papers. As a result, search queries such as "task induced AND effective connectivity", "task specific AND nonlinear causality" and "motor task AND nonlinear coupling" provided the maximum number of hits and formed the first step in collecting the most relevant papers. However, it should be mentioned that those papers accounted for only about 30% of all important papers found; the rest (around 70%) were found through their reference lists and through citations of those papers by other authors.

I mainly used the following search engines: Web of Science, Scopus and Google Scholar. The most important journals on this topic are NeuroImage, Journal of Neuroscience Methods, Physical Review Letters and Clinical Neurophysiology.

5 Conclusion
The nodes (i.e., neural assemblies) of a functional network, among which we aim to find causal links, rarely display linear interdependencies. The identification of effective nonlinear connectivity between these nodes can therefore provide crucial insight into the dynamics of the sensorimotor network. In this report I have presented a survey of the most widely used and promising measures of effective nonlinear connectivity. The use of parametric/nonparametric, bivariate/multivariate, endogenous/exogenous-input and stationary/time-varying techniques allows the analysis of complex cortical interactions from different, novel perspectives. Nevertheless, the accuracy of the results depends highly on the underlying assumptions of each approach, as well as on the application under consideration. As we have seen, there is no single optimal method to universally assess effective connectivity. However, based on the essential requirement, it can be concluded that MSPC, NARXGC, ERRC and DCM are the most suitable methods for detecting and mapping experimentally induced effective nonlinear connectivity. Although the majority of these techniques are currently research-based, they may become clinically useful in the near future for evaluating cortical dysfunction (e.g., after stroke).


References
[1] Nicola Ancona, Daniele Marinazzo, and Sebastiano Stramaglia. Radial basis function approach to nonlinear Granger causality of time series. Physical Review E, 70(5):1-7, 2004.
[2] Colin Andrew and Gert Pfurtscheller. Event-related coherence as a tool for studying dynamic interaction of brain regions. Electroencephalography and Clinical Neurophysiology, 98(2):144-148, Feb 1996.
[3] L. A. Baccalá and K. Sameshima. Partial directed coherence: a new concept in neural structure determination. Biological Cybernetics, 84(6):463-474, 2001.
[4] Stuart N. Baker. Oscillatory interactions between sensorimotor cortex and the periphery. Current Opinion in Neurobiology, 17(6):649-655, Dec 2007.
[5] R. Bonita and R. Beaglehole. Recovery of motor function after stroke. Stroke, 19(12):1497-1500, Dec 1988.
[6] Steven L. Bressler and Anil K. Seth. Wiener-Granger causality: A well established methodology. NeuroImage, 58(2):323-329, 2011.
[7] Alex R. Carter, Gordon L. Shulman, and Maurizio Corbetta. Why use a connectivity-based approach to study stroke and recovery of function? NeuroImage, 62(4):2271-2280, 2012.
[8] Mario Chávez, Jacques Martinerie, and Michel Le Van Quyen. Statistical assessment of nonlinear causality: Application to epileptic EEG signals. Journal of Neuroscience Methods, 124(2):113-128, 2003.
[9] C. C. Chen, R. N. Henson, K. E. Stephan, J. M. Kilner, and K. J. Friston. Forward and backward connections in the brain: a DCM study of functional asymmetries. NeuroImage, 45(2):453-462, Apr 2009.
[10] C. C. Chen, J. M. Kilner, K. J. Friston, S. J. Kiebel, R. K. Jolly, and N. S. Ward. Nonlinear coupling in the human motor system. Journal of Neuroscience, 30(25):8393-8399, 2010.
[11] C. C. Chen, S. J. Kiebel, and K. J. Friston. Dynamic causal modelling of induced responses. NeuroImage, 41(4):1293-1312, Jul 2008.
[12] J. Daunizeau, O. David, and K. E. Stephan. Dynamic causal modelling: A critical review of the biophysical and statistical foundations. NeuroImage, 58(2):312-322, 2011.
[13] Luca Faes, Silvia Erla, and Giandomenico Nollo. Measuring connectivity in linear multivariate processes: Definitions, interpretation, and practical analysis. Computational and Mathematical Methods in Medicine, 2012, 2012.
[14] Pascal Fries. A mechanism for cognitive dynamics: neuronal communication through neuronal coherence. Trends in Cognitive Sciences, 9(10):474-480, Oct 2005.
[15] Karl Friston, Rosalyn Moran, and Anil K. Seth. Analysing connectivity with Granger causality and dynamic causal modelling. Current Opinion in Neurobiology, 23(2):172-178, Apr 2013.
[16] Karl J. Friston. Functional and effective connectivity in neuroimaging: A synthesis. Human Brain Mapping, 2(1-2):56-78, 1994.
[17] K. J. Friston, L. Harrison, and W. Penny. Dynamic causal modelling. NeuroImage, 19(4):1273-1302, Aug 2003.
[18] Christian Gerloff, Jacob Richard, Jordan Hadley, Andrew E. Schulman, Manabu Honda, and Mark Hallett. Functional coupling and regional activation of human cortical motor areas during simple, internally paced and externally paced finger movements. Brain, 121(8):1513-1531, 1998.
[19] R. E. Greenblatt, M. E. Pflieger, and A. E. Ossadtchi. Connectivity measures applied to human brain electrophysiological data. Journal of Neuroscience Methods, 207(1):1-16, May 2012.
[20] C. Grefkes and N. S. Ward. Cortical reorganization after stroke: How much and how functional? The Neuroscientist, 20(1):56-70, 2013.
[21] David M. Halliday, Bernard A. Conway, Simon F. Farmer, and Jay R. Rosenberg. Using electroencephalography to study functional coupling between cortical activity and electromyograms during voluntary contractions in humans. Neuroscience Letters, 241(1):5-8, Jan 1998.
[22] C. S. Herrmann. Human EEG responses to 1-100 Hz flicker: Resonance phenomena in visual cortex and their potential correlation to cognitive phenomena. Experimental Brain Research, 137(3-4):346-353, 2001.
[23] Damian M. Herz, Mark S. Christensen, Christiane Reck, Esther Florin, Michael T. Barbe, Carsten Stahlhut, Amande K. M. Pauls, Marc Tittgemeyer, Hartwig R. Siebner, and Lars Timmermann. Task-specific modulation of effective connectivity during two simple unimanual motor tasks: A 122-channel EEG study. NeuroImage, 59(4):3187-3193, 2012.
[24] Katerina Hlaváčková-Schindler, Milan Paluš, Martin Vejmelka, and Joydeep Bhattacharya. Causality detection based on information-theoretic approaches in time series analysis. Physics Reports, 441(1):1-46, 2007.
[25] M. J. Kaminski and K. J. Blinowska. A new method of the description of the information flow in the brain structures. Biological Cybernetics, 65(3):203-210, 1991.
[26] Rumyana Kristeva-Feige, Christoph Fritsch, Jens Timmer, and Carl-Hermann Lücking. Effects of attention and precision of exerted force on beta range EEG-EMG synchronization during a maintained motor contraction task. Clinical Neurophysiology, 113(1):124-131, Jan 2002.
[27] Gert Kwakkel, Boudewijn J. Kollen, Jeroen V. van der Grond, and Arie J. H. Prevo. Probability of regaining dexterity in the flaccid upper limb: Impact of severity of paresis and time since onset in acute stroke. Stroke, 34(9):2181-2186, 2003.

[28] Xingfeng Li, Guillaume Marrelec, Robert F Hess, and Habib Benali. A nonlinear identication
method to study eective connectivity in functional MRI.

Medical image analysis, 14(1):308,

feb 2010.
[29] Muriel Lobier, Felix Siebenhhner, Satu Palva, and J. Matias Palva. Phase transfer entropy:
A novel phase-based measure for directed connectivity in networks coupled by oscillatory
interactions.

NeuroImage, 85:853872, 2014.

[30] M. LUNGARELLA, K. ISHIGURO, Y. KUNIYOSHI, and N. OTSU.

METHODS FOR

QUANTIFYING THE CAUSAL STRUCTURE OF BIVARIATE TIME SERIES.

tional Journal of Bifurcation and Chaos, 17(03):903921, mar 2007.

Interna-

[31] Max Lungarella, Teresa Pegors, Daniel Bulwinkle, and Olaf Sporns. Methods for quantifying
the informational structure of sensory and motor data.

Neuroinformatics,

3(3):24362, jan

2005.
[32] Max Lungarella and Olaf Sporns. Mapping Information Flow in Sensorimotor Networks.

Computational Biology, 2(10):e144, 2006.

PLoS

[33] Daniele Marinazzo, Wei Liao, Huafu Chen, and Sebastiano Stramaglia. Nonlinear connectivity
by Granger causality.

NeuroImage, 58(2):330338, 2011.

[34] Daniele Marinazzo, Mario Pellicoro, and Sebastiano Stramaglia. Kernel method for nonlinear
Granger causality.

Physical Review Letters, 100(14):14, 2008.

[35] Fernanda S. Matias, Leonardo L. Gollo, Pedro V. Carelli, Steven L. Bressler, Mauro Copelli,
and Claudio R. Mirasso. Modeling positive Granger causality and negative phase lag between
cortical areas.

NeuroImage, 99:411418, oct 2014.

19

[36] Tatsuya Mima, Takahiro Matsuoka, and Mark Hallett. Information ow from the sensorimotor
cortex to muscle in humans.

Clinical Neurophysiology, 112(1):122126, jan 2001.

[37] J. Muthuswamy, D. L. Sherman, and N. V. Thakor. Higher-order spectral analysis of burst


patterns in EEG.

IEEE Transactions on Biomedical Engineering, 46(1):9299, 1999.

[38] Ernesto Pereda, Rodrigo Quian Quiroga, and Joydeep Bhattacharya. Nonlinear multivariate
analysis of neurophysiological signals.

Progress in Neurobiology, 77(1-2):137, 2005.

[39] Kristina Roiha, Erika Kirveskari, Markku Kaste, Satu Mustanoja, Jyrki P. Mkel, Oili Salonen, Turgut Tatlisumak, and Nina Forss. Reorganization of the primary somatosensory cortex
during stroke recovery.

Clinical Neurophysiology, 122(2):339345, 2011.

[40] Jan-Mathijs Schoelen, Robert Oostenveld, and Pascal Fries. Neuronal coherence as a mech-

Science (New York, N.Y.),

anism of eective corticospinal interaction.

308(5718):111113,

2005.
[41] T Schreiber. Measuring information transfer.

Physical review letters, 85(2):4614, 2000.

[42] Stephen H Scott. Optimal feedback control and the neural basis of volitional motor control.

Nature reviews. Neuroscience, 5(7):532546, 2004.

[43] C. J. Stam. Nonlinear dynamical analysis of EEG and MEG: Review of an emerging eld.

Clinical Neurophysiology, 116(10):22662301, 2005.

Clinical
neurophysiology : ocial journal of the International Federation of Clinical Neurophysiology,

[44] C J Stam and E C W van Straaten. The organization of physiological brain networks.
123(6):106787, jun 2012.

[45] Pedro a. Valdes-Sosa, Alard Roebroeck, Jean Daunizeau, and Karl Friston. Eective connectivity: Inuence, causality and biophysical modeling.

NeuroImage, 58(2):339361, 2011.

[46] Bernadette C M van Wijk, Peter J Beek, and Andreas Daertshofer.


within the motor system: what have we learned so far?

Neural synchrony

Frontiers in human neuroscience,

6(SEPTEMBER):252, jan 2012.


[47] Francisco Varela, Jean-philippe Lachaux, Eugenio Rodriguez, and Jacques Martinerie. The
BRAINWEB : Phase synchronization and large-sale integration.

Nature Reviews Neuroscience,

2(April), 2001.
[48] Raul Vicente, Michael Wibral, Michael Lindner, and Gordon Pipa. Transfer entropya modelfree measure of eective connectivity for the neurosciences.

science, 30(1):4567, feb 2011.

Journal of computational neuro-

[49] Paul Von B??nau, Frank C. Meinecke, Franz C. Kir??ly, and Klaus Robert M??ller. Finding
Stationary Subspaces in Multivariate Time Series.

Physical Review Letters, 103(21):14, 2009.

[50] Yuan Yang, Teodoro Solis-Escalante, Jun Yao, Andreas Daertshofer, Alfred C Schouten,
and Frans C T van der Helm. A General Approach for Quantifying Nonlinear Connectivity
in the Nervous System Based on Phase Coupling.

International journal of neural systems,

25(8):1550031, 2015.
[51] Yifan Zhao, Steve A Billings, Hua-liang Wei, and Ptolemaios G Sarrigiannis. A Parametric
Method to Measure Time-Varying Linear and Nonlinear Causality With Applications to EEG
Data. 60(11):31413148, 2013.
[52] Yifan Zhao, Steve A Billings, and Hualiang Wei. Tracking time-varying causality and directionality of information ow using an error reduction ratio test with applications to electroencephalography data. 051919:111, 2012.
[53] Yifan Zhao, Steve A Billings, Hualiang Wei, Fei He, and Ptolemaios G Sarrigiannis. Computational Neuroscience A new NARX-based Granger linear and nonlinear casual inuence detection method with applications to EEG data.

Journal of Neuroscience Methods, 212(1):7986,

2013.
