1. Reviewer (Gutachter):
2. Reviewer (Gutachter):
D 386
Acknowledgments
Firstly, I would like to thank my supervisor, Professor Oleg Iliev, for his support during the entire time of my PhD. Furthermore, I would like to thank Professor Yalchin Efendiev for his support and many fruitful discussions, and Frédéric Legoll for the great collaboration.
Additionally, I am grateful for the pleasant working environment provided by the Fraunhofer ITWM. Thank you to all my colleagues of the department SMS.
Last but not least, I would like to thank the Fraunhofer ITWM, the TU Kaiserslautern, the Deutsche Forschungsgemeinschaft (DFG Project IL 55/1-2) and the Innovationszentrum Applied System Modeling for the financial support.
Contents
1. Introduction
9. Numerical results
9.1. Finite volume test
9.2. Deterministic equations
9.3. Stochastic equations
9.3.1. Gaussian distributed random variables
9.3.2. Lognormal distributed random variables
9.4. H-matrix results
10. Preliminaries
10.1. Multi-level Monte Carlo method
10.2. Remark on same or independent samples
10.3. Definition of meshes and representative volume sizes
11. Multi-level Monte Carlo for the upscaled coefficients and their properties
11.1. Various RVE sizes and fixed fine mesh
11.2. Coarse and fine meshes
14. Numerical results
14.1. Numerical results of the homogenized coefficient
14.1.1. Numerical study of the convergence rate
14.1.2. One dimensional examples
14.1.3. Two dimensional example
14.2. Numerical results of the homogenized solution
14.2.1. One dimensional example
14.2.2. Two dimensional example
15. Preliminaries
15.1. Physical quantities
15.2. Two-phase flow and transport model
A. Notation
A.1. Notation Part I
A.2. Notation Part II
A.3. Notation Part III
B. Tables
1. Introduction
Multiple spatial scales occur in most real-life problems (e.g., groundwater flow, filtration processes). A direct numerical simulation of these problems is difficult, since one has to deal with huge problems if one resolves the finest scale. In order to overcome these difficulties, the idea is to approximate the solutions on a coarser grid with the help of homogenization or multiscale methods. In the homogenization approach, effective properties of the media (e.g., the effective permeability of the medium) which contain the small-scale features are constructed in each coarse grid block. In a multiscale method, the fine-scale features come into play through an appropriate choice of the basis functions. Both approaches reduce the computational costs.
Another difficulty arises since these multiscale problems are not purely deterministic. For example, the properties of the soil are known only at some snapshots, where samples of the soil have been pulled by drilling. Due to this incomplete knowledge, stochasticity has to be taken into account, which increases the complexity of the considered problems. A standard approach to deal with uncertainties is the Monte Carlo method [32]. Here one solves the problem for many realizations of the stochastic quantity (e.g., the permeability). For each realization the stochastic problem reduces to a deterministic one, to which the techniques for deterministic equations can be applied. Another popular approach is the polynomial chaos method [56]. In this approach one approximates a random function by orthogonal polynomials in a finite-dimensional random space.
Our objective is to compute the expectation of some functionals of the effective coefficient or of the macroscopic solution. The goal is to combine approaches that construct effective properties or multiscale basis functions with stochastic algorithms, in order to increase the accuracy in comparison to the Monte Carlo method while keeping the numerical costs fixed.
As a first step we focus on numerical homogenization. In this case we consider a stationary diffusion equation with a stochastic coefficient. Here we deal with the uncertainties via the Karhunen-Loève expansion in combination with a polynomial chaos expansion, or via a multi-level Monte Carlo method. Secondly, we use mixed multiscale finite element methods to cope with the hierarchy of spatial scales and a multi-level Monte Carlo method to handle the stochasticity. We apply this to multi-phase flow and transport equations.
The work is organized as follows. After an overview of numerical methods for stochastic computations (cf. Section 2), we state a stationary diffusion problem with a stochastic coefficient (cf. Section 3), which we consider in Part I and Part II, and introduce the homogenization theory (cf. Section 4).
In Part I we combine numerical homogenization with the Karhunen-Loève expansion in combination with a polynomial chaos expansion. With the help of the Karhunen-Loève expansion in combination with a polynomial chaos expansion, we reduce the stochastic problem to a high-dimensional deterministic one. By choosing the basis of the stochastic space appropriately, we are able to decouple the high-dimensional problem into many d-dimensional deterministic problems, where d denotes the space dimension. This idea was introduced in [33]. We combine this idea with homogenization and construct an approximation of the expectation of the effective coefficient with solutions of the decoupled deterministic equations. To construct the Karhunen-Loève expansion of a random field it is essential to compute the eigenpairs of a given covariance operator. In general this results in a large problem. In Section 5.3 we use hierarchical matrices (cf. [37]) to approximate the eigenpairs, similar to the approach in [44]; this reduces the computational time. To solve the equations we use a cell-centered finite volume method, which we describe in Section 7. We close the first part, as well as the other parts, with numerical results.
In Part II we use a multi-level Monte Carlo approach (cf. [38]) to deal with the randomness of the problem. We apply it both to approximate the expectation of the effective coefficient (cf. Section 11) and that of the homogenized coarse solution (cf. Section 13). To approximate the solution we introduce a weighted multi-level Monte Carlo method in Section 12. The idea of the multi-level Monte Carlo approach is to consider the quantity of interest (in our case the effective coefficient or the coarse solution) on different levels. As levels we consider different coarse mesh and domain sizes. The largest level (e.g., the finest mesh or the largest computational domain) is the level at which we want to approximate the expectation of the quantity of interest; it is the computationally most expensive and most accurate one. All levels are used to approximate the expectation of the quantity of interest at the largest level. In multi-level Monte Carlo methods the choice of the number of realizations per level is essential. One straightforward condition is that the number of realizations must decrease as the level increases: one solves only a few problems where they are computationally expensive, and compensates the high stochastic error with many problem solves in the less accurate cases (lower levels).
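The trade-off between per-level cost and per-level sample count can be sketched in a few lines. The following toy example is not from the thesis: it estimates E[exp(W)] for a standard Gaussian W, where "level l" stands for an approximation of increasing accuracy (here a Taylor expansion of increasing order), with independent samples and a decreasing number of realizations per level.

```python
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(42)

def Q(w, level):
    # level-l approximation of the quantity exp(w): Taylor series up to order level + 2
    return sum(w**k / factorial(k) for k in range(level + 3))

L = 4
# many cheap samples on the coarse levels, few on the expensive fine levels
N = [80000 // 4**l + 100 for l in range(L + 1)]

estimate = 0.0
for l in range(L + 1):
    w = rng.standard_normal(N[l])          # independent samples per level
    if l == 0:
        corr = Q(w, 0)                     # base level: plain Monte Carlo average
    else:
        corr = Q(w, l) - Q(w, l - 1)       # level correction E[Q_l - Q_{l-1}]
    estimate += float(np.mean(corr))
# estimate approximates E[exp(W)] = exp(1/2) for W ~ N(0, 1)
```

The corrections Q_l - Q_{l-1} have small variance, so few samples suffice on the accurate levels; the cheap base level absorbs most of the stochastic error.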
In Part III we apply the multi-level Monte Carlo method introduced in Part II to multi-phase flow and transport problems (cf. Section 15.2). In this case we are interested in the expectation of the water saturation. To solve the system of equations we use a mixed multiscale finite element method (cf. [29]) for the pressure equation and a standard implicit scheme to solve for the saturation (cf. Section 15.4). To deal with the uncertainties (here we consider a random permeability) we use ensemble level mixed multiscale finite element approaches (cf. Section 16). Here a different level denotes a different velocity approximation space in the mixed multiscale finite element method; the higher the level, the more accurate the approximation space.
2.2. Setting
In this section we present a more general setting in which we give an introduction to the generalized polynomial chaos method. We consider the following partial differential equation

L(x, u; W) = 0  in D,
B(x, u; W) = 0  on \partial D,   (2.1)

where W = (W_1, ..., W_N) is a vector of random variables with support \Gamma = \prod_{i=1}^N \Gamma_i \subset R^N. So we can replace the infinite-dimensional probability space by \Gamma \subset R^N and seek a solution u(x, W): D \times \Gamma \to R. Thus the key issue is to parameterize the input uncertainty by a finite set of random variables.
The univariate gPC basis polynomials \{\phi_m(W_i)\} are orthogonal with respect to the probability density \rho_i of W_i, i.e., for i = 1, ..., N,

\int_{\Gamma_i} \phi_m(W_i) \phi_n(W_i) \rho_i(W_i) \, dW_i = H_m^2 \, \delta_{mn},   (2.2)

with

H_m^2 = \int_{\Gamma_i} \phi_m^2(W_i) \rho_i(W_i) \, dW_i.

In the following we assume that the polynomials are normalized by an appropriate scaling, i.e., we have H_m^2 = 1 for all m. The scaling depends on the probability density function \rho_i. For example,
Distribution          Support               gPC basis polynomials
Continuous:
  Gaussian            (-\infty, \infty)     Hermite
  Gamma               [0, \infty)           Laguerre
  Beta                [a, b]                Jacobi
  Uniform             [a, b]                Legendre
Discrete:
  Poisson             {0, 1, 2, ...}        Charlier
  Binomial            {0, 1, ..., N}        Krawtchouk
  Negative Binomial   {0, 1, 2, ...}        Meixner
  Hypergeometric      {0, 1, ..., N}        Hahn

Table 1: Correspondence between the probability distribution and the type of gPC polynomial basis.
for a Gaussian distributed random variable W_i we have Hermite polynomials as basis, which is the classical polynomial chaos method. In Table 1 we present different probability distributions with their corresponding gPC basis polynomials.
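The orthogonality relation (2.2) is easy to check numerically. The following sketch (illustrative, not from the thesis) verifies it for the probabilists' Hermite polynomials against the standard Gaussian density using Gauss quadrature; before normalization, H_m^2 = m! for this family.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial

# Gauss quadrature nodes/weights for the weight exp(-W^2/2); renormalize the
# weights so they integrate against the standard Gaussian density rho(W)
W, a = hermegauss(20)
a = a / np.sqrt(2.0 * np.pi)

def He(m, t):
    # probabilists' Hermite polynomial of degree m
    c = np.zeros(m + 1); c[m] = 1.0
    return hermeval(t, c)

# (2.2): int He_m He_n rho dW = H_m^2 delta_mn with H_m^2 = m!
gram = np.array([[np.sum(a * He(m, W) * He(n, W)) for n in range(5)]
                 for m in range(5)])
```

Dividing each He_m by sqrt(m!) yields the normalized basis with H_m^2 = 1 assumed in the text.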
The N-variate gPC space of total degree at most R is

W_N^R := span\{ \prod_{i=1}^N \phi_{r_i}(W_i) : |r| \le R \},

where the product is over all possible combinations of the multi-index r = (r_1, ..., r_N) \in N_0^N. We denote the N-variate orthonormal polynomials from W_N^R by \{\Phi_m(W)\}; they are constructed as products of a sequence of one-dimensional orthonormal polynomials in each direction, i.e.,

\Phi_m(W) = \phi_{m_1}(W_1) \cdots \phi_{m_N}(W_N),   m_1 + \cdots + m_N \le R,

where m_i denotes the order of the univariate polynomial in the W_i direction, for 1 \le i \le N. The number of basis functions is

dim W_N^R = \binom{N+R}{N}.

From the orthonormality conditions of the univariate polynomials we get, for the expected value,

E[\Phi_m(W)\Phi_n(W)] = \int_\Gamma \Phi_m(W)\Phi_n(W)\rho(W) \, dW = \delta_{mn},   1 \le m, n \le dim W_N^R.
The orthogonal projection of a random function u onto W_N^R is

P_N^R u := u_N^R(x, W) = \sum_{m=1}^M \hat u_m(x) \Phi_m(W),   M = \binom{N+R}{N},   (2.3)

where P_N^R denotes the orthogonal projection operator from L^2(\Gamma) onto W_N^R and the \hat u_m are the Fourier coefficients

\hat u_m := E[u \, \Phi_m],   1 \le m \le M.   (2.4)

From classical approximation arguments it follows that P_N^R u is the best approximation in P_N^R, the linear space of N-variate polynomials of degree up to R, i.e., for any x \in D and u \in L^2(\Gamma),

\| u - P_N^R u \|_{L^2(\Gamma)} = \inf_{\psi \in P_N^R} \| u - \psi \|_{L^2(\Gamma)},

where the error is measured in the norm

\| u - u_N^R \|_{L^2(\Gamma)} = ( E[ (u(x, W) - u_N^R(x, W))^2 ] )^{1/2},   x \in D.
Statistics of the solution follow directly from the expansion coefficients. For the expectation,

E[u_N^R] = \int_\Gamma ( \sum_{m=1}^M \hat u_m \Phi_m(W) ) \rho(W) \, dW
= \hat u_1 \int_\Gamma \Phi_1(W)\rho(W) \, dW + \sum_{m=2}^M \hat u_m \int_\Gamma \Phi_m(W)\rho(W) \, dW
= \hat u_1,

since \Phi_1 \equiv 1 and \rho is a density (the first integral equals 1), while the remaining integrals vanish by orthogonality. Similarly, for the covariance,

Cov[u_N^R](x_1, x_2) = \sum_{m,n=2}^M \hat u_m(x_1) \hat u_n(x_2) E[\Phi_m \Phi_n] = \sum_{m=2}^M \hat u_m(x_1) \hat u_m(x_2),

using E[\Phi_m \Phi_n] = \delta_{mn}, and in particular the variance is

Var[u_N^R](x) = \sum_{m=2}^M \hat u_m^2(x).
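That mean and variance drop out of the coefficients can be checked on a toy function. The sketch below (illustrative, not from the thesis) expands u(W) = W^2 in the normalized Hermite basis and recovers E[u] = 1 and Var[u] = 2.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial

W, a = hermegauss(30)
a = a / np.sqrt(2.0 * np.pi)           # quadrature for the Gaussian density

def phi(m, t):
    # orthonormal Hermite basis: He_m / sqrt(m!)
    c = np.zeros(m + 1); c[m] = 1.0
    return hermeval(t, c) / np.sqrt(factorial(m))

u = lambda t: t ** 2                    # toy random function u(W) = W^2
# Fourier coefficients as in (2.4), computed by quadrature
uh = np.array([np.sum(a * u(W) * phi(m, W)) for m in range(6)])

mean = float(uh[0])                     # E[u] is the coefficient of phi_0 = 1
var = float(np.sum(uh[1:] ** 2))        # Var[u] from the remaining coefficients
```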
It remains to determine the coefficients of the expansion. The definition via Fourier coefficients (2.4) is not helpful here, since it requires the exact, unknown solution u(x, W). Consequently, we need an alternative way to estimate the coefficients. A typical approach is to employ a stochastic Galerkin approach. Here we seek an approximate gPC solution in the form of (2.3). The coefficients \{\hat u_m\} are obtained by satisfying the following weak form: for all v \in W_N^R,

E[ L(x, u_N^R; W) \, v(W) ] = 0.

This is a set of coupled deterministic PDEs for \{\hat u_m\}, and standard numerical techniques can be applied. However, one should keep in mind that if equation (2.1) takes a complicated form, the derivation of the Galerkin equations for the coefficients can become highly nontrivial, sometimes impossible.
Another option is stochastic collocation via Lagrange interpolation. Given nodes \{W^{(k)}\}_{k=1}^Q and the associated Lagrange polynomials L_k with L_k(W^{(j)}) = \delta_{kj}, one computes

u_k = u(x, W^{(k)}),   1 \le k \le Q,

and sets

Iu(x, W) = \sum_{k=1}^Q u_k(x) L_k(W),   x \in D.

Each u_k is the solution of the deterministic problem

L(x, u_k) = 0  in D,
B(x, u_k) = 0  on \partial D.

Thus we have to solve Q deterministic problems with realizations of the random vector. Here one can apply existing deterministic solvers, in contrast to the stochastic Galerkin method, where the resulting equations are generally coupled. Again we obtain statistics of the solution, e.g.,

E[u(x, W)] \approx E[Iu(x, W)] = \sum_{k=1}^Q u_k(x) \int_\Gamma L_k(W)\rho(W) \, dW.

In general random spaces these weights are not readily available and the formula is of little use.
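The "many independent deterministic solves" pattern is easy to illustrate. In the sketch below (not from the thesis) the deterministic solver is the closed-form solution of a 1D boundary value problem with a constant lognormal coefficient, and plain Monte Carlo averaging over realizations recovers the known expectation.

```python
import numpy as np

rng = np.random.default_rng(0)

def solve_deterministic(k):
    # exact midpoint value of the solution of -(k u')' = 1 on (0, 1),
    # u(0) = u(1) = 0, for a constant coefficient k: u(1/2) = 1/(8k)
    return 0.125 / k

Q = 20000
realizations = np.exp(rng.standard_normal(Q))      # lognormal coefficient samples
estimate = float(np.mean([solve_deterministic(k) for k in realizations]))
exact = 0.125 * np.exp(0.5)                        # E[1/K] = e^{1/2} for this lognormal K
```

Each realization is an independent deterministic solve, so an existing solver can be reused without modification.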
2.5.2. Pseudo-spectral gPC approach
In this approach we seek an approximate solution of (2.1) in the form of a gPC expansion, i.e., for any x \in D,

I_N^R u := v_N^R = \sum_{m=1}^M \hat v_m(x) \Phi_m(W),   M = \binom{N+R}{N},   (2.5)

where I_N^R is another projector from L^2(\Gamma) onto W_N^R and the expansion coefficients are determined by discrete projection,

\hat v_m(x) = \sum_{j=1}^Q u(x, W^{(j)}) \Phi_m(W^{(j)}) \alpha^{(j)},   m = 1, ..., M,

where \{W^{(j)}, \alpha^{(j)}\}_{j=1}^Q are a set of nodes and weights. These nodes and weights should be chosen in such a way that

\sum_{j=1}^Q f(W^{(j)}) \alpha^{(j)} \approx \int_\Gamma f(W)\rho(W) \, dW = E[f(W)].   (2.6)

The difference between the two projections,

\| I_N^R u - P_N^R u \|_{L^2(\Gamma)} = ( E[ (I_N^R u - P_N^R u)^2 ] )^{1/2},
Galerkin method — pros: offers the most accurate solutions involving the least number of equations in multi-dimensional random spaces.
Collocation method — cons: the aliasing error can be significant, especially for higher dimensional random spaces.

Table 2: Advantages and drawbacks of the stochastic collocation method and the stochastic Galerkin method.
we call the aliasing error. It is caused by the integration error in (2.6). This error can become a significant source of error in multi-dimensional random spaces.
The pseudo-spectral gPC method also requires only repeated deterministic solves with fixed realizations of the random inputs. In comparison to the gPC Galerkin method, the evaluations of the approximate gPC expansion coefficients are completely independent. This approach does not have the drawback of unavailable weights of the Lagrange interpolation approach mentioned before.
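A minimal pseudo-spectral sketch (illustrative, not from the thesis): the coefficients of u(W) = exp(W) in the normalized Hermite basis are computed via Gauss quadrature, as in (2.5)-(2.6), and compared with the analytically known values e^{1/2}/sqrt(m!).

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial

W, alpha = hermegauss(30)
alpha = alpha / np.sqrt(2.0 * np.pi)     # nodes W^(j) and weights alpha^(j), cf. (2.6)

def phi(m, t):
    # orthonormal Hermite basis phi_m = He_m / sqrt(m!)
    c = np.zeros(m + 1); c[m] = 1.0
    return hermeval(t, c) / np.sqrt(factorial(m))

R = 8
u = np.exp                                # toy "solution" u(W) = exp(W)
# discrete projection: each coefficient is an independent weighted sum of solves
v = np.array([np.sum(alpha * u(W) * phi(m, W)) for m in range(R + 1)])
# known reference coefficients: E[exp(W) phi_m(W)] = exp(1/2) / sqrt(m!)
exact = np.array([np.exp(0.5) / np.sqrt(factorial(m)) for m in range(R + 1)])
```

With too few nodes the quadrature error in (2.6) would pollute the coefficients, which is exactly the aliasing error discussed above.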
A further source of uncertainty is a random domain, i.e., we consider

L(x, u) = 0  in D(W),
B(x, u) = 0  on \partial D(W).   (2.7)

For simplicity we assume the only source of uncertainty to be in the definition of the boundary. The idea is to use a one-to-one mapping to transform the random domain into a deterministic one. Let

\xi = \xi(x, W),   x = x(\xi, W),   W \in \Gamma,

be this mapping and its inverse, such that the random domain D(W) is transformed into a deterministic domain D_det \subset R^d with the coordinates \xi = (\xi_1, ..., \xi_d). Then (2.7) is transformed into the following problem: for all W \in \Gamma, find u = u(\xi, W): D_det \to R such that

\tilde L(\xi, u; W) = 0  in D_det,
\tilde B(\xi, u; W) = 0  on \partial D_det,   (2.8)

where the operators L and B are transformed into \tilde L and \tilde B, respectively. The problem (2.8) is a stochastic PDE in a fixed domain and we can apply all the techniques mentioned before.
We consider the stationary diffusion problem

-div( K^\epsilon(x, \omega) \nabla u^\epsilon(x, \omega) ) = f(x)  in D,   u^\epsilon = g  on \partial D,   (3.1)

where K^\epsilon(x, \omega) is a homogeneous random field, f and g are non-random functions and \epsilon is a small parameter. We define, for all \omega \in \Omega,

k_min(\omega) = \min_{x \in \bar D} K^\epsilon(x, \omega)   and   k_max(\omega) = \max_{x \in \bar D} K^\epsilon(x, \omega).

Here we denote, as in [18], with C^t(\bar D) the space of Hölder continuous functions and with H^{-t}(D) the dual space of the Sobolev space H_0^t(D), for 0 < t \le 1. We assume K^\epsilon(x, \omega) \in L^p(\Omega, C^t(\bar D)) for some 0 < t < 1 and for all p \in (0, \infty), g \in H^1(D) and f \in H^{t-1}(D). Furthermore we assume k_min > 0 almost surely and k_min^{-1}, k_max \in L^p(\Omega) for all p \in (0, \infty). Then (3.1) has a unique solution u^\epsilon, which belongs to L^p(\Omega, H_0^1(D)) for all p (cf. [18]). Note that the assumptions on the coefficient are fulfilled for a log-normal or Gaussian random field.
4. Homogenization
4.1. Homogenization for a deterministic diusion problem
Let D be a bounded domain in R^d with smooth boundary \partial D. Then a stationary diffusion problem reads

-div( K(x/\epsilon) \nabla u^\epsilon ) = f(x),   x \in D,   (4.1)
u^\epsilon = g(x),   x \in \partial D,

where the coercive and bounded coefficient K is Y-periodic in R^d with periodicity cell

Y = \{ y = (y_1, ..., y_d) : 0 < y_i < 1 for i = 1, ..., d \}

and \epsilon is a scale parameter. As \epsilon decreases, the oscillation rate of the coefficient increases. Thus it is natural to ask about the behavior of the solution u^\epsilon as \epsilon \to 0.
4.1.1. Formal asymptotic expansion
To derive the limit problem in a formal way, one starts from the ansatz (cf. [26, 42]) that the unknown function u^\epsilon has an asymptotic expansion with respect to \epsilon of the form

u^\epsilon(x) = \sum_{i=0}^\infty \epsilon^i u_i(x, y),   y = x/\epsilon,

where the coefficient functions u_i(x, y) are Y-periodic with respect to the variable y = x/\epsilon. The spatial derivative then splits according to \nabla = \nabla_x + \epsilon^{-1} \nabla_y, where the subscripts denote the partial derivatives with respect to x and y, respectively. If we plug the asymptotic expansion into equation (4.1) and use this rule, we get the formula

- \epsilon^{-2} \, div_y( K(y) \nabla_y u_0(x, y) )
- \epsilon^{-1} [ div_y( K(y)[\nabla_y u_1(x, y) + \nabla_x u_0(x, y)] ) + div_x( K(y) \nabla_y u_0(x, y) ) ]
- \epsilon^0 [ div_y( K(y)[\nabla_y u_2(x, y) + \nabla_x u_1(x, y)] ) + div_x( K(y)[\nabla_y u_1(x, y) + \nabla_x u_0(x, y)] ) ]
- \sum_{i=1}^\infty \epsilon^i [ \cdots ] = f(x).

The next step consists of comparing the coefficients of the different powers of \epsilon on both sides of this equation. The term with \epsilon^{-2} gives

div_y( K(y) \nabla_y u_0(x, y) ) = 0   for (x, y) \in D \times Y.

Because of the Y-periodicity, the weak formulation with u_0(x, y) as test function reads

\int_Y K(y) \nabla_y u_0(x, y) \cdot \nabla_y u_0(x, y) \, dy = 0,

and therefore u_0(x, y) = u_0(x) is a function of x alone, independent of y. Using this and the term with \epsilon^{-1}, we obtain

div_y( K(y) \nabla_y u_1(x, y) ) = -div_y( K(y) \nabla_x u_0(x) )   for y \in Y.
If we use the identity

\nabla_x u_0(x) = \sum_{j=1}^d e_j \frac{\partial u_0}{\partial x_j}(x),

we can write

div_y( K(y) \nabla_y u_1(x, y) ) = -\sum_{j=1}^d \frac{\partial K}{\partial y_j}(y) \frac{\partial u_0}{\partial x_j}(x)   for y \in Y.

Now, for j = 1, ..., d, we introduce the cell problem with the Y-periodic solution \chi^j,

div_y( K(y)( e_j + \nabla_y \chi^j(y) ) ) = 0   in Y.   (4.2)

With these cell solutions we can write

u_1(x, y) = \sum_{j=1}^d \frac{\partial u_0}{\partial x_j}(x) \chi^j(y) + \tilde u_1(x),

so that

\nabla_y u_1(x, y) = \sum_{j=1}^d \frac{\partial u_0}{\partial x_j}(x) \nabla_y \chi^j(y)   (4.3)
and

div_y( K(y) \nabla_y u_1(x, y) )
= div_y( K(y) \sum_{j=1}^d \frac{\partial u_0}{\partial x_j}(x) \nabla_y \chi^j(y) )
= \sum_{j=1}^d \frac{\partial u_0}{\partial x_j}(x) \, div_y( K(y) \nabla_y \chi^j(y) )
\overset{(4.2)}{=} -\sum_{j=1}^d \frac{\partial u_0}{\partial x_j}(x) \, div_y( K(y) e_j )
= -\sum_{j=1}^d \frac{\partial u_0}{\partial x_j}(x) \frac{\partial K}{\partial y_j}(y).
Integrating the \epsilon^0 equation over Y gives

-\int_Y div_y( K(y)[\nabla_y u_2(x, y) + \nabla_x u_1(x, y)] ) \, dy - \int_Y K(y) \, div_x \nabla_y u_1(x, y) \, dy - ( \int_Y K(y) \, dy ) \Delta_x u_0(x) = f(x),

because the volume of Y is 1. The divergence theorem applied to the first integral leads to

\int_Y div_y( K(y)[\nabla_y u_2 + \nabla_x u_1] ) \, dy = \int_{\partial Y} K(y)[\nabla_y u_2 + \nabla_x u_1] \cdot \nu \, ds,

where \nu is the normal vector on \partial Y. This boundary integral vanishes because of the Y-periodicity of the functions K(y), u_1(x, y) and u_2(x, y). For the second term we use equation (4.3) and get

div_x \nabla_y u_1(x, y) = \sum_{i,j=1}^d \frac{\partial \chi^j}{\partial y_i}(y) \frac{\partial^2 u_0}{\partial x_i \partial x_j}(x).

Inserting this and defining the effective coefficient

K^*_{ij} = \int_Y K(y) ( \delta_{ij} + \frac{\partial \chi^j}{\partial y_i}(y) ) \, dy,

we arrive at

-\sum_{i,j=1}^d K^*_{ij} \frac{\partial^2 u_0}{\partial x_i \partial x_j}(x) = f(x).
This elliptic differential equation is the homogenized limit of equation (4.1).

In the stochastic case one considers a stationary and ergodic random coefficient K(y, \omega), y \in R^d. It is known that there exists a constant matrix K^* such that, if u^*(x) is the solution of the deterministic Dirichlet problem

-div( K^* \nabla u^* ) = f  in D,   u^* = g  on \partial D,   (4.6)

then

\int_D E|u^\epsilon - u^*|^2 \, dx \to 0   as \epsilon \to 0,   (4.7)

where E denotes the expectation. In this case the effective coefficient is defined as

[K^*]_{ij} = E[ ( e_i + \nabla \chi^i(y, \omega) )^T K(y, \omega) e_j ].

Hereby \chi^p is the (unique up to a random constant) solution of

div( K(y, \omega)( \nabla \chi^p(y, \omega) + p ) ) = 0  in R^d,
\nabla \chi^p is stationary,   E[\nabla \chi^p] = 0.   (4.8)
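In one space dimension the cell problem can be integrated exactly and the effective coefficient reduces to the harmonic mean of K over the cell. A quick numerical check (illustrative, not from the thesis):

```python
import numpy as np

# midpoint rule on the periodicity cell Y = (0, 1); for smooth periodic
# integrands this rule is spectrally accurate
n = 100000
y = (np.arange(n) + 0.5) / n
K = 2.0 + np.sin(2.0 * np.pi * y)        # oscillatory coefficient with mean 2

K_eff = 1.0 / np.mean(1.0 / K)           # harmonic mean = 1D homogenized coefficient
K_arith = float(np.mean(K))              # the naive arithmetic mean overestimates it
# exact value: 1 / int_0^1 dy / (2 + sin(2 pi y)) = sqrt(3)
```

The gap between K_eff and K_arith illustrates why simply averaging the coefficient over a coarse cell is not a valid upscaling.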
(Figure 1: illustration of an RVE attached to a macroscopic point x, on which a local cell problem is solved.)
At each macroscopic point x a local problem

div( K(x, y, \omega) \nabla \chi^i ) = 0  in Y_x^\delta,   i = 1, ..., d,   (4.9)

is solved subject to some boundary conditions. The boundary conditions are not very essential if there is a scale separation. For simplicity we will consider Dirichlet boundary conditions for the local problem,

\chi^i = y_i   on \partial Y_x^\delta.

Additionally it is possible to use Neumann or periodic boundary conditions (see [16, 43]). Then the homogenized coefficients are computed as

K^*(x, \omega) e_i = \frac{1}{\delta^d} \int_{Y_x^\delta} K(x, y, \omega) \nabla \chi^i \, dy,   (4.10)

where e_i (i = 1, ..., d) is a unit vector in the direction i. This procedure is repeated at every macroscopic point (see Figure 1 for illustration). It is known ([53]) that

e_j \cdot K^*(x, \omega) e_i = \frac{1}{\delta^d} \int_{Y_x^\delta} \nabla \chi^j \cdot K(x, y, \omega) \nabla \chi^i \, dy.
This is a practical way to obtain a converging approximation of the homogenized matrix, posed on the rescaled local domain Y_x^\delta. For a fixed \delta > 0 the approximated coefficient K^{\epsilon,\delta} converges for \epsilon \to 0 to K^* a.s., and analogously for \delta \to \infty. We denote the local homogenization procedure by H^{\epsilon,\delta}, i.e.,

e_j \cdot K^{\epsilon,\delta}(x, \omega) e_i = \frac{1}{\delta^d} \int_{Y_x^\delta} \nabla \chi^j \cdot K(x, y/\epsilon, \omega) \nabla \chi^i \, dy.   (4.13)

To give a precise convergence rate, another condition besides the ergodicity is needed. If one assumes that the matrix K(x, \omega) decorrelates at large distances, then one can obtain a convergence rate. This is briefly discussed in the next section.
The decorrelation is quantified by a mixing condition (4.14), where the random quantity \xi is \sigma(A_j)-measurable and \eta is \sigma(A_i)-measurable, and q = \inf\{ |x - y| : x \in A_j, y \in A_i \} for i \ne j. Note that we denote the Euclidean norm by |\cdot|, as well as the absolute value. In the following we assume, in addition to ergodicity, that this condition is fulfilled for the coefficient K(x, \omega) with

b(q) \le \frac{C}{q^k}   (4.15)

for some k > 0. That means that the coefficient decorrelates at large distances. We note that (4.14) and (4.15) imply strong mixing for the coefficient K. It has been shown in [16] that

E( ||| K^{\epsilon,\delta} - K^* ||| ) \le C \, (\epsilon/\delta)^\beta   (4.16)

for some \beta > 0 and C > 0 that depend on the correlation rate, but are independent of \epsilon and \delta, and where ||| \cdot ||| denotes any norm of the d \times d matrices.
To prove this, one introduces a penalized (disturbed) problem in R^d,

\rho \chi_\rho^p - div( K(y, \omega)( \nabla \chi_\rho^p(y, \omega) + p ) ) = 0  in R^d,   (4.17)

with \rho > 0. In comparison to (4.8), this problem has a unique solution \chi_\rho \in (H_{loc}^1(R^d))^d which is stationary not only in its gradient. With \chi_\rho^{\epsilon,\delta} we denote the solution of (4.17) in Y^\delta = (0, \delta)^d and define

K_\rho^{\epsilon,\delta} = \frac{1}{\delta^d} \int_{Y^\delta} K(y, \omega)( \nabla \chi_\rho + Id ) \, dy.

Splitting the error with the triangle inequality,

E( ||| K^{\epsilon,\delta} - K^* ||| ) \le E( ||| K^* - K_\rho^* ||| ) + E( ||| K_\rho^* - K_\rho^{\epsilon,\delta} ||| ) + E( ||| K_\rho^{\epsilon,\delta} - K^{\epsilon,\delta} ||| ),

and estimating each term separately, one obtains (4.16).
Part I.
In Part I we consider the stationary diffusion problem

-div( K(x/\epsilon, \omega) \nabla u^\epsilon(x, \omega) ) = f(x),   x \in D,
u^\epsilon = g,   x \in \partial D,

for P-a.e. \omega \in \Omega, with a \sigma-finite probability space (\Omega, F, P). We assume that the known information about the coefficient includes its expected value (y = x/\epsilon),

E_K(y) := E[K(y, \omega)] = \int_\Omega K(y, \omega) \, dP(\omega),   y \in Y.
A straightforward approach is Monte Carlo: for each realization K_n of the coefficient we solve the cell problems

div( K_n(y) \nabla \chi_i^{MC,n} ) = 0  in Y,   \chi_i^{MC,n} = y \cdot e_i  on \partial Y,   (5.1)

compute the upscaled coefficient

[K^{*,MC}_n]_{ij} = \int_Y \nabla \chi_i^{MC,n}(y) \cdot K_n(y) \nabla \chi_j^{MC,n}(y) \, dy,   (5.2)

and average over the realizations,

K^{*,MC} = \frac{1}{N} \sum_{n=1}^N K^{*,MC}_n.
Alternatively, the stochastic coefficient can be represented by its Karhunen-Loève expansion

K(y, \omega) = E_K(y) + \sum_{m=1}^\infty \sqrt{\lambda_m} \, \varphi_m(y) X_m(\omega),   (5.3)

where the random variables X_m are centered at 0 and pairwise uncorrelated. In addition we assume independence of the random variables and \| X_m \|_{L^\infty(\Omega, dP)} \le c_x for all m. With (\lambda_m, \varphi_m)_{1 \le m} we denote the sequence of eigenpairs of the covariance operator, i.e.,

\int_Y \int_Y cov(y, y') \varphi_m(y') v(y) \, dy' \, dy = \lambda_m \int_Y \varphi_m(y) v(y) \, dy   for all v \in L^2(Y).
For the numerical approximation we replace L^2(Y) by a finite dimensional subspace S_h and seek discrete eigenpairs (\lambda_m^h, \varphi_m^h) with

\int_Y \int_Y cov(y, y') \varphi_m^h(y') v(y) \, dy' \, dy = \lambda_m^h \int_Y \varphi_m^h(y) v(y) \, dy   for all v \in S_h.   (5.4)

With a basis (\psi_n)_{1 \le n \le N_h} of S_h this is equivalent to

\int_Y \int_Y cov(y, y') \varphi_m^h(y') \psi_n(y) \, dy' \, dy = \lambda_m^h \int_Y \varphi_m^h(y) \psi_n(y) \, dy,   1 \le n \le N_h.

Additionally we write

\varphi_m^h(y) = \sum_{j=1}^{N_h} \xi_j^m \psi_j(y),

which leads to the generalized matrix eigenvalue problem

\sum_{j=1}^{N_h} \xi_j^m \int_Y \int_Y cov(y, y') \psi_j(y') \psi_n(y) \, dy' \, dy = \lambda_m^h \sum_{j=1}^{N_h} \xi_j^m \int_Y \psi_j(y) \psi_n(y) \, dy,   1 \le n \le N_h,   (5.5)

i.e., K \xi^m = \lambda_m^h M \xi^m,
with

(K)_{ij} = \int_Y \int_Y cov(y, y') \psi_i(y) \psi_j(y') \, dy \, dy',   (M)_{ij} = \int_Y \psi_i(y) \psi_j(y) \, dy.

These matrices are symmetric and positive semi-definite. If we choose an orthogonal basis \psi_n \in S_h, M is diagonal. Then we achieve, by multiplying with M^{-1/2},

M^{-1/2} K M^{-1/2} \tilde\xi^m = \lambda_m^h \tilde\xi^m,   \tilde\xi^m = M^{1/2} \xi^m,

and finally the standard symmetric eigenvalue problem

\tilde K \tilde\xi^m = \lambda_m^h \tilde\xi^m,   \tilde K := M^{-1/2} K M^{-1/2}.
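The reduction to a standard symmetric eigenvalue problem can be sketched directly. The example below (illustrative, not from the thesis) uses an exponential covariance on (0, 1) and a piecewise-constant basis, for which M is diagonal; the rapid eigenvalue decay is what makes a truncated expansion effective.

```python
import numpy as np

# exponential covariance cov(y, y') = exp(-|y - y'| / ell) on Y = (0, 1),
# discretized with n piecewise-constant basis functions; this basis is
# orthogonal, so M = h * I and the generalized problem K xi = lambda M xi
# reduces to the standard symmetric problem for the matrix h * C
n, ell = 200, 0.3
h = 1.0 / n
y = (np.arange(n) + 0.5) / n
C = np.exp(-np.abs(y[:, None] - y[None, :]) / ell)

lam = np.linalg.eigvalsh(h * C)[::-1]    # eigenvalues lambda_m^h, largest first
total = float(np.sum(lam))               # approximates int_Y cov(y, y) dy = 1
```

The eigenvalue sum reproduces the total variance of the field, and only a handful of modes carry most of it.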
With

K_M^{KL}(y, \omega) = E_K(y) + \sum_{m=1}^M \sqrt{\lambda_m} \, \varphi_m(y) X_m(\omega)   (5.6)

we denote the truncated Karhunen-Loève expansion of the stochastic coefficient. With K_M^{KL} we associate the (M, 1) polynomial chaos expansion

K_M^{KL}(y, z) = K_M^{KL}(y, z_1, z_2, ..., z_M) = E_K(y) + \sum_{m=1}^M \sqrt{\lambda_m} \, \varphi_m(y) z_m,   (5.7)

for y \in Y and z \in I^M, with I = [-1/2, 1/2]. The corresponding cell solutions satisfy the boundary condition

\chi_i^{KL} = y \cdot e_i   on \partial Y.   (5.8)
19
KL
i
div KM (y, z) KL (y, z) + ei
KL
i
1 1
y Y, z I M = ,
2 2
= 0,
y Y
(5.9)
with KL = KL y ei .
i
i
This is a high dimensional deterministic problem where we can calculate the upscaled coecient.
However, instead of finite elements or volumes, we approximate the solution with respect to z via polynomials. Therefore we define, for r \in N_0, the space of polynomials of degree at most r,

P_r := span\{1, t, t^2, ..., t^r\} \subset L^2(I),

and, for r = (r_1, r_2, ..., r_M) \in N_0^M, a polynomial space by

P_r := P_{r_1} \otimes P_{r_2} \otimes \cdots \otimes P_{r_M} \subset L^2(I^M).

This semi-discretization can be solved with any polynomial basis of P_r. Because in general it is a coupled system of deterministic equations, the computational costs are high. In order to reduce them, we make the following ansatz to decouple the problem: we denote, for 1 \le m \le M and r_m \in N_0, by (\lambda_j^{r_m}, P_j^{r_m})_{0 \le j \le r_m} the eigenpairs of the symmetric bilinear form

(u, v) \mapsto \int_{-1/2}^{1/2} u(t) v(t) \, t \, d\mu_m(t)

over P_{r_m} := span\{1, t, t^2, ..., t^{r_m}\}. Let the eigenpairs be orthonormal with respect to the weight \mu_m. Thereby we denote by \mu_m the law of the random variable X_m,

\mu_m(B) := P(X_m \in B)   for any Borel set B \subset I,

and, for all M \ge 1, we define a probability measure on I^M by

\mu := \mu_1 \otimes \mu_2 \otimes \cdots \otimes \mu_M.
The eigenproblem reads

\int_I P(t) P_j^{r_m}(t) \, t \, d\mu_m(t) = \lambda_j^{r_m} \int_I P(t) P_j^{r_m}(t) \, d\mu_m(t)   for all P \in P_{r_m},  0 \le j \le r_m.

Writing P(t) = \sum_{i=0}^{r_m} p_i \phi_i(t) in a basis (\phi_i) of P_{r_m}, we obtain the generalized matrix eigenvalue problem

A p = \lambda_j^{r_m} B p,   0 \le j \le r_m,   (5.10)

with

(A)_{ij} := \int_I \phi_i(t) \phi_j(t) \, t \, d\mu_m(t)   and   (B)_{ij} := \int_I \phi_i(t) \phi_j(t) \, d\mu_m(t),   1 \le m \le M.   (5.11)

For j \le r we set

P_j^r(z) := \prod_{m=1}^M P_{j_m}^{r_m}(z_m).

P_{j_m}^{r_m} is a polynomial in z_m, so P_j^r is a polynomial in z = (z_1, z_2, ..., z_M), and

P_r = span\{ P_j^r \mid j \le r \}.

Then (P_j^r)_{j \le r} is the basis of P_r we use to decouple the semi-discrete problem.
We find (cf. [33], Proposition 4.17):

Theorem 5.1
For a given r \in N_0^M, let \chi_i^{KL} be the solution of (5.8), approximated by polynomials with respect to the second variable z. For every multi-index j \le r we denote by \chi_{i,j}^{KL} the solution of the deterministic problem

div( \bar K_M^{KL,j}(y) \nabla \chi_{i,j}^{KL}(y) ) = f_{i,j}^{KL}(y),   y \in Y,
\chi_{i,j}^{KL} = 0,   y \in \partial Y,   (5.12)

with

\bar K_M^{KL,j}(y) := E_K(y) + \sum_{m=1}^M \sqrt{\lambda_m} \, \varphi_m(y) \, \lambda_{j_m}^{r_m},   (5.13)

\hat K_M^{KL,j}(y) := E_K(y) + \sum_{m=1}^M \sqrt{\lambda_m} \, \varphi_m(y) \, \frac{ \int_I P_{j_m}^{r_m}(z_m) \, z_m \, d\mu_m(z_m) }{ \int_I P_{j_m}^{r_m}(z_m) \, d\mu_m(z_m) },   (5.14)

\bar P_j^r := \int_{I^M} P_j^r(z) \, d\mu(z) = \prod_{m=1}^M \int_I P_{j_m}^{r_m}(z_m) \, d\mu_m(z_m),   (5.15)

f_{i,j}^{KL}(y) := \bar P_j^r \, div( \hat K_M^{KL,j}(y) \, e_i ).   (5.16)

Then

\chi_i^{KL}(y, z) = \sum_{j \le r} \chi_{i,j}^{KL}(y) P_j^r(z) + y \cdot e_i.   (5.17)

So, for a given r \in N_0^M, we have to solve a deterministic cell problem in each direction for every multi-index j \le r.
Next we determine the expected value of the upscaled coefficient. For fixed z the upscaled coefficient is

[K^{*,KL}(z)]_{lk} = \int_Y \nabla \chi_l^{KL}(y, z) \cdot K_M^{KL}(y, z) \nabla \chi_k^{KL}(y, z) \, dy.

Its expectation is

E[ [K^{*,KL}]_{lk} ] = \int_{I^M} e_l \cdot K^{*,KL}(z) e_k \, d\mu(z)
= \int_{I^M} \int_Y \nabla \chi_l^{KL} \cdot K_M^{KL}(y, z) \nabla \chi_k^{KL} \, dy \, d\mu(z)
\overset{(5.17)}{=} \int_{I^M} \int_Y ( \sum_{j \le r} \nabla \chi_{l,j}^{KL}(y) P_j^r(z) + e_l ) \cdot K_M^{KL}(y, z) ( \sum_{j' \le r} \nabla \chi_{k,j'}^{KL}(y) P_{j'}^r(z) + e_k ) \, dy \, d\mu(z).

Expanding the product yields four contributions,

I_1 := \int_{I^M} \int_Y ( \sum_{j \le r} \nabla \chi_{l,j}^{KL} P_j^r(z) ) \cdot K_M^{KL}(y, z) ( \sum_{j' \le r} \nabla \chi_{k,j'}^{KL} P_{j'}^r(z) ) \, dy \, d\mu(z),
I_2 := \int_{I^M} \int_Y e_l \cdot K_M^{KL}(y, z) e_k \, dy \, d\mu(z),
I_3 := \int_{I^M} \int_Y ( \sum_{j \le r} \nabla \chi_{l,j}^{KL} P_j^r(z) ) \cdot K_M^{KL}(y, z) e_k \, dy \, d\mu(z),
I_4 := \int_{I^M} \int_Y e_l \cdot K_M^{KL}(y, z) ( \sum_{j \le r} \nabla \chi_{k,j}^{KL} P_j^r(z) ) \, dy \, d\mu(z).
In the following we consider each integral separately. We start with the second one:

I_2 = \int_{I^M} \int_Y e_l \cdot K_M^{KL}(y, z) e_k \, dy \, d\mu(z)
\overset{(5.7)}{=} \int_Y e_l \cdot E_K(y) e_k \, dy \int_{I^M} 1 \, d\mu(z) + \sum_{m=1}^M \sqrt{\lambda_m} \int_Y e_l \cdot \varphi_m(y) e_k \, dy \int_I z_m \, d\mu_m(z_m)
= \delta_{lk} \int_Y E_K(y) \, dy,

since \mu is a probability measure on I^M (the first z-integral equals 1) and the X_m are centered at 0 (the integrals \int_I z_m \, d\mu_m(z_m) vanish).
For the first term I_1 we use the orthonormality of the P_{j_m}^{r_m}, i.e.,

\int_I P_{j_m}^{r_m}(z_m) P_{j'_m}^{r_m}(z_m) \, d\mu_m(z_m) = \delta_{j_m j'_m},   (5.18)

and the equation of the corresponding eigenproblem with P_{j'_m}^{r_m} as test function, i.e.,

\int_I P_{j_m}^{r_m}(z_m) P_{j'_m}^{r_m}(z_m) \, z_m \, d\mu_m(z_m) = \lambda_{j_m}^{r_m} \delta_{j_m j'_m}.   (5.19)

For the inner z-integral we obtain

I_5 := \int_{I^M} P_j^r(z) K_M^{KL}(y, z) P_{j'}^r(z) \, d\mu(z)
= E_K(y) \prod_{m=1}^M \int_I P_{j_m}^{r_m}(z_m) P_{j'_m}^{r_m}(z_m) \, d\mu_m(z_m)
+ \sum_{m=1}^M \sqrt{\lambda_m} \, \varphi_m(y) \int_I P_{j_m}^{r_m}(z_m) P_{j'_m}^{r_m}(z_m) \, z_m \, d\mu_m(z_m) \prod_{m' \ne m} \int_I P_{j_{m'}}^{r_{m'}}(z_{m'}) P_{j'_{m'}}^{r_{m'}}(z_{m'}) \, d\mu_{m'}(z_{m'})
\overset{(5.18),(5.19)}{=} ( E_K(y) + \sum_{m=1}^M \sqrt{\lambda_m} \, \varphi_m(y) \, \lambda_{j_m}^{r_m} ) \, \delta_{jj'}
\overset{(5.13)}{=} \bar K_M^{KL,j}(y) \, \delta_{jj'}.
Therefore for I_1 we find

I_1 = \int_Y \sum_{j \le r} \sum_{j' \le r} \nabla \chi_{l,j}^{KL}(y) \cdot ( \int_{I^M} P_j^r(z) K_M^{KL}(y, z) P_{j'}^r(z) \, d\mu(z) ) \nabla \chi_{k,j'}^{KL}(y) \, dy
= \int_Y \sum_{j \le r} \sum_{j' \le r} \nabla \chi_{l,j}^{KL}(y) \cdot I_5 \, \nabla \chi_{k,j'}^{KL}(y) \, dy
= \sum_{j \le r} \int_Y \nabla \chi_{l,j}^{KL}(y) \cdot \bar K_M^{KL,j}(y) \nabla \chi_{k,j}^{KL}(y) \, dy.
The last two integrals are nearly the same; only the indices differ. That is why we consider I_3 only:

I_3 = \int_{I^M} \int_Y ( \sum_{j \le r} \nabla \chi_{l,j}^{KL}(y) P_j^r(z) ) \cdot K_M^{KL}(y, z) e_k \, dy \, d\mu(z)
= \int_Y \sum_{j \le r} \nabla \chi_{l,j}^{KL}(y) \cdot ( \int_{I^M} P_j^r(z) K_M^{KL}(y, z) \, d\mu(z) ) e_k \, dy
\overset{(5.7)}{=} \int_Y \sum_{j \le r} \nabla \chi_{l,j}^{KL}(y) \cdot ( E_K(y) \prod_{m=1}^M \int_I P_{j_m}^{r_m} \, d\mu_m + \sum_{m=1}^M \sqrt{\lambda_m} \, \varphi_m(y) \int_I P_{j_m}^{r_m}(z_m) z_m \, d\mu_m(z_m) \prod_{m' \ne m} \int_I P_{j_{m'}}^{r_{m'}} \, d\mu_{m'} ) e_k \, dy
\overset{(5.14),(5.15)}{=} \sum_{j \le r} \bar P_j^r \int_Y \nabla \chi_{l,j}^{KL}(y) \cdot \hat K_M^{KL,j}(y) \, e_k \, dy.
IM
el K KL (z)ek d(z)
=
jr
KL,j
KL (y) KM (y)KL (y) dy + lk
l,j
k,j
Pjr
+
jr
=
jr
jr
KL,j
KL (y) KM (y) ek dy +
l,j
Pjr
jr
KL,j
KL (y) KM (y)KL (y) dy + lk
l,j
k,j
Pjr
EK (y) dy
Y
KL,j
KL (y) KM (y) el dy
k,j
EK (y) dy
Y
KL,j
KM (y) KL (y) ek + KL (y) el dy.
l,j
k,j
KL,j
KL (y) KM (y)KL (y) dy
l,j
k,j
24
ek dy
el K KL (z)ek d(z)
(5.20)
IM
KL
Kdet + lk EK +
Pjr
jr
KL,j
KM (y) KL (y) ek + KL (y) el dy.
l,j
k,j
(5.21)
The same expectation can also be computed starting from the shifted solutions \tilde\chi_i^{KL} = \chi_i^{KL} - y \cdot e_i of (5.9). Using (5.17) and (5.21), one expands

\int_{I^M} e_l \cdot K^{*,KL}(z) e_k \, d\mu(z) = \int_{I^M} \int_Y ( \nabla \tilde\chi_l^{KL} + e_l ) \cdot K_M^{KL}(y, z) ( \nabla \tilde\chi_k^{KL} + e_k ) \, dy \, d\mu(z)

into the analogous terms \tilde I_1 + \tilde I_2 + \tilde I_3 + \tilde I_4 (with our previous notation, \tilde I_i means dependency on \tilde\chi_{l,j} instead of \chi_{l,j}) together with the additional mixed terms containing e_l P_j^r(z) and e_k P_j^r(z). Every inner integral over I^M is again evaluated with (5.18) and (5.19); in particular,

\int_{I^M} P_j^r(z) K_M^{KL}(y, z) P_{j'}^r(z) \, d\mu(z) = \bar K_M^{KL,j}(y) \, \delta_{jj'}   and   \int_{I^M} P_j^r(z) K_M^{KL}(y, z) \, d\mu(z) = \bar P_j^r \hat K_M^{KL,j}(y).

Collecting all terms and using (5.14) and (5.15), we obtain

\int_{I^M} e_l \cdot K^{*,KL}(z) e_k \, d\mu(z)
= K^*_{det} + \delta_{lk} \bar E_K
+ \sum_{j \le r} \int_Y \nabla \chi_{l,j}^{KL}(y) \cdot ( \bar P_j^r \hat K_M^{KL,j}(y) - \bar K_M^{KL,j}(y) ) e_k \, dy
+ \sum_{j \le r} \int_Y e_l \cdot ( \bar P_j^r \hat K_M^{KL,j}(y) - \bar K_M^{KL,j}(y) ) \nabla \chi_{k,j}^{KL}(y) \, dy
+ \sum_{j \le r} \int_Y e_l \cdot ( \bar K_M^{KL,j}(y) - 2 \bar P_j^r \hat K_M^{KL,j}(y) ) e_k \, dy.
Alternatively, a different ansatz is to consider, for a given r \in N_0^M and every multi-index j \le r, the deterministic cell problem

div( \bar K_M^{KL,j}(y)( \nabla \tilde\chi_{i,j}^{KL}(y) + e_i ) ) = 0,   y \in Y,
\tilde\chi_{i,j}^{KL} = 0,   y \in \partial Y,   (5.22)

with the associated upscaled coefficients

[K^{*,KL}_j]_{lk} = \int_Y ( \nabla \tilde\chi_{l,j}^{KL} + e_l ) \cdot \bar K_M^{KL,j}(y) ( \nabla \tilde\chi_{k,j}^{KL} + e_k ) \, dy

and the averaged coefficient

K^{*,KL} = \frac{1}{|r|} \sum_{j \le r} K^{*,KL}_j,   (5.23)

where |r| denotes the number of multi-indices j \le r.
The covariance matrix K is usually dense and requires O(n^2) units of memory for storage. In this section we show how to approximate a general covariance matrix with hierarchical matrices ([37]) to reduce these costs. A similar approach can be found in [44]. In this H-matrix technique we divide the matrix into sub-blocks and determine low-rank approximations if an appropriate admissibility condition is fulfilled. We compute the low-rank approximations in linear complexity with the ACA algorithm [37].
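The reason admissible blocks admit low-rank approximations is the smoothness of the covariance kernel on well-separated clusters. A small sketch (illustrative, not from the thesis; ACA itself is not implemented here, a truncated SVD serves as the rank oracle) shows that a well-separated block of an exponential covariance matrix has numerical rank 1, since exp(-|y - y'|/ell) factorizes when the clusters do not overlap:

```python
import numpy as np

n, ell = 300, 0.2
y = (np.arange(n) + 0.5) / n
C = np.exp(-np.abs(y[:, None] - y[None, :]) / ell)    # dense covariance matrix

# block coupling two well-separated index clusters (an admissible block);
# on it y < y', so exp(-(y' - y)/ell) = exp(y/ell) * exp(-y'/ell): exact rank 1.
# For general smooth kernels the singular values decay exponentially instead.
block = C[: n // 3, 2 * n // 3 :]
s = np.linalg.svd(block, compute_uv=False)
num_rank = int(np.sum(s > 1e-8 * s[0]))               # numerical rank at tol 1e-8
```

Storing such a block as an outer product of two vectors instead of a full matrix is what reduces the O(n^2) cost.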
5.3.1. Cluster theory
For the definition of hierarchical matrices we have to define trees.

Definition 5.2 (Tree, [37])
Let N \ne \emptyset be a finite set, let r \in N and let S: N \to P(N) be a mapping from N into subsets of N. For t \in N, a sequence t_0, ..., t_m \in N with t_0 = r, t_m = t and t_{i+1} \in S(t_i) for all i \in \{0, ..., m-1\} is called a sequence of ancestors of t. T := (N, r, S) is called a tree if there is exactly one sequence of ancestors for each t \in N.
If T is a tree, the elements of N are called nodes, the element r is called the root node or root and denoted by root(T), and the set sons(T, t) := S(t) is called the set of sons.
A tree has the following properties.
Lemma 5.3
Let T = (N, r, S) be a tree.
1. Let t ∈ N, and let t_0, ..., t_m ∈ N be its sequence of ancestors. For all i, j ∈ {0, ..., m} with i ≠ j, we have t_i ≠ t_j.
2. There is no t ∈ N with r ∈ sons(t).
3. For each t ∈ N \ {r}, there is a unique t⁺ ∈ N with t ∈ sons(t⁺). This node is called the father of t and denoted by father(t) := t⁺.
Proof: [37].
A vertex t ∈ N is a leaf if sons(t) = ∅ holds, and we define the set of leaves

L(T) := {t ∈ N : sons(t) = ∅}.   (5.24)

The level of a node is defined recursively by level(root(T)) := 0 and level(t) := level(father(t)) + 1, and T^(l) := {t ∈ N : level(t) = l} denotes the set of nodes on level l, for all l ∈ N_0. If and only if t = root(T), we have level(t) = 0. The maximal level is called the depth of T and denoted by

depth(T) := max{level(t) : t ∈ N}.
Definition 5.5 (Labeled tree, [37])
Let N, L ≠ ∅ be finite sets, let r ∈ N, and let S : N → P(N) and m : N → L be mappings. T := (N, r, S, m, L) is a labeled tree if (N, r, S) is a tree. L is called the label set of T, and for each node t ∈ N, m(t) is called its label.
Definition 5.6 (Cluster tree, [37])
A labeled tree T is a cluster tree for a finite index set I if root(T) is labeled with I and if for all t ∈ T with sons(t) ≠ ∅ we have

t = ⋃_{s ∈ sons(t)} s.

The vertices t ∈ N of a cluster tree are called clusters. A cluster tree for I is usually denoted by T_I. We will use the abbreviation t ∈ T_I for t ∈ N.
Properties of a cluster tree are stated in the next lemma.
Lemma 5.7
Let T_I be a cluster tree.
1. For all t, s ∈ T_I with t ≠ s and level(t) = level(s), we have t ∩ s = ∅.
2. For all t, s ∈ T_I with level(t) ≤ level(s) and t ∩ s ≠ ∅, we have s ∈ sons*(t), where

sons*(t) := { {t}                                 if sons(t) = ∅,
              {t} ∪ ⋃_{s ∈ sons(t)} sons*(s)      otherwise.   (5.25)

3. I = ⋃_{t ∈ L(T_I)} t, i.e., the leaves of T_I cover the index set.
Proof: [37].
The next step is to devise suitable methods to construct such a cluster tree. In the following we describe two different algorithms.
5.3.2. Geometric bisection
For each index i ∈ I we choose, for simplicity, one point x_i of the support of the corresponding basis function. We start with the full index set I, which is the root of the cluster tree by definition. To split an index set t we assume that an axis-parallel box

B_t = [a_1, b_1] × ... × [a_d, b_d] with x_i ∈ B_t for all i ∈ t

and a splitting direction j_t ∈ {1, ..., d} are given. We construct new boxes B_{t_0} and B_{t_1} by setting c_{j_t} := (a_{j_t} + b_{j_t})/2 and

B_{t_0} = [a_1, b_1] × ... × [a_{j_t}, c_{j_t}] × ... × [a_d, b_d]  and  B_{t_1} = [a_1, b_1] × ... × [c_{j_t}, b_{j_t}] × ... × [a_d, b_d],

and split t into the sons t_0 := {i ∈ t : x_i ∈ B_{t_0}} and t_1 := t \ t_0. We set j_{t_0} = j_{t_1} = (j_t mod d) + 1. Since x_i ∈ B_{t_0} for i ∈ t_0 and x_i ∈ B_{t_1} for i ∈ t_1 hold by construction, we can repeat the procedure for t_0 and t_1.
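The bisection procedure above can be sketched in Python. This is a minimal illustration, not the thesis implementation; the class and function names (`Cluster`, `build_cluster_tree`) and the `leafsize` stopping parameter are assumptions for the sketch:

```python
class Cluster:
    """A node of the cluster tree: an index set plus its axis-parallel box."""
    def __init__(self, indices, box):
        self.indices = indices        # list of point indices in this cluster
        self.box = box                # [(a_1, b_1), ..., (a_d, b_d)]
        self.sons = []

def build_cluster_tree(points, indices, box, split_dir=0, leafsize=4):
    """Recursively split the index set by geometric bisection."""
    t = Cluster(indices, box)
    if len(indices) <= leafsize:
        return t                      # leaf: sons(t) is empty
    a, b = box[split_dir]
    c = (a + b) / 2.0                 # midpoint in the splitting direction
    box0 = list(box); box0[split_dir] = (a, c)
    box1 = list(box); box1[split_dir] = (c, b)
    t0 = [i for i in indices if points[i][split_dir] < c]
    t1 = [i for i in indices if points[i][split_dir] >= c]
    nxt = (split_dir + 1) % len(box)  # j_{t0} = j_{t1} = (j_t mod d) + 1
    for sub, sub_box in ((t0, box0), (t1, box1)):
        if sub:
            t.sons.append(build_cluster_tree(points, sub, sub_box, nxt, leafsize))
    return t

def leaves(t):
    """Collect the leaf clusters, whose union covers the full index set."""
    return [t] if not t.sons else [l for s in t.sons for l in leaves(s)]
```

By construction the leaves partition the index set, which mirrors property 3 of Lemma 5.7.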
5.3.4. Block cluster tree
With two cluster trees we can derive a hierarchy of block partitions of I × J corresponding to the matrix: the block cluster tree.
Definition 5.8 (Block cluster tree, [37])
Let T_I and T_J be cluster trees for index sets I and J. A finite tree T is a block cluster tree for T_I and T_J if the following conditions hold:
1. root(T) = (root(T_I), root(T_J)).
2. Each node b ∈ T has the form b = (t, s) for clusters t ∈ T_I and s ∈ T_J.
3. For each node b = (t, s) ∈ T with sons(b) ≠ ∅, we have

sons(b) = { {(t, s') : s' ∈ sons(s)}                    if sons(t) = ∅ and sons(s) ≠ ∅,
            {(t', s) : t' ∈ sons(t)}                    if sons(t) ≠ ∅ and sons(s) = ∅,
            {(t', s') : t' ∈ sons(t), s' ∈ sons(s)}     otherwise.   (5.26)
5.3.5. Admissibility
We generalize the supports of the basis functions φ_i to clusters t ∈ T_I by

Q_t := ⋃_{i ∈ t} supp(φ_i),

i.e., Q_t is the minimal subset of R^d that contains the supports of all basis functions φ_i with i ∈ t. One possible admissibility condition is

min{diam(Q_t), diam(Q_s)} ≤ dist(Q_t, Q_s),   (5.27)

where diam(·) is the Euclidean diameter of a set and dist(·,·) is the Euclidean distance of two sets.
Definition 5.10 (Admissible block cluster tree, [37])
A block cluster tree T_{I×J} for I and J is called admissible with respect to an admissibility condition if

(t, s) is admissible or sons(t) = ∅ or sons(s) = ∅

holds for all leaves (t, s) ∈ L(T_{I×J}).
We construct an admissible block cluster tree recursively. For two given clusters t ∈ T_I and s ∈ T_J we check the admissibility. If they are admissible, we are done. Otherwise we repeat the procedure with all combinations of sons of t and s. However, in general it can be too expensive to check an admissibility condition like (5.27). A traditional way is to determine the Chebyshev circles for the domain, but we consider a simpler approach: axis-parallel boxes.
For each cluster t ∈ T_I we define an axis-parallel bounding box Q̂_t ⊂ R^d such that Q_t ⊆ Q̂_t holds. Now we consider the admissibility condition

min{diam(Q̂_t), diam(Q̂_s)} ≤ dist(Q̂_t, Q̂_s).

If this condition is fulfilled, (5.27) holds as well.
The above admissibility condition is the standard admissibility condition. Another important admissibility condition is the so-called weak admissibility condition:

Q̂_t ≠ Q̂_s   (5.28)

with t ∈ T_I and s ∈ T_J.
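For axis-parallel boxes both diam and dist have closed forms, so the admissibility check is cheap. A minimal sketch; the parameter `eta` generalizes the condition (the text above corresponds to eta = 1):

```python
import math

def diam(box):
    """Euclidean diameter of an axis-parallel box [(a_1,b_1),...,(a_d,b_d)]."""
    return math.sqrt(sum((b - a) ** 2 for a, b in box))

def dist(box_t, box_s):
    """Euclidean distance between two axis-parallel boxes."""
    d2 = 0.0
    for (at, bt), (as_, bs) in zip(box_t, box_s):
        gap = max(at - bs, as_ - bt, 0.0)   # 0 if the intervals overlap
        d2 += gap ** 2
    return math.sqrt(d2)

def admissible(box_t, box_s, eta=1.0):
    """Standard admissibility: min{diam Qt, diam Qs} <= eta * dist(Qt, Qs)."""
    return min(diam(box_t), diam(box_s)) <= eta * dist(box_t, box_s)
```

Since diam(Q̂) ≥ diam(Q) and dist(Q̂_t, Q̂_s) ≤ dist(Q_t, Q_s), a block accepted by this box test also satisfies (5.27).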
5.3.6. Low-rank approximation
The admissibility condition indicates blocks of the matrix K which allow a rank-k approximation and blocks where we have to calculate the exact entries of the matrix:

K̃_ij = { R_ij   if (i, j) belongs to an admissible block,
          K_ij   otherwise,   (5.29)

with the low-rank approximation R = AB^T ∈ R^{t×s} and A ∈ R^{t×k}, B ∈ R^{s×k}, k ∈ N. Note that any matrix of rank k can be represented in this form. One possibility to compute the low-rank approximation is the improved adaptive cross approximation algorithm, which one can find in Algorithm 1.
Algorithm 1 (Adaptive cross approximation with improved pivoting, following [37]). Starting with the residual R = K restricted to the block t × s, rank k = 1 and empty pivot index sets P_rows and P_cols:
1. Compute the reference column (a_ref)_i := R_{i,j_ref}, i ∈ {1, ..., #t}, for an arbitrary reference index j_ref, set i_ref := argmin_{i ∈ {1,...,#t}} |(a_ref)_i| and compute the reference row (b_ref)_j := R_{i_ref,j}, j ∈ {1, ..., #s}.
2. Determine the index j* of the largest entry in modulus in b_ref, j* := argmax_{j ∈ {1,...,#s}\P_cols} |(b_ref)_j|, and analogously i* := argmax_{i ∈ {1,...,#t}\P_rows} |(a_ref)_i|.
3. If |(a_ref)_{i*}| > |(b_ref)_{j*}|, choose the pivot pair from the reference column: compute the row (b^k)_j := R_{i*,j}, j ∈ {1, ..., #s}, set j_k := argmax_{j ∈ {1,...,#s}\P_cols} |(b^k)_j| and (a^k)_i := R_{i,j_k} / (b^k)_{j_k}, i ∈ {1, ..., #t}.
4. Otherwise choose the pivot pair from the reference row: compute the column (a^k)_i := R_{i,j*}, i ∈ {1, ..., #t}, set i_k := argmax_{i ∈ {1,...,#t}\P_rows} |(a^k)_i| and (b^k)_j := R_{i_k,j} / (a^k)_{i_k}, j ∈ {1, ..., #s}.
5. Add the pivot indices i_k and j_k to P_rows and P_cols.
6. Stop if ‖a^k‖_2 ‖b^k‖_2 ≤ ε_ACA ‖a^1‖_2 ‖b^1‖_2.
7. Update the reference column: if j_k ≠ j_ref, set a_ref := a_ref − (b^k)_{j_ref} a^k; otherwise choose a new reference column index j_ref that has not been a pivot index.
8. Update the reference row: a) if i_k ≠ i_ref, set b_ref := b_ref − (a^k)_{i_ref} b^k; b) otherwise we have to choose a new reference row corresponding to a reference index i_ref that has not been a pivot index and that is consistent with j_ref.
9. Update the matrix: R = R − a^k (b^k)^T.
10. Update the rank: k = k + 1 and go back to 1.
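A simpler relative of Algorithm 1 is cross approximation with partial pivoting, which already shows the main idea: build rank-1 crosses from single rows and columns of the residual, using only individual matrix entries, and stop with the criterion ‖a^k‖‖b^k‖ ≤ ε_ACA ‖a^1‖‖b^1‖. The following sketch is not the improved reference-row variant described above; the function and parameter names are illustrative:

```python
import numpy as np

def aca_partial(get_entry, n, m, eps=1e-6, max_rank=50):
    """Cross approximation with partial pivoting: builds A, B with K ~ A @ B.T
    from individual entries get_entry(i, j), without assembling K."""
    A, B = [], []
    used_rows = set()
    i = 0                                          # start with the first row
    for k in range(max_rank):
        # residual row i: R[i, :] = K[i, :] - sum_k a_k[i] * b_k
        row = np.array([get_entry(i, j) for j in range(m)])
        for a, b in zip(A, B):
            row -= a[i] * b
        j = int(np.argmax(np.abs(row)))            # column pivot
        if abs(row[j]) < 1e-14:
            break
        b_k = row / row[j]                         # normalized residual row
        col = np.array([get_entry(p, j) for p in range(n)])
        for a, b in zip(A, B):
            col -= b[j] * a                        # residual column
        a_k = col
        A.append(a_k); B.append(b_k)
        used_rows.add(i)
        # stopping criterion: ||a_k|| ||b_k|| <= eps * ||a_1|| ||b_1||
        if np.linalg.norm(a_k) * np.linalg.norm(b_k) \
                <= eps * np.linalg.norm(A[0]) * np.linalg.norm(B[0]):
            break
        cand = [p for p in range(n) if p not in used_rows]
        if not cand:
            break
        i = max(cand, key=lambda p: abs(a_k[p]))   # next row pivot
    return np.array(A).T, np.array(B).T            # K ~ A @ B.T
```

For an admissible block of a smooth (asymptotically separable) kernel, the rank needed for a given tolerance stays far below the block size, which is what makes the H-matrix storage reduction possible.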
Here, σ denotes the standard deviation and η is proportional to the correlation length. The second one is a lognormal distribution. A lognormal distribution is the distribution of a random variable whose logarithm is normally (Gaussian) distributed. If X is a random variable with a normal distribution, then Y = exp(X) has a lognormal distribution. If σ, μ and cov_G(y, y') denote the standard deviation, the expected value and the covariance of a Gaussian distributed random variable, respectively, then it holds for the lognormal distribution:

expected value:  μ_log = exp(μ + σ²/2),
variance:        σ²_log = μ²_log (exp(σ²) − 1),
covariance:      cov_log(y, y') = μ²_log (exp(cov_G(y, y')) − 1).
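The moment relations above can be checked numerically by sampling Y = exp(X); the parameter values below are arbitrary and only serve the illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.3, 0.5                      # parameters of the underlying normal

# analytic lognormal moments, as stated above
mean_log = np.exp(mu + sigma**2 / 2)
var_log = mean_log**2 * (np.exp(sigma**2) - 1)

# Monte Carlo check: Y = exp(X) with X ~ N(mu, sigma^2)
y = np.exp(rng.normal(mu, sigma, size=2_000_000))
print(abs(y.mean() - mean_log), abs(y.var() - var_log))
```

With two million samples both deviations are far below the sampling noise of a naive guess, confirming the closed-form moments.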
In our example we consider the domain D := (0, 1)² with square obstacles placed at the center of each unit cell Y^ε, as illustrated in Figure 2. On D we consider the Dirichlet problem

−div( K^ε ∇u^ε ) = f,  x ∈ D,    u^ε = g,  x ∈ ∂D,   (6.1)

and the mixed boundary value problem

−div( K^ε ∇u^ε ) = f,  x ∈ D,    ( K^ε ∇u^ε ) · n = ψ,  x ∈ ∂D_N,    u^ε = g,  x ∈ ∂D_D.   (6.2)
The direct numerical simulation is difficult due to the fine-scale heterogeneity. Therefore we additionally consider the homogenized problem. The solutions χ_i of the cell problems satisfy the boundary condition χ_i = y_i on ∂Y, and the homogenized coefficient is given by

K*_ij = ∫_Y ∇χ_i · K ∇χ_j dy.

The homogenized problem then reads

−div( K* ∇u* ) = f,  x ∈ D,    u* = g,  x ∈ ∂D.

Reconstruction: To compare the coarse solution u* and the fine-scale solution u^ε, we reconstruct a fine-scale approximation ũ as follows (cf. [41]): we solve

−div( K ∇ũ ) = 0  in V

with the boundary condition

ũ = u_edge  on ∂V.

On ∂D ∩ ∂V we use the global boundary conditions. Here V denotes a dual coarse-grid block, where the nodes of V are four neighboring cell centers of the original coarse grid, and u_edge is the solution of the four one-dimensional problems

∂/∂x_1 ( K ∂u_edge/∂x_1 ) = 0  on Γ_1,    ∂/∂x_2 ( K ∂u_edge/∂x_2 ) = 0  on Γ_2,

with Γ_1 ∪ Γ_2 = ∂V, where Γ_i is the part of the boundary which is orthogonal to the unit vector e_i in the i-th direction. We solve these one-dimensional problems analytically.
Stochastic problem: We now consider

−div( K(x, ω) ∇u ) = f(x),  x ∈ D,    u = g  at ∂D,

P-a.e. We assume, in the case of normally distributed random variables, that the expected value of the stochastic coefficient is equal to the deterministic coefficient, i.e., E[K] = K_det, with the covariance

cov(y, y') = σ² exp( −|y − y'|²/η² ).

In the example with the lognormal distribution we consider two different cases. In one case we assume, as in the Gaussian distribution, that the expected value is equal to the deterministic coefficient, and in the other we use the exact mean μ_log = exp(σ²/2).
Numerically we solve the deterministic equation as follows. First we assemble the matrix and the right-hand side with the cell-centered finite volume method, which we describe in the following section, and thereafter we solve the resulting system of equations with the conjugate gradient (CG) method. We describe the generation of the random variables, which is needed in the Monte Carlo simulation, in Section 8.
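The CG solve mentioned above can be sketched in a few lines. This is a plain textbook implementation for symmetric positive definite systems, not the thesis code:

```python
import numpy as np

def cg(A, b, tol=1e-10, maxit=1000):
    """Plain conjugate gradient for a symmetric positive definite matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x                     # initial residual
    p = r.copy()                      # initial search direction
    rs = r @ r
    for _ in range(maxit):
        Ap = A @ p
        alpha = rs / (p @ Ap)         # step length
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p     # conjugate direction update
        rs = rs_new
    return x
```

In exact arithmetic CG terminates in at most n iterations for an n × n system; for the finite volume matrices below it is applied with a much smaller iteration count in practice.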
Integrating over a cell C_i and applying the divergence theorem yields

−∫_{C_i} div(K∇u) dx = −∫_{∂C_i} (K∇u) · n ds
    = −Σ_{j ∈ N_i} ∫_{Γ_ij} (K∇u) · n_ij ds − Σ_{j ∈ B_i} ∫_{Γ_ij} (K∇u) · n_ij ds
    ≈ Σ_{j ∈ N_i} v_ij K(x_ij) ( u(x_i) − u(x_j) ) / h_j
    + Σ_{j ∈ B_i^D} v_ij K(x_ij) ( u(x_i) − g(x_ij) ) / h_ij
    − Σ_{j ∈ B_i^N} v_ij ψ(x_ij),

and for the right-hand side

∫_{C_i} f dx ≈ V_i f(x_i).
This leads to the linear system AU = F with U_i = u(x_i),

F_i = V_i f(x_i) + Σ_{j ∈ B_i^D} (v_ij/h_ij) K(x_ij) g(x_ij) + Σ_{j ∈ B_i^N} v_ij ψ(x_ij),

A_ii = Σ_{j ∈ N_i} (v_ij/h_j) K(x_ij) + Σ_{j ∈ B_i^D} (v_ij/h_ij) K(x_ij),

and

A_ij = { −(v_ij/h_j) K(x_ij),   if j ∈ N_i,
         0,                      else.

For the reconstruction we approximate the k-th component of the flux at the cell center by

( K∇u(x_i) )_k ≈ Σ_{j ∈ N_i} K(x_ij) ( u(x_j) − u(x_i) ) ( (x_j)_k − (x_i)_k ) / h_j + Σ_{j ∈ B_i} K(x_i) ( u(x_ij) − u(x_i) ) ( (x_ij)_k − (x_i)_k ) / h_ij.
Here we used the following notation:

n — outer normal,
n_ij — normal pointing from C_i to C_j,
h_j = Σ_{k=1}^{2} |(x_i)_k − (x_j)_k|,
h_ij = Σ_{k=1}^{2} |(x_i)_k − (x_ij)_k|,
N_i — set of the global numbers of the neighbors of cell i,
B_i^D — index set of boundary intersections with Dirichlet condition,
B_i^N — index set of boundary intersections with Neumann condition,
B_i = B_i^D ∪ B_i^N,
K(x_ij) = 2 / ( 1/K(x_i) + 1/K(x_j) ), the harmonic mean.
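A one-dimensional analogue of the cell-centered scheme with harmonic face coefficients can be sketched as follows. This is a minimal illustration under the above notation, not the thesis code; for constant K it reproduces the linear solution exactly at the cell centers:

```python
import numpy as np

def fv_solve_1d(K, g0, g1):
    """Cell-centered finite volumes for -(K u')' = 0 on (0,1) with
    Dirichlet data u(0)=g0, u(1)=g1; K holds one value per cell."""
    n = len(K)
    h = 1.0 / n
    A = np.zeros((n, n)); F = np.zeros(n)
    # interior faces: transmissibility from the harmonic mean of K
    for i in range(n - 1):
        t = 2.0 / (1.0 / K[i] + 1.0 / K[i + 1]) / h
        A[i, i] += t; A[i + 1, i + 1] += t
        A[i, i + 1] -= t; A[i + 1, i] -= t
    # Dirichlet boundaries: half-cell distance h/2 to the boundary face
    A[0, 0] += K[0] / (h / 2); F[0] += K[0] / (h / 2) * g0
    A[-1, -1] += K[-1] / (h / 2); F[-1] += K[-1] / (h / 2) * g1
    return np.linalg.solve(A, F)

u = fv_solve_1d(np.ones(8), 0.0, 1.0)
# for constant K the exact solution is linear: u(x_i) = x_i at cell centers
```

With a piecewise constant K the harmonic mean keeps the discrete flux continuous across the material interface, which is exactly why it is the right face average here.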
Now let the coefficient

K = ( K_1  K_2
      K_2  K_1 )

be constant and positive definite. For the sake of simplicity we consider in this section only Dirichlet boundary conditions. Then we derive analogously, if C_i has less than two Dirichlet boundary edges:

A_ii = Σ_{j ∈ N_i} (v_ij/h_j) K_1 + Σ_{j ∈ B_i^D} (v_ij/h_ij) K_1.

If C_i is either the bottom left or the upper right corner cell with two Dirichlet boundary segments, we get:

A_ii = Σ_{j ∈ N_i} (v_ij/h_j) K_1 + Σ_{j ∈ B_i^D} (v_ij/h_ij) K_1 − K_2/2,

and if it is either the bottom right or the upper left corner cell with two Dirichlet boundary segments, we get:

A_ii = Σ_{j ∈ N_i} (v_ij/h_j) K_1 + Σ_{j ∈ B_i^D} (v_ij/h_ij) K_1 + K_2/2.
Let n_x be the number of cells in x-direction and x_i^{ul}, x_i^{ur}, x_i^{bl} and x_i^{br} the upper left node, the upper right node, the bottom left node and the bottom right node of cell C_i, respectively. If C_i is an interior cell, we get the nine-point stencil

A_{i,i+1} = −(v_{i,i+1}/h_{i+1}) K_1,        A_{i,i−1} = −(v_{i,i−1}/h_{i−1}) K_1,
A_{i,i+n_x} = −(v_{i,i+n_x}/h_{i+n_x}) K_1,  A_{i,i−n_x} = −(v_{i,i−n_x}/h_{i−n_x}) K_1,
A_{i,i+n_x+1} = −K_2/2,   A_{i,i+n_x−1} = K_2/2,   A_{i,i−n_x+1} = K_2/2,   A_{i,i−n_x−1} = −K_2/2,
F_i = V_i f(x_i).

For cells with Dirichlet boundary edges the stencil entries that would reach outside the domain are dropped, the corresponding contributions move into the diagonal entry A_ii, and the boundary data enter the right-hand side. For a cell at the left boundary we get

F_i = V_i f(x_i) + (v_{i,i−1}/h_{i,i−1}) K_1 g(x_{i,i−1}) − 2K_2 g(x_i^{ul}) + 2K_2 g(x_i^{bl}),

for a cell at the right boundary

F_i = V_i f(x_i) + (v_{i,i+1}/h_{i,i+1}) K_1 g(x_{i,i+1}) + 2K_2 g(x_i^{ur}) − 2K_2 g(x_i^{br}),

for a cell at the upper boundary

F_i = V_i f(x_i) + (v_{i,i+n_x}/h_{i,i+n_x}) K_1 g(x_{i,i+n_x}) + 2K_2 g(x_i^{ur}) − 2K_2 g(x_i^{ul}),

and for a cell at the bottom boundary

F_i = V_i f(x_i) + (v_{i,i−n_x}/h_{i,i−n_x}) K_1 g(x_{i,i−n_x}) − 2K_2 g(x_i^{br}) + 2K_2 g(x_i^{bl}).

If C_i is the bottom left corner and both edges have Dirichlet boundary conditions, then we get

A_{i,i+1} = −(v_{i,i+1}/h_{i+1}) K_1 − K_2/2,   A_{i,i+n_x} = −(v_{i,i+n_x}/h_{i+n_x}) K_1 − K_2/2,   A_{i,i+n_x+1} = −K_2/2,
F_i = V_i f(x_i) + (v_{i,i−1}/h_{i,i−1}) K_1 g(x_{i,i−1}) + (v_{i,i−n_x}/h_{i,i−n_x}) K_1 g(x_{i,i−n_x}),

and analogous formulas hold for the other three corner cells.
9. Numerical results
In this section we present numerical results for the methods described in Section 6. We start with deterministic examples with only one scale, where the exact solution is known, to verify the implemented finite volume method (cf. Sec. 9.1). In Section 9.2 we compute the effective coefficients for deterministic problems and compare the reconstructed coarse solution with the fine-scale reference solution. In Section 9.3 we compare the two introduced approaches to approximate the effective tensor in a stochastic setting. We consider different distributions, and in Section 9.4 we discuss the use of hierarchical matrices to compute the Karhunen-Loève expansion.
All implementations are based on the software package DUNE ([12, 11, 24, 14, 25]); we solve the eigenvalue problem with ARPACK ([1]) and use HLib ([15]) for the hierarchical matrix computations.
Figure 3: L2-error between approximation and exact solution for the symmetric and the asymmetric test over the refinement level (compare Section 9.1).
We use the boundary data g = x_1 and ψ(x) = 0, and the coefficient

K(y) = { δ,   if y ∈ O = [0.45, 0.55]²,
         1,   if y ∈ [0, 1]² \ O,

i.e., the obstacle O in the unit cell is a square of side length 0.1. For δ we use either δ = 0.1 or δ = 20. For both δs we calculated the upscaled coefficient on a grid with 1048576 cells. For δ = 0.1 we have

K*_{0.1} = ( 0.982998       9.24133e−10
             9.24133e−10    0.982998 )

and for δ = 20

K*_{20} = ( 1.01996        6.92546e−10
            6.92546e−10    1.01996 ).

For these coefficients we calculate the coarse solution with 16 × 16 coarse blocks, and after reconstruction we compute the L2-error and the maximum error between the approximation and the fine-scale solution (262144 cells), as illustrated in Figure 4.
Figure 4: L2-error and maximum error between the reconstructed coarse solution and a fine-scale reference solution (262144 cells, 16 × 16 coarse blocks) with the boundary conditions described above.
As stopping criteria for the Monte Carlo simulation we use

|k_N^MC − k_{N−1}^MC| ≤ tol := 0.001   with   k_N ∈ { (K_N^MC)_{11}, (K_N^MC)_{22} }   (MC1)

and

Var(K_N^MC)/N ≤ tol².   (MC2)

The resulting upscaled coefficient is

K = ( 0.973177        1.03076e−16
      1.03076e−16     0.973177 ).
We observe that all stopping criteria give a good approximation of the mean. As expected, the accuracy decreases and the number of local problems to solve in each direction increases as we increase the standard deviation. The heuristic stopping criterion ends with the smallest number of such problems, but its error is also the largest. The number of cell problems we have to solve for different standard deviations in the Monte Carlo (MC2) ansatz and in the Karhunen-Loève one is illustrated in Figure 6(a). In the case of η = 1 the number of needed cell problems for the Karhunen-Loève approach is smaller than for the Monte Carlo ansatz. So it may be reasonable to choose the Karhunen-Loève approach, but of course one has to take the costs of generating random numbers and of solving the large eigenproblem into account in one's decision. For σ = 0.4 the Monte Carlo ansatz is already more appropriate than the Karhunen-Loève ansatz if η = 0.5. In Figure 6(b) we show the behavior of the mean entry of the upscaled coefficient if we use Monte Carlo simulation. One can see that the value is close to the mean value before we have solved as many cell problems as the stopping criterion (MC2) demands. Therefore it makes sense to look for another stopping criterion; our suggestion is the criterion (MC1).
Figure 6: Number of realizations and the diagonal entry in the MC case for E[K(y)] = 1.
For the lognormal case we use the truncated expansion

K_M^KL(y, z) = μ_log + Σ_{m=1}^{M} √λ_m φ_m(y) z_m

for the Karhunen-Loève approach and lognormal realizations K_n^MC(y) = Y_n for the Monte Carlo approach, where in the first case μ_log is set equal to the deterministic coefficient and in the second case the exact mean μ_log = exp(σ²/2) is used.
For the lognormal variance σ²_log we use the same values as for the normal one in the previous example. To determine the corresponding normal random variables, we need the standard deviation σ of the underlying normal distribution and the expected value μ_log for the above mentioned coefficients. We get

σ_log:   0.0001    0.001         0.01         0.1
σ:       0.0001    0.000999999   0.00999925   0.0992635
μ_log:   1         1             1.00005      1.00494
In Tables 32 and 33 we summarize the results of the first case for η = 1 and η = 0.5, respectively. In Tables 34-36 one finds the corresponding results of the second case for the different σs. As in the Gaussian example we observe decreasing accuracy and an increasing number of problems to solve with increasing standard deviation. Again we have a good approximation of the mean, and the Karhunen-Loève approaches perform better.
Figure 7: Cluster tree.
Figure 8: Computation time in seconds over ε_ACA for the weak and the standard admissibility condition, with leafsize 1 (a) and leafsize 4 (b).

Figure 9: Absolute error |λ − λ_appr| of the eigenvalues over ε_ACA for the weak and the standard admissibility condition, with leafsize 1 (a) and leafsize 4 (b).
The absolute error of the eigenvalues (cf. Figure 9) does not depend significantly on ε_ACA; only for ε_ACA = 10⁻² is the order different. Therefore leafsize 4 with ε_ACA = 10⁻⁴ is the most reasonable choice. In the case of the standard admissibility condition the error of the eigenvalues is smaller, but the time to calculate them is larger. For leafsize 1 the required time is even larger than the time of the full-rank calculation. If one decides to take the standard condition, leafsize 1 is not reasonable at all; for leafsize 4 one can choose a parameter set for which the required time is smaller than that of the full-rank calculation, e.g., ε_ACA = 10⁻¹⁰. Since the error does not differ very much if we change the admissibility condition, the weak one is more reasonable because of the significant gain in time.
In Figure 10 we consider the errors of the matrix due to the different admissibility conditions. As expected, the error with the standard condition is smaller than the corresponding one with the weak condition.
Figure 10: Matrix error due to the low rank approximation with dierent admissibility conditions.
The resulting block matrix structures for the two admissibility conditions are illustrated in Figure 11.

Figure 11: Block matrix structure for two different admissibility conditions with the underlying index set {0, 4, 1, 5, 8, 12, 9, 13, 2, 6, 3, 7, 10, 14, 11, 15} and the stopping criterion ε_ACA = 10⁻⁴.
Part II.
The following part is structured as follows. First we introduce multi-level Monte Carlo in a general setting and the notation used for the different levels. Next we consider the computation of effective properties, where we use MLMC to compute the expectation or the two-point correlation of the homogenized coefficient. In Section 12 we discuss weighted multi-level Monte Carlo, which we apply in Section 13 to compute the expectation of the coarse-scale solution. We present numerical results for one- and two-dimensional examples. In the two-dimensional case we only consider three different levels. In all computations we show that one can achieve a speed-up with MLMC methods.
10. Preliminaries
10.1. Multi-level Monte Carlo method
We give a brief introduction to the multi-level Monte Carlo approach in a general setting. With G we denote a random function, G = G(x, ω). For example, we will consider functions of the effective coefficient or the coarse-grid solution. We are interested in the efficient computation of the expectation of this quantity, denoted by E[G].
A standard approach is the Monte Carlo method, where the expected value E[G] is approximated by the arithmetic mean of a number M of realizations of G (denoted by G^i), i.e.,

E_M(G) := (1/M) Σ_{i=1}^{M} G^i.

The idea of multi-level Monte Carlo (MLMC) is to consider the quantity of interest G_l on different levels l. In our case the levels are various representative volume sizes or mesh sizes. We assume it is most computationally expensive to compute many realizations at the level of interest L. With L − 1, ..., 1 we introduce smaller levels, and assume that the lower the level, the cheaper the computation of G_l, and the less accurate G_l is with respect to G_L. We set G_0 := 0.
We write the quantity of interest at level L as a telescopic sum over the smaller levels:

G_L = Σ_{l=1}^{L} (G_l − G_{l−1}).

As mentioned above, we vary either the RVE size or the coarse-grid resolution. For the standard MC approach we compute M realizations of the random variable G_L at the level of interest L. In contrast, for MLMC we work with M_l realizations of G_l at each level, with M_1 ≥ M_2 ≥ ... ≥ M_L. For the expectation we write

E[G_L] = Σ_{l=1}^{L} E[G_l − G_{l−1}].
At each level we approximate the expectation of the differences with the arithmetic mean

E[G_l − G_{l−1}] ≈ E_{M_l}(G_l − G_{l−1}) = (1/M_l) Σ_{i=1}^{M_l} (G_l^i − G_{l−1}^i),

where G_l^i is the i-th realization of G computed at level l (note that we have M_l realizations of G_{l−1} since M_{l−1} ≥ M_l). In the MLMC approach the expected value E[G_L] is approximated by

E^L(G_L) := Σ_{l=1}^{L} E_{M_l}(G_l − G_{l−1}).   (10.1)
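The estimator (10.1) can be sketched generically. The `sample_level` callback and the seeding scheme used to reuse the same random input for G_l and G_{l−1} are illustrative assumptions, not from the thesis:

```python
import numpy as np

def mlmc_estimate(sample_level, M):
    """Multi-level Monte Carlo estimate of E[G_L] following (10.1).
    sample_level(l, rng) returns one realization of G_l (l = 1..L);
    M[l-1] is the number of samples on level l, M[0] >= ... >= M[L-1]."""
    rng = np.random.default_rng(42)
    L = len(M)
    est = 0.0
    for l in range(1, L + 1):
        diffs = []
        for _ in range(M[l - 1]):
            # reuse the same random input for G_l and G_{l-1} (same samples)
            seed = rng.integers(1 << 30)
            g_l = sample_level(l, np.random.default_rng(seed))
            g_lm1 = sample_level(l - 1, np.random.default_rng(seed)) if l > 1 else 0.0
            diffs.append(g_l - g_lm1)
        est += np.mean(diffs)         # E_{M_l}(G_l - G_{l-1})
    return est
```

Because the coupled differences G_l − G_{l−1} have small variance, the expensive high levels need only a few samples, which is the source of the MLMC speed-up.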
The realizations of G_{l−1} used with G_l to evaluate E_{M_l}(G_l − G_{l−1}) do not have to be independent of the realizations of G_{l−1} used for E_{M_{l−1}}(G_{l−1} − G_{l−2}) (cf. Section 10.2). In our analysis we consider the root mean square errors

e_MLMC(G_L) := ( E[ ||| E[G] − E^L(G_L) |||² ] )^{1/2},   (10.2)
e_MC(G_L) := ( E[ ||| E[G] − E_M(G_L) |||² ] )^{1/2},   (10.3)

with an appropriate norm ||| · ||| depending on the quantity of interest, e.g., the absolute value for any entry of the homogenized coefficient. For the error estimation we will use (see [17])

( E[ ||| E[G] − E_M(G) |||² ] )^{1/2} ≤ (1/√M) ( E[ ||| G − E[G] |||² ] )^{1/2},   (10.4)

which is valid for any random variable G and any norm associated with a scalar product.
10.2. Same versus independent samples
We illustrate the difference for two levels (L = 2). With the same samples the MLMC estimator reads

E^L_same(G_2) = (1/M_2) Σ_{i=1}^{M_2} (G_2^i − G_1^i) + (1/M_1) Σ_{i=1}^{M_1} G_1^i,   (10.5)

and with independent samples

E^L_ind(G_2) = (1/M_2) Σ_{i=1}^{M_2} (G_2^i − G_1^i) + (1/M̃_1) Σ_{i=M_2+1}^{M_1} G_1^i,   (10.6)

with M̃_1 = M_1 − M_2 and M̃_1 > M_2 (therefore M_1 > 2M_2). Note that for the MLMC approximation in the case of independent samples we use M̃_l = M_l − M_{l+1} instead of M_l samples to approximate E[G_l − G_{l−1}], so that this approximation of the expectation is less accurate, but the independence of the samples at the different levels might increase the accuracy of the whole approximation of E[G_L]. If we add and subtract E[G_1], we get

E^L_same(G_2) = (1/M_2) Σ_{i=1}^{M_2} (G_2^i − G_1^i) + E[G_1] + (1/M_1) Σ_{i=1}^{M_1} G̃_1^i,

E^L_ind(G_2) = (1/M_2) Σ_{i=1}^{M_2} (G_2^i − G_1^i) + E[G_1] + (1/M̃_1) Σ_{i=M_2+1}^{M_1} G̃_1^i,

with G̃_l^i := G_l^i − E[G_l]. As mentioned above, we are interested in the root mean square error. If we use the same samples, we get
(e^same_MLMC(G_2))² = E[ | E^L_same(G_2) − E[G_2] |² ]
    = E[ | (1/M_2) Σ_{i=1}^{M_2} (G̃_2^i − G̃_1^i) + (1/M_1) Σ_{i=1}^{M_1} G̃_1^i |² ]
    = (1/M_2²) Σ_{i=1}^{M_2} E[(G̃_2^i)²] + (1/M_2 − 1/M_1)² Σ_{i=1}^{M_2} E[(G̃_1^i)²] + (1/M_1²) Σ_{i=M_2+1}^{M_1} E[(G̃_1^i)²] − 2 (1/M_2)(1/M_2 − 1/M_1) Σ_{i=1}^{M_2} E[G̃_1^i G̃_2^i]
    = (1/M_2) Var(G_2) + ( (M_1 − M_2)/(M_1 M_2) ) Var(G_1) + 2 ( (M_2 − M_1)/(M_1 M_2) ) Cov(G_1, G_2).
For independent samples we obtain analogously

(e^ind_MLMC(G_2))² = E[ | E^L_ind(G_2) − E[G_2] |² ]
    = E[ | (1/M_2) Σ_{i=1}^{M_2} (G̃_2^i − G̃_1^i) + (1/M̃_1) Σ_{i=M_2+1}^{M_1} G̃_1^i |² ]
    = (1/M_2) ( Var(G_1) + Var(G_2) − 2 Cov(G_1, G_2) ) + (1/M̃_1) Var(G_1)
    = ( M_1/((M_1 − M_2) M_2) ) Var(G_1) + (1/M_2) Var(G_2) − (2/M_2) Cov(G_1, G_2).

For the difference of the two mean square errors we therefore obtain

(e^ind_MLMC(G_2))² − (e^same_MLMC(G_2))²
    = ( M_1/((M_1 − M_2) M_2) − (M_1 − M_2)/(M_1 M_2) ) Var(G_1) − ( 2/M_2 − 2 (M_1 − M_2)/(M_1 M_2) ) Cov(G_1, G_2)
    = ( (2M_1 − M_2)/(M_1 (M_1 − M_2)) ) Var(G_1) − (2/M_1) Cov(G_1, G_2)
    ≥ ( (2M_1 − M_2)/(M_1 (M_1 − M_2)) ) Var(G_1) − (2/M_1) ( Var(G_1) Var(G_2) )^{1/2}
    ≥ ( (2M_1 − M_2)/(M_1 (M_1 − M_2)) − 2/M_1 ) Var(G_1)
    = ( M_2/(M_1 (M_1 − M_2)) ) Var(G_1)
    ≥ 0.

Here we used Var(G_1) ≥ Var(G_2). The above analysis shows for two different levels that we achieve a better accuracy with the same amount of computational cost if we reuse the samples. This coincides with our numerical results in the paragraph on independent samples in Section 14.1.2.
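The advantage of reusing samples can also be observed empirically with a synthetic two-level toy model. The correlated Gaussians below are an illustration (they satisfy Var(G_1) ≥ Var(G_2)), not one of the thesis examples:

```python
import numpy as np

rng = np.random.default_rng(1)
M1, M2 = 800, 100
reps = 4000
err_same, err_ind = [], []

for _ in range(reps):
    x = rng.normal(size=M1)                    # shared randomness per realization
    G1 = x                                     # coarse-level quantity, Var = 1
    G2 = 0.5 * x + 0.1 * rng.normal(size=M1)   # fine level, Cov(G1, G2) = 0.5
    # (10.5): reuse the first M2 realizations of G1 on both levels
    e_same = np.mean(G2[:M2] - G1[:M2]) + np.mean(G1)
    # (10.6): use the remaining M1 - M2 realizations of G1 independently
    e_ind = np.mean(G2[:M2] - G1[:M2]) + np.mean(G1[M2:])
    err_same.append(e_same ** 2)               # E[G2] = 0, so squared error
    err_ind.append(e_ind ** 2)

print(np.mean(err_same), np.mean(err_ind))
```

With these parameters the empirical mean square error of the same-samples estimator is visibly smaller, matching the sign of the analytic difference derived above.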
where A and B are two random scalar-valued functions; ω corresponds to the randomness of the macroscopic scale and γ to that of the microscopic scale. With K*_H(x, ω, γ) we denote the homogenized coefficient matrix, which depends on the macroscopic variables (x, ω) and on γ if no ergodicity of B is assumed. Then we assume

E_γ[ ||| K*_H(x, ω, γ) − K*(x, ω) |||² ] ≤ C (ε/H)^β

for some deterministic constant C independent of ε, x, H and ω, and any matrix norm ||| · |||. Furthermore, the rate β is assumed to be independent of ε, x, H and ω. Another, more general case we will consider is when one cannot split the randomness of the macroscopic and microscopic scale explicitly. Then the coefficient writes K(x, x/ε, ω). We assume that K is scalar-valued, that we can do homogenization in every macroscopic point, and that the following assumption holds:

E[ ||| K*_H(x, ω) − K*(x) |||² ] ≤ C (ε/H)^β

with some constant C and rate β independent of x, H and ε. This is similar to the known results for ergodic homogeneous coefficients recalled in Section 14.1.1.
for some matrix norm ||| · |||; to simplify the notation we set K_l := K*_{η_l} and Δ_l := E[ ||| K_l − K* |||² ]. We assume that

Δ_l ≤ C (ε/η_l)^β   (11.1)

with β > 0 and C > 0 independent of l, ε and η_l. For some special cases one can estimate β rigorously, but in general we suggest a precomputation strategy to estimate β. This will be discussed later. Note that Central Limit Type results correspond to β = d (see e.g. [13] for such estimates in a weakly stochastic case). For clarity we summarize the basic steps of MLMC for the upscaled coefficients below.
1. Generate m_1 random variables ω_1, ..., ω_{m_1}.
2. For each level l, 1 ≤ l ≤ L, and each realization ω_j, 1 ≤ j ≤ m_l, solve the RVE problems

div( K^ε(x, ω_j) ∇χ_i^j ) = 0  in Y_l^x,    χ_i^j(x, ω_j) = x_i  on ∂Y_l^x,

and compute

K_l(x, ω_j) e_i = (1/η_l^d) ∫_{Y_l^x} K^ε(x, ω_j) ∇χ_i^j.

3. For each level compute the arithmetic mean

E_{m_l}(K_l − K_{l−1}) = (1/m_l) Σ_{j=1}^{m_l} ( K_l(x, ω_j) − K_{l−1}(x, ω_j) ),

where we set K_0 := 0. Note that we keep the dependence of E_{m_l}(K_l − K_{l−1}) with respect to x and the randomness implicit.
4. Compute the multi-level approximation E^L(K_L) of the expected value E[K_L] following (10.1):

E^L(K_L) = Σ_{l=1}^{L} E_{m_l}(K_l − K_{l−1}).
We start by estimating the root mean square error of the approximation of E([K_L]_ij), for any entry [K_L]_ij. This estimate can be extended to any smooth, scalar-valued function f of K_L; this will be discussed in Remark 11.2 below. For the multi-level Monte Carlo approach, we get
e_MLMC(K_L) = ( E[ | E[K_L] − Σ_{l=1}^{L} E_{m_l}(K_l − K_{l−1}) |² ] )^{1/2}
    = ( E[ | Σ_{l=1}^{L} (E − E_{m_l})(K_l − K_{l−1}) |² ] )^{1/2}
    ≤ Σ_{l=1}^{L} ( E[ | (E − E_{m_l})(K_l − K_{l−1}) |² ] )^{1/2}
    ≤ Σ_{l=1}^{L} (1/√m_l) ( E[ | K_l − K_{l−1} |² ] )^{1/2},

where we have used (10.4). Writing K_l − K_{l−1} = (K_l − K*) + (K* − K_{l−1}), we deduce

e_MLMC(K_L) ≤ Σ_{l=2}^{L} (1/√m_l) ( √Δ_l + √Δ_{l−1} ) + (1/√m_1) ( √Δ_1 + ( E[(K* − E[K*])²] )^{1/2} )
    ≤ C Σ_{l=2}^{L} (1/√m_l) ( (ε/η_l)^{β/2} + (ε/η_{l−1})^{β/2} ) + (1/√m_1) ( C (ε/η_1)^{β/2} + ( E[(K* − E[K*])²] )^{1/2} ),

with Δ_l as in (11.1).
If the error is fixed, the optimal choice for the number m_l of realizations at level l (i.e., with RVE size η_l) is reached when these error parts are equilibrated. We choose

m_l = { ( (ε/η_1)^{β/2} + (1/C) ( E[(K* − E[K*])²] )^{1/2} )² (η_L/ε)^β,   l = 1,
        ( 1 + (η_l/η_{l−1})^{β/2} )² (η_L/η_l)^β,   l ≥ 2.   (11.2)

With this choice every term in the error bound is of the order (ε/η_L)^{β/2}, and we obtain

e_MLMC(K_L) ≤ C(L) (ε/η_L)^{β/2},

where the constant C(L) grows at most linearly in the number of levels L.
As mentioned above, we assume K* to be a random quantity with some positive variance. Therefore it is natural to assume that the variance is roughly independent of η_L. Thus the Monte Carlo error is of the order C/√m. To have a Monte Carlo error of the same order as the MLMC error, we take m = O( (η_L/ε)^β ) samples.
If we choose these numbers of realizations for MLMC and MC, both methods reach the same accuracy and we can compare their costs. Let N_l denote the cost to solve the RVE problem (4.9) on the domain Y_l^x of size η_l. The number of degrees of freedom is of the order (η_l/ε)^d.
Assuming N_l = (η_l/ε)^d, we have the following cost for MLMC:

W^MLMC_RVE = Σ_{l=1}^{L} m_l N_l = m_1 (η_1/ε)^d + Σ_{l=2}^{L} ( 1 + (η_l/η_{l−1})^{β/2} )² (η_L/η_l)^β (η_l/ε)^d,

while the standard MC approach with m = O( (η_L/ε)^β ) realizations at the finest level costs

W^MC_RVE = m N_L = O( (η_L/ε)^{β+d} ).

In Figure 13 we illustrate the ratio W^MLMC_RVE / W^MC_RVE for different numbers of levels L and rates β. As we can see, for a given number L of levels, the cost ratio decreases if the rate β increases, at equal accuracy. Otherwise stated, the faster the convergence of the homogenized matrix with respect to the RVE size (cf. (4.16)), the more interesting the MLMC approach is.
Figure 13: Ratio W^MLMC_RVE / W^MC_RVE over the number of levels L for β = 1.0, 1.5 and 2.0; (a) and (b) correspond to two different choices of the RVE sizes η_l and sample numbers m_l.
Remark 11.1
In the above calculations, we assumed that the work of solving a local problem scales as N_l, where N_l is the number of degrees of freedom. This is true if one uses iterative solvers and the condition number of the preconditioned system is independent of the small scale ε. One can also compare the work between the MLMC and MC approaches when the work of solving a local problem scales as C(δ)N^{1+δ} for some δ > 0.
Remark 11.2
Note that estimating the difference between the expectation E[K_L] and the MLMC approximation E^L(K_L) can be replaced by estimating the difference of E[f(K_L)] and E^L(f(K_L)) for any smooth, scalar-valued function f(K_L), so that the estimate of the difference between two consecutive levels can be related to the estimate of K_l between two consecutive levels. Then we can write

|f(K_l) − f(K_{l−1})| ≤ C_f Σ_{i,j=1}^{d} | [K_l]_ij − [K_{l−1}]_ij |

for some constant C_f and proceed as above.
Remark 11.3
To compute the optimal number of realizations m_l, the convergence rate β is needed, which is not known in general. In Section 14.1.1 we propose some means to estimate β numerically.
Above we have shown how to estimate E[K_L]. Another important quantity is the two-point correlation function Cor_K(x, y), for which we use the corresponding empirical mean as an estimator of E( [K*(x, ·)]_ij [K*(y, ·)]_qp ). As MLMC approximation for the two-point correlation function Cor_K(x, y) we get

Cor^L(K_L) := Σ_{l=1}^{L} E_{m_l}( K_l ⊗ K_l − K_{l−1} ⊗ K_{l−1} ).

When the local problems are additionally discretized with fine-grid size h_j, we assume the estimate

Δ_{ij} ≤ C ( (ε/η_i)^β + h_j^γ )   (11.3)

for some constant C independent of η_i, h_j and ε and some rate γ > 0. We take m_ij samples at the level (i, j). It is possible to consider all pairs (i, j); however, the cost of the computations can be large. For each RVE size η_l we will choose a corresponding fine-grid size h_l. We denote this as level l, as before.
e_MLMC(K_{L,h_L}) ≤ Σ_{l=2}^{L} (1/√m_l) ( √Δ_{ll} + √Δ_{l−1,l−1} ) + (1/√m_1) ( √Δ_{11} + ( E[(K* − E[K*])²] )^{1/2} )
    ≤ C Σ_{l=2}^{L} (1/√m_l) ( (ε/η_l)^{β/2} + h_l^{γ/2} + (ε/η_{l−1})^{β/2} + h_{l−1}^{γ/2} )
    + (C/√m_1) ( (ε/η_1)^{β/2} + h_1^{γ/2} + ( E[(K* − E[K*])²] )^{1/2} ).
If the error is fixed, the optimal choice for the number m_l of realizations at level l (i.e., with RVE size η_l and mesh size h_l) is reached when these error parts are equilibrated. We choose

m_l = { ( (ε/η_1)^{β/2} + h_1^{γ/2} + ( E[(K* − E[K*])²] )^{1/2} )² ( (ε/η_L)^{β/2} + h_L^{γ/2} )^{−2},   l = 1,
        ( (ε/η_l)^{β/2} + h_l^{γ/2} + (ε/η_{l−1})^{β/2} + h_{l−1}^{γ/2} )² ( (ε/η_L)^{β/2} + h_L^{γ/2} )^{−2},   l ≥ 2.

Then we have

e_MLMC(K_L) ≤ C(L) ( (ε/η_L)^{β/2} + h_L^{γ/2} ),

where C(L) grows at most linearly in L. For comparison we calculate the corresponding error of standard MC with RVE size η_L and fine grid h_L. Again we have a Monte Carlo error of the order C/√m. To equilibrate the error terms in this case we take m = O( ( (ε/η_L)^{β/2} + h_L^{γ/2} )^{−2} ) samples.
If we choose these numbers of realizations for MLMC and MC, both methods reach the same accuracy and we can compare their costs. Let N_l denote the cost to solve the RVE problem (4.9) on the domain Y_l^x of size η_l with mesh size h_l. The number of degrees of freedom is of the order (η_l/h_l)^d. Assuming N_l = (η_l/h_l)^d, we have the following cost for MLMC:

W^MLMC_RVE = Σ_{l=1}^{L} m_l N_l,

while the standard MC cost is W^MC_RVE = m N_L.
We measure errors in the norm

|||G||| := ( E[ ‖G(·, ω)‖²_V ] )^{1/2}.   (12.1)

In the weighted MLMC approach we consider a weighted sum of the levels,

Σ_{l=1}^{L} w_l G_l = Σ_{l=1}^{L} w̃_l (G_l − G_{l−1}),

and we get

w̃_l = Σ_{i=l}^{L} w_i.

If we choose w_1 ≤ w_2 ≤ ... ≤ w_L with Σ_{l=1}^{L} w_l = 1, we have

1 = w̃_1 ≥ w̃_2 ≥ ... ≥ w̃_L = w_L.

For the systematic error we obtain, with δ_l := |||G − G_l|||,

||| G − Σ_{l=1}^{L} w_l G_l ||| = ||| Σ_{l=1}^{L} w_l (G − G_l) ||| ≤ Σ_{l=1}^{L} w_l |||G − G_l||| = Σ_{l=1}^{L} w_l δ_l.   (12.2)

Note that in the MLMC approach the systematic error is of the size |||G − G_L||| = δ_L. Now we approximate the expected value of the weighted sum:

Σ_{l=1}^{L} w_l E[G_l] = Σ_{l=1}^{L} w̃_l E[G_l − G_{l−1}].
For the statistical error of the weighted estimator we obtain

||| Σ_{l=1}^{L} w_l E[G_l] − Σ_{l=1}^{L} w̃_l E_{M_l}(G_l − G_{l−1}) |||
    ≤ Σ_{l=2}^{L} (w̃_l/√M_l) ( |||G − G_l||| + |||G − G_{l−1}||| ) + (w̃_1/√M_1) ( δ_1 + |||G − E[G]||| )
    = Σ_{l=2}^{L} (w̃_l/√M_l) ( δ_l + δ_{l−1} ) + (w̃_1/√M_1) ( δ_1 + |||G − E[G]||| ).

If the error is fixed, the optimal choice for the number M_l of realizations at level l is reached when these error parts are equilibrated. We choose

M_l = { w̃_1² δ_L^{−2} ( δ_1 + |||G − E[G]||| )²,   l = 1,
        w̃_l² δ_L^{−2} ( δ_l + δ_{l−1} )²,   2 ≤ l ≤ L.

Then we get

||| Σ_{l=1}^{L} w_l E[G_l] − Σ_{l=1}^{L} w̃_l E_{M_l}(G_l − G_{l−1}) ||| ≤ C(L) δ_L.

This error is of the same order as in the MLMC approach, though one needs to exercise caution regarding the systematic error (see (12.2)).
At level l we use the coarse mesh size H_l, with H_L the mesh size at the level of interest, and choose the number of coarse-grid solves as

M_l = l^{2+2ε} 2^{2(L−l)},   l = 1, ..., L.   (13.1)

The resulting cost W^MLMC_coarse of the coarse-grid solves (13.2) then behaves, up to a constant, like an inverse power of H_L, with additional logarithmic factors (log H_L^{−1})^{3+2ε} for d = 2 and (log H_L^{−1})^{2+2ε} for d = 3.
To approximate the expected value of the upscaled solution u* we consider the tuples (H_l, M_l, η_l, m_l) for 1 ≤ l ≤ L, with the coarse mesh size H_l, RVE size η_l, and M_l coarse-grid and m_l local problem solves.
To calculate the homogenized coefficient B_l^i we solve the local problems with the boundary condition

χ_j^i = e_j · y  on ∂Y_l^x,

and set

[B_l^i]_{n,m} = (1/|Y_l^x|) ∫_{Y_l^x} ∇χ_n^i · B^i ∇χ_m^i dy.   (13.3)

As before, we denote the arithmetic mean of these coefficients by E_{m_l}(B_l), i.e.,

E_{m_l}(B_l) = (1/m_l) Σ_{i=1}^{m_l} B_l^i,

and solve the corresponding coarse-grid problem (13.4)
for each mesh size H_l and realization. Here A_l^k(x) is the k-th realization of A(x, ω). Note that we use a simplistic treatment for averaging over ω, since we assume that most of the randomness is in A(x, ω); in general one has to solve the homogenized problem for each realization of B_l. We calculate the empirical mean E_{M_l}(u_l) = (1/M_l) Σ_{k=1}^{M_l} u_l^k of the coarse solutions. We use this mean E_{M_l}(u_l) to approximate the expected value E[u_l] of the solution u_l of the stochastic homogenized problem with mesh size H_l and the homogenized coefficients calculated with RVE size η_l. With u_0^k := 0, 1 ≤ k ≤ M_1, we define
  E^L(u_L) = Σ_{l=1}^{L} E_{M_l}(u_l − u_{l−1}).

For the statistical error of this estimator we obtain

  ||| E[u_L] − E^L(u_L) ||| = ||| Σ_{l=1}^{L} E[u_l − u_{l−1}] − Σ_{l=1}^{L} E_{M_l}(u_l − u_{l−1}) |||
    ≤ Σ_{l=1}^{L} (1/√M_l) ||| u_l − u_{l−1} |||
    ≤ Σ_{l=2}^{L} (1/√M_l) ( |||u − u_l||| + |||u − u_{l−1}||| ) + (1/√M_1) ( |||u − u_1||| + |||u||| ).
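The telescoping estimator E^L(u_L) = Σ_l E_{M_l}(u_l − u_{l−1}) can be sketched on a toy level hierarchy; the function u_level below is an illustrative stand-in for the level-l solution, not the thesis problem.

```python
import random

# Minimal sketch of the MLMC estimator E^L(u_L) = sum_l E_{M_l}(u_l - u_{l-1})
# with u_0 := 0. u_level is a toy model (an assumption for illustration).
def u_level(l, omega):
    return 1.0 + 2.0 ** (-l) * omega        # approaches 1 as l grows

def mlmc_estimate(L, M, rng):
    total = 0.0
    for l in range(1, L + 1):
        acc = 0.0
        for _ in range(M[l - 1]):
            omega = rng.gauss(0.0, 1.0)     # same realization for both levels
            prev = u_level(l - 1, omega) if l > 1 else 0.0   # u_0 := 0
            acc += u_level(l, omega) - prev
        total += acc / M[l - 1]
    return total

rng = random.Random(0)
est = mlmc_estimate(3, [400, 100, 25], rng)
```

Within a level, u_l and u_{l−1} are evaluated at the same realization ω, which is what makes the level differences, and hence the required sample numbers, small on the finer levels.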
Note that, for any random quantity G,

  ||| G − E[G] |||^2 = E[ ||G − E[G]||_V^2 ] = E[ ||G||_V^2 ] − ||E[G]||_V^2 ≤ E[ ||G||_V^2 ] = |||G|||^2.   (13.6)
i.e., u_{H_l} is the solution computed with mesh size H_l but without any upscaling error of the coefficient B*. It follows that

  |||u − u_l||| ≤ |||u − u_{H_l}||| + |||u_{H_l} − u_l|||,

and the second term is controlled by E[ |E_{m_l}(B_l) − B*|^2 ].

Proof. We write the weak formulations of the two problems for u_{H_l} and u_l, subtract them, and test with u_{H_l} − u_l. Then, by the Cauchy-Schwarz inequality,

  ∫ (E_{m_l}(B_l) − B*) ∇u_l · ∇(u_{H_l} − u_l)
    ≤ |E_{m_l}(B_l) − B*| ( ∫ |∇u_l|^2 )^{1/2} ( ∫ |∇(u_{H_l} − u_l)|^2 )^{1/2}
    ≤ |E_{m_l}(B_l) − B*| ||u|| ||u_{H_l} − u_l||,

and therefore

  |||u_{H_l} − u_l|||^2 = E[ ||u_{H_l} − u_l||_V^2 ] ≤ E[ |E_{m_l}(B_l) − B*|^2 ] ||u||^2.

Assuming further

  E[ |E_{m_l}(B_l) − B*|^2 ] ≤ C/m_l + E[ |E[B_l] − B*|^2 ] ≤ C/m_l + C (ε/η_l)^{2α},  α > 0,
we get

  ||| E[u_L] − E^L(u_L) |||
    ≤ Σ_{l=2}^{L} (1/√M_l) ( H_l + (ε/η_l)^α + C_l/√m_l + H_{l−1} + (ε/η_{l−1})^α + C_{l−1}/√m_{l−1} )
      + (1/√M_1) ( H_1 + (ε/η_1)^α + C_1/√m_1 + |||u||| ).
in many macro-grid points. At every level we will solve a local problem in each coarse-grid block. The sizes of the RVE can be different at different locations. At every level l we denote the number of RVE problems with P_l (∼ H_l^{−2}). This number coincides with the number of coarse-grid blocks. We denote the set of coarse-grid points where we solve the local problems at level l with P_l. We assume these sets are nested, i.e., P_1 ⊂ P_2 ⊂ ⋯ ⊂ P_L. As before, we solve coarse-grid problems with mesh size H_l for M_l samples. We calculate the coefficient by solving RVE problems for each realization and averaging the energy over the spatial domain. If we calculate K^{η_l,H_l} with RVE size η_l, we get K^{η_l,H_j} for j < l at the same coarse-grid points automatically. This is true since the sets of coarse-grid points P_l are nested. So we only solve RVE problems for M_l − M_{l+1} independent realizations of size η_l with coarse mesh size H_l and get M_l coefficients with different RVE sizes, as shown in Table 3 for illustration. Let u_{η_j,H_i} be the solution on a coarse grid with
  RVE size   # coefficients to calculate     H_L           H_{L−1}              …   H_1
             with RVE size η_l
  η_L        M_L                             K^{η_L,H_L}   K^{η_L,H_{L−1}}      …   K^{η_L,H_1}
  η_{L−1}    M_{L−1} − M_L                                 K^{η_{L−1},H_{L−1}}  …   K^{η_{L−1},H_1}
  ⋮          ⋮                                                                  ⋱   ⋮
  η_1        M_1 − M_2                                                              K^{η_1,H_1}

  # coefficients on mesh size H_l:           M_L           M_{L−1}              …   M_1

Table 3: Calculating the (blue) coefficients on the diagonal will automatically give the lower triangular values in the matrix.
mesh size H_i and RVE size η_j. Instead of E[u*] we are interested in the approximation of E[ũ] for some ũ. To benefit from the different numbers of effective coefficients for the different grid and
RVE sizes, we define the weighted combination

  ũ := Σ_{l=1}^{L} λ_l ((M_l − M_{l+1})/M_l) u_{η_l,H_l}
       + Σ_{l=1}^{L−1} Σ_{j=l+1}^{L} (λ_l − λ_{l+1}) ((M_j − M_{j+1})/M_l) ( u_{η_j,H_l} − u_{η_j,H_{l−1}} ),

with M_{L+1} := 0 and level weights λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_L as in Section 12. Expanded, the leading terms read

  (M_L/M_L) ( u_{η_L,H_L} − u_{η_L,H_{L−1}} ),
  ((M_{L−1} − M_L)/M_{L−1}) ( u_{η_{L−1},H_{L−1}} − u_{η_{L−1},H_{L−2}} ),
  (M_L/M_{L−1}) ( u_{η_L,H_{L−1}} − u_{η_L,H_{L−2}} ),  …,

so every solution u_{η_j,H_i} from the lower triangle of Table 3 enters, and no RVE problems beyond the M_l − M_{l+1} new realizations per level are required. Regrouping the telescoping sums, the systematic error can be bounded by

  |||u − ũ||| ≤ Σ_{l=1}^{L} λ_l ((M_l − M_{l+1})/M_l) |||u − u_{η_l,H_l}||| + cross terms
             ≤ C Σ_{l=1}^{L} (H_l + σ_l) (λ_l − λ_{l+1}),  λ_{L+1} := 0,

where σ_l denotes the upscaling error of RVE size η_l. A direct computation further shows that the total weight in front of the exact solution u sums to λ_1.
If we choose

  λ_l = Σ_{j=l}^{L} (H_L + σ_L)/(H_j + σ_j),

we get

  |||u − ũ||| ≤ C |||u||| (H_L + σ_L),  with C = Σ_{j=1}^{L} (H_L + σ_L)/(H_j + σ_j).

For a given mesh size H and RVE size η the systematic error for standard MC is

  |||u − u_{η,H}||| ≲ H + σ.

To have the same systematic error we choose

  H = Σ_{l=1}^{L} (λ_l − λ_{l+1}) H_l  and  σ = Σ_{l=1}^{L} (λ_l − λ_{l+1}) σ_l.
Taking expectations, the weighted combination satisfies

  E[ũ] = Σ_{l=1}^{L} λ_l Σ_{j=l}^{L} ((M_j − M_{j+1})/M_l) E[ u_{η_j,H_l} − u_{η_j,H_{l−1}} ],

and we approximate it with

  E^L(ũ) = Σ_{l=1}^{L} λ_l E_{M_l}(u_l − u_{l−1})   (13.8)

instead of the sum of the arithmetic means at different levels as before, where E_{M_l}(u_l − u_{l−1}) is defined as

  E_{M_l}(u_l − u_{l−1}) = (1/M_l) Σ_{j=l}^{L} Σ_{i=1}^{M_j − M_{j+1}} ( u_{η_j,H_l} − u_{η_j,H_{l−1}} )(ω_i^j).
For clarity we summarize the main steps of weighted MLMC for the coarse grid problem:

1. Generate M_1 random variables ω_1, …, ω_{M_1}.

2. For each level l, 1 ≤ l ≤ L, and each realization ω_j, M_{l+1} < j ≤ M_l (M_{L+1} = 0):
   - Solve in each coarse grid block with mesh size H_l the RVE problems

       div( A^ε(x, ω_j) ∇χ_j^i ) = 0 in Y_{η_l},  i = 1, …, d,  χ_j^i = x_i on ∂Y_{η_l}.

   - Compute in each coarse grid block with mesh size H_l the homogenized coefficients

       (1/η_l^d) ∫_{Y_{η_l}} A^ε(x, ω_j) ∇χ_j^i.

3. Solve the coarse grid problems with the homogenized coefficients and form

     E^L(ũ)(x) = Σ_{l=1}^{L} λ_l (1/M_l) Σ_{k=l}^{L} Σ_{j=1}^{M_k − M_{k+1}} ( u_{η_k,H_l} − u_{η_k,H_{l−1}} )(x, ω_j^k),

   with λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_L.
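The sample-reuse bookkeeping of the steps above can be sketched as follows; the model u_model and all numbers are illustrative assumptions, not the thesis coefficient.

```python
import random

# Sketch of the weighted MLMC estimator with sample reuse: the
# M_l - M_{l+1} realizations generated at level l are reused on all
# coarser meshes H_1, ..., H_l. u_model is a toy stand-in (assumed).
def u_model(eta, H, omega):
    return 1.0 + H + eta * omega            # toy: mesh bias H, RVE noise eta*omega

def weighted_mlmc(etas, Hs, M, lam, samples):
    L = len(Hs)
    M_ext = list(M) + [0]                   # M_{L+1} = 0
    est = 0.0
    for l in range(1, L + 1):               # coarse-mesh level
        acc = 0.0
        for j in range(l, L + 1):           # sample groups generated at levels >= l
            for i in range(M_ext[j], M_ext[j - 1]):
                om = samples[i]
                term = u_model(etas[j - 1], Hs[l - 1], om)
                if l > 1:                   # subtract same sample on coarser mesh
                    term -= u_model(etas[j - 1], Hs[l - 2], om)
                acc += term
        est += lam[l - 1] * acc / M[l - 1]
    return est

rng = random.Random(1)
M = [80, 40, 10]
samples = [rng.gauss(0.0, 1.0) for _ in range(M[0])]
est = weighted_mlmc([0.5, 0.25, 0.125], [0.25, 0.125, 0.0625],
                    M, [1.0, 1.0, 1.0], samples)
```

With equal weights the sum telescopes in expectation to the finest-mesh value, while each realization is generated only once and reused on every admissible level.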
To estimate the weighted MLMC error, we have

  ||| E[ũ] − E^L(ũ) |||
    ≤ Σ_{l=1}^{L} λ_l (1/M_l) Σ_{j=l}^{L} √(M_j − M_{j+1}) ||| ( u_{η_j,H_l} − u_{η_j,H_{l−1}} ) − E[ u_{η_j,H_l} − u_{η_j,H_{l−1}} ] |||,

and proceeding as for the statistical error before, using the independence of the sample groups,

  ||| E[ũ] − E^L(ũ) ||| ≤ C ( Σ_{l=1}^{L} λ_l (1/√M_l) (H_l + σ_l) + (1/√M_1) |||u||| ).
To equate the error terms we choose

  M_l = C ( λ_l (H_l + σ_l) / ( λ_1 (H + σ) ) )^2.

Then we have

  ||| E[ũ] − Σ_{l=1}^{L} λ_l E_{M_l}(u_l − u_{l−1}) ||| = O(H + σ).

In the MC case we choose M = C (H + σ)^{−2} to get an error of the same order. As before, the cost of solving the coarse scale problems is

  W_coarse^MLMC = Σ_{l=1}^{L} M_l H_l^{−2},  W_coarse^MC = M H^{−2}.
The dominating part of the computational cost is the solution of the RVE problems. We assume we solve RVE problems at P_l ∼ H_l^{−2} points, with nested sets of points of sizes P_1 < P_2 < ⋯ < P_L. For standard MC this yields

  W_RVE^MC = M P = C P (H + σ)^{−2}.

Note that for the weighted MLMC approach we solve M_l − M_{l+1} RVE problems for η_l. For MLMC, we achieve the following work:

  W_RVE^MLMC = Σ_{l=1}^{L} (M_l − M_{l+1}) P_l = M_1 P_1 + Σ_{l=2}^{L} M_l (P_l − P_{l−1})
    ≤ C ( λ_1^2 (H_1 + σ_1)^2 P_1 + Σ_{l=2}^{L} λ_l^2 (H_l + σ_l)^2 (P_l − P_{l−1}) ) / ( λ_1 (H + σ) )^2.
In Figure 14 we illustrate the ratios of the work of weighted MLMC and MC for solving the coarse problems and the RVE problems. As we can see, the cost ratio decreases if the number of levels L increases.

Figure 14: Ratios W_RVE^MLMC / W_RVE^MC and W_coarse^MLMC / W_coarse^MC over the number of levels L, for (a) λ_l = (1/L) Σ_{i=l}^{L} (H_L + σ)/(H_i + σ) and (b) λ_l = (1/L) Σ_{i=1}^{L} (H_L + σ)/(H_i + σ).
The boundary conditions and the function f will be given below. Note that the homogenized coecient is independent of these choices. In the following we compare our MLMC results with standard
MC results at the highest level. In contrast to our theoretical analysis where we equate the error
and compare the computational costs, we equate the costs for calculating the coecient and the
solution separately and compare the errors in the numerics. We will consider one-dimensional and
two-dimensional examples. We have implemented the one-dimensional methods in Matlab ([2])
and analytical solutions are known in most of the examples. As previously, we use the modular toolbox DUNE ([12, 11, 24, 14]) for the two-dimensional problems. Here we use the cell-centered finite volume method as described in Section 7. In Table 4 we show the different mesh and RVE sizes for three different levels.

  l    H_l     h_l      η_l      (η_l/h_l)^2
  1    1/16    1/128    0.125    256
  2    1/32    1/128    0.25     1024
  3    1/64    1/128    0.5      4096

Table 4: Different RVE and mesh sizes for the two-dimensional case.

In the MLMC approach we approximate the homogenized coefficient with L = 3 different RVE sizes η_l (cf. Fig. 15), and when we calculate the coarse solution we use L = 3 different coarse mesh sizes H_l. Note that we used the same fine mesh h_l for each level. In the two-dimensional examples we generate the coefficient with the Karhunen-Loève expansion, and the characteristic length scale ε is related to the correlation length λ via ε = λ/√2 = 0.04/√2.
In Section 14.1 we present numerical results for the homogenized coefficient. First we explain in Section 14.1.1 how to estimate the convergence rate α (cf. (4.16)) numerically, and then we consider one-dimensional (cf. Section 14.1.2) and two-dimensional (cf. Section 14.1.3) results for approximating the upscaled coefficient with MLMC. We present numerical results for the coarse solution using weighted MLMC in Section 14.2.
Figure 16: Computed data points (logarithm of the empirical error over ln(ε/η_l)) with corresponding regression line with slope α and upper and lower points of the confidence interval. 4 different levels, m_l = (2000, 1000, 300, 140), α = 1.53 and ln C = 1.059.
for some constant C and rate α independent of ε and η. In this section we estimate the convergence rate α numerically. Therefore we consider as coefficient a scalar random field K(x/ε, ω) defined for x ∈ R^2, with Gaussian covariance function cov(x, x′) = σ^2 exp(−|x − x′|^2/λ^2), standard deviation σ = √2 and ε = λ/√2 = 0.04/√2 (recall that |x − x′| denotes the Euclidean distance in R^2). We generate samples of the coefficient with the Karhunen-Loève expansion. For any 1 ≤ l ≤ L, we calculate the effective coefficients K_{η_l}(ω_j) for the RVE [0, η_l]^2 (with η_l = 0.5^{L−l}) for various realizations ω_j, 1 ≤ j ≤ m_l. Since we cannot access the theoretical reference value K* = lim_{η→∞} E[K_η] in practice, we define the reference value as

  K_ref := (1/L) Σ_{l=1}^{L} (1/m_l) Σ_{j=1}^{m_l} K_{η_l}(ω_j),

where we use all the realizations on the RVEs [0, η_l]^2, 1 ≤ l ≤ L. For this choice of reference it holds E[K_ref] ≠ E[K_{η_L}]. Since it is expensive to compute the effective coefficient with RVE [0, η_L]^2, the statistical error would be large and we cannot use the unbiased estimator (1/m_L) Σ_{j=1}^{m_L} K_{η_L}(ω_j). In practice, we consider only the first entry [K_{η_l}]_{11} and we use four levels and m_l = (2000, 1000, 300, 140). At each level l, 1 ≤ l ≤ L, we expect

  ln ( (1/m_l) Σ_{j=1}^{m_l} |K_{η_l}(ω_j) − K_ref|^2 )^{1/2} ≈ α ln(ε/η_l) + ln C.

Results are shown in Figure 16, where we plot the computed data points with confidence intervals. It turns out that the data points for the smallest RVE [0, η_1]^2 show a different behavior than the other data sets. Since the results with the smallest RVE are the least accurate ones, we take only the three larger RVEs into account to compute the rate α. The linear regression line for these results is also plotted in Figure 16. We find a line with slope α = 1.53 and intercept ln C = 1.059. Note that this rate is smaller than, but close to, the theoretical value α_theo = 2 which would be obtained by a Central Limit Theorem argument. This rate is essential to choose the number of realizations m_l at each level according to (11.2) appropriately. In the following two-dimensional examples we use the RVEs [0, η_1]^2, [0, η_2]^2 and [0, η_3]^2 only. If we take only these RVEs into account we end up with a rate close to 1. Therefore we choose α = 1 in the two-dimensional examples instead of α_theo. In the one-dimensional cases we determine α for each coefficient.
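The regression step described above can be sketched on synthetic data with a known rate; the data below are generated under the assumption err ∼ C (ε/η_l)^α with α = 2, so the fitted slope should recover α (all values are illustrative, not the thesis data).

```python
import math
import random

# Sketch of the rate estimation: synthetic "error vs. RVE size" data
# obeying err ~ C * (eps/eta)^alpha, fitted in log-log coordinates by
# ordinary least squares. All values are illustrative assumptions.
eps = 0.04
etas = [0.5 ** k for k in range(1, 8)]
alpha_true, C = 2.0, 3.0
rng = random.Random(2)
x = [math.log(eps / eta) for eta in etas]
y = [math.log(C) + alpha_true * xi + 0.01 * rng.gauss(0.0, 1.0) for xi in x]

n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
        sum((xi - xbar) ** 2 for xi in x)
intercept = ybar - slope * xbar
```

The slope is the rate estimate and the intercept estimates ln C, exactly as in the regression lines of Figures 16, 17, 19, 21 and 27.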
14.1.2. One dimensional examples

First, we present numerical results for one-dimensional problems

  −d/dx ( K(x/ε, θ, ω) d/dx u(x, ε, θ, ω) ) = f  in D ⊂ R

with some boundary conditions, for a coefficient K(x/ε, θ, ω) such that the local problems are

  d/dx ( K(x/ε, θ, ω) d/dx χ(x, θ, ω) ) = 0,  x ∈ Y = ]a, b[,
  χ(x, θ, ω) = x,  x ∈ {a, b},

and therefore we get

  χ(x, θ, ω) = C_1 ∫_a^x 1/K(y/ε, θ, ω) dy + C_2.

From the boundary conditions, C_2 = a and

  C_1 = (b − a) / ∫_a^b 1/K(y/ε, θ, ω) dy.

The effective coefficient K_{a,b}(θ, ω) is then

  K_{a,b}(θ, ω) = (1/(b − a)) ∫_a^b K(y/ε, θ, ω) (d/dy) χ(y, θ, ω) dy = (1/(b − a)) ∫_a^b C_1 dy = C_1,

i.e., the harmonic mean

  K_{a,b}(θ, ω) = (b − a) / ∫_a^b 1/K(x/ε, θ, ω) dx,

and we set K_{η_l}(θ, ω) := K_{a_l,b_l}(θ, ω).
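The harmonic-mean formula above is straightforward to evaluate numerically; the following sketch uses a midpoint rule and an illustrative oscillatory coefficient (not one of the thesis examples).

```python
import math

# Sketch of the 1D effective coefficient: on an RVE [a, b] it is the
# harmonic mean K_{a,b} = (b - a) / int_a^b K(x)^{-1} dx, approximated
# here with a midpoint rule. The coefficient K is illustrative.
def effective_coefficient(K, a, b, n=10000):
    h = (b - a) / n
    inv_integral = sum(1.0 / K(a + (i + 0.5) * h) for i in range(n)) * h
    return (b - a) / inv_integral

eps = 0.01
K = lambda x: 2.0 + math.sin(2 * math.pi * x / eps)
K_eff = effective_coefficient(K, 0.0, 0.5)
# For K(x) = c + sin(2*pi*x/eps), the harmonic mean over full periods
# tends to sqrt(c^2 - 1), here sqrt(3).
```

The effective value lies below the arithmetic mean 2, which is the expected behavior of the harmonic mean.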
Figure 17: Computed data points with corresponding regression line with slope α. 15 levels, mean over 50000 samples, α = 2.0070 and ln C = 13.9254.
Separable coefficient Example 1 As a first step we use the separable coefficient

  K(x/ε, θ, ω)^{−1} = ( C + Σ_{i=1}^{N} μ_i(ω) sin^2( 2π x / (ε α_i) ) ) exp(θ),

where θ and μ_i are i.i.d. uniformly distributed in [0, 1], the α_i are fixed random numbers in [0.2, 2], and C > 0 is deterministic. This coefficient is separable in the sense that K^{−1} is a product of a function of ω and a function of θ. As mentioned above, the homogenized coefficient is the harmonic mean in this case. Therefore the homogenized coefficient on the RVE [a, b] satisfies

  K_{a,b}(θ, ω)^{−1} = (1/(b − a)) ∫_a^b K^{−1}(x/ε, θ, ω) dx
    = (exp(θ)/(b − a)) ( C (b − a) + Σ_{i=1}^{N} μ_i(ω) ( (b − a)/2 − (ε α_i/(8π)) ( sin(4π b/(ε α_i)) − sin(4π a/(ε α_i)) ) ) ).

In our simulation we use C = 1, N = 20 and ε = 0.5^L / 10. With this choice of ε we ensure that the smallest RVE (of size 0.5^L) considered in the MLMC approach is much larger than the characteristic length scale of the homogenized coefficient. As described in Section 14.1.1 we determine the convergence rate α for this homogenized coefficient. We consider L = 15 different levels with RVEs [a_l, b_l] = [0, 0.5^{L+1−l}] and as reference we use

  K_ref(θ, ω)^{−1} = K_{0,∞}(θ, ω)^{−1} = exp(θ) ( C + (1/2) Σ_{i=1}^{N} μ_i(ω) ).

We get a convergence rate α of approximately 2 (cf. Figure 17). That is why we choose in the following the ratio

  m_l / m_{l+1} = (η_{l+1} / η_l)^α

for the different numbers of realizations m_l per level in the MLMC approach.
the expectations are taken with respect to both θ and ω. We are interested in the relative mean square error between the MLMC approximation E^L(K_{η_L}) with L levels and the true expected value,

  e_rel^2(K_{η_L}) = (e_MLMC)^2(K_{η_L}) / (E[K_{η_L}])^2

(cf. (10.2)). Therefore we consider the MC approach with 400000 realizations of the coefficient on the largest RVE [a_L, b_L] = [0, 0.5] as reference. For MLMC we use the RVEs [a_l, b_l] = [0, 0.5^{L+1−l}] and m = (4^{L−1} m_L, …, 4 m_L, m_L). For comparison, we calculate the relative error of standard MC,

  e_rel,MC^2(K_{η_L}) = (e_MC)^2(K_{η_L}) / (E[K_{η_L}])^2

(with e_MC(K_{η_L}) defined in (10.3)), with the largest RVE [a_L, b_L] = [0, 0.5] and m = Σ_{l=1}^{L} m_l b_l / b_L samples, so that both approaches have the same costs. Since the errors depend on the set of chosen random numbers we repeat the computations N_b = 1000 times and calculate the corresponding confidence intervals

  [ mean((e_rel)^2) − 2 std((e_rel)^2)/√N_b,  mean((e_rel)^2) + 2 std((e_rel)^2)/√N_b ].

In Figure 18 we illustrate the relative mean square errors for three different levels for the expected value and the two-point correlation of the effective coefficient with the corresponding confidence intervals. We can observe that the MLMC approach yields smaller errors for the same amount of work. In both cases, expected value and two-point correlation, the relative mean square error with the standard MC approach is approximately 2.3 times larger than the one for the MLMC approach, i.e.,

  ( (1/N_b) Σ_{j=1}^{N_b} [e_rel,MC(K_{η_L}(θ_j, ω_j))]^2 ) / ( (1/N_b) Σ_{j=1}^{N_b} [e_rel,MLMC(K_{η_L}(θ_j, ω_j))]^2 ) ≈ 2.3.
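The repetition-and-confidence-interval procedure described above can be sketched as follows; the "error" samples are a toy random quantity (an illustrative assumption), while the interval formula is the one from the text.

```python
import random

# Sketch of the repetition loop: each experiment yields one squared
# relative error; we repeat N_b times and report the mean with the
# interval [mean - 2*std/sqrt(N_b), mean + 2*std/sqrt(N_b)].
# The toy errors below are illustrative assumptions.
rng = random.Random(3)
Nb = 1000
errs = [rng.gauss(0.0, 0.1) ** 2 for _ in range(Nb)]

mean = sum(errs) / Nb
var = sum((e - mean) ** 2 for e in errs) / (Nb - 1)
half_width = 2.0 * var ** 0.5 / Nb ** 0.5
ci = (mean - half_width, mean + half_width)
```

The interval width shrinks like 1/√N_b, which is why a large N_b is needed before the MLMC/MC error ratios become trustworthy.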
Figure 18: Relative mean square errors with equated costs and m = (16 m_3, 4 m_3, m_3). Example 1, separable coefficient. (a) Relative mean square errors of the expected value of the effective coefficients. (b) Relative mean square errors of the two-point correlation of the effective coefficients.
Figure 19: Computed data points with corresponding regression line with slope α. 15 levels, ε = 1, mean over 50000 samples, α = 1.0003 and ln C = 5.5997.
Separable stationary coefficient Example 2 Here we consider an example where the effective coefficient does not depend on ω in the limit of infinitely large RVEs. We choose

  K^{−1}(x, θ, ω) = ( C + Σ_{i∈Z} μ_i(ω) sin^2(π x) 1_{[i,i+1)}(x) ) exp(θ),

where θ and μ_i are i.i.d. uniformly distributed in [0, 1], C = 1, and 1_{[i,i+1)}(x) denotes the indicator function which is 1 for x ∈ [i, i+1) and zero elsewhere. Therefore the homogenized coefficient on the RVE [a, b] (for simplicity we choose a, b ∈ Z) satisfies

  K_{a,b}(θ, ω)^{−1} = (1/(b − a)) ∫_a^b K^{−1}(x, θ, ω) dx = (exp(θ)/(b − a)) ( C (b − a) + 0.5 Σ_{i=a}^{b−1} μ_i(ω) ).

Since K is stationary in the variables (x, ω), the standard stochastic homogenization theory holds, i.e., the exact effective coefficient is independent of ω. In this case the exact effective coefficient is

  K*(θ) = ( lim_{b−a→∞} (1/(b − a)) ∫_a^b K^{−1}(x, θ, ω) dx )^{−1} = ( exp(θ) (C + 0.5 E[μ]) )^{−1},

which can be computed analytically and is very close to E[K_{η_L}(θ, ω)] when the RVE at level L is large. In this paragraph as reference we use

  K_ref = E[K*(θ)] = (1 − e^{−1}) / (C + 0.25).
Non separable coefficient Example 3 Next we consider a one-dimensional example in which the uncertainties are not separable. We choose

  K^{−1}(x/ε, θ, ω) = C (1 + θ) + exp( θ sin(2π x/ε) ) − exp( ω sin(2π x/ε) ),

with θ and ω i.i.d. random variables uniformly distributed in [0.5, 1], ε = 0.5^L / 10 (to ensure that the smallest RVE is large compared to ε) and C = 2e (which ensures that the coefficient is uniformly bounded away from 0). In comparison to the two previous examples, θ and ω are not separable. The effective coefficient on [a, b] satisfies

  K_{a,b}(θ, ω)^{−1} = (1/(b − a)) ∫_a^b K^{−1}(x/ε, θ, ω) dx
    = C (1 + θ) + (1/(b − a)) ∫_a^b ( exp(θ sin(2π x/ε)) − exp(ω sin(2π x/ε)) ) dx.

As in the example with the separable coefficient we consider the RVEs [a_l, b_l] = [0, 0.5^{L+1−l}] for the MLMC approach. To choose the number of realizations m_l for each level appropriately we estimate the convergence rate α numerically. The reference is

  K*(θ) = lim_{b−a→∞} K_{a,b}(θ, ω) = 1 / ( C (1 + θ) ).

Again we use 15 different levels and the rate α is approximately 2 (cf. Figure 21). Again we compare the accuracy to MC with equated costs on the RVE [a_L, b_L] = [0, 0.5]. As in the previous example we use the practical reference value

  K_ref = lim_{b−a→∞} E[K_{a,b}(θ, ω)] = 2 ln(4/3) / C.
Figure 20: Relative mean square errors with equated costs and m = (2^{L−l} m_L, …, 2 m_L, m_L). Example 2, separable stationary coefficient. (a), (b): expected value and two-point correlation of the effective coefficients for 3 levels; (c), (d): for 5 levels; (e), (f): for 7 levels.
Figure 21: Computed data points with corresponding regression line with slope α. 15 levels, ε = 0.5^L / 10, mean over 50000 samples, α = 2.3548 and ln C = 8.3103.
To quantify the effect of reusing samples across levels we consider the two estimators

  E_same(K_{η_L}) := Σ_{l=1}^{L} (1/m_l) Σ_{j=1}^{m_l} ( K_{η_l}(θ_j, ω_j) − K_{η_{l−1}}(θ_j, ω_j) ),

  E_ind(K_{η_L}) := Σ_{l=1}^{L} (1/m̄_l) Σ_{j=m_{l+1}+1}^{m_l} ( K_{η_l}(θ_j, ω_j) − K_{η_{l−1}}(θ_j, ω_j) ),

with m̄_l = m_l − m_{l+1} and m_{L+1} = 0. An important assumption for the MLMC approach is that the number of realizations decreases as the level l increases; therefore we assume m_l ≥ m_{l+1}. These two approximations share the same computational costs. In the following we compare
Figure 22: Relative mean square errors with equated costs. Example 3, non separable coefficient. (a) Expected value of the effective coefficients, m = (16 m_3, 4 m_3, m_3). (b) Two-point correlation of the effective coefficients, m = (16 m_3, 4 m_3, m_3). (c) Expected value of the effective coefficients, m = (4 m_3, 2 m_3, m_3). (d) Two-point correlation of the effective coefficients, m = (4 m_3, 2 m_3, m_3).
Figure 23: Relative mean square errors for MLMC with independent samples, reused samples and the standard MC approach with L = 3, N_b = 1000 and m = (4^{L−l} m_L, …, 4 m_L, m_L).
                               separable   stationary   non separable
  Er_MC / Er_MLMC^same         2.3         2.1          2.1
  Er_MC / Er_MLMC^ind          1.8         1.7          1.7
  Er_MLMC^ind / Er_MLMC^same   1.3         1.2          1.2

Table 5: Ratios between the relative mean square errors for MLMC with independent samples, reused samples and the standard MC approach with L = 3, N_b = 1000 and m = (4^{L−l} m_L, …, 4 m_L, m_L).
the accuracies of E_same(K_{η_L}), E_ind(K_{η_L}) and E_m(K_{η_L}) at equated costs. We consider the same coefficients as in the previous paragraphs. If not otherwise stated we use the same parameters (e.g., a_l, b_l, N_b). To ensure m_l ≥ m_{l+1} we choose in all examples m = (4^{L−l} m_L, …, 4 m_L, m_L). In Figure 23 the considered relative mean square errors for L = 3 for the different examples are shown. Again we are interested in the quotient of the different errors, for example between the relative mean square error of the MC approach and the one of the MLMC approach with reused samples,

  ( (1/N_b) Σ_{j=1}^{N_b} [e_rel,MC(K_{η_L}(θ_j, ω_j))]^2 ) / ( (1/N_b) Σ_{j=1}^{N_b} [e_rel,MLMC(K_{η_L}(θ_j, ω_j))]^2 ).

We denote these errors with Er_MC, Er_MLMC^same and Er_MLMC^ind, so we are interested in the ratios Er_MC / Er_MLMC^same, Er_MC / Er_MLMC^ind and Er_MLMC^ind / Er_MLMC^same. These are summarized in Table 5. For all example coefficients the behavior is about the same: the MLMC approach with reused samples is always better than the one with independent samples. The error with independent samples is approximately 1.2 times larger. In Table 6 and Figure 24 the results for the example with a stationary coefficient are shown for different numbers of levels. Compared to the error of the standard MC approach the difference between the two MLMC ansatzes decreases, but the ratio between the MLMC approaches with independent and reused samples stays fixed: the error for independent samples is about 1.2 times larger than the one with reused samples.
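The difference between reusing and not reusing samples is purely a matter of index bookkeeping, which the following sketch makes explicit; the toy coefficient K_level and all sample counts are illustrative assumptions.

```python
import random

# Sketch of the two level-mean estimators: E_same reuses the same m_l
# realizations across levels, E_ind gives each realization to exactly
# one level (disjoint groups of size mbar_l = m_l - m_{l+1}).
# K_level is a toy stand-in for the RVE coefficient (assumed).
def K_level(l, omega):
    return 2.0 + 2.0 ** (-l) * omega if l >= 1 else 0.0   # K_{eta_0} := 0

def E_same(L, m, samples):
    total = 0.0
    for l in range(1, L + 1):
        diffs = [K_level(l, om) - K_level(l - 1, om) for om in samples[: m[l - 1]]]
        total += sum(diffs) / m[l - 1]
    return total

def E_ind(L, mbar, samples):
    total, start = 0.0, 0
    for l in range(1, L + 1):
        group = samples[start : start + mbar[l - 1]]
        start += mbar[l - 1]
        diffs = [K_level(l, om) - K_level(l - 1, om) for om in group]
        total += sum(diffs) / mbar[l - 1]
    return total

rng = random.Random(4)
m = [64, 16, 4]                               # nested counts, m_1 >= m_2 >= m_3
mbar = [m[0] - m[1], m[1] - m[2], m[2]]       # disjoint group sizes
samples = [rng.gauss(0.0, 1.0) for _ in range(m[0])]
same, ind = E_same(3, m, samples), E_ind(3, mbar, samples)
```

Both estimators target the same expected value; the independent variant simply partitions the sample budget instead of reusing it.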
                               L = 3   L = 5   L = 7
  Er_MC / Er_MLMC^same         2.1     7.6     26.1
  Er_MC / Er_MLMC^ind          1.7     6.1     21.9
  Er_MLMC^ind / Er_MLMC^same   1.2     1.3     1.2

Table 6: Ratios between the relative mean square errors for MLMC with independent samples, reused samples and the standard MC approach for the stationary example for different levels with N_b = 1000 and m = (4^{L−l} m_L, …, 4 m_L, m_L).
Figure 24: Relative mean square errors for MLMC with independent samples, reused samples and the standard MC approach for the stationary example for different levels with N_b = 1000 and m = (4^{L−l} m_L, …, 4 m_L, m_L). (a) L = 3, (b) L = 5, (c) L = 7.
14.1.3. Two dimensional examples

In this case we do not have an analytical expression for the homogenized coefficient like the harmonic mean in the one-dimensional examples. Therefore we solve the cell problems with the cell-centered finite volume scheme (cf. Section 7) and the parameters as in Table 4. In the second two-dimensional problem we consider the more difficult case, when there is no separation between uncertainties at the macro- and microscopic levels. However, we study a special case where we can separate the space dimensions, i.e., we have (x = (x_1, x_2))

  K(x, x/ε, θ, ω) := K_1(x_1, x_1/ε, θ, ω) K_2(x_2, x_2/ε, θ, ω).

Here the local problems reduce to one-dimensional problems and we get an analytical expression for the homogenized coefficient. The presented results below refer to the first entry K_11 of the homogenized coefficient matrix. We first consider a separable coefficient K = A B, where A and B are both scalar valued. B is a random field with expected value E[B] = 10 and the Gaussian covariance function

  cov(x, x′) = Cov( B(x/ε, ·), B(x′/ε, ·) ) = σ^2 exp( −|x − x′|^2 / (2 λ^2) ),

and for each level we compute the arithmetic mean

  E_{m_l}(K_{η_l})(θ) := (1/m_l) Σ_{j=1}^{m_l} K_{η_l}(θ, ω_j) = (1/m_l) Σ_{j=1}^{m_l} A(θ) B_{η_l}(ω_j),
implicitly in this notation. We use the MLMC approach to approximate the expected value; as in (10.1) we have

  E^L(K_{η_L})(θ) = Σ_{l=1}^{L} E_{m_l}( K_{η_l} − K_{η_{l−1}} )(θ) = Σ_{l=1}^{L} (1/m_l) Σ_{j=1}^{m_l} ( K_{η_l} − K_{η_{l−1}} )(θ, ω_j).

The theoretical reference value is E[K_{η_L}], which we cannot access in practice, and since it is computationally expensive to solve local problems on the RVE [0, η_L]^2 we cannot afford many realizations of B_{η_L}(ω_j). The statistical error would thus be too large if we approximated E[K_{η_L}] with the empirical mean at level L. That is why we introduce a biased estimator K_ref (E[K_ref] ≠ E[K_{η_L}]), namely

  K_ref := (1/L) Σ_{l=1}^{L} E_n( E_{m_l^ref}(K_{η_l})(θ) )
         = (1/L) Σ_{l=1}^{L} (1/n) Σ_{i=1}^{n} (1/m_l^ref) Σ_{j=1}^{m_l^ref} K_{η_l}(θ_i, ω_j^i)
         = (1/L) Σ_{l=1}^{L} (1/n) Σ_{i=1}^{n} (1/m_l^ref) Σ_{j=1}^{m_l^ref} A(θ_i) B_{η_l}(ω_j^i),

with ω^i = (ω_1^i, …, ω_{m_1^ref}^i) and m_l^ref ≤ m_{l−1}^ref, where we have taken into account all the realizations on the RVEs [0, η_l]^2. To approximate the remaining expectation we use the randomness of the coarse scale, i.e.,

  e_MLMC(K_{η_L}) = (1/n) Σ_{i=1}^{n} | K_ref − E^L(K_{η_L})(θ_i) |,
  e_MC(K_{η_L}) = (1/n) Σ_{i=1}^{n} | K_ref − E_m(K_{η_L})(θ_i) |.

As mentioned above we equate the computational work of the MLMC and the MC approach and compare the errors. For the MC approach the work is proportional to m (η_L/h)^2 and for the MLMC approach to Σ_{l=1}^{L} m_l (η_l/h)^2, so we use

  m = Σ_{l=1}^{L} m_l (η_l)^2 / (η_L)^2.
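The cost-equating rule above fixes the MC sample count once the MLMC level counts are chosen; the following one-liner evaluates it for illustrative values.

```python
# Sketch of the cost equation: MC on the largest RVE gets
# m = sum_l m_l * eta_l^2 / eta_L^2 samples so that both approaches
# spend the same work on RVE problems. Values are illustrative.
etas = [0.125, 0.25, 0.5]
m_levels = [16, 4, 1]
m_mc = sum(m * eta ** 2 for m, eta in zip(m_levels, etas)) / etas[-1] ** 2
```

With these (assumed) values each MLMC level costs the same, so MC gets exactly L = 3 samples of the largest RVE for the same budget.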
We work with three different levels, i.e., L = 3, and we denote the vector of the numbers of realizations at each level with m, i.e., m = (m_1, m_2, m_3). In our example we have n = 500 and m_ref = (2000, 1000, 300). In Figure 25(a) we show the errors e_MC^2(K_{η_L}) and e_MLMC^2(K_{η_L}) on the first entry of the homogenized coefficient matrix for m = (4 m_3, 2 m_3, m_3). In this case we have chosen the number of realizations per level m_l according to the numerically estimated convergence rate α (cf. Section 14.1.1). From these simulations we observe that MLMC provides smaller errors compared to standard MC for the same amount of work. In the mean the MC square error is approximately 2.3 times larger than the MLMC square error. In Figure 25(b) we show the errors for m chosen according to the theoretical rate α_theo.

Figure 25: Relative mean square errors of the first entry of the homogenized coefficient for MLMC and MC with equated costs. (a) m = (4 m_3, 2 m_3, m_3). (b) m = (16 m_3, 4 m_3, m_3).
In Figure 26 we compare the MLMC results for the first entry [K_{η_l}]_{11} of the effective coefficient and the second one [K_{η_l}]_{22}. The behavior is the same, and only if we consider a small range of m_3 can one see a slight difference (cf. Fig. 26(b)). That is why we consider the first entry only in the other examples.

Figure 26: Errors between the first entry [K_{η_l}]_{11} of the effective coefficient for MLMC and the second one [K_{η_l}]_{22}. Separable coefficient. (a) m = (4 m_3, 2 m_3, m_3). (b) m = (4 m_3, 2 m_3, m_3), smaller range of m_3.
Non separable coefficient Next, we consider the case when there is no separation between macro- and micro-level uncertainties. In general it is very expensive to have a sufficiently large number of samples to reduce the statistical noise. Since in the limit of infinitely large RVEs the effective coefficient does not depend on the choice of the boundary conditions of the local problems (cf. [16]), we consider a special case where we can solve the local problems, and thus the effective coefficient, analytically. Therefore we look at local problems with Dirichlet and no-flow boundary conditions, i.e.,

  div( K(x, x/ε, θ, ω) ∇χ_i ) = 0 in Y_{η_l},
  χ_i = x_i at ∂Y_{η_l}^D,
  n · ∇χ_i = 0 at ∂Y_{η_l} \ ∂Y_{η_l}^D.

Then the local problem reduces to a one-dimensional problem in the direction x_i for the function χ_i that only depends on x_i. Analogous to the one-dimensional case the solution reads

  χ_i(x_i, θ, ω) = η_l ( ∫_0^{η_l} 1/K_i(x̃, x̃/ε, θ, ω) dx̃ )^{−1} ∫_0^{x_i} 1/K_i(x̃, x̃/ε, θ, ω) dx̃,

and the first entry of the effective coefficient is the harmonic mean in x_1 times the arithmetic mean in x_2,

  [K_{η_l}]_{11}(θ, ω) = ( (1/η_l) ∫_0^{η_l} 1/K_1(x_1, x_1/ε, θ, ω) dx_1 )^{−1} (1/η_l) ∫_0^{η_l} K_2(x_2, x_2/ε, θ, ω) dx_2.

Here θ and ω are i.i.d. random variables distributed in [0.5, 1] and C = 2e. As in the example with the separable coefficient we consider the RVEs [0, η_l]^2 = [0, 0.5^{L+1−l}]^2 and ε = 0.5^L / 10. To choose the number of realizations m_l in the MLMC approach for each level appropriately we estimate the convergence rate α numerically, using an analytically computed reference. Again we use 15 different levels and the rate α is approximately 2 (cf. Figure 27). As reference we use the MC approach with m_ref = 400000 realizations (θ, ω) on the largest RVE [0, η_L]^2 = [0, 0.5]^2. Thanks to the special expression (14.1) of the coefficient we can compute a large number of samples in this two-dimensional example. For the MLMC approach we consider three different levels (L = 3) and, according to α, we choose m = (16 m_3, 4 m_3, m_3) realizations. To determine a confidence interval we repeat the computation for N_b = 2000 different sets of realizations. In Figure 28 we compare the accuracies of the MC and MLMC approaches for the mean and the two-point correlation with equal costs, i.e., m = Σ_{l=1}^{L} m_l (η_l)^2 / (η_L)^2. Again the MLMC approach is more accurate; we find that the relative mean square error for the MC approach is approximately 5 times larger than the one for the MLMC approach.
Figure 27: Computed data points with corresponding regression line with slope α. 15 levels, ε = 0.5^L / 10, mean over 50000 samples, α = 2.3600 and ln C = 1.5888.
Figure 28: Relative mean square errors with equated costs and m = (16 m_3, 4 m_3, m_3) for the two-dimensional case. (a) Relative mean square errors of the expected value of the effective coefficients. (b) Relative mean square errors of the two-point correlation of the effective coefficients.
We now consider the one-dimensional coarse problem

  d/dx ( K^ε(x, θ, ω) d/dx u(x, θ, ω) ) = f(x) in [0, 1],

whose solution can be written as

  u(x, θ, ω) = ∫_0^x ( K^ε(y, θ, ω) )^{−1} ∫_1^y f(z) dz dy.

To use an MLMC approach we need an approximation of the coarse solution on different coarse grids. Therefore we denote the vertices of the grid with x_i^H, 0 ≤ i ≤ N, i.e., the mesh size is H = x_i^H − x_{i−1}^H = 1/N, and g_i^H = g(x_i^H) for any function g. We approximate the solution with

  u_i^H = u_{i−1}^H + ( K*(x_i^H) )^{−1} ∫_{x_{i−1}^H}^{x_i^H} ∫_1^x f(y) dy dx = Σ_{j=1}^{i} ( K*(x_j^H) )^{−1} ∫_{x_{j−1}^H}^{x_j^H} ∫_1^x f(y) dy dx.

Therefore u_i^{H_l} is an approximation of u_{η,H_l}(x_i), i.e., there is no error due to the homogenization on the grid points, and the grids are nested,

  { x_i^{H_l} | 1 ≤ i ≤ 1/H_l } ⊂ { x_i^{H_{l+1}} | 1 ≤ i ≤ 1/H_{l+1} }.

So we are in the setting described in Section 13.3 and we can apply the weighted MLMC approach.
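The discrete formula above can be sketched numerically; the choices K* = 1 and f = 1 below are illustrative assumptions, chosen because they admit the closed-form solution u(x) = x^2/2 − x for comparison.

```python
# Sketch of the coarse-grid formula u_i^H = sum_{j<=i} (K*(x_j^H))^{-1}
# * int_{x_{j-1}}^{x_j} F(x) dx with F(x) = int_1^x f(y) dy, evaluated
# with midpoint quadrature. K* = 1 and f = 1 are illustrative choices;
# they give the exact solution u(x) = x^2/2 - x.
def coarse_solution(Kstar, f, N, quad=50):
    H = 1.0 / N
    u = [0.0]
    for i in range(1, N + 1):
        a = (i - 1) * H
        h = H / quad

        def F(x):  # F(x) = int_1^x f(y) dy (signed) by midpoint rule
            w = (x - 1.0) / quad
            return sum(f(1.0 + (k + 0.5) * w) for k in range(quad)) * w

        cell = sum(F(a + (k + 0.5) * h) for k in range(quad)) * h
        u.append(u[-1] + cell / Kstar(i * H))
    return u

u = coarse_solution(lambda x: 1.0, lambda x: 1.0, 16)
```

Because each level's solution is computed from the same cumulative formula, nested grids reuse the coarse values exactly, which is the property the weighted MLMC approach exploits.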
We consider the following example:

  f(x) = e^x − e + 1,
  ( K^ε(x, θ, ω) )^{−1} = C (1 + exp(5θ)) x + exp( θ (1 + x) sin(2π x/ε) ) − exp( ω (1 + x) sin(2π x/ε) ),

where θ and ω are random variables, uniformly distributed in [0.5, 1], and C = 2e. As before we use the different RVEs [a_l, b_l] = [0, 0.5^{L+1−l}]. As reference we use the mean of the analytical solution in the limit of infinitely large RVEs,

  E[u] = C (1 + exp(5 · 0.75)) ( e^x x − e^x − (e − 1) x^3/3 − 0.5 x^2 + 1 ).
We consider three different levels and choose all weights equal to 1 and ε = b_1 / 100. We equate the costs of solving the coarse grid problem for a fixed ratio of realizations per level M = (16 M_3, 4 M_3, M_3) and the grid sizes H = (1/4, 1/8, 1/16). As in the example for the coefficient we repeat the calculation for N_b = 20000 different sets of random numbers and determine the mean and the confidence intervals. We consider the relative mean square errors for the L^2-norm, i.e.,

  ||| E[u] − E^L(ũ) |||^2 = E[ || E[u] − E^L(ũ) ||_{L^2}^2 ].

In Figure 29 we find a larger relative weighted MC L^2-error compared to MLMC, namely the MC error is about 1.2 times larger.

Figure 29: Relative mean square L^2-errors of the solution for 10 different realizations, the weights (1, 1, 1), M = (16 M_3, 4 M_3, M_3) and H = (1/4, 1/8, 1/16).
In the two-dimensional case we consider the coarse problem

  div( K(x, x/ε, θ, ω) ∇u ) = f  in D = (0, 1)^2,   (14.2)

where B is a log-normal distributed random field B = e^K, with K having zero mean and covariance function cov(x, x′) = σ^2 exp( −|x − x′|^2 / λ^2 ), σ = 2, λ = 0.04. For A we choose

  A(x, θ) = 2 + |θ_1 sin(2π x_1)| + |θ_2 sin(2π x_2)| + |θ_3 sin(π x_1)|
L
L
with grid size HL . As reference we solve (14.2) with ml = mref = 50 and Ml = M ref = 1000 for
all levels and calculate the mean over both the levels and the number of realizations, i.e,
i
EM ref ,L
1
=
L
l=1
1
M ref
M ref
uk .
l
k=1
As before we have three dierent levels and we equate the computational costs for the com 2 +m2 2 +m 2
2
2
2
M3 H3 +M2 H2 +M1 H1
2
H3
0.9
MLMC
MC
0.8
0.7
0.6
0.5
0.4
0.3
0.2
0.1
0
20
40
60
80
100
120
140
160
180
200
i
Figure 30: Relative L2 -errors ei
MC and eMLMC of the solution for 200 dierent realizations, M =
(32, 32, 16), m = (50, 40, 20) .
Figure 30, we compare the relative L2 -errors for MLMC and MC with M = (32, 32, 16) and
m = (50, 40, 20) for 200 independent sets of realization for the coarse problem, i.e., we consider
ei
MLMC (uL )
ei (uL ) =
MC
i
EM ref ,L (uL ) E i,L (uL )
L2 (D)
EM ref ,L (uL )
L2 (D)
with 1 i 200. Note that M is chosen based on the calculations presented in [10] (we have
also veried these calculation for nite volume methods) and we do not change the number of
realizations M and m along the x-axes, here we use dierent random numbers. Therefore the
error does not decrease like in the other gures. On average we gain almost a factor of 5
E200 (eMLMC ) =
0.1411
E200 (eMC ) =
0.6851
89
and also the standard deviation for the MLMC approach is much smaller:

std_MLMC ≈ 0.0324,   std_MC ≈ 0.0565.

Note that the standard deviations of e^i_MLMC(u_L) and e^i_MC(u_L) coincide with the root mean square errors considered in the theoretical part (cf. Section 13.2). The standard deviation for the MC approach is about 1.7 times larger than the one for MLMC. If we consider the mean square errors as in the previous numerical examples, we gain a factor of 3.
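The variance reduction behind these numbers can be reproduced on a toy problem. The following sketch (a hypothetical scalar model with E[Z²] = 1 standing in for the PDE solution; all names and cost ratios are illustrative, not the thesis' code) compares plain MC with a two-level MLMC estimator of roughly equal cost:

```python
import random

def fine(z, rng):
    # "fine" approximation of the quantity of interest E[Z^2] = 1 (cost 50)
    return z * z + 0.01 * rng.gauss(0.0, 1.0)

def coarse(z, rng):
    # cheap "coarse" approximation with a larger model error (cost 1)
    return z * z + 0.3 * rng.gauss(0.0, 1.0)

def mc_estimate(rng, m=40):
    # plain MC on the fine level, total cost ~ 40 * 50 = 2000
    return sum(fine(rng.gauss(0, 1), rng) for _ in range(m)) / m

def mlmc_estimate(rng, m1=1000, m2=19):
    # coarse mean (cost ~1000) plus a correction from a few coupled pairs
    est = sum(coarse(rng.gauss(0, 1), rng) for _ in range(m1)) / m1
    corr = 0.0
    for _ in range(m2):
        z = rng.gauss(0, 1)  # same realization on both levels
        corr += fine(z, rng) - coarse(z, rng)
    return est + corr / m2

def std(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

rng = random.Random(0)
mc = [mc_estimate(rng) for _ in range(200)]
ml = [mlmc_estimate(rng) for _ in range(200)]
print("spread MC:", round(std(mc), 3), " spread MLMC:", round(std(ml), 3))
```

At equal cost the MLMC spread comes out a factor of roughly 2-3 smaller here; the exact factor depends on the cost and error ratios of the levels, just as in the experiments above.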
Non-separable case  In this section, we consider a non-separable case, i.e., the uncertainties at the macro-scale and the micro-scale are not separable. As mentioned, in this case we use a weighted MLMC approach. For comparison we also use the standard MLMC approach based on the elements on the diagonal of Table 3, where many upscaled computations are not taken into account (see the discussion in Section 13.3). We consider
K(x, x/ε, ω, ω′) = exp( A(x, ω) B(x/ε, ω′) ),

where B is a random field with expected value E[B] = 0 and the Gaussian covariance function cov(x, x′) = σ² exp(−|x − x′|²/η²), with σ² = 2 and η = 0.04. We assume A(x, ω) is an independent (in space) discrete random variable that takes the values A_i = 1 + 0.1i for 0 ≤ i ≤ 5 uniformly. For each A_i we calculate the upscaled coefficient, denoted by K*_i(ω′). For the spatial correlation on the macroscale, we choose

K*(x, ω, ω′) = K*_i(ω′)  if  i ≤ ξ (x₁ + x₂)/2 < i + 1,
with ξ i.i.d. between 0 and 5. Since we do not know the exact solution of the coarse problem, we use

E[u*] ≈ (1/L) Σ_{l=1}^{L} E_{M_ref}(u*_{l,H_l})

as reference with M_ref = 1000. We consider the weighted MLMC and the MLMC approach.
In both cases we use M = (200, 100, 50). That guarantees the same costs for solving the coarse
grid problems for the weighted MLMC and the MLMC approach. Note that the total costs for
the MLMC approach are higher than for weighted MLMC, since we compute samples of the
homogenized coefficients for M_l and M_l − M_{l−1} realizations, respectively. For the MC approach we equate the costs, i.e., we use

M = Σ_{l=1}^{L} M_l H_l⁻² / H_L⁻²

samples.
For weighted MLMC we use the weights (1, 6/7, 4/7). This choice guarantees the same order of the systematic error for MC and MLMC, and the constant in front of the exact solution is one, i.e., we have

MLMC: ‖C u − u_{L,H_L}‖ = O(H_L + ε_L),
MC:   ‖u − u_{L,H_L}‖ = O(H_L + ε_L),

with C = 1. Our numerical results yield the following relative mean square L²-errors over 200 samples:

MLMC: 0.0050,  weighted MLMC: 0.0016,  MC: 0.0023.
In Figure 31 the L²-errors for MLMC, weighted MLMC and MC are plotted. Note that we do not change the number of realizations M along the x-axis; we use different random numbers. Therefore the error does not decrease as in the other figures. We see from this figure that weighted MLMC is more accurate.
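Under the reading that the weights scale the per-level means, the bookkeeping of the weighted estimator can be sketched as follows (a generic helper with illustrative names, not the thesis' implementation):

```python
def weighted_mlmc(level_samples, weights):
    """Weighted MLMC combination: sum_l w_l * (arithmetic mean at level l).

    level_samples[l] holds the level-l samples (for l >= 1 the samples of the
    level difference u_l - u_{l-1}); weights is e.g. (1, 6/7, 4/7) as above.
    """
    est = 0.0
    for w, samples in zip(weights, level_samples):
        est += w * sum(samples) / len(samples)
    return est

# with every per-level sample equal to 1, the estimate is the sum of weights
print(weighted_mlmc([[1.0, 1.0], [1.0], [1.0, 1.0, 1.0]], (1.0, 6 / 7, 4 / 7)))
```

Standard (unweighted) MLMC is recovered by setting every weight to one.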
Figure 31: L²-errors for weighted MLMC, MLMC and MC with M = (200, 100, 50) and the weights (1, 6/7, 4/7).
The reference solution, the weighted MLMC solution and the MC solution are plotted in Figure 32.

Figure 32: Plots of the reference solution and the solutions calculated with weighted MLMC and MC.
Part III.
15. Preliminaries
15.1. Physical quantities
In this part we consider two-phase flow and transport in porous media. For further understanding we introduce some physical quantities (cf. [39]). In an unsaturated medium one distinguishes in general three different phases: the wetting phase, the non-wetting phase and a gas phase. To form different phases the fluids must not mix; fluids with different physical properties (e.g., hydrophilic and hydrophobic) can form different phases. Since gases are always miscible, there exists only one gas phase. We consider only saturated zones with two different phases, the wetting and the non-wetting phase, in the following referred to as the water (w) and oil (o) phase, respectively. By a porous medium we denote a solid body with pores which are at least partly connected. The rock properties, determined as volume fractions and distributions of the pores, are parameters of the multi-phase flow in a reservoir. The void volume fraction of the medium is the rock porosity, i.e.,
φ := (void volume) / (medium volume),   0 ≤ φ ≤ 1.

The saturation of phase α is the fraction of the void volume filled by this phase,

S_α := (volume of phase α) / (void volume),   0 ≤ S_α ≤ 1.

Therefore we have Σ_α S_α = 1.
The permeability k indicates the ability of the medium to transmit a single fluid. In general the permeability in one direction depends on the other directions, i.e., k is a tensor. If there is more than one phase, the permeability of each phase depends on the presence of the other fluids. Thus we introduce the so-called relative permeability k_{rα}(S_w), 0 ≤ k_{rα} ≤ 1, as a dimensionless, in general nonlinear, function of the saturation. Due to the tension at the interface of two different phases the phase pressures p_α differ; the capillary pressure is defined as this difference,

p_c := p_o − p_w.

Usually it is assumed that the capillary pressure depends on the saturation only. With ρ_α and μ_α we denote the density and viscosity of phase α, respectively.
v_α = − (k_{rα}(S_w)/μ_α) k (∇p_α − ρ_α G),   (15.2)

with the gravitational pull-down force G depending on the gravitational constant g. In the following we rewrite the two continuity equations into a system consisting of a pressure and a saturation (or fluid-transport) equation. First we introduce the phase mobility

λ_α(S_w) = k_{rα}(S_w)/μ_α.
Pressure equation  After expanding the derivatives in space and time and dividing by the phase densities, we get for (15.1)

S_α ∂φ/∂t + φ ∂S_α/∂t + (φ S_α/ρ_α) ∂ρ_α/∂t + div v_α + (v_α · ∇ρ_α)/ρ_α = q_α.   (15.3)
With

v = v_w + v_o,   q = q_w + q_o,

we get

(S_w + S_o) ∂φ/∂t + φ ∂(S_w + S_o)/∂t + (φ S_w/ρ_w) ∂ρ_w/∂t + (φ S_o/ρ_o) ∂ρ_o/∂t + div v + (v_w · ∇ρ_w)/ρ_w + (v_o · ∇ρ_o)/ρ_o
= ∂φ/∂t + (φ S_w/ρ_w) ∂ρ_w/∂t + (φ S_o/ρ_o) ∂ρ_o/∂t + div v + (v_w · ∇ρ_w)/ρ_w + (v_o · ∇ρ_o)/ρ_o = q.
Here we used 1 = Sw + So . For simplicity we assume that the rock and the uid phases are
incompressible, i.e all terms with derivatives (spatial and time) of the porosity and the phase
densities vanish. The above equation reduces to
div v = q

with

v = v_w + v_o
  = −(k_{rw}(S_w)/μ_w) k (∇p_w − ρ_w G) − (k_{ro}(S_w)/μ_o) k (∇p_o − ρ_o G)
  = −(λ_w(S_w) k (∇p_w − ρ_w G) + λ_o(S_w) k (∇p_o − ρ_o G))

and the total mobility λ(S_w) = λ_w(S_w) + λ_o(S_w). Inserting p_w = p_o − p_c, we end up with the following elliptic equation for the pressure:

−div( λ(S_w) k ∇p_o − k [λ_w(S_w) ρ_w + λ_o(S_w) ρ_o] G − λ_w(S_w) k ∇p_c(S_w) ) = q.   (15.4)
Saturation equation  For the water phase the continuity equation reads

0 = ∂(φ S_w ρ_w)/∂t + div(ρ_w v_w) − ρ_w q_w,

which under the incompressibility assumptions reduces to

0 = φ ∂S_w/∂t + div v_w − q_w.

We rewrite the water velocity in terms of the total velocity v: with λ = λ_w + λ_o,

v_w = (λ_w/λ) v + (λ_o v_w − λ_w v_o)/λ
    = (λ_w/λ) v + (λ_w λ_o/λ) k (∇p_c + (ρ_w − ρ_o) G),

where we used Darcy's law (15.2) for both phases and p_c = p_o − p_w. With the fractional flow function

f(S_w) = λ_w(S_w)/λ(S_w),

the saturation (or fluid-transport) equation becomes

φ ∂S_w/∂t + div(f v) + div(λ_o f k ∇p_c) + (ρ_w − ρ_o) div(λ_o f k G) = q_w.   (15.5)
To have a complete description of the model, boundary conditions (e.g., no-flow) and initial conditions have to be imposed. The saturation equation (15.5) is a parabolic equation in general, but on a reservoir scale the effects of the capillary pressure are usually dominated by the viscous and gravity forces, represented by the terms containing f v and G, respectively. Then the saturation equation becomes hyperbolic.
Two-phase flow and transport equation  In the following we assume that the displacement is dominated by viscous effects, i.e., we neglect the effects of gravity, compressibility, and capillary pressure, and consider the porosity to be constant. Since p_o − p_w = const we drop the subscript, and with S we denote the water saturation. Then Darcy's law of each phase reads v_α = −(k_{rα}(S)/μ_α) k ∇p, and the system becomes

−div(λ(S) k ∇p) = q,  x ∈ D,   (15.6)
φ ∂S/∂t + div(v f(S)) = q_w,  x ∈ D, t ∈ [0, T],   (15.7)

with v = −λ(S) k ∇p, almost everywhere in Ω. In our application we assume the permeability to be random. To ensure the existence of a unique solution p of (15.6) we assume the permeability k is uniformly bounded and coercive, in the sense that there exist two positive deterministic numbers 0 < k_min ≤ k_max such that for any ξ ∈ ℝ^d and any 1 ≤ i, j ≤ d

k_min |ξ|² ≤ ξᵀ k(x, ω) ξ,   |k_ij(x, ω)| ≤ k_max,
The above equations (15.6) and (15.7) are non-linearly coupled, mainly through the saturation-dependent mobilities in the pressure equation (15.6) and through the pressure-dependent velocity v in the saturation equation (15.7). For each time step we solve (15.6) for the velocity with the saturation of the previous time step; with this velocity we solve (15.7) for the saturation. In the following Sections 15.3 and 16 we show how we solve the pressure equation (15.6) for many different realizations of the permeability, and in Section 15.4 we introduce an implicit upwind scheme in combination with the Newton method to determine the solution of the hyperbolic equation (15.7). In Section 17 we combine these techniques with a multi-level Monte Carlo approach to estimate the expectation of the water saturation S. In our numerics (cf. Sec. 18) we consider two different cases: single-phase flow (λ(S) = 1 and f(S) nonlinear) and two-phase flow. In both cases, we compare the saturation field at a certain time instant. Note that in this context the equation considered in Parts I and II can be seen as the pressure equation for single-phase flow.
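The sequential coupling just described, i.e., freezing the saturation while solving for the velocity and then advancing the transport, has the following shape; the solver callbacks are stand-ins for the mixed MsFEM and upwind solvers described in the following sections:

```python
def solve_coupled(pressure_solver, transport_step, s0, n_steps):
    """Sequential splitting: at each step solve (15.6) with lambda(S^k) for
    the velocity, then advance (15.7) with that velocity to obtain S^{k+1}."""
    s = s0
    history = [s0]
    for _ in range(n_steps):
        v = pressure_solver(s)    # velocity from the pressure equation
        s = transport_step(v, s)  # saturation update with frozen velocity
        history.append(s)
    return history

# toy check with scalar stand-ins for the two solvers
hist = solve_coupled(lambda s: 1.0, lambda v, s: s + 0.1 * v, 0.0, 3)
print(hist)
```

The same loop structure applies whether the saturation update is explicit or, as below, a fully implicit Newton solve.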
λ(x) k(x) ∇p · n = g(x) on ∂D.   (15.8)
For simplicity, we assume Neumann boundary conditions. With V_h ⊂ H(div, D) := {u : u ∈ (L²(D))^d, div u ∈ L²(D)} and Q_h ⊂ L²(D)/ℝ we denote finite dimensional spaces, and let V_h⁰ = V_h ∩ H₀(div, D), where H₀(div, D) is H(div, D) with homogeneous Neumann boundary conditions. By multiplying with test functions (u_h, b_h) ∈ V_h⁰ × Q_h and integrating by parts we get the numerical approximation of (15.8) on the fine grid. It reads: find {v_h, p_h} ∈ V_h × Q_h such that v_h · n = g_h on ∂D and

((λk)⁻¹ v_h, u_h) − (div u_h, p_h) = 0       for all u_h ∈ V_h⁰,
(div v_h, b_h) = (q, b_h)                    for all b_h ∈ Q_h,   (15.9)
with the usual L² inner product (·, ·). The idea behind the mixed MsFEM is to approximate the velocity using multiscale basis functions which contain the small-scale features. To this end the multiscale basis functions are constructed for each edge of every block. To approximate the pressure field we use piecewise constant functions. To construct the multiscale basis we define a partitioning of the domain into polyhedral elements with ∪ D_i = D. Let I be the multi-index set of index pairs of two neighboring blocks, i.e., if ∂D_i ∩ ∂D_{i′} ≠ ∅, then α = {ii′} ∈ I. This interface we denote by Γ_α := ∂D_i ∩ ∂D_{i′} for α ∈ I. For each α we define a multiscale basis function ψ_{α,k} = k(x)∇w_{α,k}
[Figure 33: two neighboring coarse blocks K_i and K_i′, the interface normal n_ii′ and the boundary condition g_α on the interface; no-flow boundary conditions on the remaining boundary.]
div(k(x)∇w_{α,k})|_{D_i} = { 1/|D_i|          if ∫_{D_i} q = 0,
                             q / ∫_{D_i} q    else,

div(k(x)∇w_{α,k})|_{D_{i′}} = { −1/|D_{i′}|          if ∫_{D_{i′}} q = 0,
                                −q / ∫_{D_{i′}} q    else,              (15.10)

k(x)∇w_{α,k} · n_{ii′} = { g_α    on Γ_α,
                           0      else,
where the choice of g_α will be discussed later and n_{ii′} is the normal pointing from D_i to D_{i′} (see Figure 33).
Then the finite dimensional approximation space of the velocity is defined as

V_h(k) := ⊕_{α∈I} {ψ_{α,k}},   V_h⁰(k) := V_h(k) ∩ H₀(div, D).
Note that this space V_h(k) corresponds to a fixed realization of the permeability. In the stochastic framework we determine V_h(k_j) for many realizations k_j of the permeability and construct the approximation space V_h in two different ways, which we introduce in Sections 16.1 and 16.2. The approaches are based on ensemble level methods.
The accuracy of the mixed MsFEM can be affected by the choice of the boundary conditions g_α in (15.10). In the following subsections we introduce two different kinds of boundary conditions, local and global ones. Note that one has to solve a single-phase flow problem to obtain the global boundary conditions.
15.3.1. Mixed MsFEM using local boundary conditions
Piecewise constant coarse-scale fluxes on the boundary of the coarse elements, i.e., g_α = 1/|Γ_α|, are used in [21]. However, the choice of piecewise constant boundary conditions can lead to large
errors between the solution of the original problem and the mixed MsFEM solution. In general, it is possible to consider any boundary conditions that involve local information, e.g., permeabilities in local domains. In this case the boundary condition does not contain any fine-scale features that are present in the velocity of the reference solution.
with S^k(x) := S(x, t_k) and Δt := t_{k+1} − t_k. Then we use the following approximation instead of equation (15.7): we write

S(x, t) ≈ Σ_{i=1}^{N} S_i(t) χ_i(x)   (15.12)

with

χ_i(x) = { 1  if x ∈ D_i,
           0  else.

If we multiply (15.12) with a piecewise constant basis function χ_i and integrate over the domain D, we get by applying the Gauss theorem

(|D_i|/Δt)(S_i^{k+1} − S_i^k) + θ ∫_{∂D_i} [f(S^{k+1}) v] · n + (1 − θ) ∫_{∂D_i} [f(S^k) v] · n = Q_i
with Q_i = ∫_{D_i} q_w. Since the flux f(S) over the boundary may be discontinuous, we replace it with a numerically consistent and conservative flux

f(S)_{ii′} = { f(S_i)     if v · n_{ii′} ≥ 0,
               f(S_{i′})  if v · n_{ii′} < 0,

for the normal n_{ii′} pointing from D_i to the neighbor cell D_{i′}. The boundary integral reads

∫_{∂D_i} [f(S^k) v] · n = Σ_{Γ_{ii′} = ∂D_i ∩ ∂D_{i′}} ∫_{Γ_{ii′}} [f(S^k) v] · n_{ii′}
                        = Σ_{Γ_{ii′} = ∂D_i ∩ ∂D_{i′}} f(S^k)_{ii′} ∫_{Γ_{ii′}} v · n_{ii′}
                        = Σ_{Γ_{ii′} = ∂D_i ∩ ∂D_{i′}} f(S^k)_{ii′} v_{ii′}

with v_{ii′} := ∫_{Γ_{ii′}} v · n_{ii′}. The scheme then reads

S_i^{k+1} = S_i^k − (Δt/|D_i|) Σ_{ii′} [θ f(S^{k+1})_{ii′} + (1 − θ) f(S^k)_{ii′}] v_{ii′} + (Δt/|D_i|) Q_i.
We consider a fully implicit discretization, i.e., θ = 1, to allow arbitrary time steps. In the following we assume that S_i^k is known for all i, 1 ≤ i ≤ N, and we are interested in the unknown saturation in the whole reservoir at t = t_{k+1}. We define the vector S = (S_i)_{1≤i≤N} and the function

H(S) = ( S_i − S_i^k + (Δt/|D_i|) Σ_{ii′} f(S)_{ii′} v_{ii′} − (Δt/|D_i|) Q_i )_{1≤i≤N}.

The unknown saturation vector is a zero of this function, and we use the Newton method to compute the saturation for each time step. The Newton method reads

S^{n+1} = S^n − (DH(S^n))⁻¹ H(S^n)

with the Jacobian matrix DH(S) of H(S). Note that S^n denotes the Newton iterate after n steps and does not coincide with the saturation at time t_n. However, if we use the start vector S⁰ = S^k, the Newton method converges to the saturation of the next time step. Crucial for this method is to know the velocity in advance. To solve the coupled system of the pressure and saturation equations (15.6) and (15.7), we solve (15.6) with the mixed MsFEM for the velocity with the total mobility depending on the saturation of the previous time step, λ(S^k), and with this velocity we solve (15.7) with the above scheme to compute S^{k+1}.
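A one-dimensional sketch of this scheme, with a hypothetical Buckley-Leverett-type fractional flow and a constant positive velocity (so that the Jacobian DH is lower bidiagonal and each Newton update reduces to a forward substitution); this is an illustration, not the thesis' Matlab code:

```python
def frac_flow(s):
    # hypothetical Buckley-Leverett-type fractional flow, not the thesis' data
    return s * s / (s * s + (1.0 - s) ** 2)

def dfrac(s, h=1e-7):
    # derivative of the fractional flow by central differences
    return (frac_flow(s + h) - frac_flow(s - h)) / (2.0 * h)

def implicit_upwind_step(s_old, v, dt, dx, s_in=1.0, tol=1e-10, it_max=50):
    """One backward-Euler step (theta = 1, phi = 1) of
    S_i - S_i^k + (v dt/dx)(f(S_i) - f(S_{i-1})) = 0, solved with Newton.
    For v > 0 the Jacobian is lower bidiagonal, so each Newton update
    S^{n+1} = S^n - DH(S^n)^{-1} H(S^n) is a forward substitution."""
    n, c = len(s_old), v * dt / dx
    s = list(s_old)  # Newton start vector S^0 = S^k
    for _ in range(it_max):
        delta, resid = [0.0] * n, 0.0
        for i in range(n):
            up = s_in if i == 0 else s[i - 1]
            h_i = s[i] - s_old[i] + c * (frac_flow(s[i]) - frac_flow(up))
            j_lo = 0.0 if i == 0 else -c * dfrac(s[i - 1])
            d_lo = 0.0 if i == 0 else delta[i - 1]
            delta[i] = -(h_i + j_lo * d_lo) / (1.0 + c * dfrac(s[i]))
            resid = max(resid, abs(h_i))
        s = [x + d for x, d in zip(s, delta)]
        if resid < tol:
            break
    return s

# inflow saturation 1 entering a dry domain of 10 cells, CFL number 0.5
s_new = implicit_upwind_step([0.0] * 10, v=1.0, dt=0.05, dx=0.1)
```

One implicit step produces a monotone saturation front near the inlet; because the scheme is fully implicit, the time step is not restricted by a CFL condition.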
to statistically estimate the upscaled two-phase functions for arbitrary realizations. We apply an
ensemble level approach in combination with the above described mixed multiscale FEM (cf. Sec
15.3).
The main idea of ensemble level multiscale methods is to use precomputed multiscale basis
functions depending on a few ensemble members to solve the problem for any member of the
ensemble.
In this section we introduce two ensemble level methods for mixed MsFEM, the no-local-solve-online ensemble level method (NLSO) and the local-solve-online ensemble level method (LSO). The computations are divided into offline and online computations. In both methods we construct sets of basis functions based on a few (N_l) realizations of the permeability in the offline stage. We choose the realizations randomly, or we use proper orthogonal decomposition (POD). We use POD to find the best local basis functions with respect to the velocity.
In general we can also choose the offline realizations following the techniques used in reduced basis methods (cf. [51, 52]). For both methods we can use boundary conditions based either on global single-phase flow information or on local information.
In the NLSO approach the velocity approximation space is

V_h := ⊕_{j=1}^{N_l} V_h(k_j),

i.e., one does not compute a basis for each realization, but the coarse space spanned by all precomputed basis functions is used to solve the equation on the coarse grid. Therefore we solve the equations on a coarse space of dimension N_l |I| (since we have N_l basis functions on each edge instead of one).
First we solve (15.8) on a coarse grid with the help of the precomputed multiscale basis. Then
we use the velocity solution to solve the transport equation (15.7) with an implicit scheme to
determine the saturation.
Since the basis functions do not change during the online stage, we can precompute the integrals appearing in the coarse-scale system. For POD we take the N_POD eigenvalues with the largest absolute value and the corresponding eigenvectors V_i, and scale each eigenvector by the sum of its entries, i.e., for each column of the matrix it holds Ṽ_i = (1/Σ_{j=1}^{N_l} V_i^j) V_i. The POD multiscale basis functions are the columns of B̃ = BṼ. We have implemented the POD approach using the L² inner product of the velocity. This approach increases the accuracy of the approximation of the velocity and the saturation (cf. Sections 19.1, 20.1). In the POD approach the improvement is larger for the velocity, since we consider the L² inner product of the velocity.
It is not clear if this inner product is optimal for the quantity of interest, the water saturation. The choice of an optimal inner product for the saturation will be investigated in future work. When we combine the ensemble level mixed method with multi-level Monte Carlo, we have not observed any gain from the POD approach (cf. Sections 19.3, 20.3).
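The construction of a best local basis from snapshots can be sketched via the eigendecomposition of the weighted Gram matrix. The sketch below uses power iteration with deflation in place of a library eigensolver, and a discrete weight vector stands in for the L² inner product of the velocity; all names are illustrative, not the thesis' Matlab implementation:

```python
def gram(snapshots, weight):
    # Gram matrix of the snapshots in the weighted (discrete L2) inner product
    n = len(snapshots)
    return [[sum(w * a * b for w, a, b in zip(weight, snapshots[i], snapshots[j]))
             for j in range(n)] for i in range(n)]

def dominant_eigpair(a, iters=500):
    # power iteration for the largest eigenpair of a small symmetric matrix
    n = len(a)
    v, lam = [1.0] * n, 1.0
    for _ in range(iters):
        w = [sum(a[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w) or 1.0
        v = [x / lam for x in w]
    nrm = sum(x * x for x in v) ** 0.5
    return lam, [x / nrm for x in v]

def pod_basis(snapshots, weight, n_pod):
    # POD modes as linear combinations of the snapshots; deflate after each mode
    a = gram(snapshots, weight)
    modes = []
    for _ in range(n_pod):
        lam, v = dominant_eigpair(a)
        modes.append([sum(v[j] * snapshots[j][k] for j in range(len(snapshots)))
                      for k in range(len(snapshots[0]))])
        for i in range(len(a)):
            for j in range(len(a)):
                a[i][j] -= lam * v[i] * v[j]  # deflation: a <- a - lam v v^T
    return modes

lam, _ = dominant_eigpair(gram([[2.0, 0.0], [1.0, 1.0]], [1.0, 1.0]))
```

Choosing the weight vector is exactly the choice of inner product discussed above; a saturation-adapted weight would change which modes are kept.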
Having solved (15.10) for each realization (cf. Section 15.3), we define the space V_h^α := ⊕_{j=1}^{N_l} {w_{α,k_j}} for every edge α = {ii′}. In this space V_h^α we approximate the solution w_{α,k} of the auxiliary problem (15.10) with the coefficient k(x) = k(x, ω) for some random ω in the online phase. As approximation space for the velocity we use the span of the resulting functions ψ_{α,k} = k(x)∇w_{α,k}.
For each computed multiscale velocity we solve the transport equation (15.7) to compute the
saturation.
In this approach we use a different multiscale basis for each coefficient. That is the reason why the integrals ∫ ψ_{α,k} · ψ_{α′,k} over a coarse block cannot be precomputed. However, since each basis function ψ_{α,k} is a linear combination of the precomputed basis functions ψ_{α,k_j}, 1 ≤ j ≤ N_l, the calculations can be done inexpensively using precomputed quantities. As in the previous approach, NLSO, we can use POD to find the best local basis. Again we observe an improvement in the numerical simulations for approximating the velocity and the saturation (cf. Sections 19.1, 20.1). We have not observed any gain from the POD approach in combination with the multi-level Monte Carlo approach.
In the NLSO approach the velocity is represented as

v = Σ_{α∈I} Σ_{j=1}^{N_l} c_α^j ψ_{α,k_j},

whereas in the LSO approach

v = Σ_{α∈I} c_α ψ_{α,k} = Σ_{α∈I} c_α Σ_{j=1}^{N_l} c̃_α^j ψ_{α,k_j}.
Figure 34: Steps of the two multiscale methods, NLSO and LSO. Offline stage (both): define V_h(k_j) = ⊕_α ψ_{α,k_j}. Online stage, NLSO: solve for the velocity in V_h^{NLSO} = ⊕_{j=1}^{N_l} V_h(k_j); LSO: for each edge α solve for ψ_{α,k_{m_l}} in ⊕_{j=1}^{N_l} ψ_{α,k_j}, then solve for the velocity in V_h^{LSO}. Finally solve for the saturation.
This is exactly the same representation if we have the freedom to choose the c_α^j. Note that the coefficients c̃_α^j in LSO are determined from the solution of the local problems that compute the basis functions. As mentioned above, global boundary conditions are more expensive than local ones, but more accurate; this was demonstrated theoretically and numerically in [49, 29]. This is particularly true when the problem is solved multiple times. Next we explain that LSO-type approaches have to use local boundary conditions in the online stage, even if an accurate solution is sought.
The costs of the online computations of the NLSO approach do not depend on the choice of the boundary conditions g_α. That is why it is reasonable to choose limited global boundary conditions. The first part of the online computations of the LSO method is to solve the auxiliary problem (15.10); boundary conditions using global information would increase the computational cost of the method. Therefore we choose local boundary conditions for the online part of the LSO method (and global ones for the offline part). With this choice of boundary conditions the computational costs for NLSO and LSO are comparable. The NLSO approach is more accurate than the LSO method if global boundary conditions are used, as we will show in Sections 19.1 and 20.1.
the NLSO and LSO approach to calculate the water saturation S_l, where N_l realizations of the permeability field are chosen to compute the basis for the whole ensemble. In this case different levels correspond to different accuracies of the velocity approximation space V_h. We are interested in the expected value of the saturation with a large number of precomputed basis functions N_L. We
use levels which are less accurate and less computationally expensive to approximate the quantity of interest. In particular, we assume ‖S_l − S‖ ≈ N_l^{−β}, with N_1 ≤ N_2 ≤ ⋯ ≤ N_L and β > 0. As MLMC approximation of E[S_L] we use

E^L(S_L) = Σ_{l=1}^{L} E_{M_l}(S_l − S_{l−1}),

where E_{M_l} denotes the arithmetic mean with M_l samples. Again, we consider the root mean square error

‖E[S_L] − E^L(S_L)‖ := E[ ‖E[S_L] − E^L(S_L)‖² ]^{1/2},

which can be bounded by terms of the form 1/√M_1 + Σ_{l=2}^{L} N_{l−1}^{−β}/√M_l. Choosing the numbers of samples M_l accordingly, we obtain

‖E[S_L] − E^L(S_L)‖ = O(N_L^{−β}).   (17.1)
To predict the saturation field we select at each level l a different number of realizations of the permeability field, N_l with N_1 ≤ N_2 ≤ ⋯ ≤ N_L, to build a low dimensional approximation space V_h for the velocity that captures both small scale (sub-coarse-grid) spatial variability in the permeability data and stochastic variability due to uncertainties in the data. In particular, we calculate the velocity at level l for M_l realizations with one of the ensemble level mixed MsFEMs and compute the saturations S_{l,m}, 1 ≤ l ≤ L and 1 ≤ m ≤ M_l, with the upwind method. With these saturations we build the MLMC approximation of the expected value of the fine scale saturation. For clarity we summarize the basic steps below.
Oine computations
1. Generation of the coarse grid.
Partition the domain into a coarse grid. The coarse grid is a partitioning of the fine grid where each cell in the fine grid belongs to a unique block in the coarse grid and each coarse grid block is connected (cf. [7]).
2. Select NL realizations from the stochastic permeability distribution kj (x), 1 j NL .
3. For 1 ≤ j ≤ N_L, solve the local problems (15.10) for the basis functions ψ_{α,k_j}, with the global boundary conditions

g_α(k_j) = v_j · n_{ii′} / ∫_{Γ_α} v_j · n_{ii′} ds.

4. NLSO: Construction of the multiscale approximation space V_h^{NLSO} at level l, 1 ≤ l ≤ L:
Define V_h^{NLSO} = ⊕_{j=1}^{N_l} V_h(k_j).
Online computations

5. Multi-level mixed MsFEM computations for estimating the expectation at level l, 1 ≤ l ≤ L:
Select M_l realizations of the permeability k_{m_l}, 1 ≤ m_l ≤ M_l.
LSO: Construction of the approximation space V_h^{LSO}: for each edge α define V_h^{LSO} = ⊕ {ψ_{α,k_{m_l}}}, obtained by solving (15.10) in the space spanned by the N_l precomputed basis functions.
Solve the two-phase flow and transport problem (15.6)-(15.7) for S_{l,m}. At each time step, the velocity field is constructed by solving (15.6) on a coarse grid using NLSO or LSO, respectively.
Calculate the arithmetic mean

E_{M_l}(S_l − S_{l−1}) = (1/M_l) Σ_{m=1}^{M_l} (S_{l,m} − S_{l−1,m}).
Finally, combine the level means into the MLMC estimate

E^L(S_L) = Σ_{l=1}^{L} E_{M_l}(S_l − S_{l−1}).
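The online combination of the level means can be sketched as follows; `sample_level(l, seed)` is a stand-in for one coarse solve at level l that reuses the same permeability realization (seed) on both levels of a correction term:

```python
import random

def mlmc_estimate(sample_level, m_per_level, rng):
    """E^L(S_L) = sum_l (1/M_l) sum_m (S_{l,m} - S_{l-1,m}); at level 0 the
    plain level mean is used (S_{-1} := 0)."""
    est = 0.0
    for l, m in enumerate(m_per_level):
        acc = 0.0
        for _ in range(m):
            seed = rng.random()  # one permeability realization
            s_l = sample_level(l, seed)
            s_prev = sample_level(l - 1, seed) if l > 0 else 0.0
            acc += s_l - s_prev
        est += acc / m
    return est

# deterministic toy levels: level l returns 1 - 2^{-l}; the sum telescopes
val = mlmc_estimate(lambda l, seed: 1.0 - 2.0 ** (-l), [4, 2, 2],
                    random.Random(0))
print(val)
```

With deterministic levels the estimator telescopes exactly to the top-level value, which is the sanity check for the coupling of seeds across levels.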
NLSO and LSO approach. In the LSO approach the multiscale basis of each edge can be computed in parallel, so that the computational time for the LSO ansatz is much smaller. As in Part II, in our simulations we equate the costs of solving the pressure equation for MC and MLMC and compare the accuracy of these approximations. Although for single-phase flow this comparison is accurate (up to the cost of computing the basis functions), one needs to take into account the cost of solving the saturation equation in two-phase flow and transport simulations. To solve for the saturation we use a coarse-grid velocity field, so the computational costs at each time instant are the same at any level. However, the cost of solving the pressure equation is larger than that for the saturation equation on a coarse grid, because there are more degrees of freedom and the convergence of iterative solvers requires many iterations for multiscale problems. Since in the NLSO method we use several basis functions per coarse edge, and in the LSO multiscale method we have to calculate the velocity approximation space online, the cost of computing the pressure solution can be several times larger than that of the saturation equation, because the coarse system is several times larger for the pressure equation. That is why we ignore the costs of the saturation computation on a coarse grid in the two-phase flow examples.
We have the following computational costs for MLMC based on the pressure solves:

W_MLMC = Σ_{l=1}^{L} (2 H⁻¹(H⁻¹ − 1) N_l)² M_l,

where 2H⁻¹(H⁻¹ − 1) is the number of interior coarse edges, so that 2H⁻¹(H⁻¹ − 1) N_l is the dimension of the coarse velocity space at level l.
In our numerical simulations, we will equate the work and compare the accuracy of MLMC and
MC.
If we increase the dimension of the coarse space (which corresponds to N_l), we observe that the accuracy of the multiscale methods increases (see [4] and the discussions below). However, in general this accuracy cannot be estimated a priori, so we propose an empirical procedure to estimate the convergence rate based on simulations with few samples in Sections 19.2 and 20.2. With this estimated rate we select the numbers of realizations M_l, based on (17.1), for the MLMC approach.
cov(x, y) = σ² exp( −|x₁ − y₁|²/(2η₁²) − |x₂ − y₂|²/(2η₂²) ),   (18.1)

and the exponential covariance function

cov(x, y) = σ² exp( −|x₁ − y₁|/η₁ − |x₂ − y₂|/η₂ ).   (18.2)
As previously, we denote by η₁ and η₂ the correlation lengths in each dimension, and σ² = E(Y²) is a constant that represents the variance of the permeability field, which we choose as σ² = 2 in all examples. We use the Karhunen-Loève expansion to parameterize these permeability fields. In the first case, log-Gaussian, we expect faster decay of the eigenvalues compared to the second case, log-Exponential, for a given set of correlation lengths. In both cases the permeability field Y(x) is given on a 100 × 100 fine Cartesian grid. In particular we consider:
Isotropic Gaussian field: correlation lengths η₁ = η₂ = 0.2, stochastic dimension 10
Anisotropic Gaussian field: correlation lengths η₁ = 0.5 and η₂ = 0.1, stochastic dimension 12
Isotropic Exponential field: correlation lengths η₁ = η₂ = 0.2, stochastic dimension 300
Anisotropic Exponential field: correlation lengths η₁ = 0.5 and η₂ = 0.1, stochastic dimension 350
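Such log-permeability fields can be sampled by factorizing the covariance matrix. The sketch below uses a Cholesky factor on a small grid with the exponential covariance (18.2); the thesis instead truncates the Karhunen-Loève expansion, i.e., an eigendecomposition of the same matrix. The parameter values are the ones quoted above, but the grid is shrunk for illustration:

```python
import math, random

def exp_cov(p, q, sigma2=2.0, eta1=0.2, eta2=0.2):
    # exponential covariance (18.2) between two points of the unit square
    return sigma2 * math.exp(-abs(p[0] - q[0]) / eta1 - abs(p[1] - q[1]) / eta2)

def cholesky(a, jitter=1e-9):
    # lower-triangular factor L with L L^T ~= A (small diagonal jitter)
    n = len(a)
    l = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(l[i][k] * l[j][k] for k in range(j))
            if i == j:
                l[i][j] = math.sqrt(a[i][i] + jitter - s)
            else:
                l[i][j] = (a[i][j] - s) / l[j][j]
    return l

def sample_log_normal_field(n=8, seed=1):
    # k = exp(Y), Y Gaussian with zero mean and covariance (18.2), n x n grid
    pts = [(i / (n - 1), j / (n - 1)) for i in range(n) for j in range(n)]
    c = [[exp_cov(p, q) for q in pts] for p in pts]
    l = cholesky(c)
    rng = random.Random(seed)
    xi = [rng.gauss(0.0, 1.0) for _ in pts]
    y = [sum(l[i][k] * xi[k] for k in range(i + 1)) for i in range(len(pts))]
    return [math.exp(v) for v in y]

field = sample_log_normal_field()
```

Truncating after the N_KL largest eigenpairs instead of using the full factor is what yields the stochastic dimensions listed above; the faster the eigenvalue decay, the fewer terms are needed.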
We apply the MLMC approach described above. To build the multiscale basis, we generate N_L independent realizations of the permeability, and for these realizations we solve (15.10). The multiscale basis functions are not recomputed during the simulations, i.e., they are computed at time zero. We choose the realizations for the precomputed multiscale basis functions randomly, and we use POD to find a best local basis. We compare the MLMC accuracy of the saturation with the accuracy of standard MC at level L with the same amount of costs; therefore, we choose
M = Σ_{l=1}^{L} N_l² M_l / N_L².
As reference Sref , we use the arithmetic mean of Mref samples of the saturation solved with
a multiscale velocity. For this velocity the multiscale basis is recomputed for each permeability
realization, such that we use exactly the same realization as in the pressure equation. We consider
the square root of the arithmetic mean of the squared relative L² errors; e.g., for MLMC we consider

MLMC error = ( (1/N_b) Σ_{j=1}^{N_b} ‖S_ref − S_MLMC^j‖²_{L²} / ‖S_ref‖²_{L²} )^{1/2},   (18.3)
where S_MLMC^j denotes the MLMC approximation of the expectation for a given set of permeability realizations K_off^j = (k_1^j, …, k_{N_L}^j) used to compute the multiscale basis functions (cf. (15.10)) and different realizations K_on^j = (k_1^j, …, k_{M_1}^j) used to solve the flow and transport equations (cf. (15.6)-(15.7)) for the saturation. In the following Section 19 we present our results for single-phase flow on the fine grid, and in Section 20 we consider two-phase flow on the coarse grid. This part is implemented in Matlab ([2]). The implementation is based on the code of Aarnes, where the solver for the saturation equation (15.7), the mixed MsFEM method and the ensemble level approach are implemented (cf. [3, 4, 6]). A description of the main part of the code can be found in [6, 45].
19.1. Comparison of the NLSO and the LSO approach for single-phase ow
In this section we study the dierences of the two ensemble level mixed MsFEMs, the inuence
of the boundary conditions using local or limited global information and the inuence of using
proper orthogonal decomposition to determine a best basis with respect to the velocity.
We show that methods using local boundary conditions have a residual error no matter how
many basis functions we pick.
The global boundary conditions are

g_α(k_j) = v_j · n_{ii′} / ∫_{Γ_α} v_j · n_{ii′} ds,

as defined in (15.11). In our case the local boundary conditions depend on the permeability realization in the cells adjacent to the edge, i.e.,

g_α(k_j) = G_j / ∫_{Γ_α} G_j ds

with the harmonic average

G_j(x) = 2 / ( 1/k_j(x − n_{ii′} h) + 1/k_j(x + n_{ii′} h) ).
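The local boundary condition is thus a harmonic average of the permeability across the edge, normalized to unit total flux. A sketch with a hypothetical discretized edge and midpoint quadrature (illustrative names only):

```python
def harmonic_avg(k_minus, k_plus):
    # G_j(x) = 2 / (1/k_j(x - n h) + 1/k_j(x + n h))
    return 2.0 / (1.0 / k_minus + 1.0 / k_plus)

def local_bc(k_minus_cells, k_plus_cells):
    """g_alpha = G_j / int_Gamma G_j ds, evaluated with the midpoint rule on
    an edge of unit length split into len(k_minus_cells) segments."""
    g = [harmonic_avg(a, b) for a, b in zip(k_minus_cells, k_plus_cells)]
    integral = sum(g) / len(g)  # |Gamma| = 1
    return [v / integral for v in g]

# two-segment edge: high-contrast cells on one side, homogeneous on the other
g = local_bc([1.0, 1.0], [3.0, 1.0])
print(g)
```

The normalization makes the prescribed coarse flux through the edge equal to one, while the harmonic average concentrates the flux where the permeability on both sides is high.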
Isotropic Gaussian, saturation errors (L², percent):

                             gNLSO               gLSO                lNLSO               lLSO
                          no POD    POD      no POD    POD       no POD    POD      no POD    POD
‖S_ref − S‖,  6 basis     2.0154   1.3175   16.2749  16.2132     6.3233   6.3794   16.1875  16.1169
‖S_ref − S‖, 12 basis     0.3265   0.2431   16.0556  16.0887     1.9055   1.7348   16.1615  16.1972
‖S_ref,loc − S‖,  6      16.2392  16.2149    6.6100   1.8751    15.0033  16.0174    2.6582   0.7492
‖S_ref,loc − S‖, 12      16.1794  16.2162    3.1272   1.4486    15.3998  16.0938    1.4822   0.3746

Table 7: Mean saturation errors (in percent) of 100 realizations of the isotropic Gaussian distribution for the different methods and boundary conditions for the single-phase flow example, with 6 and 12 basis functions; ‖S_ref − S_ref,loc‖_{L²} = 16.2200.
Isotropic Gaussian, velocity errors (L², percent):

                             gNLSO               gLSO                lNLSO               lLSO
                          no POD    POD      no POD    POD       no POD    POD      no POD    POD
‖v_ref − v‖,  6 basis     0.1440   0.0944    1.2821   1.3367     0.6470   0.5881    1.3707   1.3628
‖v_ref − v‖, 12 basis     0.0195   0.0176    1.3528   1.3603     0.1918   0.1707    1.3661   1.3669
‖v_ref,loc − v‖,  6       1.3792   1.3671    0.6375   0.2041     1.3806   1.3686    0.2070   0.0610
‖v_ref,loc − v‖, 12       1.3719   1.3670    0.3101   0.1406     1.3745   1.3743    0.1041   0.0290

Table 8: Mean velocity errors (in percent) of 100 realizations of the isotropic Gaussian distribution for the different methods and boundary conditions for the single-phase flow example, with 6 and 12 basis functions; ‖v_ref − v_ref,loc‖_{L²} = 1.3669.
Isotropic Exponential
Sref S
gNLSO
no POD
POD
19.7784 17.7025
15.5608 12.9127
Sref loc S
L2
lNLSO
no POD
POD
24.2868 22.6637
20.0685 17.7077
lLSO
no POD
POD
31.7842 31.3074
31.1171 31.0616
25.9793
26.8417
L2
gLSO
no POD
POD
32.1457 31.8803
31.3271 30.8887
22.5036
18.5179
23.8879
24.5747
22.5670
16.1490
27.3023
27.7851
18.6939
15.6346
23.7598
25.5206
16.0498
12.9676
30.3272
L2
Table 9: Mean saturation errors (in percent) of 100 realizations of the isotropic Exponential distribution for the different methods and boundary conditions for the single-phase flow example, with 6 and 12 basis functions.
Isotropic Exponential
vref v
gNLSO
no POD POD
2.8652
2.2179
2.1377
1.5891
L2
vref loc v
3.8502
3.6400
L2
L2
3.6259
3.5168
gLSO
no POD POD
3.2436
3.1131
3.1698
3.0914
2.9904
2.5808
2.6848
2.2091
lNLSO
no POD POD
3.6413
3.6586
3.2082
2.8214
4.1118
3.9426
4.1260
3.8258
lLSO
no POD POD
3.5080
3.5085
3.4817
3.4847
2.6649
2.0581
2.0330
1.5502
3.4769
Table 10: Mean velocity errors (in percent) of 100 realizations of the isotropic Exponential distribution for the different methods and boundary conditions for the single-phase flow example, with 6 and 12 basis functions.
Anisotropic Gaussian
Sref S
gNLSO
no POD
POD
4.4798
3.7621
1.4889
1.0872
Sref loc S
L2
lNLSO
no POD
POD
10.6921
9.9636
5.9529
5.4429
27.4733
27.6664
L2
gLSO
no POD
POD
28.0264 27.3662
27.3692 27.3779
12.3253
4.2140
25.1565
26.6987
27.5175
27.6771
6.2222
3.7559
25.3781
26.8279
lLSO
no POD
POD
27.5760 27.6249
27.6672 27.6798
5.0260
2.0322
2.9356
1.1568
27.7428
L2
Table 11: Mean saturation errors (in percent) of 100 realizations of the anisotropic Gaussian distribution for the different methods and boundary conditions for the single-phase flow example, with 6 and 12 basis functions.
Anisotropic Gaussian
vref v
gNLSO
no POD POD
0.4539
0.3828
0.1232
0.0931
L2
vref loc v
2.3963
2.3536
L2
2.3707
2.3529
gLSO
no POD POD
1.9906
2.2211
2.2799
2.2914
1.3507
0.6093
0.8364
0.5135
lNLSO
no POD POD
1.3898
1.2797
0.7342
0.5897
1.3898
0.7342
1.2797
0.5897
lLSO
no POD POD
2.3359
2.3476
2.3553
2.3527
2.3359
2.3553
2.3476
2.3527
2.3518
L2
Table 12: Mean velocity errors (in percent) of 100 realizations of the anisotropic Gaussian distribution for the different methods and boundary conditions for the single-phase flow example, with 6 and 12 basis functions.
Anisotropic Exponential
Sref S
gNLSO
no POD
POD
20.7988 19.7001
16.1815 14.0968
Sref loc S
L2
lNLSO
no POD
POD
26.3559 25.0294
21.4738 20.5792
lLSO
no POD
POD
35.6875 33.4379
33.9996 33.2703
29.3508
30.1783
L2
gLSO
no POD
POD
35.6866 34.7461
33.6023 33.1095
25.1490
19.8822
25.0723
27.1934
25.2204
17.5943
30.0347
31.0464
21.6110
16.7638
25.5648
27.4769
16.2809
13.0112
33.8186
L2
Table 13: Mean saturation errors (in percent) of 100 realizations of the anisotropic Exponential distribution for the different methods and boundary conditions for the single-phase flow example, with 6 and 12 basis functions.
Anisotropic Exponential
vref v
gNLSO
no POD POD
3.5702
2.8821
2.8207
2.2679
L2
vref loc v
4.3484
4.1108
L2
L2
3.9808
4.0028
gLSO
no POD POD
3.7861
3.6260
3.6015
3.5865
3.1189
2.6538
2.7728
2.3411
lNLSO
no POD POD
4.5283
4.5108
3.9577
3.8736
4.5251
4.3364
4.5562
4.3447
lLSO
no POD POD
4.0864
4.0154
3.9667
3.9984
2.9389
2.2073
2.1638
1.6922
4.0063
Table 14: Mean velocity errors (in percent) of 100 realizations of the anisotropic Exponential distribution for the different methods and boundary conditions for the single-phase flow example, with 6 and 12 basis functions.
global reference is of the same size as the error between the global reference and the local one, independent of the underlying distribution. In both approaches global boundary conditions give a better approximation of the fine-scale solution (at least if we choose the number of precomputed basis functions large enough). For the velocities the behavior of the mean errors is comparable, but the approximations are much more accurate. Furthermore, the approximation of the fine-scale velocity is more accurate with global boundary conditions independent of the number of basis functions, even for the LSO approach. For this reason we only consider global boundary conditions in the following. In Figure 35 we show the water saturations with global boundary conditions for one sample of the permeability for the considered distributions and all methods.
(cf. Section 17) is fulfilled for the NLSO and the LSO approach. Here S denotes the saturation with the basis calculated for the permeability realization which is used in the pressure equation and S_l the saturation with a precomputed basis with N_l permeability realizations. With these δ_l's, it is possible to find appropriate choices of realizations M_l at each level, namely

    M_l = C (1/ε_L²) (std(S) + δ_1²),   l = 1,
    M_l = C (1/ε_L²) δ_l²,              2 ≤ l ≤ L.
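The resulting choice of level sample sizes can be sketched as follows; the constant C, the tolerance ε_L, and the standard deviation are illustrative placeholder values, not values from the experiments.

```python
# Sketch: MLMC sample sizes per level, following
#   M_1 = C (std(S) + delta_1^2) / eps_L^2,  M_l = C delta_l^2 / eps_L^2 (2 <= l <= L).
# C, eps_L and std_S are illustrative, not taken from the experiments.

def level_sample_sizes(deltas, std_S, C=1.0, eps_L=0.1):
    sizes = [max(1, round(C * (std_S + deltas[0] ** 2) / eps_L ** 2))]
    for d in deltas[1:]:
        sizes.append(max(1, round(C * d ** 2 / eps_L ** 2)))
    return sizes

# e.g. the anisotropic Gaussian ratios (5.2, 2.6, 1) reported in the text
print(level_sample_sizes([5.2, 2.6, 1.0], std_S=0.3, C=0.01, eps_L=0.1))
```

As intended by the formula, more samples are placed on the coarser levels, where the ratios δ_l are larger.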
To determine the convergence rate, we choose N = (3, 6, 12) and calculate the mean over M = 100 permeability realizations as follows. For 10 sets of permeability realizations we compute the multiscale basis, and each of these sets we use to compute the error for 10 different permeability realizations. We compute the arithmetic mean of the 10 × 10 numbers. In Table 15 we summarize the error ratios δ_l = ||S_l − S||_{L2} / ||S_L − S||_{L2}.
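The 10 × 10 averaging described above can be sketched as follows; the solver calls are placeholders (random numbers stand in for the actual error ratios of each sample).

```python
# Sketch of the 10 x 10 mean: 10 offline sets of realizations, each giving a
# multiscale basis, and each basis tested on 10 fresh online realizations.
import random

random.seed(0)

def mean_error(n_offline=10, n_online=10):
    errors = []
    for i in range(n_offline):
        # placeholder for computing the multiscale basis of offline set i
        for j in range(n_online):
            # placeholder for one online error ||S_l - S|| / ||S_L - S||
            errors.append(random.random())
    return sum(errors) / len(errors)

print(mean_error())  # arithmetic mean of the 100 numbers
```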
For the NLSO approach the resulting δ_l's depend on the underlying distribution. If we precompute the multiscale basis functions with randomly chosen realizations, we get for the isotropic Gaussian distribution for the different levels the ratios (13.5, 5.1, 1) and for the anisotropic Gaussian (5.2, 2.6, 1). For the two Exponential distributions the ratios are almost the same, namely (1.6, 1.3, 1). For POD the resulting ratios are comparable. Since the δ_l's do not decrease fast, we choose larger N_l's in Section 19.3, where we combine the ensemble level mixed MsFEMs with MLMC.
For the LSO approach all ratios are close to one, i.e., the errors are almost independent of the number of used precomputed basis functions. However, since we observed in the previous section that the LSO approach does not converge to the fine-scale saturation, we could not expect anything else. Nevertheless, we choose the same numbers of precomputed basis functions N_l, 1 ≤ l ≤ L, as in the NLSO approach in Section 19.3.
Figure 35: One realization of the water saturation for global boundary conditions for the different methods with 12 precomputed basis functions.
Table 15: Convergence of the single-phase flow example for the different methods and distributions with N = (3, 6, 12).
the other methods. If not otherwise stated we use N_b = 20. For the online computations we choose the number of samples of the permeability independent of the underlying distribution at each level as M = (70, 20, 10). The dimension of the approximations (defined as the number of independent samples selected to construct the multiscale space, N_l at level l) for the Exponential distributions is eight times larger than for the Gaussian tests. For the proper orthogonal decomposition examples we precompute multiscale basis functions for a large number N_POD compared to N_L and determine the used N_l basis functions with POD. More precisely, we use N_POD = 100 for underlying Gaussian distributions and N_POD = 500 for the Exponential cases. As mentioned before, we equate the computational costs and compare the resulting relative errors for the MLMC and MC approaches. With this choice of realizations for the MLMC method, we get for MC with equated costs M = 20, where M is the number of permeability realizations needed for forward simulations.
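The multilevel estimator behind these numbers can be sketched as follows, with M = (70, 20, 10) samples per level as above; the level function G is a toy stand-in for the actual forward simulation at level l.

```python
# Sketch of the MLMC estimator
#   E^L(G_L) = E_{M_1}[G_1] + sum_{l=2}^L E_{M_l}[G_l - G_{l-1}],
# with M = (70, 20, 10). G is a toy level-l approximation, not the flow solver.
import random

random.seed(1)

def G(level, sample):
    # accuracy of the toy quantity of interest improves with the level
    return sample + 0.5 ** level * random.gauss(0.0, 1.0)

def mlmc_estimate(M=(70, 20, 10)):
    samples = [[random.random() for _ in range(m)] for m in M]
    est = sum(G(1, s) for s in samples[0]) / M[0]        # coarsest level
    for l in range(1, len(M)):                           # correction terms
        est += sum(G(l + 1, s) - G(l, s) for s in samples[l]) / M[l]
    return est

print(mlmc_estimate())
```

A plain MC estimate with equated costs would average G(3, ·) over only M = 20 samples.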
                                isotropic     anisotropic   isotropic      anisotropic
                                Gaussian      Gaussian      Exponential    Exponential
(N1, N2, N3)                    (3, 6, 12)    (3, 6, 12)    (24, 48, 96)   (24, 48, 96)
(M1, M2, M3)                    (70, 20, 10)  (70, 20, 10)  (70, 20, 10)   (70, 20, 10)
N                               12            12            96             96
M                               20            20            20             20
M_MCref                         500           500           500            500
N_POD                           100           100           500            500
MLMC error NLSO, Nb = 100       0.0769        0.0853        0.0952         0.0879
MC error NLSO, Nb = 100         0.1386        0.1282        0.1418         0.1321
MC error / MLMC error           1.80          1.50          1.49           1.50
MLMC error NLSO                 0.0758        0.0847        0.0934         0.0889
MC error NLSO                   0.1439        0.1222        0.1389         0.1343
MC error / MLMC error           1.90          1.44          1.49           1.51
MLMC error LSO                  0.1101        0.1189        0.1781         0.1711
MC error LSO                    0.3027        0.4244        0.2112         0.1839
MC error / MLMC error           2.75          3.57          1.19           1.07
MLMC error POD NLSO             0.0789        0.0824        0.0947         0.0864
MC error POD NLSO               0.1332        0.1343        0.1532         0.1348
MC error / MLMC error           1.67          1.63          1.62           1.56
MLMC error POD LSO              0.1020        0.1228        0.1819         0.1757
MC error POD LSO                0.1462        0.1447        0.2024         0.2029
MC error / MLMC error           1.40          1.18          1.11           1.15

Table 16: Relative errors of the MLMC and the MC approach with equated costs for the different methods and distributions for the single-phase flow example.
for the Gaussian ones. For N_b = 20 the errors do not change significantly, so it makes sense to reduce the computational time and use N_b = 20 for the other cases.
For all considered combinations (NLSO and LSO, with or without POD, for the different distributions) we increase the accuracy with the help of MLMC in comparison to MC at the largest level with equated costs. As expected, the errors for the NLSO method are smaller than the errors for the LSO approach, independent of the underlying distributions. For instance, for the isotropic Exponential distribution we have for MLMC an error of 9% for the NLSO case and 18% otherwise (cf. Table 16). This coincides with our results in Section 19.1. However, note that the LSO approach provides good results although the ratios are close to one (cf. Section 19.2).
There is no significant influence of using POD, but it seems to decrease the influence of the underlying distribution. In particular, without POD the ratio of the MC and the MLMC error is between 1.44 and 1.90 for NLSO. For LSO it lies in the interval [1.07, 3.57]. If we use POD we get [1.56, 1.67] for NLSO and [1.11, 1.40] for LSO.
The resulting mean water saturations for the different covariance functions (isotropic and anisotropic Gaussian, isotropic and anisotropic Exponential) and the different methods (MLMC, MC, reference) for the considered ensemble level mixed MsFEMs are illustrated in Figures 36-39. In contrast to the LSO method, one observes no differences between MLMC, MC and the reference in the NLSO approach (cf. Fig. 36, 38). In the figures corresponding to the LSO approach (Fig. 37, 39) one can see the coarse grid due to the local boundary conditions in the online stage. Furthermore, one can observe differences between the MLMC and the MC approach, e.g., in Fig. 37(a). The error of the MC approach for the Gaussian distributions is significantly smaller if one uses POD in the case of LSO (cf. Table 16). This can be seen in the corresponding Figure 39 as well.
If one is interested in the expected value of the water saturation at the producer only, the MLMC approach increases the accuracy in comparison to the MC approach with equated costs. The improvement of MLMC is approximately the same, but the errors are much smaller. For instance, we get an error of 0.6% with the MLMC approach and one of 1.3% for MC in the case of the NLSO method with POD for an isotropic Gaussian distribution. For the LSO approach the corresponding errors are 0.7% and 1.2% for MLMC and MC, respectively.
Figure 36: Water saturation for NLSO for MLMC and MC for the different distributions for single-phase flow, Nb = 100.
Figure 37: Water saturation for LSO for MLMC and MC for the different methods and distributions for single-phase flow.
Figure 38: Water saturation for NLSO with POD for MLMC and MC for the different methods and distributions for single-phase flow.
Figure 39: Water saturation for LSO with POD for MLMC and MC for the different methods and distributions for single-phase flow.
equation on the 5 × 5 coarse grid with mixed MsFEM for each time step.
20.1. Comparison of the NLSO and the LSO approach for two-phase flow

Analogously to the single-phase flow case, we study the differences of the two ensemble level mixed MsFEMs, the influence of boundary conditions using local or limited global information, and the influence of using proper orthogonal decomposition to determine a best basis with respect to the velocity. Again we observe that methods using local boundary conditions have a residual error no matter how many basis functions we pick.
In this case we consider the multiscale saturations S_ref^j and S_refloc^j as references, where the basis is calculated for the realization k_j with global boundary conditions or with local boundary conditions, respectively, as well as the corresponding velocities. As before, we compute the mean L2-errors for 6 and 12 multiscale basis functions. For the POD we precompute 100 multiscale functions and use the first 6, 12 functions as basis.
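The POD selection can be sketched with a snapshot SVD; the snapshot matrix below is random data standing in for the 100 precomputed velocity basis functions.

```python
# Sketch of POD: stack the precomputed basis functions as columns, take the
# SVD, and keep the leading N modes (the best N-dimensional subspace in L2).
import numpy as np

rng = np.random.default_rng(0)
snapshots = rng.standard_normal((200, 100))   # stand-in for 100 precomputed functions

def pod_basis(snapshots, n_modes):
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :n_modes]

V6, V12 = pod_basis(snapshots, 6), pod_basis(snapshots, 12)
print(V6.shape, V12.shape)
```

The returned modes are orthonormal, so they can be used directly as a reduced basis.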
Table 17: Mean saturation errors (percent) of 100 realizations of the isotropic Gaussian distribution for the different methods and boundary conditions for the two-phase flow example, 6, 12 basis functions.
Table 18: Mean velocity errors (percent) of 100 realizations of the isotropic Gaussian distribution for the different methods and boundary conditions for the two-phase flow example, 6, 12 basis functions.
We have summarized the results in Tables 17-24. Again, note that we solve the online problems of the LSO method with local boundary conditions. Similar to the single-phase flow case, we observe an additional error due to the online problem in the LSO method. Again the NLSO approach seems to be a good approximation of the solution with
Table 19: Mean saturation errors (percent) of 100 realizations of the isotropic Exponential distribution for the different methods and boundary conditions for the two-phase flow example, 6, 12 basis functions.
Table 20: Mean velocity errors (percent) of 100 realizations of the isotropic Exponential distribution for the different methods and boundary conditions for the two-phase flow example, 6, 12 basis functions.
Table 21: Mean saturation errors (percent) of 100 realizations of the anisotropic Gaussian distribution for the different methods and boundary conditions for the two-phase flow example, 6, 12 basis functions.
Table 22: Mean velocity errors (percent) of 100 realizations of the anisotropic Gaussian distribution for the different methods and boundary conditions for the two-phase flow example, 6, 12 basis functions.
Table 23: Mean saturation errors (percent) of 100 realizations of the anisotropic Exponential distribution for the different methods and boundary conditions for the two-phase flow example, 6, 12 basis functions.
Table 24: Mean velocity errors (percent) of 100 realizations of the anisotropic Exponential distribution for the different methods and boundary conditions for the two-phase flow example, 6, 12 basis functions.
global boundary conditions, while the solution of the LSO method is closer to the local reference. The errors in the LSO method are of the same size as the error between the reference solutions using local or global boundary conditions, i.e., the LSO approach does not remove the residual error. We will consider global boundary conditions only, since in both approaches global boundary conditions give a better approximation of the solution using global information if the number of precomputed basis functions is large enough. For the velocities the behavior of the mean errors is comparable, but the approximations are much more accurate. In Figure 40 we show the water saturations with global boundary conditions for one sample of the permeability for the considered distributions for all methods.
(cf. Section 17) is fulfilled for the NLSO and the LSO approach. We choose the same parameters as in the single-phase case, namely N = (3, 6, 12) and M = 100 = 10 × 10.
In the two-phase flow case the ratios are almost independent of the underlying distribution. For the NLSO approach the ratios for the Gaussian distributions are slightly larger than the Exponential ones if we do not apply POD. With POD the ratios of the isotropic Gaussian distribution decrease in comparison to the corresponding ratios without POD. This is reasonable because the errors for the smallest level, N_1 = 3, are significantly smaller due to POD.
Again the LSO method does not give appropriate δ_l's. In many cases the ratios are even smaller than one. That means the assumptions for our error estimate in Section 17 are not valid. Since the LSO saturation does not converge to the coarse reference saturation (cf. Section 20.1), we choose the number of precomputed basis functions as for the NLSO approach.
Figure 40: One realization of the water saturation for global boundary conditions for the different methods for two-phase flow with 12 basis functions.
Table 25: Convergence of the two-phase flow example for the different methods and distributions with N = (3, 6, 12).
Figure 41: Water saturation for NLSO for MLMC and MC for the different methods and distributions for two-phase flow, Nb = 100.
Figure 42: Water saturation for LSO for MLMC and MC for the different methods and distributions for two-phase flow.
Figure 43: Water saturation for NLSO with POD for MLMC and MC for the different methods and distributions for two-phase flow.
Figure 44: Water saturation for LSO with POD for MLMC and MC for the different methods and distributions for two-phase flow.
                                isotropic     anisotropic   isotropic      anisotropic
                                Gaussian      Gaussian      Exponential    Exponential
(N1, N2, N3)                    (3, 6, 12)    (3, 6, 12)    (24, 48, 96)   (24, 48, 96)
(M1, M2, M3)                    (70, 20, 10)  (70, 20, 10)  (70, 20, 10)   (70, 20, 10)
N                               12            12            96             96
M                               20            20            20             20
M_MCref                         500           500           500            500
N_POD                           100           100           500            500
MLMC error NLSO, Nb = 100       0.0566        0.0557        0.0537         0.0529
MC error NLSO, Nb = 100         0.0970        0.0889        0.0943         0.0840
MC error / MLMC error           1.71          1.60          1.76           1.59
MLMC error NLSO                 0.0522        0.0497        0.0529         0.0543
MC error NLSO                   0.0989        0.0849        0.0947         0.0898
MC error / MLMC error           1.90          1.71          1.79           1.65
MLMC error LSO                  0.0448        0.0854        0.0581         0.1161
MC error LSO                    0.0839        0.1056        0.1039         0.1274
MC error / MLMC error           1.72          1.24          1.79           1.10
MLMC error POD NLSO             0.0479        0.0563        0.0568         0.0583
MC error POD NLSO               0.0884        0.0937        0.0972         0.0911
MC error / MLMC error           1.85          1.67          1.71           1.56
MLMC error POD LSO              0.0565        0.0843        0.0652         0.1299
MC error POD LSO                0.0992        0.1083        0.0952         0.1524
MC error / MLMC error           1.76          1.29          1.46           1.17

Table 26: Relative errors of the MLMC and the MC approach with equated costs for the different methods and distributions for the two-phase flow example.
21. Conclusions

In this work we consider multiscale problems with stochastic coefficients. We combine multiscale methods for deterministic problems, such as mixed multiscale finite elements and homogenization, with stochastic methods, such as multi-level Monte Carlo methods and Karhunen-Loève expansions, to increase the accuracy in comparison to a standard approach, the Monte Carlo method. Our objective is to rapidly compute the expectation of macroscale quantities, such as macroscale solutions, homogenized coefficients, or functionals of these quantities.
Part I and Part II are devoted to the study of numerical homogenization with different stochastic methods. We consider elliptic stationary diffusion equations with stochastic coefficients which vary on the macroscale and the microscale.
In Part I we decouple the high-dimensional local problems with the help of the Karhunen-Loève expansion and a polynomial chaos approach. The gain in the computational work depends on the underlying distribution of the coefficient. In general, we cannot state that our developed approach is more appropriate than Monte Carlo. Additionally, we introduce a method to speed up the approximation of the eigenpairs of the covariance operator needed for the Karhunen-Loève expansion. This method is based on low-rank approximations of the matrix. Since the Karhunen-Loève expansion is widely used to approximate random fields, the applicability of the low-rank approximation approach is not restricted to this part of the work.
In Part II we combine numerical homogenization with multi-level Monte Carlo methods. We consider different levels of coarse-grid meshes and representative volumes. We combine the results from a few expensive computations that involve the smallest coarse meshes and largest representative volumes with many less expensive computations with larger coarse meshes and smaller representative volume sizes. The larger the coarse mesh or the smaller the representative volume size, the more computations are used. We show that by selecting the number of realizations at each level carefully we can achieve a speed-up in the computations. For the computation of homogenized solutions, we propose a weighted multi-level Monte Carlo method where the weights are chosen at each level such that the accuracy at a given cost is optimized.
In Part III we consider multi-phase flow and transport equations. Here we combine multiscale finite element methods and multi-level Monte Carlo techniques to speed up Monte Carlo simulations. In particular, we consider no-local-solve-online (NLSO) and local-solve-online (LSO) ensemble level mixed multiscale finite element approaches. We precompute multiscale basis functions for a few realizations of the random field. An ensemble of these basis functions is used to solve the multi-phase flow equation for an arbitrary realization. We show that NLSO provides better accuracy, since the LSO approach has to use local boundary conditions for the computation of the online multiscale basis.
Different sizes of the ensemble are related to different levels in the multi-level Monte Carlo approach. The use of larger ensembles yields more accurate solutions. We run more accurate (and expensive) forward simulations with fewer samples, while less accurate (and inexpensive) forward simulations are run with a larger number of samples. Selecting the number of expensive and inexpensive simulations carefully, one can show that multi-level Monte Carlo can provide better accuracy at the same cost than Monte Carlo.
In this work we presented different methods to deal with the stochastic nature of the problems in combination with different approaches to handle the many scales. However, this gives only a small glimpse into the topic. A natural next step would be, for example, to consider a random mobility in the two-phase flow problem. In Part III we have chosen the permeability realizations of the offline stage randomly or with proper orthogonal decomposition. Since we do POD with respect to the velocity, the influence on the saturation is of interest. Techniques as in reduced basis methods to select the samples and other POD approaches need to be studied. Of course, we have not considered every possible combination of multiscale approaches and stochastic methods. Another interesting case is to combine the multiscale methods with collocation methods.
A. Notation

Domains:
D : bounded domain, D ⊂ R^d
∂D : smooth boundary of D
∂D_D : boundary part with Dirichlet boundary conditions, ∂D_D ⊂ ∂D
∂D_N : boundary part with Neumann boundary conditions, ∂D_N ⊂ ∂D
Y : periodicity cell, Y = {y = (y_1, ..., y_d) : 0 < y_i < 1 for i = 1, ..., d}
Y^ε : RVE, Y^ε = (0, ε)^d
Y_x^ε : RVE, Y_x^ε = (x − ε/2, x + ε/2)^d

Spaces:
L^p(D) : L^p space
L^p(Ω, B) : Bochner space with a Banach space B
L^p_loc(R^d, L^p(Ω)) : u ∈ L^p(K, L^p(Ω)) for every compact K ⊂ R^d
L^p_unif(R^d, L^p(Ω)) : ||u||_{L^p(B_x, L^p(Ω))} ≤ C, x ∈ R^d, with C independent of x and the unit ball B_x with center x
C^0(D) : space of continuous functions
C^1(D) : space of continuously differentiable functions
C^t(D) : Hölder space, 0 < t < 1
H^k(D) : Sobolev space, k ∈ N
H^r(D) : r ∉ N, u ∈ H^k(D) such that |u|²_{H^r(D)} < ∞, r = k + t, k ∈ N, 0 < t < 1
H_0^p(D) : subspace of H^p(D) with zero boundary values, H_0^p(D) ⊂ H^p(D)
H^{−p}(D) : dual space of the Sobolev space H_0^p(D)
H(div, D) : H(div, D) = {u : u ∈ (L²(D))^d, div u ∈ L²(D)}
H_0(div, D) : H(div, D) with zero Neumann boundary
Norms:
||u||_{L^p(D)} = (∫_D |u(x)|^p dx)^{1/p} for p < ∞, esssup_{x∈D} |u(x)| for p = ∞
||u||_{L^p(Ω,B)} = (∫_Ω ||u||_B^p dP(ω))^{1/p} for p < ∞, esssup_ω ||u||_B for p = ∞
||u||_{C^0(D)} = sup_{x∈D} |u(x)|
||u||_{C^1(D)} = Σ_{|α|≤1} ||D^α u||_{C^0(D)}
|u|_{C^t(D)} = sup_{x,y∈D, x≠y} |u(x) − u(y)| / |x − y|^t
||u||_{C^t(D)} = ||u||_{C^0(D)} + |u|_{C^t(D)}
|u|_{H^k(D)} = (∫_D Σ_{|α|=k} |D^α u|² dx)^{1/2}, semi-norm
||u||_{H^k(D)} = (∫_D Σ_{|α|≤k} |D^α u|² dx)^{1/2}
|u|_{H^r(D)} = (∫_D ∫_D Σ_{|α|=k} [D^α u(x) − D^α u(y)]² / |x − y|^{d+2t} dx dy)^{1/2}, semi-norm
||u||_{H^r(D)} = (||u||²_{H^k(D)} + |u|²_{H^r(D)})^{1/2}
|||u||| : norm in which the MLMC errors are measured
|x| = (Σ_{i=1}^d x_i²)^{1/2}, Euclidean norm of x ∈ R^d
Stochastics:
(Ω, F, P) : probability space
E[G] : expected value of a random variable G, E[G] = ∫_Ω G(ω) dP(ω)
E_K : expected value of the coefficient K, E_K(y) = E[K(y)]
μ : expected value of a Gaussian distributed random variable
μ_log : expected value of a lognormal distribution, μ_log = exp(μ + σ²/2)
Cov(G_1, G_2) : covariance of two random variables G_1 and G_2, Cov(G_1, G_2) = E[(G_1 − E[G_1])(G_2 − E[G_2])]
Var(G) : variance of a random variable G, Var(G) = Cov(G, G)
std(G) : standard deviation of a random variable G, std(G) = √Var(G)
σ : standard deviation of the normal distribution
σ_log : standard deviation of the lognormal distribution, σ²_log = μ²_log (exp(σ²) − 1)
cov(y, y') : covariance function of the coefficient K, cov(y, y') = ∫_Ω K(y, ω) K(y', ω) dP(ω) − E_K(y) E_K(y') = Cov(K(y, ·), K(y', ·))
cov_G : Gaussian covariance, cov_G(y, y') = σ² exp(−|y − y'|² / (2η²))
cov_log : covariance of the lognormal distribution, cov_log(y, y') = μ²_log (exp(cov_G(y, y')) − 1)
η : correlation length
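As an illustration of the covariance notation, a discrete Gaussian covariance matrix and its KL eigenpairs can be assembled as follows; σ and the correlation length η are illustrative values.

```python
# Sketch: Gaussian covariance cov_G(y, y') = sigma^2 exp(-|y - y'|^2 / (2 eta^2))
# on a 1D grid, and the discrete Karhunen-Loeve eigenpairs of the resulting matrix.
import numpy as np

sigma, eta = 1.0, 0.2                  # illustrative values
y = np.linspace(0.0, 1.0, 50)
C = sigma ** 2 * np.exp(-(y[:, None] - y[None, :]) ** 2 / (2 * eta ** 2))

lam, phi = np.linalg.eigh(C)           # eigh returns ascending order
lam, phi = lam[::-1], phi[:, ::-1]     # KL uses the largest eigenvalues first
print(lam[:3])                         # fast decay: few KL modes capture the field
```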
Homogenization:
K : microscale coefficient
K* : exact effective coefficient, [K*]_ij = lim_{m→∞} (2m)^{−d} ∫_{|y|≤m} (e_i + ∇χ_i(y, ω))^T K(y, ω) e_j dy
χ_i : solution of the cell problem
K*_N, K*_{MC,n} : approximations of the effective coefficient

KL:
K^KL_M(y, ω) : Karhunen-Loève expansion of K(y, ω), (5.3), K^KL_M(y, ω) = E_K(y) + Σ_{m=1}^M √λ_m φ_m(y) X_m(ω)
K^KL_M(y, z) : polynomial chaos expansion, (5.7), K^KL_M(y, z) = E_K(y) + Σ_{m=1}^M √λ_m φ_m(y) z_m, coefficient of the cell problems (5.8) and (5.9)
K^{KL,j}_M(y) : K^{KL,j}_M(y) = E_K(y) + Σ_{m=1}^M √λ_m φ_m(y) ζ_{j_m}, (5.13), coefficient in the decoupled Karhunen-Loève cell problem (5.12)
K*^KL_det : K*^KL_det = Σ_{j≤r} ∫_Y ∇χ̃^KL_{l,j} K^{KL,j}_M(y) ∇χ̃^KL_{k,j} dy
K*^KL(z) : effective coefficient of (5.8), e_l^T K*^KL(z) e_k = ∫_Y ∇χ̃^KL_l K^KL_M(y, z) ∇χ̃^KL_k dy
K*^{KL,j} : effective coefficient of (5.22), (5.23)
χ^KL_i(y, z) : solution of the Karhunen-Loève cell problem (5.8)
χ̃^KL_i(y, z) : solution of the Karhunen-Loève cell problem (5.9), χ̃^KL_i = χ^KL_i − y_i
χ^KL_{i,j} : solution of the decoupled Karhunen-Loève cell problem (5.12)
χ̃^KL_{i,j} : χ̃^KL_{i,j}(y) = χ^KL_{i,j}(y) + y · e_i, (5.21)
χ̂^KL_{i,j} : solution of the modified decoupled Karhunen-Loève cell problem (5.22)
Eigenproblem:
X_m(ω) : random variables of the Karhunen-Loève expansion
(λ_m, φ_m)_{1≤m} : eigenpairs of the covariance operator
(λ^h_m, φ^h_m)_{1≤m≤N} : discrete eigenpairs, φ^h_m ∈ S^n_h
φ̃_m : φ̃_m = M^{1/2} φ_m
K : (K)_ij = ∫_Y ∫_Y cov(y, y') φ_j(y') φ_i(y) dy dy'
K̃ : K̃ = M^{−1/2} K M^{−1/2}
M : mass matrix, (M)_ij = ∫_Y φ_j(y) φ_i(y) dy
A, B : matrices of the discrete eigenproblem of the symmetric bilinear form

Decoupling:
P_r : space of polynomials of degree at most r, P_r = span{1, t, t², ..., t^r} ⊂ L²(I)
r : multi-index, r = (r_1, r_2, ..., r_M) ∈ N_0^M, |r| = Σ r_m
{ζ_j}_{0≤j≤r_m} : collocation points of the decoupling
P^r_j : projections P^{r_m}_j used in the decoupling
f^KL_{i,j} : right-hand side of the decoupled Karhunen-Loève cell problem, (5.16)
134
$\bar{E}_K = \int_Y E_K(y) \, dy$

H-matrices:
$T$, $T_L$, $T_{I \times J}$ : cluster trees and block cluster tree
$t$ : node of the tree, with label $\hat{t} = m(t) \subseteq I$, $t \in N$
$L(T) := \{t \in N : \mathrm{sons}(t) = \emptyset\}$ : set of leaves
$\mathrm{sons}(t)$ : set of descendants of a node $t \in N$, eq. (5.25)
$s$ : rank of the low-rank approximation of $\mathsf{K}$, eq. (5.29)
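The low-rank approximation underlying the H-matrix blocks can be illustrated with a truncated SVD, which gives the best rank-$s$ approximation (the text uses ACA instead; the kernel and parameters below are illustrative assumptions):

```python
import numpy as np

# Smooth Gaussian covariance kernel evaluated on a grid (illustrative values).
y = np.linspace(0.0, 1.0, 80)
K = np.exp(-(y[:, None] - y[None, :])**2 / (2 * 0.2**2))

def low_rank(K, s):
    """Best rank-s approximation of K via truncated SVD (stand-in for ACA)."""
    U, sv, Vt = np.linalg.svd(K)
    return (U[:, :s] * sv[:s]) @ Vt[:s]

# Relative approximation error for increasing rank; for a smooth kernel the
# singular values decay rapidly, so few terms suffice.
err = [np.linalg.norm(K - low_rank(K, s)) / np.linalg.norm(K) for s in (2, 5, 10)]
```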
Finite Volume:
$C_i$ : cell $i$ of the grid
$\gamma_{ij}$ : edge between the cells $C_i$ and $C_j$
$A$, $F$, $U$ : system matrix, right-hand side, and vector of unknowns of the discrete system
$V_i$ : volume of cell $C_i$
$v_{ij}$ : flux across the edge $\gamma_{ij}$
$x_i$ : midpoint of cell $C_i$
$x_{ij}$ : midpoint of the edge $\gamma_{ij}$
$x_i^{ul}$, $x_i^{ur}$, $x_i^{bl}$, $x_i^{br}$ : upper-left, upper-right, bottom-left, and bottom-right corner of cell $C_i$
$h_j$, $h_{ij}$ : mesh sizes
$N_i$ : set of neighboring cells of $C_i$
$B_i^D$, $B_i^N$, $B_i$ : Dirichlet, Neumann, and remaining boundary edges of cell $C_i$
$K(\gamma_{ij})$ : value of the coefficient on the edge $\gamma_{ij}$
$n_x$ : number of cells in the $x$-direction
$K(x_i)$, $K(x_j)$ : values of the coefficient in the adjacent cells
MLMC (numerical homogenization):
$L$ : level of interest
$l$ : level, $1 \le l \le L$
$\eta_l$ : RVE size at level $l$, $\eta_1 < \eta_2 < \cdots < \eta_L$
$\eta$ : RVE size used for MC, normally $\eta = \eta_L$
$H$, $h_l$ : coarse mesh size and mesh size at level $l$
$m_l$, $m_{l_j}$, $m$
$M_l$ : number of realizations at level $l$, $M = (M_1, \ldots, M_L)$
$M^{\mathrm{ref}}$ : number of realizations for the reference solution
$P_l$
$G_l$, $G_l^i$ : quantity of interest at level $l$ and its $i$-th realization
$E_{M_l}(G_l)$ : arithmetic mean, $E_{M_l}(G_l) = \frac{1}{M_l} \sum_{i=1}^{M_l} G_l^i$
$E^L(G_L)$ : MLMC approximation of $E[G_L]$
$E^L_{\mathrm{same}}(G_L)$, $E^L_{\mathrm{ind}}(G_L)$ : MLMC approximations computed with the same and with independent realizations across levels
$e_{\mathrm{MLMC}}(G_L) = \big(E\big[\,|||E[G_L] - E^L(G_L)|||^2\,\big]\big)^{1/2}$, (10.2)
$e^{\mathrm{ref}}_{\mathrm{MLMC}}(G_L) = e_{\mathrm{MLMC}}(G_L) \,/\, |||E[G_L]|||$
$e^{\mathrm{same}}_{\mathrm{MLMC}}(G_L)$ : error $e_{\mathrm{MLMC}}(G_L)$ computed with $E^L_{\mathrm{same}}(G_L)$
$e^{\mathrm{ind}}_{\mathrm{MLMC}}(G_L) = \big(E\big[\,|||E[G_L] - E^L_{\mathrm{ind}}(G_L)|||^2\,\big]\big)^{1/2}$
$e_{\mathrm{MC}}(G_L)$ : corresponding MC error, with $e^{\mathrm{ref}}_{\mathrm{MC}}(G_L) = e_{\mathrm{MC}}(G_L) \,/\, |||E[G_L]|||$
$Er_{\mathrm{MLMC}} = \frac{1}{N_b} \sum_{j=1}^{N_b} \big[e^{\mathrm{rel}}_{\mathrm{MLMC}}(K_L^*(\omega_j))\big]^2$
$Er_{\mathrm{MC}} = \frac{1}{N_b} \sum_{j=1}^{N_b} \big[e^{\mathrm{rel}}_{\mathrm{MC}}(K_L^*(\omega_j))\big]^2$
$K_l^*$, $K_{l,h_j}^*$, $K_{l,H_j}^*$ : effective coefficients at level $l$ (computed with mesh sizes $h_j$, $H_j$)
$K^*_{\mathrm{ref}}$ : reference effective coefficient
$E\big[\,|||K^* - K_l^*|||^2\,\big] \le C \ldots$, (10.3)
weighted MLMC approximation with weights $w_l$, $l = 1, \ldots, L$
$e^i_{\mathrm{MLMC}}(u_L)$, $e^i_{\mathrm{MC}}(u_L)$ : componentwise MLMC and MC errors for $u_L$
$N_l$
$W^{\mathrm{MLMC}}_{\mathrm{RVE}}$, $W^{\mathrm{MC}}_{\mathrm{RVE}}$, $W^{\mathrm{MLMC}}_{\mathrm{coarse}}$, $W^{\mathrm{MC}}_{\mathrm{coarse}}$ : computational work of MLMC and MC for the RVE and the coarse problems
$\mathrm{Cor}K$, $\mathrm{Cor}_{m_l}(K_l^*)$, $\mathrm{Cor}_L(K_L^*)$ : correlations of the effective coefficients
$u$, $\bar{u}_l$, $u_{j, H_i}$ : solutions of the coarse problems
$E^L(u)$ : MLMC approximation of $E[u]$
$E^{M^{\mathrm{ref}}, L} = \frac{1}{L} \sum_{l=1}^{L} \frac{1}{M^{\mathrm{ref}}} \sum_{k=1}^{M^{\mathrm{ref}}} u_l^k$
relative error $\|E^{M^{\mathrm{ref}}, L}(u_L) - E^{i, L}(u_L)\|_{L^2(D)} \,/\, \|E^{M^{\mathrm{ref}}, L}(u_L)\|_{L^2(D)}$
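The MLMC combination $E^L(G_L) = E_{M_1}(G_1) + \sum_{l>1} E_{M_l}(G_l - G_{l-1})$ can be sketched on a toy quantity of interest (the level hierarchy and sample counts below are illustrative assumptions, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

def G(level, x):
    # Level-l approximation of the toy quantity of interest g(x) = x^2;
    # the 2^-level term mimics a discretization bias (illustrative assumption).
    return x**2 + 2.0**(-level)

def mlmc_estimate(M):
    """E^L(G_L) = mean(G_0) + sum_{l >= 1} mean(G_l - G_{l-1})."""
    est = 0.0
    for l, Ml in enumerate(M):
        x = rng.standard_normal(Ml)                # fresh samples for this level
        if l == 0:
            est += np.mean(G(0, x))                # coarsest level
        else:
            est += np.mean(G(l, x) - G(l - 1, x))  # coupled difference: same x
    return est

estimate = mlmc_estimate([4000, 100, 100])  # M_1 > M_2 > M_3, as in the text
```

Because both levels of each correction term see the same samples `x`, the differences have small variance and few samples suffice on the fine levels, which is the point of the method.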
$S$ : water saturation
$k$ : absolute permeability
$k_{\min}$ : coercivity constant
$k_{\max}$ : boundedness constant
$k_{r\alpha}$ : relative permeability of phase $\alpha$
$p_\alpha$ : pressure of phase $\alpha$
$p_c$ : capillary pressure, $p_c = p_o - p_w$
$p$ : pressure, solution of eq. (15.6)
$\rho_\alpha$ : density of phase $\alpha$
$\mu_\alpha$ : viscosity of phase $\alpha$
$q_\alpha$ : source ($q_\alpha > 0$) or sink ($q_\alpha < 0$) of phase $\alpha$
$q$ : total source, $q = q_w + q_o$
$G$ : gravitational pull-down force
$g$ : gravitational constant
$v_\alpha$ : phase velocity, $v_\alpha = -\frac{k_{r\alpha}}{\mu_\alpha} \, k \, (\nabla p_\alpha - \rho_\alpha G)$, eq. (15.2)
$v$ : total velocity, $v = v_w + v_o$
$t$ : time, $t \in [0, T]$
$\lambda_\alpha$ : phase mobility, $\lambda_\alpha = \frac{k_{r\alpha}}{\mu_\alpha}$
$\lambda$ : total mobility, $\lambda = \lambda_w + \lambda_o$
$f$ : flux term, $f = \frac{\lambda_w}{\lambda}$
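The mobilities and the flux (fractional flow) term combine as follows; the quadratic relative-permeability curves and viscosity values are illustrative assumptions, not taken from the text:

```python
def fractional_flow(S, mu_w=1.0, mu_o=5.0):
    """f(S) = lambda_w / (lambda_w + lambda_o) with k_rw = S^2, k_ro = (1-S)^2."""
    lam_w = S**2 / mu_w          # water mobility  lambda_w = k_rw / mu_w
    lam_o = (1.0 - S)**2 / mu_o  # oil mobility    lambda_o = k_ro / mu_o
    return lam_w / (lam_w + lam_o)
```

By construction $f(0) = 0$ and $f(1) = 1$, and $f$ increases monotonically in the water saturation $S$.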
Mixed MsFEM:
$V_h$, $V_h^0$ : velocity approximation spaces
$V_h(k_j)$, $V_h^0(k_j)$ : velocity approximation spaces for the permeability realization $k_j$
$V_h^{\mathrm{LSO}}$ : velocity approximation space of the local-solve-online (LSO) method
$Q_h$ : pressure approximation space
$D_i$ : coarse-grid block $i$
$N$, $N_l$ : numbers of permeability realizations (at level $l$)
$I$, $|I|$ : index set and its cardinality
$n_{ii'}$ : unit normal on the edge between the cells $D_i$ and $D_{i'}$
$g$ : boundary data
$v_j$, $k_j$ : velocity and permeability realization $j$
$K^j_{\mathrm{off}}$, $K^j_{\mathrm{on}}$ : offline and online sets of permeability realizations
$\psi_{w,k}$, $\psi_{\cdot,k}$ : multiscale basis functions
Saturation scheme:
$t^k$ : $k$-th point in time
$\Delta t$ : time-step size
$S^k(x)$ : saturation at time $t^k$
$S_i(t)$ : saturation in the cell $D_i$
$S_i^k$ : saturation in the cell $D_i$ at time $t^k$
$Q_i$ : source term in the cell $D_i$
$k(x) = k(x, \omega)$ : permeability realization
$\psi_{\cdot,k} = k(x) \, \psi_{w,k}$
$f(S)_{\gamma_{ii'}}$ : upwind flux, $f(S)_{\gamma_{ii'}} = f(S_{i'})$ if $v \cdot n_{ii'} < 0$, and $f(S_i)$ otherwise
$v_{\gamma_{ii'}}$ : flux of the velocity across the edge $\gamma_{ii'}$
$H(S)$ : explicit update operator, $H(S)_i = S_i^k - \frac{\Delta t}{|D_i|} \sum_{\gamma_{ii'}} f(S)_{\gamma_{ii'}} \, v_{\gamma_{ii'}} + \Delta t \, Q_i$, $1 \le i \le N$
$D_H(S)$
$|D_i|$ : volume of the cell $D_i$

MLMC:
$L$ : level of interest
$l$ : level, $1 \le l \le L$
$M_l$ : number of realizations used at level $l$ with $N_l$ permeability realizations to compute the velocity approximation space, $M_1 > M_2 > \cdots > M_L$
$M$ : number of realizations used for MC with $N_L$ permeability realizations
$M^{\mathrm{ref}}$ : number of realizations used to calculate a reference saturation
$\mathbf{M} = (M_1, M_2, \ldots, M_L)$
$S_l$ : water saturation at level $l$, i.e., $N_l$ permeability realizations are used to compute the velocity approximation space
$S_{l,m}$ : realization of the water saturation at level $l$, $S_{l,m} = S_l(x, t, \omega_m)$, $1 \le m \le M_l$
$S^{\mathrm{ref}}$ : reference saturation, arithmetic mean over $M^{\mathrm{ref}}$ realizations
$E^L(S_L)$ : MLMC approximation of $E[S_L]$, $E^L(S_L) = \sum_{l=1}^{L} E_{M_l}(S_l - S_{l-1})$
$S^j_{\mathrm{MLMC}}$ : MLMC approximation of the expectation for given sets of permeability realizations $K^j_{\mathrm{off}}$ and $K^j_{\mathrm{on}}$, $E^L(S_L; K^j_{\mathrm{off}}, K^j_{\mathrm{on}})$
$W_{\mathrm{MLMC}}$, $W_{\mathrm{MC}}$ : computational work of MLMC and MC
convergence rate of the saturation with respect to $N_l$
MLMC error $= \frac{1}{N_b} \sum_{j=1}^{N_b} \|S^{\mathrm{ref}} - S^j_{\mathrm{MLMC}}\|_{L^2}^2 \,/\, \|S^{\mathrm{ref}}\|_{L^2}^2$
$Y(x, \omega) = \log[k(x, \omega)]$
$H_j$ : coarse mesh size
$\eta_i$ : correlation length in direction $i$
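A simplified 1D version of the explicit upwind saturation update can be sketched as follows; the fractional-flow curve, velocity, step sizes, and inflow saturation are illustrative assumptions, not the exact scheme of the text:

```python
import numpy as np

def f(S):
    """Fractional flow f(S) = S^2 / (S^2 + (1 - S)^2) (illustrative choice)."""
    return S**2 / (S**2 + (1.0 - S)**2)

def upwind_step(S, v=1.0, dt=0.2, dx=1.0, S_in=1.0):
    """One explicit upwind step S_i <- S_i - dt/dx (F_out - F_in) for v > 0,
    with interface flux F = f(S_upstream) * v and inflow saturation S_in."""
    F = f(np.concatenate(([S_in], S))) * v  # F[i] = flux entering cell i
    return S - dt / dx * (F[1:] - F[:-1])

S0 = np.zeros(10)     # initially no water in the domain
S1 = upwind_step(S0)  # water enters through the left boundary
```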
B. Tables

[Table: upscaled coefficient $K^*$ (2x2 matrix) and number of points $N$ in each direction for the ansatz KL and MC, $\sigma = 0.0001, 0.001, 0.01, 0.1$; diagonal entries of $K^*$ near 0.973, off-diagonal entries between $10^{-9}$ and $10^{-5}$.]
[Table: upscaled coefficient $K^*$ and number of points $N$ in each direction for the ansatz KL1, KL2, MC1, MC2, $\sigma = 0.0001, 0.001, 0.01, 0.1$; diagonal entries of $K^*$ near 1.]
[Table: upscaled coefficient $K^*$ and number of points $N$ in each direction for the ansatz KL1, KL2, MC1, MC2, $\sigma = 0.0001, 0.001, 0.01, 0.1$; exact coefficient $K = \mathrm{diag}(2, 2)$, diagonal entries of $K^*$ near 2.]
[Table: upscaled coefficient $K^*$ and number of points $N$ in each direction for the ansatz KL1, KL2, MC1, MC2, $\sigma = 0.0001, 0.001, 0.01, 0.1$; diagonal entries of $K^*$ near 1.]
[Table: upscaled coefficient $K^*$ and number of points $N$ in each direction for the ansatz KL1, KL2, MC1, MC2, $\sigma = 0.0001, 0.001, 0.01, 0.1$; exact coefficient $K = \mathrm{diag}(2, 2)$, diagonal entries of $K^*$ near 2.]
[Table 32: upscaled lognormal coefficient $K^*$ and number of points $N$ in each direction for the ansatz KL1, KL2, MC1, MC2, $\sigma = 0.0001, 0.001, 0.01, 0.1$, with the row-wise expected values $E_{\log} = 1, 1, 1.00005, 1.00494$.]

Table 32: Upscaled lognormal coefficient with expected value $E_{\log} = \exp(\sigma^2/2)$ and $\eta = 1$.
[Table 33: upscaled lognormal coefficient $K^*$ and number of points $N$ in each direction for the ansatz KL1, KL2, MC1, MC2, $\sigma = 0.0001, 0.001, 0.01, 0.1$, with the row-wise expected values $E_{\log} = 1, 1, 1.00005, 1.00494$.]

Table 33: Upscaled lognormal coefficient with expected value $E_{\log} = \exp(\sigma^2/2)$ and $\eta = 0.5$.
[Table: upscaled coefficient $K^*$ and number of points $N$ in each direction for the ansatz KL1, KL2, MC1, MC2, $\sigma = 0.0001, 0.001, 0.01, 0.1$; diagonal entries of $K^*$ near 1.]
[Table: upscaled coefficient $K^*$ and number of points $N$ in each direction for the ansatz KL1, KL2, MC1, MC2, $\sigma = 0.0001, 0.001, 0.01, 0.1$; diagonal entries of $K^*$ near 1.]
[Table: upscaled coefficient $K^*$ and number of points $N$ in each direction for the ansatz KL1, KL2, MC1, MC2, $\sigma = 0.0001, 0.001, 0.01, 0.1$; exact coefficient $K = \mathrm{diag}(2, 2)$, diagonal entries of $K^*$ near 2.]
ACA \ m     10      20      40      80
10^-2       0.18    0.56    0.57    1.18    3.01
10^-4       0.36    0.36    1.07    2.23    2.64
10^-6       0.23    0.35    0.69    1.45    2.74
10^-8       0.36    0.65    1.15    2.22    3.91
10^-10      0.35    0.59    1.03    3.10    3.65
            1.65    1.98    3.01    5.91    12.58

Table 37: Required time in seconds to calculate m eigenpairs for the weak admissibility condition and leaf size 1.
ACA \ m     10      20      40      80
10^-2       0.05    0.20    0.28    0.48    1.66
10^-4       0.13    0.19    0.65    1.26    1.66
10^-6       0.13    0.20    0.57    0.67    1.70
10^-8       0.16    0.24    0.70    0.80    1.96
10^-10      0.31    0.30    0.61    0.99    2.27
            1.65    1.98    3.01    5.91    12.58

Table 38: Required time in seconds to calculate m eigenpairs for the weak admissibility condition and leaf size 4.
ACA \ m     10      20      40      80
10^-2       1.86    10.93   7.57    12.4    17.84
10^-4       2.01    2.65    6.18    31.42   24.78
10^-6       2.11    4.58    8.71    17.04   27.07
10^-8       2.1     2.99    5.77    11.48   18.23
10^-10      2.01    2.82    5.48    10.76   17.18
            1.65    1.98    3.01    5.91    12.58

Table 39: Required time in seconds to calculate m eigenpairs for the standard admissibility condition and leaf size 1.
ACA \ m     10      20      40      80
10^-2       0.73    3.27    3.20    6.40    9.21
10^-4       0.89    1.27    3.59    14.49   21.24
10^-6       0.75    1.08    2.08    3.32    8.33
10^-8       0.97    1.16    2.24    3.55    7.40
10^-10      1.00    1.30    2.49    3.99    8.16
            1.65    1.98    3.01    5.91    12.58

Table 40: Required time in seconds to calculate m eigenpairs for the standard admissibility condition and leaf size 4.
References
[1] ARPACK. Website. http://www.caam.rice.edu/software/ARPACK/.
[2] Matlab. Website. http://www.mathworks.com/products/matlab/.
[3] J.E. Aarnes. On the use of a mixed multiscale finite element method for greater flexibility and increased speed or improved accuracy in reservoir simulation. SIAM MMS, 2:421-439, 2004.
[4] J.E. Aarnes and Y. Efendiev. Mixed multiscale finite element methods for stochastic porous media flows. SIAM J. Sci. Comput., 30(5):2319-2339, 2008.
[5] J.E. Aarnes, Y. Efendiev, and L. Jiang. Analysis of multiscale finite element methods using global information for two-phase flow simulations. SIAM MMS, 7(2):655-676, 2007.
[6] J.E. Aarnes, T. Gimse, and K.-A. Lie. An introduction to the numerics of flow in porous media using Matlab. In Geometric Modelling, Numerical Simulation, and Optimization, pages 265-306. 2007.
[7] J.E. Aarnes, S. Krogstad, and K.-A. Lie. A hierarchical multiscale method for two-phase flow based upon mixed finite elements and nonuniform coarse grids. Multiscale Model. Simul., 5(2):337-363, 2006.
[8] A. Anantharaman, R. Costaouec, C. Le Bris, F. Legoll, and F. Thomines. Introduction to numerical stochastic homogenization and the related computational challenges: some recent developments. In W. Bao and Q. Du, editors, Multiscale Modeling and Analysis for Material Simulation, volume 22, pages 197-272, National University of Singapore, 2011. Institute for Mathematical Sciences, Lecture Notes Series.
[9] G. Bal. Homogenization in random media and effective medium theory for high frequency waves. Discrete and Continuous Dynamical Systems, Ser. B, 8(2):473-492, 2007.
[10] A. Barth, C. Schwab, and N. Zollinger. Multi-level Monte Carlo Finite Element method for elliptic PDEs with stochastic coefficients. Numerische Mathematik, 119(1):123-161, 2011.
[11] P. Bastian, M. Blatt, A. Dedner, C. Engwer, R. Klöfkorn, R. Kornhuber, M. Ohlberger, and O. Sander. A generic grid interface for parallel and adaptive scientific computing. II: Implementation and tests in DUNE. Computing, 82(2-3):121-138, 2008.
[12] P. Bastian, M. Blatt, A. Dedner, C. Engwer, R. Klöfkorn, M. Ohlberger, and O. Sander. A generic grid interface for parallel and adaptive scientific computing. I: Abstract framework. Computing, 82(2-3):103-119, 2008.
[13] X. Blanc, R. Costaouec, C. Le Bris, and F. Legoll. Variance reduction in stochastic homogenization using antithetic variables. Markov Processes and Related Fields, 18(1):31-66, 2012.
[14] M. Blatt and P. Bastian. The iterative solver template library. In B. Kågström, E. Elmroth, J. Dongarra, and J. Wasniewski, editors, Applied Parallel Computing. State of the Art in Scientific Computing, number 4699 in Lecture Notes in Computer Science, pages 666-675. Springer, 2007.
[26] U. Hornung, editor. Homogenization and Porous Media. Interdisciplinary Applied Mathematics 6. Springer, New York, NY, 1997.
[27] Y. Efendiev. The multiscale finite element method (MsFEM) and its applications. PhD thesis, California Institute of Technology, 1999.
[28] Y. Efendiev, J. Galvis, and F. Thomines. A systematic coarse-scale model reduction technique for parameter-dependent flows in highly heterogeneous media and its applications. Multiscale Model. Sim., (submitted), 2011.
[29] Y. Efendiev and T.Y. Hou. Multiscale finite element methods. Theory and applications. Springer, 2009.
[30] Y. Efendiev, O. Iliev, and C. Kronsbein. Multi-level Monte Carlo methods using ensemble level mixed MsFEM for two-phase flow and transport simulations. Submitted to Comp. Geosci., 2013.
[31] Y. Efendiev, F. Legoll, and C. Kronsbein. Multi-level Monte Carlo approaches for numerical homogenization. Submitted to SIAM MMS (preliminary version available at http://arxiv.org/abs/1301.2798), 2013.
[32] G.S. Fishman. Monte Carlo: Concepts, algorithms, and applications. Springer Series in Operations Research. Springer-Verlag, New York, 1996.
[33] P. Frauenfelder, C. Schwab, and R.A. Todor. Finite elements for elliptic problems with stochastic coefficients. Comput. Methods Appl. Mech. Eng., 194(2-5):205-228, 2005.
[34] M. Giles. Improved multilevel Monte Carlo convergence using the Milstein scheme. In A. Keller et al., editors, Monte Carlo and quasi-Monte Carlo methods 2006, pages 343-358. Springer, Berlin, 2008.
[35] M. Giles. Multilevel Monte Carlo path simulation. Operations Research, 56(3):607-617, 2008.
[36] A. Gloria and F. Otto. An optimal variance estimate in stochastic homogenization of discrete elliptic equations. Ann. Probab., 39(3):779-856, 2011.
[37] W. Hackbusch, L. Grasedyck, and S. Börm. An introduction to hierarchical matrices. Math. Bohem., 127(2):229-241, 2002.
[38] S. Heinrich. Multilevel Monte Carlo methods. In S. Margenov, J. Wasniewski, and P. Yalamov, editors, Large-scale scientific computing, volume 2179 of Lecture Notes in Computer Science, pages 58-67. Springer, Berlin, 2001.
[39] R. Helmig. Multiphase Flow and Transport Processes in the Subsurface: A Contribution to the Modeling of Hydrosystems. Springer-Verlag, Berlin, Heidelberg, 1997.
[40] T.Y. Hou and X.H. Wu. A multiscale finite element method for elliptic problems in composite materials and porous media. J. Comput. Phys., 134(1):169-189, 1997.
[41] O. Iliev and I. Rybak. On numerical upscaling for flows in heterogeneous porous media. Comput. Methods Appl. Math., 8(1):60-67, 2008.
[42] V.V. Jikov, S.M. Kozlov, and O.A. Oleinik. Homogenization of differential operators and integral functionals. Springer-Verlag, 1994.
[43] T. Kanit, S. Forest, I. Galliet, V. Mounoury, and D. Jeulin. Determination of the size of the representative volume element for random composites: Statistical and numerical approach. International Journal of Solids and Structures, 40(13-14):3647-3679, 2003.
[44] B.N. Khoromskij, A. Litvinenko, and H.G. Matthies. Application of hierarchical matrices for computing the Karhunen-Loève expansion. Computing, 84(1-2):49-67, 2009.
[45] K.-A. Lie, S. Krogstad, I. Ligaarden, J. Natvig, H. Nilsen, and B. Skaflestad. Open-source MATLAB implementation of consistent discretisations on complex grids. Comput. Geosci., 16(2):297-322, 2012.
[46] M. Loève. Probability theory II. 4th ed. Graduate Texts in Mathematics 46. Springer-Verlag, New York - Heidelberg - Berlin, 1978.
[47] X. Ma and N. Zabaras. A stochastic mixed finite element heterogeneous multiscale method for flow in porous media. J. Comput. Phys., 230(12):4696-4722, 2011.
[48] J. Natvig and K.-A. Lie. Fast computation of multiphase flow in porous media by implicit discontinuous Galerkin schemes with optimal ordering of elements.
[49] H. Owhadi and L. Zhang. Metric-based upscaling. Communications on Pure and Applied Mathematics, 60(5):675-723, 2007.
[50] G. Papanicolaou and S. Varadhan. Boundary value problems with rapidly oscillating random coefficients. In Random fields. Rigorous results in statistical mechanics and quantum field theory, Esztergom 1979, Colloq. Math. Soc. Janos Bolyai 27, pages 835-873, 1981.
[51] A.T. Patera and G. Rozza. Reduced Basis Approximation and A Posteriori Error Estimation for Parameterized Partial Differential Equations, volume 1.0. To appear in (tentative rubric) MIT Pappalardo Graduate Monographs in Mechanical Engineering, 2006.
[52] D.V. Rovas, L. Machiels, and Y. Maday. Reduced-basis output bound methods for parabolic problems. IMA J. Numer. Anal., 26(3):423-445, 2006.
[53] X.H. Wu, Y. Efendiev, and T.Y. Hou. Analysis of upscaling absolute permeability. Discrete Contin. Dyn. Syst., Ser. B, 2(2):185-204, 2002.
[54] D. Xiu. Fast numerical methods for stochastic computations: A review. Commun. Comput. Phys., 5(2-4):242-272, 2009.
[55] D. Xiu and G.E. Karniadakis. Modeling uncertainty in steady state diffusion problems via generalized polynomial chaos. Comput. Methods Appl. Mech. Eng., 191(43):4927-4948, 2002.
[56] D. Xiu and G.E. Karniadakis. The Wiener-Askey polynomial chaos for stochastic differential equations. SIAM J. Sci. Comput., 24(2):619-644, 2002.
[57] V.V. Yurinskii. Averaging of symmetric diffusion in random medium. Sibirskii Mat. Zh., 27(4):167-180, 1986.
CURRICULUM VITAE
Cornelia Kronsbein
German:
1988-1992
1992-2001
2001-2008
2009-2012
English:
1988-1992
1992-2001
2001-2008
2009-2012