
Vector calculus notes

Andrea Moiola
19th November 2013, Version 3
These notes are meant to be a support for the vector calculus module (MA2VC/MA3VC)
taking place at the University of Reading in the Autumn term 2013. They are mainly based
on previous notes by Mark W. Matsen, Alex Lukyanov and Calvin J. Smith.
The present document does not substitute for the notes taken in class, where more
examples and proofs are provided and where results and definitions are discussed in greater
detail. Only some of the statements and formulas are proved in these notes; others will
be shown in class, and you can try to prove the remaining ones as an exercise.
Some of the figures in the text have been made with Matlab. The scripts used for
generating these plots are available on the Blackboard webpage of the module; most of them
can be run in Octave as well. You can use and modify them: playing with the different
graphical representations of scalar and vector fields is a good way to familiarise yourself with them.
The suggested textbook is [1]; we will cover only a part of the content of Chapters 10-16;
more precise references to single sections will be given in the text and in Table 3. This book
also contains numerous exercises (useful to prepare the exam) and interesting applications
of the theory. Several other good books on vector calculus and vector analysis are available
and you are encouraged to find the book that suits you best; see for example shelf 515.63
(and surroundings) in the University Library.
Warning 1: The paragraphs marked as Remark are addressed to the students
willing to deepen the subject and learn about closely related topics. Some of these remarks try
to relate the presented topic to the content of other modules (e.g., linear algebra or analysis).
These parts are not required for the exam.
Warning 2: These notes are not entirely rigorous; for example, we often assume that
fields are smooth enough without specifying this assumption in detail. Multidimensional
analysis is treated in a rigorous fashion in other modules, e.g. analysis in several variables.
On the other hand, the (formal) proofs of vector identities and of some theorems are a fundamental part of the lecture, and in the exam you will be asked to prove some simple
results. The purpose of this class is not only to learn how to compute integrals and solve
exercises!
Warning 3: For simplicity, here we only consider the 3-dimensional Euclidean space $\mathbb{R}^3$.
However, all the results involving neither the vector product nor the curl operator can be
generalised to Euclidean spaces $\mathbb{R}^N$ of any dimension $N \in \mathbb{N}$.
If you find any errors or typos, please let me know at a.moiola@reading.ac.uk, during
the office hours, or before or after the lectures.

References
[1] R.A. Adams, C. Essex, Calculus, a complete course, Pearson Canada, Toronto, 7th ed.,
2010. Library call number: FOLIO515-ADA.

1  Fields and vector differential operators

1.1  Review of vectors in 3-dimensional Euclidean space

We quickly recall some notions about vectors and vector operations known from previous modules; see Sections 10.2-10.3 of [1] and the Vectors and Matrices lecture notes (MA1VM).
Notation. In order to distinguish scalar from vector quantities, we denote vectors with boldface and a little arrow: $\vec u \in \mathbb{R}^3$. Note that several books use underlined ($\underline{u}$) symbols. We use the hat symbol ($\hat{\ }$) to denote unit vectors, i.e. vectors of length 1.

We fix three vectors $\hat\imath$, $\hat\jmath$ and $\hat k$ that constitute the canonical basis of $\mathbb{R}^3$. We always assume that the basis has a right-handed orientation, i.e. it is ordered according to the right-hand rule: closing the fingers of the right hand from the $\hat\imath$ direction to the $\hat\jmath$ direction, the thumb points towards the $\hat k$ direction.
Any three-dimensional vector $\vec u$ can be represented as¹

$$\vec u = u_1\hat\imath + u_2\hat\jmath + u_3\hat k,$$

where the scalars $u_1$, $u_2$ and $u_3 \in \mathbb{R}$ are the components, or coordinates, of $\vec u$. Sometimes it is convenient to write $\vec u$ in matrix notation as

$$\vec u = \begin{pmatrix} u_1 \\ u_2 \\ u_3 \end{pmatrix}.$$

Figure 1: The basis vectors $\hat\imath$, $\hat\jmath$, $\hat k$ and the components of the vector $\vec u$.
Definition 1.1. The magnitude or length of the vector $\vec u$ is defined as²

$$u := |\vec u| := \sqrt{u_1^2 + u_2^2 + u_3^2}. \tag{1}$$

The direction of a (non-zero) vector $\vec u$ is defined as

$$\hat u := \frac{\vec u}{|\vec u|}. \tag{2}$$

Every vector satisfies $\vec u = |\vec u|\,\hat u$. Therefore length and direction uniquely identify a vector. The vector of length 0 (i.e. $\vec 0 := 0\hat\imath + 0\hat\jmath + 0\hat k$) does not have a specified direction.

Note that, if we want to represent vectors with arrows, the point of application (where we draw the arrow) is not relevant: two arrows with the same direction and length represent the same vector; we can imagine that all the arrows have the application point in the origin $\vec 0$.
¹ Another common notation for vector components is $\vec u = u_x\hat\imath + u_y\hat\jmath + u_z\hat k$; we try to avoid it as it can be confused with partial derivatives.
² The notation A := B means "the object A is defined to be equal to the object B". A and B may be scalars, vectors, matrices, sets. . .


Example 1.2 (The position vector). The position vector

$$\vec r = x\hat\imath + y\hat\jmath + z\hat k$$

represents the position of a point in the three-dimensional Euclidean space relative to the origin. This is the only vector we do not denote as $\vec r = r_1\hat\imath + r_2\hat\jmath + r_3\hat k$.
Example 1.3 (The rotation vector). If a three-dimensional object is rotated, its rotation can be represented with the rotation vector, often denoted $\vec\theta$. The magnitude $|\vec\theta|$ represents the angle the object is rotated, measured in radians. The direction $\hat\theta$ represents the rotation axis, with the orientation given by the right-hand rule: closing the fingers in the direction of the rotation, the thumb points toward $\hat\theta$ (see Figure 2).

Figure 2: A rotation of angle $\theta$ around the axis $\hat n$ can be described by the vector $\vec\theta := \theta\hat n$ with magnitude $|\vec\theta| = \theta$ and direction $\hat\theta = \hat n$.
Many examples of vectors are provided by physics: velocity, acceleration, displacement,
force, momentum.
Remark 1.4 (Are vectors arrows, triples of numbers, or elements of a vector space?). There are several different definitions of vectors, and this fact may lead to some confusion. Vectors defined as geometric entities fully described by magnitude and direction are sometimes called Euclidean vectors or geometric vectors. Note that, even though a vector is represented as an arrow, in order to be able to sum any two vectors the position of the arrow (the application point) has no importance.

Often, three-dimensional vectors are intended as triples of real numbers (the components). This is equivalent to the previous geometric definition once a canonical basis $\{\hat\imath, \hat\jmath, \hat k\}$ is fixed. This approach is particularly helpful to manipulate vectors with a computer program (e.g. Matlab, Octave, Mathematica, Python. . . ).

Vectors are often rigorously defined as elements of an abstract vector space (those who attended the linear algebra class should be familiar with this concept). This is an extremely general and powerful definition which immediately allows some algebraic manipulations (sums and multiplications with scalars) but in general does not provide the notions of magnitude, unit vector and direction. If the considered vector space is real, finite-dimensional and provided with an inner product, then it is a Euclidean space (i.e., $\mathbb{R}^n$ for some natural number n). If a canonical basis is fixed, then elements of $\mathbb{R}^n$ can be represented as n-tuples of real numbers (i.e., ordered sets of n real numbers).
See http://en.wikipedia.org/wiki/Euclidean_vector for a comparison of different
definitions.
Several operations are possible with vectors. The most basic operations are the addition $\vec u + \vec w$ and the multiplication $\lambda\vec u$ of a vector with a scalar $\lambda \in \mathbb{R}$. These operations are available in any vector space. In the following we briefly recall the definitions and the main properties of the scalar product, the vector product and the triple product. For more properties, examples and exercises we refer to [1, 10.2-10.3].

Remark 1.5. The addition, the scalar multiplication and the scalar product are defined for Euclidean spaces of any dimension, while the vector product (thus also the triple product) is defined only in three dimensions.

1.1.1  Scalar product

Given two vectors $\vec u$ and $\vec w$, the scalar product (also called dot product or inner product) gives in output a scalar denoted $\vec u \cdot \vec w$:

$$\vec u \cdot \vec w := u_1 w_1 + u_2 w_2 + u_3 w_3 = |\vec u|\,|\vec w| \cos\theta, \tag{3}$$

where $\theta$ is the angle between $\vec u$ and $\vec w$, as in Figure 3. The scalar product is commutative (i.e. $\vec u \cdot \vec w = \vec w \cdot \vec u$), is distributive with respect to the addition (i.e. $\vec u \cdot (\vec v + \vec w) = \vec u \cdot \vec v + \vec u \cdot \vec w$) and can be used to compute the magnitude of a vector: $|\vec u| = \sqrt{\vec u \cdot \vec u}$. The scalar product can be used to evaluate the projection of $\vec u$ in the direction of $\vec w$:

$$\text{projection} = |\vec u| \cos\theta = \frac{\vec u \cdot \vec w}{|\vec w|} = \vec u \cdot \hat w,$$

and the angle between two (non-zero) vectors:

$$\theta = \arccos \frac{\vec u \cdot \vec w}{|\vec u|\,|\vec w|} = \arccos(\hat u \cdot \hat w).$$

The vector components can be computed as scalar products:

$$u_1 = \vec u \cdot \hat\imath, \qquad u_2 = \vec u \cdot \hat\jmath, \qquad u_3 = \vec u \cdot \hat k.$$

Two vectors $\vec u$ and $\vec w$ are orthogonal or perpendicular if their scalar product is zero: $\vec u \cdot \vec w = 0$; they are parallel if $\vec u = \alpha \vec w$ for some scalar $\alpha \neq 0$ (in physics, if $\alpha < 0$ they are sometimes called antiparallel).
Figure 3: The scalar product between $\vec u$ and $\vec w$: $|\vec u|\cos\theta$ is the projection of $\vec u$ onto the direction of $\vec w$.
Exercise 1.6. Compute the scalar product between the elements of the canonical basis: $\hat\imath \cdot \hat\imath$, $\hat\imath \cdot \hat\jmath$, $\hat\imath \cdot \hat k$, and so on.

Exercise 1.7 (Orthogonal decomposition). Given two non-zero vectors $\vec u$ and $\vec w$, prove that there exists a unique pair of vectors $\vec u_\perp$ and $\vec u_\parallel$ such that: (a) $\vec u_\perp$ is perpendicular to $\vec w$, (b) $\vec u_\parallel$ is parallel to $\vec w$ and (c) $\vec u = \vec u_\perp + \vec u_\parallel$.

Hint: in order to show the existence of $\vec u_\perp$ and $\vec u_\parallel$ you can proceed in two ways. (i) You can use the condition "$\vec u_\parallel$ is parallel to $\vec w$" to represent $\vec u_\perp$ and $\vec u_\parallel$ in dependence of a parameter $\alpha$, and use the condition "$\vec u_\perp$ is perpendicular to $\vec w$" to find an equation for $\alpha$ itself. (ii) You can use your geometric intuition to guess the expression of the two desired vectors and then verify that they satisfy all the required conditions. (Don't forget to prove the uniqueness.)
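A numerical sketch of the decomposition asked for in Exercise 1.7 (the function names are ours, and the formula $\vec u_\parallel = \frac{\vec u \cdot \vec w}{|\vec w|^2}\vec w$ is the guess suggested by approach (ii), so do not take this as a substitute for the proof):

```python
def dot(u, w):
    return sum(a * b for a, b in zip(u, w))

def orthogonal_decomposition(u, w):
    """Return (u_perp, u_par) with u = u_perp + u_par,
    u_par parallel to w and u_perp perpendicular to w."""
    alpha = dot(u, w) / dot(w, w)          # the parameter of approach (i)
    u_par = tuple(alpha * wi for wi in w)
    u_perp = tuple(ui - pi for ui, pi in zip(u, u_par))
    return u_perp, u_par

u, w = (1.0, 2.0, 3.0), (0.0, 0.0, 2.0)
u_perp, u_par = orthogonal_decomposition(u, w)
print(u_par)           # (0.0, 0.0, 3.0): the component along w
print(u_perp)          # (1.0, 2.0, 0.0)
print(dot(u_perp, w))  # 0.0: perpendicularity
```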
1.1.2  Vector product
Given two vectors $\vec u$ and $\vec w$, the vector product (or cross product) gives in output a vector denoted $\vec u \times \vec w$ (sometimes written $\vec u \wedge \vec w$ to avoid confusion with the letter x on the board):

$$\vec u \times \vec w := \begin{vmatrix} \hat\imath & \hat\jmath & \hat k \\ u_1 & u_2 & u_3 \\ w_1 & w_2 & w_3 \end{vmatrix}
= \hat\imath \begin{vmatrix} u_2 & u_3 \\ w_2 & w_3 \end{vmatrix}
- \hat\jmath \begin{vmatrix} u_1 & u_3 \\ w_1 & w_3 \end{vmatrix}
+ \hat k \begin{vmatrix} u_1 & u_2 \\ w_1 & w_2 \end{vmatrix}
= (u_2 w_3 - u_3 w_2)\hat\imath + (u_3 w_1 - u_1 w_3)\hat\jmath + (u_1 w_2 - u_2 w_1)\hat k, \tag{4}$$

where $|\cdot|$ denotes the matrix determinant. Note that the $3 \times 3$ determinant is a formal determinant, as the matrix contains three vectors and six scalars: it is only a short form for the middle expression containing three true $2 \times 2$ determinants.
The magnitude of the vector product

$$|\vec u \times \vec w| = |\vec u|\,|\vec w| \sin\theta$$

is equal to the area of the parallelogram defined by $\vec u$ and $\vec w$. Its direction is orthogonal to both $\vec u$ and $\vec w$, and the triad $\vec u$, $\vec w$, $\vec u \times \vec w$ is right-handed. The vector product is distributive with respect to the sum (i.e. $\vec u \times (\vec v + \vec w) = \vec u \times \vec v + \vec u \times \vec w$ and $(\vec v + \vec w) \times \vec u = \vec v \times \vec u + \vec w \times \vec u$) but in general is not associative: $\vec u \times (\vec v \times \vec w) \neq (\vec u \times \vec v) \times \vec w$.
Figure 4: The vector product $\vec u \times \vec w$ has magnitude equal to the grey area (the area of the parallelogram spanned by $\vec u$ and $\vec w$) and is orthogonal to the plane that contains them.
Exercise 1.8. Show that the elements of the standard basis satisfy the following identities:

$$\hat\imath \times \hat\jmath = \hat k, \quad \hat\jmath \times \hat k = \hat\imath, \quad \hat k \times \hat\imath = \hat\jmath, \quad
\hat\jmath \times \hat\imath = -\hat k, \quad \hat k \times \hat\jmath = -\hat\imath, \quad \hat\imath \times \hat k = -\hat\jmath, \quad
\hat\imath \times \hat\imath = \hat\jmath \times \hat\jmath = \hat k \times \hat k = \vec 0.$$

Exercise 1.9. Show that the vector product is anticommutative, i.e. $\vec w \times \vec u = -\vec u \times \vec w$ for all $\vec u$ and $\vec w$ in $\mathbb{R}^3$, and satisfies $\vec u \times \vec u = \vec 0$.

Exercise 1.10. Prove that the vector product is not associative by showing that $(\hat\imath \times \hat\jmath) \times \hat\jmath \neq \hat\imath \times (\hat\jmath \times \hat\jmath)$.
Exercise 1.11. Show that the following identity holds true:

$$\vec u \times (\vec v \times \vec w) = \vec v\,(\vec u \cdot \vec w) - \vec w\,(\vec u \cdot \vec v) \qquad \forall\, \vec u, \vec v, \vec w \in \mathbb{R}^3. \tag{5}$$

(Sometimes $\vec u \times (\vec v \times \vec w)$ is called triple vector product.)
Solution. We prove identity (5) componentwise, namely we verify that the three components on the left-hand side and the right-hand side of the identity agree with each other. Consider the first component:

$$\big[\vec u \times (\vec v \times \vec w)\big]_1
\overset{(4)}{=} u_2 [\vec v \times \vec w]_3 - u_3 [\vec v \times \vec w]_2
\overset{(4)}{=} u_2 (v_1 w_2 - v_2 w_1) - u_3 (v_3 w_1 - v_1 w_3)$$
$$= (u_2 w_2 + u_3 w_3) v_1 - (u_2 v_2 + u_3 v_3) w_1
= (\vec u \cdot \vec w - u_1 w_1) v_1 - (\vec u \cdot \vec v - u_1 v_1) w_1$$
$$= (\vec u \cdot \vec w) v_1 - (\vec u \cdot \vec v) w_1
= \big[(\vec u \cdot \vec w)\vec v - (\vec u \cdot \vec v)\vec w\big]_1.$$

To conclude one must verify that the analogous identity holds for the y- and the z-components.
Exercise 1.12. Show that the following identities hold true for all $\vec u, \vec v, \vec w, \vec p \in \mathbb{R}^3$:

$$\vec u \times (\vec v \times \vec w) + \vec v \times (\vec w \times \vec u) + \vec w \times (\vec u \times \vec v) = \vec 0 \qquad \text{(Jacobi identity)},$$
$$(\vec u \times \vec v) \cdot (\vec w \times \vec p) = (\vec u \cdot \vec w)(\vec v \cdot \vec p) - (\vec u \cdot \vec p)(\vec v \cdot \vec w) \qquad \text{(Binet-Cauchy identity)},$$
$$|\vec u \times \vec v|^2 + (\vec u \cdot \vec v)^2 = |\vec u|^2\,|\vec v|^2 \qquad \text{(Lagrange identity)}.$$

Hint: to prove the first one you can either proceed componentwise (long and boring!) or use identity (5). For the second identity you can expand both sides and collect some terms appropriately. The last identity will follow easily from the previous one.
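Before attempting the proofs, it can be reassuring to spot-check the identities numerically on one arbitrary triple of vectors. This Python sketch (helper names are ours) checks identity (5) and the Lagrange identity; a numerical check on one example is of course not a proof:

```python
def dot(u, w):
    return sum(a * b for a, b in zip(u, w))

def cross(u, w):
    return (u[1]*w[2] - u[2]*w[1], u[2]*w[0] - u[0]*w[2], u[0]*w[1] - u[1]*w[0])

def sub(u, w):
    return tuple(a - b for a, b in zip(u, w))

def scale(c, u):
    return tuple(c * a for a in u)

u, v, w = (1.0, 2.0, 3.0), (-1.0, 0.5, 2.0), (4.0, -2.0, 1.0)

# Identity (5): u x (v x w) = v (u.w) - w (u.v)
lhs = cross(u, cross(v, w))
rhs = sub(scale(dot(u, w), v), scale(dot(u, v), w))
print(lhs == rhs)  # True

# Lagrange identity: |u x v|^2 + (u.v)^2 = |u|^2 |v|^2
c = cross(u, v)
print(dot(c, c) + dot(u, v)**2 == dot(u, u) * dot(v, v))  # True
```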
Remark 1.13. The vector product is used in physics to compute the angular momentum of a moving object, the torque of a force, and the Lorentz force acting on a charge moving in a magnetic field.
1.1.3  Triple product
Given three vectors $\vec u$, $\vec v$ and $\vec w$, the triple product gives in output the scalar $\vec u \cdot (\vec v \times \vec w)$. Its absolute value is the volume of the parallelepiped P defined by the three vectors as in Figure 5. To see this, we define the unit vector orthogonal to the plane containing $\vec v$ and $\vec w$ as $\hat n := \frac{\vec v \times \vec w}{|\vec v \times \vec w|}$. Then

$$\text{Volume}(P) := (\text{area of base}) \times \text{height}
= |\vec v \times \vec w|\; |\vec u \cdot \hat n|
= |\vec v \times \vec w|\; \Big|\vec u \cdot \frac{\vec v \times \vec w}{|\vec v \times \vec w|}\Big|
= |\vec u \cdot (\vec v \times \vec w)|.$$

From the definition (4), we see that the triple product can be computed as the determinant of the matrix of the vector components:

$$\vec u \cdot (\vec v \times \vec w) = \vec u \cdot \begin{vmatrix} \hat\imath & \hat\jmath & \hat k \\ v_1 & v_2 & v_3 \\ w_1 & w_2 & w_3 \end{vmatrix} \tag{6}$$
$$= \begin{vmatrix} u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \\ w_1 & w_2 & w_3 \end{vmatrix} \tag{7}$$
$$= u_1 v_2 w_3 + u_2 v_3 w_1 + u_3 v_1 w_2 - u_1 v_3 w_2 - u_2 v_1 w_3 - u_3 v_2 w_1.$$

The triple product $\vec u \cdot (\vec v \times \vec w)$ is zero if the three vectors are linearly dependent, is positive if the triad $\vec u$, $\vec v$ and $\vec w$ is right-handed, and is negative otherwise.

Exercise 1.14. Show the following identities (you may use the determinant representation (7)):

$$\vec u \cdot (\vec v \times \vec w) = \vec v \cdot (\vec w \times \vec u) = \vec w \cdot (\vec u \times \vec v) \qquad \forall\, \vec u, \vec v, \vec w \in \mathbb{R}^3.$$

What is the relation between $\vec u \cdot (\vec v \times \vec w)$ and $\vec w \cdot (\vec v \times \vec u)$? What is $\vec u \cdot (\vec v \times \vec u)$?

Figure 5: The triple product $\vec u \cdot (\vec v \times \vec w)$ as the volume of a parallelepiped: the base has area $|\vec v \times \vec w|$ and the height is $|\vec u \cdot \hat n|$.

1.2  Scalar fields, vector fields and vector functions

Fields³ are functions of position, described by the position vector $\vec r = x\hat\imath + y\hat\jmath + z\hat k$. Depending on the kind of output they are either scalar fields or vector fields. Vector functions are vector-valued functions of a real variable. Scalar fields, vector fields and vector functions are described in [1] in Sections 12.1, 15.1 and 11.1 respectively.
1.2.1  Scalar fields

A scalar field is a function $f : D \to \mathbb{R}$, where the domain D is a subset of $\mathbb{R}^3$. The value of f at $\vec r$ may be written as $f(\vec r)$ or $f(x, y, z)$. Scalar fields may also be called multivariate functions or functions of several variables. Some examples of scalar fields are

$$f(\vec r) = x^2 - y^2, \qquad g(\vec r) = xy e^z, \qquad h(\vec r) = |\vec r|^4.$$

Smooth scalar fields can be graphically represented using level sets, i.e. sets defined by the equation $f(\vec r) = \text{constant}$; see Figure 6 for an example (see also the MA1CAL lecture notes). The level sets of a two-dimensional field are (planar) level curves and can be easily drawn, while the level sets of a three-dimensional scalar field are level surfaces, which are harder to represent. Level curves are also called contour lines or isolines, and level surfaces are called isosurfaces. Note that two distinct level surfaces (or curves) never intersect each other, since at every point $\vec r$ the field f takes only one value, but they might self-intersect.
Figure 6: The level set representation of the scalar field $f(\vec r) = x^2 - y^2$. Since f does not depend on the z-component of the position, we can think of it as a two-dimensional field and represent it with the level curves corresponding to a section in the xy-plane. Each colour represents the set of points $\vec r$ such that $f(\vec r)$ is equal to a certain constant, e.g. $f(\vec r) = 0$ along the green lines and $f(\vec r) = 2$ along the red curves. This plot is obtained with Matlab's command contour.
Example 1.15 (Different fields with the same level sets). Consider the level surfaces of the scalar field $f(\vec r) = |\vec r|^2$. Every surface corresponds to the set of points that are solutions of the quadratic equation $f(\vec r) = |\vec r|^2 = x^2 + y^2 + z^2 = C$ for some $C \in \mathbb{R}$ ($C \geq 0$), thus they are the spheres centred at the origin.

Now consider the level surfaces of the scalar field $g(\vec r) = e^{-|\vec r|^2}$. They correspond to the solutions of the equation $e^{-x^2-y^2-z^2} = C$, or equivalently $x^2 + y^2 + z^2 = -\log C$, for some $C \in \mathbb{R}$ ($0 < C < 1$). Therefore also in this case they are the spheres centred at the origin (indeed the two fields are related by the relation $g(\vec r) = e^{-f(\vec r)}$). We conclude that two different smooth scalar fields may have the same level surfaces, associated with different field values (see Figure 7).
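The relation $g = e^{-f}$ can be observed numerically: any two points on the same level set of f automatically lie on the same level set of g. A small Python sketch (with our own variable names):

```python
import math

f = lambda x, y, z: x*x + y*y + z*z                 # f(r) = |r|^2
g = lambda x, y, z: math.exp(-(x*x + y*y + z*z))    # g(r) = e^{-|r|^2}

# Two different points on the same sphere |r|^2 = 2:
p, q = (1.0, 1.0, 0.0), (0.0, 1.0, 1.0)
print(f(*p) == f(*q))              # True: same level set of f
print(g(*p) == g(*q))              # True: hence also the same level set of g
print(g(*p) == math.exp(-f(*p)))   # True: the relation g = e^{-f}
```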
³ Note that the fields that are the object of this section have nothing to do with the algebraic definition of fields as abstract sets provided with addition and multiplication (like R, Q or C). The word "field" is commonly used for both meanings and can be a source of confusion; the use of "scalar field" and "vector field" is unambiguous.


Figure 7: The level sets of the two scalar fields described in Example 1.15; only the plane z = 0 is represented here. In the left plot, the field $f(\vec r) = |\vec r|^2$ (level sets for the values 1, 2, . . . , 7); in the right plot, the field $g(\vec r) = e^{-|\vec r|^2}$ (level sets for the values 0.1, 0.2, . . . , 0.9). Each level set of f is also a level set of g (do not forget that the plots represent only a few level sets).
Remark 1.16. A real function of one variable $g : \mathbb{R} \to \mathbb{R}$ is usually represented with its graph $G_g = \{(x, y) \in \mathbb{R}^2 \text{ s.t. } y = g(x)\}$, which is a subset of the plane. Exactly in the same way, a two-dimensional scalar field $f : \mathbb{R}^2 \to \mathbb{R}$ can be visualised using its graph

$$G_f = \big\{\vec r \in \mathbb{R}^3 \text{ s.t. } z = f(x, y)\big\},$$

which is a surface in $\mathbb{R}^3$; see the example in Figure 8. We can represent in the same way also a three-dimensional field that does not depend on one coordinate, but the graph of a general three-dimensional scalar field is a hypersurface and we cannot easily visualise it, as we would need four dimensions.

As an exercise, you can try to represent the graphs of the two fields defined in Example 1.15: despite having the same level sets, the surfaces representing their graphs are very different from each other.

Figure 8: The field $f = x^2 - y^2$ represented as the surface $G_f = \{\vec r \in \mathbb{R}^3 \text{ s.t. } z = f(x, y)\}$; see Remark 1.16. Surfaces can be drawn in Matlab with the commands mesh and surf.
Remark 1.17 (Admissible smoothness of fields). In these notes we always assume that the scalar and vector fields we use are smooth enough to take all the needed derivatives. Whenever we require a field to be smooth, we can understand this as being described by a $C^\infty$ function, which guarantees the maximal possible smoothness; however in most cases $C^2$ regularity will be enough for our purposes. For example, whenever we write an identity involving the derivatives of a scalar field f, we will be able to consider $f(\vec r) = |\vec r|^2$, which enjoys $C^\infty$ regularity, but not $f(\vec r) = |\vec r|$, which is not continuously differentiable (not $C^1$) at the origin. Note that studying the continuity (and thus the smoothness) of a field can be much more tricky than for a function; e.g. think of the fields $f(\vec r) = \frac{2xy}{x^2+y^2}$ and $g(\vec r) = \frac{2x^2 y}{x^4+y^2}$: they are both discontinuous at the origin, can you see why? (You may learn more on this in the analysis in several variables module; if you cannot wait, take a look at Chapter 12 of [1].)
1.2.2  Vector fields
A vector field $\vec F$ is a function of position whose output is a vector, namely a function

$$\vec F : D \to \mathbb{R}^3, \qquad \vec F(\vec r) = F_1(\vec r)\hat\imath + F_2(\vec r)\hat\jmath + F_3(\vec r)\hat k,$$

where the domain D is a subset of $\mathbb{R}^3$. The three scalar fields $F_1$, $F_2$ and $F_3$ are the components of $\vec F$. Some examples of vector fields are

$$\vec F(\vec r) = 2x\hat\imath - 2y\hat\jmath, \qquad \vec G(\vec r) = yz\hat\imath + xz\hat\jmath + xy\hat k, \qquad \vec H(\vec r) = |\vec r|^2\hat\imath + \cos y\,\hat k.$$

Figure 9 shows two common graphical representations of (two-dimensional) vector fields.


Figure 9: Two visualisations of the two-dimensional vector field $\vec F(\vec r) = 2x\hat\imath - 2y\hat\jmath$. In the left plot, the direction of the arrow positioned in $\vec r$ indicates the direction of the field in that point and its length is proportional to the magnitude of $\vec F(\vec r)$. In the right plot the lines are tangent to the field $\vec F(\vec r)$ in every point; these curves are called field lines or streamlines. Note that the streamline representation does not give any information about the magnitude of the vector field. These plots are obtained with Matlab's commands quiver and streamslice (unfortunately the latter command is not available in Octave).

1.2.3  Vector functions and curves

Vector functions are vector-valued functions of a real variable, $\vec a : I \to \mathbb{R}^3$, where I is either the real line $I = \mathbb{R}$ or an interval. Since the value $\vec a(t)$ at each $t \in I$ is a vector, we can expand a vector function as

$$\vec a(t) = a_1(t)\hat\imath + a_2(t)\hat\jmath + a_3(t)\hat k,$$

where $a_1$, $a_2$ and $a_3 : I \to \mathbb{R}$ (the components of $\vec a$) are real functions.

If $\vec a$ is continuous (i.e. its three components are continuous real functions) then it is a curve. If moreover $\vec a$ is injective (i.e. $\vec a(t) \neq \vec a(s)$ whenever $t \neq s$) then it is called a simple curve. If the interval I is closed and bounded, namely $I = [t_I, t_F] \subset \mathbb{R}$, and $\vec a : I \to \mathbb{R}^3$ is a curve satisfying $\vec a(t_I) = \vec a(t_F)$ (i.e. it starts and ends at the same point), then $\vec a$ is called a loop. A few examples of curves are shown in Figure 10.
If we interpret the variable t as time, a curve may represent the trajectory of a point moving in space. Following this interpretation, curves are oriented, namely they possess a preferred direction of travelling.

Note that, as opposed to the common non-mathematical meaning, the word "curve" denotes a function $\vec a$, whose image $\{\vec u \in \mathbb{R}^3 \text{ s.t. } \vec u = \vec a(t) \text{ for some } t \in I\}$ is a subset of $\mathbb{R}^3$, and not the image itself. Indeed, different curves may define the same set. For example

$$\vec a(t) = \cos t\,\hat\imath + \sin t\,\hat\jmath, \quad t \in [0, 2\pi), \qquad \text{and} \qquad \vec b(t) = \cos 2t\,\hat\imath + \sin 2t\,\hat\jmath, \quad t \in [0, \pi),$$

are two different curves which have the same image: the unit circle in the xy-plane.
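The distinction between a curve and its image can be seen numerically: the two parametrisations above visit the same points, but at a given parameter value t they sit at different places. A sketch (variable names are ours):

```python
import math

# The two parametrisations from the text; both trace the unit circle.
a = lambda t: (math.cos(t), math.sin(t))         # t in [0, 2*pi)
b = lambda t: (math.cos(2*t), math.sin(2*t))     # t in [0, pi)

t = 0.7
# b visits the same points as a, at twice the speed:
print(b(t) == a(2*t))   # True
# yet as functions they differ at the same parameter value:
print(a(t) == b(t))     # False
```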

Figure 10: Three examples of the sets defined by curves. In the left plot the cubic $\vec a(t) = t\hat\imath + t^3\hat\jmath$, plotted for the interval $[-1, 1]$. In the centre plot the helix $\vec b(t) = \cos t\,\hat\imath + \sin t\,\hat\jmath + 0.1t\hat k$, for $t \in [0, 6\pi]$. In the right plot the clover $\vec c(t) = (\cos 3t)(\cos t)\hat\imath + (\cos 3t)(\sin t)\hat\jmath$ for $t \in [0, \pi]$. Note that this is not a simple curve as it intersects itself in the origin, so $\vec c$ is not injective. Since $\vec c(0) = \vec c(\pi)$, the curve $\vec c$ is a loop.
Note (Important warning!). At this point it is extremely important not to mix up the different definitions of scalar fields, vector fields and vector functions. Treating vectors as scalars or scalars as vectors is one of the main sources of mistakes in vector calculus exams! We recall that scalar fields take as input a vector and return a real number ($\vec r \mapsto f(\vec r)$), vector fields take as input a vector and return a vector ($\vec r \mapsto \vec F(\vec r)$), and vector functions take as input a real number and return a vector ($t \mapsto \vec a(t)$):

    Real functions (of real variable)    $f : \mathbb{R} \to \mathbb{R}$            $t \mapsto f(t)$
    Vector functions, curves             $\vec a : \mathbb{R} \to \mathbb{R}^3$     $t \mapsto \vec a(t)$
    Scalar fields                        $f : \mathbb{R}^3 \to \mathbb{R}$          $\vec r \mapsto f(\vec r)$
    Vector fields                        $\vec F : \mathbb{R}^3 \to \mathbb{R}^3$   $\vec r \mapsto \vec F(\vec r)$

(listed in order of increasing complexity). Vector fields might be thought of as combinations of three scalar fields (the components) and vector functions as combinations of three real functions.
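The four rows of the table can be mirrored with Python type annotations, which makes the input/output distinction explicit; this is purely our own illustration, not part of the notes:

```python
from typing import Callable, Tuple

Vec3 = Tuple[float, float, float]

# Real function of a real variable: t -> f(t)
f_real: Callable[[float], float] = lambda t: t**2
# Vector function (curve): t -> a(t)
a: Callable[[float], Vec3] = lambda t: (t, t**3, 0.0)
# Scalar field: r -> f(r)
f_field: Callable[[Vec3], float] = lambda r: r[0]**2 - r[1]**2
# Vector field: r -> F(r)
F: Callable[[Vec3], Vec3] = lambda r: (2*r[0], -2*r[1], 0.0)

print(f_field((2.0, 1.0, 5.0)))  # 3.0: a scalar out of a vector in
print(F((2.0, 1.0, 5.0)))        # (4.0, -2.0, 0.0): a vector out of a vector in
```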
Remark 1.18 (Fields in physics). Fields are important in all branches of science and model many physical quantities. For example, consider a domain D that models a portion of the Earth's atmosphere. One can associate to every point $\vec r \in D$ several numerical quantities representing temperature, density, air pressure, concentration of water vapour or some pollutant (at a given instant): each of these physical quantities can be mathematically represented by a scalar field (a scalar quantity is associated to each point in space). Other physical quantities involve magnitude and direction (which can both vary at every point in space), thus they can be represented as vector fields: for example the gravitational force, the wind velocity, the magnetic field (pointing to the magnetic north pole). The plots you commonly see in weather forecasts are representations of some of these fields (e.g. level sets of pressure at a given altitude). Also curves are used in physics, for instance to describe the trajectories of electrons, bullets, aircraft, planets, galaxies and any other kind of body.

1.3  Vector differential operators

We learned in the calculus class what the derivative of a smooth real function of one variable $f : \mathbb{R} \to \mathbb{R}$ is. We now study the derivatives of scalar and vector fields. It turns out that there are several extremely important differential operators, which generalise the concept of derivative. In this section we introduce them, while in the next one we study some relations between them.

As before, for simplicity we consider here the three-dimensional case only; all the operators, with the relevant exception of the curl operator, can immediately be defined in $\mathbb{R}^n$ for any dimension $n \in \mathbb{N}$.

The textbook [1] describes partial derivatives in Sections 12.3-12.5, the Jacobian matrix in 12.6, the gradient and directional derivatives in 12.7, the divergence and curl operators in 16.1 and the Laplacian in 16.2.
Remark 1.19 (What is an operator?). An operator is anything that operates on functions or fields, i.e. a "function of functions" or "function of fields". For example, we can define the doubling operator T that, given a real function $f : \mathbb{R} \to \mathbb{R}$, returns its double, i.e. Tf is the function $Tf : \mathbb{R} \to \mathbb{R}$ such that $Tf(x) = 2f(x)$.

A differential operator is an operator that involves some differentiation. For example, the derivative $\frac{d}{dx}$ maps $f : \mathbb{R} \to \mathbb{R}$ to $\frac{d}{dx}f = f' : \mathbb{R} \to \mathbb{R}$.

Operators and differential operators are rigorously defined using the concept of function space, which is an infinite-dimensional vector space whose elements are functions. This is well beyond the scope of this class and is one of the topics studied by functional analysis.
1.3.1  Partial derivatives and the gradient

The differentiation of fields is done using the nabla operator $\vec\nabla$ (also called "del", and often simply denoted $\nabla$):

$$\vec\nabla := \hat\imath \frac{\partial}{\partial x} + \hat\jmath \frac{\partial}{\partial y} + \hat k \frac{\partial}{\partial z}. \tag{8}$$

The symbol $\frac{\partial}{\partial x}$ denotes the partial derivative with respect to the x-component of the position vector (and similarly for $\frac{\partial}{\partial y}$ and $\frac{\partial}{\partial z}$).

When $\vec\nabla$ is applied to a scalar field f, it produces a vector field $\vec\nabla f$ called the gradient:

$$\vec\nabla f(\vec r) = \operatorname{grad} f(\vec r) = \hat\imath \frac{\partial f}{\partial x}(\vec r) + \hat\jmath \frac{\partial f}{\partial y}(\vec r) + \hat k \frac{\partial f}{\partial z}(\vec r). \tag{9}$$

Remark 1.20 (Definition of partial derivatives). We recall that the partial derivatives of a smooth scalar field f at the point $\vec r = x\hat\imath + y\hat\jmath + z\hat k$ are defined as limits of difference quotients:

$$\frac{\partial f}{\partial x}(\vec r) := \lim_{h \to 0} \frac{f(x+h, y, z) - f(x, y, z)}{h},$$
$$\frac{\partial f}{\partial y}(\vec r) := \lim_{h \to 0} \frac{f(x, y+h, z) - f(x, y, z)}{h},$$
$$\frac{\partial f}{\partial z}(\vec r) := \lim_{h \to 0} \frac{f(x, y, z+h) - f(x, y, z)}{h}.$$

In other words, $\frac{\partial f}{\partial x}(\vec r)$ can be understood as the derivative of the real function $x \mapsto f(x, y, z)$ which "freezes" the y- and z-variables.
Example 1.21. The gradient of $f(\vec r) = x^2 - y^2$ is

$$\vec\nabla f(\vec r) = \hat\imath \frac{\partial}{\partial x}(x^2 - y^2) + \hat\jmath \frac{\partial}{\partial y}(x^2 - y^2) + \hat k \frac{\partial}{\partial z}(x^2 - y^2) = 2x\hat\imath - 2y\hat\jmath;$$

thus the vector field in Figure 9 is the gradient of the scalar field depicted in Figures 6 and 8.
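The analytic gradient of Example 1.21 can be cross-checked against the difference quotients of Remark 1.20 (with a small but finite h). This sketch uses central differences and our own helper name grad_fd:

```python
f = lambda x, y, z: x*x - y*y

def grad_fd(f, x, y, z, h=1e-6):
    """Central-difference approximation of the gradient (9)."""
    return ((f(x+h, y, z) - f(x-h, y, z)) / (2*h),
            (f(x, y+h, z) - f(x, y-h, z)) / (2*h),
            (f(x, y, z+h) - f(x, y, z-h)) / (2*h))

gx, gy, gz = grad_fd(f, 1.0, 2.0, 0.5)
# Analytic gradient at (1, 2, 0.5): (2x, -2y, 0) = (2, -4, 0)
print(abs(gx - 2.0) < 1e-6, abs(gy + 4.0) < 1e-6, abs(gz) < 1e-6)
```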
Proposition 1.22 (Properties of the gradient). Given two smooth scalar fields $f, g : D \to \mathbb{R}$, their gradients satisfy the following properties:

1. for any constant $\lambda \in \mathbb{R}$, the following identities hold (linearity):

$$\vec\nabla(f + g) = \vec\nabla f + \vec\nabla g, \qquad \vec\nabla(\lambda f) = \lambda \vec\nabla f; \tag{10}$$

2. the following identity holds (product rule or Leibniz rule):

$$\vec\nabla(fg) = g\vec\nabla f + f\vec\nabla g; \tag{11}$$

3. for any smooth real function $G : \mathbb{R} \to \mathbb{R}$, the chain rule⁴ holds:

$$\vec\nabla(G \circ f)(\vec r) = \vec\nabla\big(G(f(\vec r))\big) = G'\big(f(\vec r)\big)\, \vec\nabla f(\vec r), \tag{12}$$

where $G'(f(\vec r))$ is the derivative of G evaluated at $f(\vec r)$, and $G \circ f$ denotes the composition of G with f;

4. $\vec\nabla f(\vec r)$ is perpendicular to the level surface of f passing through $\vec r$ (i.e. to $\{\vec r\,' \in D \text{ s.t. } f(\vec r\,') = f(\vec r)\}$);

5. $\vec\nabla f(\vec r)$ points in the direction of increasing f.
Proof of 1., 2. and 3. We use the definition of the gradient (9), the linearity and product rule of the partial derivatives,

$$\frac{\partial(fg)}{\partial x} = g\frac{\partial f}{\partial x} + f\frac{\partial g}{\partial x}, \qquad
\frac{\partial(fg)}{\partial y} = g\frac{\partial f}{\partial y} + f\frac{\partial g}{\partial y}, \qquad
\frac{\partial(fg)}{\partial z} = g\frac{\partial f}{\partial z} + f\frac{\partial g}{\partial z}, \tag{13}$$

and the chain rule for partial derivatives ($\frac{\partial}{\partial x} g(f(x)) = g'(f(x)) \frac{\partial f}{\partial x}(x)$):

$$\vec\nabla(f + g) = \hat\imath \frac{\partial(f+g)}{\partial x} + \hat\jmath \frac{\partial(f+g)}{\partial y} + \hat k \frac{\partial(f+g)}{\partial z}
= \hat\imath \frac{\partial f}{\partial x} + \hat\imath \frac{\partial g}{\partial x} + \hat\jmath \frac{\partial f}{\partial y} + \hat\jmath \frac{\partial g}{\partial y} + \hat k \frac{\partial f}{\partial z} + \hat k \frac{\partial g}{\partial z}
= \vec\nabla f + \vec\nabla g;$$

$$\vec\nabla(fg) = \hat\imath \frac{\partial(fg)}{\partial x} + \hat\jmath \frac{\partial(fg)}{\partial y} + \hat k \frac{\partial(fg)}{\partial z}
= \hat\imath \Big(g\frac{\partial f}{\partial x} + f\frac{\partial g}{\partial x}\Big) + \hat\jmath \Big(g\frac{\partial f}{\partial y} + f\frac{\partial g}{\partial y}\Big) + \hat k \Big(g\frac{\partial f}{\partial z} + f\frac{\partial g}{\partial z}\Big)
= g\vec\nabla f + f\vec\nabla g;$$

$$\vec\nabla(G \circ f)(\vec r) = \hat\imath \frac{\partial(G \circ f)}{\partial x}(\vec r) + \hat\jmath \frac{\partial(G \circ f)}{\partial y}(\vec r) + \hat k \frac{\partial(G \circ f)}{\partial z}(\vec r)
= \hat\imath\, G'\big(f(\vec r)\big)\frac{\partial f}{\partial x}(\vec r) + \hat\jmath\, G'\big(f(\vec r)\big)\frac{\partial f}{\partial y}(\vec r) + \hat k\, G'\big(f(\vec r)\big)\frac{\partial f}{\partial z}(\vec r)
= G'\big(f(\vec r)\big)\, \vec\nabla f(\vec r).$$

The second identity in (10) follows from choosing $g(\vec r) = \lambda$ (which satisfies $\vec\nabla\lambda = \vec 0$) in (11).
Exercise 1.23. Compute the gradients of the following scalar fields:

$$f(\vec r) = xy e^z, \qquad g(\vec r) = \frac{xy}{y+z}, \qquad h(\vec r) = \log(1 + z^2 e^{xy}), \qquad \ell(\vec r) = \sqrt{x^2 + y^4 + z^6}.$$

Exercise 1.24. Verify that the gradients of the magnitude scalar field $m(\vec r) = |\vec r|$ and of its square $s(\vec r) = |\vec r|^2$ satisfy the identities

$$\vec\nabla s(\vec r) = \vec\nabla(|\vec r|^2) = 2\vec r, \qquad \vec\nabla m(\vec r) = \vec\nabla(|\vec r|) = \frac{\vec r}{|\vec r|} \qquad \forall\, \vec r \in \mathbb{R}^3,\ \vec r \neq \vec 0.$$

Represent the two gradients as in Figure 9. Can you compute $\vec\nabla(|\vec r|^\alpha)$ for a general $\alpha \in \mathbb{R}$?
⁴ We recall the notion of composition of functions: for any three sets A, B, C, and two functions $F : A \to B$ and $G : B \to C$, the composition of G with F is the function $(G \circ F) : A \to C$ obtained by applying F and then G to the obtained output. In formulas: $(G \circ F)(x) := G(F(x))$ for all $x \in A$. If A = B = C = R (or they are subsets of R) and F and G are differentiable, then from basic calculus we know the derivative of the composition: $(G \circ F)'(x) = G'(F(x))\,F'(x)$ (this is the most basic example of chain rule).


Figure 11: A combined representation of the two-dimensional scalar field $f(\vec r) = x^2 - y^2$ (coloured level curves) and its gradient $\vec\nabla f(\vec r) = 2x\hat\imath - 2y\hat\jmath$ (black streamlines). Note that the streamlines are perpendicular to the level lines in every point and they point from areas with lower values of f (blue) to areas with higher values (red) (compare with Proposition 1.22).
Exercise 1.25. Consider the scalar field $f = (2x + y)^3$ and the vector field $\vec G = 2e^z\hat\imath + e^z\hat\jmath$. Verify that, at every point $\vec r \in \mathbb{R}^3$, the level set of f is orthogonal to $\vec G$, but $\vec G$ is not the gradient of f. Can you find a formula relating $\vec G$ to $\vec\nabla f$?

Hint: figure out the exact shape of the level sets of f, compute two (linearly independent) tangent vectors to the level set at each point, and verify that both these vectors are orthogonal to $\vec G$.
Example 1.26 (Finding the unit normal to a surface). Consider the surface

$$S = \Big\{\vec r \in \mathbb{R}^3 \text{ s.t. } y = \sqrt{1 + x^2}\Big\}$$

and compute the field $\hat n : S \to \mathbb{R}^3$ composed of the unit vectors perpendicular to S and pointing upwards as in Figure 12.

From Proposition 1.22(4), we know that the gradient of a field that admits S as level set indicates the direction perpendicular to S itself. First, we find a scalar field f such that S is given as $f(\vec r) = \text{constant}$: we can choose $f(\vec r) = x^2 - y^2$, which satisfies $f(\vec r) = -1$ for all $\vec r \in S$. Taking the partial derivatives of f we immediately see that its gradient is

$$\vec\nabla f(\vec r) = 2x\hat\imath - 2y\hat\jmath, \qquad |\vec\nabla f| = 2\sqrt{x^2 + y^2}.$$

Thus for every $\vec r \in S$, $\hat n(\vec r)$ is one of the two unit vectors parallel to $\vec\nabla f(\vec r)$, i.e.

$$\hat n(\vec r) = \pm\frac{\vec\nabla f(\vec r)}{|\vec\nabla f(\vec r)|} = \pm\frac{x\hat\imath - y\hat\jmath}{\sqrt{x^2 + y^2}}.$$

From Figure 12 we see that we want to choose the sign such that $\hat n(0, 1, 0) = \hat\jmath$, thus we choose the minus sign in the last equation above and conclude

$$\hat n(\vec r) = \frac{-x\hat\imath + y\hat\jmath}{\sqrt{x^2 + y^2}}.$$
Figure 12: A section in the $xy$-plane ($z = 0$) of the surface $S$ of Example 1.26 (the curve
$y = \sqrt{1 + x^2}$) and its unit normal vectors.
Given a smooth scalar field $f$ and a unit vector $\hat u$, the directional derivative of $f$ in
direction $\hat u$ is defined as $\frac{\partial f}{\partial\hat u}(\vec r) := \hat u\cdot\vec\nabla f(\vec r)$. If $\hat n$ is the unit vector orthogonal to a surface,
$\frac{\partial f}{\partial\hat n}$ is called normal derivative.
Example 1.27. Let $f(\vec r) = e^x + \sin y + xz^2$ and $\vec F(\vec r) = x\,\hat\imath + \hat\jmath$ be a scalar and a vector
field defined in $\mathbb{R}^3$. Compute the directional derivative of $f$ in the direction of $\vec F$. (This is a
directional derivative whose direction may be different at every point.)
We compute the gradient of $f$ and the unit vector in the direction of $\vec F$:
\[
\vec\nabla f(\vec r) = (e^x + z^2)\,\hat\imath + \cos y\,\hat\jmath + 2xz\,\hat k,
\qquad
\frac{\vec F(\vec r)}{|\vec F(\vec r)|} = \frac{x\,\hat\imath + \hat\jmath}{\sqrt{x^2 + 1}}.
\]
The directional derivative is
\[
\frac{\partial f}{\partial\hat F}(\vec r)
= \frac{\vec F(\vec r)}{|\vec F(\vec r)|}\cdot\vec\nabla f(\vec r)
= \frac{x\,\hat\imath + \hat\jmath}{\sqrt{x^2 + 1}}\cdot\Big((e^x + z^2)\,\hat\imath + \cos y\,\hat\jmath + 2xz\,\hat k\Big)
= \frac{x(e^x + z^2) + \cos y}{\sqrt{x^2 + 1}}.
\]
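A directional derivative like the one in Example 1.27 can be checked symbolically; the following sketch (assuming SymPy) computes $(\vec F/|\vec F|)\cdot\vec\nabla f$ and compares it with the closed form:

```python
# Directional derivative of f in the (pointwise) direction of F.
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.exp(x) + sp.sin(y) + x*z**2
F = sp.Matrix([x, 1, 0])                      # F = x i + j

gradf = sp.Matrix([sp.diff(f, v) for v in (x, y, z)])
dderiv = (F / sp.sqrt(F.dot(F))).dot(gradf)   # unit vector dotted with grad f

closed_form = (x*(sp.exp(x) + z**2) + sp.cos(y)) / sp.sqrt(x**2 + 1)
assert sp.simplify(dderiv - closed_form) == 0
```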

1.3.2  The Jacobian matrix

We have seen that the derivatives of a scalar field can be represented as a vector field (the
gradient); how to represent the derivatives of a vector field?
Consider a smooth vector field $\vec F$; the Jacobian matrix $J\vec F$ (or simply the Jacobian,
named after Carl Gustav Jacob Jacobi 1804-1851) of $\vec F$ is
\[
J\vec F :=
\begin{pmatrix}
\dfrac{\partial F_1}{\partial x} & \dfrac{\partial F_1}{\partial y} & \dfrac{\partial F_1}{\partial z} \\[2mm]
\dfrac{\partial F_2}{\partial x} & \dfrac{\partial F_2}{\partial y} & \dfrac{\partial F_2}{\partial z} \\[2mm]
\dfrac{\partial F_3}{\partial x} & \dfrac{\partial F_3}{\partial y} & \dfrac{\partial F_3}{\partial z}
\end{pmatrix}.
\tag{14}
\]
This is a field whose values at each point are $3\times 3$ matrices. Note that sometimes the noun
"Jacobian" is used to indicate the determinant of the Jacobian matrix.
Example 1.28. Compute the Jacobian of the following vector fields:
\[
\vec F(\vec r) = 2x\,\hat\imath - 2y\,\hat\jmath,
\qquad
\vec G(\vec r) = yz\,\hat\imath + xz\,\hat\jmath + xy\,\hat k,
\qquad
\vec H(\vec r) = |\vec r|^2\,\hat\imath + \cos y\,\hat k.
\]
We simply compute all the partial derivatives:
\[
J\vec F(\vec r) = \begin{pmatrix} 2 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & 0 \end{pmatrix},
\qquad
J\vec G(\vec r) = \begin{pmatrix} 0 & z & y \\ z & 0 & x \\ y & x & 0 \end{pmatrix},
\qquad
J\vec H(\vec r) = \begin{pmatrix} 2x & 2y & 2z \\ 0 & 0 & 0 \\ 0 & -\sin y & 0 \end{pmatrix}.
\]
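The three Jacobians of Example 1.28 can be reproduced with SymPy's built-in `Matrix.jacobian` (an illustrative sketch, not part of the original notes):

```python
# Jacobian matrices of the fields F, G, H from Example 1.28.
import sympy as sp

x, y, z = sp.symbols('x y z')
vars_ = sp.Matrix([x, y, z])

F = sp.Matrix([2*x, -2*y, 0])
G = sp.Matrix([y*z, x*z, x*y])
H = sp.Matrix([x**2 + y**2 + z**2, 0, sp.cos(y)])

assert F.jacobian(vars_) == sp.Matrix([[2, 0, 0], [0, -2, 0], [0, 0, 0]])
assert G.jacobian(vars_) == sp.Matrix([[0, z, y], [z, 0, x], [y, x, 0]])
assert H.jacobian(vars_) == sp.Matrix([[2*x, 2*y, 2*z], [0, 0, 0],
                                       [0, -sp.sin(y), 0]])
```

Note that $J\vec G$ comes out symmetric: this is no accident, since $\vec G$ is a gradient (of $xyz$, see Example 1.41).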

1.3.3  Second-order partial derivatives, the Laplacian and the Hessian

We recall that the second-order partial derivatives of a scalar field $f$ are the partial derivatives
of the partial derivatives of $f$. When partial derivatives are taken twice with respect to the
same component of the position, they are denoted as
\[
\frac{\partial^2 f}{\partial x^2} = \frac{\partial}{\partial x}\frac{\partial f}{\partial x},
\qquad
\frac{\partial^2 f}{\partial y^2} = \frac{\partial}{\partial y}\frac{\partial f}{\partial y}
\qquad\text{and}\qquad
\frac{\partial^2 f}{\partial z^2} = \frac{\partial}{\partial z}\frac{\partial f}{\partial z}.
\]
When they are taken with respect to two different components, they are called mixed
derivatives and denoted as
\[
\frac{\partial^2 f}{\partial x\partial y} = \frac{\partial}{\partial x}\frac{\partial f}{\partial y},
\qquad
\frac{\partial^2 f}{\partial x\partial z} = \frac{\partial}{\partial x}\frac{\partial f}{\partial z},
\qquad
\frac{\partial^2 f}{\partial y\partial x} = \frac{\partial}{\partial y}\frac{\partial f}{\partial x},
\]
\[
\frac{\partial^2 f}{\partial y\partial z} = \frac{\partial}{\partial y}\frac{\partial f}{\partial z},
\qquad
\frac{\partial^2 f}{\partial z\partial x} = \frac{\partial}{\partial z}\frac{\partial f}{\partial x},
\qquad\text{and}\qquad
\frac{\partial^2 f}{\partial z\partial y} = \frac{\partial}{\partial z}\frac{\partial f}{\partial y}.
\]

If $f$ is smooth, the order of differentiation is not relevant (Clairaut's or Schwarz's theorem):
\[
\frac{\partial^2 f}{\partial x\partial y} = \frac{\partial^2 f}{\partial y\partial x},
\qquad
\frac{\partial^2 f}{\partial x\partial z} = \frac{\partial^2 f}{\partial z\partial x}
\qquad\text{and}\qquad
\frac{\partial^2 f}{\partial y\partial z} = \frac{\partial^2 f}{\partial z\partial y}.
\tag{15}
\]

The Laplacian $\Delta f$ (or Laplace operator, sometimes denoted $\nabla^2 f$, named after Pierre-Simon Laplace 1749-1827)${}^5$ of a smooth scalar field $f$ is the scalar field obtained as the sum of
the pure second partial derivatives of $f$:
\[
\Delta f := \nabla^2 f := \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} + \frac{\partial^2 f}{\partial z^2}.
\tag{16}
\]
A scalar field whose Laplacian vanishes everywhere is called a harmonic function.


Remark 1.29 (Applications of the Laplacian). The Laplacian is used in some of the most
studied partial differential equations (PDEs), such as the Poisson equation $-\Delta u = f$ and the
Helmholtz equation $-\Delta u - k^2u = f$. Therefore, it is ubiquitous and extremely important in
physics and engineering, e.g. in the models of diffusion of heat or fluids, electromagnetics,
acoustics, wave propagation, quantum mechanics.
The Hessian $Hf$ (or Hessian matrix, named after Ludwig Otto Hesse 1811-1874) is the
$3\times 3$ matrix field of the second derivatives of the scalar field $f$:
\[
Hf := J(\vec\nabla f) =
\begin{pmatrix}
\dfrac{\partial^2 f}{\partial x^2} & \dfrac{\partial^2 f}{\partial y\partial x} & \dfrac{\partial^2 f}{\partial z\partial x} \\[2mm]
\dfrac{\partial^2 f}{\partial x\partial y} & \dfrac{\partial^2 f}{\partial y^2} & \dfrac{\partial^2 f}{\partial z\partial y} \\[2mm]
\dfrac{\partial^2 f}{\partial x\partial z} & \dfrac{\partial^2 f}{\partial y\partial z} & \dfrac{\partial^2 f}{\partial z^2}
\end{pmatrix}.
\tag{17}
\]
Note that (if the field $f$ is smooth enough) the Hessian is a symmetric matrix at every point.
The Laplacian is equal to the trace of the Hessian, i.e. the sum of its diagonal terms:
$\Delta f = \operatorname{Tr}(Hf)$.
The vector Laplacian $\vec\Delta$ is the Laplace operator applied componentwise to vector fields:
\[
\vec\Delta\vec F := (\Delta F_1)\,\hat\imath + (\Delta F_2)\,\hat\jmath + (\Delta F_3)\,\hat k
\tag{18}
\]
\[
= \Big(\frac{\partial^2 F_1}{\partial x^2} + \frac{\partial^2 F_1}{\partial y^2} + \frac{\partial^2 F_1}{\partial z^2}\Big)\hat\imath
+ \Big(\frac{\partial^2 F_2}{\partial x^2} + \frac{\partial^2 F_2}{\partial y^2} + \frac{\partial^2 F_2}{\partial z^2}\Big)\hat\jmath
+ \Big(\frac{\partial^2 F_3}{\partial x^2} + \frac{\partial^2 F_3}{\partial y^2} + \frac{\partial^2 F_3}{\partial z^2}\Big)\hat k.
\]

5 The reason for the use of the notation $\nabla^2$ will be slightly more clear after equation (21); this is mainly
used by physicists and engineers.


Example 1.30. Compute the Laplacian and the Hessian of the following scalar fields:
\[
f(\vec r) = x^2 - y^2, \qquad g(\vec r) = xye^z, \qquad h(\vec r) = |\vec r|^4.
\]
We compute the gradients and all the second derivatives (recall the identity
$h(\vec r) = (x^2 + y^2 + z^2)^2 = x^4 + y^4 + z^4 + 2x^2y^2 + 2x^2z^2 + 2y^2z^2$):
\[
\vec\nabla f = 2x\,\hat\imath - 2y\,\hat\jmath,
\qquad
\vec\nabla g = ye^z\,\hat\imath + xe^z\,\hat\jmath + xye^z\,\hat k,
\]
\[
\vec\nabla h = \vec\nabla(x^2 + y^2 + z^2)^2 = 2(x^2 + y^2 + z^2)(2x\,\hat\imath + 2y\,\hat\jmath + 2z\,\hat k) = 4|\vec r|^2\,\vec r,
\]
\[
Hf = \begin{pmatrix} 2 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & 0 \end{pmatrix},
\qquad
Hg = \begin{pmatrix} 0 & e^z & ye^z \\ e^z & 0 & xe^z \\ ye^z & xe^z & xye^z \end{pmatrix},
\]
\[
Hh = \begin{pmatrix}
12x^2 + 4y^2 + 4z^2 & 8xy & 8xz \\
8xy & 4x^2 + 12y^2 + 4z^2 & 8yz \\
8xz & 8yz & 4x^2 + 4y^2 + 12z^2
\end{pmatrix},
\]
\[
\Delta f = 0, \qquad \Delta g = xye^z = g, \qquad \Delta h = 20|\vec r|^2.
\]
Scalar fields whose Laplacian vanishes everywhere (as $f$ in the example) are called harmonic
functions.
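The Laplacians and Hessians of Example 1.30 can be verified symbolically; a sketch assuming SymPy (whose `hessian` helper matches definition (17)):

```python
# Laplacians and Hessians of the fields f, g, h from Example 1.30.
import sympy as sp

x, y, z = sp.symbols('x y z')

def laplacian(f):
    return sum(sp.diff(f, v, 2) for v in (x, y, z))

f = x**2 - y**2
g = x*y*sp.exp(z)
h = (x**2 + y**2 + z**2)**2

assert laplacian(f) == 0                                    # f is harmonic
assert sp.simplify(laplacian(g) - g) == 0                   # Laplacian(g) = g
assert sp.simplify(laplacian(h) - 20*(x**2 + y**2 + z**2)) == 0
assert sp.hessian(f, (x, y, z)) == sp.diag(2, -2, 0)
```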
1.3.4  The divergence operator

Given a smooth vector field $\vec F$, its divergence is defined as the following scalar field:
\[
\vec\nabla\cdot\vec F := \operatorname{div}\vec F := \frac{\partial F_1}{\partial x} + \frac{\partial F_2}{\partial y} + \frac{\partial F_3}{\partial z}.
\tag{19}
\]
The notation $\vec\nabla\cdot\vec F$ is justified by the fact that the divergence can be written as the formal
scalar product between the gradient operator $\vec\nabla$ and $\vec F$:
\[
\vec\nabla\cdot\vec F = \Big(\hat\imath\frac{\partial}{\partial x} + \hat\jmath\frac{\partial}{\partial y} + \hat k\frac{\partial}{\partial z}\Big)\cdot\big(F_1\,\hat\imath + F_2\,\hat\jmath + F_3\,\hat k\big).
\]
Example 1.31. Compute the divergence of the following vector fields:
\[
\vec F(\vec r) = 2x\,\hat\imath - 2y\,\hat\jmath,
\qquad
\vec G(\vec r) = yz\,\hat\imath + xz\,\hat\jmath + xy\,\hat k,
\qquad
\vec H(\vec r) = |\vec r|^2\,\hat\imath + \cos y\,\hat k.
\]
We only have to compute three derivatives for each field and sum them:
\[
\vec\nabla\cdot\vec F = \frac{\partial(2x)}{\partial x} + \frac{\partial(-2y)}{\partial y} + 0 = 2 - 2 + 0 = 0,
\qquad
\vec\nabla\cdot\vec G = 0 + 0 + 0 = 0,
\qquad
\vec\nabla\cdot\vec H = 2x + 0 + 0 = 2x.
\]

The divergence can be interpreted as a measure of the "spreading" of a vector field. If $\vec F$ represents
the velocity of an incompressible fluid (e.g. water), then $\vec\nabla\cdot\vec F > 0$ means that the flow is
diverging due to a source, and $\vec\nabla\cdot\vec F < 0$ means that the flow is converging due to a sink. If $\vec F$
represents the velocity of a compressible fluid in absence of sources or sinks (e.g. air), then
$\vec\nabla\cdot\vec F > 0$ means that the flow is dilating, and $\vec\nabla\cdot\vec F < 0$ means that the flow is being compressed.

Example 1.32 (Positive and negative divergence). The field $\vec F = x\,\hat\imath + y\,\hat\jmath$ has positive divergence
$\vec\nabla\cdot\vec F = 1 + 1 + 0 = 2 > 0$, thus (if it represents water velocity) to maintain the flow
water must be added at every point (see Figure 13, left). The field $\vec G = (-x - 2y)\,\hat\imath + (2x - y)\,\hat\jmath$
has negative divergence $\vec\nabla\cdot\vec G = -1 - 1 + 0 = -2 < 0$, thus to maintain the flow water must
be subtracted from every point (see Figure 13, right).
Remark 1.33. From the previous example it seems to be possible to deduce the sign of
the divergence of a vector field from its plot, observing whether the arrows converge or
diverge. However, this can be misleading, as also the magnitude (the length of the arrows)
matters. For example, the fields $\vec F_a = |\vec r|^{-a}\,\vec r$ defined in $\mathbb{R}^3\setminus\{\vec 0\}$, where $a$ is a real positive
parameter, have similar plots but they have positive divergence if $a < 3$ and negative if $a > 3$.
(Can you show this fact? It is not easy!)
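The claim of Remark 1.33 follows from $\vec\nabla\cdot\big(|\vec r|^{-a}\vec r\big) = (3 - a)|\vec r|^{-a}$, which a symbolic computation confirms (a sketch, assuming SymPy):

```python
# Divergence of F_a = |r|^(-a) r from Remark 1.33: it equals (3 - a)|r|^(-a),
# so its sign flips at a = 3.
import sympy as sp

x, y, z, a = sp.symbols('x y z a', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)
Fa = [r**(-a) * c for c in (x, y, z)]

div = sum(sp.diff(Fc, v) for Fc, v in zip(Fa, (x, y, z)))
assert sp.simplify(div * r**a) == 3 - a       # i.e. div = (3 - a)|r|^(-a)
```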

Figure 13: A representation of the fields $\vec F = x\,\hat\imath + y\,\hat\jmath$ (left) and $\vec G = (-x - 2y)\,\hat\imath + (2x - y)\,\hat\jmath$
(right) from Example 1.32. $\vec F$ has positive divergence and $\vec G$ negative.
Figure 14: The field $\vec F = (x^2 - x)\,\hat\imath + (y^2 - y)\,\hat\jmath$ has divergence $\vec\nabla\cdot\vec F = 2(x + y - 1)$, which is
positive above the straight line $x + y = 1$ and negative below. If $\vec F$ represents the velocity of
water, the upper part of the space acts like a source and the lower part as a sink.
Note (Warning on confusing notation). Since strictly speaking $\vec\nabla$ is not a vector, the symbol
$\vec F\cdot\vec\nabla$ does not represent a scalar product and
\[
\vec\nabla\cdot\vec F \ne \vec F\cdot\vec\nabla.
\]
Indeed these are two very different mathematical objects: $\vec\nabla\cdot\vec F$ is a scalar field, while $\vec F\cdot\vec\nabla$
is an operator that maps scalar fields to scalar fields:
\[
(\vec F\cdot\vec\nabla)f = \Big(F_1\frac{\partial}{\partial x} + F_2\frac{\partial}{\partial y} + F_3\frac{\partial}{\partial z}\Big)f
= F_1\frac{\partial f}{\partial x} + F_2\frac{\partial f}{\partial y} + F_3\frac{\partial f}{\partial z}.
\]
For example if $\vec G = yz\,\hat\imath + xz\,\hat\jmath + xy\,\hat k$ and $g = xye^z$, then
\[
(\vec G\cdot\vec\nabla)g = \Big(yz\frac{\partial}{\partial x} + xz\frac{\partial}{\partial y} + xy\frac{\partial}{\partial z}\Big)g
= yz\,ye^z + xz\,xe^z + xy\,xye^z = (x^2z + y^2z + x^2y^2)e^z.
\]
The operator $\vec F\cdot\vec\nabla$ can also be applied componentwise to a vector field (and in this case it
returns a vector field). $\vec F\cdot\vec\nabla$ is often called the advection operator; when $|\vec F| = 1$ it
corresponds to the directional derivative we have already encountered in Section 1.3.1.
1.3.5  The curl operator

The last differential operator we define is the curl operator $\vec\nabla\times$ (often denoted $\operatorname{curl}$ and
more rarely $\operatorname{rot}$ and called rotor or rotational), which maps vector fields to vector fields:
\[
\vec\nabla\times\vec F := \operatorname{curl}\vec F :=
\begin{vmatrix}
\hat\imath & \hat\jmath & \hat k \\[1mm]
\dfrac{\partial}{\partial x} & \dfrac{\partial}{\partial y} & \dfrac{\partial}{\partial z} \\[1mm]
F_1 & F_2 & F_3
\end{vmatrix}
= \Big(\frac{\partial F_3}{\partial y} - \frac{\partial F_2}{\partial z}\Big)\hat\imath
+ \Big(\frac{\partial F_1}{\partial z} - \frac{\partial F_3}{\partial x}\Big)\hat\jmath
+ \Big(\frac{\partial F_2}{\partial x} - \frac{\partial F_1}{\partial y}\Big)\hat k.
\tag{20}
\]

As in the definition of the vector product (4), the matrix determinant is purely formal, since
it contains vectors, differential operators and scalar fields. Again, $\vec\nabla\times\vec F \ne \vec F\times\vec\nabla$, as the
left-hand side is a vector field while the right-hand side is a differential operator.
Of all the differential operators introduced so far, the curl operator is the only one which
can be defined only in three dimensions${}^6$, since it is related to the vector product.
Example 1.34. Compute the curl of the following vector fields:
\[
\vec F(\vec r) = 2x\,\hat\imath - 2y\,\hat\jmath,
\qquad
\vec G(\vec r) = yz\,\hat\imath + xz\,\hat\jmath + xy\,\hat k,
\qquad
\vec H(\vec r) = |\vec r|^2\,\hat\imath + \cos y\,\hat k.
\]
For each field we have to evaluate six partial derivatives (the non-diagonal terms of the
Jacobian matrix $J\vec F$):
\[
\vec\nabla\times\vec F = \vec 0,
\qquad
\vec\nabla\times\vec G = (x - x)\,\hat\imath + (y - y)\,\hat\jmath + (z - z)\,\hat k = \vec 0,
\]
\[
\vec\nabla\times\vec H = (-\sin y - 0)\,\hat\imath + (2z - 0)\,\hat\jmath + (0 - 2y)\,\hat k
= -\sin y\,\hat\imath + 2z\,\hat\jmath - 2y\,\hat k.
\]
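The curls of Example 1.34 can be checked by writing the determinant formula (20) out componentwise (a sketch, assuming SymPy):

```python
# Componentwise curl, applied to the fields F, G, H of Example 1.34.
import sympy as sp

x, y, z = sp.symbols('x y z')

def curl(F):
    F1, F2, F3 = F
    return sp.Matrix([sp.diff(F3, y) - sp.diff(F2, z),
                      sp.diff(F1, z) - sp.diff(F3, x),
                      sp.diff(F2, x) - sp.diff(F1, y)])

F = (2*x, -2*y, 0)
G = (y*z, x*z, x*y)
H = (x**2 + y**2 + z**2, 0, sp.cos(y))

assert curl(F) == sp.zeros(3, 1)
assert curl(G) == sp.zeros(3, 1)
assert curl(H) == sp.Matrix([-sp.sin(y), 2*z, -2*y])
```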

Figure 15: The planar field $\vec F = 2y\,\hat\imath - x\,\hat\jmath$ depicted in the left plot has curl equal to
$\vec\nabla\times\vec F = -3\hat k$, which points into the page, in agreement with the clockwise rotation of $\vec F$.
The planar field $\vec G = x^2\,\hat\jmath$ depicted in the right plot has curl equal to $\vec\nabla\times\vec G = 2x\,\hat k$,
which points into the page for $x < 0$ and out of the page for $x > 0$. Indeed, if we imagine immersing
some paddle-wheels in the field, due to the differences in the magnitude of the field on their two
sides, they will rotate clockwise in the negative-$x$ half-plane and counter-clockwise in the
positive-$x$ half-plane.
How can we interpret the curl of a field? The curl is in some way a measure of the
rotation of the field. If we imagine placing a microscopic paddle-wheel at a point $\vec r$ in a
fluid moving with velocity $\vec F$, it will rotate with angular velocity proportional to $|\vec\nabla\times\vec F(\vec r)|$
and rotation axis parallel to $\vec\nabla\times\vec F(\vec r)$. If the fluid motion is planar, i.e. $F_3 = 0$, and
counter-clockwise, the curl will point out of the page; if the motion is clockwise it will point
into the page (as the motion of a usual screw). See also Figure 15.

6 Actually, in some applications, two-dimensional analogues of the curl are used, but they are not standard.
The 2D-curl of a 2D-vector field is a scalar and the 2D-curl of a scalar is a 2D-vector field.

1.4  Vector differential identities

In the two following propositions we prove several identities involving scalar and vector fields
and the differential operators defined so far. It is fundamental to apply the operators to the
appropriate kind of fields; for example, we cannot compute the divergence of a scalar field
because we have not given any meaning to this object. For this reason, we summarise in the
following table which kind of fields are taken and returned by the various operators. The last
column shows the order of the partial derivatives involved in each operator.

  Operator name        symbol                                             input    output   order
  Partial derivative   $\partial/\partial x$, $\partial/\partial y$ or $\partial/\partial z$   scalar   scalar   1st
  Gradient             $\vec\nabla$                                       scalar   vector   1st
  Jacobian             $J$                                                vector   matrix   1st
  Laplacian            $\Delta$                                           scalar   scalar   2nd
  Hessian              $H$                                                scalar   matrix   2nd
  Vector Laplacian     $\vec\Delta$                                       vector   vector   2nd
  Divergence           $\vec\nabla\cdot$ (or $\operatorname{div}$)        vector   scalar   1st
  Curl (only 3D)       $\vec\nabla\times$ (or $\operatorname{curl}$)      vector   vector   1st

Proposition 1.35 (Vector differential identities for a single field). Let $f$ be a smooth scalar
field and $\vec F$ be a smooth vector field, both defined in a domain $D \subset \mathbb{R}^3$. Then the following
identities hold true:
\[
\vec\nabla\cdot(\vec\nabla f) = \Delta f, \tag{21}
\]
\[
\vec\nabla\cdot(\vec\nabla\times\vec F) = 0, \tag{22}
\]
\[
\vec\nabla\times(\vec\nabla f) = \vec 0, \tag{23}
\]
\[
\vec\nabla\times(\vec\nabla\times\vec F) = \vec\nabla(\vec\nabla\cdot\vec F) - \vec\Delta\vec F. \tag{24}
\]

Proof. The two scalar identities (21) and (22) can be proved using only the definitions of
the differential operators involved (note that we write $(\vec\nabla f)_1$ and $(\vec\nabla\times\vec F)_1$ to denote the
$x$-components of the corresponding vector fields, and similarly for $y$ and $z$):
\begin{align*}
\vec\nabla\cdot(\vec\nabla f)
&\overset{(19)}{=} \frac{\partial(\vec\nabla f)_1}{\partial x} + \frac{\partial(\vec\nabla f)_2}{\partial y} + \frac{\partial(\vec\nabla f)_3}{\partial z}
\overset{(9)}{=} \frac{\partial}{\partial x}\frac{\partial f}{\partial x} + \frac{\partial}{\partial y}\frac{\partial f}{\partial y} + \frac{\partial}{\partial z}\frac{\partial f}{\partial z}
\overset{(16)}{=} \Delta f,
\\[1mm]
\vec\nabla\cdot(\vec\nabla\times\vec F)
&\overset{(19)}{=} \frac{\partial(\vec\nabla\times\vec F)_1}{\partial x} + \frac{\partial(\vec\nabla\times\vec F)_2}{\partial y} + \frac{\partial(\vec\nabla\times\vec F)_3}{\partial z}
\\
&\overset{(20)}{=} \frac{\partial}{\partial x}\Big(\frac{\partial F_3}{\partial y} - \frac{\partial F_2}{\partial z}\Big)
+ \frac{\partial}{\partial y}\Big(\frac{\partial F_1}{\partial z} - \frac{\partial F_3}{\partial x}\Big)
+ \frac{\partial}{\partial z}\Big(\frac{\partial F_2}{\partial x} - \frac{\partial F_1}{\partial y}\Big)
\\
&= \frac{\partial^2 F_3}{\partial x\partial y} - \frac{\partial^2 F_2}{\partial x\partial z}
+ \frac{\partial^2 F_1}{\partial y\partial z} - \frac{\partial^2 F_3}{\partial y\partial x}
+ \frac{\partial^2 F_2}{\partial z\partial x} - \frac{\partial^2 F_1}{\partial z\partial y}
\overset{(15)}{=} 0.
\end{align*}
The two remaining vector identities can be proved componentwise, i.e. by showing that each
of the three components of the left-hand side is equal to the corresponding component of the
right-hand side:
\[
\big(\vec\nabla\times(\vec\nabla f)\big)_1
\overset{(20)}{=} \frac{\partial(\vec\nabla f)_3}{\partial y} - \frac{\partial(\vec\nabla f)_2}{\partial z}
\overset{(9)}{=} \frac{\partial}{\partial y}\frac{\partial f}{\partial z} - \frac{\partial}{\partial z}\frac{\partial f}{\partial y}
\overset{(15)}{=} 0,
\]

\begin{align*}
\big(\vec\nabla\times(\vec\nabla\times\vec F)\big)_1
&\overset{(20)}{=} \frac{\partial(\vec\nabla\times\vec F)_3}{\partial y} - \frac{\partial(\vec\nabla\times\vec F)_2}{\partial z}
\\
&\overset{(20)}{=} \frac{\partial}{\partial y}\Big(\frac{\partial F_2}{\partial x} - \frac{\partial F_1}{\partial y}\Big)
- \frac{\partial}{\partial z}\Big(\frac{\partial F_1}{\partial z} - \frac{\partial F_3}{\partial x}\Big)
\\
&\overset{(15)}{=} \frac{\partial^2 F_2}{\partial x\partial y} + \frac{\partial^2 F_3}{\partial x\partial z}
- \frac{\partial^2 F_1}{\partial y^2} - \frac{\partial^2 F_1}{\partial z^2}
\\
&= \frac{\partial}{\partial x}\Big(\frac{\partial F_1}{\partial x} + \frac{\partial F_2}{\partial y} + \frac{\partial F_3}{\partial z}\Big)
- \Big(\frac{\partial^2 F_1}{\partial x^2} + \frac{\partial^2 F_1}{\partial y^2} + \frac{\partial^2 F_1}{\partial z^2}\Big)
\\
&\overset{(19),(16)}{=} \frac{\partial(\vec\nabla\cdot\vec F)}{\partial x} - \Delta F_1
\overset{(9),(18)}{=} \big(\vec\nabla(\vec\nabla\cdot\vec F) - \vec\Delta\vec F\big)_1,
\end{align*}
and similarly for the $y$- and $z$-components. $\square$
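Identity (24) can also be verified for completely generic smooth components, using symbolic unevaluated functions (a sketch, assuming SymPy):

```python
# curl curl F = grad div F - vector Laplacian F, for generic F1, F2, F3.
import sympy as sp

x, y, z = sp.symbols('x y z')
F = sp.Matrix([sp.Function(n)(x, y, z) for n in ('F1', 'F2', 'F3')])

def curl(V):
    return sp.Matrix([sp.diff(V[2], y) - sp.diff(V[1], z),
                      sp.diff(V[0], z) - sp.diff(V[2], x),
                      sp.diff(V[1], x) - sp.diff(V[0], y)])

div_F = sum(sp.diff(F[i], v) for i, v in enumerate((x, y, z)))
grad_div = sp.Matrix([sp.diff(div_F, v) for v in (x, y, z)])
vec_lap = sp.Matrix([sum(sp.diff(c, v, 2) for v in (x, y, z)) for c in F])

assert (curl(curl(F)) - grad_div + vec_lap).expand() == sp.zeros(3, 1)
```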


Next Proposition 1.36 can be understood as an extension to vector fields and vector
differential operators of the product rule (13) for partial derivatives (which in turn extends
the usual formula $(fg)' = f'g + gf'$ for real functions, well-known from the calculus class).

Proposition 1.36 (Vector differential identities for two fields). Let $f$ and $g$ be smooth scalar
fields, $\vec F$ and $\vec G$ be smooth vector fields, all of them defined in the same domain $D \subset \mathbb{R}^3$.
Then the following identities hold true:
\[
\vec\nabla(fg) = f\,\vec\nabla g + g\,\vec\nabla f, \tag{25}
\]
\[
\vec\nabla(\vec F\cdot\vec G) = (\vec F\cdot\vec\nabla)\vec G + (\vec G\cdot\vec\nabla)\vec F + \vec G\times(\vec\nabla\times\vec F) + \vec F\times(\vec\nabla\times\vec G), \tag{26}
\]
\[
\vec\nabla\cdot(f\vec G) = (\vec\nabla f)\cdot\vec G + f\,\vec\nabla\cdot\vec G, \tag{27}
\]
\[
\vec\nabla\cdot(\vec F\times\vec G) = (\vec\nabla\times\vec F)\cdot\vec G - \vec F\cdot(\vec\nabla\times\vec G), \tag{28}
\]
\[
\vec\nabla\times(f\vec G) = (\vec\nabla f)\times\vec G + f\,\vec\nabla\times\vec G, \tag{29}
\]
\[
\vec\nabla\times(\vec F\times\vec G) = (\vec\nabla\cdot\vec G)\vec F - (\vec\nabla\cdot\vec F)\vec G + (\vec G\cdot\vec\nabla)\vec F - (\vec F\cdot\vec\nabla)\vec G, \tag{30}
\]
\[
\Delta(fg) = (\Delta f)g + 2\,\vec\nabla f\cdot\vec\nabla g + f(\Delta g). \tag{31}
\]

Proof. Identity (25) has already been proven in Proposition 1.22 (but you can also show it
componentwise). We show here the proof of identities (26), (28) and (31) only. The other
identities can be proven with a similar technique; some of them will be shown in class and the
remaining ones are left as exercise. All the proofs use only the definitions of the differential
operators and of the vector operations (scalar and vector product), the product rule (13) for
partial derivatives, together with some smart rearrangements of the terms. In some cases, it
is convenient to start the proof from the expression at the right-hand side, or even to expand
both sides and match the results obtained. The vector identities are proven componentwise.
Identity (28) can be proved as follows:
\begin{align*}
\vec\nabla\cdot(\vec F\times\vec G)
&\overset{(19)}{=} \frac{\partial(\vec F\times\vec G)_1}{\partial x} + \frac{\partial(\vec F\times\vec G)_2}{\partial y} + \frac{\partial(\vec F\times\vec G)_3}{\partial z}
\\
&\overset{(4)}{=} \frac{\partial(F_2G_3 - F_3G_2)}{\partial x} + \frac{\partial(F_3G_1 - F_1G_3)}{\partial y} + \frac{\partial(F_1G_2 - F_2G_1)}{\partial z}
\\
&\overset{(13)}{=} \frac{\partial F_2}{\partial x}G_3 + F_2\frac{\partial G_3}{\partial x} - \frac{\partial F_3}{\partial x}G_2 - F_3\frac{\partial G_2}{\partial x}
+ \frac{\partial F_3}{\partial y}G_1 + F_3\frac{\partial G_1}{\partial y} - \frac{\partial F_1}{\partial y}G_3 - F_1\frac{\partial G_3}{\partial y}
\\
&\qquad + \frac{\partial F_1}{\partial z}G_2 + F_1\frac{\partial G_2}{\partial z} - \frac{\partial F_2}{\partial z}G_1 - F_2\frac{\partial G_1}{\partial z}
\qquad\text{(now collect non-derivative terms)}
\\
&= \Big(\frac{\partial F_3}{\partial y} - \frac{\partial F_2}{\partial z}\Big)G_1
+ \Big(\frac{\partial F_1}{\partial z} - \frac{\partial F_3}{\partial x}\Big)G_2
+ \Big(\frac{\partial F_2}{\partial x} - \frac{\partial F_1}{\partial y}\Big)G_3
\\
&\qquad - F_1\Big(\frac{\partial G_3}{\partial y} - \frac{\partial G_2}{\partial z}\Big)
- F_2\Big(\frac{\partial G_1}{\partial z} - \frac{\partial G_3}{\partial x}\Big)
- F_3\Big(\frac{\partial G_2}{\partial x} - \frac{\partial G_1}{\partial y}\Big)
\\
&\overset{(20)}{=} (\vec\nabla\times\vec F)_1G_1 + (\vec\nabla\times\vec F)_2G_2 + (\vec\nabla\times\vec F)_3G_3
- F_1(\vec\nabla\times\vec G)_1 - F_2(\vec\nabla\times\vec G)_2 - F_3(\vec\nabla\times\vec G)_3
\\
&\overset{(3)}{=} (\vec\nabla\times\vec F)\cdot\vec G - \vec F\cdot(\vec\nabla\times\vec G).
\end{align*}


We prove (26) componentwise, starting from the first component of the expression at the
right-hand side:
\begin{align*}
\Big((\vec F\cdot\vec\nabla)\vec G &+ (\vec G\cdot\vec\nabla)\vec F + \vec G\times(\vec\nabla\times\vec F) + \vec F\times(\vec\nabla\times\vec G)\Big)_1
\\
&\overset{(4)}{=} (\vec F\cdot\vec\nabla)G_1 + (\vec G\cdot\vec\nabla)F_1
+ G_2(\vec\nabla\times\vec F)_3 - G_3(\vec\nabla\times\vec F)_2
+ F_2(\vec\nabla\times\vec G)_3 - F_3(\vec\nabla\times\vec G)_2
\\
&\overset{(20)}{=} F_1\frac{\partial G_1}{\partial x} + F_2\frac{\partial G_1}{\partial y} + F_3\frac{\partial G_1}{\partial z}
+ G_1\frac{\partial F_1}{\partial x} + G_2\frac{\partial F_1}{\partial y} + G_3\frac{\partial F_1}{\partial z}
\\
&\qquad + G_2\Big(\frac{\partial F_2}{\partial x} - \frac{\partial F_1}{\partial y}\Big)
- G_3\Big(\frac{\partial F_1}{\partial z} - \frac{\partial F_3}{\partial x}\Big)
+ F_2\Big(\frac{\partial G_2}{\partial x} - \frac{\partial G_1}{\partial y}\Big)
- F_3\Big(\frac{\partial G_1}{\partial z} - \frac{\partial G_3}{\partial x}\Big)
\\
&\qquad\text{(collect the six non-derivative terms $F_1, F_2, \ldots$)}
\\
&= F_1\frac{\partial G_1}{\partial x} + F_2\frac{\partial G_2}{\partial x} + F_3\frac{\partial G_3}{\partial x}
+ G_1\frac{\partial F_1}{\partial x} + G_2\frac{\partial F_2}{\partial x} + G_3\frac{\partial F_3}{\partial x}
\qquad\text{(collect terms pairwise)}
\\
&= \Big(F_1\frac{\partial G_1}{\partial x} + G_1\frac{\partial F_1}{\partial x}\Big)
+ \Big(F_2\frac{\partial G_2}{\partial x} + G_2\frac{\partial F_2}{\partial x}\Big)
+ \Big(F_3\frac{\partial G_3}{\partial x} + G_3\frac{\partial F_3}{\partial x}\Big)
\\
&\overset{(13)}{=} \frac{\partial(F_1G_1)}{\partial x} + \frac{\partial(F_2G_2)}{\partial x} + \frac{\partial(F_3G_3)}{\partial x}
\overset{(3)}{=} \frac{\partial(\vec F\cdot\vec G)}{\partial x}
\overset{(9)}{=} \big(\vec\nabla(\vec F\cdot\vec G)\big)_1,
\end{align*}

and similarly for the $y$- and $z$-components. Finally, identity (31) follows from a repeated
application of the product rule (13):
\begin{align*}
\Delta(fg)
&\overset{(16)}{=} \frac{\partial^2(fg)}{\partial x^2} + \frac{\partial^2(fg)}{\partial y^2} + \frac{\partial^2(fg)}{\partial z^2}
\\
&\overset{(13)}{=} \frac{\partial}{\partial x}\Big(\frac{\partial f}{\partial x}g + f\frac{\partial g}{\partial x}\Big)
+ \frac{\partial}{\partial y}\Big(\frac{\partial f}{\partial y}g + f\frac{\partial g}{\partial y}\Big)
+ \frac{\partial}{\partial z}\Big(\frac{\partial f}{\partial z}g + f\frac{\partial g}{\partial z}\Big)
\\
&\overset{(13)}{=} \frac{\partial^2 f}{\partial x^2}g + 2\frac{\partial f}{\partial x}\frac{\partial g}{\partial x} + f\frac{\partial^2 g}{\partial x^2}
+ \frac{\partial^2 f}{\partial y^2}g + 2\frac{\partial f}{\partial y}\frac{\partial g}{\partial y} + f\frac{\partial^2 g}{\partial y^2}
+ \frac{\partial^2 f}{\partial z^2}g + 2\frac{\partial f}{\partial z}\frac{\partial g}{\partial z} + f\frac{\partial^2 g}{\partial z^2}
\\
&\overset{(16),(9),(3)}{=} (\Delta f)g + 2\,\vec\nabla f\cdot\vec\nabla g + f(\Delta g). \qquad\square
\end{align*}
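Identities (28) and (31) can likewise be checked for fully generic smooth fields with symbolic unevaluated functions (a sketch, assuming SymPy):

```python
# Product-rule identities (28) and (31) of Proposition 1.36, generic fields.
import sympy as sp

x, y, z = sp.symbols('x y z')
V = (x, y, z)
F = sp.Matrix([sp.Function(n)(*V) for n in ('F1', 'F2', 'F3')])
G = sp.Matrix([sp.Function(n)(*V) for n in ('G1', 'G2', 'G3')])
f, g = (sp.Function(n)(*V) for n in ('f', 'g'))

def div(W):  return sum(sp.diff(W[i], V[i]) for i in range(3))
def grad(h): return sp.Matrix([sp.diff(h, v) for v in V])
def lap(h):  return sum(sp.diff(h, v, 2) for v in V)
def curl(W):
    return sp.Matrix([sp.diff(W[2], y) - sp.diff(W[1], z),
                      sp.diff(W[0], z) - sp.diff(W[2], x),
                      sp.diff(W[1], x) - sp.diff(W[0], y)])

# (28): div(F x G) = curl(F).G - F.curl(G)
assert sp.expand(div(F.cross(G)) - curl(F).dot(G) + F.dot(curl(G))) == 0
# (31): lap(fg) = (lap f)g + 2 grad(f).grad(g) + f lap(g)
assert sp.expand(lap(f*g) - lap(f)*g - 2*grad(f).dot(grad(g)) - f*lap(g)) == 0
```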

Note that in the identities (30) and (26) we use the advection operator applied to a
vector field (which was mentioned in Section 1.3.4):
\begin{align*}
(\vec G\cdot\vec\nabla)\vec F
&= \big((\vec G\cdot\vec\nabla)F_1\big)\hat\imath + \big((\vec G\cdot\vec\nabla)F_2\big)\hat\jmath + \big((\vec G\cdot\vec\nabla)F_3\big)\hat k
\\
&= \Big(G_1\frac{\partial F_1}{\partial x} + G_2\frac{\partial F_1}{\partial y} + G_3\frac{\partial F_1}{\partial z}\Big)\hat\imath
+ \Big(G_1\frac{\partial F_2}{\partial x} + G_2\frac{\partial F_2}{\partial y} + G_3\frac{\partial F_2}{\partial z}\Big)\hat\jmath
\\
&\qquad + \Big(G_1\frac{\partial F_3}{\partial x} + G_2\frac{\partial F_3}{\partial y} + G_3\frac{\partial F_3}{\partial z}\Big)\hat k.
\end{align*}
Exercise 1.37. Prove identities (27), (29) and (30).
Remark 1.38. The following identity involving the Jacobian matrix holds true:
\[
(\vec\nabla\times\vec F)\times\vec G = \big(J\vec F - (J\vec F)^T\big)\vec G,
\]
where the symbol ${}^T$ denotes matrix transposition. (Try to prove it as exercise.)

Example 1.39. We demonstrate some of the results of Proposition 1.35 for the vector field
$\vec F = xy\,\hat\imath + e^y\cos x\,\hat\jmath + z^2\,\hat k$:
\begin{align*}
\vec\nabla\cdot\vec F &= y + e^y\cos x + 2z,
\\
\vec\nabla\times\vec F &= (0 - 0)\,\hat\imath + (0 - 0)\,\hat\jmath + (-e^y\sin x - x)\,\hat k = -(e^y\sin x + x)\,\hat k,
\\
\vec\nabla(\vec\nabla\cdot\vec F) &= -e^y\sin x\,\hat\imath + (1 + e^y\cos x)\,\hat\jmath + 2\,\hat k,
\\
\vec\nabla\times(\vec\nabla\times\vec F) &= -e^y\sin x\,\hat\imath + (e^y\cos x + 1)\,\hat\jmath,
\\
\vec\Delta\vec F &= \Delta(xy)\,\hat\imath + \Delta(e^y\cos x)\,\hat\jmath + \Delta(z^2)\,\hat k
= 0\,\hat\imath + (-e^y\cos x + e^y\cos x)\,\hat\jmath + 2\,\hat k = 2\,\hat k,
\end{align*}
and from the last three formulas we immediately demonstrate (24). Since $\vec\nabla\times\vec F$ is parallel
to $\hat k$ and independent of $z$, its divergence vanishes, thus also the identity (22) is verified.

1.5  Special vector fields and potentials

Definition 1.40. Consider a smooth vector field $\vec F$ defined on a domain $D \subset \mathbb{R}^3$.
If $\vec\nabla\times\vec F = \vec 0$, then $\vec F$ is called irrotational (or curl-free).
If $\vec\nabla\cdot\vec F = 0$, then $\vec F$ is called solenoidal (or divergence-free or incompressible).
If $\vec F = \vec\nabla\varphi$ for some scalar field $\varphi$, then $\vec F$ is called conservative${}^7$ and $\varphi$ is called scalar
potential of $\vec F$.
If $\vec F = \vec\nabla\times\vec A$ for some vector field $\vec A$, then $\vec A$ is called vector potential of $\vec F$.

From the definition of the curl operator, we note that $\vec F$ is irrotational when
\[
\frac{\partial F_3}{\partial y} = \frac{\partial F_2}{\partial z},
\qquad
\frac{\partial F_1}{\partial z} = \frac{\partial F_3}{\partial x}
\qquad\text{and}\qquad
\frac{\partial F_2}{\partial x} = \frac{\partial F_1}{\partial y}.
\]

If $\vec F$ admits a scalar or a vector potential, then the potential is never unique: for all
constant scalars $\lambda \in \mathbb{R}$ and for all scalar fields $g$ (recall (23)),
\[
\text{if } \vec F = \vec\nabla\varphi \text{ then } \vec F = \vec\nabla(\varphi + \lambda);
\qquad
\text{if } \vec F = \vec\nabla\times\vec A \text{ then } \vec F = \vec\nabla\times(\vec A + \vec\nabla g);
\]
therefore $\varphi + \lambda$ and $\vec A + \vec\nabla g$ are alternative potentials.
Example 1.41. Consider the following vector fields:
\[
\vec F(\vec r) = 2x\,\hat\imath - 2y\,\hat\jmath,
\qquad
\vec G(\vec r) = yz\,\hat\imath + xz\,\hat\jmath + xy\,\hat k,
\qquad
\vec H(\vec r) = |\vec r|^2\,\hat\imath + \cos y\,\hat k.
\]
In Examples 1.31 and 1.34 we showed that $\vec F$ and $\vec G$ are both irrotational and solenoidal, while
$\vec H$ is neither irrotational nor solenoidal. Moreover, $\vec F$ and $\vec G$ admit both scalar and vector
potentials (thus in particular they are conservative):
\[
\vec F = \vec\nabla(x^2 - y^2) = \vec\nabla\times(2xy\,\hat k),
\qquad
\vec G = \vec\nabla(xyz) = \vec\nabla\times\Big(\frac{z^2x\,\hat\imath + x^2y\,\hat\jmath + y^2z\,\hat k}{2}\Big).
\]
(For the technique used to compute these potentials see Example 1.42 below.) Of course there
exist fields which are irrotational but not solenoidal and the other way round: verify this fact
for the following fields
\[
\vec F_A = 2x\,\hat\imath + 3y^2\,\hat\jmath + e^z\,\hat k,
\qquad
\vec F_B = e^{x^2}\hat\imath + \frac{1}{1 + z^2}\,\hat k,
\qquad
\vec F_C = x^3y^4\,\hat\imath + x^4y^3\,\hat\jmath + z^2\,\hat k,
\]
\[
\vec F_D = (x + y)\,\hat\imath - y\,\hat\jmath,
\qquad
\vec F_E = yz\,\hat\imath + xz\,\hat\jmath - xy\,\hat k,
\qquad
\vec F_F = (x^2 + \cos z)\,\hat\imath - 2xy\,\hat\jmath.
\]
Example 1.42 (Computation of the potentials). Given a certain field, how can we compute
a scalar or vector potential? We consider as example the irrotational and solenoidal field
\[
\vec F = z\,\hat\imath + x\,\hat k.
\]
If $\varphi$ is a scalar potential of $\vec F$, it must satisfy $\vec F = \vec\nabla\varphi$, thus, acting componentwise:
\begin{align*}
\frac{\partial\varphi}{\partial x} = z
&\quad\Rightarrow\quad \varphi = xz + f(y, z)
&&\text{for some two-dimensional scalar field } f,
\\
\frac{\partial\varphi}{\partial y} = 0
&\quad\Rightarrow\quad \frac{\partial\big(xz + f(y, z)\big)}{\partial y} = \frac{\partial f(y, z)}{\partial y} = 0
\;\Rightarrow\; \varphi = xz + g(z)
&&\text{for some real function } g,
\\
\frac{\partial\varphi}{\partial z} = x
&\quad\Rightarrow\quad \frac{\partial\big(xz + g(z)\big)}{\partial z} = x + \frac{\partial g(z)}{\partial z} = x
\;\Rightarrow\; \varphi = xz + \lambda.
\end{align*}
Thus, for every real constant $\lambda$, the fields $xz + \lambda$ are scalar potentials of $\vec F$.

7 In physics, the scalar potential is often defined to satisfy $\vec F = -\vec\nabla\varphi$; this can be a big source of confusion!
A vector potential $\vec A$ must satisfy $\vec\nabla\times\vec A = \vec F$, thus
\[
\frac{\partial A_3}{\partial y} - \frac{\partial A_2}{\partial z} = z,
\qquad
\frac{\partial A_1}{\partial z} - \frac{\partial A_3}{\partial x} = 0,
\qquad
\frac{\partial A_2}{\partial x} - \frac{\partial A_1}{\partial y} = x.
\]
Many choices for $\vec A$ are possible; the simplest one is to set $A_1 = A_3 = 0$ and $\vec A(\vec r) = a(x, z)\,\hat\jmath$
for some two-dimensional scalar field $a(x, z)$, which now must satisfy
\[
\frac{\partial a}{\partial x} = x,
\qquad
-\frac{\partial a}{\partial z} = z.
\]
We integrate and obtain
\[
a = \frac{x^2}{2} + b(z),
\qquad
-\frac{\partial\big(\frac12x^2 + b(z)\big)}{\partial z} = -\frac{\partial b(z)}{\partial z} = z
\quad\Rightarrow\quad b(z) = -\frac{z^2}{2},
\]
and finally $\vec A = \frac12(x^2 - z^2)\,\hat\jmath$ is a vector potential of $\vec F$ (and many others are possible).
Note that what we are actually doing is solving differential equations, so in most practical
cases a closed form for the potential is not available.
As exercise consider the fields defined in Example 1.41 and compute a scalar potential of
$\vec F_A$ and $\vec F_C$ and a vector potential of $\vec F_D$.
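The potentials found in Example 1.42 are easy to verify once computed; a sketch assuming SymPy:

```python
# Verifying the scalar and vector potentials of F = z i + x k (Example 1.42).
import sympy as sp

x, y, z = sp.symbols('x y z')
F = sp.Matrix([z, 0, x])

phi = x*z                                # scalar potential
A = sp.Matrix([0, (x**2 - z**2)/2, 0])   # vector potential

grad_phi = sp.Matrix([sp.diff(phi, v) for v in (x, y, z)])
curl_A = sp.Matrix([sp.diff(A[2], y) - sp.diff(A[1], z),
                    sp.diff(A[0], z) - sp.diff(A[2], x),
                    sp.diff(A[1], x) - sp.diff(A[0], y)])

assert grad_phi == F
assert curl_A == F
```

Replacing `phi` by `phi + 5`, or adding any gradient to `A`, leaves both assertions true: the non-uniqueness discussed above.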
Exercise 1.43. Verify that the position field $\vec r$ is irrotational, conservative and not solenoidal.
Compute a scalar potential for $\vec r$.

Because of identities (23) and (22) ($\vec\nabla\times(\vec\nabla\varphi) = \vec 0$, $\vec\nabla\cdot(\vec\nabla\times\vec A) = 0$), all conservative
fields are irrotational and all fields admitting a vector potential are solenoidal.
Remark 1.44 (Existence of potentials). Under some assumptions on the domain $D$ of
definition of $\vec F$ (and on the smoothness of $\vec F$) the converse is true. If the domain $D$ is simply
connected (informally, it has no holes or tunnels passing through it) the converse of the first
statement above is true: every irrotational field on $D$ is conservative, i.e. it admits some
scalar potential $\varphi$. If the domain $D$ has no holes in its interior, the converse of the second
statement is true: every solenoidal field on $D$ admits some vector potential $\vec A$. E.g. the whole
$\mathbb{R}^3$, a ball, a cube, a potato satisfy both conditions, while a doughnut is not simply connected
and the complement of a ball does not satisfy the second condition. For more details and
the explicit construction of the potentials, see Theorems 4 and 5 in [1, Section 16.2].
Remark 1.45 (Relation between potentials and domains). The fact that a vector field
$\vec F$ defined in $D$ is conservative may depend on the choice of the domain $D$. For instance,
consider the irrotational field
\[
\vec F = \frac{-y}{x^2 + y^2}\,\hat\imath + \frac{x}{x^2 + y^2}\,\hat\jmath
\]
(exercise: show that $\vec F$ is irrotational). In the half space $D = \{\vec r \in \mathbb{R}^3 \text{ s.t. } x > 0\}$, which is
simply connected, it admits the scalar potential $\varphi = \arctan\frac{y}{x}$ (exercise: show that $\vec\nabla\varphi = \vec F$ in
$D$, using $\frac{d}{dt}\arctan t = \frac{1}{1+t^2}$). However, $\vec F$ may be defined on the larger domain $E = \{\vec r \in \mathbb{R}^3$
s.t. $x^2 + y^2 > 0\}$, i.e. the whole space without the $z$-axis; $E$ is not simply connected. The
potential $\varphi$ can not be extended to the whole of $E$ and it is not possible to define any other scalar
potential for $\vec F$ in the whole of $E$; this will follow from some properties of conservative fields
we will see in the next sections. To summarise: $\vec F$ is irrotational on $E$, conservative on $D$
but not conservative on $E$.
Any smooth vector field $\vec F$ defined in a bounded domain $D$ can be expressed as the sum
of a conservative field and a field admitting a vector potential:
\[
\vec F = \vec\nabla\varphi + \vec\nabla\times\vec A
\]
for some scalar and vector fields $\varphi$ and $\vec A$, respectively. The potentials are not unique. This
decomposition is termed Helmholtz decomposition after Hermann von Helmholtz (1821-1894).
Exercise 1.46. Show that the divergence of any conservative field is equal to the Laplacian
of its scalar potential.
1.6  Total derivatives and chain rule for fields

So far we have applied various differential operators to scalar and vector fields, but we have
not considered the derivatives of the vector functions introduced in Section 1.2.3. These are
indeed easier than derivatives of fields, since they can be computed as the usual derivatives
of the components (which are real-valued functions of a single variable) as learned in the first
calculus class:
\[
\frac{d\vec a}{dt}(t) := \vec a'(t) := \frac{da_1}{dt}(t)\,\hat\imath + \frac{da_2}{dt}(t)\,\hat\jmath + \frac{da_3}{dt}(t)\,\hat k.
\tag{32}
\]
No partial derivatives are involved here. Note that we used the total derivative symbol $\frac{d}{dt}$
instead of the partial derivative one $\frac{\partial}{\partial t}$ because $\vec a$ and its three components depend on one
variable only (i.e. on $t$). The derivative of a vector function is a vector function.

Note (Difference between partial and total derivatives). For $f(\vec r) = f(x, y, z)$, the partial
derivative $\frac{\partial f}{\partial x}$ is the derivative of $f$ with respect to $x$ when all the other variables are kept
fixed. The total derivative $\frac{df}{dx}$ takes into account also the possible dependence of the other
variables ($y$ and $z$) on $x$. We will see some examples below.
Example 1.47. Given two vector functions $\vec a$ and $\vec b$ and a scalar constant $\lambda$, the following
identities hold (linearity and product rule):
\[
\frac{d(\lambda\vec a)}{dt} = \lambda\frac{d\vec a}{dt},
\qquad
\frac{d(\vec a + \vec b)}{dt} = \frac{d\vec a}{dt} + \frac{d\vec b}{dt},
\]
\[
\frac{d(\vec a\cdot\vec b)}{dt} = \frac{d\vec a}{dt}\cdot\vec b + \vec a\cdot\frac{d\vec b}{dt},
\qquad
\frac{d(\vec a\times\vec b)}{dt} = \frac{d\vec a}{dt}\times\vec b + \vec a\times\frac{d\vec b}{dt}.
\tag{33}
\]
Note that $t \mapsto (\vec a(t)\cdot\vec b(t))$ is simply a real function of one real variable.
Now imagine we have a scalar field $f$ evaluated on a smooth curve $\vec a$, i.e. $f(\vec a)$. This is a
real function of a real variable, $t \mapsto (f\circ\vec a)(t) = f(\vec a(t))$. Its derivative in $t$ can be computed
with the chain rule:
\[
\frac{df(\vec a)}{dt}
= \frac{\partial f}{\partial x}\frac{da_1}{dt} + \frac{\partial f}{\partial y}\frac{da_2}{dt} + \frac{\partial f}{\partial z}\frac{da_3}{dt}
= \vec\nabla f\big(\vec a(t)\big)\cdot\frac{d\vec a}{dt}(t).
\tag{34}
\]

Example 1.48. Consider the scalar field $f = xye^z$ and the curve $\vec a = t\,\hat\imath + t^3\,\hat\jmath$ (see Figure
10). Compute the total derivative of $f(\vec a)$ using either the chain rule or explicitly computing
the composition $f(\vec a)$.
(i) We compute the total derivative of $\vec a$, the gradient of $f$ evaluated in $\vec a$, and their scalar
product according to the chain rule (34):
\[
\frac{d\vec a}{dt} = \hat\imath + 3t^2\,\hat\jmath,
\qquad
\vec\nabla f = ye^z\,\hat\imath + xe^z\,\hat\jmath + xye^z\,\hat k
\Big|_{\vec r = \vec a(t)} = t^3\,\hat\imath + t\,\hat\jmath + t^4\,\hat k,
\]
\[
\frac{df}{dt} = \vec\nabla f\cdot\frac{d\vec a}{dt} = t^3 + 3t^3 = 4t^3.
\]
(ii) Alternatively, we can compute the composition $f(\vec a(t))$ and then derive it with respect
to $t$:
\[
f\big(\vec a(t)\big) = t\,t^3e^0 = t^4,
\qquad
\frac{df}{dt} = \frac{d(t^4)}{dt} = 4t^3.
\]
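Both computations of Example 1.48 can be reproduced symbolically (a sketch, assuming SymPy):

```python
# Chain rule (34) vs. direct differentiation of the composition, Example 1.48.
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
f = x*y*sp.exp(z)
a = sp.Matrix([t, t**3, 0])                 # the curve a(t) = t i + t^3 j

gradf = sp.Matrix([sp.diff(f, v) for v in (x, y, z)])
gradf_on_a = gradf.subs({x: a[0], y: a[1], z: a[2]})
chain_rule = gradf_on_a.dot(sp.diff(a, t))  # grad f(a(t)) . a'(t)

composition = f.subs({x: a[0], y: a[1], z: a[2]})
assert sp.simplify(chain_rule - sp.diff(composition, t)) == 0
assert sp.simplify(chain_rule - 4*t**3) == 0
```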

Remark 1.49 (Differentials). As in one-dimensional calculus, we can consider differentials,
infinitesimal changes in the considered quantities. The differential
\[
df = \frac{\partial f}{\partial x}dx + \frac{\partial f}{\partial y}dy + \frac{\partial f}{\partial z}dz
\]
of the scalar field $f(\vec r)$ represents the change in $f(\vec r)$ due to an infinitesimal change in $\vec r$. If
$\vec r$ depends on $t$ as $\vec r(t) = x(t)\,\hat\imath + y(t)\,\hat\jmath + z(t)\,\hat k$, we can express its vector differential as
\[
d\vec r = dx\,\hat\imath + dy\,\hat\jmath + dz\,\hat k
= \frac{dx}{dt}dt\,\hat\imath + \frac{dy}{dt}dt\,\hat\jmath + \frac{dz}{dt}dt\,\hat k
= \Big(\frac{dx}{dt}\hat\imath + \frac{dy}{dt}\hat\jmath + \frac{dz}{dt}\hat k\Big)dt.
\]
Thus, if $f$ is evaluated in $\vec r(t)$, its differential $df$ can be expressed in terms of $dt$:
\[
df = \vec\nabla f\cdot d\vec r
= \frac{\partial f}{\partial x}\frac{dx}{dt}dt + \frac{\partial f}{\partial y}\frac{dy}{dt}dt + \frac{\partial f}{\partial z}\frac{dz}{dt}dt.
\]
Differentials can be used to estimate (compute approximately) the values of a field. Consider
for example the field $f = \sqrt{x^2 + y^3}$. We want to estimate its value at the point
$\vec r = 1.03\,\hat\imath + 1.99\,\hat\jmath$, knowing that $f$ and its gradient in the nearby point $\vec r_0 = \hat\imath + 2\,\hat\jmath$ take
values
\[
f(\vec r_0) = \sqrt{1 + 8} = 3,
\qquad
\vec\nabla f(\vec r_0) = \frac{2x_0\,\hat\imath + 3y_0^2\,\hat\jmath}{2\sqrt{x_0^2 + y_0^3}} = \frac13\,\hat\imath + 2\,\hat\jmath.
\]
We approximate the desired value as
\[
f(\vec r) \approx f(\vec r_0) + \vec\delta\cdot\vec\nabla f(\vec r_0)
= 3 + (0.03\,\hat\imath - 0.01\,\hat\jmath)\cdot\Big(\frac13\,\hat\imath + 2\,\hat\jmath\Big) = 2.99,
\]
where $\vec\delta = \vec r - \vec r_0$ is the discrete increment. (The exact value is $f(\vec r) = 2.9902339\ldots$)
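The estimate above can be checked numerically; the following sketch (plain Python, no extra libraries) evaluates both the linear approximation and the exact value:

```python
# Linear approximation of f = sqrt(x^2 + y^3) near r0 = (1, 2), Remark 1.49.
import math

def f(x, y):
    return math.sqrt(x**2 + y**3)

x0, y0 = 1.0, 2.0
gx = x0 / math.sqrt(x0**2 + y0**3)            # df/dx at r0 = 1/3
gy = 3*y0**2 / (2*math.sqrt(x0**2 + y0**3))   # df/dy at r0 = 2

dx, dy = 0.03, -0.01
approx = f(x0, y0) + gx*dx + gy*dy            # f(r0) + delta . grad f(r0)

assert abs(approx - 2.99) < 1e-12             # the value found above
assert abs(f(x0 + dx, y0 + dy) - approx) < 1e-3
```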
Example 1.50 (Derivative of a field constrained to a surface). Often the coordinates of $\vec r$
should be thought as dependent on each other. In this case the partial derivatives of a field
$f(\vec r)$ do not give the desired rate of change of $f$. For example, imagine that the point $\vec r$ is
constrained to the surface (the graph of $h : \mathbb{R}^2 \to \mathbb{R}$)
\[
S_h = \big\{\vec r \in \mathbb{R}^3, \text{ s.t. } \vec r = x\,\hat\imath + y\,\hat\jmath + h(x, y)\,\hat k\big\}.
\]
In this case, the component $z$ depends on $x$ and $y$ through the two-dimensional field $h$. We
can compute the total derivative of $f$ along $x$ or $y$ using the chain rule:
\[
\frac{df}{dx}\big(x, y, h(x, y)\big)
= \frac{\partial f}{\partial x}\big(x, y, h(x, y)\big)
+ \frac{\partial f}{\partial z}\big(x, y, h(x, y)\big)\frac{\partial h}{\partial x}(x, y),
\]
\[
\frac{df}{dy}\big(x, y, h(x, y)\big)
= \frac{\partial f}{\partial y}\big(x, y, h(x, y)\big)
+ \frac{\partial f}{\partial z}\big(x, y, h(x, y)\big)\frac{\partial h}{\partial y}(x, y).
\]
Recall that $\frac{\partial f}{\partial z}$ represents the derivative of $f$ with respect to its third scalar variable, regardless
of this being just $z$ or a function of other variables.
Let us consider a concrete example. Let $h(x, y) = x^2 - y^2$, so that the surface $S_h =
\{\vec r = x\,\hat\imath + y\,\hat\jmath + (x^2 - y^2)\,\hat k\}$ is that depicted in Figure 16 (compare also with Figure 8). Let
$f = x\sin z$. Then, from the formulas above, we compute the total derivatives of $f$ with respect
to $x$ and $y$:
\[
\frac{df}{dx}\big(x, y, h(x, y)\big) = \sin z + (x\cos z)(2x) = \sin(x^2 - y^2) + 2x^2\cos(x^2 - y^2),
\]
\[
\frac{df}{dy}\big(x, y, h(x, y)\big) = 0 + (x\cos z)(-2y) = -2xy\cos(x^2 - y^2).
\]
Note that even if $f$ is independent of the $y$-coordinate, its derivative along the surface defined
by $h$ (which does depend on $y$) is non-zero.
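The constrained derivatives of Example 1.50 can be cross-checked by substituting $z = h(x, y)$ first and differentiating the composition (a sketch, assuming SymPy):

```python
# Total derivatives of f = x sin z restricted to the surface z = x^2 - y^2.
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x*sp.sin(z)
h = x**2 - y**2

on_surface = f.subs(z, h)          # f restricted to S_h
assert sp.simplify(sp.diff(on_surface, x)
                   - (sp.sin(h) + 2*x**2*sp.cos(h))) == 0
assert sp.simplify(sp.diff(on_surface, y)
                   - (-2*x*y*sp.cos(h))) == 0
```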
Example 1.51 (Derivative of a field constrained to a curve). Compute the total derivative
\[
\frac{d}{dz}f\big(a(z), b(z), z\big)
\]
where $f = e^{xz}y^4$, $a(z) = \cos z$, $b(z) = \sin z$ (this is again a derivative along the curve
$a(t)\,\hat\imath + b(t)\,\hat\jmath + t\,\hat k$; compare with the centre plot of Figure 10).
We combine the partial derivatives of $f$ and the total derivatives of $a$ and $b$:
\begin{align*}
\frac{d}{dz}f\big(a(z), b(z), z\big)
&= \frac{\partial f}{\partial x}\frac{da}{dz} + \frac{\partial f}{\partial y}\frac{db}{dz} + \frac{\partial f}{\partial z}
= -ze^{xz}y^4\sin z + 4e^{xz}y^3\cos z + xe^{xz}y^4
\\
&= e^{z\cos z}\big({-z}\sin^5 z + 4\sin^3 z\cos z + \cos z\sin^4 z\big).
\end{align*}
Figure 16: The surface $S_h = \{\vec r = x\,\hat\imath + y\,\hat\jmath + (x^2 - y^2)\,\hat k\}$ described in Example 1.50. The colours
represent the values of the field $f(\vec r) = x\sin z$, which can be written as $f(\vec r) = x\sin(x^2 - y^2)$
when restricted to $S_h$. Note that $f$ varies in the $y$ coordinate if we move along $S_h$, while it is
constant in that direction if we move in free space.

1.7  Review exercises for Section 1

Consider the following scalar and vector fields:
\[
f = xy^2z^3,
\qquad
g = \cos x + \sin(2y + x),
\qquad
h = e^x\cos y,
\qquad
\ell = x|\vec r|^2 - y^3,
\qquad
m = x^y \ (x > 0),
\tag{35}
\]
\[
\vec F = (y + z)\,\hat\imath + (x + z)\,\hat\jmath + (x + y)\,\hat k,
\qquad
\vec G = e^{z^2}\hat\imath + y^2\,\hat\jmath,
\qquad
\vec H = \vec r\times\hat\jmath,
\]
\[
\vec L = |\vec r|^2\,\vec r,
\qquad
\vec M = yz^2\big(yz\,\hat\imath + 2xz\,\hat\jmath + 3xy\,\hat k\big).
\]
Consider also the three following curves, defined in the interval $-1 < t < 1$:
\[
\vec a = (t^3 - t)\,\hat\imath + (1 - t^2)\,\hat k,
\qquad
\vec b = t^3\,\hat\imath + t^2\,\hat\jmath + \hat k,
\qquad
\vec c = e^t\cos(2t)\,\hat\imath + e^t\sin(2t)\,\hat\jmath.
\]

Answer the questions, trying to avoid brute-force computations whenever possible.
1. Compute gradient, Hessian and Laplacian of the five scalar fields.
2. Compute Jacobian, divergence, curl and vector Laplacian of the five vector fields.
3. Which of the fields are solenoidal and which are irrotational? Which are harmonic?
4. Compute a scalar and a vector potential for $\vec F$, a vector potential for $\vec H$ (can (29) help
you?) and a scalar potential for $\vec M$. Can you guess a scalar potential for $\vec L$?
5. Show that $\vec G$ does not admit either a scalar or a vector potential.
6. Show that $\vec H(\vec r)$ and $\vec L(\vec r)$ are orthogonal to each other at every point $\vec r \in \mathbb{R}^3$.
7. Try to graphically represent the fields in (35). E.g. you can draw a qualitative plot like
those of Section 1.2.2 for $\vec G$ on the plane $y = 0$, and for $\vec F$, $\vec H$ and $\vec L$ on the plane $z = 0$.
8. Demonstrate some of the identities of Propositions 1.35 and 1.36 for the fields in (35).
(Simple and interesting examples are the demonstrations of identities (22) and (24) for
$\vec G$ and $\vec H$; (23) for $f$; (25) for $f$ and $h$; (26) for $\vec G$; (27) and (29) for $h$ and $\vec H$.)
9. Compute the (total) derivatives of the curves (i.e. $\frac{d\vec a}{dt}$, $\frac{d\vec b}{dt}$ and $\frac{d\vec c}{dt}$) and try to draw them.
10. Compute the following total derivatives of the scalar fields along the curves: $\frac{dh(\vec a)}{dt}$, $\frac{df(\vec b)}{dt}$
and $\frac{d\ell(\vec c)}{dt}$. (You can either use the chain rule (34) or first compute the composition.)


2  Vector integration

So far we have extended the concept of differentiation from real functions to vectors and fields (differential vector calculus); in this section we consider the extension of the idea of integral to the same objects (integral vector calculus).

2.1  Line integrals

Here we study how to integrate scalar and vector fields along curves. See also Sections 15.3 and 15.4 of [1], which contain several useful exercises.

2.1.1  Line integrals of scalar fields

Consider a curve ~a : [t_I, t_F] → R³ (where [t_I, t_F] ⊂ R is an interval) and a scalar field f : R³ → R. We denote by Γ := ~a([t_I, t_F]) ⊂ R³ the image of the curve ~a, i.e. the set of the points covered by ~a; we call Γ the path (or trace) of ~a. The line integral of f along Γ is

    ∫_Γ f ds = ∫_{t_I}^{t_F} f(~a(t)) |d~a/dt| dt = ∫_{t_I}^{t_F} f(~a(t)) √( (da₁/dt)² + (da₂/dt)² + (da₃/dt)² ) dt,    (36)

where ds denotes the infinitesimal element of the path Γ. The right-hand side of (36) is an ordinary one-dimensional integral over the interval [t_I, t_F]. If ~a is a loop (i.e. ~a(t_I) = ~a(t_F)), then the line integral along Γ is called a contour integral and denoted with the symbol ∮_Γ f ds.

Example 2.1 (Length of a path). We can measure the length of a path Γ by integrating the constant field f = 1:

    length(Γ) = ∫_Γ ds = ∫_{t_I}^{t_F} |d~a/dt| dt.    (37)
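The length formula (37) is easy to check numerically. The following Python snippet (not part of the original notes; the function names are arbitrary choices) approximates the length of a path by summing the lengths of many short chords, which is exactly the Riemann-sum reading of (37):

```python
import math

def curve_length(a, t_i, t_f, n=100_000):
    """Approximate length(Γ) = ∫ |da/dt| dt by summing chord lengths |a(t_k) - a(t_{k-1})|."""
    h = (t_f - t_i) / n
    total = 0.0
    prev = a(t_i)
    for k in range(1, n + 1):
        cur = a(t_i + k * h)
        total += math.dist(prev, cur)   # length of one small chord
        prev = cur
    return total

# Unit circle parametrised by (cos t, sin t), 0 <= t <= 2π: its length is 2π.
circle = lambda t: (math.cos(t), math.sin(t))
print(curve_length(circle, 0.0, 2 * math.pi))  # ≈ 2π ≈ 6.28319
```

The same function works for any parametrisation, e.g. a helix returning 3-tuples, since `math.dist` accepts points of any (equal) dimension.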

Note. The line integral ∫_Γ f ds is not the same as the integrals ∫_Γ f dx and ∫_Γ f dy you have seen in the calculus methods module. These other integrals will appear in (40).
The line integral in (36) measures a quantity f distributed along Γ, thus it must be independent of the special parametrisation ~a. This is guaranteed by the factor |d~a/dt|, which takes into account the speed of travelling along Γ (if we think of the parameter t as time). Similarly, the travel direction along Γ (the curve orientation) does not affect the value of the line integral. In other words, f and Γ uniquely define the value of the integral in (36), independently of the choice of the ~a we use to compute it. This makes the notation ∫_Γ f ds in (36) well-posed, without the need of specifying ~a. Example 2.2 demonstrates the invariance of the line integral with respect to the parametrisation.
Example 2.2 (Independence of parametrisation). Consider the scalar field f = y and the unit half circle C centred at the origin and located in the half plane {~r = x ı̂ + y ȷ̂, y ≥ 0}, which can be defined by either of the two parametrisations

    ~a : [0, π] → R³,    ~a(t) = cos t ı̂ + sin t ȷ̂;        ~b : [−1, 1] → R³,    ~b(τ) = τ ı̂ + √(1 − τ²) ȷ̂;

(see Figure 17). Verify that

    ∫₀^π f(~a(t)) |d~a/dt| dt = ∫₋₁¹ f(~b(τ)) |d~b/dτ| dτ.

We start by computing the total derivatives of the two parametrisations

    d~a/dt = −sin t ı̂ + cos t ȷ̂,        d~b/dτ = ı̂ − ( τ / √(1 − τ²) ) ȷ̂,

and their magnitudes

    |d~a/dt| = √(sin²t + cos²t) = 1,        |d~b/dτ| = √( 1 + τ²/(1 − τ²) ) = 1/√(1 − τ²).

The values of the field f along the two curves are f(~a(t)) = sin t and f(~b(τ)) = √(1 − τ²). We compute the two integrals and verify they give the same value:

    ∫₀^π f(~a(t)) |d~a/dt| dt = ∫₀^π sin t · 1 dt = −cos t |₀^π = 1 − (−1) = 2,

    ∫₋₁¹ f(~b(τ)) |d~b/dτ| dτ = ∫₋₁¹ √(1 − τ²) · ( 1/√(1 − τ²) ) dτ = ∫₋₁¹ 1 dτ = 2.

Note that not only do the two parametrisations travel along C with different speeds (~a has constant speed |d~a/dt| = 1, while ~b accelerates at the endpoints), but they also travel in opposite directions: ~a from right to left and ~b from left to right.
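Both integrals of Example 2.2 can also be approximated numerically; here is a minimal Python sketch (not part of the notes) using the midpoint rule, which conveniently never evaluates |d~b/dτ| at the singular endpoints τ = ±1:

```python
import math

def line_integral_scalar(f, a, da, t_i, t_f, n=200_000):
    """Midpoint rule for ∫ f(a(t)) |da/dt| dt (midpoints avoid endpoint singularities)."""
    h = (t_f - t_i) / n
    s = 0.0
    for k in range(n):
        t = t_i + (k + 0.5) * h
        x, y = a(t)
        dx, dy = da(t)
        s += f(x, y) * math.hypot(dx, dy) * h
    return s

f = lambda x, y: y
# ~a(t) = cos t ı̂ + sin t ȷ̂ on [0, π]
I_a = line_integral_scalar(f, lambda t: (math.cos(t), math.sin(t)),
                           lambda t: (-math.sin(t), math.cos(t)), 0.0, math.pi)
# ~b(τ) = τ ı̂ + √(1 − τ²) ȷ̂ on [−1, 1]
I_b = line_integral_scalar(f, lambda t: (t, math.sqrt(1 - t * t)),
                           lambda t: (1.0, -t / math.sqrt(1 - t * t)), -1.0, 1.0)
print(I_a, I_b)  # both ≈ 2.0
```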

Figure 17: The half circle C parametrised by the two curves ~a and ~b as in Example 2.2.
Exercise 2.3. Draw the following curves (helix, cubic, Archimedean spiral; compare with Figure 10)

    Γ_A defined by ~a_A(t) = cos t ı̂ + sin t ȷ̂ + t k̂,    0 ≤ t ≤ 2π,
    Γ_B defined by ~a_B(t) = t³ ı̂ + t k̂,                  0 ≤ t ≤ 1,
    Γ_C defined by ~a_C(t) = t cos t ı̂ + t sin t ȷ̂,       0 ≤ t ≤ 10π,

and compute ∫_{Γ_A} f_A ds, ∫_{Γ_B} f_B ds and ∫_{Γ_C} f_C ds for the following fields:

    f_A(~r) = x + y + z,        f_B(~r) = 3xz + z² + y³,        f_C = √( (x² + y²) / (1 + x² + y²) ).

(Despite looking nasty, the third integral is quite easy to compute; find the trick!)
Solution. These are the final results to verify your computations are correct:

    ∫_{Γ_A} f_A ds = 2√2 π²,        ∫_{Γ_B} f_B ds = 3/4,        ∫_{Γ_C} f_C ds = 50π².
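Two of the results of Exercise 2.3 can be cross-checked numerically (the check below covers the helix and the spiral, whose definitions are unambiguous; function names and the grid size are arbitrary). The sketch approximates (36) with the midpoint rule, estimating the speed |d~a/dt| by a centred difference:

```python
import math

def line_integral(f, a, t_i, t_f, n=200_000):
    """Midpoint rule for ∫ f(a(t)) |da/dt| dt, with the speed from a centred difference."""
    h = (t_f - t_i) / n
    s = 0.0
    for k in range(n):
        t = t_i + (k + 0.5) * h
        p0, p1 = a(t - h / 2), a(t + h / 2)
        speed = math.dist(p0, p1) / h       # ≈ |da/dt| at the midpoint
        s += f(*a(t)) * speed * h
    return s

helix = lambda t: (math.cos(t), math.sin(t), t)
f_A = lambda x, y, z: x + y + z
spiral = lambda t: (t * math.cos(t), t * math.sin(t))
f_C = lambda x, y: math.sqrt((x * x + y * y) / (1 + x * x + y * y))

print(line_integral(f_A, helix, 0, 2 * math.pi))     # ≈ 2√2 π² ≈ 27.916
print(line_integral(f_C, spiral, 0, 10 * math.pi))   # ≈ 50π² ≈ 493.48
```

For the spiral the integrand f_C(~a_C(t)) |d~a_C/dt| collapses to t, which is the "trick" the exercise hints at.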
Example 2.4 (Length of a graph). Formula (37) can be used to compute the length of the graph of a real function. Given a function g : [t_I, t_F] → R, its graph can be represented by the curve ~a(t) = t ı̂ + g(t) ȷ̂. Therefore, its length is

    length of the graph of g = ∫_{t_I}^{t_F} |d~a/dt| dt = ∫_{t_I}^{t_F} |ı̂ + g′(t) ȷ̂| dt = ∫_{t_I}^{t_F} √(1 + g′(t)²) dt.

Remark 2.5 (Line integrals of densities in physics). If f represents the density of a wire with shape Γ, line integrals can be used to compute its total mass ∫_Γ f ds, its centre of mass (or barycentre) ( (∫_Γ xf ds) ı̂ + (∫_Γ yf ds) ȷ̂ + (∫_Γ zf ds) k̂ ) divided by the total mass, and its moment of inertia about a certain axis. You can find some examples in Section 15.3 of the textbook.

Remark 2.6 (Is formula (36) a definition or a theorem?). In this class we take the line
integral formula (36) (and all the similar formulas for double, triple, surface and flux integrals
in the rest of Section 2) as a definition. However, these can be derived as consequences of the

more general theory of Riemann integration. This theory relies on the approximation of the
domain with simpler straight domains (e.g. many short segments approximating a curve, or
many little rectangles approximating a surface) and the approximation of the integrand with a
simpler function or field (a piecewise constant one). You have probably seen this procedure for
integrals of real functions of a single variable. This procedure makes sure that the properties
we intuitively expect from the integrals are satisfied, for example: the integral of a positive field is a positive number; the integral of the unit function gives the length, the area or the volume of the domain; a dilation of the domain by a factor λ > 0 corresponds to a multiplication of the integral by a factor λ if the domain is a curve, λ² if it is a surface, and λ³ if it is a volume.
2.1.2  Line integrals of vector fields

Consider a curve ~a : I → R³ as before, now together with a vector field ~F : R³ → R³ (we denote again by Γ := ~a(I) ⊂ R³ the image of the curve ~a). The line integral of ~F along the path Γ is

    ∫_Γ ~F · d~r = ∫_Γ ~F · d~a = ∫_{t_I}^{t_F} ~F(~a(t)) · (d~a/dt)(t) dt.    (38)

This is an ordinary one-dimensional integral (note that ~a, d~a/dt and ~F are vector-valued, but the integral is scalar since it contains a scalar product). If ~a is a loop, then the line integral of ~F along Γ is called contour integral or circulation and denoted with the symbol ∮_Γ ~F · d~r.
  Similarly to scalar fields, different parametrisations of Γ give the same integral, if the travel direction is the same. If the curve orientation is reversed, the sign of the integral changes (recall that the sign does not change for line integrals of scalar fields!):

    if ~b(t) = ~a(1 − t), then ~a([0, 1]) = ~b([0, 1]) = Γ and ∫_Γ ~F(~b) · d~b = − ∫_Γ ~F(~a) · d~a.
Example 2.8 demonstrates the invariance of the line integral and the sign change. Since, up to orientation, all the parametrisations of Γ give the same integral for a given field, we can denote the line integral itself by ∫_Γ ~F · d~r, forgetting the dependence on the parametrisation ~a. In this notation, Γ is an oriented path, namely a path with a given direction of travelling; if we want to consider the integral taken in the reverse direction we can simply write

    ∫_{−Γ} ~F · d~r = − ∫_Γ ~F · d~r.
The integral in (38) represents the integral along Γ of the component of ~F tangential to Γ itself. To see this, for a smooth curve ~a : [t_I, t_F] → R³ (under the assumption (d~a/dt)(t) ≠ ~0 for all t), we define the unit tangent vector t̂:

    t̂ = t₁ ı̂ + t₂ ȷ̂ + t₃ k̂ = (1/|d~a/dt|) d~a/dt = ( (da₁/dt) ı̂ + (da₂/dt) ȷ̂ + (da₃/dt) k̂ ) / √( (da₁/dt)² + (da₂/dt)² + (da₃/dt)² ).

This is a vector field of unit length defined on Γ and tangent to it. Indeed the line integral of the vector field ~F is equal to the line integral of the scalar field (~F · t̂), the projection of ~F on the tangent vector of Γ:

    ∫_Γ ~F · d~r = ∫_{t_I}^{t_F} ~F · (d~a/dt) dt = ∫_{t_I}^{t_F} (~F · t̂) |d~a/dt| dt = ∫_Γ (~F · t̂) ds,    (39)

where the first equality is (38) and the last is (36). This is an important relation between the main definitions of Sections 2.1.1 and 2.1.2.

Note. We define the line integrals of a scalar field f with respect to the elements dx, dy and dz (over the oriented path Γ = ~a([t_I, t_F])) as

    ∫_Γ f dx := ∫_Γ f ı̂ · d~r = ∫_{t_I}^{t_F} f(~a(t)) (da₁/dt)(t) dt,
    ∫_Γ f dy := ∫_Γ f ȷ̂ · d~r = ∫_{t_I}^{t_F} f(~a(t)) (da₂/dt)(t) dt,    (40)
    ∫_Γ f dz := ∫_Γ f k̂ · d~r = ∫_{t_I}^{t_F} f(~a(t)) (da₃/dt)(t) dt,

where the equalities follow from (38). Therefore, if ~F = F₁ ı̂ + F₂ ȷ̂ + F₃ k̂ is a vector field, we can expand its line integral as the integrals of its components:

    ∫_Γ ~F · d~r = ∫_Γ (F₁ ı̂ + F₂ ȷ̂ + F₃ k̂) · d~r = ∫_Γ F₁ dx + ∫_Γ F₂ dy + ∫_Γ F₃ dz.    (41)

This notation will often be used in Section 3. These are also the line integrals considered in the calculus methods module (see Section 9.2 of the MA1CAL notes 2012-13).

Remark 2.7 (Work). In physics, the line integral of a force field ~F along a curve ~a (representing the trajectory of a body) is the work done by ~F.
Example 2.8 (Integrals of a vector field along curves with common endpoints). Consider the following five curves, all connecting the points ~u = ~0 and ~w = ı̂ + ȷ̂:

    ~a_A(t) = t ı̂ + t ȷ̂,               0 ≤ t ≤ 1,
    ~a_B(t) = sin t ı̂ + sin t ȷ̂,       0 ≤ t ≤ π/2,
    ~a_C(t) = e⁻ᵗ ı̂ + e⁻ᵗ ȷ̂,           0 ≤ t < ∞,
    ~a_D(t) = t² ı̂ + t⁴ ȷ̂,             0 ≤ t ≤ 1,
    ~a_E(t) = { t ı̂,                   0 ≤ t ≤ 1,
              { ı̂ + (t − 1) ȷ̂,         1 < t ≤ 2.

The curves ~a_A and ~a_B run along the diagonal of the square S = {~r s.t. 0 ≤ x ≤ 1, 0 ≤ y ≤ 1} at different speeds; ~a_C travels along the same diagonal but in the opposite direction and in infinite time; ~a_D and ~a_E trace different curves (see Figure 18). Note that ~a_E is not smooth. Compute ∫_{Γ_n} ~F · d~a_n for ~F = 2y ı̂ − x ȷ̂ and n = A, B, C, D, E, where Γ_n is the trajectory described by ~a_n (thus Γ_A = Γ_B = Γ_C ≠ Γ_D ≠ Γ_E).
We begin by computing the total derivatives of the vector functions

    d~a_A/dt = ı̂ + ȷ̂,        d~a_B/dt = cos t ı̂ + cos t ȷ̂,        d~a_C/dt = −e⁻ᵗ ı̂ − e⁻ᵗ ȷ̂,
    d~a_D/dt = 2t ı̂ + 4t³ ȷ̂,        d~a_E/dt = { ı̂,    0 ≤ t ≤ 1,
                                               { ȷ̂,    1 < t ≤ 2.

Now we insert them in formula (38):

    ∫_{Γ_A} ~F · d~a_A = ∫₀¹ (2y(t) ı̂ − x(t) ȷ̂) · (ı̂ + ȷ̂) dt = ∫₀¹ (2t − t) dt = ∫₀¹ t dt = t²/2 |₀¹ = 1/2,

    ∫_{Γ_B} ~F · d~a_B = ∫₀^{π/2} (2 sin t ı̂ − sin t ȷ̂) · (cos t ı̂ + cos t ȷ̂) dt = ∫₀^{π/2} sin t cos t dt
                       = ∫₀^{π/2} (1/2) sin 2t dt = −(1/4) cos 2t |₀^{π/2} = 1/4 + 1/4 = 1/2,

    ∫_{Γ_C} ~F · d~a_C = ∫₀^∞ (2e⁻ᵗ ı̂ − e⁻ᵗ ȷ̂) · (−e⁻ᵗ ı̂ − e⁻ᵗ ȷ̂) dt = ∫₀^∞ (−e⁻²ᵗ) dt = (1/2) e⁻²ᵗ |₀^∞ = −1/2,

    ∫_{Γ_D} ~F · d~a_D = ∫₀¹ (2t⁴ ı̂ − t² ȷ̂) · (2t ı̂ + 4t³ ȷ̂) dt = ∫₀¹ (4t⁵ − 4t⁵) dt = 0,

    ∫_{Γ_E} ~F · d~a_E = ∫₀¹ (−t ȷ̂) · ı̂ dt + ∫₁² (2(t − 1) ı̂ − ȷ̂) · ȷ̂ dt = 0 + ∫₁² (−1) dt = −1.
As expected, we have

    ∫_{Γ_A} ~F · d~a_A = ∫_{Γ_B} ~F · d~a_B = − ∫_{Γ_C} ~F · d~a_C = 1/2 ≠ ∫_{Γ_D} ~F · d~a_D = 0 ≠ ∫_{Γ_E} ~F · d~a_E = −1,

since the first and the second are different parametrisations of the same path, the third is a parametrisation of the same path with reverse direction, while the last two correspond to different paths.
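All five values of Example 2.8 can be reproduced numerically. A Python sketch (not in the notes; the truncation of the infinite interval for ~a_C at t = 30 is an arbitrary choice, justified by the exponential decay of the integrand):

```python
import math

def work(F, a, t_i, t_f, n=100_000):
    """Midpoint-rule approximation of ∫ F(a(t)) · (da/dt) dt, using small chords for da."""
    h = (t_f - t_i) / n
    s = 0.0
    for k in range(n):
        t = t_i + (k + 0.5) * h
        x0, y0 = a(t - h / 2)
        x1, y1 = a(t + h / 2)
        Fx, Fy = F(*a(t))
        s += Fx * (x1 - x0) + Fy * (y1 - y0)
    return s

F = lambda x, y: (2 * y, -x)
a_A = lambda t: (t, t)
a_B = lambda t: (math.sin(t), math.sin(t))
a_C = lambda t: (math.exp(-t), math.exp(-t))
a_D = lambda t: (t ** 2, t ** 4)
a_E = lambda t: (t, 0.0) if t <= 1 else (1.0, t - 1)

print(work(F, a_A, 0, 1),            # ≈ 1/2
      work(F, a_B, 0, math.pi / 2),  # ≈ 1/2
      work(F, a_C, 0, 30),           # ≈ −1/2 (infinite interval truncated)
      work(F, a_D, 0, 1),            # ≈ 0
      work(F, a_E, 0, 2))            # ≈ −1
```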

Figure 18: The vector field ~F = 2y ı̂ − x ȷ̂ and the curves described in Example 2.8. Note that ~F is perpendicular to Γ_D at each point (thus ∫_{Γ_D} ~F · d~a_D = 0) and to Γ_E in its horizontal segment.
Exercise 2.9. Compute the line integral of ~G = x ı̂ + y² ȷ̂ along the five curves described in Example 2.8. (Hint: recall the derivative of sin³ t.)

Solution. You should obtain

    ∫_{Γ_A} ~G · d~a_A = ∫_{Γ_B} ~G · d~a_B = − ∫_{Γ_C} ~G · d~a_C = ∫_{Γ_D} ~G · d~a_D = ∫_{Γ_E} ~G · d~a_E = 5/6.

Remark 2.10 (Line integrals as limits of discrete sums). An intuitive explanation of the relation between formula (38) and its interpretation as integral of the tangential component of ~F can be obtained by considering a discrete approximation of the line integral. Consider the times 0 = t₀ < t₁ < ··· < t_{N−1} < t_N = 1 and the corresponding points ~a_i = ~a(t_i) ∈ Γ, for i = 0, …, N. Define the increments Δ~a_i := ~a_i − ~a_{i−1} and Δt_i := t_i − t_{i−1}. Then, the integral of the component of ~F tangential to Γ in the interval (t_{i−1}, t_i) is approximated by ~F(~a_i) · Δ~a_i (recall the projection seen in Section 1.1.1), as depicted in Figure 19, and the integral on Γ is approximated by the sum of these terms:

    ∫_Γ ~F(~a) · d~r ≈ Σ_{i=1}^{N} ~F(~a_i) · Δ~a_i = Σ_{i=1}^{N} ~F(~a_i) · (Δ~a_i/Δt_i) Δt_i.

In the limit N → ∞ (if the times t_i are well spaced and ~a is smooth), Δ~a_i/Δt_i tends to the total derivative of ~a and the sum of the Δt_i gives the integral in dt. Thus we recover formula (38). A similar reasoning justifies formula (36) for the line integral of scalar fields. This is an example of a Riemann sum.

Figure 19: A schematic representation of the approximation of the line integral of the vector field ~F along Γ using the discrete increments Δ~a_i (see Remark 2.10).


2.1.3  Independence of path and line integrals for conservative fields

In this section we show two results (which, at a first glance, might seem to be quite unrelated to one another) regarding line integrals on closed paths and line integrals of gradients, respectively. We then use both these results to prove Theorem 2.14, which characterises conservative fields using line integrals.⁸
Proposition 2.11. Consider a vector field ~F. The following two conditions are equivalent:

  (i) for all pairs of paths Γ_A and Γ_B with identical endpoints,

        ∫_{Γ_A} ~F · d~r = ∫_{Γ_B} ~F · d~r;

  (ii) for all closed paths Γ,

        ∮_Γ ~F · d~r = 0.

Proof. Assume condition (i) is satisfied. Given a closed path Γ, take any two points ~p, ~q ∈ Γ. They split Γ in two paths Γ_A and Γ_B with start point ~p and end point ~q, as in Figure 20. Assuming that Γ has the same orientation as Γ_A (and opposite to Γ_B), the circulation can be written as

    ∮_Γ ~F · d~r = ∫_{Γ_A} ~F · d~r + ∫_{−Γ_B} ~F · d~r = ∫_{Γ_A} ~F · d~r − ∫_{Γ_B} ~F · d~r = 0.

Similarly, if condition (ii) is satisfied, the two paths Γ_A and Γ_B can be combined in a closed path Γ = Γ_A − Γ_B (the concatenation obtained travelling first along Γ_A and then along Γ_B, the latter run in the opposite direction, see Footnote 8) and condition (i) follows.
Figure 20: Two paths Γ_A and Γ_B connecting the points ~p and ~q and defining a closed path Γ = Γ_A − Γ_B, as in Proposition 2.11.
We now consider the line integral of the gradient ∇φ of a smooth scalar field φ along a (piecewise smooth) curve ~a : [t_I, t_F] → R³. Using the chain rule, we obtain

    ∫_Γ ∇φ · d~r = ∫_{t_I}^{t_F} ∇φ(~a(t)) · (d~a/dt)(t) dt        (by (38))
                 = ∫_{t_I}^{t_F} (d/dt) φ(~a(t)) dt                 (by (34))
                 = φ(~a(t_F)) − φ(~a(t_I)),                         (42)

since the function t ↦ φ(~a(t)) is a real function of real variable, thus the usual fundamental theorem of calculus applies. In particular, equation (42) implies that line integrals of conservative fields are independent of the particular integration path, but depend
⁸ As we have done in Section 2.1.2, here we consider the path Γ to be an oriented path, i.e. a path equipped with a preferred direction of travel (orientation). We denote by −Γ the same path with the opposite orientation. If Γ_A and Γ_B are paths such that the final point of Γ_A coincides with the initial point of Γ_B, we denote by Γ_A + Γ_B their concatenation, i.e., the path corresponding to their union (obtained travelling first along Γ_A and then along Γ_B).


only on the initial and the final points of integration⁹. Thus, given a conservative field ~F and two points ~p and ~q in its domain, we can use the notation

    ∫_{~p}^{~q} ~F · d~r := ∫_Γ ~F · d~r,

where Γ is any path connecting ~p to ~q. (Note that this is not well-defined for non-conservative fields.)
  Equation (42) is often called the fundamental theorem of vector calculus, in analogy to the similar theorem known from one-dimensional calculus. This important result is the first relation between differential operators and vector integration we encounter; we will see several others in the next sections.
Example 2.12. From Example 2.8 and formula (42), we immediately deduce that ~F = 2y ı̂ − x ȷ̂ is not conservative, since integrals on different paths connecting the same endpoints give different values. The field ~G = x ı̂ + y² ȷ̂ of Exercise 2.9 might be conservative, as its integral has the same value along three different curves; however, in order to prove it is conservative we should proceed as we did in Example 1.42.
Exercise 2.13. Use formula (42), together with the results of Example 1.42 and Exercise 1.43, to compute

    ∫_Γ ~r · d~r        and        ∫_Γ (z ı̂ + x k̂) · d~r,

where Γ is any path connecting the point ~p = ȷ̂ + 2k̂ to the point ~q = ı̂ − k̂.

Solution. You should be able to obtain ∫_Γ ~r · d~r = −3/2 and ∫_Γ (z ı̂ + x k̂) · d~r = −1 without the need to compute any integral.
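The path independence behind Exercise 2.13 is easy to test numerically: integrating along any curve between the two endpoints must give the potential difference. A Python sketch (not in the notes; the endpoints ~p = (0, 1, 2) and ~q = (1, 0, −1) and the wiggly test path are illustrative assumptions):

```python
import math

def work3(F, a, t_i, t_f, n=100_000):
    """Midpoint-rule approximation of ∫ F(a(t)) · (da/dt) dt along a 3D curve."""
    h = (t_f - t_i) / n
    s = 0.0
    for k in range(n):
        t = t_i + (k + 0.5) * h
        p0, p1 = a(t - h / 2), a(t + h / 2)
        Fv = F(*a(t))
        s += sum(Fv[j] * (p1[j] - p0[j]) for j in range(3))
    return s

p, q = (0.0, 1.0, 2.0), (1.0, 0.0, -1.0)
# Any path from p to q works; here a straight segment with a wiggle added in y.
def path(t):
    x, y, z = (p[j] + t * (q[j] - p[j]) for j in range(3))
    return (x, y + 0.3 * math.sin(math.pi * t) ** 2, z)

# ~r = ∇(|~r|²/2): the integral is (|q|² − |p|²)/2 = (2 − 5)/2 = −3/2.
print(work3(lambda x, y, z: (x, y, z), path, 0, 1))      # ≈ −1.5
# z ı̂ + x k̂ = ∇(xz): the integral is x_q z_q − x_p z_p = −1 − 0 = −1.
print(work3(lambda x, y, z: (z, 0.0, x), path, 0, 1))    # ≈ −1.0
```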

We now combine Proposition 2.11 with the fundamental theorem of vector calculus (42) to obtain a characterisation of conservative fields. The proof of this theorem also provides a construction of the scalar potential.

Theorem 2.14. A continuous vector field ~F is conservative if and only if ∮_Γ ~F · d~r = 0 for all closed paths Γ.

Proof. If ~F is conservative, then ~F = ∇φ for some scalar potential φ. If the closed path Γ is parametrised by ~a : [t_I, t_F] → R³ with ~a(t_I) = ~a(t_F), then by the fundamental theorem of vector calculus (42) we have ∮_Γ ~F · d~r = φ(~a(t_F)) − φ(~a(t_I)) = 0.
  To prove the converse implication ("if"), assume ∮_Γ ~F · d~r = 0 for all closed paths Γ. Fix a point ~r₀ in the domain of ~F. Then, by Proposition 2.11, for all points ~r, the scalar field φ(~r) := ∫_{~r₀}^{~r} ~F · d~r is well-defined (it does not depend on the integration path). Consider the partial derivative

    ∂φ/∂x (~r) = lim_{h→0} (1/h) ( φ(~r + h ı̂) − φ(~r) ) = lim_{h→0} (1/h) ( ∫_{~r₀}^{~r + h ı̂} ~F · d~r − ∫_{~r₀}^{~r} ~F · d~r ).

Since in the integrals we can choose any path, in the first integral we consider the path passing through ~r and moving in the x direction along the segment with endpoints ~r and ~r + h ı̂. This segment is parametrised by ~a(t) = ~r + t ı̂ for 0 ≤ t ≤ h, thus

    ∂φ/∂x (~r) = lim_{h→0} (1/h) ∫_{~r}^{~r + h ı̂} ~F · d~r
               = lim_{h→0} (1/h) ∫₀ʰ ~F(~r + t ı̂) · ı̂ dt        (by (38))
               = lim_{h→0} (1/h) ∫₀ʰ F₁( (x + t) ı̂ + y ȷ̂ + z k̂ ) dt
               = F₁(~r)

⁹ Recall from Definition 1.40 that a vector field ~F is called conservative if ~F = ∇φ for some scalar field φ (the scalar potential).

(using, in the last equality, the continuity of F₁ and the integral version of the mean value theorem for real continuous functions: (1/(b − a)) ∫_a^b f(t) dt = f(ξ) for some ξ ∈ [a, b]). Similarly, ∂φ/∂y = F₂ and ∂φ/∂z = F₃, thus ∇φ = ~F. The vector field ~F is conservative and φ is a scalar potential.
We can summarise what we learned in this section and in Section 1.5 about conservative fields in the following scheme:

    ~F = ∇φ (conservative)  ⟺  ∫_{~p}^{~q} ~F · d~r is path independent  ⟺  ∮_Γ ~F · d~r = 0 for every closed path Γ  ⟹  ∇ × ~F = ~0 (irrotational)    (43)

(if the domain of ~F is simply connected, the converse of the last implication is also true, see Remark 1.44).
Remark 2.15 (An irrotational and non-conservative field). In Remark 1.45, we have seen that the field

    ~F = −( y / (x² + y²) ) ı̂ + ( x / (x² + y²) ) ȷ̂

is conservative in the half-space D = {~r ∈ R³ s.t. x > 0} and we have claimed (without justification) that no scalar potential can be defined in its largest domain of definition E = {~r ∈ R³ s.t. x² + y² > 0}. We can easily compute the circulation of ~F along the unit circle C = {~r ∈ R³ s.t. x² + y² = 1, z = 0} (parametrised by ~a(t) = cos t ı̂ + sin t ȷ̂ with 0 ≤ t ≤ 2π):

    ∮_C ~F · d~r = ∫₀^{2π} (−sin t ı̂ + cos t ȷ̂) · (−sin t ı̂ + cos t ȷ̂) dt = ∫₀^{2π} 1 dt = 2π.

Since this integral is not equal to zero, Theorem 2.14 proves that ~F is not conservative in the domain E. (We have completed the proof of the claim made in Remark 1.45.)
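The circulation 2π of Remark 2.15 can be confirmed numerically; a short Python check (not in the notes):

```python
import math

F = lambda x, y: (-y / (x * x + y * y), x / (x * x + y * y))

def circulation(F, n=100_000):
    """∮ F · d~r around the unit circle, via the midpoint rule on (cos t, sin t)."""
    h = 2 * math.pi / n
    s = 0.0
    for k in range(n):
        t = (k + 0.5) * h
        x, y = math.cos(t), math.sin(t)
        Fx, Fy = F(x, y)
        s += (Fx * (-math.sin(t)) + Fy * math.cos(t)) * h
    return s

print(circulation(F))  # ≈ 2π ≈ 6.28319, hence F is not conservative on x² + y² > 0
```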
Remark 2.16. From the results studied in this section we can learn the following idea. Any continuous real function f : (t_I, t_F) → R can be written as the derivative of another function F : (t_I, t_F) → R (the primitive of f, F(t) = ∫_{t*}^{t} f(s) ds for some t*), even if sometimes computing F may be very difficult. Thus, the integral of f = F′ can be evaluated as the difference of two values of F. Similarly, a scalar field g : D → R, D ⊂ R³, can be written as a partial derivative (with respect to a chosen direction) of another scalar field. On the other hand, only some vector fields can be written as gradients of a scalar field (the potential) and have their line integrals computed as differences of two evaluations of the potential. These are precisely the conservative fields, characterised by the conditions in box (43).

2.2  Multiple integrals

We have learned how to compute integrals of scalar and vector fields along curves. We now focus on integrals on (flat) surfaces and volumes.

2.2.1  Double integrals

For a two-dimensional domain D ⊂ R² and a two-dimensional scalar field f : D → R, we want to compute

    ∫∫_D f(x, y) dA,    (44)

namely the integral of f over D. The differential dA = dx dy represents the infinitesimal area element, which is the product of the infinitesimal length elements in the x and the y directions. Since the domain is two-dimensional, this is commonly called a double integral.
  For simplicity, we now consider only two-dimensional domains which can be written as

    D = { x ı̂ + y ȷ̂ ∈ R², s.t. x_L < x < x_R, a(x) < y < b(x) },    (45)

for two functions a(x) < b(x) defined in the interval (x_L, x_R). The domains that can be expressed in this way are called y-simple. These are the domains such that their intersection

Figure 21: The double integral ∫∫_D f dA over a domain D (a triangle in the picture) represents the (signed) volume delimited by the graph of the two-dimensional field f (i.e. a surface) and the xy-plane. ("Signed" means that the contribution given by the part of D in which f is negative is subtracted from the contribution from the part where f is positive.)

with any vertical line is either a segment or the empty set. A prototypical example is shown in Figure 22. Examples of non-y-simple domains are the C-shaped domain {~r s.t. 2y² − 1 < x < y²}, the doughnut {~r s.t. 1 < |~r| < 2} (draw them!) and any other domain containing a hole. These and other more complicated non-y-simple domains can often be decomposed in the disjoint union of two or more y-simple or x-simple (similarly defined) domains.
Figure 22: A y-simple two-dimensional domain D, bounded by the graphs of the functions a and b in the interval (x_L, x_R), and the infinitesimal area element dA = dx dy.
In order to compute the integral (44), we view it as an iterated or nested integral. In other words, for each value of x ∈ (x_L, x_R), we first consider the one-dimensional integral of f along the segment parallel to the y-axis contained in D, i.e.

    I_f(x) = ∫_{a(x)}^{b(x)} f(x, y) dy.

Note that both the integrand and the integration endpoints depend on x. The integral I_f is a function of x, but is independent of y; we integrate it along the direction x:

    ∫∫_D f(x, y) dA = ∫_{x_L}^{x_R} I_f(x) dx = ∫_{x_L}^{x_R} ( ∫_{a(x)}^{b(x)} f(x, y) dy ) dx.    (46)

Of course, the roles of x and y are exchanged in x-simple domains.


We consider as a concrete example the isosceles triangle

    E = { ~r ∈ R², s.t. 0 < y < 1 − |x| };    (47)

see Figure 23. In this case, x_L = −1, x_R = 1, a(x) = 0, b(x) = 1 − |x|. If we first integrate a field f along the y-axis and then along the x-axis, we obtain:

    ∫∫_E f(x, y) dA = ∫₋₁¹ ( ∫₀^{1−|x|} f(x, y) dy ) dx
                    = ∫₋₁⁰ ( ∫₀^{1+x} f(x, y) dy ) dx + ∫₀¹ ( ∫₀^{1−x} f(x, y) dy ) dx.

We can also consider the same triangle as an x-simple domain and first integrate along the x-axis and then along the y-axis (right plot in Figure 23):

    ∫∫_E f(x, y) dA = ∫₀¹ ( ∫_{y−1}^{1−y} f(x, y) dx ) dy.

Of course, the two integrals will deliver the same value.


Figure 23: The computation of the double integral ∫∫_E f(x, y) dA on the triangle E. The left plot represents ∫₋₁¹ ( ∫₀^{1−|x|} f dy ) dx, where the inner integral is taken along the shaded band (of infinitesimal width) and the outer integral corresponds to the movement of the band along the x-axis. The right plot represents the integration where the variables are taken in the opposite order, ∫₀¹ ( ∫_{y−1}^{1−y} f dx ) dy.

To have a more concrete example, we compute the integral of f = x − y² on the triangle E in both the ways described above:

    ∫∫_E (x − y²) dA = ∫₋₁⁰ ( ∫₀^{1+x} (x − y²) dy ) dx + ∫₀¹ ( ∫₀^{1−x} (x − y²) dy ) dx
        = ∫₋₁⁰ ( xy − y³/3 ) |_{y=0}^{1+x} dx + ∫₀¹ ( xy − y³/3 ) |_{y=0}^{1−x} dx
        = ∫₋₁⁰ ( −(1/3)x³ − 1/3 ) dx + ∫₀¹ ( (1/3)x³ − 2x² + 2x − 1/3 ) dx
        = ( −(1/12)x⁴ − (1/3)x ) |₋₁⁰ + ( (1/12)x⁴ − (2/3)x³ + x² − (1/3)x ) |₀¹
        = ( 0 − ( −1/12 + 1/3 ) ) + ( 1/12 − 2/3 + 1 − 1/3 )
        = −1/4 + 1/12 = −1/6;

    ∫∫_E (x − y²) dA = ∫₀¹ ( ∫_{y−1}^{1−y} (x − y²) dx ) dy
        = ∫₀¹ ( x²/2 − xy² ) |_{x=y−1}^{1−y} dy = ∫₀¹ 2(y³ − y²) dy = ( (1/2)y⁴ − (2/3)y³ ) |₀¹ = −1/6.
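As a sanity check of this computation (not part of the notes; grid size n is an arbitrary choice), the iterated integral over E can be approximated with a nested midpoint rule:

```python
def double_integral_E(f, n=800):
    """Iterated midpoint rule over E = {(x, y): -1 < x < 1, 0 < y < 1 - |x|}."""
    hx = 2.0 / n
    total = 0.0
    for i in range(n):
        x = -1.0 + (i + 0.5) * hx
        top = 1.0 - abs(x)            # y ranges over (0, 1 - |x|)
        hy = top / n
        inner = sum(f(x, (j + 0.5) * hy) for j in range(n)) * hy
        total += inner * hx
    return total

print(double_integral_E(lambda x, y: x - y * y))  # ≈ −1/6 ≈ −0.16667
```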

Exercise 2.17. Verify the following identities (drawing a sketch of the domains might be helpful):

    ∫∫_R e^{3y} sin x dx dy = (2/3)(e⁶ − e³),    where R = (0, π) × (1, 2) = { x ı̂ + y ȷ̂ s.t. 0 < x < π, 1 < y < 2 },

    ∫∫_Q y dx dy = 5/3,    where Q is the triangle with vertices ~0, ȷ̂ and 5ı̂ + ȷ̂,

    ∫∫_S cos x dx dy = 1/2,    where S = { x ı̂ + y ȷ̂ s.t. 0 < x < π/2, 0 < y < sin x }.

2.2.2  Change of variables

We learned in first-year calculus how to compute one-dimensional integrals by substitution of the integration variable, for example

    ∫₀¹ √(1 + eˣ) eˣ dx  = [ξ = 1 + eˣ, dξ = eˣ dx] =  ∫₂^{1+e} √ξ dξ = (2/3) ξ^{3/2} |₂^{1+e} = (2/3)(1 + e)^{3/2} − (4√2)/3 ≈ 2.894.
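The substitution above can be checked numerically; a quick Python sketch (not in the notes) evaluates both sides of the substitution and the closed form:

```python
import math

def midpoint(f, lo, hi, n=100_000):
    """Midpoint-rule approximation of a one-dimensional integral."""
    h = (hi - lo) / n
    return sum(f(lo + (k + 0.5) * h) for k in range(n)) * h

lhs = midpoint(lambda x: math.sqrt(1 + math.exp(x)) * math.exp(x), 0.0, 1.0)
rhs = midpoint(lambda xi: math.sqrt(xi), 2.0, 1.0 + math.e)
closed_form = (2 / 3) * (1 + math.e) ** 1.5 - 4 * math.sqrt(2) / 3
print(lhs, rhs, closed_form)  # all ≈ 2.894
```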

How do we extend this technique to double integrals? In this one-dimensional example, the interval of integration x ∈ (0, 1) is mapped¹⁰ into a new interval (2, 1 + e) by the change of variable T : x ↦ ξ = 1 + eˣ, which is designed to make the function to be integrated simpler (better: easier to integrate). The factor dx/dξ takes care of the stretching of the variable. In two dimensions, we can use a change of variables not only to make the integrand simpler, but also to obtain a domain of integration with a simpler shape (e.g. a rectangle instead of a curvilinear domain).
  Consider a planar domain D ⊂ R² on which a two-dimensional, injective vector field ~T : D → R² is defined. We can interpret this field as a change of variables, i.e. a deformation of the domain D into a new domain ~T(D). We will informally call ~T a transformation or mapping of the domain D. Departing from the usual notation, we denote by ξ(x, y) and η(x, y) ("xi" and "eta") the components of ~T. We understand ξ and η as the Cartesian coordinates describing the transformed domain: as D is a subset of the plane described by x and y, so ~T(D) is a subset of the plane described by ξ and η. We denote by x(ξ, η) and y(ξ, η) the components of the inverse transformation ~T⁻¹ from the ξη-plane to the xy-plane¹¹:

    ~T : D → ~T(D),      x ı̂ + y ȷ̂ ↦ ξ(x, y) ξ̂ + η(x, y) η̂,
    ~T⁻¹ : ~T(D) → D,    ξ ξ̂ + η η̂ ↦ x(ξ, η) ı̂ + y(ξ, η) ȷ̂,

or, with a different notation,

    (x, y) ↦ ( ξ(x, y), η(x, y) ),        (ξ, η) ↦ ( x(ξ, η), y(ξ, η) ).

For example, we consider the following (affine) change of variables¹² ~T:

    ξ(x, y) = (−x − y + 1)/2,        η(x, y) = (x − y + 1)/2,    (48)

whose inverse ~T⁻¹ corresponds to

    x(ξ, η) = −ξ + η,        y(ξ, η) = −ξ − η + 1.

(The transformation ~T associated to ξ and η is a combination of a translation of length 1 in the negative y direction, a counter-clockwise rotation of angle 3π/4, and a shrinking by a factor √2 in every direction.) The triangle E in (47) in the xy-plane is mapped by ~T into the triangle with vertices ~0, ξ̂ and η̂ in the ξη-plane, as shown in Figure 24. Other examples of changes of coordinates are displayed in Figure 25.
A transformation ~T : (x, y) ↦ (ξ, η) warps and stretches the plane; if we want to compute an integral in the transformed variables we need to take this into account. The infinitesimal area element dA = dx dy is modified by the transformation in the ξη-plane.

¹⁰ In case you are not familiar with the mathematical use of the verb "to map", it is worth recalling it. Given a function T defined in the set A and taking values in the set B (i.e. T : A → B), we say that T maps an element a ∈ A into b ∈ B if T(a) = b, and maps the subset C ⊂ A into T(C) ⊂ B. Thus, a function is often called a map or a mapping; we will use these words to denote the functions related to changes of variables or operators. (If you think about it, a map in the non-mathematical meaning is nothing else than a function that associates to every point in a piece of paper a point on the surface of the Earth.)
¹¹ Here and in the following, we use the vectors ξ̂ and η̂. They can be understood either as the unit vectors of the canonical basis of the ξη-plane (exactly in the same role of ı̂ and ȷ̂ in the xy-plane) or as vector fields with unit length, defined in D, pointing in the direction of increase of ξ and η, respectively. The first interpretation is probably easier to use at this stage, while the second will be useful in Section 2.3.
¹² This change of variables is affine, meaning that its components ξ and η are polynomials of degree one in x and y. They translate, rotate, dilate and stretch the coordinates but do not introduce curvature: straight lines are mapped into straight lines.


Figure 24: The triangle E in the xy-plane and the triangle ~T(E) in the ξη-plane. Along the edges are shown the equations of the corresponding lines (the edges y = 0, x + y = 1 and −x + y = 1 of E are mapped to the edges ξ + η = 1, ξ = 0 and η = 0 of ~T(E), respectively).

Figure 25: The unit square 0 < x, y < 1 (upper left) under three different mappings: ξ = x, η = y + (1/5) sin(2πx) (upper right); ξ = (1 + x) cos y, η = (1 + x) sin y (lower left); ξ = e^{xy} cos(2πy), η = e^{xy} sin(2πy) (lower right).
We recall that the Jacobian matrices of ~T and ~T⁻¹ are 2×2 matrix fields containing their first-order partial derivatives (see Section 1.3.2). We denote their determinants by

    ∂(ξ, η)/∂(x, y) := det(J~T) = (∂ξ/∂x)(∂η/∂y) − (∂ξ/∂y)(∂η/∂x),
    ∂(x, y)/∂(ξ, η) := det(J(~T⁻¹)) = (∂x/∂ξ)(∂y/∂η) − (∂x/∂η)(∂y/∂ξ).    (49)

These are called Jacobian determinants¹³; their absolute values are exactly the factors needed to compute double integrals under the change of coordinates ~T:

    ∫∫_D f(x, y) dx dy = ∫∫_{~T(D)} f( x(ξ, η), y(ξ, η) ) |∂(x, y)/∂(ξ, η)| dξ dη,    (50)

for any scalar field f : D → R. This is the fundamental formula for the change of variables in a double integral. In other words, we can say that the infinitesimal surface elements in the xy-plane and in the ξη-plane are related to each other by the formulas

    dx dy = |∂(x, y)/∂(ξ, η)| dξ dη,        dξ dη = |∂(ξ, η)/∂(x, y)| dx dy.

In general, we have

    ∂(ξ, η)/∂(x, y) = 1 / ( ∂(x, y)/∂(ξ, η) ).    (51)

¹³ In some books, when used as a noun, the word "Jacobian" stands for the Jacobian determinant, as opposed to the Jacobian matrix. Again, this can be a source of ambiguity.


Formula (51) is often useful when the partial derivatives of only one of the transformations ~T and ~T⁻¹ are easy to compute. In other words, if you need ∂(x, y)/∂(ξ, η) for computing an integral via a change of variables as in (50), but J~T is easier to obtain than J(~T⁻¹), you can compute ∂(ξ, η)/∂(x, y) and then apply (51).

Remark 2.18. Note that here we are implicitly assuming that the Jacobian matrices of ~T and ~T⁻¹ are never singular, i.e. their determinants do not vanish at any point of the domain, otherwise equation (51) would make no sense. Under this assumption, it is possible to prove that J(~T⁻¹) = (J~T)⁻¹, namely the Jacobian matrix of the inverse transformation is equal to the inverse matrix of the Jacobian of ~T itself; this is part of the Inverse Function Theorem you might study in a future class.
We return to the affine transformation (48). Its Jacobian matrices are (recall (14))

    J~T = [ −1/2   −1/2 ]        J(~T⁻¹) = [ −1    1 ]
          [  1/2   −1/2 ],                 [ −1   −1 ].

Note that this case is quite special: since the transformations are affine (namely polynomials of degree one), their Jacobians are constant in the whole plane R². From this, we compute the Jacobian determinants

    ∂(ξ, η)/∂(x, y) = (−1/2)(−1/2) − (−1/2)(1/2) = 1/2,        ∂(x, y)/∂(ξ, η) = (−1)(−1) − 1·(−1) = 2.    (52)

Note that, as expected, ∂(x, y)/∂(ξ, η) = 2 is the ratio between the areas of E and ~T(E) (since in this simple case the Jacobian of ~T and its determinant are constant in E). Now we can easily integrate on the triangle E in (47) the scalar field f = (1 − x − y)⁵, for example, using the change of variables (48) which maps E into ~T(E) as in Figure 24:

    ∫∫_E f(x, y) dx dy = ∫∫_{~T(E)} f(−ξ + η, −ξ − η + 1) |∂(x, y)/∂(ξ, η)| dξ dη        (by (48), (50))
                       = ∫∫_{~T(E)} ( 1 − (−ξ + η) − (−ξ − η + 1) )⁵ · 2 dξ dη            (by (52))
                       = ∫∫_{~T(E)} (2ξ)⁵ · 2 dξ dη
                       = 64 ∫₀¹ ( ∫₀^{1−ξ} 1 dη ) ξ⁵ dξ
                       = 64 ∫₀¹ (ξ⁵ − ξ⁶) dξ = 64 ( 1/6 − 1/7 ) = 64/42 ≈ 1.524.

2
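To gain confidence in the value $64/42$, here is a small midpoint-rule check (an addition, not from the notes) of the one-dimensional integral $-64\int_{-1}^{0}\eta^5(1+\eta)\,\mathrm{d}\eta$ that the example reduces to after integrating in $\xi$; the limits and the integrand follow the reconstruction above.

```python
# Midpoint rule for -64 * integral of eta^5 * (1 + eta) over (-1, 0).
n = 20000
h = 1.0 / n
total = 0.0
for i in range(n):
    eta = -1.0 + (i + 0.5) * h
    total += -64.0 * eta**5 * (1.0 + eta) * h
# total should be close to 64/42 ~ 1.5238
```

The midpoint rule has error of order $h^2$, so with $n = 20000$ the agreement is far better than the three digits quoted in the example.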
Example 2.19 (Areas of curvilinear domains). The area of a domain $D\subset\mathbb{R}^2$ can be computed by integrating on it the constant one: $\mathrm{Area}(D) = \iint_D 1\,\mathrm{d}x\mathrm{d}y$. If $D$ is $y$-simple, one can use the iterated integral (46). Otherwise, if $D$ is complicated but can be mapped into a simpler shape, the change of variable formula (50) can be used to compute its area.
We now want to use formula (50) to compute the areas of the three curvilinear domains plotted in Figure 25. Each of them is obtained from the unit square $S = \{0 < x < 1,\ 0 < y < 1\}$ (upper left plot) by the transformations listed in the figure caption. We denote the domains by $D_{UR}$, $D_{LL}$, $D_{LR}$ (for upper right, lower left and lower right plot) to distinguish them from one another, and use a similar notation for the change of variable transformations and coordinates (e.g. $\vec T_{UR}\colon S\to D_{UR}$). In the first case (upper right plot), the transformation, its Jacobian matrix and determinant are
\[
\xi_{UR} = x, \qquad \eta_{UR} = y + \frac15\sin(2\pi x), \qquad
J\vec T_{UR} = \begin{pmatrix} 1 & 0 \\[2pt] \frac{2\pi}{5}\cos(2\pi x) & 1 \end{pmatrix}, \qquad
\frac{\partial(\xi_{UR},\eta_{UR})}{\partial(x,y)} = 1.
\]
From this we obtain
\[
\mathrm{Area}(D_{UR}) = \iint_{D_{UR}} \mathrm{d}\xi_{UR}\,\mathrm{d}\eta_{UR}
= \iint_{\vec T_{UR}^{-1}(D_{UR})} \Big|\frac{\partial(\xi_{UR},\eta_{UR})}{\partial(x,y)}\Big|\,\mathrm{d}x\mathrm{d}y
= \iint_S 1\,\mathrm{d}x\mathrm{d}y = 1,
\]
(which we could have guessed from the picture!). For the second picture we have

\[
\xi_{LL} = (1+x)\cos y, \qquad \eta_{LL} = (1+x)\sin y, \qquad
J\vec T_{LL} = \begin{pmatrix} \cos y & -(1+x)\sin y \\ \sin y & (1+x)\cos y \end{pmatrix}, \qquad
\frac{\partial(\xi_{LL},\eta_{LL})}{\partial(x,y)} = 1+x,
\]
which leads to
\[
\mathrm{Area}(D_{LL}) = \iint_{D_{LL}} \mathrm{d}\xi_{LL}\,\mathrm{d}\eta_{LL}
= \iint_S \Big|\frac{\partial(\xi_{LL},\eta_{LL})}{\partial(x,y)}\Big|\,\mathrm{d}x\mathrm{d}y
= \int_0^1\!\!\int_0^1 (1+x)\,\mathrm{d}y\,\mathrm{d}x = \int_0^1 (1+x)\,\mathrm{d}x = \frac32.
\]

In the last case, the change of variable reads
\[
\xi_{LR} = e^{xy}\cos(2\pi y), \qquad \eta_{LR} = e^{xy}\sin(2\pi y).
\]
As an exercise, you can verify that the area of $D_{LR}$ is equal to $\frac\pi2(e^2-3)$ (if you do it in the right way, i.e. with no mistakes, the computation is actually very easy despite looking nasty).
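The three areas can be cross-checked numerically by integrating the absolute Jacobian determinants over the unit square (a sketch, not part of the notes; the Jacobian $2\pi y\,e^{2xy}$ of the lower-right map is a hand computation based on the transformation as reconstructed above).

```python
import math

def area_via_jacobian(jac, n=400):
    """Midpoint rule for the integral of |jac(x, y)| over the unit square S."""
    h = 1.0 / n
    s = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        for j in range(n):
            y = (j + 0.5) * h
            s += abs(jac(x, y)) * h * h
    return s

# Lower-left domain: Jacobian determinant 1 + x, exact area 3/2.
a_ll = area_via_jacobian(lambda x, y: 1.0 + x)
# Lower-right domain: Jacobian determinant 2*pi*y*exp(2*x*y),
# which integrates to pi*(e^2 - 3)/2.
a_lr = area_via_jacobian(lambda x, y: 2.0 * math.pi * y * math.exp(2.0 * x * y))
```

Swapping in a different transformation only requires changing the lambda passed to `area_via_jacobian`.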
Remark 2.20 (Intuitive justification of the change of variable formula). Where does the
change of variable formula (50) come from? Why are the infinitesimal surface elements
related to the Jacobian determinant?
Consider the square $S$ with vertices
\[
\vec p_{SW} = x\hat\imath + y\hat\jmath,\quad
\vec p_{SE} = (x+h)\hat\imath + y\hat\jmath,\quad
\vec p_{NW} = x\hat\imath + (y+h)\hat\jmath,\quad
\vec p_{NE} = (x+h)\hat\imath + (y+h)\hat\jmath,
\]
where $h > 0$ is very small. Of course, it has area equal to $|\vec p_{SE}-\vec p_{SW}|\,|\vec p_{NW}-\vec p_{SW}| = h^2$.
Under a transformation $(x,y)\mapsto(\xi,\eta)$, the first three vertices are mapped into
\[
\begin{aligned}
\vec q_{SW} &= \xi(x,y)\,\hat\imath + \eta(x,y)\,\hat\jmath, \\
\vec q_{SE} &= \xi(x+h,y)\,\hat\imath + \eta(x+h,y)\,\hat\jmath
\approx \Big(\xi(x,y) + h\frac{\partial\xi}{\partial x}(x,y)\Big)\hat\imath + \Big(\eta(x,y) + h\frac{\partial\eta}{\partial x}(x,y)\Big)\hat\jmath, \\
\vec q_{NW} &= \xi(x,y+h)\,\hat\imath + \eta(x,y+h)\,\hat\jmath
\approx \Big(\xi(x,y) + h\frac{\partial\xi}{\partial y}(x,y)\Big)\hat\imath + \Big(\eta(x,y) + h\frac{\partial\eta}{\partial y}(x,y)\Big)\hat\jmath,
\end{aligned}
\]
where we have approximated the values of the fields $\xi$ and $\eta$ with their Taylor expansions centred at $x\hat\imath + y\hat\jmath$ (which is a good approximation if $h$ is small).
We then approximate the image of $S$ in the $\xi\eta$-plane with the parallelogram with edges $[\vec q_{SE},\vec q_{SW}]$ and $[\vec q_{NW},\vec q_{SW}]$, see Figure 26. Using the geometric characterisation of the vector product described in Section 1.1.2, we see that this parallelogram has area equal to the magnitude of the vector product of two edges:
\[
\begin{aligned}
\big|(\vec q_{SE}-\vec q_{SW})\times(\vec q_{NW}-\vec q_{SW})\big|
&= \Big|\Big(h\frac{\partial\xi}{\partial x}\hat\imath + h\frac{\partial\eta}{\partial x}\hat\jmath\Big)\times\Big(h\frac{\partial\xi}{\partial y}\hat\imath + h\frac{\partial\eta}{\partial y}\hat\jmath\Big)\Big| \\
&= h^2\Big|\frac{\partial\xi}{\partial x}\frac{\partial\eta}{\partial y} - \frac{\partial\eta}{\partial x}\frac{\partial\xi}{\partial y}\Big|
\overset{(49)}{=} h^2\Big|\frac{\partial(\xi,\eta)}{\partial(x,y)}\Big|.
\end{aligned}
\]
Therefore, the Jacobian determinant is the ratio between the area of the parallelogram approximating the image of $S$ and the area of the square $S$ itself. In the limit $h\to 0$, the approximation error vanishes and the Jacobian determinant is thus equal to the ratio between the infinitesimal areas in the $xy$-plane and in the $\xi\eta$-plane.
When we perform the change of variables $(x,y)\mapsto(\xi,\eta)$ inside an integral, in order to preserve the value of the integral itself, we need to replace the contribution given by every infinitesimally small square $S$ with the contribution given by its transform, which is the same as multiplying by $\big|\frac{\partial(\xi,\eta)}{\partial(x,y)}\big|$.
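The argument of the remark is easy to test numerically (an illustration, not from the notes): map a tiny square through one of the transformations above and compare the parallelogram's area, obtained from the 2D cross product of the image edges, with $h^2$.

```python
import math

# Map a small square through (x, y) |-> ((1+x)cos y, (1+x)sin y) and compare
# the area of the parallelogram spanned by the images of its edges with h^2;
# the ratio should approach the Jacobian determinant 1 + x.
def xi(x, y):
    return (1.0 + x) * math.cos(y)

def eta(x, y):
    return (1.0 + x) * math.sin(y)

x0, y0, h = 0.3, 0.8, 1e-4
q_sw = (xi(x0, y0), eta(x0, y0))
q_se = (xi(x0 + h, y0), eta(x0 + h, y0))
q_nw = (xi(x0, y0 + h), eta(x0, y0 + h))
# 2D cross product of the two edge vectors = (signed) parallelogram area.
par_area = ((q_se[0] - q_sw[0]) * (q_nw[1] - q_sw[1])
            - (q_se[1] - q_sw[1]) * (q_nw[0] - q_sw[0]))
ratio = par_area / h**2   # ~ Jacobian determinant 1 + x0 = 1.3
```

Shrinking `h` further makes the ratio converge to the Jacobian determinant at $(x_0, y_0)$, exactly as the limit argument in the remark claims.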
Example 2.21 (Change of variables for y-simple domains). The change of variables formula (50) can be immediately applied for computing integrals in y-simple domains, as an
alternative to (46). Indeed, the transformation with components
\[
\xi(x,y) = x, \qquad \eta(x,y) = \frac{y - a(x)}{b(x) - a(x)}
\]

Figure 26: As described in Remark 2.20, the square $S$ of area $h^2$ in the $xy$-plane is mapped into a curvilinear shape in the $\xi\eta$-plane. The Jacobian determinant (multiplied by $h^2$) measures the area of the dashed parallelogram in the $\xi\eta$-plane, which approximates the area of the image of $S$.
maps the domain $D$ of (45) into the rectangle
\[
(x_L, x_R)\times(0,1) = \big\{x\hat\imath + y\hat\jmath \ \text{ s.t. } x_L < x < x_R,\ 0 < y < 1\big\}.
\]
The inverse transformation is
\[
x(\xi,\eta) = \xi, \qquad y(\xi,\eta) = \eta\big(b(\xi) - a(\xi)\big) + a(\xi),
\]
whose Jacobian determinant reads
\[
\frac{\partial(x,y)}{\partial(\xi,\eta)} = \det\begin{pmatrix} 1 & 0 \\ \eta\,b'(\xi) + (1-\eta)\,a'(\xi) & b(\xi) - a(\xi) \end{pmatrix} = b(\xi) - a(\xi).
\]

For instance, we compute the integral of $f(x,y) = 1/y$ in the "smile" domain (see left plot in Figure 27) passing through the $\xi\eta$-plane:
\[
D = \big\{x\hat\imath + y\hat\jmath \in \mathbb{R}^2 \text{ s.t. } -1 < x < 1,\ 2x^2 - 2 < y < x^2 - 1\big\};
\]
(note that $a(x) = 2x^2-2$, $b(x) = x^2-1$, $x_L = -1$ and $x_R = 1$, in the notation of (45)).


Using the change of variables formula (50), we have
\[
\begin{aligned}
\iint_D f(x,y)\,\mathrm{d}x\mathrm{d}y
&= \iint_{(-1,1)\times(0,1)} f\big(x(\xi,\eta), y(\xi,\eta)\big)\,\Big|\frac{\partial(x,y)}{\partial(\xi,\eta)}\Big|\,\mathrm{d}\xi\mathrm{d}\eta \\
&= \iint_{(-1,1)\times(0,1)} f\Big(\xi,\ \eta\big(b(\xi)-a(\xi)\big)+a(\xi)\Big)\big(b(\xi)-a(\xi)\big)\,\mathrm{d}\xi\mathrm{d}\eta \\
&= \iint_{(-1,1)\times(0,1)} f\big(\xi,\ \eta(1-\xi^2)+2\xi^2-2\big)\,(1-\xi^2)\,\mathrm{d}\xi\mathrm{d}\eta \\
&= \iint_{(-1,1)\times(0,1)} \frac{1-\xi^2}{(1-\xi^2)(\eta-2)}\,\mathrm{d}\xi\mathrm{d}\eta
= \int_{-1}^{1}\mathrm{d}\xi\int_0^1 \frac{1}{\eta-2}\,\mathrm{d}\eta \\
&= 2\big[\log(2-\eta)\big]_0^1 = -2\log 2 \approx -1.386.
\end{aligned}
\]
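The same value can be checked directly with the iterated integral (46), without any change of variables (an addition, not from the notes): the inner integral $\int_{a(x)}^{b(x)}\mathrm{d}y/y$ equals $\log\frac12$ for every $x$, so a midpoint rule should reproduce $-2\log 2$.

```python
import math

# Midpoint rule for the iterated integral of 1/y over the smile domain:
# x in (-1, 1), y between a(x) = 2x^2 - 2 and b(x) = x^2 - 1 (both negative).
n = 500
hx = 2.0 / n
total = 0.0
for i in range(n):
    x = -1.0 + (i + 0.5) * hx
    a, b = 2.0 * x * x - 2.0, x * x - 1.0
    hy = (b - a) / n
    for j in range(n):
        y = a + (j + 0.5) * hy
        total += (1.0 / y) * hy * hx
# total ~ -2*log(2) ~ -1.386
```

Midpoints never hit $y = 0$, so the $1/y$ singularity at the two tips of the domain causes no trouble, in agreement with the caption of Figure 27.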

Example 2.22 (Integrals in domains bounded by level sets). In order to apply the change
of variable formula to the computation of areas of domains defined by four curvilinear paths,
it is sometimes useful to express the paths as level sets of some field. For instance, we want
to compute the area of the domain D bounded by the parabolas
\[
y = x^2, \qquad y = 2x^2, \qquad x = y^2, \qquad x = 3y^2
\]
(see the sketch in Figure 28). An important observation is to note that these parabolas can be written as level curves for the two scalar fields
\[
\xi(x,y) = \frac{x^2}{y} \qquad\text{and}\qquad \eta(x,y) = \frac{y^2}{x}.
\]


Figure 27: The smile domain described in Example 2.21 (left plot) and the field $f = 1/y$ (right plot). (Note that the values attained by the field approach $-\infty$ at the two tips of the domain; however, the value of the integral $\iint_D f\,\mathrm{d}x\mathrm{d}y$ is bounded since the tips are thin.)
In other words, $\xi$ and $\eta$ constitute a change of variables that transforms $D$ into the rectangle $\frac12 < \xi < 1$, $\frac13 < \eta < 1$ in the $\xi\eta$-plane. The corresponding Jacobian determinants are
\[
\frac{\partial(\xi,\eta)}{\partial(x,y)} = \det\begin{pmatrix} \dfrac{2x}{y} & -\dfrac{x^2}{y^2} \\[6pt] -\dfrac{y^2}{x^2} & \dfrac{2y}{x} \end{pmatrix} = 4 - 1 = 3,
\qquad
\frac{\partial(x,y)}{\partial(\xi,\eta)} = \frac13,
\]
from which we compute the area of $D$:
\[
\iint_D \mathrm{d}x\mathrm{d}y = \int_{1/2}^{1}\!\!\int_{1/3}^{1} \frac13\,\mathrm{d}\eta\,\mathrm{d}\xi = \frac13\cdot\frac12\cdot\frac23 = \frac19.
\]

(More examples of this kind are in the MA1CAL notes.)


Figure 28: The curvilinear quadrilateral bounded by the four parabolas $y = x^2$, $y = 2x^2$, $x = y^2$, $x = 3y^2$, as described in Example 2.22.
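Example 2.22 can also be double-checked with a brute-force grid count (an addition, not from the notes): classify the midpoints of a fine grid by the two level-set inequalities and compare the covered area with $1/9$.

```python
# Count midpoints of a fine grid on (0, 1.5)^2 (a box containing the domain)
# satisfying 1/2 < x^2/y < 1 and 1/3 < y^2/x < 1; the covered area
# approximates the area of the curvilinear quadrilateral, which is 1/9.
n = 1500
h = 1.5 / n
inside = 0
for i in range(n):
    x = (i + 0.5) * h
    for j in range(n):
        y = (j + 0.5) * h
        if 0.5 < x * x / y < 1.0 and 1.0 / 3.0 < y * y / x < 1.0:
            inside += 1
area = inside * h * h   # ~ 1/9 ~ 0.111
```

The indicator-function approach converges only at rate $O(h)$, so this is a coarse but completely independent confirmation.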

2.2.3 Triple integrals

All that we said about double integrals immediately extends to triple integrals, namely integrals on three-dimensional domains.
As before, triple integrals can be computed as iterated integrals. For instance, consider a domain $D\subset\mathbb{R}^3$ defined as
\[
D = \big\{x\hat\imath + y\hat\jmath + z\hat k \ \text{ s.t. } x_L < x < x_R,\ a(x) < y < b(x),\ \alpha(x,y) < z < \beta(x,y)\big\}
\]
for two real numbers $x_L$ and $x_R$, two real functions $a$ and $b$, and two two-dimensional scalar fields $\alpha$ and $\beta$. Then, the integral in $D$ of a scalar field $f$ can be written as
\[
\iiint_D f(x,y,z)\,\mathrm{d}x\mathrm{d}y\mathrm{d}z
= \int_{x_L}^{x_R}\bigg(\int_{a(x)}^{b(x)}\Big(\int_{\alpha(x,y)}^{\beta(x,y)} f(x,y,z)\,\mathrm{d}z\Big)\mathrm{d}y\bigg)\mathrm{d}x. \tag{53}
\]


The infinitesimal volume element is defined as $\mathrm{d}V = \mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z$.
For example, the tetrahedron $B$ with vertices $\vec 0$, $\hat\imath$, $\hat\jmath$ and $\hat k$ depicted in Figure 29 can be written as
\[
B = \big\{x\hat\imath + y\hat\jmath + z\hat k \ \text{ s.t. } 0 < x < 1,\ 0 < y < 1-x,\ 0 < z < 1-x-y\big\}.
\]
In this example, $x_L = 0$, $x_R = 1$, $a(x) = 0$, $b(x) = 1-x$, $\alpha(x,y) = 0$ and $\beta(x,y) = 1-x-y$.

Figure 29: The tetrahedron with vertices $\vec 0$, $\hat\imath$, $\hat\jmath$ and $\hat k$.
The change of variable formula (50) extends to three dimensions in a straightforward fashion. If the transformation $\vec T\colon D\to\vec T(D)$ has components
\[
\xi(x,y,z), \qquad \eta(x,y,z), \qquad \zeta(x,y,z),
\]
and its Jacobian determinant is
\[
\frac{\partial(\xi,\eta,\zeta)}{\partial(x,y,z)} = \det(J\vec T) = \det\begin{pmatrix}
\dfrac{\partial\xi}{\partial x} & \dfrac{\partial\xi}{\partial y} & \dfrac{\partial\xi}{\partial z} \\[6pt]
\dfrac{\partial\eta}{\partial x} & \dfrac{\partial\eta}{\partial y} & \dfrac{\partial\eta}{\partial z} \\[6pt]
\dfrac{\partial\zeta}{\partial x} & \dfrac{\partial\zeta}{\partial y} & \dfrac{\partial\zeta}{\partial z}
\end{pmatrix},
\]
then the change of variable formula reads
\[
\iiint_D f(x,y,z)\,\mathrm{d}x\mathrm{d}y\mathrm{d}z
= \iiint_{\vec T(D)} f\big(x(\xi,\eta,\zeta),\,y(\xi,\eta,\zeta),\,z(\xi,\eta,\zeta)\big)\,\Big|\frac{\partial(x,y,z)}{\partial(\xi,\eta,\zeta)}\Big|\,\mathrm{d}\xi\mathrm{d}\eta\mathrm{d}\zeta, \tag{54}
\]
for any scalar field $f$ defined in $D$.


Returning to the previous example, the tetrahedron $B$ is the image of the unit cube $0 < x, y, z < 1$ under the transformation $\vec T$ with components
\[
\xi = (1-z)(1-y)x, \qquad \eta = (1-z)y, \qquad \zeta = z.
\]
The corresponding Jacobian is
\[
\frac{\partial(\xi,\eta,\zeta)}{\partial(x,y,z)} = \det(J\vec T) = \det\begin{pmatrix}
(1-z)(1-y) & -(1-z)x & -(1-y)x \\
0 & (1-z) & -y \\
0 & 0 & 1
\end{pmatrix} = (1-y)(1-z)^2.
\]

In order to demonstrate the formulas of this section, we compute the volume of the tetrahedron $B$ integrating the constant 1 first with the iterated integral (53) and then with the change of variables (54):
\[
\begin{aligned}
\mathrm{Vol}(B) &= \iiint_B 1\,\mathrm{d}x\mathrm{d}y\mathrm{d}z
\overset{(53)}{=} \int_0^1\bigg(\int_0^{1-x}\Big(\int_0^{1-x-y} 1\,\mathrm{d}z\Big)\mathrm{d}y\bigg)\mathrm{d}x \\
&= \int_0^1\bigg(\int_0^{1-x}\big(1-x-y\big)\,\mathrm{d}y\bigg)\mathrm{d}x
= \int_0^1 \frac12(1-x)^2\,\mathrm{d}x
= \frac12\Big[x - x^2 + \frac13 x^3\Big]_0^1 = \frac16, \\
\mathrm{Vol}(B) &= \iiint_B 1\,\mathrm{d}\xi\mathrm{d}\eta\mathrm{d}\zeta
\overset{(54)}{=} \iiint_{\vec T^{-1}(B)} \Big|\frac{\partial(\xi,\eta,\zeta)}{\partial(x,y,z)}\Big|\,\mathrm{d}x\mathrm{d}y\mathrm{d}z
= \int_0^1\!\!\int_0^1\!\!\int_0^1 (1-y)(1-z)^2\,\mathrm{d}x\mathrm{d}y\mathrm{d}z \\
&= \Big(\int_0^1 \mathrm{d}x\Big)\Big(\int_0^1 (1-y)\,\mathrm{d}y\Big)\Big(\int_0^1 (1-z)^2\,\mathrm{d}z\Big) = 1\cdot\frac12\cdot\frac13 = \frac16.
\end{aligned}
\]
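Both routes to $\mathrm{Vol}(B) = 1/6$ can be reproduced numerically (a sketch, not from the notes): one midpoint sum evaluates the iterated integral over the triangle $x+y<1$, the other integrates the Jacobian $(1-y)(1-z)^2$ over the unit cube.

```python
# Two midpoint-rule approximations of Vol(B) = 1/6.
n = 400
h = 1.0 / n
v_iter = 0.0   # iterated integral (53): the inner z-integral gives 1 - x - y
v_cube = 0.0   # change of variables (54): integral of (1-y)(1-z)^2 on the cube
for i in range(n):
    u = (i + 0.5) * h
    for j in range(n):
        v = (j + 0.5) * h
        if u + v < 1.0:
            v_iter += (1.0 - u - v) * h * h
        # the x-integral of (1-y)(1-z)^2 equals 1, so a double sum in (y, z)
        # suffices; here u plays the role of y and v of z
        v_cube += (1.0 - u) * (1.0 - v) ** 2 * h * h
```

Agreement of the two sums illustrates that the change of variables preserves the value of the integral, exactly as (54) states.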

2.2.4 Surface integrals

There is another kind of integral we have not encountered yet. We have seen integrals along curves, on flat surfaces, and on volumes; what is still missing are integrals on curved surfaces. This section considers surface integrals of scalar fields; Section 2.2.5 considers surface integrals of vector fields.
Surfaces are two-dimensional subsets of $\mathbb{R}^3$; their definition in the general case is beyond the scope of this class. We now consider only a small class of surfaces, namely those that are graphs of smooth, two-dimensional scalar fields $g\colon D\to\mathbb{R}$ (with $D\subset\mathbb{R}^2$). The graph of $g$ is the surface
\[
S = \big\{\vec r\in\mathbb{R}^3 \ \text{ s.t. } x\hat\imath + y\hat\jmath \in D,\ z = g(x,y)\big\} \subset \mathbb{R}^3. \tag{55}
\]
The graph surface can be written in brief as $S = \{z = g(x,y)\}$, meaning that $S$ is the set of the points $x\hat\imath + y\hat\jmath + z\hat k$ whose coordinates $x$, $y$ and $z$ are solutions of the equation in braces. Some examples are the paraboloid of revolution $\{z = x^2+y^2\}$ (e.g. a satellite dish is a portion of it), the hyperbolic paraboloid $\{z = x^2-y^2\}$ (e.g. Pringles crisps, see also Figure 8), and the half sphere $\{z = \sqrt{1-x^2-y^2},\ x^2+y^2 < 1\}$. The entire sphere is not a graph, since to each point $x\hat\imath + y\hat\jmath$ in the plane, with $x^2+y^2 < 1$, there correspond two different points on the sphere; on the other hand, the sphere can be written as the union of two graphs, or as $\{x^2+y^2+z^2 = 1\}$.
The integral on $S$ of a scalar field $f$ can be computed as
\[
\iint_S f(x,y,z)\,\mathrm{d}S = \iint_D f\big(x,y,g(x,y)\big)\sqrt{1 + \Big(\frac{\partial g}{\partial x}(x,y)\Big)^2 + \Big(\frac{\partial g}{\partial y}(x,y)\Big)^2}\,\mathrm{d}A, \tag{56}
\]
where the integral at the right-hand side is a double integral on the (flat) domain $D$, as those studied in Section 2.2.1. The symbol $\mathrm{d}S$ denotes the infinitesimal area element on $S$; it is the curvilinear analogue of $\mathrm{d}A$. Note that we used the symbol $\iint$, since the domain of integration is two-dimensional.
Note. The measure factor $\sqrt{1 + |\vec\nabla g|^2}$ should recall the similar coefficient $\sqrt{1 + g'(t)^2}$ we found in the computation of integrals along the graph of a real function in Example 2.4.
Example 2.23 (Surface area). Compute the area of the surface
\[
S = \Big\{\vec r\in\mathbb{R}^3,\ z = \frac23\big(x^{3/2} + y^{3/2}\big),\ 0 < x < 1,\ 0 < y < 1\Big\}.
\]
We simply integrate the field $f = 1$ over $S$ using formula (56). Here the domain $D$ is the unit square $(0,1)\times(0,1)$:
\[
\mathrm{Area}(S) = \iint_S 1\,\mathrm{d}S
= \iint_{(0,1)\times(0,1)} \sqrt{1 + \Big(\frac{\partial(\frac23 x^{3/2}+\frac23 y^{3/2})}{\partial x}\Big)^2 + \Big(\frac{\partial(\frac23 x^{3/2}+\frac23 y^{3/2})}{\partial y}\Big)^2}\,\mathrm{d}A
= \int_0^1\!\!\int_0^1 \sqrt{1+x+y}\,\mathrm{d}y\,\mathrm{d}x
\]

\[
\begin{aligned}
&= \int_0^1 \Big[\frac23(1+x+y)^{3/2}\Big]_{y=0}^{y=1}\,\mathrm{d}x
= \frac23\int_0^1 \Big((2+x)^{3/2} - (1+x)^{3/2}\Big)\mathrm{d}x \\
&= \frac{2\cdot 2}{3\cdot 5}\Big[(2+x)^{5/2} - (1+x)^{5/2}\Big]_{x=0}^{x=1}
= \frac{4}{15}\Big(3^{5/2} - 2\cdot 2^{5/2} + 1\Big)
= \frac{4}{15}\big(\sqrt{243} - 2\sqrt{32} + 1\big) \approx 1.407.
\end{aligned}
\]
We will examine other examples of surface integrals in the next sections.
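A midpoint-rule check of Example 2.23 (an addition, not from the notes): since $\partial g/\partial x = x^{1/2}$ and $\partial g/\partial y = y^{1/2}$, the surface-area element reduces to $\sqrt{1+x+y}$, and the double sum should match $\frac{4}{15}(9\sqrt3 - 8\sqrt2 + 1) \approx 1.407$.

```python
import math

# Midpoint rule for the area of the graph of g(x,y) = (2/3)(x^(3/2) + y^(3/2))
# over the unit square: integrate sqrt(1 + x + y).
n = 1000
h = 1.0 / n
area = 0.0
for i in range(n):
    x = (i + 0.5) * h
    for j in range(n):
        y = (j + 0.5) * h
        area += math.sqrt(1.0 + x + y) * h * h

exact = 4.0 / 15.0 * (9.0 * math.sqrt(3.0) - 8.0 * math.sqrt(2.0) + 1.0)
```

Note that $\sqrt{243} = 9\sqrt3$ and $2\sqrt{32} = 8\sqrt2$, so `exact` is the same number as in the closed-form answer of the example.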

2.2.5 Unit normal fields, orientations and flux integrals

For every point $\vec r$ on a smooth surface $S$, it is possible to define exactly two unit vectors $\hat n_A$ and $\hat n_B$ that are orthogonal to $S$ in $\vec r$. These two vectors are opposite to each other, i.e. $\hat n_A = -\hat n_B$, and they are called unit normal vectors. If for every point $\vec r\in S$ we fix a unit normal vector $\hat n(\vec r)$ in a continuous fashion (i.e. $\hat n$ is a continuous unit normal vector field defined on $S$), then the pair $(S,\hat n)$ is called an oriented surface. Note that the pairs $(S,\hat n)$ and $(S,-\hat n)$ are two different oriented surfaces, even though they occupy the same portion of space.14
Not all surfaces admit a continuous unit normal vector field; the most famous example of a non-orientable surface is the Möbius strip. On the other hand, the surfaces in the following three families admit an orientation.
(i) If $S$ is the graph of a two-dimensional scalar field $g$ as in equation (55), then it is possible to define a unit normal vector field using the formula
\[
\hat n(\vec r) = \frac{-\dfrac{\partial g}{\partial x}\hat\imath - \dfrac{\partial g}{\partial y}\hat\jmath + \hat k}{\sqrt{1 + \Big(\dfrac{\partial g}{\partial x}\Big)^2 + \Big(\dfrac{\partial g}{\partial y}\Big)^2}}. \tag{57}
\]
Since the $z$-component of $\hat n$ in this formula is positive, this is the unit normal that points upwards.
(ii) Another family of surfaces admitting a unit normal vector field are the boundaries of three-dimensional domains. The boundary of a domain $D\subset\mathbb{R}^3$ is commonly denoted $\partial D$ and is a surface (if $D$ is smooth enough). In this case, we usually fix the orientation on $\partial D$ by choosing the outward-pointing unit normal vector field.
(iii) If the surface $S$ is the level surface of a smooth scalar field $f$ satisfying $\vec\nabla f \ne \vec 0$ near $S$ (see Section 1.2.1), then $\hat n = \vec\nabla f/|\vec\nabla f|$ is an admissible unit normal field and $(S,\hat n)$ is an oriented surface15. This is a consequence of Part 4 of Proposition 1.22. We have seen a special case of this situation in Example 1.26. These surfaces are those defined by an equation of the form $S = \{f(\vec r) = \lambda\}$ for a constant $\lambda$.
Note that if $S$ is the graph of $g$ as in item (i), then it is also the level set of the field $f(x,y,z) = z - g(x,y)$, whose gradient is $\vec\nabla f = -\frac{\partial g}{\partial x}\hat\imath - \frac{\partial g}{\partial y}\hat\jmath + \hat k$. So, the unit normal vector fields defined in items (i) and (iii) in the list above are the same.
Remark 2.24 (Normal and tangential vectors on a graph surface). It is easy to verify that $\hat n$ defined by formula (57) has length one and is orthogonal to the vectors $\big(\hat\imath + \frac{\partial g}{\partial x}\hat k\big)$ and $\big(\hat\jmath + \frac{\partial g}{\partial y}\hat k\big)$. These two vectors are tangential to the graph surface of $g$. Try to visualise this fact geometrically with some examples, comparing with the lower-dimensional case (i.e. graphs of real functions).
14 Oriented surfaces share some properties with the oriented paths seen in Section 2.1. Also in that case, a path supports two different orientations, corresponding to travel directions, leading to two different oriented paths. In both cases, the orientation is important if we want to integrate vector fields, while it is not relevant for the integration of scalar fields; can you guess why?
15 Can you imagine what happens if $\vec\nabla f = \vec 0$?


Remark 2.25 (Normal unit vectors on piecewise smooth surfaces). The situation is more complicated when the surface $S$ is not smooth but only piecewise smooth, for instance the boundary of a polyhedron. In this case it is not possible to define a continuous unit normal vector field. For instance, on the boundary $\partial C = \{\max\{|x|,|y|,|z|\} = 1\}$ of the unit cube $C = \{\max\{|x|,|y|,|z|\} \le 1\}$, the outward-pointing unit normal vectors on two faces meeting at an edge are orthogonal to each other, so when crossing the edge they suddenly jump, i.e. they are not continuous (to properly define the continuity of a function or a vector field on a surface we need the definition of a parametrised surface, which we have not considered in these notes). However, it is possible to give a precise definition of orientation also in this case, formalising the idea that $\hat n$ stays on the same side of $S$ (see [1, Page 881]). In all practical cases, with a bit of geometric intuition it should be clear how to define a normal field in such a way that it stays on the same side of the surface (and so it determines the surface orientation).
Given an oriented surface $(S,\hat n)$ and a vector field $\vec F$ defined on $S$, we call flux of $\vec F$ through $(S,\hat n)$ the value of the integral
\[
\iint_S \vec F\cdot\hat n\,\mathrm{d}S. \tag{58}
\]
The flux is sometimes denoted $\iint_S \vec F\cdot\mathrm{d}\vec S$.
Note that the integrand $\vec F\cdot\hat n$ is nothing else than a scalar field defined on $S$, so the flux can be computed as a surface integral. On the graph surface (55) of $g$, the area element $\sqrt{1 + (\frac{\partial g}{\partial x})^2 + (\frac{\partial g}{\partial y})^2}$ in (56) and the denominator in the unit normal (57) cancel each other, and the integral in the flux (58) simplifies to
\[
\begin{aligned}
\iint_S \vec F\cdot\hat n\,\mathrm{d}S
&= \iint_D \vec F\cdot\frac{-\frac{\partial g}{\partial x}\hat\imath - \frac{\partial g}{\partial y}\hat\jmath + \hat k}{\sqrt{1 + \big(\frac{\partial g}{\partial x}\big)^2 + \big(\frac{\partial g}{\partial y}\big)^2}}\,
\sqrt{1 + \Big(\frac{\partial g}{\partial x}\Big)^2 + \Big(\frac{\partial g}{\partial y}\Big)^2}\,\mathrm{d}A \\
&= \iint_D \Big(-F_1\frac{\partial g}{\partial x} - F_2\frac{\partial g}{\partial y} + F_3\Big)\mathrm{d}A \\
&= \iint_D \Big(-F_1\big(x,y,g(x,y)\big)\frac{\partial g}{\partial x}(x,y) - F_2\big(x,y,g(x,y)\big)\frac{\partial g}{\partial y}(x,y) + F_3\big(x,y,g(x,y)\big)\Big)\mathrm{d}x\mathrm{d}y. \tag{59}
\end{aligned}
\]

If the surface is a domain boundary $\partial D$, the surface integral and the flux are often denoted with the symbols
\[
\oiint_{\partial D} f\,\mathrm{d}S \qquad\text{and}\qquad \oiint_{\partial D} \vec F\cdot\hat n\,\mathrm{d}S.
\]
Example 2.26. Compute the flux of the vector field $\vec F = \vec r\times\hat\imath = z\hat\jmath - y\hat k$ through the hyperbolic paraboloid $S = \{z = x^2-y^2,\ 0 < x, y < 1\}$ (see Figure 8).
We simply use formula (59) together with the expression of $\vec F$ and $g(x,y) = x^2-y^2$:
\[
\begin{aligned}
\iint_S \vec F\cdot\hat n\,\mathrm{d}S
&= \iint_{(0,1)\times(0,1)} \Big(-\underbrace{0}_{=F_1}\frac{\partial(x^2-y^2)}{\partial x} - \underbrace{z(x,y)}_{=F_2}\frac{\partial(x^2-y^2)}{\partial y} + \underbrace{(-y)}_{=F_3}\Big)\mathrm{d}x\,\mathrm{d}y \\
&= \int_0^1\!\!\int_0^1 \Big((y^2-x^2)(-2y) - y\Big)\mathrm{d}x\,\mathrm{d}y
= \int_0^1 \Big(-2y^3 + \frac23 y - y\Big)\mathrm{d}y
= -\frac24 + \frac13 - \frac12 = -\frac23.
\end{aligned}
\]
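A numerical sanity check of this flux (an addition, not from the notes; the field is read here as $\vec F = z\hat\jmath - y\hat k$ following the reconstruction above): evaluate the integrand of formula (59) on a grid over the unit square.

```python
# Midpoint rule for the flux integrand of formula (59) with F = (0, z, -y)
# and g(x, y) = x^2 - y^2, i.e. -z*(-2y) - y = 2y(x^2 - y^2) - y.
n = 400
h = 1.0 / n
flux = 0.0
for i in range(n):
    x = (i + 0.5) * h
    for j in range(n):
        y = (j + 0.5) * h
        z = x * x - y * y          # on the surface, z = g(x, y)
        flux += (2.0 * y * z - y) * h * h
# flux ~ -2/3
```

The negative sign simply means the field crosses the surface mostly against the upward normal chosen by (57).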

A surface $S$ may have a boundary $\partial S$, which is the path of a curve. For instance, the boundary of the upper half sphere $\{|\vec r| = 1,\ z \ge 0\}$ is the unit circle $\{x\hat\imath + y\hat\jmath,\ x^2+y^2 = 1\}$ (the equator), while the complete sphere $\{|\vec r| = 1\}$ has no boundary. The boundary of the open cylindrical surface $C = \{x^2+y^2 = 1,\ |z| < 1\}$ is composed of the two circles $\{x^2+y^2 = 1,\ |z| = 1\}$. An oriented surface $S$ induces an orientation on its boundary $\partial S$: we fix on $\partial S$ the path orientation (a travel direction) such that, walking along $\partial S$ on the same side of $S$ defined by $\hat n$, we leave $S$ itself on our left. This definition may be very confusing; try to visualise it with a concrete example, e.g. using a sheet of paper as a model for a surface; see also Figure 30.

Remark 2.27 (The boundary of a boundary). Note that if the surface $S$ is the boundary of a volume $D\subset\mathbb{R}^3$, i.e. $S = \partial D$, then $S$ has empty boundary $\partial S = \emptyset$. You may find this fact stated as $\partial^2 = \emptyset$, where the symbol $\partial$ here stands for the action of taking the boundary. Try to think of some geometric examples. (Warning: do not mix up the different objects: the boundary $\partial D$ of a volume $D$ is a surface, the boundary $\partial S$ of a surface $S$ is a path!)
Figure 30: The relation between the orientation of an oriented surface, given by the continuous unit normal field $\hat n$, and the orientation of the boundary $\partial S$ (the direction of travel along the path). Imagine walking along the path $\partial S$, according to the arrows, and staying on the same side of the vector $\hat n$ (i.e. above the surface). Then $S$ is always on our left. So the orientations of $S$ and $\partial S$ in the figure are compatible.
Remark 2.28 (Boundaries and derivatives). Why are boundaries and derivatives denoted with the same symbol $\partial$? This notation comes from the deep theory of differential forms; however, we can see that these two apparently unrelated objects share some properties. A very important property of derivatives is the product rule: $(fg)' = f'g + fg'$ for scalar functions, or $\vec\nabla(fg) = (\vec\nabla f)g + f\vec\nabla g$ for scalar fields, (25). Now consider the Cartesian product of the two segments $(a,b)\subset\mathbb{R}$ and $(c,d)\subset\mathbb{R}$, namely the rectangle $R = (a,b)\times(c,d) = \{x\hat\imath + y\hat\jmath,\ a < x < b,\ c < y < d\}$. Its boundary is the union of four edges $(a,b)\times\{c\}$, $(a,b)\times\{d\}$, $\{a\}\times(c,d)$, and $\{b\}\times(c,d)$. The boundaries of the segments are $\partial(a,b) = \{a,b\}$ and $\partial(c,d) = \{c,d\}$. Therefore we can write
\[
\begin{aligned}
\partial\big((a,b)\times(c,d)\big)
&= \big((a,b)\times\{c\}\big)\cup\big((a,b)\times\{d\}\big)\cup\big(\{a\}\times(c,d)\big)\cup\big(\{b\}\times(c,d)\big) \\
&= \big((a,b)\times\{c,d\}\big)\cup\big(\{a,b\}\times(c,d)\big) \\
&= \big((a,b)\times\partial(c,d)\big)\cup\big(\partial(a,b)\times(c,d)\big),
\end{aligned}
\]
which closely resembles the product rule for derivatives. Here the Cartesian product ($\times$) plays the role of the multiplication, the set union ($\cup$) that of the addition, and the action "take the boundary" $\partial$ plays the role of the differentiation! The same kind of formula holds true for much more general domains; try to see what happens if you consider the Cartesian product of a circle and a segment, two circles, or a planar domain and a segment.

2.3 Special coordinate systems

Among the possible changes of coordinates described in Section 2.2.2, some are particularly important. Polar coordinates are used for two-dimensional problems with some circular symmetry around a centre, while cylindrical and spherical coordinates are used in three dimensions. Several other special (more complicated) coordinate systems exist; we do not consider them here. Several exercises on the use of polar coordinates are available in the MA1CAL notes. In Table 1 at the end of this section we summarise the most important facts to keep in mind for the three systems of coordinates studied here16.
16 The page http://en.wikipedia.org/wiki/Del_in_cylindrical_and_spherical_coordinates contains a useful summary of these and many other identities and formulas (in 3D only). However, the notation used there is completely different from the one we have chosen in these notes: to compare those formulas with ours, $r$ and $\rho$ must be exchanged with each other, and similarly $\theta$ and $\phi$! (See also footnote 18.) Our notation is chosen to be consistent with that of [1, Section 10.6].

2.3.1 Polar coordinates

The polar coordinates $r$ and $\theta$ are defined by the relations17
\[
\begin{cases} x = r\cos\theta, \\ y = r\sin\theta, \end{cases}
\qquad\qquad
\begin{cases} r = \sqrt{x^2+y^2}, \\[2pt] \theta = \tan^{-1}\dfrac{y}{x}. \end{cases} \tag{60}
\]
The $xy$-plane $\mathbb{R}^2$ is mapped by this transformation into the strip $r \ge 0$, $-\pi < \theta \le \pi$. Given a point $x\hat\imath + y\hat\jmath$, its value of $r$ is the distance from the origin, while the value of $\theta$ is the angular distance between the direction $\hat\imath$ of the $x$-axis and the direction of the point itself. If $r = 0$, i.e. at the origin of the Cartesian axes, the angular coordinate $\theta$ is not defined.
Figure 31: The polar coordinate system.


We compute the Jacobians of the direct and the inverse transformations:
\[
\begin{pmatrix} \frac{\partial x}{\partial r} & \frac{\partial x}{\partial\theta} \\[4pt] \frac{\partial y}{\partial r} & \frac{\partial y}{\partial\theta} \end{pmatrix}
= \begin{pmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{pmatrix},
\qquad
\begin{pmatrix} \frac{\partial r}{\partial x} & \frac{\partial r}{\partial y} \\[4pt] \frac{\partial\theta}{\partial x} & \frac{\partial\theta}{\partial y} \end{pmatrix}
\overset{(51)}{=} \begin{pmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{pmatrix}^{-1}
= \frac1r\begin{pmatrix} r\cos\theta & r\sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}
= \begin{pmatrix} \frac{x}{r} & \frac{y}{r} \\[4pt] -\frac{y}{r^2} & \frac{x}{r^2} \end{pmatrix}. \tag{61}
\]
(The second matrix can also be calculated directly using $\frac{\partial(\tan^{-1}t)}{\partial t} = \frac{1}{1+t^2}$.) The Jacobian determinants are immediately computed as
\[
\frac{\partial(x,y)}{\partial(r,\theta)} = r(\cos^2\theta + \sin^2\theta) = r,
\qquad
\frac{\partial(r,\theta)}{\partial(x,y)} = \frac1r = \frac{1}{\sqrt{x^2+y^2}}. \tag{62}
\]

Therefore, the infinitesimal area element in polar coordinates is
\[
\mathrm{d}A = \mathrm{d}x\,\mathrm{d}y = r\,\mathrm{d}r\,\mathrm{d}\theta.
\]

Example 2.29 (Area of the disc). As a simple application of the polar coordinate system, we compute the area of the disc of radius $R$ centred at the origin, i.e. $D = \{x\hat\imath + y\hat\jmath \text{ s.t. } x^2+y^2 < R^2\}$:
\[
\mathrm{Area}(D) = \iint_D \mathrm{d}x\mathrm{d}y = \int_0^R\!\!\int_{-\pi}^{\pi} r\,\mathrm{d}\theta\,\mathrm{d}r = \int_0^R 2\pi r\,\mathrm{d}r = 2\pi\frac{r^2}{2}\Big|_0^R = \pi R^2,
\]

17 Here we are being quite sloppy. The inverse of the tangent function is usually called $\arctan$ and takes values in $(-\frac\pi2, \frac\pi2)$. Here $\tan^{-1}\frac yx$ is meant to be equal to $\arctan\frac yx$ only when $x > 0$, and to take values in $(-\pi, -\frac\pi2] \cup [\frac\pi2, \pi]$ otherwise. For the exact definition, see http://en.wikipedia.org/wiki/Atan2 . However, $\theta(x,y)$ can be easily computed visually as the angle determined by the point $x\hat\imath + y\hat\jmath$. E.g. if $x > 0$, $y > 0$, then $x\hat\imath + y\hat\jmath$ is in the first quadrant so $\theta$ belongs to the interval $(0, \frac\pi2)$; if $\tilde x = -x < 0$, $\tilde y = -y < 0$, then $\tilde x\hat\imath + \tilde y\hat\jmath$ belongs to the third quadrant and $\tilde\theta$ must be in the interval $(-\pi, -\frac\pi2)$, even if $\frac{\tilde y}{\tilde x} = \frac yx$ and $\arctan\frac{\tilde y}{\tilde x} = \arctan\frac yx$.

which agrees with the formula we know from elementary geometry.


In the punctured plane $\mathbb{R}^2\setminus\{\vec 0\}$, we define two vector fields $\hat r$ and $\hat\theta$ of unit magnitude, which correspond to the unit vectors of the canonical basis in the $r\theta$-plane, see Figure 31 (compare with Footnote 11). They are related to $\hat\imath$ and $\hat\jmath$ by the formulas
\[
\begin{aligned}
\hat r(x,y) &= \cos\theta\,\hat\imath + \sin\theta\,\hat\jmath = \frac{x}{\sqrt{x^2+y^2}}\hat\imath + \frac{y}{\sqrt{x^2+y^2}}\hat\jmath = \frac{\vec r}{|\vec r|},
&\qquad \hat\imath(r,\theta) &= \cos\theta\,\hat r - \sin\theta\,\hat\theta, \\
\hat\theta(x,y) &= -\sin\theta\,\hat\imath + \cos\theta\,\hat\jmath = -\frac{y}{\sqrt{x^2+y^2}}\hat\imath + \frac{x}{\sqrt{x^2+y^2}}\hat\jmath,
&\qquad \hat\jmath(r,\theta) &= \sin\theta\,\hat r + \cos\theta\,\hat\theta.
\end{aligned} \tag{63}
\]
In every point $\vec r \ne \vec 0$ the two unit vectors $\hat r$ and $\hat\theta$ are orthogonal to one another and point in the directions of increase of the radial coordinate $r$ and of the angular one $\theta$, respectively. In other words, $\hat r$ points away from the origin and $\hat\theta$ points in the counter-clockwise direction.
Example 2.30 (Polar curves). The expression of some planar curves in polar coordinates is simpler than in Cartesian coordinates. The circles centred at the origin correspond to curves with $r(t) = C$ for some constant $C$, for instance $\vec a(t) = C\hat r + t\hat\theta$. Conversely, half-lines with starting point at the origin $\vec 0$ satisfy $\theta(t) = C$ for some constant $C$, for instance $\vec b(t) = t\hat r + C\hat\theta$. The clover curve $\vec c(t) = (\cos 3t)(\cos t)\hat\imath + (\cos 3t)(\sin t)\hat\jmath$ in Figure 10 satisfies $|\vec c(t)| = |\cos 3t|$, so in polar form it reads $\vec c(t) = \cos 3t\,\hat r + t\hat\theta$. (Note that in this case we are allowing negative values for the radial component! What does it mean? Compare with the curve plot.)
Example 2.31 (Area of domains delimited by polar graphs). If a curve $\vec a$ can be expressed in the form $\vec a(t) = g(t)\hat r + t\hat\theta$ for $-\pi < t \le \pi$ and for some positive function $g$, it is sometimes called a polar graph. In this kind of curve, the magnitude is a function of the angle (i.e. $r = g(\theta)$ for all points $r\hat r + \theta\hat\theta$ in the path of $\vec a$) and the path can be seen as the graph of the function $g$ in the $r\theta$-plane. The domain $D$ delimited by the curve $\vec a$ is $r$-simple in the $r\theta$-plane (compare with the definition of $y$-simple domain in (45)):
\[
D = \Big\{\vec r = x\hat\imath + y\hat\jmath \in \mathbb{R}^2 \text{ s.t. } \sqrt{x^2+y^2} < g\big(\theta(x,y)\big)\Big\}
= \Big\{\vec r = r\hat r + \theta\hat\theta \in \mathbb{R}^2 \text{ s.t. } -\pi < \theta \le \pi,\ 0 \le r < g(\theta)\Big\}.
\]
The area of $D$ can be computed as
\[
\mathrm{Area}(D) = \iint_D \mathrm{d}x\mathrm{d}y = \int_{-\pi}^{\pi}\Big(\int_0^{g(\theta)} r\,\mathrm{d}r\Big)\mathrm{d}\theta
= \int_{-\pi}^{\pi}\frac{r^2}{2}\Big|_0^{g(\theta)}\mathrm{d}\theta
= \frac12\int_{-\pi}^{\pi} g^2(\theta)\,\mathrm{d}\theta. \tag{64}
\]
For example, we compute the area of the domain delimited by the propeller curve $\vec a(t) = (2+\cos 3t)\hat r + t\hat\theta$ depicted in the left plot of Figure 32:
\[
\mathrm{Area}(D) = \frac12\int_{-\pi}^{\pi}\big(2+\cos 3\theta\big)^2\,\mathrm{d}\theta
= \frac12\Big[4\theta + \frac43\sin 3\theta + \frac\theta2 + \frac{\sin 3\theta\cos 3\theta}{6}\Big]_{-\pi}^{\pi}
= \frac{9\pi}{2},
\]
where we used $\int \cos^2 t\,\mathrm{d}t = \frac12(t + \sin t\cos t)$. (We can deduce from this computation that the area delimited by $(2+\cos nt)\hat r + t\hat\theta$ is independent of $n\in\mathbb{N}$, i.e. all the flower domains in the centre plots of Figure 32 have the same area.)
We can also compute the area of more complicated shapes, for instance the domain between the two spirals $t\hat r + t\hat\theta$ and $2t\hat r + t\hat\theta$ (for $0 < t \le 2\pi$) in the right plot of Figure 32:
\[
\int_0^{2\pi}\Big(\int_\theta^{2\theta} r\,\mathrm{d}r\Big)\mathrm{d}\theta
= \int_0^{2\pi}\frac12\big(4\theta^2 - \theta^2\big)\,\mathrm{d}\theta
= \frac{(2\pi)^3}{2} = 4\pi^3 \approx 124.
\]
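Formula (64) is trivial to test numerically (an addition, not from the notes): a midpoint sum of $\frac12 g^2(\theta)$ for the propeller curve should give $9\pi/2$, independently of the trigonometric antiderivative used above.

```python
import math

# Midpoint rule for (1/2) * integral over (-pi, pi) of (2 + cos(3*theta))^2:
# the area enclosed by the propeller curve r = 2 + cos(3*theta).
n = 100000
h = 2.0 * math.pi / n
area = 0.0
for i in range(n):
    th = -math.pi + (i + 0.5) * h
    area += 0.5 * (2.0 + math.cos(3.0 * th)) ** 2 * h
# area ~ 9*pi/2 ~ 14.137
```

Replacing `3.0` by any integer $n$ leaves the result unchanged, which confirms the observation about the flower domains in Figure 32.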

Any two-dimensional scalar field $f(x,y)$ can be expressed in polar coordinates as $F(r,\theta) = f(r\cos\theta, r\sin\theta)$. Here $F$ is a function of two variables (as $f$) which represents the field $f$ but has a different functional expression (we have already used this fact several times without

Figure 32: The polar graph $(2+\cos nt)\hat r + t\hat\theta$ for $n = 3$ (left), $n = 2$ (upper centre), $n = 7$ (centre), $n = 12$ (lower centre). As seen in Example 2.31, they all have the same area. In the right plot, the two spirals $t\hat r + t\hat\theta$ and $2t\hat r + t\hat\theta$ (with $0 < t \le 2\pi$).
spelling it out). The gradient and the Laplacian of $f$ can be computed in polar coordinates using the chain rule:
\[
\begin{aligned}
\vec\nabla f(x,y) &= \frac{\partial f}{\partial x}(x,y)\,\hat\imath + \frac{\partial f}{\partial y}(x,y)\,\hat\jmath \\
&= \frac{\partial F\big(r(x,y),\theta(x,y)\big)}{\partial x}\,\hat\imath + \frac{\partial F\big(r(x,y),\theta(x,y)\big)}{\partial y}\,\hat\jmath \\
&\overset{\text{(chain rule)}}{=} \Big(\frac{\partial F}{\partial r}\frac{\partial r}{\partial x} + \frac{\partial F}{\partial\theta}\frac{\partial\theta}{\partial x}\Big)\hat\imath
+ \Big(\frac{\partial F}{\partial r}\frac{\partial r}{\partial y} + \frac{\partial F}{\partial\theta}\frac{\partial\theta}{\partial y}\Big)\hat\jmath \\
&= \frac{\partial F}{\partial r}\Big(\frac{\partial r}{\partial x}\hat\imath + \frac{\partial r}{\partial y}\hat\jmath\Big)
+ \frac{\partial F}{\partial\theta}\Big(\frac{\partial\theta}{\partial x}\hat\imath + \frac{\partial\theta}{\partial y}\hat\jmath\Big) \\
&\overset{(61)}{=} \frac{\partial F}{\partial r}\Big(\frac{x}{r}\hat\imath + \frac{y}{r}\hat\jmath\Big)
+ \frac{\partial F}{\partial\theta}\Big(-\frac{y}{r^2}\hat\imath + \frac{x}{r^2}\hat\jmath\Big) \\
&\overset{(63)}{=} \frac{\partial F}{\partial r}\,\hat r + \frac1r\frac{\partial F}{\partial\theta}\,\hat\theta = \vec\nabla F(r,\theta). \tag{65}
\end{aligned}
\]
Thus, whenever we have a scalar field $F$ expressed in polar coordinates, we can directly compute its gradient using formula (65) (do not forget the $\frac1r$ factor!) without the need of transforming the field into Cartesian coordinates.
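Formula (65) can be checked at a single point (an illustration, not from the notes): for $f(x,y) = xy$, i.e. $F(r,\theta) = r^2\cos\theta\sin\theta$, the polar-coordinates gradient must coincide with the Cartesian gradient $\vec\nabla f = y\hat\imath + x\hat\jmath$.

```python
import math

# Evaluate dF/dr * rhat + (1/r) dF/dtheta * thetahat for F = r^2 cos t sin t
# and compare with grad f = (y, x) for f(x, y) = x*y.
r, th = 1.7, 0.4
dF_dr = 2.0 * r * math.cos(th) * math.sin(th)
dF_dth = r * r * (math.cos(th) ** 2 - math.sin(th) ** 2)
rhat = (math.cos(th), math.sin(th))
thetahat = (-math.sin(th), math.cos(th))
gx = dF_dr * rhat[0] + (dF_dth / r) * thetahat[0]
gy = dF_dr * rhat[1] + (dF_dth / r) * thetahat[1]
x, y = r * math.cos(th), r * math.sin(th)
# (gx, gy) should equal (y, x)
```

Forgetting the $1/r$ factor in the $\hat\theta$ term makes the check fail, which is exactly the mistake the warning in the text is about.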
2.3.2 Cylindrical coordinates

There are two possible extensions of the polar coordinates to three dimensions. The first one is used to treat problems with some cylindrical symmetry. By convention, the axis of symmetry is fixed to be the axis $z$. The cylindrical coordinates $r$, $\theta$ and $z$ correspond to the polar coordinates $r$, $\theta$ in the $xy$-plane and the height $z$ above this plane:
\[
\begin{cases} x = r\cos\theta, \\ y = r\sin\theta, \\ z = z, \end{cases}
\qquad
\begin{cases} r = \sqrt{x^2+y^2}, \\[2pt] \theta = \tan^{-1}\dfrac{y}{x}, \\ z = z, \end{cases}
\qquad r \ge 0,\quad -\pi < \theta \le \pi,\quad z\in\mathbb{R}. \tag{66}
\]

Proceeding as in (62), it is immediate to verify that
\[
\frac{\partial(x,y,z)}{\partial(r,\theta,z)} = r, \qquad \frac{\partial(r,\theta,z)}{\partial(x,y,z)} = \frac1r, \tag{67}
\]
and the infinitesimal volume element is
\[
\mathrm{d}V = \mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z = r\,\mathrm{d}r\,\mathrm{d}\theta\,\mathrm{d}z.
\]
This can be used to compute the volume of the solids of revolution
\[
D_g = \big\{\vec r\in\mathbb{R}^3 \ \text{ s.t. } z_{\mathrm{bot}} < z < z_{\mathrm{top}},\ r < g(z)\big\}, \tag{68}
\]
where $g\colon(z_{\mathrm{bot}}, z_{\mathrm{top}})\to\mathbb{R}$ is a non-negative function, as
\[
\mathrm{Vol}(D_g) = \iiint_{D_g}\mathrm{d}V
= \int_{z_{\mathrm{bot}}}^{z_{\mathrm{top}}}\bigg(\int_{-\pi}^{\pi}\Big(\int_0^{g(z)} r\,\mathrm{d}r\Big)\mathrm{d}\theta\bigg)\mathrm{d}z
= \int_{z_{\mathrm{bot}}}^{z_{\mathrm{top}}}\Big(\int_{-\pi}^{\pi}\frac12 g^2(z)\,\mathrm{d}\theta\Big)\mathrm{d}z
= \pi\int_{z_{\mathrm{bot}}}^{z_{\mathrm{top}}} g^2(z)\,\mathrm{d}z. \tag{69}
\]

Example 2.32 (Volume of solids of revolution). Compute the volume of the pint domain $P = \big\{\vec r\in\mathbb{R}^3 \text{ s.t. } 0 < z < 1,\ r < \frac13 + \frac12 z^2 - \frac13 z^3\big\}$ in Figure 33:
\[
\begin{aligned}
\mathrm{Vol}(P) &\overset{(69)}{=} \pi\int_0^1\Big(\frac13 + \frac12 z^2 - \frac13 z^3\Big)^2\,\mathrm{d}z
= \pi\int_0^1\Big(\frac19 + \frac14 z^4 + \frac19 z^6 + \frac13 z^2 - \frac29 z^3 - \frac13 z^5\Big)\mathrm{d}z \\
&= \pi\Big(\frac19 + \frac1{20} + \frac1{63} + \frac19 - \frac1{18} - \frac1{18}\Big)
= \frac{223}{1260}\pi \approx 0.177\,\pi \approx 0.556.
\end{aligned}
\]

Figure 33: The domains described in Examples 2.32, 2.33 and Exercise 2.34: the pint $P$ (left), the ellipsoid $x^2+y^2+4z^2 < 1$ (upper centre), the rugby ball $B$ (lower centre) and the funnel $F$ (right).
Example 2.33 (Ellipsoid volume). Compute the volume of the oblate ellipsoid of revolution $E$ bounded by the surface $x^2 + y^2 + \frac{z^2}{c^2} = 1$, for a real number $c > 0$.
The ellipsoid $E = \big\{\vec r\in\mathbb{R}^3 \text{ s.t. } x^2+y^2+\frac{z^2}{c^2} < 1\big\}$, in cylindrical coordinates, reads
\[
E = \Big\{\vec r\in\mathbb{R}^3 \ \text{ s.t. } r^2 < 1 - \frac{z^2}{c^2}\Big\}
= \Big\{\vec r\in\mathbb{R}^3 \ \text{ s.t. } r < \sqrt{1 - \frac{z^2}{c^2}}\Big\}.
\]
Thus, it is a solid of revolution as in equation (68), with $-c < z < c$ (the admissible interval for $z$ corresponds to the largest interval which guarantees $1 - \frac{z^2}{c^2} \ge 0$). From formula (69),
\[
\mathrm{Vol}(E) = \iiint_E \mathrm{d}V = \pi\int_{-c}^{c}\Big(1 - \frac{z^2}{c^2}\Big)\mathrm{d}z
= \pi\Big[z - \frac{z^3}{3c^2}\Big]_{-c}^{c} = \frac43\pi c.
\]
Note that this result agrees with the formula for the volume of the sphere when $c = 1$.

Exercise 2.34. Calculate the volume of the rugby ball $B$ and the (infinite) funnel $F$ in Figure 33:
\[
B = \big\{\vec r\in\mathbb{R}^3 \ \text{ s.t. } -1 < z < 1,\ r < 1 - z^2\big\}, \qquad
F = \big\{\vec r\in\mathbb{R}^3 \ \text{ s.t. } -\infty < z < 0,\ r < e^z\big\}.
\]
Solution. You should be able to obtain $\mathrm{Vol}(B) = \frac{16}{15}\pi$ and $\mathrm{Vol}(F) = \frac\pi2$.

Exercise 2.35. Cylindrical coordinates can also be used to deal with domains which are not solids of revolution. Can you draw the shape of $D = \big\{\vec r\in\mathbb{R}^3 \text{ s.t. } -\frac\pi2 < z < \frac\pi2,\ r < (\cos z)(2 + \sin 3\theta)\big\}$? Can you compute its volume? You need to derive a slightly more general formula than (69).

Exercise 2.36 (Triple integrals on a cone). Consider the cone $C = \big\{\vec r\in\mathbb{R}^3 \text{ s.t. } 0 < z < 1,\ r < z\big\}$. Verify the following integral computations:
\[
\iiint_C z\,\mathrm{d}V = \frac\pi4, \qquad
\iiint_C x\,\mathrm{d}V = 0, \qquad
\iiint_C (x^2+y^2+z^2)\,\mathrm{d}V = \frac{3\pi}{10}.
\]
(The domain is easily defined in cylindrical coordinates, while the last two integrands are defined in Cartesian coordinates; figure out how to deal with this fact.) Prove that
\[
\iiint_C g(z)\,\mathrm{d}V = \pi\int_0^1 g(z)\,z^2\,\mathrm{d}z
\]
for any function $g\colon(0,1)\to\mathbb{R}$.
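Two of the cone integrals can be verified with a small midpoint sum in cylindrical coordinates (an addition, not from the notes), using $\mathrm{d}V = r\,\mathrm{d}r\,\mathrm{d}\theta\,\mathrm{d}z$ and the fact that the $\theta$-integration of a $\theta$-independent integrand just contributes a factor $2\pi$.

```python
import math

# Midpoint sums on C = {0 < z < 1, r < z} with weight r (cylindrical dV).
n = 120
hz = 1.0 / n
i_z = 0.0      # integral of z
i_r2z2 = 0.0   # integral of x^2 + y^2 + z^2 = r^2 + z^2
for k in range(n):
    z = (k + 0.5) * hz
    hr = z / n                        # the r-range (0, z) depends on z
    for i in range(n):
        r = (i + 0.5) * hr
        w = r * hr * hz * 2.0 * math.pi   # theta integral is 2*pi
        i_z += z * w
        i_r2z2 += (r * r + z * z) * w
# i_z ~ pi/4, i_r2z2 ~ 3*pi/10
```

The integral of $x$ vanishes by symmetry, so no sum is needed for it.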


As we did in (63) for the polar coordinates, also for the cylindrical system we can compute the vector fields with unit length lying in the direction of increase of the three coordinates:
\[
\begin{aligned}
\hat r &= \frac{x}{\sqrt{x^2+y^2}}\hat\imath + \frac{y}{\sqrt{x^2+y^2}}\hat\jmath,
&\qquad \hat\imath &= \cos\theta\,\hat r - \sin\theta\,\hat\theta, \\
\hat\theta &= -\frac{y}{\sqrt{x^2+y^2}}\hat\imath + \frac{x}{\sqrt{x^2+y^2}}\hat\jmath,
&\qquad \hat\jmath &= \sin\theta\,\hat r + \cos\theta\,\hat\theta, \\
\hat z &= \hat k, &\qquad \hat k &= \hat z.
\end{aligned} \tag{70}
\]
The helix $\vec b(t) = \cos t\,\hat\imath + \sin t\,\hat\jmath + 0.1t\,\hat k$ in the centre plot of Figure 10, expressed in cylindrical coordinates, has the simple expression $\vec b(t) = \hat r + t\hat\theta + 0.1t\,\hat z$.
Remark 2.37 (Vector differential operators in cylindrical coordinates). As we did in equation (65) for polar coordinates, we can compute the gradient of a scalar field $f$ expressed in cylindrical coordinates. Since we are now considering a three-dimensional space, we also compute the divergence and the curl of a vector field $\vec G = G_r\hat r + G_\theta\hat\theta + G_z\hat z$. We only write the final results (they can be derived from (66), (70) and the chain rule, but the computations are quite messy):
\[
\begin{aligned}
\vec\nabla f(r,\theta,z) &= \frac{\partial f}{\partial r}\,\hat r + \frac1r\frac{\partial f}{\partial\theta}\,\hat\theta + \frac{\partial f}{\partial z}\,\hat z, \\
\vec\nabla\cdot\vec G(r,\theta,z) &= \frac1r\frac{\partial(rG_r)}{\partial r} + \frac1r\frac{\partial G_\theta}{\partial\theta} + \frac{\partial G_z}{\partial z}
= \frac{\partial G_r}{\partial r} + \frac1r G_r + \frac1r\frac{\partial G_\theta}{\partial\theta} + \frac{\partial G_z}{\partial z}, \\
\vec\nabla\times\vec G(r,\theta,z) &= \Big(\frac1r\frac{\partial G_z}{\partial\theta} - \frac{\partial G_\theta}{\partial z}\Big)\hat r
+ \Big(\frac{\partial G_r}{\partial z} - \frac{\partial G_z}{\partial r}\Big)\hat\theta
+ \Big(\frac{\partial G_\theta}{\partial r} + \frac1r G_\theta - \frac1r\frac{\partial G_r}{\partial\theta}\Big)\hat z.
\end{aligned} \tag{71}
\]

Exercise 2.38. Consider the vector field $\vec F = -y\,\hat\imath + x\,\hat\jmath + \hat k$. Show that in cylindrical coordinates it reads $\vec F = r\,\hat\theta + \hat z$. Use (71) to prove that $\vec F$ is solenoidal. This may be the field that gave the name to all solenoidal fields: the streamlines of $\vec F$ are the helices (see Figure 10), which have the shape of a solenoid used in electromagnetism (although Wikipedia suggests a different origin of the name).
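A numerical cross-check of the divergence formula in (71) can be done in Python (an illustrative addition, not part of the notes): compute the Cartesian divergence by central finite differences and compare it with the value predicted by the cylindrical formula. For $\vec F = -y\,\hat\imath + x\,\hat\jmath + \hat k$ the formula gives $0$ (solenoidal); for $\vec G = r\,\hat r = x\,\hat\imath + y\,\hat\jmath$ it gives $\frac{1}{r}\frac{\partial(r\cdot r)}{\partial r} = 2$.

```python
import math

def div_fd(F, p, h=1e-5):
    """Cartesian divergence of F(x, y, z) -> (F1, F2, F3)
    by central finite differences at the point p."""
    x, y, z = p
    return ((F(x + h, y, z)[0] - F(x - h, y, z)[0])
            + (F(x, y + h, z)[1] - F(x, y - h, z)[1])
            + (F(x, y, z + h)[2] - F(x, y, z - h)[2])) / (2 * h)

F = lambda x, y, z: (-y, x, 1.0)   # F = r thhat + zhat: formula (71) gives 0
G = lambda x, y, z: (x, y, 0.0)    # G = r rhat: formula (71) gives 2

p = (0.3, -1.2, 0.5)
assert abs(div_fd(F, p)) < 1e-8          # solenoidal, as the exercise claims
assert abs(div_fd(G, p) - 2.0) < 1e-6
```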

A. Moiola

Vector calculus notes, version 3

Example 2.39 (Surface integrals in cylindrical coordinates). Polar and cylindrical coordinates can be used to compute surface integrals. Consider the conic surface of unit radius and unit height (written in Cartesian and cylindrical coordinates):
\[
S = \Big\{\vec r \in \mathbb{R}^3,\; z = \sqrt{x^2+y^2},\; x^2+y^2 < 1\Big\} = \Big\{\vec r \in \mathbb{R}^3,\; z = r,\; r < 1\Big\}.
\]
(Note that $S$ is part of the boundary of the cone $C$ in Exercise 2.36.) The surface $S$ is the graph of the two-dimensional field $g(r,\theta) = r$ defined on the unit disc $D = \{r < 1\}$. Thus, its area can be computed using the surface integral formula (56), choosing $f = 1$. We also exploit the fact that the gradient of $g = \sqrt{x^2+y^2} = r$ satisfies $\vec\nabla g = \hat r$, when written in polar coordinates, see (65), so it has length one. We obtain:
\[
\mathrm{Area}(S) \overset{(56)}{=} \iint_D \sqrt{1 + |\vec\nabla g|^2} \,\mathrm{d}A
= \int_{-\pi}^{\pi}\int_0^1 \sqrt{1+1}\; r \,\mathrm{d}r\,\mathrm{d}\theta
= 2\sqrt2\,\pi \int_0^1 r \,\mathrm{d}r = \sqrt2\,\pi \approx 4.443,
\]
where in the second equality we used $|\vec\nabla g| = |\hat r| = 1$.
2.3.3  Spherical coordinates

In order to deal with three-dimensional situations which involve a centre of symmetry, we introduce the spherical coordinate system. The three coordinates, denoted $\rho$, $\varphi$ and $\theta$, are called radius, colatitude and longitude (or azimuth), respectively$^{18}$. (The latitude commonly used in geography corresponds to $\frac{\pi}{2} - \varphi$, hence the name colatitude.) For a vector $\vec r$, the radius $\rho \ge 0$ is simply its distance from the origin (the magnitude of $\vec r$); the colatitude $0 \le \varphi \le \pi$ is the angle between the direction of $\vec r$ and the unit vector $\hat k$ on the axis $z$; the longitude $-\pi < \theta \le \pi$ is the polar angle of the projection of $\vec r$ on the $xy$-plane. In formulas:
\[
\begin{aligned}
&\rho = |\vec r| = \sqrt{x^2+y^2+z^2}, && x = \rho\sin\varphi\cos\theta, && \rho \ge 0,\\
&\varphi = \arccos\frac{z}{\rho} = \arccos\frac{z}{\sqrt{x^2+y^2+z^2}}, && y = \rho\sin\varphi\sin\theta, && 0 \le \varphi \le \pi, \qquad(72)\\
&\theta = \tan^{-1}\frac{y}{x}, && z = \rho\cos\varphi, && -\pi < \theta \le \pi.
\end{aligned}
\]
They are depicted in Figure 34. Spherical and cylindrical coordinate systems are related to one another by the following formulas (note that the variable $\theta$ plays exactly the same role in the two cases):
\[
\rho = \sqrt{r^2+z^2}, \quad \varphi = \arctan\frac{r}{z}, \quad \theta = \theta;
\qquad\qquad
r = \rho\sin\varphi, \quad \theta = \theta, \quad z = \rho\cos\varphi. \qquad(73)
\]
The corresponding Jacobian matrix and the two determinants are:
\[
\frac{\partial(x,y,z)}{\partial(\rho,\varphi,\theta)} =
\begin{pmatrix}
\frac{\partial x}{\partial\rho} & \frac{\partial x}{\partial\varphi} & \frac{\partial x}{\partial\theta}\\[2pt]
\frac{\partial y}{\partial\rho} & \frac{\partial y}{\partial\varphi} & \frac{\partial y}{\partial\theta}\\[2pt]
\frac{\partial z}{\partial\rho} & \frac{\partial z}{\partial\varphi} & \frac{\partial z}{\partial\theta}
\end{pmatrix}
=
\begin{pmatrix}
\sin\varphi\cos\theta & \rho\cos\varphi\cos\theta & -\rho\sin\varphi\sin\theta\\
\sin\varphi\sin\theta & \rho\cos\varphi\sin\theta & \rho\sin\varphi\cos\theta\\
\cos\varphi & -\rho\sin\varphi & 0
\end{pmatrix},
\]
\[
\det\frac{\partial(x,y,z)}{\partial(\rho,\varphi,\theta)} = \rho^2\sin\varphi,
\qquad
\det\frac{\partial(\rho,\varphi,\theta)}{\partial(x,y,z)} = \frac{1}{\rho^2\sin\varphi}, \qquad(74)
\]
\[
\mathrm{d}V = \mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z = \rho^2\sin\varphi \,\mathrm{d}\rho\,\mathrm{d}\varphi\,\mathrm{d}\theta.
\]

Exercise 2.40. Verify the derivation of the Jacobian determinant in (74).
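Before (or after) doing the exercise by hand, the Jacobian determinant in (74) can be checked numerically at a sample point; the sketch below (an addition to the notes, assuming plain Python) builds the Jacobian matrix by central finite differences and compares its determinant with $\rho^2\sin\varphi$.

```python
import math

def sph_to_cart(rho, phi, th):
    """The change of variables (72)."""
    return (rho * math.sin(phi) * math.cos(th),
            rho * math.sin(phi) * math.sin(th),
            rho * math.cos(phi))

def jacobian_det(rho, phi, th, h=1e-6):
    """det d(x,y,z)/d(rho,phi,theta) by central finite differences."""
    cols = []
    for dr, dp, dt in ((h, 0, 0), (0, h, 0), (0, 0, h)):
        a = sph_to_cart(rho + dr, phi + dp, th + dt)
        b = sph_to_cart(rho - dr, phi - dp, th - dt)
        cols.append([(a[i] - b[i]) / (2 * h) for i in range(3)])
    m = [[cols[c][r] for c in range(3)] for r in range(3)]  # rows of the matrix
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

rho, phi, th = 2.0, 1.1, -0.4
assert abs(jacobian_det(rho, phi, th) - rho**2 * math.sin(phi)) < 1e-6
```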


Remark 2.41. The unit vectors in the direction of the spherical coordinates are
\[
\hat\rho = \frac{1}{\sqrt{x^2+y^2+z^2}}\,\big(x\,\hat\imath + y\,\hat\jmath + z\,\hat k\big),
\]

$^{18}$ We assign to colatitude and longitude the same symbols used in the textbook (see [1, Page 598]). However, in many other references the symbols $\varphi$ and $\theta$ are swapped, e.g. in Calvin Smith's vector calculus primer you might have used. According to http://en.wikipedia.org/wiki/Spherical_coordinates, the first notation is more common in mathematics and the second in physics. Different books, pages of Wikipedia and websites use different conventions: this may be a continual source of mistakes, watch out!
Also the naming of the radial variable can be an issue: we use $\rho$ for spherical coordinates and $r$ for cylindrical as in [1, Section 10.6], but, for instance, most pages of Wikipedia swap the two letters.


Figure 34: A representation of the spherical coordinates $\rho$, $\varphi$ and $\theta$ for the position vector $\vec r \in \mathbb{R}^3$. Note the angles $\varphi$ and $\theta$: it is important to understand well their definition. The big sphere is the set of points with equal radius $\rho$. The dashed circle is the set of points with equal radius and equal colatitude $\varphi$. The dashed half-circle is the set of points with equal radius and equal longitude $\theta$. The set of points with equal colatitude and equal longitude is the half line starting at the origin and passing through $\vec r$. The unit vectors $\hat\rho$, $\hat\varphi$ and $\hat\theta$ point in the directions of increase of the corresponding coordinates (see formulas (75)).

\[
\hat\varphi = \frac{1}{\sqrt{x^2+y^2+z^2}\,\sqrt{x^2+y^2}}\,\Big(xz\,\hat\imath + yz\,\hat\jmath - (x^2+y^2)\,\hat k\Big),
\qquad
\hat\theta = \frac{1}{\sqrt{x^2+y^2}}\,\big(-y\,\hat\imath + x\,\hat\jmath\big). \qquad(75)
\]
They are pictured in Figure 34. These may be used to draw curves: for example, the curve $\vec a(t)$ with spherical coordinates $\rho = 1$, $\varphi = t$, $\theta = 10t$, which is constrained to the unit sphere (since the $\rho$ coordinate of each of its points is one), is displayed in Figure 35.
Remark 2.42 (The gradient in spherical coordinates). The gradient of a scalar field $f$ expressed in spherical coordinates reads:
\[
\vec\nabla f(\rho,\varphi,\theta) = \frac{\partial f}{\partial\rho}\,\hat\rho + \frac{1}{\rho}\frac{\partial f}{\partial\varphi}\,\hat\varphi + \frac{1}{\rho\sin\varphi}\frac{\partial f}{\partial\theta}\,\hat\theta. \qquad(76)
\]
Since their expressions are quite messy, for the formulas of the divergence and the curl of a vector field $\vec F$ in spherical coordinates we refer to Section 16.7 of [1] (compare with those in cylindrical coordinates shown in Remark 2.37).
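Formula (76) can also be verified numerically (an illustrative addition, not in the notes). For the sample field $f = xz$, whose Cartesian gradient is $(z, 0, x)$, the sketch below evaluates (76) with finite-difference partial derivatives and converts the result back to Cartesian components using the unit vectors (75):

```python
import math

def f_sph(rho, phi, th):
    # f = x z written in spherical coordinates: rho^2 sin(phi) cos(phi) cos(th)
    return rho**2 * math.sin(phi) * math.cos(phi) * math.cos(th)

def grad_spherical(rho, phi, th, h=1e-6):
    """Gradient of f via formula (76), partial derivatives taken numerically;
    returned in Cartesian components via the unit vectors (75)."""
    dfr = (f_sph(rho + h, phi, th) - f_sph(rho - h, phi, th)) / (2 * h)
    dfp = (f_sph(rho, phi + h, th) - f_sph(rho, phi - h, th)) / (2 * h)
    dft = (f_sph(rho, phi, th + h) - f_sph(rho, phi, th - h)) / (2 * h)
    rhat = (math.sin(phi) * math.cos(th), math.sin(phi) * math.sin(th), math.cos(phi))
    phat = (math.cos(phi) * math.cos(th), math.cos(phi) * math.sin(th), -math.sin(phi))
    that = (-math.sin(th), math.cos(th), 0.0)
    c = (dfr, dfp / rho, dft / (rho * math.sin(phi)))   # the three terms of (76)
    return tuple(c[0] * rhat[i] + c[1] * phat[i] + c[2] * that[i] for i in range(3))

rho, phi, th = 1.5, 0.8, 0.3
x = rho * math.sin(phi) * math.cos(th)
z = rho * math.cos(phi)
g = grad_spherical(rho, phi, th)
exact = (z, 0.0, x)                 # Cartesian gradient of f = x z
assert all(abs(g[i] - exact[i]) < 1e-6 for i in range(3))
```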
Example 2.43 (Domain volume in spherical coordinates). Consider the domain
\[
D = \Big\{\vec r = x\,\hat\imath + y\,\hat\jmath + z\,\hat k \in \mathbb{R}^3, \text{ s.t. } \rho < R(\varphi,\theta)\Big\},
\]
where $R : [0,\pi] \times (-\pi,\pi] \to \mathbb{R}$ is a positive two-dimensional field. Its volume is
\[
\mathrm{Vol}(D) = \iiint_D \mathrm{d}V
= \int_{-\pi}^{\pi}\int_0^{\pi}\Big(\int_0^{R(\varphi,\theta)} \rho^2 \sin\varphi \,\mathrm{d}\rho\Big) \mathrm{d}\varphi\,\mathrm{d}\theta
= \frac{1}{3}\int_{-\pi}^{\pi}\int_0^{\pi} R^3(\varphi,\theta)\,\sin\varphi \,\mathrm{d}\varphi\,\mathrm{d}\theta.
\]
Figure 35: The curve $\vec a(t)$ with spherical coordinates $\rho = 1$, $\varphi = t$, $\theta = 10t$, for $0 \le t \le \pi$, lying on the unit sphere.
For $R(\varphi,\theta) = 1$, we recover the volume of the unit sphere $\frac{4\pi}{3}$ (write down the detailed computation as exercise).
For example, we compute the volume for the pumpkin-shaped domain $P$ of Figure 36
\[
P = \Big\{\vec r = x\,\hat\imath + y\,\hat\jmath + z\,\hat k \in \mathbb{R}^3, \text{ s.t. } \rho < (\sin\varphi)(5 + \cos 7\theta)\Big\}.
\]
From the formula above, we have
\[
\begin{aligned}
\mathrm{Vol}(P) &= \frac{1}{3}\int_{-\pi}^{\pi}\int_0^{\pi} \sin^3\varphi\,(5+\cos 7\theta)^3 \sin\varphi \,\mathrm{d}\varphi\,\mathrm{d}\theta\\
&= \frac{1}{3}\Big(\int_0^{\pi}\sin^4\varphi \,\mathrm{d}\varphi\Big)\Big(\int_{-\pi}^{\pi}(5+\cos 7\theta)^3 \,\mathrm{d}\theta\Big)\\
&= \frac{1}{3}\Big(\int_0^{\pi}\Big(\frac{1-\cos 2\varphi}{2}\Big)^2 \mathrm{d}\varphi\Big)
\Big(\int_{-\pi}^{\pi}\big(125 + 75\cos 7\theta + 15\cos^2 7\theta + \cos^3 7\theta\big) \,\mathrm{d}\theta\Big)\\
&= \frac{1}{3}\Big(\int_0^{\pi}\Big(\frac14 - \frac{\cos 2\varphi}{2} + \frac{\cos 4\varphi + 1}{8}\Big) \mathrm{d}\varphi\Big)
\Big(\int_{-\pi}^{\pi}\Big(125 + 75\cos 7\theta + 15\,\frac{1+\cos 14\theta}{2} + \cos 7\theta\,(1 - \sin^2 7\theta)\Big) \mathrm{d}\theta\Big)\\
&= \frac{1}{3}\Big[\frac{\varphi}{4} - \frac{\sin 2\varphi}{4} + \frac{\sin 4\varphi}{32} + \frac{\varphi}{8}\Big]_0^{\pi}
\Big[\frac{265}{2}\,\theta + \frac{75}{7}\sin 7\theta + \frac{15}{28}\sin 14\theta + \frac{1}{7}\sin 7\theta - \frac{1}{21}\sin^3 7\theta\Big]_{-\pi}^{\pi}\\
&= \frac{1}{3}\,\frac{3\pi}{8}\;265\,\pi = \frac{265\,\pi^2}{8} \approx 327,
\end{aligned}
\]
where we used twice the double-angle formula $\cos 2t = 1 - 2\sin^2 t = 2\cos^2 t - 1$ to expand the square of a sine or a cosine, while the cubic power is integrated using $\cos^3 t = \cos t - \cos t\sin^2 t = \big(\sin t - \frac13\sin^3 t\big)'$.
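A long computation like this one is exactly where a numerical cross-check pays off. The sketch below (an addition, assuming plain Python) approximates the two one-dimensional factors with a midpoint rule, which is very accurate here because both integrands are smooth and periodic on their intervals:

```python
import math

def pumpkin_volume(n=2000):
    """Midpoint rule for
    Vol(P) = (1/3) * int_0^pi sin(phi)^4 dphi * int_{-pi}^{pi} (5 + cos(7 th))^3 dth."""
    dphi = math.pi / n
    dth = 2 * math.pi / n
    s_phi = sum(math.sin((i + 0.5) * dphi)**4 for i in range(n)) * dphi
    s_th = sum((5 + math.cos(7 * (-math.pi + (j + 0.5) * dth)))**3
               for j in range(n)) * dth
    return s_phi * s_th / 3.0

V = pumpkin_volume()
assert abs(V - 265 * math.pi**2 / 8) < 1e-4   # confirms 265 pi^2 / 8 ~ 327
```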
Exercise 2.44. Compute the integral of the field $f = \rho^4(\sin^2\varphi + \cos 12\theta)$ on the unit ball $B$ (i.e. the ball centred at the origin with radius 1).
Example 2.45 (Surface integrals in spherical coordinates). Not surprisingly, spherical coordinates can be useful to compute surface integrals. For example we compute the area of the sphere $S = \{\vec r \in \mathbb{R}^3,\; \rho = R\}$ of radius $R > 0$ centred at the origin. On $S$, the coordinate $\rho$ does not vary and the curvilinear area element $\mathrm{d}S$ is equal to $R^2\sin\varphi \,\mathrm{d}\varphi\,\mathrm{d}\theta$:
\[
\mathrm{Area}(S) = \iint_S \mathrm{d}S = \int_{-\pi}^{\pi}\int_0^{\pi} R^2\sin\varphi \,\mathrm{d}\varphi\,\mathrm{d}\theta = 4\pi R^2.
\]
We compute the flux (58) of the vector field $\vec F = z\,\hat k = \rho\cos\varphi\,\hat k$ through $S$, equipped with the outward pointing unit normal vector $\hat n(\vec r) = \frac{\vec r}{\rho} = \hat\rho$:
\[
\begin{aligned}
\iint_S \vec F\cdot\hat n \,\mathrm{d}S
&= \iint_S z\,\hat k\cdot\hat\rho \,\mathrm{d}S
\qquad \Big(\text{using } \hat k\cdot\hat\rho \overset{(75)}{=} \frac{z}{\rho} \overset{(72)}{=} \cos\varphi\Big)\\
&= \int_{-\pi}^{\pi}\int_0^{\pi} (R\cos\varphi)(\cos\varphi)\,R^2\sin\varphi \,\mathrm{d}\varphi\,\mathrm{d}\theta\\
&= 2\pi R^3 \int_0^{\pi} \cos^2\varphi\,\sin\varphi \,\mathrm{d}\varphi
= 2\pi R^3 \Big[-\frac13\cos^3\varphi\Big]_{\varphi=0}^{\pi} = \frac{4}{3}\pi R^3.
\end{aligned}
\]
(Draw the field $\vec F$ and the sphere $S$: can you deduce the sign of $\iint_S \vec F\cdot\hat n \,\mathrm{d}S$ from the plot?)

Figure 36: The pumpkin domain $P = \{\vec r \in \mathbb{R}^3, \text{ s.t. } \rho < (\sin\varphi)(5+\cos 7\theta)\}$ whose volume was computed in Example 2.43.

       Coordinate system   Coordinates                       Measure                             Domain
       ------------------------------------------------------------------------------------------------------------------------
  R²   Cartesian           x, y                              dA = dx dy                          x, y ∈ ℝ
  R²   Polar               x = r cos θ, y = r sin θ          dA = dx dy = r dr dθ                r ≥ 0, θ ∈ (−π, π]
  R³   Cartesian           x, y, z                           dV = dx dy dz                       x, y, z ∈ ℝ
  R³   Cylindrical         x = r cos θ, y = r sin θ,         dV = dx dy dz = r dr dθ dz          r ≥ 0, θ ∈ (−π, π], z ∈ ℝ
                           z = z
  R³   Spherical           x = ρ sin φ cos θ,                dV = dx dy dz = ρ² sin φ dρ dφ dθ   ρ ≥ 0, φ ∈ [0, π], θ ∈ (−π, π]
                           y = ρ sin φ sin θ, z = ρ cos φ

Table 1: Summary of the coordinate systems described in Section 2.3.
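The flux computed in Example 2.45 can be double-checked numerically (an illustrative addition, not part of the notes), again with a midpoint rule in the colatitude:

```python
import math

def flux_zk_through_sphere(R, n=2000):
    """Midpoint rule for the flux of F = z k through the sphere of radius R:
    int_{-pi}^{pi} int_0^pi (R cos(phi)) (cos(phi)) R^2 sin(phi) dphi dtheta."""
    dphi = math.pi / n
    s = sum(math.cos((i + 0.5) * dphi)**2 * math.sin((i + 0.5) * dphi)
            for i in range(n)) * dphi
    return 2 * math.pi * R**3 * s   # the theta-integral contributes 2*pi

R = 2.0
assert abs(flux_zk_through_sphere(R) - 4 * math.pi * R**3 / 3) < 1e-3
```

The positive sign matches the picture: $\vec F = z\,\hat k$ points outward on the upper hemisphere and also outward (downward) on the lower one.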

3  Green's, divergence and Stokes' theorems

The first two sections of these notes were mainly devoted to the study of derivatives (better: differential operators) and integrals of scalar and vector fields. This third section will focus on some important relations between differential operators and integrals.
We recall that the fundamental theorem of calculus gives the basic relation between differentiation and integration of real functions. It states that, for any real smooth function $f : \mathbb{R} \to \mathbb{R}$, and for any two real numbers $a < b$, the formula
\[
\int_a^b \frac{\mathrm{d}f}{\mathrm{d}t}(t) \,\mathrm{d}t = f(b) - f(a) \qquad(77)
\]

holds. This is often understood as "integration reverses differentiation". The fundamental theorem of vector calculus (42) studied in Section 2.1.3 extends this result to the integration of gradients along curves:
\[
\int_{\vec p}^{\vec q} \vec\nabla\varphi(\vec r)\cdot\mathrm{d}\vec r = \varphi(\vec q) - \varphi(\vec p). \qquad(78)
\]
Here the integral at the left-hand side is the line integral along any path going from $\vec p$ to $\vec q$.
How can we extend this to multiple integrals and partial derivatives? We will see several different extensions. The integral at the left-hand side of equation (77) will become a double or a triple integral and the derivative will become a vector differential operator involving partial derivatives. How is the evaluation of $f$ at the domain endpoints (i.e. $f(b) - f(a)$) generalised to higher dimensions? The set of the two values $\{a,b\} \subset \mathbb{R}$ can be thought of as the boundary of the interval $(a,b)$, and similarly the points $\{\vec p, \vec q\} \subset \mathbb{R}^3$ are the boundary of a path $\Gamma$. In the same way, when integrating a differential operator applied to a field over a two- or three-dimensional domain $D$, we will obtain a certain integral of the same field over the boundary $\partial D$ of $D$.$^{19}$
In the fundamental theorem of calculus (either the scalar one (77) or the vector version (78)), the values at the endpoints are summed according to a precise choice of the signs: the value at the initial point ($f(a)$ or $\varphi(\vec p)$) is subtracted from the value at the final point ($f(b)$ or $\varphi(\vec q)$). This suggests that the boundary integrals will involve oriented paths (for the boundaries of two-dimensional domains) and oriented surfaces (for the boundaries of three-dimensional domains); see Section 2.2.5. The unit tangent field $\hat t$ and the unit normal field $\hat n$ will play an important role in assigning a sign to the integrals of vector fields on the boundaries of two- and three-dimensional domains, respectively.
The most important results of this section are collected in three main theorems. Green's theorem 3.4 allows to compute the double integral of a component of the curl of a vector field as a path integral. The divergence theorem 3.12 (see also 3.8) states that the volume integral of the divergence of a vector field equals the flux of the same field through the domain boundary. This theorem holds in any dimension, is probably the most used in applications, and is the most direct generalisation of the fundamental theorem of calculus to multiple integrals. Finally, Stokes' theorem 3.22 generalises Green's theorem to oriented surfaces, equating the flux of the curl of a field to the boundary circulation of the same field$^{20}$. All their proofs are quite similar to each other and rely on the use of the fundamental theorem of calculus. We will also prove several other important identities. Table 2 at the end of the section collects the main formulas obtained.
To fix the notation, from now on we will use the letter $R$ to denote two-dimensional domains (or regions) and the letter $D$ to denote three-dimensional domains. We will always assume that they are piecewise smooth, namely their boundaries are unions of smooth parts ($\partial R$ is a union of smooth paths and $\partial D$ is a union of smooth surfaces). Green's, divergence and Stokes' theorems concern planar regions $R$, three-dimensional domains $D$ and surfaces $S$, respectively.
$^{19}$ You may think of the difference $f(b) - f(a)$ as the "signed integral" of $f$ over the zero-dimensional set $\{a,b\}$, or as the integral of $f$ over the "oriented" set $\{a,b\}$.
$^{20}$ These theorems receive their names from George Green (1793–1841) and Sir George Gabriel Stokes (1819–1903). The divergence theorem is sometimes called Gauss' theorem from Johann Carl Friedrich Gauss (1777–1855). Stokes' theorem is also known as Kelvin–Stokes theorem (from William Thomson, 1st Baron Kelvin, 1824–1907) or curl theorem.


Remark 3.1 (Smoothness and integrability). As in the previous sections, when stating and proving theorems and formulas, we will not be precise with the assumptions on the regularity of the fields. Even assuming smooth fields, meaning $C^\infty$, may not be enough. For instance, the function $f(t) = \frac{1}{t}$ is perfectly smooth in the open interval $(0,1)$, but its integral $\int_0^1 f(t)\,\mathrm{d}t$ is not bounded (i.e. is infinite). Roughly speaking, possible assumptions for the theorems we will prove are that all the derivatives involved in the formulas are continuous in the closure of the considered domains (even if we usually consider the domains to be open sets). However, we have already seen in Example 2.21 (the "smile" domain, see also Figure 27) that we can easily integrate fields tending to infinity on the domain boundary. An even more complicated issue is related to the regularity of domains, curves and surfaces. We will ignore this problem and always implicitly assume that all the geometric objects considered are sufficiently well-behaved (or smooth).

3.1  Green's theorem

In this section, we fix a two-dimensional, piecewise smooth, connected, bounded region $R \subset \mathbb{R}^2$. Connected means that for any two points $\vec p, \vec q \in R$ there exists a path lying entirely in $R$ with endpoints $\vec p$ and $\vec q$. Bounded means that $\sup\{|\vec r|,\ \vec r \in R\} < \infty$, i.e. the region is contained in some ball of finite radius ($R$ does not "go to infinity"). The boundary $\partial R$ of $R$ is composed of one or more loops (closed paths); in the first case, the domain has no holes and is called simply connected$^{21}$.
We consider the boundary as an oriented path, with the orientation induced by $R$ after setting the unit normal field on $R$ to be equal to $\hat k$ (see the last paragraph of Section 2.2.5). In other words, if we draw $R$ on the ground and walk on the path $\partial R$ according to its orientation, we will see the region $R$ at our left. The external part of $\partial R$ is run anticlockwise, the boundary of every hole (if any is present) is run clockwise; see Figure 37.
Figure 37: The shaded area represents a two-dimensional region $R$. $R$ is connected (composed of only one piece), piecewise smooth (its boundary is the union of four smooth paths), but not simply connected since it contains a hole (its boundary is composed of two loops). Its boundary $\partial R$ is composed of two oriented paths which inherit the orientation from $R$: the external path is run anticlockwise and the inner one clockwise. In both cases, if we proceed along the path we see the region $R$ on our left. $\hat t$ is the unit tangent vector to $\partial R$ and $\hat n$ is the outward-pointing unit normal vector.
Remark 3.2. The line integrals we are using here are slightly more general than those seen in the previous sections. If $R$ is not simply connected, its boundary is composed of two or more loops $\Gamma_1, \ldots, \Gamma_n$. In this case the integrals on $\partial R$ are meant as sums of integrals: $\int_{\partial R} = \int_{\Gamma_1 \cup \cdots \cup \Gamma_n} = \int_{\Gamma_1} + \cdots + \int_{\Gamma_n}$.
Since all boundaries are loops (i.e. paths starting and ending at the same points), the initial point of integration is irrelevant.
$^{21}$ Note that the definition of simply-connected domains in three dimensions is quite different from the two-dimensional definition we use here.


We prove an important lemma, which contains the essence of Green's theorem. We recall that the notation $\int_\Gamma f \,\mathrm{d}x$ and $\int_\Gamma f \,\mathrm{d}y$ for an oriented path $\Gamma$ and a scalar field $f$ was defined in equation (40).
Lemma 3.3. Consider a smooth scalar field $f$ defined on a two-dimensional region $R \subset \mathbb{R}^2$. Then
\[
\iint_R \frac{\partial f}{\partial y} \,\mathrm{d}A = -\oint_{\partial R} f \,\mathrm{d}x, \qquad(79)
\]
\[
\iint_R \frac{\partial f}{\partial x} \,\mathrm{d}A = \oint_{\partial R} f \,\mathrm{d}y. \qquad(80)
\]
Note that in (79) the double integral of the derivative in $y$ is associated to the line integral in $\mathrm{d}x$, as defined in (40); the roles of $x$ and $y$ are swapped in (80). The asymmetry in the sign is due to our choice of the anticlockwise orientation of $\partial R$.
Proof of Lemma 3.3. We prove only the first identity (79); the second (80) follows in a similar way. The main ingredient of the proof is the use of the fundamental theorem of vector calculus in the $y$-direction to reduce the double integral to a line integral. We split the proof in two main steps.
Part 1. We first consider a $y$-simple domain $R$, as in formula (45):
\[
R = \Big\{x\,\hat\imath + y\,\hat\jmath \in \mathbb{R}^2, \text{ s.t. } x_L < x < x_R,\; a(x) < y < b(x)\Big\},
\]
where $x_L < x_R$, $a, b : (x_L, x_R) \to \mathbb{R}$, $a(x) \le b(x)$ for all $x \in (x_L, x_R)$. We compute the double integral at the left-hand side of (79) using the formula for the iterated integral (46) and the fundamental theorem of calculus (77) applied to the partial derivative of $f$ in the $y$-direction:
\[
\begin{aligned}
\iint_R \frac{\partial f}{\partial y} \,\mathrm{d}A
&\overset{(46)}{=} \int_{x_L}^{x_R}\Big(\int_{a(x)}^{b(x)} \frac{\partial f}{\partial y}(x,y) \,\mathrm{d}y\Big) \mathrm{d}x\\
&\overset{(77)}{=} \int_{x_L}^{x_R} \Big[f(x,y)\Big]_{y=a(x)}^{y=b(x)} \mathrm{d}x \qquad(81)\\
&= \int_{x_L}^{x_R} \Big(f\big(x,b(x)\big) - f\big(x,a(x)\big)\Big) \,\mathrm{d}x.
\end{aligned}
\]

We now consider the boundary integral at the right-hand side of the assertion (79). We split it in four components, corresponding to the four oriented paths in which the boundary $\partial R$ is divided, as in the left plot of Figure 38:
\[
\oint_{\partial R} f \,\mathrm{d}x = \int_{\Gamma_S} f \,\mathrm{d}x + \int_{\Gamma_E} f \,\mathrm{d}x + \int_{\Gamma_N} f \,\mathrm{d}x + \int_{\Gamma_W} f \,\mathrm{d}x.
\]
The corresponding paths are parametrised by the following curves$^{22}$:
\[
\begin{aligned}
\Gamma_S:\ &\vec c_S(t) = (x_L + t)\,\hat\imath + a(x_L + t)\,\hat\jmath, && \frac{\mathrm{d}\vec c_S}{\mathrm{d}t}(t) = \hat\imath + a'(x_L + t)\,\hat\jmath, && 0 < t < x_R - x_L,\\
\Gamma_E:\ &\vec c_E(t) = x_R\,\hat\imath + \big(a(x_R) + t\big)\,\hat\jmath, && \frac{\mathrm{d}\vec c_E}{\mathrm{d}t}(t) = \hat\jmath, && 0 < t < b(x_R) - a(x_R),\\
\Gamma_N:\ &\vec c_N(t) = (x_R - t)\,\hat\imath + b(x_R - t)\,\hat\jmath, && \frac{\mathrm{d}\vec c_N}{\mathrm{d}t}(t) = -\hat\imath - b'(x_R - t)\,\hat\jmath, && 0 < t < x_R - x_L,\\
\Gamma_W:\ &\vec c_W(t) = x_L\,\hat\imath + \big(b(x_L) - t\big)\,\hat\jmath, && \frac{\mathrm{d}\vec c_W}{\mathrm{d}t}(t) = -\hat\jmath, && 0 < t < b(x_L) - a(x_L).
\end{aligned}
\]

We first notice that the two vertical paths $\Gamma_E$ and $\Gamma_W$ do not give any contribution to the integral at the right-hand side of (79). Indeed, the integrals $\int_\Gamma \mathrm{d}x$ take into account only the horizontal component of the curve total derivative (see the definition in (40)), which is zero on these two segments:
\[
\int_{\Gamma_E} f \,\mathrm{d}x \overset{(40)}{=} \int_{\Gamma_E} f\,\hat\imath\cdot\mathrm{d}\vec r
\overset{(38)}{=} \int_0^{b(x_R)-a(x_R)} f\big(\vec c_E(t)\big)\,\hat\imath\cdot\frac{\mathrm{d}\vec c_E}{\mathrm{d}t}(t) \,\mathrm{d}t
= \int_0^{b(x_R)-a(x_R)} f\big(\vec c_E(t)\big)\cdot 0 \,\mathrm{d}t = 0;
\]
$^{22}$ Note that the path $\Gamma_W$ collapses to a point if $a(x_L) = b(x_L)$, and similarly the path $\Gamma_E$ if $a(x_R) = b(x_R)$; see an example in Figure 22.

and similarly on $\Gamma_W$. Similarly, on the two remaining sides, the horizontal component of the curve total derivative is $\pm 1$, which simplifies the computation:
\[
\begin{aligned}
\oint_{\partial R} f \,\mathrm{d}x
&= \int_{\Gamma_S} f \,\mathrm{d}x + \underbrace{\int_{\Gamma_E} f \,\mathrm{d}x}_{=0} + \int_{\Gamma_N} f \,\mathrm{d}x + \underbrace{\int_{\Gamma_W} f \,\mathrm{d}x}_{=0}\\
&\overset{(40)}{=} \int_0^{x_R-x_L} f\big(\vec c_S(t)\big)\underbrace{\frac{\partial (c_S)_1}{\partial t}(t)}_{=1} \,\mathrm{d}t
+ \int_0^{x_R-x_L} f\big(\vec c_N(t)\big)\underbrace{\frac{\partial (c_N)_1}{\partial t}(t)}_{=-1} \,\mathrm{d}t\\
&= \int_0^{x_R-x_L} f\big(x_L + t,\, a(x_L + t)\big) \,\mathrm{d}t - \int_0^{x_R-x_L} f\big(x_R - t,\, b(x_R - t)\big) \,\mathrm{d}t\\
&= \int_{x_L}^{x_R} \Big(f\big(x, a(x)\big) - f\big(x, b(x)\big)\Big) \,\mathrm{d}x
\overset{(81)}{=} -\iint_R \frac{\partial f}{\partial y} \,\mathrm{d}A,
\end{aligned}
\]
which is the assertion (79). We have concluded the proof for $y$-simple domains.
Figure 38: The proof of Lemma 3.3: the decomposition in four paths of the boundary of a $y$-simple domain (left plot, part 1 of the proof), and the decomposition of the region $R$ in two subregions $R_1$ and $R_2$ (right plot, part 2 of the proof). The interface $\Gamma = \partial R_1 \cap \partial R_2$ has the same orientation of $\partial R_1$ and opposite to $\partial R_2$.
Part 2. Every bounded, piecewise smooth region can be split in a finite number of $y$-simple subregions (this needs to be proved, we take it for granted here). Thus, to conclude the proof, we only need to show that if a region $R$ is the union of two non-overlapping subregions $R_1$ and $R_2$ (i.e. $R_1 \cup R_2 = R$ and $R_1 \cap R_2 = \emptyset$, being quite imprecise with open and closed sets) such that on both $R_1$ and $R_2$ equation (79) holds true, then the same equation holds also on the whole of $R$.
We denote with $\Gamma = \partial R_1 \cap \partial R_2$ the intersection of the boundaries of the two subregions. This is a path that cuts $R$ in two; if the domain $R$ is not simply-connected, i.e. it contains some holes, then the interface $\Gamma$ might be composed of two or more disconnected paths. We fix on $\Gamma$ the same orientation of $\partial R_1$, and we note that this is the opposite of the orientation of $\partial R_2$; see the right plot of Figure 38. Combining the integrals over different parts of the boundaries involved and using the relations between the sign of line integrals and path


orientations we conclude:
\[
\begin{aligned}
\iint_R \frac{\partial f}{\partial y} \,\mathrm{d}A
&= \iint_{R_1} \frac{\partial f}{\partial y} \,\mathrm{d}A + \iint_{R_2} \frac{\partial f}{\partial y} \,\mathrm{d}A
&& \text{from } R = R_1 \cup R_2\\
&= -\oint_{\partial R_1} f \,\mathrm{d}x - \oint_{\partial R_2} f \,\mathrm{d}x
&& \text{since (79) holds in } R_1, R_2\\
&= -\Big(\int_{\partial R_1 \setminus \Gamma} f \,\mathrm{d}x + \int_{\Gamma} f \,\mathrm{d}x\Big)
 - \Big(\int_{\partial R_2 \setminus \Gamma} f \,\mathrm{d}x + \int_{-\Gamma} f \,\mathrm{d}x\Big)\\
&= -\int_{\partial R_1 \setminus \Gamma} f \,\mathrm{d}x - \int_{\partial R_2 \setminus \Gamma} f \,\mathrm{d}x
 - \underbrace{\Big(\int_{\Gamma} f \,\mathrm{d}x - \int_{\Gamma} f \,\mathrm{d}x\Big)}_{=0}\\
&= -\int_{(\partial R_1 \setminus \Gamma) \cup (\partial R_2 \setminus \Gamma)} f \,\mathrm{d}x\\
&= -\oint_{\partial R} f \,\mathrm{d}x
&& \text{since } \partial R = (\partial R_1 \setminus \Gamma) \cup (\partial R_2 \setminus \Gamma).
\end{aligned}
\]
(To understand this part of the proof observe carefully in the right plot of Figure 38 how the different geometric objects are related to each other.)
Green's theorem immediately follows from Lemma 3.3.
Theorem 3.4 (Green's theorem). Consider a smooth vector field $\vec F(x,y) = F_1(x,y)\,\hat\imath + F_2(x,y)\,\hat\jmath$ defined on a two-dimensional region $R \subset \mathbb{R}^2$. Then
\[
\iint_R \Big(\frac{\partial F_2}{\partial x} - \frac{\partial F_1}{\partial y}\Big) \,\mathrm{d}A
= \oint_{\partial R} \big(F_1 \,\mathrm{d}x + F_2 \,\mathrm{d}y\big), \qquad(82)
\]
which can also be written as
\[
\iint_R \big(\vec\nabla\times\vec F\big)\cdot\hat k \,\mathrm{d}A = \oint_{\partial R} \vec F\cdot\mathrm{d}\vec r. \qquad(83)
\]
Proof. Equation (82) follows from choosing $f = F_1$ in (79), $f = F_2$ in (80) and summing the two identities obtained. Equation (83) is obtained from the definition of the curl (20) and the expansion of the line integral (41).
Note (Green's theorem for three-dimensional fields in two-dimensional domains). We have stated Green's theorem for a two-dimensional vector field $\vec F(x,y) = F_1\,\hat\imath + F_2\,\hat\jmath$. If $\vec F$ is any three-dimensional, smooth vector field, i.e. $\vec F(x,y,z) = F_1\,\hat\imath + F_2\,\hat\jmath + F_3\,\hat k$ may depend also on the $z$ coordinate and have three non-zero components, formula (83) still holds true. In this case, we think of $R \subset \mathbb{R}^2$ as lying in the $xy$-plane of $\mathbb{R}^3$ (the plane $\{z = 0\}$). Indeed, the left-hand side of the equation is not affected by this modification as (from the definition of the curl (20)) the third component of the curl $(\vec\nabla\times\vec F)_3 = \frac{\partial F_2}{\partial x} - \frac{\partial F_1}{\partial y}$ involves neither $F_3$ nor the partial derivative in $z$. Similarly, the right-hand side of (83) is not modified because the circulation of $\vec F$ along $\partial R$ is the integral of the scalar product of $\vec F$ and the total derivative of the curve defining $\partial R$ (see (38)), whose $z$-component vanishes, thus the component $F_3$ does not contribute to the line integral.
Example 3.5. Use Green's theorem to compute the circulation of $\vec F = (2xy + y^2)\,\hat\imath + x^2\,\hat\jmath$ along the boundary of the bounded domain $R$ delimited by the line $y = x$ and the parabola $y = x^2$.
The domain $R$ can be written as $R = \{\vec r \in \mathbb{R}^2,\ 0 < x < 1,\ x^2 < y < x\}$; see the left plot in Figure 39. The curl of the given field is $\vec\nabla\times\vec F = (2x - 2x - 2y)\,\hat k = -2y\,\hat k$. Then, Green's theorem gives
\[
\oint_{\partial R} \vec F\cdot\mathrm{d}\vec r
\overset{(83)}{=} \iint_R \big(\vec\nabla\times\vec F\big)_3 \,\mathrm{d}A
\overset{(46)}{=} \int_0^1\Big(\int_{x^2}^{x} (-2y) \,\mathrm{d}y\Big) \mathrm{d}x
= \int_0^1 (-x^2 + x^4) \,\mathrm{d}x = \frac15 - \frac13 = -\frac{2}{15}.
\]
Of course, it is possible to directly compute the circulation, but the calculation is a bit longer, as we have to split the boundary in the straight segment $\Gamma_S$ (e.g. parametrised by

$\vec a(t) = (1-t)(\hat\imath + \hat\jmath)$, $0 < t < 1$) and the parabolic arc $\Gamma_P$ (e.g. parametrised by $\vec b(t) = t\,\hat\imath + t^2\,\hat\jmath$, $0 < t < 1$). (Recall that the boundary must be oriented anticlockwise.) Then we have
\[
\begin{aligned}
\oint_{\partial R} \vec F\cdot\mathrm{d}\vec r
&= \int_{\Gamma_P} \vec F\cdot\mathrm{d}\vec r + \int_{\Gamma_S} \vec F\cdot\mathrm{d}\vec r\\
&= \int_0^1 \big((2t^3 + t^4)\,\hat\imath + t^2\,\hat\jmath\big)\cdot(\hat\imath + 2t\,\hat\jmath) \,\mathrm{d}t
+ \int_0^1 \big(3(1-t)^2\,\hat\imath + (1-t)^2\,\hat\jmath\big)\cdot\big(-\hat\imath - \hat\jmath\big) \,\mathrm{d}t\\
&= \int_0^1 (4t^3 + t^4) \,\mathrm{d}t - 4\int_0^1 (1-t)^2 \,\mathrm{d}t
= 1 + \frac15 - \frac43 = -\frac{2}{15}.
\end{aligned}
\]
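Both sides of Green's theorem for this example can be approximated numerically (an illustrative addition, not part of the notes), which is a good habit when checking hand computations:

```python
import math

F = lambda x, y: (2 * x * y + y**2, x**2)   # the field of Example 3.5

def curl_integral(n=500):
    """Double integral of (curl F)_3 = -2y over R = {0 < x < 1, x^2 < y < x},
    by a midpoint rule (the left-hand side of (83))."""
    dx = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * dx
        dy = (x - x**2) / n
        for j in range(n):
            y = x**2 + (j + 0.5) * dy
            total += -2 * y * dy * dx
    return total

def circulation(n=2000):
    """Anticlockwise circulation (the right-hand side of (83)): parabola
    (t, t^2) from (0,0) to (1,1), then the segment (1-t, 1-t) back."""
    dt = 1.0 / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * dt
        f1, f2 = F(t, t**2)
        total += (f1 + 2 * t * f2) * dt      # r'(t) = (1, 2t) on the parabola
        g1, g2 = F(1 - t, 1 - t)
        total += (-g1 - g2) * dt             # r'(t) = (-1, -1) on the segment
    return total

assert abs(curl_integral() + 2 / 15) < 1e-4
assert abs(circulation() + 2 / 15) < 1e-4
```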

Figure 39: Left plot: the domain $R = \{\vec r \in \mathbb{R}^2,\ 0 < x < 1,\ x^2 < y < x\}$ in Example 3.5. Right plot: the half ellipse $R = \{x\,\hat\imath + y\,\hat\jmath \in \mathbb{R}^2,\ \frac14 x^2 + y^2 < 1,\ y > 0\}$ in Example 3.6.
Example 3.6. Demonstrate Green's theorem for $\vec F = y^2\,\hat\imath$ on the half-ellipse $R = \{x\,\hat\imath + y\,\hat\jmath \in \mathbb{R}^2,\ \frac14 x^2 + y^2 < 1,\ y > 0\}$ (right plot in Figure 39).
Since the domain $R$ is $y$-simple, we compute the double integral at the left-hand side of (83) as an iterated integral:
\[
\iint_R \big(\vec\nabla\times\vec F\big)_3 \,\mathrm{d}A
\overset{(20)}{=} \iint_R (-2y) \,\mathrm{d}A
\overset{(46)}{=} \int_{-2}^{2}\Big(\int_0^{\sqrt{1-\frac{x^2}{4}}} (-2y) \,\mathrm{d}y\Big) \mathrm{d}x
= -\int_{-2}^{2} \Big(1 - \frac{x^2}{4}\Big) \mathrm{d}x = -4 + \frac43 = -\frac83.
\]
The boundary of $R$ is composed of two parts. The horizontal segment $\{-2 < x < 2,\ y = 0\}$ does not give any contribution to the line integral at the right-hand side of (83), since $\vec F$ vanishes on the $x$ axis. The upper arc is parametrised by the curve $\vec a(t) = 2\cos t\,\hat\imath + \sin t\,\hat\jmath$ for $0 < t < \pi$ (recall that it needs to be oriented anticlockwise), whose total derivative is $\frac{\mathrm{d}\vec a}{\mathrm{d}t}(t) = -2\sin t\,\hat\imath + \cos t\,\hat\jmath$. So the line integral reads:
\[
\begin{aligned}
\oint_{\partial R} \vec F\cdot\mathrm{d}\vec r
&= \int_{\partial R \cap \{y>0\}} \vec F\cdot\mathrm{d}\vec r\\
&\overset{(38)}{=} \int_0^{\pi} y^2\,\hat\imath\cdot\big(-2\sin t\,\hat\imath + \cos t\,\hat\jmath\big) \,\mathrm{d}t\\
&= \int_0^{\pi} -2y^2\sin t \,\mathrm{d}t \qquad \big(\text{now use } y = a_2(t) = \sin t \text{ on } \partial R \cap \{y>0\}\big)\\
&= \int_0^{\pi} -2\sin^3 t \,\mathrm{d}t
= \int_0^{\pi} \big(2\sin t\cos^2 t - 2\sin t\big) \,\mathrm{d}t
= \Big[-\frac23\cos^3 t + 2\cos t\Big]_0^{\pi} = -\frac83,
\end{aligned}
\]
and we have proved that the right- and the left-hand sides of formula (83) are equal to each other, for this choice of domain $R$ and field $\vec F$.
Example 3.7 (Measuring areas with line integrals). A (perhaps) surprising application of Green's theorem is the possibility of measuring the area of a planar region $R$ by computing only line integrals on its boundary$^{23}$. To do this, we need a vector field $\vec F$ such that the
$^{23}$ The possibility of computing areas with line integrals should not be a surprise: we already know that the area of a $y$-simple domain (45), for example, can be computed as
\[
\mathrm{Area}(R) = \iint_R \mathrm{d}A = \int_{x_L}^{x_R}\Big(\int_{a(x)}^{b(x)} \mathrm{d}y\Big) \mathrm{d}x = \int_{x_L}^{x_R} \big(b(x) - a(x)\big) \,\mathrm{d}x,
\]
which is nothing else than $-\oint_{\partial R} y \,\mathrm{d}x$.

integrand in (82) is one, namely $\frac{\partial F_2}{\partial x} - \frac{\partial F_1}{\partial y} = 1$. Several choices are possible, for instance $\vec F = -y\,\hat\imath$, $\vec F = x\,\hat\jmath$, or $\vec F = \frac12(-y\,\hat\imath + x\,\hat\jmath)$. Each of these gives rise to a line integral that computes the area of $R$:
\[
\mathrm{Area}(R) = \iint_R \mathrm{d}A
= -\oint_{\partial R} y \,\mathrm{d}x
= \oint_{\partial R} x \,\mathrm{d}y
= \frac12 \oint_{\partial R} \big(x \,\mathrm{d}y - y \,\mathrm{d}x\big).
\]
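The last of these formulas is easy to try in code (an illustrative addition, not part of the notes); for the unit disc with its standard anticlockwise boundary parametrisation the integrand is constant and the midpoint rule reproduces $\pi$ essentially exactly:

```python
import math

def area_line_integral(n=1000):
    """Area(R) = (1/2) * oint_{dR} (x dy - y dx) for the unit disc,
    with boundary a(t) = (cos t, sin t), t in (-pi, pi]."""
    dt = 2 * math.pi / n
    total = 0.0
    for i in range(n):
        t = -math.pi + (i + 0.5) * dt
        x, y = math.cos(t), math.sin(t)
        dx, dy = -math.sin(t), math.cos(t)   # derivative of the parametrisation
        total += 0.5 * (x * dy - y * dx) * dt
    return total

assert abs(area_line_integral() - math.pi) < 1e-9
```

This is essentially the idea behind the "shoelace" formula for polygon areas and behind mechanical planimeters.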
In Section 2.1.2, we defined the unit tangent vector field $\hat t = \frac{\mathrm{d}\vec a}{\mathrm{d}t}\big/\big|\frac{\mathrm{d}\vec a}{\mathrm{d}t}\big|$ of an oriented path $\Gamma$ defined by the curve $\vec a$. If $\Gamma = \partial R$ is the boundary of $R$ (with the orientation convention described above), then the outward pointing unit normal vector $\hat n$ on $\partial R$ is related to the unit tangent $\hat t$ by the formulas
\[
\hat n = \hat t\times\hat k = t_2\,\hat\imath - t_1\,\hat\jmath, \qquad
\hat t = \hat k\times\hat n = -n_2\,\hat\imath + n_1\,\hat\jmath, \qquad
(n_1 = t_2,\ n_2 = -t_1); \qquad(84)
\]
see Figure 37. (Note that, if $R$ contains a hole, then on the corresponding part of the boundary the vector field $\hat n$ points out of $R$ and into the hole.) For a smooth vector field $\vec F = F_1\,\hat\imath + F_2\,\hat\jmath$, choosing $f = F_1$ in (80), $f = F_2$ in (79), and summing the resulting equations we obtain
\[
\begin{aligned}
\oint_{\partial R} \big(F_1 \,\mathrm{d}y - F_2 \,\mathrm{d}x\big)
&\overset{(79),(80)}{=} \iint_R \Big(\frac{\partial F_1}{\partial x} + \frac{\partial F_2}{\partial y}\Big) \,\mathrm{d}A,\\
\oint_{\partial R} \big(F_1 \,\mathrm{d}y - F_2 \,\mathrm{d}x\big)
&\overset{(40)}{=} \oint_{\partial R} \big(-F_2\,\hat\imath + F_1\,\hat\jmath\big)\cdot\mathrm{d}\vec r\\
&\overset{(4)}{=} \oint_{\partial R} \big(\hat k\times\vec F\big)\cdot\mathrm{d}\vec r\\
&\overset{(39)}{=} \oint_{\partial R} \big(\hat k\times\vec F\big)\cdot\hat t \,\mathrm{d}s\\
&\overset{\text{Ex. 1.14}}{=} \oint_{\partial R} \big(\hat t\times\hat k\big)\cdot\vec F \,\mathrm{d}s\\
&\overset{(84)}{=} \oint_{\partial R} \vec F\cdot\hat n \,\mathrm{d}s.
\end{aligned}
\]
This is the proof of the divergence theorem in two dimensions.
Theorem 3.8 (Divergence theorem in two dimensions). Let $\vec F = F_1\,\hat\imath + F_2\,\hat\jmath$ be a smooth vector field defined in a region $R \subset \mathbb{R}^2$ with outward pointing unit normal vector field $\hat n$. Then
\[
\iint_R \Big(\frac{\partial F_1}{\partial x} + \frac{\partial F_2}{\partial y}\Big) \,\mathrm{d}A
= \oint_{\partial R} \vec F\cdot\hat n \,\mathrm{d}s. \qquad(85)
\]
The integrand in the double integral at the left-hand side of (85) is the two-dimensional divergence $\vec\nabla\cdot\vec F$ of $\vec F$.
Example 3.9. As in Example 3.5, consider the vector field $\vec F = (2xy + y^2)\,\hat\imath + x^2\,\hat\jmath$ and the bounded domain $R$ delimited by the line $y = x$ and the parabola $y = x^2$. Compute the flux$^{24}$ $\oint_{\partial R} \vec F\cdot\hat n \,\mathrm{d}s$ of $\vec F$ through $\partial R$.
We simply apply the divergence theorem (85):
\[
\oint_{\partial R} \vec F\cdot\hat n \,\mathrm{d}s
= \iint_R \Big(\frac{\partial F_1}{\partial x} + \frac{\partial F_2}{\partial y}\Big) \,\mathrm{d}A
= \iint_R 2y \,\mathrm{d}A
= \int_0^1\Big(\int_{x^2}^{x} 2y \,\mathrm{d}y\Big) \mathrm{d}x = \frac{2}{15},
\]
with the same computation as in the previous example. The direct computation of the contour integral is slightly more complicated in this case than in Example 3.5, in that it requires the calculation of the outward-pointing unit normal vector field.
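The "slightly more complicated" direct computation is easy to delegate to a computer (an illustrative addition, not part of the notes): using (84), $\hat n\,\mathrm{d}s = \big(y'(t), -x'(t)\big)\,\mathrm{d}t$ along an anticlockwise parametrisation, so the flux is $\oint \big(F_1\,y'(t) - F_2\,x'(t)\big)\,\mathrm{d}t$:

```python
import math

def flux_2d(n=4000):
    """Flux oint F . n ds with n ds = (y'(t), -x'(t)) dt, cf. (84),
    for F = (2xy + y^2, x^2) on the boundary of {x^2 < y < x}."""
    F = lambda x, y: (2 * x * y + y**2, x**2)
    dt = 1.0 / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * dt
        f1, f2 = F(t, t**2)                  # parabola (t, t^2): x' = 1, y' = 2t
        total += (f1 * 2 * t - f2 * 1) * dt
        g1, g2 = F(1 - t, 1 - t)             # segment (1-t, 1-t): x' = -1, y' = -1
        total += (g1 * (-1) - g2 * (-1)) * dt
    return total

assert abs(flux_2d() - 2 / 15) < 1e-5        # agrees with the divergence theorem
```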
Exercise 3.10. Prove the second formula in (84) without using the expansion of the vectors
in components. You can find a helpful formula in the exercises in Section 1.1.
24 Note that here the word flux is used to denote the contour integral of the normal component of a
two-dimensional field, as opposed to the surface integral of the normal component in three dimensions.


Note. In Section 2.1.2 we introduced the notation $\int_\Gamma f \,\mathrm{d}x$, representing the line integral of a vector field with only one non-zero component (the component in the direction $\hat\imath$, i.e. $\vec F = f\,\hat\imath$). If the path $\Gamma$ can be parametrised by the coordinate $x$, then it is possible to write this integral in a more explicit way.
More concretely, assume $\Gamma$ is parametrised by $\vec a(x) = x\,\hat\imath + y(x)\,\hat\jmath$ for $x_L < x < x_R$ and for a continuous function $y : (x_L, x_R) \to \mathbb{R}$. It is important that the parametrisation satisfies $\frac{\mathrm{d}a_1}{\mathrm{d}x} = 1$. Then, the line integral $\int_\Gamma f \,\mathrm{d}x$ can be written as the one-dimensional integral in $x$ of the field evaluated along the graph of $y(x)$:
\[
\int_\Gamma f \,\mathrm{d}x
\overset{(40)}{=} \int_{x_L}^{x_R} f\big(\vec a(x)\big)\,\hat\imath\cdot\frac{\mathrm{d}\vec a(x)}{\mathrm{d}x} \,\mathrm{d}x
= \int_{x_L}^{x_R} f\big(x, y(x)\big)\,\frac{\mathrm{d}a_1(x)}{\mathrm{d}x} \,\mathrm{d}x
= \int_{x_L}^{x_R} f\big(x, y(x)\big) \,\mathrm{d}x.
\]
In other words, the integral $\int_\Gamma f \,\mathrm{d}x$ is a standard integral in $x$ if the path $\Gamma$ is parametrised by a curve whose first component is $a_1(x) = x + \lambda$ for some $\lambda \in \mathbb{R}$.
This fact also extends to curves in three dimensions. If the path is run in the opposite direction (right to left, or from higher to lower values of $x$) with a similar parametrisation, then the sign of the integral is reversed, as expected.

3.2  The divergence theorem

In this section we consider a three-dimensional domain $D \subset \mathbb{R}^3$ and we assume it to be piecewise smooth, bounded and connected.
Lemma 3.11. Let $f$ be a smooth scalar field on $D$. Then we have
\[
\iiint_D \frac{\partial f}{\partial z} \,\mathrm{d}V = \iint_{\partial D} f\,\hat k\cdot\hat n \,\mathrm{d}S, \qquad(86)
\]
where $\hat n$ is the outward-pointing unit normal vector field on the surface $\partial D$.
Proof. The proof is very similar to that of Lemma 3.3 (in a sense it is its extension to three dimensions). As in that case, we divide the proof in two parts: the first one is devoted to "simple" domains, the second extends the result to general domains by partitioning these in simpler parts. Again, the basic instrument in the proof is the fundamental theorem of calculus.
Part 1. We consider a $z$-simple domain as that introduced at the beginning of Section 2.2.3:
\[
D = \Big\{x\,\hat\imath + y\,\hat\jmath + z\,\hat k \text{ s.t. } x_L < x < x_R,\; a(x) < y < b(x),\; \alpha(x,y) < z < \beta(x,y)\Big\},
\]
where $x_L$ and $x_R$ are two real numbers, $a, b : [x_L, x_R] \to \mathbb{R}$ two real functions (with $a < b$), and $\alpha, \beta : R \to \mathbb{R}$ two two-dimensional scalar fields (with $\alpha < \beta$). We denote by $R$ the planar region $R = \{x\,\hat\imath + y\,\hat\jmath \text{ s.t. } x_L < x < x_R,\; a(x) < y < b(x)\}$. See Figure 40 for a representation of $D$ and $R$.
The boundary $\partial D$ is composed of six (curvilinear) faces$^{25}$. On four of these faces the outward-pointing unit normal vector field $\hat n$ has zero vertical component $\hat k\cdot\hat n$, so the corresponding terms $\iint f\,\hat k\cdot\hat n \,\mathrm{d}S$ in the surface integral at the right-hand side of (86) vanish:
\[
\begin{aligned}
\hat n_W &= -\hat\imath && \text{on } \big\{x = x_L,\; a(x_L) < y < b(x_L),\; \alpha(x_L, y) < z < \beta(x_L, y)\big\},\\
\hat n_S &= \frac{a'(x)\,\hat\imath - \hat\jmath}{\sqrt{1 + (a'(x))^2}} && \text{on } \big\{x_L < x < x_R,\; y = a(x),\; \alpha\big(x, a(x)\big) < z < \beta\big(x, a(x)\big)\big\},\\
\hat n_E &= \hat\imath && \text{on } \big\{x = x_R,\; a(x_R) < y < b(x_R),\; \alpha(x_R, y) < z < \beta(x_R, y)\big\},\\
\hat n_N &= \frac{-b'(x)\,\hat\imath + \hat\jmath}{\sqrt{1 + (b'(x))^2}} && \text{on } \big\{x_L < x < x_R,\; y = b(x),\; \alpha\big(x, b(x)\big) < z < \beta\big(x, b(x)\big)\big\}.
\end{aligned}
\]
$^{25}$ Actually, if $a(x_L) = b(x_L)$ or $a(x_R) = b(x_R)$, then the left and right faces are collapsed to a segment and the total number of faces is reduced to 5 or 4.
(Verify these expressions of $\hat n$.) On the other hand, the top face $S_T$ is the graph of the two-dimensional field $\beta : R \to \mathbb{R}$. The contribution to the surface integral at the right-hand side of (86) given by $S_T$ is the flux of the vector field $f\,\hat k$ (whose first two components

are constantly zero) through $S_T$ itself, and can be reduced to a double integral on $R$ using formula (59):
\[
\iint_{S_T} f(x,y,z)\,\hat k\cdot\hat n \,\mathrm{d}S = \iint_R f\big(x, y, \beta(x,y)\big) \,\mathrm{d}A.
\]
Similarly, the bottom face $S_B$ is the graph of the two-dimensional field $\alpha : R \to \mathbb{R}$. However, on this face, the outward-pointing unit normal vector field $\hat n$ points downward, opposite to the convention stipulated in Section 2.2.5 for graph surfaces. Thus the sign in (59) needs to be reversed and the contribution of $S_B$ to the right-hand side of (86) reads
\[
\iint_{S_B} f(x,y,z)\,\hat k\cdot\hat n \,\mathrm{d}S = -\iint_R f\big(x, y, \alpha(x,y)\big) \,\mathrm{d}A.
\]
To conclude the proof of the first part we expand the right-hand side of the identity in the assertion in a sum over the faces of $\partial D$ and use the fundamental theorem of calculus to transform it into a triple integral:
\[
\begin{aligned}
\iint_{\partial D} f\,\hat k\cdot\hat n \,\mathrm{d}S
&= \iint_{S_T} f\,\hat k\cdot\hat n \,\mathrm{d}S + \iint_{S_B} f\,\hat k\cdot\hat n \,\mathrm{d}S
+ \underbrace{\iint_{S_W} f\,\hat k\cdot\hat n \,\mathrm{d}S + \iint_{S_S} f\,\hat k\cdot\hat n \,\mathrm{d}S + \iint_{S_E} f\,\hat k\cdot\hat n \,\mathrm{d}S + \iint_{S_N} f\,\hat k\cdot\hat n \,\mathrm{d}S}_{=0 \text{ since } \hat k\cdot\hat n = 0}\\
&= \iint_R f\big(x, y, \beta(x,y)\big) \,\mathrm{d}A - \iint_R f\big(x, y, \alpha(x,y)\big) \,\mathrm{d}A\\
&\overset{(77)}{=} \iint_R \Big(\int_{\alpha(x,y)}^{\beta(x,y)} \frac{\partial f}{\partial z}(x,y,z) \,\mathrm{d}z\Big) \,\mathrm{d}A && \text{(fundamental theorem of calculus)}\\
&\overset{(53)}{=} \iiint_D \frac{\partial f}{\partial z}(x,y,z) \,\mathrm{d}V && \text{(triple integral as an iterated integral).}
\end{aligned}
\]
Figure 40: A representation of the $z$-simple domain in the first part of the proof of Lemma 3.11. The bottom face $S_B$ is the one underneath, the west face $S_W$ is behind $D$ and the south face $S_S$ is on the left of the figure. The outward-pointing unit vector field $\hat{\mathbf n}$ on $S_W$, $S_S$, $S_E$ and $S_N$ has zero vertical component $\hat{\mathbf n}\cdot\hat{\mathbf k}$. The fundamental theorem of calculus is applied along all the vertical segments connecting the bottom face with the top face.
Part 2. Any regular domain can be decomposed into a finite union of disjoint $z$-simple domains (of course, this should be proved rigorously; here we only assume it, since we have not even described in detail the regularity of the domains). Thus, to conclude the proof of the lemma, we only need to show the following: if a domain $D$ is the disjoint union of two subdomains $D_1$ and $D_2$, and if identity (86) holds in both of them, then the same identity holds in $D$.
We denote with $\Sigma = \partial D_1 \cap \partial D_2$ the interface between the subdomains and with $\hat{\mathbf n}_1$, $\hat{\mathbf n}_2$ the outward-pointing unit normal vector fields on $\partial D_1$ and $\partial D_2$, respectively. Then $\partial D_1 = \Sigma \cup (\partial D_1 \cap \partial D)$ (and $\partial D_2 = \Sigma \cup (\partial D_2 \cap \partial D)$), $\hat{\mathbf n}_1 = -\hat{\mathbf n}_2$ on $\Sigma$, and $\hat{\mathbf n}_1 = \hat{\mathbf n}$ on $\partial D_1 \cap \partial D$ (similarly $\hat{\mathbf n}_2 = \hat{\mathbf n}$ on $\partial D_2 \cap \partial D$). Combining everything we have the assertion:
$$\begin{aligned}
\iiint_D \frac{\partial f}{\partial z}\,dV
&= \iiint_{D_1} \frac{\partial f}{\partial z}\,dV + \iiint_{D_2} \frac{\partial f}{\partial z}\,dV\\
&= \iint_{\partial D_1} f\,\hat{\mathbf k}\cdot\hat{\mathbf n}_1\,dS + \iint_{\partial D_2} f\,\hat{\mathbf k}\cdot\hat{\mathbf n}_2\,dS\\
&= \iint_{\partial D_1\cap\partial D} f\,\hat{\mathbf k}\cdot\hat{\mathbf n}_1\,dS + \iint_{\Sigma} f\,\hat{\mathbf k}\cdot\hat{\mathbf n}_1\,dS
+ \iint_{\partial D_2\cap\partial D} f\,\hat{\mathbf k}\cdot\hat{\mathbf n}_2\,dS + \iint_{\Sigma} f\,\hat{\mathbf k}\cdot\hat{\mathbf n}_2\,dS\\
&= \iint_{(\partial D_1\cap\partial D)\cup(\partial D_2\cap\partial D)} f\,\hat{\mathbf k}\cdot\hat{\mathbf n}\,dS
+ \underbrace{\iint_{\Sigma} \big(f\,\hat{\mathbf k}\cdot\hat{\mathbf n}_1 + f\,\hat{\mathbf k}\cdot\hat{\mathbf n}_2\big)\,dS}_{=0}\\
&= \iint_{\partial D} f\,\hat{\mathbf k}\cdot\hat{\mathbf n}\,dS.
\end{aligned}$$
Proceeding as in Lemma 3.11, we can prove that
$$\iiint_D \frac{\partial f}{\partial x}\,dV = \iint_{\partial D} f\,\hat\imath\cdot\hat{\mathbf n}\,dS,
\qquad
\iiint_D \frac{\partial f}{\partial y}\,dV = \iint_{\partial D} f\,\hat\jmath\cdot\hat{\mathbf n}\,dS.
\tag{87}$$
From identities (86) and (87) we obtain the following fundamental theorem.

Theorem 3.12 (Divergence theorem in three dimensions). Let $\vec F$ be a smooth vector field defined on $D$. Then
$$\iiint_D \vec\nabla\cdot\vec F\,dV = \iint_{\partial D} \vec F\cdot\hat{\mathbf n}\,dS,
\tag{88}$$
namely, the triple integral of the divergence of $\vec F$ on $D$ is equal to the flux of $\vec F$ itself through the boundary of $D$.
Proof. We simply sum (86) and (87) applied to the three components of $\vec F$:
$$\begin{aligned}
\iiint_D \vec\nabla\cdot\vec F\,dV
&= \iiint_D \Big(\frac{\partial F_1}{\partial x} + \frac{\partial F_2}{\partial y} + \frac{\partial F_3}{\partial z}\Big)\,dV\\
&= \iiint_D \frac{\partial F_1}{\partial x}\,dV + \iiint_D \frac{\partial F_2}{\partial y}\,dV + \iiint_D \frac{\partial F_3}{\partial z}\,dV\\
&= \iint_{\partial D} F_1\,\hat\imath\cdot\hat{\mathbf n}\,dS + \iint_{\partial D} F_2\,\hat\jmath\cdot\hat{\mathbf n}\,dS + \iint_{\partial D} F_3\,\hat{\mathbf k}\cdot\hat{\mathbf n}\,dS\\
&= \iint_{\partial D} \big(F_1\hat\imath + F_2\hat\jmath + F_3\hat{\mathbf k}\big)\cdot\hat{\mathbf n}\,dS\\
&= \iint_{\partial D} \vec F\cdot\hat{\mathbf n}\,dS.
\end{aligned}$$
The divergence theorem, together with its corollaries, is one of the most fundamental results in analysis and in vector calculus. It holds in any dimension, appears in every sort of theoretical and applied setting (PDEs, differential geometry, fluid dynamics, electromagnetism, . . . ), has a deep physical meaning, is the starting point for the design of many numerical methods, and its extension to extremely irregular fields and domains is very challenging (and indeed it is still an object of research, two centuries after the first discovery of the theorem!).


Example 3.13. Use the divergence theorem to compute $\iint_S (x^2+y^2)\,dS$, where $S = \{|\vec r| = R\}$ is the sphere of radius $R$ centred at the origin.

We want to find a vector field $\vec F$ defined in the ball $B = \{|\vec r| \le R\}$ such that $\vec F\cdot\hat{\mathbf n} = x^2+y^2$ on $S = \partial B$. The outward-pointing unit vector field $\hat{\mathbf n}$ on $\partial B$ is $\hat{\mathbf n} = \hat{\mathbf r} = \frac1R\vec r$, so a suitable vector field is $\vec F = Rx\,\hat\imath + Ry\,\hat\jmath$. Thus we conclude:
$$\iint_S (x^2+y^2)\,dS = \iint_S \vec F\cdot\hat{\mathbf n}\,dS
\overset{(88)}{=} \iiint_B \vec\nabla\cdot\vec F\,dV = \iiint_B 2R\,dV = 2R\,\mathrm{Vol}(B) = \frac83\pi R^4.$$
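A quick numerical cross-check of this value (a Python sketch, not part of the notes; the resolution $n$ is an arbitrary choice): on the sphere, $x^2+y^2 = R^2\sin^2\theta$ and $dS = R^2\sin\theta\,d\theta\,d\varphi$, so the surface integral reduces to $2\pi R^4\int_0^\pi\sin^3\theta\,d\theta$, which a midpoint rule approximates easily:

```python
import math

def surface_integral_x2y2(R, n=20_000):
    # On {|r| = R}: x^2 + y^2 = R^2 sin^2(theta), dS = R^2 sin(theta) dtheta dphi;
    # the integrand is phi-independent, so the phi-integral gives a factor 2*pi.
    h = math.pi / n                       # midpoint rule in the polar angle theta
    s = sum(math.sin((i + 0.5) * h) ** 3 for i in range(n))
    return 2 * math.pi * R**4 * s * h

R = 2.0
print(surface_integral_x2y2(R), 8 / 3 * math.pi * R**4)
```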
Example 3.14. Consider the cone $C = \{0 < z < 1,\ x^2 + y^2 < z^2\}$. Use the divergence theorem to compute $\iiint_C |\vec r|^2\,dV$.

We first decompose the boundary $\partial C$ into the disc $B = \{x^2+y^2 < 1,\ z = 1\}$ and the lateral part $L = \{x^2+y^2 = z^2,\ 0 < z < 1\}$. We need to find a vector field $\vec F$ such that $\vec\nabla\cdot\vec F = |\vec r|^2$ and $\vec F\cdot\hat{\mathbf n}$ is easy to integrate on $\partial C$. Since the divergence must be a polynomial of degree two, the easiest choice is to look for a field whose components are polynomials of degree three. Moreover, the position field $\vec r$ has zero normal component on the lateral boundary $L$ of $C$ (draw a sketch to figure out why), so a good $\vec F$ may be a multiple of $\vec r$. It is easy to figure out that a possible candidate is $\vec F = \frac15|\vec r|^2\vec r$, whose divergence is $|\vec r|^2$, as desired. From the divergence theorem we have:
$$\begin{aligned}
\iiint_C |\vec r|^2\,dV &= \iiint_C \vec\nabla\cdot\vec F\,dV\\
&\overset{(88)}{=} \iint_{\partial C} \vec F\cdot\hat{\mathbf n}\,dS\\
&= \iint_B \vec F\cdot\underbrace{\hat{\mathbf n}}_{=\hat{\mathbf k}}\,dS + \iint_L \underbrace{\vec F\cdot\hat{\mathbf n}}_{=0}\,dS\\
&= \iint_B \frac15\,(x^2+y^2+z^2)\,z\,dS\\
&= \iint_B \frac15\,(r^2+1)\,dS
&&\text{using $z = 1$ on $B$ and cylindrical coordinates}\\
&= \frac15 \int_0^{2\pi}\!\!\int_0^1 (r^2+1)\,r\,dr\,d\theta = \frac{2\pi}{5}\Big(\frac14 + \frac12\Big) = \frac{3\pi}{10},
\end{aligned}$$
which agrees with the result found in Exercise 2.36.
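The value $3\pi/10$ can also be checked by computing the triple integral directly in cylindrical coordinates, as in the last lines of the example. A Python sketch (not part of the notes; the grid size is arbitrary) applying an iterated midpoint rule to $\int_0^1\int_0^{2\pi}\int_0^z (r^2+z^2)\,r\,dr\,d\theta\,dz$:

```python
import math

def cone_volume_integral(n=300):
    # iterated midpoint rule for the triple integral of |r|^2 on the cone
    # C = {0 < z < 1, x^2 + y^2 < z^2}, in cylindrical coordinates:
    # I = int_0^1 int_0^{2pi} int_0^z (r^2 + z^2) r dr dtheta dz
    hz = 1.0 / n
    total = 0.0
    for i in range(n):
        z = (i + 0.5) * hz
        hr = z / n                      # the radial range depends on z
        inner = 0.0
        for j in range(n):
            r = (j + 0.5) * hr
            inner += (r * r + z * z) * r * hr
        total += 2 * math.pi * inner * hz
    return total

print(cone_volume_integral(), 3 * math.pi / 10)
```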


In Examples 3.13 and 3.14 we used the divergence theorem for two different purposes: to compute a flux through a boundary (by finding a vector field whose normal component on the boundary is given) and to compute a volume integral (by finding a vector field whose divergence is given).
Remark 3.15 (The divergence theorem in one dimension is the fundamental theorem of calculus). The divergence theorems 3.8 and 3.12 directly generalise the fundamental theorem of calculus to higher dimensions. Indeed, a bounded, connected (open) domain in $\mathbb R$ must be an interval $I = (a,b)$, whose boundary is composed by two points $\partial I = \{a,b\}$ with outward-pointing normal vectors $n(a) = -1$ and $n(b) = 1$ (a one-dimensional vector is simply a scalar). The zero-dimensional integral of a function $f$ on $\partial I$ reduces to the sum of the two values of $f$ in $b$ and $a$, and the flux to their difference. A one-dimensional vector field is a real function and its divergence is its derivative. So we have
$$\int_I \vec\nabla\cdot\vec f(x)\,dx = \int_a^b f'(x)\,dx = \sum_{x\in\partial I} f(x)\,n(x) = f(b)\,n(b) + f(a)\,n(a) = f(b) - f(a).$$
Remark 3.16 (Divergence theorem in two and three dimensions). Under the assumptions of the two-dimensional divergence theorem 3.8, we define the Cartesian product domain $D = R\times(0,1) = \{\vec r = x\hat\imath + y\hat\jmath + z\hat{\mathbf k}$ s.t. $x\hat\imath + y\hat\jmath\in R$, $0 < z < 1\}$, and fix $F_3 = 0$. Then we see that the assertion (85) is a special case of the three-dimensional divergence theorem (88):
$$\begin{aligned}
\iint_R \Big(\frac{\partial F_1}{\partial x} + \frac{\partial F_2}{\partial y}\Big)\,dA
&= \iint_R \Big(\frac{\partial F_1}{\partial x} + \frac{\partial F_2}{\partial y}\Big)\,dx\,dy\ \underbrace{\Big(\int_0^1 dz\Big)}_{=1}
= \iiint_D \Big(\frac{\partial F_1}{\partial x} + \frac{\partial F_2}{\partial y}\Big)\underbrace{\,dx\,dy\,dz}_{=dV}\\
&= \iiint_D \Big(\frac{\partial F_1}{\partial x} + \frac{\partial F_2}{\partial y} + \underbrace{\frac{\partial F_3}{\partial z}}_{=0}\Big)\,dV\\
&= \iint_{\partial D} \vec F\cdot\hat{\mathbf n}\,dS
\qquad\text{(applying the 3D divergence theorem (88) in $D$)}\\
&= \iint_{\partial R\times(0,1)} \vec F\cdot\hat{\mathbf n}\,dS + \iint_{R\times\{0\}} \vec F\cdot\hat{\mathbf n}\,dS + \iint_{R\times\{1\}} \vec F\cdot\hat{\mathbf n}\,dS
\qquad\text{(decomposing $\partial D$)}\\
&= \oint_{\partial R} \vec F\cdot\hat{\mathbf n}\,ds\ \underbrace{\Big(\int_0^1 dz\Big)}_{=1}
+ \underbrace{\iint_{R\times\{0\}} (F_1\hat\imath + F_2\hat\jmath)\cdot(-\hat{\mathbf k})\,dS}_{=0}
+ \underbrace{\iint_{R\times\{1\}} (F_1\hat\imath + F_2\hat\jmath)\cdot\hat{\mathbf k}\,dS}_{=0}\\
&= \oint_{\partial R} \vec F\cdot\hat{\mathbf n}\,ds.
\end{aligned}$$
The contributions to the surface integral in (88) of the upper and lower faces of $D$ are zero, since there $\hat{\mathbf n} = \pm\hat{\mathbf k}$, so $\vec F\cdot\hat{\mathbf n} = 0$ (the $z$-component of $\vec F$ vanishes). For the same reason, the term $\frac{\partial F_3}{\partial z}$ is zero in the divergence of $\vec F$.
In the next corollary we show several other identities which follow from equations (86) and (87). On the boundary $\partial D$ (or more generally on any oriented surface), we denote the scalar product of the gradient of a scalar field $f$ and the unit normal field $\hat{\mathbf n}$ as
$$\frac{\partial f}{\partial n} := \hat{\mathbf n}\cdot\vec\nabla f,$$
and we call it normal derivative.

Corollary 3.17. Let $f$ and $g$ be smooth scalar fields and $\vec F$ be a smooth vector field, all defined on a domain $D$. Then
$$\iiint_D \vec\nabla f\,dV = \iint_{\partial D} f\,\hat{\mathbf n}\,dS,\tag{89}$$
$$\iiint_D \vec\nabla\times\vec F\,dV = \iint_{\partial D} \hat{\mathbf n}\times\vec F\,dS,\tag{90}$$
$$\iiint_D \big(f\,\Delta g + \vec\nabla g\cdot\vec\nabla f\big)\,dV = \iint_{\partial D} f\,\frac{\partial g}{\partial n}\,dS \qquad\text{(Green's 1st identity)},\tag{91}$$
$$\iiint_D \big(f\,\Delta g - g\,\Delta f\big)\,dV = \iint_{\partial D} \Big(f\,\frac{\partial g}{\partial n} - g\,\frac{\partial f}{\partial n}\Big)\,dS \qquad\text{(Green's 2nd identity)}.\tag{92}$$

Note that identities (89) and (90) are vectorial, i.e. the values at the two sides of the equal sign are vectors.
Proof. The proof of identity (89) is very easy:
$$\begin{aligned}
\iiint_D \vec\nabla f\,dV
&= \iiint_D \Big(\frac{\partial f}{\partial x}\hat\imath + \frac{\partial f}{\partial y}\hat\jmath + \frac{\partial f}{\partial z}\hat{\mathbf k}\Big)\,dV\\
&\overset{(86),(87)}{=} \iint_{\partial D} \Big(f(\hat\imath\cdot\hat{\mathbf n})\hat\imath + f(\hat\jmath\cdot\hat{\mathbf n})\hat\jmath + f(\hat{\mathbf k}\cdot\hat{\mathbf n})\hat{\mathbf k}\Big)\,dS\\
&= \iint_{\partial D} f\,\hat{\mathbf n}\,dS.
\end{aligned}$$
We prove identity (90) for the first component:
$$\begin{aligned}
\iiint_D (\vec\nabla\times\vec F)_1\,dV
&\overset{(20)}{=} \iiint_D \Big(\frac{\partial F_3}{\partial y} - \frac{\partial F_2}{\partial z}\Big)\,dV\\
&\overset{(86),(87)}{=} \iint_{\partial D} \big(F_3\,\hat\jmath\cdot\hat{\mathbf n} - F_2\,\hat{\mathbf k}\cdot\hat{\mathbf n}\big)\,dS\\
&= \iint_{\partial D} \big(F_3\,n_2 - F_2\,n_3\big)\,dS\\
&\overset{(4)}{=} \iint_{\partial D} (\hat{\mathbf n}\times\vec F)_1\,dS;
\end{aligned}$$
and the same holds for the second and third components.

Using the vector identities of Section 1.4, we have
$$\vec\nabla\cdot(f\vec\nabla g) \overset{(27)}{=} \vec\nabla f\cdot\vec\nabla g + f\,\vec\nabla\cdot(\vec\nabla g) \overset{(21)}{=} \vec\nabla f\cdot\vec\nabla g + f\,\Delta g.$$
Then, Green's first identity (91) follows immediately from the application of the divergence theorem 3.12 to the field $\vec F = f\vec\nabla g$. Green's second identity (92) follows from subtracting from (91) the same identity with $f$ and $g$ interchanged.
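The two sides of Green's first identity (91) can also be compared numerically for explicit fields. A Python sketch (an illustrative check, not part of the notes) with the hypothetical choices $f = x$ and $g = x^2+y^2$ on the unit cube $D = (0,1)^3$: here $\Delta g = 4$, $\vec\nabla g\cdot\vec\nabla f = 2x$, and on the right-hand side only the faces $x=1$ and $y=1$ carry a nonzero $f\,\partial g/\partial n$, so both sides equal 3:

```python
def green_lhs(n=500):
    # volume integral of (f*Lap(g) + grad(g).grad(f)) = 4x + 2x = 6x over the cube
    h = 1.0 / n
    return sum(6 * (i + 0.5) * h for i in range(n)) * h

def green_rhs(n=500):
    # boundary integral of f * dg/dn: face x=1 gives f=1, dg/dn = 2x = 2 -> 2;
    # face y=1 gives f=x, dg/dn = 2y = 2 -> int 2x dx = 1; other faces give 0
    h = 1.0 / n
    face_x1 = 2.0
    face_y1 = sum(2 * (i + 0.5) * h for i in range(n)) * h
    return face_x1 + face_y1

print(green_lhs(), green_rhs())  # both ≈ 3
```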
Example 3.18. Show that if a smooth scalar field $f$ is harmonic in a domain $D$ and its normal derivative vanishes everywhere on the boundary $\partial D$, then $f$ is constant.

We recall that harmonic means that $\Delta f = 0$ (Section 1.3.3), and the normal derivative is $\frac{\partial f}{\partial n} = \hat{\mathbf n}\cdot\vec\nabla f$. From Green's first identity with $g = f$ we see that
$$\iiint_D |\vec\nabla f|^2\,dV = \iiint_D \vec\nabla f\cdot\vec\nabla f\,dV
= -\iiint_D f\underbrace{\Delta f}_{=0}\,dV + \iint_{\partial D} f\underbrace{\frac{\partial f}{\partial n}}_{=0}\,dS = 0.$$
Since the scalar field $|\vec\nabla f|^2$ is non-negative ($|\vec\nabla f(\vec r)|^2 \ge 0$ for all $\vec r\in D$), this equation implies that $\vec\nabla f = \vec 0$ everywhere in $D$. In more detail, if $\vec\nabla f\ne\vec 0$ were true in a portion of $D$, then $|\vec\nabla f|^2 > 0$ would give a positive contribution to the integral $\iiint_D |\vec\nabla f|^2\,dV$ that cannot be cancelled by negative contributions from other parts of the domain, as $|\vec\nabla f|^2$ is never negative, so the integral in the formula above could not vanish. Since the gradient of $f$ vanishes, all the partial derivatives of $f$ are zero, so $f$ cannot depend on any of the variables $x$, $y$ and $z$, which means it is constant. (This is only the main idea of the proof; to make all the steps rigorous we need some tools from analysis.)

Remark 3.19. The two Green's identities (91) and (92) are particularly important in the theory of partial differential equations (PDEs). In particular, several second-order PDEs involving the Laplacian operator can be rewritten using (91) in a variational form or weak form, which involves a volume integral and a boundary one. This is the starting point for the definition of the most common methods for the numerical approximation of the solutions on a computer, in particular for the finite element method (FEM, you might see it in a future class). Another common numerical scheme, the boundary element method (BEM), arises from the second Green identity (92).
Remark 3.20 (Integration by parts in more dimensions). In the first calculus class you learned how to integrate by parts:
$$\int_a^b f'(t)\,g(t)\,dt + \int_a^b f(t)\,g'(t)\,dt = f(b)g(b) - f(a)g(a),$$
for real functions $f, g : [a,b]\to\mathbb R$. This is a straightforward consequence of the fundamental theorem of calculus (77) and the product rule $(fg)' = f'g + fg'$. How does this extend to higher dimensions? The product rule was extended to partial derivatives in (13) and to differential operators in Proposition 1.36; we saw in Remark 3.15 that the divergence theorem extends the fundamental theorem of calculus. Several formulas arising from different combinations of these ingredients are usually termed multidimensional integration by parts, for example
$$\iiint_D \vec\nabla f\cdot\vec G\,dV + \iiint_D f\,\vec\nabla\cdot\vec G\,dV = \iint_{\partial D} f\,\vec G\cdot\hat{\mathbf n}\,dS,$$
$$\iiint_D (\vec\nabla f)\,g\,dV + \iiint_D f\,\vec\nabla g\,dV = \iint_{\partial D} f g\,\hat{\mathbf n}\,dS,$$
where $f$, $g$ are scalar fields and $\vec G$ is a vector field on $D$.
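As with the other identities of this section, the first formula can be verified numerically for concrete fields. A Python sketch (an illustrative check, not from the notes) with the hypothetical choices $f = xy$ and $\vec G = (z, 0, x)$ on the unit cube: then $\vec\nabla f\cdot\vec G = yz$, $\vec\nabla\cdot\vec G = 0$, and both sides reduce to elementary iterated integrals equal to $1/4$:

```python
# Both sides of the integration-by-parts formula on the unit cube D = (0,1)^3,
# with f = x*y and G = (z, 0, x): grad(f).G = y*z and div(G) = 0.
def ibp_lhs(n=500):
    h = 1.0 / n
    i1 = sum((i + 0.5) * h for i in range(n)) * h     # int_0^1 t dt = 1/2
    return i1 * i1                                    # triple integral of y*z

def ibp_rhs(n=500):
    # boundary integral of f * G.n: face x=1 (G.n = z, f = y), face z=1
    # (G.n = x, f = x*y) and face z=0 (G.n = -x, f = x*y); the two z-faces
    # cancel, and the faces y=0, y=1, x=0 contribute nothing
    h = 1.0 / n
    i1 = sum((i + 0.5) * h for i in range(n)) * h          # int t dt = 1/2
    i2 = sum(((i + 0.5) * h) ** 2 for i in range(n)) * h   # int t^2 dt ≈ 1/3
    return i1 * i1 + i2 * i1 - i2 * i1

print(ibp_lhs(), ibp_rhs())  # both ≈ 0.25
```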


Remark 3.21 (Differential and integral form of physical laws). The main reason for the importance and ubiquity of the divergence theorem in physics and engineering is that it constitutes the relation between the two possible formulations of many physical laws: the differential form, expressed by a partial differential equation, and the integral form, expressed by an integral equation. The integral form usually better describes the main physical concepts and contains the quantities that can be measured experimentally, while the differential form allows an easier mathematical manipulation. The two forms may lead to different numerical algorithms for the approximation of the solutions of the equation.

For example, we consider the Gauss law of electrostatics, which you might have already encountered elsewhere. Its integral form states that the flux of an electric field $\vec E$ (which is a vector field) through any closed surface $\partial D$ (in vacuum) is proportional to the total electrical charge in the volume $D$ bounded by that surface:
$$\iint_{\partial D} \vec E\cdot\hat{\mathbf n}\,dS = \iiint_D \frac{\rho}{\epsilon_0}\,dV,$$
where $\rho$ is the charge density (whose integral gives the total charge) and $\epsilon_0$ is a constant of proportionality (the vacuum permittivity). Its differential form reads
$$\vec\nabla\cdot\vec E = \frac{\rho}{\epsilon_0}.$$
The divergence theorem immediately allows to deduce the integral form of the Gauss law from the differential one; since the former holds for any domain $D$, also the converse implication can be proved. The Gauss law is one of the four celebrated Maxwell's equations, the fundamental laws of electromagnetism: all of them have a differential and an integral form, related to each other either by the divergence or the Stokes theorem.

3.3 Stokes' theorem

Stokes' theorem extends Green's theorem to general oriented surfaces. It states that the flux of the curl of a vector field through an oriented surface equals the circulation along the boundary of the same surface.

Theorem 3.22 (Stokes' theorem). Let $S \subset \mathbb R^3$ be a piecewise smooth, bounded, oriented surface with unit normal field $\hat{\mathbf n}$, and let $\vec F$ be a smooth vector field defined on $S$. Then
$$\iint_S (\vec\nabla\times\vec F)\cdot\hat{\mathbf n}\,dS = \oint_{\partial S} \vec F\cdot d\vec r,
\tag{93}$$
where the boundary $\partial S$ is understood as the oriented path with the orientation inherited from $(S,\hat{\mathbf n})$.
Proof. As in Section 2.2.4, we only consider a surface defined as the graph of a two-dimensional field $g$, i.e. $S = \{\vec r = x\hat\imath + y\hat\jmath + z\hat{\mathbf k}\in\mathbb R^3$ s.t. $x\hat\imath + y\hat\jmath\in R$, $z = g(x,y)\}$, where $R\subset\mathbb R^2$ is a planar region as in Section 3.1. The proof of the theorem in its generality requires pasting together two or more simpler surfaces.

The main idea of the proof is to reduce the two integrals in (93) to similar integrals on the flat region $R$ and its boundary, using the fact that $S$ is the graph of $g$, so the variables on $S$ are related to each other by the equation $z = g(x,y)$. Then one applies Green's theorem on $R$.

The surface integral (flux) at the left-hand side of the assertion (93) can easily be transformed into a double integral over the planar region $R$ by using formula (59):
$$\begin{aligned}
\iint_S (\vec\nabla\times\vec F)\cdot\hat{\mathbf n}\,dS
&\overset{(59)}{=} \iint_R \Big( -(\vec\nabla\times\vec F)_1\,\frac{\partial g}{\partial x} - (\vec\nabla\times\vec F)_2\,\frac{\partial g}{\partial y} + (\vec\nabla\times\vec F)_3 \Big)\,dA\\
&\overset{(20)}{=} \iint_R \Big( -\Big(\frac{\partial F_3}{\partial y} - \frac{\partial F_2}{\partial z}\Big)\frac{\partial g}{\partial x} - \Big(\frac{\partial F_1}{\partial z} - \frac{\partial F_3}{\partial x}\Big)\frac{\partial g}{\partial y} + \frac{\partial F_2}{\partial x} - \frac{\partial F_1}{\partial y} \Big)\,dA.
\end{aligned}$$
We denote by $\vec a : [t_I, t_F]\to\partial S$ a curve that parametrises $\partial S$. The surface $S$ is the graph of $g$, thus the components of $\vec a$ are related to one another by the relation $a_3(t) = g(a_1(t), a_2(t))$. From the chain rule (34) we have $\frac{da_3}{dt} = \frac{\partial g}{\partial x}\frac{da_1}{dt} + \frac{\partial g}{\partial y}\frac{da_2}{dt}$. Moreover, the planar curve $a_1(t)\hat\imath + a_2(t)\hat\jmath$ is a parametrisation of $\partial R$, thus $\oint_{\partial S} f(x,y,z)\,dx = \oint_{\partial R} f\big(x,y,g(x,y)\big)\,dx$ for any scalar field $f$ (and similarly for the integral in $dy$). Putting together all the pieces:
$$\begin{aligned}
\oint_{\partial S} \vec F\cdot d\vec r
&\overset{(38)}{=} \int_{t_I}^{t_F} \Big( F_1\big(\vec a(t)\big)\frac{da_1(t)}{dt} + F_2\big(\vec a(t)\big)\frac{da_2(t)}{dt} + F_3\big(\vec a(t)\big)\frac{da_3(t)}{dt} \Big)\,dt\\
&= \int_{t_I}^{t_F} \Big( F_1\big(\vec a(t)\big)\frac{da_1(t)}{dt} + F_2\big(\vec a(t)\big)\frac{da_2(t)}{dt} + F_3\big(\vec a(t)\big)\Big(\frac{\partial g}{\partial x}\frac{da_1(t)}{dt} + \frac{\partial g}{\partial y}\frac{da_2(t)}{dt}\Big) \Big)\,dt\\
&= \int_{t_I}^{t_F} \Big( \Big(F_1\big(\vec a(t)\big) + F_3\big(\vec a(t)\big)\frac{\partial g}{\partial x}\Big)\frac{da_1(t)}{dt} + \Big(F_2\big(\vec a(t)\big) + F_3\big(\vec a(t)\big)\frac{\partial g}{\partial y}\Big)\frac{da_2(t)}{dt} \Big)\,dt\\
&\overset{(40)}{=} \oint_{\partial S} \Big(F_1(x,y,z) + F_3(x,y,z)\frac{\partial g}{\partial x}(x,y)\Big)\,dx + \Big(F_2(x,y,z) + F_3(x,y,z)\frac{\partial g}{\partial y}(x,y)\Big)\,dy\\
&= \oint_{\partial R} \Big(F_1\big(x,y,g(x,y)\big) + F_3\big(x,y,g(x,y)\big)\frac{\partial g}{\partial x}\Big)\,dx + \Big(F_2\big(x,y,g(x,y)\big) + F_3\big(x,y,g(x,y)\big)\frac{\partial g}{\partial y}\Big)\,dy\\
&\overset{(79),(80)}{=} \iint_R \bigg( -\frac{\partial}{\partial y}\Big(F_1\big(x,y,g(x,y)\big) + F_3\big(x,y,g(x,y)\big)\frac{\partial g}{\partial x}\Big) + \frac{\partial}{\partial x}\Big(F_2\big(x,y,g(x,y)\big) + F_3\big(x,y,g(x,y)\big)\frac{\partial g}{\partial y}\Big) \bigg)\,dA
\quad\text{(Green's theorem)}\\
&= \iint_R \Big( -\frac{\partial F_1}{\partial y} - \frac{\partial F_1}{\partial z}\frac{\partial g}{\partial y} - \frac{\partial F_3}{\partial y}\frac{\partial g}{\partial x} - \frac{\partial F_3}{\partial z}\frac{\partial g}{\partial y}\frac{\partial g}{\partial x}
+ \frac{\partial F_2}{\partial x} + \frac{\partial F_2}{\partial z}\frac{\partial g}{\partial x} + \frac{\partial F_3}{\partial x}\frac{\partial g}{\partial y} + \frac{\partial F_3}{\partial z}\frac{\partial g}{\partial x}\frac{\partial g}{\partial y} \Big)\,dA
\quad\text{(chain rule; the two terms $\mp F_3\frac{\partial^2 g}{\partial y\partial x}$ cancel)}\\
&= \iint_R \Big( \Big(\frac{\partial F_2}{\partial z} - \frac{\partial F_3}{\partial y}\Big)\frac{\partial g}{\partial x} + \Big(\frac{\partial F_3}{\partial x} - \frac{\partial F_1}{\partial z}\Big)\frac{\partial g}{\partial y} + \frac{\partial F_2}{\partial x} - \frac{\partial F_1}{\partial y} \Big)\,dA.
\end{aligned}$$
Therefore, the right-hand side of (93) equals the left-hand side and we have proved the assertion.
We note an immediate consequence of Stokes' theorem. If the surface $S$ is the boundary of a three-dimensional domain $D$, then $\partial S$ is empty (see Remark 2.27). Thus, (93) implies that, for every bounded, connected, piecewise smooth domain $D$ and for every smooth vector field $\vec F$ defined on $\partial D$, the flux of $\vec\nabla\times\vec F$ on $\partial D$ vanishes:
$$\iint_{\partial D} (\vec\nabla\times\vec F)\cdot\hat{\mathbf n}\,dS = 0.$$
Note that we could have derived the same identity from the divergence theorem 3.12 and the vector identity $\vec\nabla\cdot(\vec\nabla\times\vec F) = 0$ (22). In this case, however, the vector field $\vec F$ needs to be defined (and smooth) in the whole domain $D$, and not only on the boundary, otherwise the flux above does not necessarily vanish.
Example 3.23. Demonstrate Stokes' theorem for the upper half sphere $S = \{|\vec r| = 1,\ z > 0\}$ and the field $\vec F = (2x - y)\hat\imath - yz^2\hat\jmath - y^2z\,\hat{\mathbf k}$.

As usual, we parametrise the unit circumference $\partial S$ with the curve $\vec a(t) = \cos t\,\hat\imath + \sin t\,\hat\jmath$, $0 < t < 2\pi$. The circulation in (93) reads
$$\begin{aligned}
\oint_{\partial S} \vec F\cdot d\vec r
&= \int_0^{2\pi} \big((2x - y)\hat\imath - yz^2\hat\jmath - y^2z\,\hat{\mathbf k}\big)\cdot(-\sin t\,\hat\imath + \cos t\,\hat\jmath)\,dt\\
&= \int_0^{2\pi} (-2\cos t\sin t + \sin^2 t)\,dt
\qquad\text{since on $\partial S$: } x = \cos t,\ y = \sin t,\ z = 0,\\
&= \int_0^{2\pi} \Big( -\sin 2t + \frac12(1 - \cos 2t) \Big)\,dt = \pi.
\end{aligned}$$
The curl of $\vec F$ is $\vec\nabla\times\vec F = \hat{\mathbf k}$ and the surface $S$ is the graph of the field $g(x,y) = \sqrt{1 - x^2 - y^2}$ over the disc $R = \{x\hat\imath + y\hat\jmath,\ x^2 + y^2 < 1\}$. Thus, using formula (59) for the flux through a graph surface, we have
$$\iint_S (\vec\nabla\times\vec F)\cdot\hat{\mathbf n}\,dS
\overset{(59)}{=} \iint_R \Big( -(\vec\nabla\times\vec F)_1\frac{\partial g}{\partial x} - (\vec\nabla\times\vec F)_2\frac{\partial g}{\partial y} + (\vec\nabla\times\vec F)_3 \Big)\,dA
= \iint_R (0 + 0 + 1)\,dA = \mathrm{Area}(R) = \pi,$$
and both sides of (93) give the same value.
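The circulation computed above is easy to confirm numerically (a Python sketch, not part of the notes): on $\partial S$ we have $z = 0$, so $\vec F\cdot\vec a'(t) = (2\cos t - \sin t)(-\sin t)$, and a midpoint rule in $t$ reproduces the value $\pi$:

```python
import math

def circulation(n=10_000):
    # midpoint rule for the circulation along a(t) = (cos t, sin t, 0),
    # 0 < t < 2*pi; with z = 0 the integrand is (2 cos t - sin t)(-sin t)
    h = 2 * math.pi / n
    return sum((2 * math.cos((i + 0.5) * h) - math.sin((i + 0.5) * h))
               * (-math.sin((i + 0.5) * h)) for i in range(n)) * h

print(circulation(), math.pi)
```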


H
Example 3.24. Compute S |~r|2 dx, with S = {z = x2 y 2 , 0 < x, y < 1}.
The desired integral is the circulation of the field |~r|2
along the boundary of the paraboloid S, so we apply Stokes theorem and exploit the computation already done in Exercise 2.26:
I
I
ZZ
ZZ

4
(40)
(29)
(93)
Ex. 2.26
~ (|~r|2
|~r|2 dx =

) n
dS =
|~r|2
d~r =
2(~r
) n
dS = .
3
S
S
S
S
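A numerical sanity check of this value (a Python sketch, not part of the notes; it assumes the upward orientation $\hat{\mathbf n}\,dS = (-\partial g/\partial x, -\partial g/\partial y, 1)\,dA$ for the graph $z = g(x,y) = x^2 - y^2$): since $\vec r\times\hat\imath = (0, z, -y)$, the flux integrand reduces to $2\big(2y(x^2-y^2) - y\big)$ on the square $R = (0,1)^2$:

```python
def flux_paraboloid(n=1000):
    # midpoint rule for the double integral of 2*(2y(x^2 - y^2) - y) over (0,1)^2,
    # obtained from 2(r x i).n dS with n dS = (-dg/dx, -dg/dy, 1) dA, g = x^2 - y^2
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        for j in range(n):
            y = (j + 0.5) * h
            total += 2 * (2 * y * (x * x - y * y) - y) * h * h
    return total

print(flux_paraboloid(), -4 / 3)
```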

Remark 3.25. With the fundamental theorem of vector calculus we have seen that the line integral from a point $\vec p$ to a point $\vec q$ of a gradient is independent of the integration path. In other words, $\int_\Gamma \vec\nabla\varphi\cdot d\vec r$ depends only on the scalar field $\varphi$ and the endpoints of $\Gamma$, i.e. its boundary (as opposed to the entire path $\Gamma$).

Stokes' theorem may be interpreted similarly: the flux of the curl of a vector field through a surface $S$ only depends on the field itself and the boundary of $S$ (as opposed to the entire surface $S$). If two surfaces share the boundary (e.g. the north and the south hemispheres of the same sphere), the flux of a curl through them will give the same value. The equivalences of (43) can be translated to this setting:
$$\vec G = \vec\nabla\times\vec A\ \ (\vec A\text{ vector potential})
\;\Longleftrightarrow\;
\iint_S \vec G\cdot\hat{\mathbf n}\,dS \text{ is independent of } S
\;\Longleftrightarrow\;
\iint_{\partial D} \vec G\cdot\hat{\mathbf n}\,dS = 0 \text{ for closed surfaces } \partial D
\;\Longrightarrow\;
\vec\nabla\cdot\vec G = 0\ \ (\vec G\text{ solenoidal})$$
(where, with "independent of $S$", we mean that $\iint_{S_A}\vec G\cdot\hat{\mathbf n}\,dS = \iint_{S_B}\vec G\cdot\hat{\mathbf n}\,dS$ if $S_A$ and $S_B$ are two surfaces with $\partial S_A = \partial S_B$). The converse of the last implication is true if the domain of $\vec G$ does not contain "holes".
Remark 3.26 (An alternative definition of divergence and curl). The divergence and the curl of a vector field $\vec F$ are often defined as the following limits (whenever the limits exist):
$$\vec\nabla\cdot\vec F(\vec r_0) = \lim_{R\to0} \frac{1}{\mathrm{Vol}(B_R)} \iint_{\partial B_R} \vec F\cdot\hat{\mathbf n}\,dS,$$
$$\hat{\mathbf a}\cdot\big(\vec\nabla\times\vec F(\vec r_0)\big) = \lim_{R\to0} \frac{1}{\mathrm{Area}(D_R(\hat{\mathbf a}))} \oint_{\partial D_R(\hat{\mathbf a})} \vec F\cdot d\vec r,$$
where $B_R = \{\vec r\in\mathbb R^3,\ |\vec r - \vec r_0| < R\}$ is the ball of centre $\vec r_0$ and radius $R$, $\hat{\mathbf a}$ is a unit vector, and $D_R(\hat{\mathbf a})$ is the disc of centre $\vec r_0$, radius $R$ and perpendicular to $\hat{\mathbf a}$. (Note that the curl is defined by its scalar product with all unit vectors.) Then one proves that, if $\vec F$ is differentiable, these definitions agree with those given in Section 1.3. If the divergence and the curl of $\vec F$ are constant in a neighbourhood of $\vec r_0$, then this proof is immediate, using the divergence and Stokes' theorems; otherwise it requires the mean value theorem (attempt to write the proof!).

This also provides an explanation of the geometric interpretations of divergence and curl as "spreading" and "rotation" of a vector field, respectively: they are the infinitesimal averages of flux and circulations on infinitesimal surfaces and paths.
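These limit definitions can be explored numerically. A Python sketch (not part of the notes; the field, the point, the radius $R$ and the angular resolution are all arbitrary choices) approximates $\vec\nabla\cdot\vec F(\vec r_0)$ by the flux through a small sphere divided by the enclosed volume, for $\vec F = (x^2, yz, z)$, whose divergence $2x + z + 1$ equals 6 at $\vec r_0 = (1,2,3)$:

```python
import math

def div_via_flux(F, r0, R=1e-3, n=200):
    # flux of F through the sphere of radius R centred at r0, divided by
    # Vol(B_R); midpoint grid in the spherical angles (theta, phi)
    x0, y0, z0 = r0
    ht, hp = math.pi / n, 2 * math.pi / n
    flux = 0.0
    for i in range(n):
        th = (i + 0.5) * ht
        for j in range(n):
            ph = (j + 0.5) * hp
            nx = math.sin(th) * math.cos(ph)   # outward unit normal on the sphere
            ny = math.sin(th) * math.sin(ph)
            nz = math.cos(th)
            Fx, Fy, Fz = F(x0 + R * nx, y0 + R * ny, z0 + R * nz)
            flux += (Fx * nx + Fy * ny + Fz * nz) * R * R * math.sin(th) * ht * hp
    return flux / (4 / 3 * math.pi * R**3)

F = lambda x, y, z: (x * x, y * z, z)          # divergence: 2x + z + 1
print(div_via_flux(F, (1.0, 2.0, 3.0)))        # ≈ 6
```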
Remark 3.27. Often the name "Stokes' theorem" refers to a much more general version of it, coming from the branch of mathematics known as differential geometry. This involves the integration of a differential operator called exterior derivative and denoted $\mathrm d$. This acts on differential forms $\omega$, which are generalisations of scalar and vector fields defined on a manifold $M$, which in turn is an object that generalises domains, paths and surfaces to any dimension. This extremely general and deep theorem is usually described with a very simple and elegant formula: $\int_M \mathrm d\omega = \int_{\partial M} \omega$. The fundamental theorem of (vector) calculus, Green's, divergence and Stokes' theorems are special instances of this result.
1D   (77)       Fundamental theorem of calculus:        $\int_a^b \frac{d\varphi}{dt}\,dt = \varphi(b) - \varphi(a)$
3D*  (42),(78)  Fundamental theorem of vector calculus: $\int_\Gamma \vec\nabla f\cdot d\vec r = f(\vec q) - f(\vec p)$
2D   (79)       Lemma 3.3:                              $\iint_R \frac{\partial f}{\partial y}\,dA = -\oint_{\partial R} f\,dx$
2D   (80)       Lemma 3.3:                              $\iint_R \frac{\partial f}{\partial x}\,dA = \oint_{\partial R} f\,dy$
2D   (82)       Green's theorem:                        $\iint_R \big(\frac{\partial F_2}{\partial x} - \frac{\partial F_1}{\partial y}\big)\,dA = \oint_{\partial R} (F_1\,dx + F_2\,dy)$
2D   (83)       Green's theorem:                        $\iint_R (\vec\nabla\times\vec F)\cdot\hat{\mathbf k}\,dA = \iint_R (\vec\nabla\times\vec F)_3\,dA = \oint_{\partial R} \vec F\cdot d\vec r$
2D   (85)       Divergence theorem:                     $\iint_R \vec\nabla\cdot\vec F\,dA = \oint_{\partial R} \vec F\cdot\hat{\mathbf n}\,ds$
3D   (86),(87)  Lemma 3.11:                             $\iiint_D \frac{\partial f}{\partial z}\,dV = \iint_{\partial D} f\,\hat{\mathbf k}\cdot\hat{\mathbf n}\,dS$ (and similarly for $\partial/\partial x$, $\hat\imath$ and $\partial/\partial y$, $\hat\jmath$)
3D   (88)       Divergence theorem:                     $\iiint_D \vec\nabla\cdot\vec F\,dV = \iint_{\partial D} \vec F\cdot\hat{\mathbf n}\,dS$
3D   (89)       Corollary 3.17:                         $\iiint_D \vec\nabla f\,dV = \iint_{\partial D} f\,\hat{\mathbf n}\,dS$
3D   (90)       Corollary 3.17:                         $\iiint_D \vec\nabla\times\vec F\,dV = \iint_{\partial D} \hat{\mathbf n}\times\vec F\,dS$
3D   (91)       Green's 1st identity:                   $\iiint_D (f\,\Delta g + \vec\nabla g\cdot\vec\nabla f)\,dV = \iint_{\partial D} f\,\frac{\partial g}{\partial n}\,dS$
3D   (92)       Green's 2nd identity:                   $\iiint_D (f\,\Delta g - g\,\Delta f)\,dV = \iint_{\partial D} \big(f\,\frac{\partial g}{\partial n} - g\,\frac{\partial f}{\partial n}\big)\,dS$
3D*  (93)       Stokes' theorem:                        $\iint_S (\vec\nabla\times\vec F)\cdot\hat{\mathbf n}\,dS = \oint_{\partial S} \vec F\cdot d\vec r$

Table 2: A summary of the important integro-differential identities proved in Section 3. Here:
$\Gamma$ is a path with starting point $\vec p$ and end point $\vec q$;
$R\subset\mathbb R^2$ is a two-dimensional, bounded, connected, piecewise smooth region;
$D\subset\mathbb R^3$ is a three-dimensional, bounded, connected, piecewise smooth domain;
$(S,\hat{\mathbf n})$ is a piecewise smooth, bounded, oriented surface;
$\varphi$ is a real function;
$f$ and $g$ are smooth scalar fields;
$\vec F$ is a smooth vector field.
(* Note that the first column of the table denotes the dimension of the space in which the domain of integration is defined; the intrinsic dimension of the domain of integration for the fundamental theorem of vector calculus is one (oriented path), and for Stokes' theorem it is two (oriented surface).)


General overview of the notes

The three main sections of these notes discuss the following general topics:

1. differential vector calculus (differentiation of vector quantities);
2. integral vector calculus (integration of vector quantities);
3. generalisations of the fundamental theorem of calculus (relation between differentiation and integration of vector quantities).

Section 1.1 recalls some operations involving vectors $\vec u, \vec v, \vec w \in \mathbb R^3$ and scalars $\lambda\in\mathbb R$:

Vector addition:               inputs: 2 vectors $\vec u, \vec w$                       output: vector $\vec u + \vec w$
Scalar-vector multiplication:  inputs: a vector $\vec u$ and a scalar $\lambda$         output: vector $\lambda\vec u$
Scalar product:                inputs: 2 vectors $\vec u, \vec w$                       output: scalar $\vec u\cdot\vec w$
Vector product:                inputs: 2 vectors $\vec u, \vec w$                       output: vector $\vec u\times\vec w$
Triple product:                inputs: 3 vectors $\vec u, \vec v, \vec w$               output: scalar $\vec u\cdot(\vec v\times\vec w)$

Section 1.2 introduces different kinds of functions (or fields), taking as input and returning as output either scalar values (in $\mathbb R$) or vectors (in $\mathbb R^3$):

Real functions (of real variable):  $f : \mathbb R\to\mathbb R$,          $t\mapsto f(t)$
Vector functions, curves:           $\vec a : \mathbb R\to\mathbb R^3$,   $t\mapsto\vec a(t)$
Scalar fields:                      $f : \mathbb R^3\to\mathbb R$,        $\vec r\mapsto f(\vec r)$
Vector fields:                      $\vec F : \mathbb R^3\to\mathbb R^3$, $\vec r\mapsto\vec F(\vec r)$

The fundamental differential operators for scalar and vector fields are the partial derivatives $\frac{\partial}{\partial x}$, $\frac{\partial}{\partial y}$ and $\frac{\partial}{\partial z}$. They can be combined to construct several vector differential operators, as described in Section 1.3. The most important are:

Partial derivatives  $\frac{\partial}{\partial x}, \frac{\partial}{\partial y}, \frac{\partial}{\partial z}$:   scalar $\mapsto$ scalar
Gradient             $\vec\nabla$:                          scalar $\mapsto$ vector
Divergence           $\vec\nabla\cdot$ (or div):            vector $\mapsto$ scalar
Curl                 $\vec\nabla\times$ (or curl):          vector $\mapsto$ vector
Laplacian            $\Delta$:                              scalar $\mapsto$ scalar

Proposition 1.35 describes the result of the composition of differential operators. Proposition 1.36 describes the result of the application of differential operators to different products of scalar and vector fields (generalising the product rule).

Section 1.5 describes some relations between vector fields $\vec F$ lying in the kernel (or nullspace) of the divergence and the curl operators (i.e. solenoidal fields $\vec\nabla\cdot\vec F = 0$ and irrotational fields $\vec\nabla\times\vec F = \vec 0$) and those in the image of the gradient and the curl (i.e. fields admitting a scalar potential $\vec F = \vec\nabla\varphi$ or a vector potential $\vec F = \vec\nabla\times\vec A$).

Section 1.6 investigates the differentiation of fields evaluated along curves, and of the composition of different kinds of vectorial functions.

Sections 2.1 and 2.2 extend the definition of integrals of real functions to:

Type of integral                 Domain of integration                       Integrand      Notation
Integrals of real functions      interval $(a,b)$                            real function  $\int_a^b f\,dt$
Line integrals of scalar fields  path $\Gamma$                               scalar field   $\int_\Gamma f\,ds$
Line integrals of vector fields  oriented path $\Gamma$                      vector field   $\int_\Gamma \vec F\cdot d\vec r$
Double integrals                 planar domain $R\subset\mathbb R^2$         scalar field   $\iint_R f\,dA$
Triple integrals                 domain $D\subset\mathbb R^3$                scalar field   $\iiint_D f\,dV$
Surface integrals                surface $S$                                 scalar field   $\iint_S f\,dS$
Fluxes                           oriented surface $(S,\hat{\mathbf n})$      vector field   $\iint_S \vec F\cdot\hat{\mathbf n}\,dS$

Several theorems establish important relations between integration and differentiation, see Table 2. The most relevant are:

Theorem  The integral on a(n)                     of the ... of a ...                                is equal to the ...  of         on the
FTC      interval $(a,b)$                         derivative of a function $f$                       difference           $f$        endpoints
FTVC     oriented path $\Gamma$                   $\hat{\mathbf t}\cdot$gradient of a scalar field $f$  difference        $f$        endpoints
Green    2D region $R$                            $\hat{\mathbf k}\cdot$curl of a vector field $\vec F$  circulation      $\vec F$   boundary $\partial R$
Stokes   oriented surface $(S,\hat{\mathbf n})$   $\hat{\mathbf n}\cdot$curl of a vector field $\vec F$  circulation      $\vec F$   boundary $\partial S$
Diver.   3D domain $D$                            divergence of a vector field $\vec F$              flux                 $\vec F$   boundary $\partial D$

Topic                                           Note sections   Book sections
Vectors, scalar product                         1.1, 1.1.1      10.2
Vector and triple product                       1.1.2, 1.1.3    10.3
Scalar fields                                   1.2.1           12.1
Vector fields                                   1.2.2           15.1
Vector functions                                1.2.3           11.1
Partial derivatives                             1.3.1           12.3-5
Gradient                                        1.3.1           12.7, 16.1
Jacobian matrix                                 1.3.2           12.6
Laplacian                                       1.3.3           16.2
Divergence and curl operator                    1.3.4           16.1
Vector differential identities                  1.4             16.2
Conservative fields, scalar potentials          1.5             15.2
Irrotational and solenoidal fields, potentials  1.5             16.2
Chain rule                                      1.6             12.5
Line integral of scalar fields                  2.1.1           15.3, (11.3)
Line integral of vector fields                  2.1.2           15.4
Path independence and conservative fields       2.1.3           15.4
Double integrals                                2.2.1           14.1, 14.2
Change of variables                             2.2.2           14.4, 14.6
Triple integrals                                2.2.3           14.5
Surface integrals                               2.2.4           15.5
Flux integrals                                  2.2.5           15.6
Polar coordinates                               2.3.1           8.5, 8.6, 14.4
Cylindrical and spherical coordinates           2.3.2, 2.3.3    10.6, 14.6, 16.7
Green's theorem                                 3.1             16.3
Divergence theorem                              3.2             16.4
Stokes' theorem                                 3.3             16.5

Table 3: References to the relevant sections of the textbook [1].


Contents

1 Fields and vector differential operators                                      2
  1.1 Review of vectors in 3-dimensional Euclidean space                        2
    1.1.1 Scalar product                                                        4
    1.1.2 Vector product                                                        4
    1.1.3 Triple product                                                        6
  1.2 Scalar fields, vector fields and vector functions                         7
    1.2.1 Scalar fields                                                         7
    1.2.2 Vector fields                                                         9
    1.2.3 Vector functions and curves                                           9
  1.3 Vector differential operators                                            11
    1.3.1 Partial derivatives and the gradient                                 11
    1.3.2 The Jacobian matrix                                                  14
    1.3.3 Second-order partial derivatives, the Laplacian and the Hessian      15
    1.3.4 The divergence operator                                              16
    1.3.5 The curl operator                                                    18
  1.4 Vector differential identities                                           19
  1.5 Special vector fields and potentials                                     22
  1.6 Total derivatives and chain rule for fields                              24
  1.7 Review exercises for Section 1                                           26

2 Vector integration                                                           27
  2.1 Line integrals                                                           27
    2.1.1 Line integrals of scalar fields                                      27
    2.1.2 Line integrals of vector fields                                      29
    2.1.3 Independence of path and line integrals for conservative fields      32
  2.2 Multiple integrals                                                       34
    2.2.1 Double integrals                                                     34
    2.2.2 Change of variables                                                  37
    2.2.3 Triple integrals                                                     42
    2.2.4 Surface integrals                                                    44
    2.2.5 Unit normal fields, orientations and flux integrals                  45
  2.3 Special coordinate systems                                               47
    2.3.1 Polar coordinates                                                    48
    2.3.2 Cylindrical coordinates                                              50
    2.3.3 Spherical coordinates                                                53

3 Green's, divergence and Stokes' theorems                                     57
  3.1 Green's theorem                                                          58
  3.2 The divergence theorem                                                   64
  3.3 Stokes' theorem                                                          70

General overview of the notes                                                  74

References to the relevant sections of the textbook (table)                    75

Contents                                                                       76
