
Contents

Articles
Fourier transform
Convolution
Convolution theorem
Laplace transform
Dirac delta function

References
Article Sources and Contributors
Image Sources, Licenses and Contributors

Article Licenses
License

Fourier transform

The Fourier transform (English pronunciation: /ˈfʊrieɪ/), named after Joseph Fourier, is a mathematical transformation employed to transform signals between the time (or spatial) domain and the frequency domain; it has many applications in physics and engineering. It is reversible, being able to transform from either domain to the other. The term itself refers both to the transform operation and to the function it produces. In the case of a periodic function over time (for example, a continuous but not necessarily sinusoidal musical sound), the Fourier transform can be simplified to the calculation of a discrete set of complex amplitudes, called Fourier series coefficients. They represent the frequency spectrum of the original time-domain signal. Also, when a time-domain function is sampled to facilitate storage or computer processing, it is still possible to recreate a version of the original Fourier transform according to the Poisson summation formula, also known as the discrete-time Fourier transform. See also Fourier analysis and List of Fourier-related transforms.

Definition
There are several common conventions for defining the Fourier transform of an integrable function f : R → C (Kaiser 1994, p. 29), (Rahman 2011, p. 11). This article will use the following definition:

\hat{f}(\xi) = \int_{-\infty}^{\infty} f(x)\, e^{-2\pi i x \xi}\, dx,   for any real number \xi.

When the independent variable x represents time (with SI unit of seconds), the transform variable \xi represents frequency (in hertz). Under suitable conditions, f is determined by \hat{f} via the inverse transform:

f(x) = \int_{-\infty}^{\infty} \hat{f}(\xi)\, e^{2\pi i x \xi}\, d\xi,   for any real number x.

The statement that f can be reconstructed from \hat{f} is known as the Fourier inversion theorem, and was first

introduced in Fourier's Analytical Theory of Heat (Fourier 1822, p. 525), (Fourier & Freeman 1878, p. 408), although what would be considered a proof by modern standards was not given until much later (Titchmarsh 1948, p. 1). The functions f and \hat{f} are often referred to as a Fourier integral pair or Fourier transform pair (Rahman 2011, p. 10). For other common conventions and notations, including using the angular frequency \omega instead of the frequency \xi, see Other conventions and Other notations below. The Fourier transform on Euclidean space is treated separately, in which the variable x often represents position and \xi momentum.
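The transform pair above can be checked numerically. The sketch below is an illustration added here (the helper name `fourier_transform` is a choice made for this example, not a library API): it approximates the defining integral by a Riemann sum and verifies that the Gaussian e^{-\pi x^2} is its own Fourier transform under this convention.

```python
import numpy as np

# Riemann-sum approximation of F(xi) = integral f(x) exp(-2*pi*i*x*xi) dx.
def fourier_transform(f, xis, x):
    dx = x[1] - x[0]
    return np.array([np.sum(f(x) * np.exp(-2j * np.pi * x * xi)) * dx
                     for xi in xis])

x = np.linspace(-10.0, 10.0, 4001)
f = lambda t: np.exp(-np.pi * t**2)   # Gaussian, self-reciprocal under this convention
xis = np.array([0.0, 0.5, 1.0])
F = fourier_transform(f, xis, x)
print(np.allclose(F.real, np.exp(-np.pi * xis**2), atol=1e-6))  # True
```

Because the Gaussian decays rapidly and is smooth, even this crude quadrature reproduces the transform to high accuracy.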


Introduction
The motivation for the Fourier transform comes from the study of Fourier series. In the study of Fourier series, complicated but periodic functions are written as the sum of simple waves mathematically represented by sines and cosines. The Fourier transform is an extension of the Fourier series that results when the period of the represented function is lengthened and allowed to approach infinity (Taneja 2008, p. 192).

[Figure: The Fourier transform relates the function's time domain, shown in red, to the function's frequency domain, shown in blue. The component frequencies, spread across the frequency spectrum, are represented as peaks in the frequency domain.]

Due to the properties of sine and cosine, it is possible to recover the amplitude of each wave in a Fourier series using an integral. In many cases it is desirable to use Euler's formula, which states that e^{2\pi i\theta} = \cos(2\pi\theta) + i\sin(2\pi\theta), to write Fourier series in terms of the basic waves e^{2\pi i\theta}. This has the advantage of simplifying many of the formulas involved, and provides a formulation for Fourier series that more closely resembles the definition followed in this article. Re-writing sines and cosines as complex exponentials makes it necessary for the Fourier coefficients to be complex valued. The usual interpretation of this complex number is that it gives both the amplitude (or size) of the wave present in the function and the phase (or the initial angle) of the wave. These complex exponentials sometimes contain negative "frequencies". If \theta is measured in seconds, then the waves e^{2\pi i\theta} and e^{-2\pi i\theta} both complete one cycle per second, but they represent different frequencies in the Fourier transform. Hence, frequency no longer measures the number of cycles per unit time, but is still closely related. There is a close connection between the definition of Fourier series and the Fourier transform for functions f which are zero outside of an interval.
For such a function, we can calculate its Fourier series on any interval that includes the points where f is not identically zero. The Fourier transform is also defined for such a function. As we increase the length of the interval on which we calculate the Fourier series, the Fourier series coefficients begin to look like the Fourier transform, and the sum of the Fourier series of f begins to look like the inverse Fourier transform. To explain this more precisely, suppose that T is large enough so that the interval [−T/2, T/2] contains the interval on which f is not identically zero. Then the n-th series coefficient c_n is given by:

c_n = \frac{1}{T} \int_{-T/2}^{T/2} f(x)\, e^{-2\pi i (n/T) x}\, dx.

Comparing this to the definition of the Fourier transform, it follows that

c_n = \frac{1}{T}\, \hat{f}\!\left(\frac{n}{T}\right),

since f(x) is zero outside [−T/2, T/2]. Thus the Fourier coefficients are just the values of the Fourier transform sampled on a grid of width 1/T, multiplied by the grid width 1/T. Under appropriate conditions, the sum of the Fourier series of f will equal the function f. In other words, f can be written:

f(x) = \sum_{n=-\infty}^{\infty} c_n\, e^{2\pi i (n/T) x} = \sum_{n=-\infty}^{\infty} \hat{f}(\xi_n)\, e^{2\pi i \xi_n x}\, \Delta\xi,

where the last sum is simply the first sum rewritten using the definitions \xi_n = n/T and \Delta\xi = (n+1)/T − n/T = 1/T. This second sum is a Riemann sum, and so by letting T → ∞ it will converge to the integral for the inverse Fourier transform given in the definition section. Under suitable conditions this argument may be made precise (Stein & Shakarchi 2003). In the study of Fourier series the numbers c_n could be thought of as the "amount" of the wave e^{2\pi i (n/T) x} present in the Fourier series of f. Similarly, as seen above, the Fourier transform can be thought of as a function that measures how much of each individual frequency is present in our function f, and we can recombine these waves by using an integral (or

"continuous sum") to reproduce the original function.
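The sampling relation c_n = (1/T) f̂(n/T) described above can be illustrated numerically. The sketch below (an illustration added here; the helper `coeff` is a name chosen for this example) uses a unit rectangular pulse, whose continuous transform is sinc(ξ) = sin(πξ)/(πξ).

```python
import numpy as np

# For f supported in [-1/2, 1/2], the Fourier series coefficients on
# [-T/2, T/2] are samples of the continuous transform: c_n = (1/T) F(n/T).
T = 8.0
x = np.linspace(-T / 2, T / 2, 16001)
dx = x[1] - x[0]
f = np.where(np.abs(x) <= 0.5, 1.0, 0.0)   # unit rectangular pulse

def coeff(n):
    """n-th Fourier series coefficient of f on [-T/2, T/2]."""
    return np.sum(f * np.exp(-2j * np.pi * n * x / T)) * dx / T

# Compare against F(n/T)/T, with F(xi) = sinc(xi) for the rect pulse.
for n in [0, 1, 2, 5]:
    assert abs(coeff(n) - np.sinc(n / T) / T) < 1e-3
print("ok")
```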

Example
The following images provide a visual illustration of how the Fourier transform measures whether a frequency is present in a particular function. The depicted function f(t) = \cos(6\pi t)\, e^{-\pi t^2} oscillates at 3 hertz (if t measures seconds) and tends quickly to 0. (The second factor in this equation is an envelope function that shapes the continuous sinusoid into a short pulse. Its general form is a Gaussian function.) This function was specially chosen to have a real Fourier transform that can easily be plotted. The first image contains its graph. In order to calculate \hat{f}(3) we must integrate e^{-2\pi i (3t)} f(t). The second image shows the plot of the real and imaginary parts of this function. The real part of the integrand is almost always positive, because when f(t) is negative, the real part of e^{-2\pi i (3t)} is negative as well. Because they oscillate at the same rate, when f(t) is positive, so is the real part of e^{-2\pi i (3t)}. The result is that when you integrate the real part of the integrand you get a relatively large number (in this case 0.5). On the other hand, when you try to measure a frequency that is not present, as in the case when we look at \hat{f}(5), the integrand oscillates enough so that the integral is very small. The general situation may be a bit more complicated than this, but this in spirit is how the Fourier transform measures how much of an individual frequency is present in a function f(t).

Original function, showing oscillation at 3 hertz.

Real and imaginary parts of integrand for Fourier transform at 3 hertz

Real and imaginary parts of integrand for Fourier transform at 5 hertz

Fourier transform with 3 and 5 hertz labeled.
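The example above can be reproduced numerically. The sketch below (an illustration added here) integrates e^{-2\pi i \xi t} f(t) for f(t) = cos(6\pi t) e^{-\pi t^2} at 3 Hz (a frequency that is present) and at 5 Hz (one that is not).

```python
import numpy as np

t = np.linspace(-8.0, 8.0, 8001)
dt = t[1] - t[0]
f = np.cos(6 * np.pi * t) * np.exp(-np.pi * t**2)

def F(xi):
    # Riemann-sum approximation of the Fourier transform at frequency xi.
    return np.sum(f * np.exp(-2j * np.pi * xi * t)) * dt

print(round(F(3).real, 3))   # 0.5 -- the "relatively large number" in the text
print(abs(F(5)) < 1e-3)      # True -- almost no 5 Hz content
```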

Properties of the Fourier transform


Here we assume f(x), g(x) and h(x) are integrable functions: Lebesgue-measurable on the real line and satisfying

\int_{-\infty}^{\infty} |f(x)|\, dx < \infty.

We denote the Fourier transforms of these functions by \hat{f}(\xi), \hat{g}(\xi) and \hat{h}(\xi) respectively.

Basic properties
The Fourier transform has the following basic properties (Pinsky 2002).

Linearity
For any complex numbers a and b, if h(x) = a f(x) + b g(x), then \hat{h}(\xi) = a\hat{f}(\xi) + b\hat{g}(\xi).

Translation
For any real number x_0, if h(x) = f(x − x_0), then \hat{h}(\xi) = e^{-2\pi i x_0 \xi}\, \hat{f}(\xi).

Modulation
For any real number \xi_0, if h(x) = e^{2\pi i x \xi_0} f(x), then \hat{h}(\xi) = \hat{f}(\xi − \xi_0).

Scaling


For a non-zero real number a, if h(x) = f(ax), then \hat{h}(\xi) = \frac{1}{|a|}\, \hat{f}\!\left(\frac{\xi}{a}\right). The case a = −1 leads to the time-reversal property, which states: if h(x) = f(−x), then \hat{h}(\xi) = \hat{f}(−\xi).

Conjugation
If h(x) = \overline{f(x)}, then \hat{h}(\xi) = \overline{\hat{f}(-\xi)}.

In particular, if f is real, then one has the reality condition \hat{f}(-\xi) = \overline{\hat{f}(\xi)}, that is, \hat{f} is a Hermitian function. And if f is purely imaginary, then \hat{f}(-\xi) = -\overline{\hat{f}(\xi)}.

Integration
Substituting \xi = 0 in the definition, we obtain

\hat{f}(0) = \int_{-\infty}^{\infty} f(x)\, dx.

That is, the evaluation of the Fourier transform at the origin (\xi = 0) equals the integral of f over all of its domain.
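The translation and scaling rules above can be verified numerically. The sketch below is illustrative only (the helper `ft` is a name chosen for this example); it uses a Riemann-sum transform of a Gaussian test function.

```python
import numpy as np

x = np.linspace(-12.0, 12.0, 6001)
dx = x[1] - x[0]
f = lambda t: np.exp(-np.pi * t**2)

def ft(g, xi):
    # Riemann-sum approximation of the Fourier transform of g at frequency xi.
    return np.sum(g(x) * np.exp(-2j * np.pi * x * xi)) * dx

xi, x0, a = 0.7, 1.5, 2.0

# Translation: h(x) = f(x - x0)  =>  h^(xi) = exp(-2*pi*i*x0*xi) * f^(xi)
print(np.isclose(ft(lambda t: f(t - x0), xi),
                 np.exp(-2j * np.pi * x0 * xi) * ft(f, xi)))   # True

# Scaling: h(x) = f(a*x)  =>  h^(xi) = (1/|a|) * f^(xi/a)
print(np.isclose(ft(lambda t: f(a * t), xi),
                 ft(f, xi / a) / abs(a)))                      # True
```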

Invertibility and periodicity


Under suitable conditions on the function f, it can be recovered from its Fourier transform \hat{f}. Indeed, denoting the Fourier transform operator by \mathcal{F}, so \mathcal{F}f = \hat{f}, then for suitable functions, applying the Fourier transform twice simply flips the function: (\mathcal{F}^2 f)(x) = f(−x), which can be interpreted as "reversing time". Since reversing time is two-periodic, applying this twice yields \mathcal{F}^4 f = f, so the Fourier transform operator is four-periodic, and similarly the inverse Fourier transform can be obtained by applying the Fourier transform three times: \mathcal{F}^3 \hat{f} = f. In particular the Fourier transform is invertible (under suitable conditions). More precisely, defining the parity operator \mathcal{P} that inverts time, (\mathcal{P}f)(x) = f(−x):

\mathcal{F}^0 = \mathrm{Id}, \quad \mathcal{F}^1 = \mathcal{F}, \quad \mathcal{F}^2 = \mathcal{P}, \quad \mathcal{F}^3 = \mathcal{F}^{-1} = \mathcal{P}\mathcal{F} = \mathcal{F}\mathcal{P}, \quad \mathcal{F}^4 = \mathrm{Id}.

These equalities of operators require careful definition of the space of functions in question, defining equality of functions (equality at every point? equality almost everywhere?) and defining equality of operators, that is, defining the topology on the function space and operator space in question. These are not true for all functions, but are true under various conditions, which are the content of the various forms of the Fourier inversion theorem. This four-fold periodicity of the Fourier transform is similar to a rotation of the plane by 90°, particularly as the two-fold iteration yields a reversal, and in fact this analogy can be made precise. While the Fourier transform can simply be interpreted as switching the time domain and the frequency domain, with the inverse Fourier transform switching them back, more geometrically it can be interpreted as a rotation by 90° in the time–frequency domain (considering time as the x-axis and frequency as the y-axis), and the Fourier transform can be generalized to the fractional Fourier transform, which involves rotations by other angles. This can be further generalized to linear canonical transformations, which can be visualized as the action of the special linear group SL2(R) on the time–frequency plane, with the preserved symplectic form corresponding to the uncertainty principle, below. This approach is particularly studied in signal processing, under time–frequency analysis.
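A discrete analogue of this four-periodicity can be seen with the DFT (a sketch added here, not part of the original text): applying the DFT twice returns the sequence "time-reversed" up to a factor of N, and applying it four times returns the original sequence.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(16)
N = len(x)

twice = np.fft.fft(np.fft.fft(x)) / N
reversed_x = x[(-np.arange(N)) % N]        # x[0], x[N-1], ..., x[1]
print(np.allclose(twice, reversed_x))      # True: F^2 reverses "time"

four = np.fft.fft(np.fft.fft(np.fft.fft(np.fft.fft(x)))) / N**2
print(np.allclose(four, x))                # True: F^4 is the identity
```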


Uniform continuity and the RiemannLebesgue lemma


The Fourier transform may be defined in some cases for non-integrable functions, but the Fourier transforms of integrable functions have several strong properties. The Fourier transform \hat{f} of any integrable function f is uniformly continuous and satisfies \|\hat{f}\|_\infty \le \|f\|_1 (Katznelson 1976). By the Riemann–Lebesgue lemma (Stein & Weiss 1971),

\hat{f}(\xi) \to 0 \quad \text{as } |\xi| \to \infty.

However, \hat{f} need not be integrable. For example, the Fourier

The rectangular function is Lebesgue integrable.

transform of the rectangular function, which is integrable, is the sinc function, which is not Lebesgue integrable, because its improper integrals behave analogously to the alternating harmonic series, converging to a sum without being absolutely convergent. It is not generally possible to write the inverse transform as a Lebesgue integral. However, when both f and \hat{f} are integrable, the inverse equality

f(x) = \int_{-\infty}^{\infty} \hat{f}(\xi)\, e^{2\pi i x \xi}\, d\xi

holds almost everywhere. That is, the Fourier transform is injective on L1(R). (But if f is continuous, then equality holds for every x.)

The sinc function, which is the Fourier transform of the rectangular function, is bounded and continuous, but not Lebesgue integrable.

Plancherel theorem and Parseval's theorem


Let f(x) and g(x) be integrable, and let \hat{f}(\xi) and \hat{g}(\xi) be their Fourier transforms. If f(x) and g(x) are also square-integrable, then we have Parseval's theorem (Rudin 1987, p. 187):

\int_{-\infty}^{\infty} f(x)\, \overline{g(x)}\, dx = \int_{-\infty}^{\infty} \hat{f}(\xi)\, \overline{\hat{g}(\xi)}\, d\xi,

where the bar denotes complex conjugation. The Plancherel theorem, which is equivalent to Parseval's theorem, states (Rudin 1987, p. 186):

\int_{-\infty}^{\infty} |f(x)|^2\, dx = \int_{-\infty}^{\infty} |\hat{f}(\xi)|^2\, d\xi.

The Plancherel theorem makes it possible to extend the Fourier transform, by a continuity argument, to a unitary operator on L2(R). On L1(R) ∩ L2(R), this extension agrees with the original Fourier transform defined on L1(R), thus enlarging the domain of the Fourier transform to L1(R) + L2(R) (and consequently to Lp(R) for 1 ≤ p ≤ 2). The Plancherel theorem has the interpretation in the sciences that the Fourier transform preserves the energy of the original quantity. Depending on the author, either of these theorems might be referred to as the Plancherel theorem or as Parseval's theorem. See Pontryagin duality for a general formulation of this concept in the context of locally compact abelian groups.
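The energy-preservation interpretation has a direct discrete counterpart (an illustration added here): with the unitary "ortho" normalization of the DFT, a sequence and its transform carry the same energy.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(1024)
X = np.fft.fft(x, norm="ortho")   # unitary DFT
# Plancherel: the sum of |x|^2 equals the sum of |X|^2.
print(np.isclose(np.sum(np.abs(x)**2), np.sum(np.abs(X)**2)))  # True
```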


Poisson summation formula


The Poisson summation formula (PSF) is an equation that relates the Fourier series coefficients of the periodic summation of a function to values of the function's continuous Fourier transform. It has a variety of useful forms that are derived from the basic one by application of the Fourier transform's scaling and time-shifting properties. The frequency-domain dual of the standard PSF is also called the discrete-time Fourier transform, which leads directly to a popular, graphical, frequency-domain representation of the phenomenon of aliasing, and to a proof of the Nyquist–Shannon sampling theorem.
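In its basic form the PSF states that \sum_n f(n) = \sum_k \hat{f}(k). The sketch below (an illustration added here) checks this for a Gaussian f(x) = e^{-\pi a x^2}, whose transform under this article's convention is \hat{f}(\xi) = a^{-1/2} e^{-\pi \xi^2 / a}.

```python
import numpy as np

a = 0.5
n = np.arange(-50, 51)   # both sides converge very fast, so +-50 suffices
lhs = np.sum(np.exp(-np.pi * a * n**2))             # sum of f(n)
rhs = np.sum(np.exp(-np.pi * n**2 / a) / np.sqrt(a))  # sum of F(k)
print(np.isclose(lhs, rhs))  # True
```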

Convolution theorem
The Fourier transform translates between convolution and multiplication of functions. If f(x) and g(x) are integrable functions with Fourier transforms \hat{f}(\xi) and \hat{g}(\xi) respectively, then the Fourier transform of the convolution is given by the product of the Fourier transforms \hat{f}(\xi) and \hat{g}(\xi) (under other conventions for the definition of the Fourier transform a constant factor may appear). This means that if:

h(x) = (f * g)(x) = \int_{-\infty}^{\infty} f(y)\, g(x - y)\, dy,

where * denotes the convolution operation, then:

\hat{h}(\xi) = \hat{f}(\xi)\, \hat{g}(\xi).

In linear time-invariant (LTI) system theory, it is common to interpret g(x) as the impulse response of an LTI system with input f(x) and output h(x), since substituting the unit impulse for f(x) yields h(x) = g(x). In this case, \hat{g}(\xi) represents the frequency response of the system. Conversely, if f(x) can be decomposed as the product of two square-integrable functions p(x) and q(x), then the Fourier transform of f(x) is given by the convolution of the respective Fourier transforms \hat{p}(\xi) and \hat{q}(\xi).
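A discrete illustration of the convolution theorem (added here as a sketch): the circular convolution of two sequences equals the inverse DFT of the pointwise product of their DFTs.

```python
import numpy as np

rng = np.random.default_rng(2)
f = rng.standard_normal(64)
g = rng.standard_normal(64)
N = len(f)

# Circular convolution computed directly from the definition...
direct = np.array([sum(f[n] * g[(k - n) % N] for n in range(N))
                   for k in range(N)])
# ...and via the DFT, using the convolution theorem.
via_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real
print(np.allclose(direct, via_fft))  # True
```

This identity is why FFT-based convolution is the standard fast algorithm for filtering in LTI system implementations.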

Cross-correlation theorem
In an analogous manner, it can be shown that if h(x) is the cross-correlation of f(x) and g(x):

h(x) = (f \star g)(x) = \int_{-\infty}^{\infty} \overline{f(y)}\, g(x + y)\, dy,

then the Fourier transform of h(x) is:

\hat{h}(\xi) = \overline{\hat{f}(\xi)}\, \hat{g}(\xi).

As a special case, the autocorrelation of the function f(x) is:

h(x) = (f \star f)(x) = \int_{-\infty}^{\infty} \overline{f(y)}\, f(x + y)\, dy,

for which

\hat{h}(\xi) = \overline{\hat{f}(\xi)}\, \hat{f}(\xi) = |\hat{f}(\xi)|^2.
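The autocorrelation result has a discrete counterpart (a sketch added here): for a real sequence, the DFT of its circular autocorrelation is the power spectrum |F|^2, so the autocorrelation itself is the inverse DFT of the power spectrum.

```python
import numpy as np

rng = np.random.default_rng(3)
f = rng.standard_normal(32)
N = len(f)

power = np.abs(np.fft.fft(f))**2
via_fft = np.fft.ifft(power).real      # circular autocorrelation via |F|^2
# Direct circular autocorrelation: h[k] = sum_n f[n] * f[(n+k) mod N].
direct = np.array([np.sum(f * np.roll(f, -k)) for k in range(N)])
print(np.allclose(via_fft, direct))  # True
```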


Eigenfunctions
One important choice of an orthonormal basis for L2(R) is given by the Hermite functions

\psi_n(x) = \frac{2^{1/4}}{\sqrt{n!}}\, e^{-\pi x^2}\, \mathrm{He}_n\!\left(2x\sqrt{\pi}\right),

where He_n(x) are the "probabilist's" Hermite polynomials, defined by

\mathrm{He}_n(x) = (-1)^n\, e^{x^2/2}\, \frac{d^n}{dx^n}\, e^{-x^2/2}.

Under this convention for the Fourier transform, we have that \hat{\psi}_n(\xi) = (-i)^n\, \psi_n(\xi). In other words, the Hermite functions form a complete orthonormal system of eigenfunctions for the Fourier transform on L2(R) (Pinsky 2002). However, this choice of eigenfunctions is not unique. There are only four different eigenvalues of the Fourier transform (±1 and ±i), and any linear combination of eigenfunctions with the same eigenvalue gives another eigenfunction. As a consequence of this, it is possible to decompose L2(R) as a direct sum of four spaces H0, H1, H2, and H3, where the Fourier transform acts on H_k simply by multiplication by (−i)^k. Since the complete set of Hermite functions provides a resolution of the identity, the Fourier transform can be represented by such a sum of terms weighted by the above eigenvalues, and these sums can be explicitly summed. This approach to define the Fourier transform was first proposed by Norbert Wiener (Duoandikoetxea 2001). Among other properties, Hermite functions decrease exponentially fast in both frequency and time domains, and they are thus used to define a generalization of the Fourier transform, namely the fractional Fourier transform used in time-frequency analysis (Boashash 2003). In physics, this transform was introduced by Edward Condon (Condon 1937).
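The first two eigenfunction relations can be checked numerically (an illustration added here; the helper `ft` is a name chosen for this example): e^{-\pi x^2} has eigenvalue 1, and x e^{-\pi x^2} has eigenvalue −i.

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 8001)
dx = x[1] - x[0]

def ft(vals, xi):
    # Riemann-sum approximation of this article's Fourier transform.
    return np.sum(vals * np.exp(-2j * np.pi * x * xi)) * dx

xi = 0.8
psi0 = np.exp(-np.pi * x**2)         # eigenvalue (-i)^0 = 1
psi1 = x * np.exp(-np.pi * x**2)     # eigenvalue (-i)^1 = -i
print(np.isclose(ft(psi0, xi), np.exp(-np.pi * xi**2)))             # True
print(np.isclose(ft(psi1, xi), -1j * xi * np.exp(-np.pi * xi**2)))  # True
```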

Fourier transform on Euclidean space


The Fourier transform can be defined in any arbitrary number of dimensions n. As with the one-dimensional case, there are many conventions. For an integrable function f(x), this article takes the definition:

\hat{f}(\xi) = \int_{\mathbb{R}^n} f(x)\, e^{-2\pi i\, x \cdot \xi}\, dx,

where x and \xi are n-dimensional vectors, and x · \xi is the dot product of the vectors. The dot product is sometimes written as ⟨x, \xi⟩. All of the basic properties listed above hold for the n-dimensional Fourier transform, as do Plancherel's and Parseval's theorems. When the function is integrable, the Fourier transform is still uniformly continuous and the Riemann–Lebesgue lemma holds (Stein & Weiss 1971).

Uncertainty principle
Generally speaking, the more concentrated f(x) is, the more spread out its Fourier transform \hat{f}(\xi) must be. In particular, the scaling property of the Fourier transform may be seen as saying: if we "squeeze" a function in x, its Fourier transform "stretches out" in \xi. It is not possible to arbitrarily concentrate both a function and its Fourier transform. The trade-off between the compaction of a function and its Fourier transform can be formalized in the form of an uncertainty principle by viewing a function and its Fourier transform as conjugate variables with respect to the symplectic form on the time–frequency domain: from the point of view of the linear canonical transformation, the Fourier transform is rotation by 90° in the time–frequency domain, and it preserves the symplectic form. Suppose f(x) is an integrable and square-integrable function. Without loss of generality, assume that f(x) is normalized:

\int_{-\infty}^{\infty} |f(x)|^2\, dx = 1.


It follows from the Plancherel theorem that \hat{f}(\xi) is also normalized.

The spread around x = 0 may be measured by the dispersion about zero (Pinsky 2002, p. 131), defined by

D_0(f) = \int_{-\infty}^{\infty} x^2\, |f(x)|^2\, dx.

In probability terms, this is the second moment of |f(x)|^2 about zero. The uncertainty principle states that, if f(x) is absolutely continuous and the functions x·f(x) and f′(x) are square integrable, then (Pinsky 2002)

D_0(f)\, D_0(\hat{f}) \ge \frac{1}{16\pi^2}.

The equality is attained only in the case f(x) = C_1\, e^{-\pi x^2/\sigma^2} (hence \hat{f}(\xi) = \sigma C_1\, e^{-\pi \sigma^2 \xi^2}), where \sigma > 0 is arbitrary and C_1 is such that f is L2-normalized (Pinsky 2002). In other words, where f is a (normalized) Gaussian function with variance \sigma^2, centered at zero, its Fourier transform is a Gaussian function with variance \sigma^{-2}. In fact, this inequality implies that:

\left( \int_{-\infty}^{\infty} (x - x_0)^2\, |f(x)|^2\, dx \right) \left( \int_{-\infty}^{\infty} (\xi - \xi_0)^2\, |\hat{f}(\xi)|^2\, d\xi \right) \ge \frac{1}{16\pi^2}

for any x_0, \xi_0 \in \mathbb{R} (Stein & Shakarchi 2003, p. 158). In quantum mechanics, the momentum and position wave functions are Fourier transform pairs, to within a factor of Planck's constant. With this constant properly taken into account, the inequality above becomes the statement of the Heisenberg uncertainty principle (Stein & Shakarchi 2003, p. 158). A stronger uncertainty principle is the Hirschman uncertainty principle, which is expressed as:

H\!\left(|f|^2\right) + H\!\left(|\hat{f}|^2\right) \ge \log\frac{e}{2},

where H(p) is the differential entropy of the probability density function p(x):

H(p) = -\int_{-\infty}^{\infty} p(x)\, \log p(x)\, dx,

where the logarithms may be in any base that is consistent. The equality is attained for a Gaussian, as in the previous case.
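The equality case of the dispersion bound can be verified numerically (a sketch added here): the L2-normalized Gaussian 2^{1/4} e^{-\pi x^2} is its own Fourier transform, so the product of the two dispersions is just D_0(f)^2, which should equal 1/(16\pi^2).

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 8001)
dx = x[1] - x[0]
f = 2**0.25 * np.exp(-np.pi * x**2)          # L2-normalized Gaussian
assert np.isclose(np.sum(f**2) * dx, 1.0)    # check the normalization

D0 = np.sum(x**2 * f**2) * dx                # dispersion about zero
# f is its own transform, so D0(f) * D0(F) = D0**2 attains the bound.
print(np.isclose(D0**2, 1 / (16 * np.pi**2)))  # True
```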

Spherical harmonics
Let the set of homogeneous harmonic polynomials of degree k on R^n be denoted by A_k. The set A_k consists of the solid spherical harmonics of degree k. The solid spherical harmonics play a similar role in higher dimensions to the Hermite polynomials in dimension one. Specifically, if f(x) = e^{-\pi|x|^2} P(x) for some P(x) in A_k, then \hat{f}(\xi) = i^{-k} f(\xi). Let the set H_k be the closure in L2(R^n) of linear combinations of functions of the form f(|x|)P(x) where P(x) is in A_k. The space L2(R^n) is then a direct sum of the spaces H_k, and the Fourier transform maps each space H_k to itself; it is possible to characterize the action of the Fourier transform on each space H_k (Stein & Weiss 1971). Let f(x) = f_0(|x|)P(x) (with P(x) in A_k); then \hat{f}(\xi) = F_0(|\xi|)\, P(\xi), where

F_0(r) = 2\pi\, i^{-k}\, r^{-(n+2k-2)/2} \int_0^\infty f_0(s)\, J_{(n+2k-2)/2}(2\pi r s)\, s^{(n+2k)/2}\, ds.

Here J_{(n+2k−2)/2} denotes the Bessel function of the first kind with order (n+2k−2)/2. When k = 0 this gives a useful formula for the Fourier transform of a radial function (Grafakos 2004). Note that this is essentially the Hankel transform. Moreover, there is a simple recursion relating the cases n+2 and n (Grafakos & Teschl 2013), allowing one to compute, e.g., the three-dimensional Fourier transform of a radial function from the one-dimensional one.


Restriction problems
In higher dimensions it becomes interesting to study restriction problems for the Fourier transform. The Fourier transform of an integrable function is continuous, and the restriction of this function to any set is defined. But for a square-integrable function the Fourier transform could be a general class of square-integrable functions. As such, the restriction of the Fourier transform of an L2(Rn) function cannot be defined on sets of measure 0. It is still an active area of study to understand restriction problems in Lp for 1 < p < 2. Surprisingly, it is possible in some cases to define the restriction of a Fourier transform to a set S, provided S has non-zero curvature. The case when S is the unit sphere in Rn is of particular interest. In this case the Tomas–Stein restriction theorem states that the restriction of the Fourier transform to the unit sphere in Rn is a bounded operator on Lp provided 1 ≤ p ≤ (2n + 2)/(n + 3). One notable difference between the Fourier transform in 1 dimension versus higher dimensions concerns the partial sum operator. Consider an increasing collection of measurable sets E_R indexed by R ∈ (0, ∞), such as balls of radius R centered at the origin, or cubes of side 2R. For a given integrable function f, consider the function f_R defined by:

f_R(x) = \int_{E_R} \hat{f}(\xi)\, e^{2\pi i\, x \cdot \xi}\, d\xi.

Suppose in addition that f ∈ Lp(Rn). For n = 1 and 1 < p < ∞, if one takes E_R = (−R, R), then f_R converges to f in Lp as R tends to infinity, by the boundedness of the Hilbert transform. Naively one may hope the same holds true for n > 1. In the case that E_R is taken to be a cube with side length R, then convergence still holds. Another natural candidate is the Euclidean ball E_R = {ξ : |ξ| < R}. In order for this partial sum operator to converge, it is necessary that the multiplier for the unit ball be bounded in Lp(Rn). For n ≥ 2 it is a celebrated theorem of Charles Fefferman that the multiplier for the unit ball is never bounded unless p = 2 (Duoandikoetxea 2001). In fact, when p ≠ 2, this shows that not only may f_R fail to converge to f in Lp, but for some functions f ∈ Lp(Rn), f_R is not even an element of Lp.

Fourier transform on function spaces


On Lp spaces
On L1

The definition of the Fourier transform by the integral formula

\hat{f}(\xi) = \int_{\mathbb{R}^n} f(x)\, e^{-2\pi i\, x \cdot \xi}\, dx

is valid for Lebesgue integrable functions f; that is, f ∈ L1(Rn). The Fourier transform \mathcal{F} : L1(Rn) → L∞(Rn) is a bounded operator. This follows from the observation that

|\hat{f}(\xi)| \le \int_{\mathbb{R}^n} |f(x)|\, dx,

which shows that its operator norm is bounded by 1. Indeed, it equals 1, which can be seen, for example, from the transform of the rect function. The image of L1 is a subset of the space C0(Rn) of continuous functions that tend to zero at infinity (the Riemann–Lebesgue lemma), although it is not the entire space. Indeed, there is no simple characterization of the image.

On L2

Since compactly supported smooth functions are integrable and dense in L2(Rn), the Plancherel theorem allows us to extend the definition of the Fourier transform to general functions in L2(Rn) by continuity arguments. The Fourier transform in L2(Rn) is no longer given by an ordinary Lebesgue integral, although it can be computed by an improper integral, here meaning that for an L2 function f,

\hat{f}(\xi) = \lim_{R \to \infty} \int_{|x| \le R} f(x)\, e^{-2\pi i\, x \cdot \xi}\, dx,

where the limit is taken in the L2 sense. Many of the properties of the Fourier transform in L1 carry over to L2, by a suitable limiting argument. Furthermore, \mathcal{F} : L2(Rn) → L2(Rn) is a unitary operator (Stein & Weiss 1971, Thm. 2.3). For an operator to be unitary it is sufficient to show that it is bijective and preserves the inner product, so in this case these follow from the Fourier inversion theorem combined with the fact that for any f, g ∈ L2(Rn) we have

\int_{\mathbb{R}^n} f(x)\, \overline{g(x)}\, dx = \int_{\mathbb{R}^n} \hat{f}(\xi)\, \overline{\hat{g}(\xi)}\, d\xi.


In particular, the image of L2(Rn) under the Fourier transform is L2(Rn) itself.

On other Lp

The definition of the Fourier transform can be extended to functions in Lp(Rn) for 1 ≤ p ≤ 2 by decomposing such functions into a fat tail part in L2 plus a fat body part in L1. In each of these spaces, the Fourier transform of a function in Lp(Rn) is in Lq(Rn), where q = p/(p − 1) is the Hölder conjugate of p, by the Hausdorff–Young inequality. However, except for p = 2, the image is not easily characterized. Further extensions become more technical. The Fourier transform of functions in Lp for the range 2 < p < ∞ requires the study of distributions (Katznelson 1976). In fact, it can be shown that there are functions in Lp with p > 2 so that the Fourier transform is not defined as a function (Stein & Weiss 1971).

Tempered distributions
One might consider enlarging the domain of the Fourier transform from L1 + L2 by considering generalized functions, or distributions. A distribution on Rn is a continuous linear functional on the space Cc∞(Rn) of compactly supported smooth functions, equipped with a suitable topology. The strategy is then to consider the action of the Fourier transform on Cc∞(Rn) and pass to distributions by duality. The obstruction to doing this is that the Fourier transform does not map Cc∞(Rn) to Cc∞(Rn). In fact the Fourier transform of an element in Cc∞(Rn) cannot vanish on an open set; see the above discussion on the uncertainty principle. The right space here is the slightly larger space of Schwartz functions. The Fourier transform is an automorphism on the Schwartz space, as a topological vector space, and thus induces an automorphism on its dual, the space of tempered distributions (Stein & Weiss 1971). The tempered distributions include all the integrable functions mentioned above, as well as well-behaved functions of polynomial growth and distributions of compact support. For the definition of the Fourier transform of a tempered distribution, let f and g be integrable functions, and let \hat{f} and \hat{g} be their Fourier transforms respectively. Then the Fourier transform obeys the following multiplication formula (Stein & Weiss 1971):

\int_{\mathbb{R}^n} \hat{f}(x)\, g(x)\, dx = \int_{\mathbb{R}^n} f(x)\, \hat{g}(x)\, dx.

Every integrable function f defines (induces) a distribution T_f by the relation

T_f(\varphi) = \int_{\mathbb{R}^n} f(x)\, \varphi(x)\, dx

for all Schwartz functions \varphi. So it makes sense to define the Fourier transform of T_f by

\hat{T}_f(\varphi) = T_f(\hat{\varphi})

for all Schwartz functions \varphi. Extending this to all tempered distributions T gives the general definition of the Fourier transform. Distributions can be differentiated, and the above-mentioned compatibility of the Fourier transform with differentiation and convolution remains true for tempered distributions.



Generalizations
FourierStieltjes transform
The Fourier transform of a finite Borel measure \mu on Rn is given by (Pinsky 2002, p. 256):

\hat{\mu}(\xi) = \int_{\mathbb{R}^n} e^{-2\pi i\, x \cdot \xi}\, d\mu(x).

This transform continues to enjoy many of the properties of the Fourier transform of integrable functions. One notable difference is that the Riemann–Lebesgue lemma fails for measures (Katznelson 1976). In the case that d\mu = f(x) dx, the formula above reduces to the usual definition for the Fourier transform of f. In the case that \mu is the probability distribution associated to a random variable X, the Fourier–Stieltjes transform is closely related to the characteristic function, but the typical conventions in probability theory take e^{i\xi x} instead of e^{-2\pi i\xi x} (Pinsky 2002). In the case when the distribution has a probability density function, this definition reduces to the Fourier transform applied to the probability density function, again with a different choice of constants. The Fourier transform may be used to give a characterization of measures. Bochner's theorem characterizes which functions may arise as the Fourier–Stieltjes transform of a positive measure on the circle (Katznelson 1976). Furthermore, the Dirac delta function, although not a function, is a finite Borel measure. Its Fourier transform is a constant function (whose specific value depends upon the form of the Fourier transform used).

Locally compact abelian groups


The Fourier transform may be generalized to any locally compact abelian group. A locally compact abelian group is an abelian group which is at the same time a locally compact Hausdorff topological space such that the group operation is continuous. If G is a locally compact abelian group, it has a translation-invariant measure \mu, called Haar measure. For a locally compact abelian group G, the set of irreducible, i.e. one-dimensional, unitary representations are called its characters. With its natural group structure and the topology of pointwise convergence, the set of characters \hat{G} is itself a locally compact abelian group, called the Pontryagin dual of G. For a function f in L1(G), its Fourier transform is defined by (Katznelson 1976):

\hat{f}(\xi) = \int_G \overline{\xi(x)}\, f(x)\, d\mu(x)   for any character \xi \in \hat{G}.

The Riemann–Lebesgue lemma holds in this case: \hat{f}(\xi) is a function vanishing at infinity on \hat{G}.

Gelfand transform
The Fourier transform is also a special case of the Gelfand transform. In this particular context, it is closely related to the Pontryagin duality map defined above. Given an abelian locally compact Hausdorff topological group G, as before we consider the space L1(G), defined using a Haar measure. With convolution as multiplication, L1(G) is an abelian Banach algebra. It also has an involution * given by

f^*(x) = \overline{f(x^{-1})}.

Taking the completion with respect to the largest possible C*-norm gives its enveloping C*-algebra, called the group C*-algebra C*(G) of G. (Any C*-norm on L1(G) is bounded by the L1 norm, therefore their supremum exists.) Given any abelian C*-algebra A, the Gelfand transform gives an isomorphism between A and C0(A^), where A^ is the set of multiplicative linear functionals, i.e. one-dimensional representations, on A with the weak-* topology. The map is simply given by

a \mapsto \left( \varphi \mapsto \varphi(a) \right).

It turns out that the multiplicative linear functionals of C*(G), after suitable identification, are exactly the characters of G, and the Gelfand transform, when restricted to the dense subset L1(G), is the Fourier–Pontryagin transform.


Non-abelian groups
The Fourier transform can also be defined for functions on a non-abelian group, provided that the group is compact. Removing the assumption that the underlying group is abelian, irreducible unitary representations need not always be one-dimensional. This means the Fourier transform on a non-abelian group takes values as Hilbert space operators (Hewitt & Ross 1970, Chapter 8). The Fourier transform on compact groups is a major tool in representation theory (Knapp 2001) and non-commutative harmonic analysis. Let G be a compact Hausdorff topological group. Let \Sigma denote the collection of all isomorphism classes of finite-dimensional irreducible unitary representations, along with a definite choice of representation U^{(\sigma)} on the Hilbert space H_\sigma of finite dimension d_\sigma for each \sigma \in \Sigma. If \mu is a finite Borel measure on G, then the Fourier–Stieltjes transform of \mu is the operator on H_\sigma defined by

\hat{\mu}(\sigma) = \int_G \overline{U^{(\sigma)}_g}\, d\mu(g),

where \overline{U^{(\sigma)}} is the complex-conjugate representation of U^{(\sigma)} acting on H_\sigma. If \mu is absolutely continuous with respect to the left-invariant probability measure \lambda on G, represented as d\mu = f\, d\lambda for some f \in L1(\lambda), one identifies the Fourier transform of f with the Fourier–Stieltjes transform of \mu. The mapping \mu \mapsto \hat{\mu} defines an isomorphism between the Banach space M(G) of finite Borel measures (see rca

space) and a closed subspace of the Banach space C_\infty(\Sigma) consisting of all sequences E = (E_\sigma) indexed by \Sigma of (bounded) linear operators E_\sigma : H_\sigma \to H_\sigma for which the norm

\|E\| = \sup_{\sigma \in \Sigma} \|E_\sigma\|

is finite. The "convolution theorem" asserts that, furthermore, this isomorphism of Banach spaces is in fact an isometric isomorphism of C*-algebras into a subspace of C_\infty(\Sigma). Multiplication on M(G) is given by convolution of measures and the involution * defined by

f^*(g) = \overline{f(g^{-1})},

and C() has a natural C*-algebra structure as Hilbert space operators. The Peter-Weyl theorem holds, and a version of the Fourier inversion formula (Plancherel's theorem) follows: if f L2(G), then

where the summation is understood as convergent in the L2 sense. The generalization of the Fourier transform to the noncommutative situation has also in part contributed to the development of noncommutative geometry.[citation needed] In this context, a categorical generalization of the Fourier transform to noncommutative groups is Tannaka-Krein duality, which replaces the group of characters with the category of representations. However, this loses the connection with harmonic functions.


Alternatives
In signal processing terms, a function (of time) is a representation of a signal with perfect time resolution, but no frequency information, while the Fourier transform has perfect frequency resolution, but no time information: the magnitude of the Fourier transform at a point is how much frequency content there is, but location is only given by phase (the argument of the Fourier transform at a point), and standing waves are not localized in time; a sine wave continues out to infinity, without decaying. This limits the usefulness of the Fourier transform for analyzing signals that are localized in time, notably transients, or any signal of finite extent. As alternatives to the Fourier transform, in time-frequency analysis, one uses time-frequency transforms or time-frequency distributions to represent signals in a form that has some time information and some frequency information; by the uncertainty principle, there is a trade-off between these. These can be generalizations of the Fourier transform, such as the short-time Fourier transform or fractional Fourier transform, or other functions to represent signals, as in wavelet transforms and chirplet transforms, with the wavelet analog of the (continuous) Fourier transform being the continuous wavelet transform (Boashash 2003).

Applications
Analysis of differential equations
Fourier transforms and the closely related Laplace transforms are widely used in solving differential equations. Some problems, such as certain differential equations, become easier to solve when the Fourier transform is applied; in that case the solution to the original problem is recovered using the inverse Fourier transform. The Fourier transform is compatible with differentiation in the following sense: if f(x) is a differentiable function with Fourier transform f̂(ξ), then the Fourier transform of its derivative is given by 2πiξ f̂(ξ). This can be used to transform differential equations into algebraic equations. This technique only applies to problems whose domain is the whole set of real numbers. By extending the Fourier transform to functions of several variables, partial differential equations with domain Rn can also be translated into algebraic equations.
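The differentiation rule can be checked numerically with its discrete analogue: multiplying the FFT by 2πiξ and transforming back recovers the derivative. A minimal sketch (the grid size and the choice of Gaussian test function are illustrative assumptions):

```python
import numpy as np

# Sketch: verify that differentiation becomes multiplication by 2*pi*i*xi
# in the (ordinary-frequency) Fourier domain, using spectral differentiation
# of the Gaussian f(x) = exp(-pi x^2).
N, L = 1024, 20.0          # illustrative grid parameters
dx = L / N
x = (np.arange(N) - N // 2) * dx

f = np.exp(-np.pi * x**2)
xi = np.fft.fftfreq(N, d=dx)                  # frequency variable in hertz
df_spectral = np.fft.ifft(2j * np.pi * xi * np.fft.fft(f)).real
df_exact = -2 * np.pi * x * f                 # analytic derivative of the Gaussian

max_err = np.max(np.abs(df_spectral - df_exact))
print(max_err)   # tiny: the FFT turns d/dx into multiplication by 2*pi*i*xi
```

The same idea, applied to a differential equation, converts derivatives into algebraic factors in the frequency domain.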

Fourier transform spectroscopy


The Fourier transform is also used in nuclear magnetic resonance (NMR) and in other kinds of spectroscopy, e.g. infrared (FTIR). In NMR an exponentially shaped free induction decay (FID) signal is acquired in the time domain and Fourier-transformed to a Lorentzian line-shape in the frequency domain. The Fourier transform is also used in magnetic resonance imaging (MRI) and mass spectrometry.

Quantum mechanics and signal processing


In quantum mechanics, Fourier transforms of solutions to the Schrödinger equation are known as momentum space (or k space) wave functions. They display the amplitudes for momenta, and their squared magnitude gives the probability density of the momenta. This is valid also for classical waves treated in signal processing, such as in swept-frequency radar, where data are taken in the frequency domain and transformed to the time domain, yielding range. The absolute square is then the power.


Other notations
Other common notations for f̂(ξ) include f̃(ξ) and F(ξ).

Denoting the Fourier transform by a capital letter corresponding to the letter of the function being transformed (such as f(x) and F(ξ)) is especially common in the sciences and engineering. In electronics, omega (ω) is often used instead of ξ due to its interpretation as angular frequency; sometimes the transform is written as F(jω), where j is the imaginary unit, to indicate its relationship with the Laplace transform, and sometimes it is written informally as F(2πf) in order to use ordinary frequency. The interpretation of the complex function f̂(ξ) may be aided by expressing it in polar coordinate form

f̂(ξ) = A(ξ) e^{iφ(ξ)}

in terms of the two real functions A(ξ) and φ(ξ), where

A(ξ) = |f̂(ξ)|

is the amplitude and

φ(ξ) = arg f̂(ξ)

is the phase (see arg function). Then the inverse transform can be written:

f(x) = ∫ A(ξ) e^{i(2πξx + φ(ξ))} dξ,

which is a recombination of all the frequency components of f(x). Each component is a complex sinusoid of the form e^{2πiξx} whose amplitude is A(ξ) and whose initial phase angle (at x = 0) is φ(ξ).

The Fourier transform may be thought of as a mapping on function spaces. This mapping is here denoted ℱ, and ℱ(f) is used to denote the Fourier transform of the function f. This mapping is linear, which means that ℱ can also be seen as a linear transformation on the function space, and implies that the standard notation in linear algebra of applying a linear transformation to a vector (here the function f) can be used to write ℱf instead of ℱ(f). Since the result of applying the Fourier transform is again a function, we can be interested in the value of this function evaluated at the value ξ for its variable, and this is denoted either as ℱf(ξ) or as (ℱf)(ξ). Notice that in the former case, it is implicitly understood that ℱ is applied first to f and then the resulting function is evaluated at ξ, not the other way around.

In mathematics and various applied sciences it is often necessary to distinguish between a function f and the value of f when its variable equals x, denoted f(x). This means that a notation like ℱ(f(x)) formally can be interpreted as the Fourier transform of the values of f at x. Despite this flaw, the previous notation appears frequently, often when a particular function or a function of a particular variable is to be transformed. For example, ℱ(rect(x)) = sinc(ξ) is sometimes used to express that the Fourier transform of a rectangular function is a sinc function, or ℱ(f(x + x0)) = ℱ(f(x)) e^{2πix0ξ} is used to express the shift property of the Fourier transform. Notice that the last example is only correct under the assumption that the transformed function is a function of x, not of x0.


Other conventions
The Fourier transform can also be written in terms of angular frequency, ω = 2πξ, whose units are radians per second. The substitution ξ = ω/(2π) into the formulas above produces this convention:

f̂(ω) = ∫ f(x) e^{−iω·x} dx.

Under this convention, the inverse transform becomes:

f(x) = (1/(2π)ⁿ) ∫ f̂(ω) e^{iω·x} dω.

Unlike the convention followed in this article, when the Fourier transform is defined this way, it is no longer a unitary transformation on L2(Rn). There is also less symmetry between the formulas for the Fourier transform and its inverse. Another convention is to split the factor of (2π)ⁿ evenly between the Fourier transform and its inverse, which leads to the definitions:

f̂(ω) = (1/(2π)^{n/2}) ∫ f(x) e^{−iω·x} dx,  f(x) = (1/(2π)^{n/2}) ∫ f̂(ω) e^{iω·x} dω.

Under this convention, the Fourier transform is again a unitary transformation on L2(Rn). It also restores the symmetry between the Fourier transform and its inverse. Variations of all three conventions can be created by conjugating the complex-exponential kernel of both the forward and the reverse transform. The signs must be opposites. Other than that, the choice is (again) a matter of convention.

Summary of popular forms of the Fourier transform


ordinary frequency ξ (hertz), unitary:
f̂(ξ) = ∫ f(x) e^{−2πix·ξ} dx,  f(x) = ∫ f̂(ξ) e^{2πix·ξ} dξ

angular frequency ω (rad/s), non-unitary:
f̂(ω) = ∫ f(x) e^{−iω·x} dx,  f(x) = (1/(2π)ⁿ) ∫ f̂(ω) e^{iω·x} dω

angular frequency ω (rad/s), unitary:
f̂(ω) = (1/(2π)^{n/2}) ∫ f(x) e^{−iω·x} dx,  f(x) = (1/(2π)^{n/2}) ∫ f̂(ω) e^{iω·x} dω

As discussed above, the characteristic function of a random variable is the same as the Fourier–Stieltjes transform of its distribution measure, but in this context it is typical to take a different convention for the constants. Typically the characteristic function is defined as

E(e^{it·X}) = ∫ e^{it·x} dμ_X(x).

As in the case of the "non-unitary angular frequency" convention above, no factor of 2π appears in either the integral or the exponent. Unlike any of the conventions appearing above, this convention takes the opposite sign in the exponent.
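This convention can be illustrated numerically: for a standard normal variable, the characteristic function is E(e^{itX}) = e^{−t²/2}. A minimal sketch (the grid and test point are illustrative assumptions):

```python
import numpy as np

# Sketch: approximate E[exp(i t X)] for X ~ N(0, 1) by integrating
# exp(i t x) against the density, and compare to exp(-t^2/2).
x = np.linspace(-10, 10, 20001)
dx = x[1] - x[0]
pdf = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

t = 1.7                                    # arbitrary test point (an assumption)
phi = np.sum(np.exp(1j * t * x) * pdf) * dx   # Riemann-sum approximation
print(abs(phi - np.exp(-t**2 / 2)))           # very small
```

Note the positive sign of the exponent in e^{itx}, opposite to the transform conventions above.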


Tables of important Fourier transforms


The following tables record some closed-form Fourier transforms. For functions f(x), g(x) and h(x), denote their Fourier transforms by f̂, ĝ, and ĥ, respectively. Transforms may be stated in any of the three common conventions summarized above. It may be useful to notice that entry 105 gives a relationship between the Fourier transform of a function and the original function, which can be seen as relating the Fourier transform and its inverse.

Functional relationships
The Fourier transforms in this table may be found in Erdélyi (1954) or Kammler (2000, appendix).
Rules below are stated in the unitary, ordinary-frequency convention f̂(ξ) = ∫ f(x) e^{−2πixξ} dx; the angular-frequency conventions differ only by the constants discussed above.

101. Linearity: a·f(x) + b·g(x) ⟷ a·f̂(ξ) + b·ĝ(ξ).
102. Shift in time domain: f(x − a) ⟷ e^{−2πiaξ} f̂(ξ).
103. Shift in frequency domain, dual of 102: e^{2πiax} f(x) ⟷ f̂(ξ − a).
104. Scaling in the time domain: f(ax) ⟷ (1/|a|) f̂(ξ/a). If |a| is large, then f(ax) is concentrated around 0 and (1/|a|) f̂(ξ/a) spreads out and flattens.
105. Duality: f̂(x) ⟷ f(−ξ). Here f̂ needs to be calculated using the same method as in the Fourier transform column. Results from swapping the "dummy" variables x and ξ.
106. Differentiation: f⁽ⁿ⁾(x) ⟷ (2πiξ)ⁿ f̂(ξ).
107. Dual of 106: xⁿ f(x) ⟷ (i/(2π))ⁿ f̂⁽ⁿ⁾(ξ).
108. Convolution: (f ∗ g)(x) ⟷ f̂(ξ) ĝ(ξ). The notation f ∗ g denotes the convolution of f and g; this rule is the convolution theorem.
109. Dual of 108: f(x) g(x) ⟷ (f̂ ∗ ĝ)(ξ).
110. Hermitian symmetry: for f(x) purely real, f̂(−ξ) equals the complex conjugate of f̂(ξ).
111. For f(x) a purely real even function, f̂(ξ) is a purely real even function.
112. For f(x) a purely real odd function, f̂(ξ) is a purely imaginary odd function.
113. Complex conjugation, generalization of 110: the conjugate of f(x) transforms to the conjugate of f̂(−ξ).


Square-integrable functions
The Fourier transforms in this table may be found in (Campbell & Foster 1948), (Erdélyi 1954), or the appendix of (Kammler 2000).
Rules below are stated in the unitary, ordinary-frequency convention.

201. rect(ax) ⟷ (1/|a|) sinc(ξ/a). The rectangular pulse and the normalized sinc function, here defined as sinc(x) = sin(πx)/(πx).
202. sinc(ax) ⟷ (1/|a|) rect(ξ/a). Dual of rule 201. The rectangular function is an ideal low-pass filter, and the sinc function is the non-causal impulse response of such a filter. The sinc function is defined here as sinc(x) = sin(πx)/(πx).
203. sinc²(ax) ⟷ (1/|a|) tri(ξ/a). The function tri(x) is the triangular function.
204. tri(ax) ⟷ (1/|a|) sinc²(ξ/a). Dual of rule 203.
205. e^{−ax} u(x) ⟷ 1/(a + 2πiξ). The function u(x) is the Heaviside unit step function and a > 0.
206. e^{−αx²} ⟷ √(π/α) e^{−π²ξ²/α}. This shows that, for the unitary Fourier transforms, the Gaussian function exp(−πx²) is its own Fourier transform for some choice of α. For this to be integrable we must have Re(α) > 0.
207. e^{−a|x|} ⟷ 2a/(a² + 4π²ξ²), for a > 0. That is, the Fourier transform of a decaying exponential function is a Lorentzian function.
208. sech(πx) ⟷ sech(πξ): hyperbolic secant is its own Fourier transform.
209. The Gauss–Hermite functions (a Hermite polynomial Hn times a Gaussian) are, for a = 1, eigenfunctions of the Fourier transform operator. For a derivation, see Hermite polynomial. The formula reduces to 206 for n = 0.
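Rule 206's self-transform property of the Gaussian can be verified by direct numerical integration; a minimal sketch (grid and sample frequencies are illustrative assumptions):

```python
import numpy as np

# Sketch: check that f(x) = exp(-pi x^2) is its own Fourier transform under
# the unitary ordinary-frequency convention fhat(xi) = int f(x) e^{-2 pi i x xi} dx.
x = np.linspace(-8, 8, 16001)
dx = x[1] - x[0]
f = np.exp(-np.pi * x**2)

errs = []
for xi in (0.0, 0.5, 1.3):                       # sample frequencies (assumptions)
    fhat = np.sum(f * np.exp(-2j * np.pi * x * xi)) * dx
    errs.append(abs(fhat - np.exp(-np.pi * xi**2)))
print(max(errs))   # very small: fhat(xi) matches exp(-pi xi^2)
```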


Distributions
The Fourier transforms in this table may be found in (Erdélyi 1954) or the appendix of (Kammler 2000).
Rules below are stated in the unitary, ordinary-frequency convention; entries whose closed forms are not reproduced here may be found in the cited tables.

301. 1 ⟷ δ(ξ). The distribution δ(ξ) denotes the Dirac delta function.
302. δ(x) ⟷ 1. Dual of rule 301.
303. e^{iax} ⟷ δ(ξ − a/(2π)). This follows from 103 and 301.
304. cos(ax) ⟷ [δ(ξ − a/(2π)) + δ(ξ + a/(2π))]/2. This follows from rules 101 and 303 using Euler's formula: cos(ax) = (e^{iax} + e^{−iax})/2.
305. sin(ax) ⟷ [δ(ξ − a/(2π)) − δ(ξ + a/(2π))]/(2i). This follows from 101 and 303 using sin(ax) = (e^{iax} − e^{−iax})/(2i).
306. cos(ax²) ⟷ √(π/a) cos(π²ξ²/a − π/4).
307. sin(ax²) ⟷ −√(π/a) sin(π²ξ²/a − π/4).
308. xⁿ ⟷ (i/(2π))ⁿ δ⁽ⁿ⁾(ξ). Here, n is a natural number and δ⁽ⁿ⁾(ξ) is the n-th distributional derivative of the Dirac delta function. This rule follows from rules 107 and 301. Combining this rule with 101, we can transform all polynomials.
309. 1/x ⟷ −iπ sgn(ξ). Here sgn(ξ) is the sign function. Note that 1/x is not a distribution; it is necessary to use the Cauchy principal value when testing against Schwartz functions. This rule is useful in studying the Hilbert transform.
310. 1/xⁿ ⟷ −iπ (−2πiξ)^{n−1}/(n − 1)! sgn(ξ). Here 1/xⁿ is the homogeneous distribution defined by the distributional derivative (−1)^{n−1}/(n − 1)! dⁿ⁻¹/dxⁿ⁻¹ log|x|.
311. |x|^α ⟷ −2 sin(πα/2) Γ(α + 1)/|2πξ|^{α+1}. This formula is valid for 0 > α > −1. For α > 0 some singular terms arise at the origin that can be found by differentiating 318. If Re α > −1, then |x|^α is a locally integrable function, and so a tempered distribution. The function α ↦ |x|^α is a holomorphic function from the right half-plane to the space of tempered distributions. It admits a unique meromorphic extension to a tempered distribution, also denoted |x|^α, for α ≠ −2, −4, ... (See homogeneous distribution.)
312. sgn(x) ⟷ 1/(iπξ). The dual of rule 309. This time the Fourier transform needs to be considered as a Cauchy principal value.
313. u(x) ⟷ (1/2)(δ(ξ) + 1/(iπξ)). The function u(x) is the Heaviside unit step function; this follows from rules 101, 301, and 312.
314. Σₙ δ(x − nT) ⟷ (1/T) Σₖ δ(ξ − k/T). This function is known as the Dirac comb. This result can be derived from 302 and 102, together with the fact that Σₙ e^{2πinx} = Σₖ δ(x − k) as distributions.
315. J0(x) ⟷ 2 rect(πξ)/√(1 − 4π²ξ²). The function J0(x) is the zeroth order Bessel function of first kind.
316. Jn(x) ⟷ 2(−i)ⁿ Tn(2πξ) rect(πξ)/√(1 − 4π²ξ²). This is a generalization of 315. The function Jn(x) is the n-th order Bessel function of first kind. The function Tn(x) is the Chebyshev polynomial of the first kind.
317. log|x| ⟷ −(1/2)(1/|ξ|) − γ δ(ξ), where 1/|ξ| is understood as a suitably regularized distribution and γ is the Euler–Mascheroni constant.
318. (Transform of a one-sided power involving the Heaviside function u; closed form in the cited tables.) This formula is valid for 1 > α > 0. Use differentiation to derive the formula for higher exponents.

Two-dimensional functions
400. Definition: f̂(ξx, ξy) = ∬ f(x, y) e^{−2πi(xξx + yξy)} dx dy. The variables x, y, ξx, ξy (and their angular-frequency counterparts ωx, ωy) are real numbers. The integrals are taken over the entire plane.
401. Both the function and its transform are Gaussians, which may not have unit volume.
402. circ(√(x² + y²)) ⟷ J1(2π√(ξx² + ξy²))/√(ξx² + ξy²). The function circ(r) is defined by circ(r) = 1 for 0 ≤ r ≤ 1, and is 0 otherwise. This is the Airy distribution, and is expressed using J1 (the order-1 Bessel function of the first kind). (Stein & Weiss 1971, Thm. IV.3.3)

Formulas for general n-dimensional functions


Rules below are stated in the unitary, ordinary-frequency convention.

500. Definition: f̂(ξ) = ∫_{Rⁿ} f(x) e^{−2πi x·ξ} dx.
501. (1 − |x|²)^δ χ_{[0,1]}(|x|) ⟷ π^{−δ} Γ(δ + 1) |ξ|^{−n/2−δ} J_{n/2+δ}(2π|ξ|). The function χ_{[0,1]} is the indicator function of the interval [0, 1]. The function Γ(x) is the gamma function. The function J_{n/2+δ} is a Bessel function of the first kind, with order n/2 + δ. Taking n = 2 and δ = 0 produces 402. (Stein & Weiss 1971, Thm. 4.15)
502. |x|^{−α} ⟷ π^{α−n/2} [Γ((n − α)/2)/Γ(α/2)] |ξ|^{α−n}, for 0 < Re α < n. See Riesz potential. The formula also holds for other values of α by analytic continuation, but then the function and its Fourier transform need to be understood as suitably regularized tempered distributions. See homogeneous distribution.
503. (2π)^{−n/2} (det Σ)^{−1/2} e^{−x·Σ⁻¹x/2} ⟷ e^{−2π² ξ·Σξ}. This is the formula for a multivariate normal distribution normalized to 1 with a mean of 0. Bold variables are vectors or matrices; here Σ denotes the covariance matrix.

References
Boashash, B., ed. (2003), Time-Frequency Signal Analysis and Processing: A Comprehensive Reference, Oxford: Elsevier Science, ISBN 0-08-044335-4
Bochner, S.; Chandrasekharan, K. (1949), Fourier Transforms, Princeton University Press
Bracewell, R. N. (2000), The Fourier Transform and Its Applications (3rd ed.), Boston: McGraw-Hill, ISBN 0-07-116043-4
Campbell, George; Foster, Ronald (1948), Fourier Integrals for Practical Applications, New York: D. Van Nostrand Company, Inc.
Condon, E. U. (1937), "Immersion of the Fourier transform in a continuous group of functional transformations", Proc. Nat. Acad. Sci. USA 23: 158–164
Duoandikoetxea, Javier (2001), Fourier Analysis, American Mathematical Society, ISBN 0-8218-2172-5
Dym, H.; McKean, H. (1985), Fourier Series and Integrals, Academic Press, ISBN 978-0-12-226451-1
Erdélyi, Arthur, ed. (1954), Tables of Integral Transforms 1, New York: McGraw-Hill
Fourier, J. B. Joseph (1822), Théorie Analytique de la Chaleur [1], Paris: Chez Firmin Didot, père et fils
Fourier, J. B. Joseph; Freeman, Alexander, translator (1878), The Analytical Theory of Heat [2], The University Press
Grafakos, Loukas (2004), Classical and Modern Fourier Analysis, Prentice-Hall, ISBN 0-13-035399-X
Grafakos, Loukas; Teschl, Gerald (2013), "On Fourier transforms of radial functions and distributions", J. Fourier Anal. Appl. 19: 167–179, doi:10.1007/s00041-012-9242-5 [3]
Hewitt, Edwin; Ross, Kenneth A. (1970), Abstract Harmonic Analysis. Vol. II: Structure and Analysis for Compact Groups. Analysis on Locally Compact Abelian Groups, Die Grundlehren der mathematischen Wissenschaften, Band 152, Berlin, New York: Springer-Verlag, MR 0262773 [4]
Hörmander, L. (1976), Linear Partial Differential Operators, Volume 1, Springer-Verlag, ISBN 978-3-540-00662-6
James, J. F. (2011), A Student's Guide to Fourier Transforms (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-17683-5
Kaiser, Gerald (1994), A Friendly Guide to Wavelets [5], Birkhäuser, ISBN 0-8176-3711-7
Kammler, David (2000), A First Course in Fourier Analysis, Prentice Hall, ISBN 0-13-578782-3
Katznelson, Yitzhak (1976), An Introduction to Harmonic Analysis, Dover, ISBN 0-486-63331-4
Knapp, Anthony W. (2001), Representation Theory of Semisimple Groups: An Overview Based on Examples [6], Princeton University Press, ISBN 978-0-691-09089-4
Pinsky, Mark (2002), Introduction to Fourier Analysis and Wavelets [7], Brooks/Cole, ISBN 0-534-37660-6
Polyanin, A. D.; Manzhirov, A. V. (1998), Handbook of Integral Equations, Boca Raton: CRC Press, ISBN 0-8493-2876-4
Rahman, Matiur (2011), Applications of Fourier Transforms to Generalized Functions [8], WIT Press, ISBN 1845645642
Rudin, Walter (1987), Real and Complex Analysis (3rd ed.), Singapore: McGraw-Hill, ISBN 0-07-100276-6
Stein, Elias; Shakarchi, Rami (2003), Fourier Analysis: An Introduction [9], Princeton University Press, ISBN 0-691-11384-X
Stein, Elias; Weiss, Guido (1971), Introduction to Fourier Analysis on Euclidean Spaces [10], Princeton, N.J.: Princeton University Press, ISBN 978-0-691-08078-9
Taneja, H. C. (2008), "Chapter 18: Fourier integrals and Fourier transforms" [11], Advanced Engineering Mathematics, Volume 2, New Delhi, India: I. K. International Pvt Ltd, ISBN 8189866567
Titchmarsh, E. (1948), Introduction to the Theory of Fourier Integrals (2nd ed.), Oxford University: Clarendon Press (published 1986), ISBN 978-0-8284-0324-5
Wilson, R. G. (1995), Fourier Series and Optical Transform Techniques in Contemporary Optics, New York: Wiley, ISBN 0-471-30357-7
Yosida, K. (1968), Functional Analysis, Springer-Verlag, ISBN 3-540-58654-7


External links
The Discrete Fourier Transformation (DFT): Definition and numerical examples [12] - a Matlab tutorial
The Fourier Transform Tutorial Site [13] (thefouriertransform.com)
Fourier Series Applet [14] (Tip: drag magnitude or phase dots up or down to change the wave form.)
Stephan Bernsee's FFTlab [15] (Java Applet)
Stanford Video Course on the Fourier Transform [16]
Hazewinkel, Michiel, ed. (2001), "Fourier transform" [17], Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
Weisstein, Eric W., "Fourier Transform" [18], MathWorld
The DFT "à Pied": Mastering The Fourier Transform in One Day [19] at The DSP Dimension
An Interactive Flash Tutorial for the Fourier Transform [20]
Java Library for DFT [21]

References
[1] http://books.google.com/books?id=TDQJAAAAIAAJ&pg=PA525&dq=%22c%27est-%C3%A0-dire+qu%27on+a+l%27%C3%A9quation%22&hl=en&sa=X&ei=SrC7T9yKBorYiALVnc2oDg&sqi=2&ved=0CEAQ6AEwAg#v=onepage&q=%22c%27est-%C3%A0-dire%20qu%27on%20a%20l%27%C3%A9quation%22&f=false
[2] http://books.google.com/books?id=-N8EAAAAYAAJ&pg=PA408&dq=%22that+is+to+say,+that+we+have+the+equation%22&hl=en&sa=X&ei=F667T-u5I4WeiALEwpHXDQ&ved=0CDgQ6AEwAA#v=onepage&q=%22that%20is%20to%20say%2C%20that%20we%20have%20the%20equation%22&f=false
[3] http://dx.doi.org/10.1007%2Fs00041-012-9242-5
[4] http://www.ams.org/mathscinet-getitem?mr=0262773
[5] http://books.google.com/books?id=rfRnrhJwoloC&pg=PA29&dq=%22becomes+the+Fourier+%28integral%29+transform%22&hl=en&sa=X&ei=osO7T7eFOqqliQK3goXoDQ&ved=0CDQQ6AEwAA#v=onepage&q=%22becomes%20the%20Fourier%20%28integral%29%20transform%22&f=false
[6] http://books.google.com/?id=QCcW1h835pwC
[7] http://books.google.com/books?id=tlLE4KUkk1gC&pg=PA256&dq=%22The+Fourier+transform+of+the+measure%22&hl=en&sa=X&ei=w8e7T43XJsiPiAKZztnRDQ&ved=0CEUQ6AEwAg#v=onepage&q=%22The%20Fourier%20transform%20of%20the%20measure%22&f=false
[8] http://books.google.com/books?id=k_rdcKaUdr4C&pg=PA10
[9] http://books.google.com/books?id=FAOc24bTfGkC&pg=PA158&dq=%22The+mathematical+thrust+of+the+principle%22&hl=en&sa=X&ei=Esa7T5PZIsqriQKluNjPDQ&ved=0CDQQ6AEwAA#v=onepage&q=%22The%20mathematical%20thrust%20of%20the%20principle%22&f=false
[10] http://books.google.com/books?id=YUCV678MNAIC&dq=editions:xbArf-TFDSEC&source=gbs_navlinks_s
[11] http://books.google.com/books?id=X-RFRHxMzvYC&pg=PA192&dq=%22The+Fourier+integral+can+be+regarded+as+an+extension+of+the+concept+of+Fourier+series%22&hl=en&sa=X&ei=D4rDT_vdCueQiAKF6PWeCA&ved=0CDQQ6AEwAA#v=onepage&q=%22The%20Fourier%20integral%20can%20be%20regarded%20as%20an%20extension%20of%20the%20concept%20of%20Fourier%20series%22&f=false
[12] http://www.nbtwiki.net/doku.php?id=tutorial:the_discrete_fourier_transformation_dft
[13] http://www.thefouriertransform.com
[14] http://www.westga.edu/~jhasbun/osp/Fourier.htm
[15] http://www.dspdimension.com/fftlab/
[16] http://www.academicearth.org/courses/the-fourier-transform-and-its-applications
[17] http://www.encyclopediaofmath.org/index.php?title=p/f041150
[18] http://mathworld.wolfram.com/FourierTransform.html
[19] http://www.dspdimension.com/admin/dft-a-pied/

[20] http://www.fourier-series.com/f-transform/index.html
[21] http://www.patternizando.com.br/2013/05/transformadas-discretas-wavelet-e-fourier-em-java/


Convolution
In mathematics and, in particular, functional analysis, convolution is a mathematical operation on two functions f and g, producing a third function that is typically viewed as a modified version of one of the original functions, giving the area overlap between the two functions as a function of the amount that one of the original functions is translated. Convolution is similar to cross-correlation. It has applications that include probability, statistics, computer vision, image and signal processing, electrical engineering, and differential equations.

Visual comparison of convolution, cross-correlation and autocorrelation.

The convolution can be defined for functions on groups other than Euclidean space. For example, periodic functions, such as the discrete-time Fourier transform, can be defined on a circle and convolved by periodic convolution. (See row 10 at DTFT#Properties.) And discrete convolution can be defined for functions on the set of integers. Generalizations of convolution have applications in the field of numerical analysis and numerical linear algebra, and in the design and implementation of finite impulse response filters in signal processing. Computing the inverse of the convolution operation is known as deconvolution.

Definition
The convolution of f and g is written f∗g, using an asterisk or star. It is defined as the integral of the product of the two functions after one is reversed and shifted. As such, it is a particular kind of integral transform:

(f ∗ g)(t) = ∫_{−∞}^{∞} f(τ) g(t − τ) dτ = ∫_{−∞}^{∞} f(t − τ) g(τ) dτ   (commutativity)

While the symbol t is used above, it need not represent the time domain. But in that context, the convolution formula can be described as a weighted average of the function f(τ) at the moment t, where the weighting is given by g(−τ) simply shifted by amount t. As t changes, the weighting function emphasizes different parts of the input function.

For functions f, g supported on [0, ∞) only, the integration domain is finite and the convolution is given by

(f ∗ g)(t) = ∫_{0}^{t} f(τ) g(t − τ) dτ.

In this case, the Laplace transform is more appropriate than the Fourier transform below, and boundary terms become relevant. For the multi-dimensional formulation of convolution, see Domain of definition (below).
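The integral can be approximated on a grid; convolving two unit boxes on [0, 1] should yield a triangle peaking near t = 1. A minimal sketch (the step size is an illustrative assumption):

```python
import numpy as np

# Sketch: Riemann-sum approximation of the continuous convolution (f*g)(t).
# Two unit boxes on [0, 1] convolve to a triangle with peak ~1 at t ~ 1.
dt = 0.001
t = np.arange(0, 1 + dt, dt)
f = np.ones_like(t)              # box on [0, 1]
g = np.ones_like(t)

conv = np.convolve(f, g) * dt    # scale by dt to approximate the integral
t_conv = np.arange(len(conv)) * dt

peak = conv.max()
print(peak, t_conv[np.argmax(conv)])   # peak close to 1, located near t = 1
```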


Derivations
Convolution describes the output (in terms of the input) of an important class of operations known as linear time-invariant (LTI). See LTI system theory for a derivation of convolution as the result of LTI constraints. In terms of the Fourier transforms of the input and output of an LTI operation, no new frequency components are created. The existing ones are only modified (amplitude and/or phase). In other words, the output transform is the pointwise product of the input transform with a third transform (known as a transfer function). See Convolution theorem for a derivation of that property of convolution. Conversely, convolution can be derived as the inverse Fourier transform of the pointwise product of two Fourier transforms.
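The convolution theorem mentioned above can be illustrated for the DFT: the FFT of a circular convolution equals the pointwise product of the FFTs. A minimal sketch (sequence length and random inputs are illustrative assumptions):

```python
import numpy as np

# Sketch: DFT convolution theorem. The circular convolution of f and g,
# computed directly from its definition, matches the inverse FFT of the
# pointwise product of their spectra.
rng = np.random.default_rng(0)
N = 64
f = rng.standard_normal(N)
g = rng.standard_normal(N)

# Direct circular convolution: sum over m of f[m] * g[(n - m) mod N].
circ = np.array([sum(f[m] * g[(n - m) % N] for m in range(N)) for n in range(N)])

# Same result via the frequency domain.
via_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real

print(np.allclose(circ, via_fft))   # True
```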
Visual explanations of convolution:
1. Express each function in terms of a dummy variable τ.
2. Reflect one of the functions: g(τ) → g(−τ).
3. Add a time-offset, t, which allows g(t − τ) to slide along the τ-axis.
4. Start t at −∞ and slide it all the way to +∞. Wherever the two functions intersect, find the integral of their product. In other words, compute a sliding, weighted average of function f(τ), where the weighting function is g(−τ).

The resulting waveform (not shown here) is the convolution of functions f and g. If f(t) is a unit impulse, the result of this process is simply g(t), which is therefore called the impulse response.

In this example, the red-colored "pulse" is a symmetrical function, and so convolution is equivalent to correlation. A snapshot of this "movie" shows the functions g(t − τ) and f(τ) (in blue) for some value of parameter t, which is arbitrarily defined as the distance from the τ-axis origin to the center of the red pulse. The amount of yellow is the area of the product f(τ)·g(t − τ), computed by the convolution/correlation integral. The movie is created by continuously changing t and recomputing the integral. The result (shown in black) is a function of t, but is plotted on the same axis as τ, for convenience and comparison. In this depiction, f(τ) could represent the response of an RC circuit to a narrow pulse that occurs at τ = 0. In other words, if g is an impulse, the result of convolution is just f. But when g is the wider pulse (in red), the response is a "smeared" version of f. It begins before t = 0, because we defined t as the distance from the τ-axis origin to the center of the wide pulse (instead of the leading edge).


Historical developments
According to Origin and history of convolution,[1] "Probably one of the first occurrences of the real convolution integral took place in the year 1754 when the mathematician Jean-le-Rond D'Alembert derived Taylor's expansion theorem on page 50 of Volume 1 of his book Recherches sur différents points importants du système du monde." Also, an expression of the type:

∫ f(u) · g(x − u) du

is used by Sylvestre François Lacroix on page 505 of his book entitled Treatise on Differences and Series, which is the last of 3 volumes of the encyclopedic series: Traité du calcul différentiel et du calcul intégral, Chez Courcier, Paris, 1797–1800.[2] Soon thereafter, convolution operations appear in the works of Pierre Simon Laplace, Jean Baptiste Joseph Fourier, Siméon Denis Poisson, and others. The term itself did not come into wide use until the 1950s or 60s. Prior to that it was sometimes known as Faltung (which means folding in German), composition product, superposition integral, and Carson's integral. Yet it appears as early as 1903, though the definition is rather unfamiliar in older uses.

The operation:

∫_0^t φ(s) ψ(t − s) ds,   0 ≤ t < ∞,

is a particular case of composition products considered by the Italian mathematician Vito Volterra in 1913.[3]

Circular convolution
When a function gT is periodic, with period T, then for functions, f, such that f∗gT exists, the convolution is also periodic and identical to:

(f ∗ gT)(t) ≡ ∫_{t0}^{t0+T} [ Σ_{k=−∞}^{∞} f(τ + kT) ] gT(t − τ) dτ,

where t0 is an arbitrary choice. The summation is called a periodic summation of the function f. When gT is a periodic summation of another function, g, then f∗gT is known as a circular or cyclic convolution of f and g. And if the periodic summation above is replaced by fT, the operation is called a periodic convolution of fT and gT.

Discrete convolution
For complex-valued functions f, g defined on the set Z of integers, the discrete convolution of f and g is given by:

(f ∗ g)[n] = Σ_{m=−∞}^{∞} f[m] g[n − m] = Σ_{m=−∞}^{∞} f[n − m] g[m]   (commutativity)

The convolution of two finite sequences is defined by extending the sequences to finitely supported functions on the set of integers. When the sequences are the coefficients of two polynomials, then the coefficients of the ordinary product of the two polynomials are the convolution of the original two sequences. This is known as the Cauchy product of the coefficients of the sequences. Thus when g has finite support in the set {−M, −M + 1, ..., M − 1, M} (representing, for instance, a finite impulse response), a finite summation may be used:

(f ∗ g)[n] = Σ_{m=−M}^{M} f[n − m] g[m].
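The Cauchy-product connection can be seen directly: convolving two coefficient sequences multiplies the polynomials they represent. A minimal sketch:

```python
import numpy as np

# Sketch: discrete convolution of coefficient sequences is polynomial
# multiplication (the Cauchy product):
# (1 + 2x + 3x^2) * (4 + 5x) = 4 + 13x + 22x^2 + 15x^3
p = [1, 2, 3]
q = [4, 5]
product = np.convolve(p, q)
print(product.tolist())   # [4, 13, 22, 15]
```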


Circular discrete convolution


When a function gN is periodic, with period N, then for functions, f, such that f∗gN exists, the convolution is also periodic and identical to:

(f ∗ gN)[n] ≡ Σ_{m=0}^{N−1} ( Σ_{k=−∞}^{∞} f[m + kN] ) gN[n − m].

The summation on k is called a periodic summation of the function f. If gN is a periodic summation of another function, g, then f∗gN is known as a circular convolution of f and g. When the non-zero durations of both f and g are limited to the interval [0, N − 1], f∗gN reduces to these common forms:

(f ∗N g)[n] = Σ_{m=0}^{N−1} f[m] gN[n − m] = Σ_{m=0}^{N−1} f[m] g[(n − m) mod N]     (Eq. 1)

The notation (f ∗N g) for cyclic convolution denotes convolution over the cyclic group of integers modulo N. Circular convolution arises most often in the context of fast convolution with an FFT algorithm.

Fast convolution algorithms


In many situations, discrete convolutions can be converted to circular convolutions so that fast transforms with a convolution property can be used to implement the computation. For example, convolution of digit sequences is the kernel operation in multiplication of multi-digit numbers, which can therefore be efficiently implemented with transform techniques (Knuth 1997, §4.3.3.C; von zur Gathen & Gerhard 2003, §8.2). Eq. 1 requires N arithmetic operations per output value and N² operations for N outputs. That can be significantly reduced with any of several fast algorithms. Digital signal processing and other applications typically use fast convolution algorithms to reduce the cost of the convolution to O(N log N) complexity. The most common fast convolution algorithms use fast Fourier transform (FFT) algorithms via the circular convolution theorem. Specifically, the circular convolution of two finite-length sequences is found by taking an FFT of each sequence, multiplying pointwise, and then performing an inverse FFT. Convolutions of the type defined above are then efficiently implemented using that technique in conjunction with zero-extension and/or discarding portions of the output. Other fast convolution algorithms, such as the Schönhage–Strassen algorithm or the Mersenne transform, use fast Fourier transforms in other rings. If one sequence is much longer than the other, zero-extension of the shorter sequence and fast circular convolution is not the most computationally efficient method available. Instead, decomposing the longer sequence into blocks and convolving each block allows for faster algorithms such as the overlap–save method and overlap–add method. A hybrid convolution method that combines block and FIR algorithms allows for a zero input-output latency that is useful for real-time convolution computations.
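The zero-extension trick described above can be sketched directly: padding both sequences to the full linear-convolution length makes the circular convolution coincide with the linear one.

```python
import numpy as np

# Sketch: fast linear convolution via FFT. Zero-pad both sequences to
# length >= len(f) + len(g) - 1, multiply spectra pointwise, and invert;
# the circular convolution then equals the linear convolution.
f = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.5, -1.0, 2.0])

L = len(f) + len(g) - 1                     # full linear-convolution length
fast = np.fft.irfft(np.fft.rfft(f, L) * np.fft.rfft(g, L), L)
direct = np.convolve(f, g)                  # O(N^2) reference

print(np.allclose(fast, direct))   # True
```

For long signals, production code would typically use a block method (overlap–add or overlap–save) on top of this idea.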


Domain of definition
The convolution of two complex-valued functions on Rd, defined by:

(f ∗ g)(x) = ∫_{Rd} f(y) g(x − y) dy,

is well-defined only if f and g decay sufficiently rapidly at infinity in order for the integral to exist. Conditions for the existence of the convolution may be tricky, since a blow-up in g at infinity can be easily offset by sufficiently rapid decay in f. The question of existence thus may involve different conditions on f and g:

Compactly supported functions


If f and g are compactly supported continuous functions, then their convolution exists, and is also compactly supported and continuous (Hörmander 1983, Chapter 1). More generally, if either function (say f) is compactly supported and the other is locally integrable, then the convolution f∗g is well-defined and continuous. Convolution of f and g is also well defined when both functions are locally square integrable on R and supported on an interval of the form [a, +∞) (or both supported on (−∞, a]).

Integrable functions
The convolution of f and g exists if f and g are both Lebesgue integrable functions in L1(Rd), and in this case f ∗ g is also integrable (Stein & Weiss 1971, Theorem 1.3). This is a consequence of Tonelli's theorem. This is also true for functions in ℓ1 under the discrete convolution, or more generally for the convolution on any group. Likewise, if f ∈ L1(Rd) and g ∈ Lp(Rd) where 1 ≤ p ≤ ∞, then f ∗ g ∈ Lp(Rd) and

‖f ∗ g‖p ≤ ‖f‖1 ‖g‖p.

In the particular case p = 1, this shows that L1 is a Banach algebra under the convolution (and equality of the two sides holds if f and g are non-negative almost everywhere). More generally, Young's inequality implies that the convolution is a continuous bilinear map between suitable Lp spaces. Specifically, if 1 ≤ p, q, r ≤ ∞ satisfy

1/p + 1/q = 1/r + 1,

then

‖f ∗ g‖r ≤ ‖f‖p ‖g‖q,

so that the convolution is a continuous bilinear mapping from Lp × Lq to Lr. The Young inequality for convolution is also true in other contexts (circle group, convolution on Z). The preceding inequality is not sharp on the real line: when 1 < p, q, r < ∞, there exists a constant Bp,q < 1 such that the Lr(R) norm of f ∗ g is bounded by Bp,q times the product ‖f‖p ‖g‖q. The optimal value of Bp,q was discovered in 1975.[4] A stronger estimate is true provided 1 < p, q, r < ∞:

‖f ∗ g‖r ≤ Cp,q ‖f‖p ‖g‖q,w,

where ‖g‖q,w is the weak Lq norm. Convolution also defines a bilinear continuous map from the corresponding weak Lp spaces, owing to the weak Young inequality.

Functions of rapid decay


In addition to compactly supported functions and integrable functions, functions that have sufficiently rapid decay at infinity can also be convolved. An important feature of the convolution is that if f and g both decay rapidly, then f ∗ g also decays rapidly. In particular, if f and g are rapidly decreasing functions, then so is the convolution f ∗ g. Combined with the fact that convolution commutes with differentiation (see Properties), it follows that the class of Schwartz functions is closed under convolution (Stein & Weiss 1971, Theorem 3.3).

Distributions
Under some circumstances, it is possible to define the convolution of a function with a distribution, or of two distributions. If f is a compactly supported function and g is a distribution, then f ∗ g is a smooth function defined by a distributional formula analogous to

More generally, it is possible to extend the definition of the convolution in a unique way so that the associative law

f ∗ (g ∗ h) = (f ∗ g) ∗ h

remains valid in the case where f is a distribution, and g a compactly supported distribution (Hörmander 1983, §4.2).

Measures
The convolution of any two Borel measures μ and ν of bounded variation is the measure μ ∗ ν defined by (Rudin 1962)

(μ ∗ ν)(E) = ∫∫ 1E(x + y) dμ(x) dν(y).

This agrees with the convolution defined above when μ and ν are regarded as distributions, as well as with the convolution of L1 functions when μ and ν are absolutely continuous with respect to the Lebesgue measure. The convolution of measures also satisfies the following version of Young's inequality:

‖μ ∗ ν‖ ≤ ‖μ‖ ‖ν‖,

where the norm is the total variation of a measure. Because the space of measures of bounded variation is a Banach space, convolution of measures can be treated with standard methods of functional analysis that may not apply for the convolution of distributions.

Properties
Algebraic properties
The convolution defines a product on the linear space of integrable functions. This product satisfies the following algebraic properties, which formally mean that the space of integrable functions with the product given by convolution is a commutative algebra without identity (Strichartz 1994, §3.3). Other linear spaces of functions, such as the space of continuous functions of compact support, are closed under the convolution, and so also form commutative algebras.

Commutativity

f ∗ g = g ∗ f

Associativity

f ∗ (g ∗ h) = (f ∗ g) ∗ h

Distributivity

f ∗ (g + h) = f ∗ g + f ∗ h
Associativity with scalar multiplication

a (f ∗ g) = (a f) ∗ g

for any real (or complex) number a.

Multiplicative identity

No algebra of functions possesses an identity for the convolution. The lack of identity is typically not a major inconvenience, since most collections of functions on which the convolution is performed can be convolved with a delta distribution or, at the very least (as is the case of L1), admit approximations to the identity. The linear space of compactly supported distributions does, however, admit an identity under the convolution. Specifically,

f ∗ δ = f,

where δ is the delta distribution.

Inverse element

Some distributions have an inverse element for the convolution, S(−1), which is defined by

S(−1) ∗ S = δ.

The set of invertible distributions forms an abelian group under the convolution.

Complex conjugation

The pointwise complex conjugate of a convolution is the convolution of the conjugates: conjugating f ∗ g is the same as convolving the conjugate of f with the conjugate of g.

Integration
If f and g are integrable functions, then the integral of their convolution on the whole space is simply obtained as the product of their integrals:

∫Rd (f ∗ g)(x) dx = ( ∫Rd f(x) dx ) ( ∫Rd g(x) dx ).

This follows from Fubini's theorem. The same result holds if f and g are only assumed to be nonnegative measurable functions, by Tonelli's theorem.
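A discrete analogue of this identity is easy to check numerically (illustrative Python; the particular sequences are arbitrary):

```python
def convolve(f, g):
    # Discrete convolution: (f * g)[n] = sum_k f[k] g[n - k].
    out = [0.0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

f = [0.5, 1.5, 2.0]
g = [3.0, -1.0, 0.25]
# Discrete analogue of  integral(f*g) = integral(f) * integral(g):
# the sum of the convolution equals the product of the sums.
assert abs(sum(convolve(f, g)) - sum(f) * sum(g)) < 1e-12
```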

Differentiation
In the one-variable case,

d/dx (f ∗ g) = (df/dx) ∗ g = f ∗ (dg/dx),

where d/dx is the derivative. More generally, in the case of functions of several variables, an analogous formula holds with the partial derivative:

∂/∂xi (f ∗ g) = (∂f/∂xi) ∗ g = f ∗ (∂g/∂xi).

A particular consequence of this is that the convolution can be viewed as a "smoothing" operation: the convolution of f and g is differentiable as many times as f and g are in total. These identities hold under the precise condition that f and g are absolutely integrable and at least one of them has an absolutely integrable (L1) weak derivative, as a consequence of Young's inequality. For instance, when f is continuously differentiable with compact support, and g is an arbitrary locally integrable function,

d/dx (f ∗ g) = (df/dx) ∗ g.

These identities also hold much more broadly in the sense of tempered distributions if one of f or g is a compactly supported distribution or a Schwartz function and the other is a tempered distribution. On the other hand, two positive integrable and infinitely differentiable functions may have a nowhere continuous convolution.

In the discrete case, the difference operator Df(n) = f(n + 1) − f(n) satisfies an analogous relationship:

D(f ∗ g) = (Df) ∗ g = f ∗ (Dg).
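This discrete identity can be verified directly (a Python sketch; sequences are represented as dicts from index to value, a choice made here only to keep the shifted supports explicit):

```python
def D(f):
    # Forward difference Df(n) = f(n+1) - f(n) on a finitely supported
    # sequence, represented as a dict {index: value} (zero elsewhere).
    keys = set(f) | {k - 1 for k in f}
    return {n: f.get(n + 1, 0) - f.get(n, 0) for n in keys}

def conv(f, g):
    # Discrete convolution of two finitely supported sequences.
    out = {}
    for i, a in f.items():
        for j, b in g.items():
            out[i + j] = out.get(i + j, 0) + a * b
    return out

f = {0: 1, 1: 3, 2: 2}
g = {0: 4, 1: -1}
# Differencing commutes with convolution: D(f * g) = (Df) * g.
assert D(conv(f, g)) == conv(D(f), g)
```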

Convolution theorem
The convolution theorem states that

F(f ∗ g) = k · F(f) F(g),

where F(f) denotes the Fourier transform of f, and k is a constant that depends on the specific normalization of the Fourier transform (see Properties of the Fourier transform). Versions of this theorem also hold for the Laplace transform, two-sided Laplace transform, Z-transform and Mellin transform. See also the less trivial Titchmarsh convolution theorem.

Translation invariance
The convolution commutes with translations, meaning that

τx(f ∗ g) = (τxf) ∗ g = f ∗ (τxg),

where τxf is the translation of the function f by x, defined by (τxf)(y) = f(y − x). If f is a Schwartz function, then τxf is the convolution with a translated Dirac delta function, τxf = f ∗ τxδ. So translation invariance of the convolution of Schwartz functions is a consequence of the associativity of convolution. Furthermore, under certain conditions, convolution is the most general translation invariant operation. Informally speaking, the following holds: Suppose that S is a linear operator acting on functions which commutes with translations: S(τxf) = τx(Sf) for all x. Then S is given as convolution with a function (or distribution) gS; that is, Sf = gS ∗ f. Thus any translation invariant operation can be represented as a convolution. Convolutions play an important role in the study of time-invariant systems, and especially LTI system theory. The representing function gS is the impulse response of the transformation S. A more precise version of the theorem quoted above requires specifying the class of functions on which the convolution is defined, and also requires assuming in addition that S must be a continuous linear operator with respect to the appropriate topology. It is known, for instance, that every continuous translation invariant continuous linear operator on L1 is the convolution with a finite Borel measure. More generally, every continuous translation invariant continuous linear operator on Lp for 1 ≤ p < ∞ is the convolution with a tempered distribution whose Fourier transform is bounded. To wit, they are all given by bounded Fourier multipliers.

Convolutions on groups
If G is a suitable group endowed with a measure λ, and if f and g are real or complex valued integrable functions on G, then we can define their convolution by

(f ∗ g)(x) = ∫G f(y) g(y⁻¹x) dλ(y).

In typical cases of interest G is a locally compact Hausdorff topological group and λ is a (left-) Haar measure. In that case, unless G is unimodular, the convolution defined in this way is not the same as ∫ f(x y⁻¹) g(y) dλ(y). The preference of one over the other is made so that convolution with a fixed function g commutes with left translation in the group:

Lh(f ∗ g) = (Lhf) ∗ g.
Furthermore, the convention is also required for consistency with the definition of the convolution of measures given below. However, with a right instead of a left Haar measure, the latter integral is preferred over the former. On locally compact abelian groups, a version of the convolution theorem holds: the Fourier transform of a convolution is the pointwise product of the Fourier transforms. The circle group T with the Lebesgue measure is an immediate example. For a fixed g in L1(T), we have the following familiar operator acting on the Hilbert space L2(T):

(Tf)(x) = (1/2π) ∫T f(y) g(x − y) dy.

The operator T is compact. A direct calculation shows that its adjoint T* is convolution with the reflected complex conjugate of g. By the commutativity property cited above, T is normal: T*T = TT*. Also, T commutes with the translation operators. Consider the family S of operators consisting of all such convolutions and the translation operators. Then S is a commuting family of normal operators. According to spectral theory, there exists an orthonormal basis {hk} that simultaneously diagonalizes S. This characterizes convolutions on the circle. Specifically, we have

hk(x) = e^(ikx),  k ∈ Z,

which are precisely the characters of T. Each convolution is a compact multiplication operator in this basis. This can be viewed as a version of the convolution theorem discussed above. A discrete example is a finite cyclic group of order n. Convolution operators are here represented by circulant matrices, and can be diagonalized by the discrete Fourier transform. A similar result holds for compact groups (not necessarily abelian): the matrix coefficients of finite-dimensional unitary representations form an orthonormal basis in L2 by the Peter–Weyl theorem, and an analog of the convolution theorem continues to hold, along with many other aspects of harmonic analysis that depend on the Fourier transform.
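The finite cyclic case can be checked concretely (illustrative Python; the naive O(n²) DFT is used only to keep the sketch self-contained):

```python
import cmath

def dft(a):
    # Naive discrete Fourier transform (O(n^2), fine for a demonstration).
    n = len(a)
    return [sum(a[j] * cmath.exp(-2j * cmath.pi * j * k / n) for j in range(n))
            for k in range(n)]

def circulant_apply(c, x):
    # Multiply the circulant matrix whose first column is c by the vector x;
    # this is exactly the circular convolution of c and x.
    n = len(c)
    return [sum(c[(i - j) % n] * x[j] for j in range(n)) for i in range(n)]

c = [2.0, 1.0, 0.0, 1.0]
x = [1.0, 2.0, 3.0, 4.0]
# The DFT diagonalizes every circulant matrix: the DFT of C x equals the
# componentwise product of the DFTs of c and x.
lhs = dft(circulant_apply(c, x))
rhs = [a * b for a, b in zip(dft(c), dft(x))]
assert all(abs(p - q) < 1e-9 for p, q in zip(lhs, rhs))
```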

Convolution of measures
Let G be a topological group. If μ and ν are finite Borel measures on G, then their convolution μ ∗ ν is defined by

(μ ∗ ν)(E) = ∫∫ 1E(xy) dμ(x) dν(y)

for each measurable subset E of G. The convolution μ ∗ ν is also a finite measure, whose total variation satisfies

‖μ ∗ ν‖ ≤ ‖μ‖ ‖ν‖.

In the case when G is locally compact with (left-)Haar measure λ, and μ and ν are absolutely continuous with respect to λ, so that each has a density function, then the convolution μ ∗ ν is also absolutely continuous, and its density function is just the convolution of the two separate density functions. If μ and ν are probability measures on the topological group (R,+), then the convolution μ ∗ ν is the probability distribution of the sum X + Y of two independent random variables X and Y whose respective distributions are μ and ν.
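For discrete probability measures, this last statement can be verified by brute force (illustrative Python; the two fair dice are an arbitrary choice):

```python
from itertools import product

def pmf_convolve(mu, nu):
    # Convolution of two discrete probability measures on the integers.
    out = {}
    for x, p in mu.items():
        for y, q in nu.items():
            out[x + y] = out.get(x + y, 0.0) + p * q
    return out

die = {k: 1 / 6 for k in range(1, 7)}        # PMF of one fair die
total = pmf_convolve(die, die)               # law of the sum of two rolls

# Direct enumeration of X + Y for independent X, Y agrees with the convolution.
direct = {}
for x, y in product(die, die):
    direct[x + y] = direct.get(x + y, 0.0) + die[x] * die[y]
assert all(abs(total[s] - direct[s]) < 1e-12 for s in direct)
assert abs(total[7] - 6 / 36) < 1e-12        # 7 is the most likely sum
```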


Bialgebras
Let (X, Δ, ∇, ε, η) be a bialgebra with comultiplication Δ, multiplication ∇, unit η, and counit ε. The convolution is a product defined on the endomorphism algebra End(X) as follows. Let φ, ψ ∈ End(X), that is, φ, ψ : X → X are functions that respect all algebraic structure of X; then the convolution φ ∗ ψ is defined as the composition

X →(Δ) X ⊗ X →(φ⊗ψ) X ⊗ X →(∇) X.

The convolution appears notably in the definition of Hopf algebras (Kassel 1995, §III.3). A bialgebra is a Hopf algebra if and only if it has an antipode: an endomorphism S such that

S ∗ idX = idX ∗ S = η ∘ ε.
Applications
Convolution and related operations are found in many applications in science, engineering and mathematics.

In image processing: In digital image processing, convolutional filtering plays an important role in many important algorithms in edge detection and related processes. In optics, an out-of-focus photograph is a convolution of the sharp image with a lens function; the photographic term for this is bokeh. In image processing applications such as adding blurring, a Gaussian blur can be used to obtain a smooth grayscale digital image of a halftone print.

In digital data processing: In analytical chemistry, Savitzky–Golay smoothing filters are used for the analysis of spectroscopic data; they can improve signal-to-noise ratio with minimal distortion of the spectra. In statistics, a weighted moving average is a convolution.

In acoustics, reverberation is the convolution of the original sound with echoes from objects surrounding the sound source. In digital signal processing, convolution is used to map the impulse response of a real room onto a digital audio signal. In electronic music, convolution is the imposition of a spectral or rhythmic structure on a sound; often this envelope or structure is taken from another sound. The convolution of two signals is the filtering of one through the other.[5]

In electrical engineering, the convolution of one function (the input signal) with a second function (the impulse response) gives the output of a linear time-invariant (LTI) system. At any given moment, the output is an accumulated effect of all the prior values of the input function, with the most recent values typically having the most influence (expressed as a multiplicative factor). The impulse response function provides that factor as a function of the elapsed time since each input value occurred.

In physics, wherever there is a linear system with a "superposition principle", a convolution operation makes an appearance. For instance, in spectroscopy, line broadening due to the Doppler effect on its own gives a Gaussian spectral line shape and collision broadening alone gives a Lorentzian line shape; when both effects are operative, the line shape is a convolution of a Gaussian and a Lorentzian, a Voigt function.

In time-resolved fluorescence spectroscopy, the excitation signal can be treated as a chain of delta pulses, and the measured fluorescence is a sum of exponential decays from each delta pulse. In computational fluid dynamics, the large eddy simulation (LES) turbulence model uses the convolution operation to lower the range of length scales necessary in computation, thereby reducing computational cost. In probability theory, the probability distribution of the sum of two independent random variables is the convolution of their individual distributions. In kernel density estimation, a distribution is estimated from sample points by convolution with a kernel, such as an isotropic Gaussian (Diggle 1995). In radiotherapy treatment planning systems, most modern calculation codes apply a convolution-superposition algorithm.
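The LTI-system viewpoint from the electrical-engineering application above can be illustrated with a tiny FIR filter (a Python sketch; the 3-point weighted moving average is an arbitrary example kernel):

```python
def convolve(x, h):
    # Output of a discrete LTI system: input x filtered by impulse response h.
    out = [0.0] * (len(x) + len(h) - 1)
    for i, a in enumerate(x):
        for j, b in enumerate(h):
            out[i + j] += a * b
    return out

h = [0.25, 0.5, 0.25]                 # 3-point weighted moving average
x = [0.0, 0.0, 1.0, 0.0, 0.0]         # a delayed unit impulse
y = convolve(x, h)
# Feeding an impulse through the system reproduces the impulse response,
# shifted by the input delay.
assert y[2:5] == [0.25, 0.5, 0.25]
```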


Notes
[1] Dominguez-Torres, p. 2 [2] Dominguez-Torres, p. 4 [3] According to [Lothar von Wolfersdorf (2000), "Einige Klassen quadratischer Integralgleichungen", Sitzungsberichte der Sächsischen Akademie der Wissenschaften zu Leipzig, Mathematisch-naturwissenschaftliche Klasse, volume 128, number 2, 67], the source is Volterra, Vito (1913), "Leçons sur les fonctions de lignes", Gauthier-Villars, Paris 1913. [4] Beckner, William (1975), "Inequalities in Fourier analysis", Ann. of Math. (2) 102: 159–182. Independently, Brascamp, Herm J. and Lieb, Elliott H. (1976), "Best constants in Young's inequality, its converse, and its generalization to more than three functions", Advances in Math. 20: 151–173. See Brascamp–Lieb inequality. [5] Zölzer, Udo, ed. (2002). DAFX: Digital Audio Effects, pp. 48–49. ISBN 0471490784.

References
Bracewell, R. (1986), The Fourier Transform and Its Applications (2nd ed.), McGraw–Hill, ISBN 0-07-116043-4. Damelin, S.; Miller, W. (2011), The Mathematics of Signal Processing, Cambridge University Press, ISBN 978-1107601048. Diggle, P. J., "A kernel method for smoothing point process data", Journal of the Royal Statistical Society, Series C 34: 138–147. Dominguez-Torres, Alejandro (Nov 2, 2010). "Origin and history of convolution". 41 pgs. http://www.slideshare.net/Alexdfar/origin-adn-history-of-convolution. Cranfield, Bedford MK43 OAL, UK. Retrieved Mar 13, 2013. Hewitt, Edwin; Ross, Kenneth A. (1979), Abstract harmonic analysis. Vol. I, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences] 115 (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-09434-0, MR 551496 (http://www.ams.org/mathscinet-getitem?mr=551496). Hewitt, Edwin; Ross, Kenneth A. (1970), Abstract harmonic analysis. Vol. II: Structure and analysis for compact groups. Analysis on locally compact Abelian groups, Die Grundlehren der mathematischen Wissenschaften, Band 152, Berlin, New York: Springer-Verlag, MR 0262773 (http://www.ams.org/mathscinet-getitem?mr=0262773). Hörmander, L. (1983), The analysis of linear partial differential operators I, Grundl. Math. Wissenschaft. 256, Springer, ISBN 3-540-12104-8, MR 0717035 (http://www.ams.org/mathscinet-getitem?mr=0717035). Kassel, Christian (1995), Quantum groups, Graduate Texts in Mathematics 155, Berlin, New York: Springer-Verlag, ISBN 978-0-387-94370-1, MR 1321145 (http://www.ams.org/mathscinet-getitem?mr=1321145).

Knuth, Donald (1997), Seminumerical Algorithms (3rd. ed.), Reading, Massachusetts: Addison–Wesley, ISBN 0-201-89684-2. Reed, Michael; Simon, Barry (1975), Methods of modern mathematical physics. II. Fourier analysis, self-adjointness, New York–London: Academic Press Harcourt Brace Jovanovich, Publishers, pp. xv+361, ISBN 0-12-585002-6, MR 0493420 (http://www.ams.org/mathscinet-getitem?mr=0493420). Rudin, Walter (1962), Fourier analysis on groups, Interscience Tracts in Pure and Applied Mathematics, No. 12, Interscience Publishers (a division of John Wiley and Sons), New York–London, ISBN 0-471-52364-X, MR 0152834 (http://www.ams.org/mathscinet-getitem?mr=0152834). Sobolev, V.I. (2001), "Convolution of functions" (http://www.encyclopediaofmath.org/index.php?title=C/c026430), in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4. Stein, Elias; Weiss, Guido (1971), Introduction to Fourier Analysis on Euclidean Spaces, Princeton University Press, ISBN 0-691-08078-X. Strichartz, R. (1994), A Guide to Distribution Theory and Fourier Transforms, CRC Press, ISBN 0-8493-8273-4. Titchmarsh, E (1948), Introduction to the theory of Fourier integrals (2nd ed.), New York, N.Y.: Chelsea Pub. Co. (published 1986), ISBN 978-0-8284-0324-5. Uludag, A. M. (1998), "On possible deterioration of smoothness under the operation of convolution", J. Math. Anal. Appl. 227 no. 2, 335–358. Treves, François (1967), Topological Vector Spaces, Distributions and Kernels, Academic Press, ISBN 0-486-45352-9. von zur Gathen, J.; Gerhard, J. (2003), Modern Computer Algebra, Cambridge University Press, ISBN 0-521-82646-2.


External links
Earliest Uses: The entry on Convolution has some historical information. (http://jeff560.tripod.com/c.html) Convolution (http://rkb.home.cern.ch/rkb/AN16pp/node38.html#SECTION000380000000000000000), on The Data Analysis BriefBook (http://rkb.home.cern.ch/rkb/titleA.html) http://www.jhu.edu/~signals/convolve/index.html Visual convolution Java Applet http://www.jhu.edu/~signals/discreteconv2/index.html Visual convolution Java Applet for discrete-time functions Lectures on Image Processing: A collection of 18 lectures in pdf format from Vanderbilt University. Lecture 7 is on 2-D convolution. (http://www.archive.org/details/Lectures_on_Image_Processing), by Alan Peters Convolution Kernel Mask Operation Interactive tutorial (http://micro.magnet.fsu.edu/primer/java/digitalimaging/processing/kernelmaskoperation/) Convolution (http://mathworld.wolfram.com/Convolution.html) at MathWorld Freeverb3 Impulse Response Processor (http://freeverb3.sourceforge.net/): Opensource zero latency impulse response processor with VST plugins Stanford University CS 178 interactive Flash demo (http://graphics.stanford.edu/courses/cs178/applets/convolution.html) showing how spatial convolution works. A video lecture on the subject of convolution (http://www.youtube.com/watch?v=IW4Reburjpc) given by Salman Khan A Javascript interactive plot of the convolution with several functions (http://www.onmyphd.com/?p=convolution)


Convolution theorem
In mathematics, the convolution theorem states that under suitable conditions the Fourier transform of a convolution is the pointwise product of Fourier transforms. In other words, convolution in one domain (e.g., time domain) equals point-wise multiplication in the other domain (e.g., frequency domain). Versions of the convolution theorem are true for various Fourier-related transforms. Let f and g be two functions with convolution f ∗ g. (Note that the asterisk denotes convolution in this context, and not multiplication. The tensor product symbol ⊗ is sometimes used instead.) Let F denote the Fourier transform operator, so F(f) and F(g) are the Fourier transforms of f and g, respectively. Then

F(f ∗ g) = F(f) · F(g),

where · denotes point-wise multiplication. It also works the other way around:

F(f · g) = F(f) ∗ F(g).

By applying the inverse Fourier transform F⁻¹, we can write:

f ∗ g = F⁻¹( F(f) · F(g) ).

Note that the relationships above are only valid for the form of the Fourier transform shown in the Proof section below. The transform may be normalised in other ways, in which case constant scaling factors (typically 2π or √(2π)) will appear in the relationships above. This theorem also holds for the Laplace transform, the two-sided Laplace transform and, when suitably modified, for the Mellin transform and Hartley transform (see Mellin inversion theorem). It can be extended to the Fourier transform of abstract harmonic analysis defined over locally compact abelian groups. This formulation is especially useful for implementing a numerical convolution on a computer: the standard convolution algorithm has quadratic computational complexity. With the help of the convolution theorem and the fast Fourier transform, the complexity of the convolution can be reduced to O(n log n). This can be exploited to construct fast multiplication algorithms.
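The fast-multiplication remark can be made concrete: long multiplication is a convolution of digit sequences followed by carrying (illustrative Python using a naive convolution; a real fast multiplier would use an FFT or number-theoretic transform at this step):

```python
def convolve(a, b):
    # Discrete convolution of two digit sequences.
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def multiply(x, y):
    # Long multiplication as convolution: convolve the digit sequences
    # (least-significant digit first), then absorb the carries by
    # evaluating the resulting polynomial at 10.
    a = [int(d) for d in str(x)[::-1]]
    b = [int(d) for d in str(y)[::-1]]
    return sum(c * 10 ** k for k, c in enumerate(convolve(a, b)))
```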

Proof
The proof here is shown for a particular normalisation of the Fourier transform. As mentioned above, if the transform is normalised differently, then constant scaling factors will appear in the derivation. Let f, g belong to L1(Rn). Let F be the Fourier transform of f and G be the Fourier transform of g:

F(ν) = ∫Rn f(x) e^(−2πi x·ν) dx,    G(ν) = ∫Rn g(x) e^(−2πi x·ν) dx,

where the dot between x and ν indicates the inner product of Rn. Let h be the convolution of f and g:

h(z) = ∫Rn f(x) g(z − x) dx.

Now notice that

∫∫ |f(x) g(z − x)| dz dx = ∫ |f(x)| ( ∫ |g(z − x)| dz ) dx = ∫ |f(x)| ‖g‖1 dx = ‖f‖1 ‖g‖1 < ∞.

Hence by Fubini's theorem we have that h ∈ L1(Rn), so its Fourier transform H is defined by the integral formula

H(ν) = ∫Rn h(z) e^(−2πi z·ν) dz = ∫∫ f(x) g(z − x) e^(−2πi z·ν) dx dz.


Observe that |f(x) g(z − x) e^(−2πi z·ν)| = |f(x) g(z − x)|, and hence by the argument above we may apply Fubini's theorem again (i.e. interchange the order of integration):

H(ν) = ∫ f(x) ( ∫ g(z − x) e^(−2πi z·ν) dz ) dx.

Substitute y = z − x; then dy = dz, so:

H(ν) = ∫ f(x) ( ∫ g(y) e^(−2πi (y + x)·ν) dy ) dx = ∫ f(x) e^(−2πi x·ν) ( ∫ g(y) e^(−2πi y·ν) dy ) dx.

These two integrals are the definitions of F(ν) and G(ν), so H(ν) = F(ν) G(ν). QED.

Functions of a discrete variable (sequences)


By similar arguments, it can be shown that the discrete convolution of sequences x and y is given by:

x ∗ y = DTFT⁻¹[ DTFT{x} · DTFT{y} ],

where DTFT represents the discrete-time Fourier transform. An important special case is the circular convolution of x and y, defined by xN ∗ y, where xN is a periodic summation:

xN(n) = Σm x(n − mN).

It can then be shown that:

xN ∗ y = DFT⁻¹[ DFT{xN} · DFT{yN} ],

where DFT represents the discrete Fourier transform. The proof follows from DTFT § Periodic data, which indicates that DTFT{xN} can be written as a sequence of impulses at the frequencies k/N with coefficients given by DFT{xN}. The product with DTFT{y} is thereby reduced to a discrete-frequency function (also using sampling of the DTFT), and applying the inverse DTFT yields the stated result.


QED.

References
Katznelson, Yitzhak (1976), An introduction to Harmonic Analysis, Dover, ISBN0-486-63331-4 Weisstein, Eric W., " Convolution Theorem (http://mathworld.wolfram.com/ConvolutionTheorem.html)", MathWorld. Crutchfield, Steve (October 9, 2010), "The Joy of Convolution" (http://www.jhu.edu/signals/convolve/index. html), Johns Hopkins University, retrieved November 19, 2010

Additional resources
For visual representation of the use of the convolution theorem in signal processing, see: Johns Hopkins University's Java-aided simulation: http://www.jhu.edu/signals/convolve/index.html

Laplace transform
The Laplace transform is a widely used integral transform in mathematics with many applications in physics and engineering. It is a linear operator on a function f(t) with a real argument t (t ≥ 0) that transforms f(t) to a function F(s) with complex argument s, given by the integral

F(s) = ∫0^∞ f(t) e^(−st) dt.

This transformation is bijective for the majority of practical uses; the most common pairs of f(t) and F(s) are often given in tables for easy reference. The Laplace transform has the useful property that many relationships and operations over the original f(t) correspond to simpler relationships and operations over its image F(s). It is named after Pierre-Simon Laplace (/ləˈplɑːs/), who introduced the transform in his work on probability theory. The Laplace transform is related to the Fourier transform, but whereas the Fourier transform expresses a function or signal as a series of modes of vibration (frequencies), the Laplace transform resolves a function into its moments. Like the Fourier transform, the Laplace transform is used for solving differential and integral equations. In physics and engineering it is used for analysis of linear time-invariant systems such as electrical circuits, harmonic oscillators, optical devices, and mechanical systems. In such analyses, the Laplace transform is often interpreted as a transformation from the time domain, in which inputs and outputs are functions of time, to the frequency domain, where the same inputs and outputs are functions of complex angular frequency, in radians per unit time. Given a simple mathematical or functional description of an input or output to a system, the Laplace transform provides an alternative functional description that often simplifies the process of analyzing the behavior of the system, or in synthesizing a new system based on a set of specifications.


History
The Laplace transform is named after mathematician and astronomer Pierre-Simon Laplace, who used a similar transform (now called the z-transform) in his work on probability theory. The current widespread use of the transform came about soon after World War II, although it had been used in the 19th century by Abel, Lerch, Heaviside, and Bromwich. The older history of similar transforms is as follows. From 1744, Leonhard Euler investigated integrals of the form

as solutions of differential equations but did not pursue the matter very far.[1] Joseph Louis Lagrange was an admirer of Euler and, in his work on integrating probability density functions, investigated expressions of the form

which some modern historians have interpreted within modern Laplace transform theory. These types of integrals seem first to have attracted Laplace's attention in 1782, where he was following in the spirit of Euler in using the integrals themselves as solutions of equations. However, in 1785, Laplace took the critical step forward when, rather than just looking for a solution in the form of an integral, he started to apply the transforms in the sense that was later to become popular. He used an integral of the form:

akin to a Mellin transform, to transform the whole of a difference equation, in order to look for solutions of the transformed equation. He then went on to apply the Laplace transform in the same way and started to derive some of its properties, beginning to appreciate its potential power. Laplace also recognised that Joseph Fourier's method of Fourier series for solving the diffusion equation could only apply to a limited region of space as the solutions were periodic. In 1809, Laplace applied his transform to find solutions that diffused indefinitely in space.

Formal definition
The Laplace transform of a function f(t), defined for all real numbers t ≥ 0, is the function F(s), defined by:

F(s) = ∫0^∞ e^(−st) f(t) dt.

The parameter s is a complex number, s = σ + iω, with real numbers σ and ω. Other notations for the Laplace transform include L{f}, or alternatively L{f(t)}, instead of F.

The meaning of the integral depends on types of functions of interest. A necessary condition for existence of the integral is that f must be locally integrable on [0, ∞). For locally integrable functions that decay at infinity or are of exponential type, the integral can be understood as a (proper) Lebesgue integral. However, for many applications it is necessary to regard it as a conditionally convergent improper integral at ∞. Still more generally, the integral can be understood in a weak sense, and this is dealt with below. One can define the Laplace transform of a finite Borel measure μ by the Lebesgue integral

L{μ}(s) = ∫[0,∞) e^(−st) dμ(t).

An important special case is where μ is a probability measure or, even more specifically, the Dirac delta function. In operational calculus, the Laplace transform of a measure is often treated as though the measure came from a distribution function f. In that case, to avoid potential confusion, one often writes

L{f}(s) = ∫(0−)^∞ e^(−st) f(t) dt,


where the lower limit of 0− is shorthand notation for

lim(ε→0+) ∫(−ε)^∞.

This limit emphasizes that any point mass located at 0 is entirely captured by the Laplace transform. Although with the Lebesgue integral it is not necessary to take such a limit, it does appear more naturally in connection with the Laplace–Stieltjes transform.
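As a rough numerical sanity check of the definition (a sketch only; the truncation point T and step count n are arbitrary choices, adequate here because the integrand decays rapidly):

```python
import math

def laplace_numeric(f, s, T=50.0, n=200000):
    # Trapezoidal approximation of F(s) = integral_0^T f(t) e^(-st) dt,
    # with T chosen large enough that the neglected tail is negligible.
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return total * h

# L{e^(-2t)}(s) = 1/(s + 2); check at s = 1.
approx = laplace_numeric(lambda t: math.exp(-2.0 * t), 1.0)
assert abs(approx - 1.0 / 3.0) < 1e-6
```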

Probability theory
In pure and applied probability, the Laplace transform is defined as an expected value. If X is a random variable with probability density function f, then the Laplace transform of f is given by the expectation

L{f}(s) = E[e^(−sX)].

By abuse of language, this is referred to as the Laplace transform of the random variable X itself. Replacing s by −t gives the moment generating function of X. The Laplace transform has applications throughout probability theory, including first passage times of stochastic processes such as Markov chains, and renewal theory. Of particular use is the ability to recover the cumulative distribution function of a random variable X by means of the Laplace transform as follows:[2]

FX(x) = L⁻¹{ (1/s) E[e^(−sX)] }(x).

Bilateral Laplace transform


When one says "the Laplace transform" without qualification, the unilateral or one-sided transform is normally intended. The Laplace transform can be alternatively defined as the bilateral Laplace transform or two-sided Laplace transform by extending the limits of integration to be the entire real axis. If that is done, the common unilateral transform simply becomes a special case of the bilateral transform where the definition of the function being transformed is multiplied by the Heaviside step function. The bilateral Laplace transform is defined as follows:

B{f}(s) = ∫(−∞)^∞ e^(−st) f(t) dt.

Inverse Laplace transform


The inverse Laplace transform is given by the following complex integral, which is known by various names (the Bromwich integral, the Fourier–Mellin integral, and Mellin's inverse formula):

f(t) = L⁻¹{F}(t) = (1/2πi) lim(T→∞) ∫(γ−iT)^(γ+iT) e^(st) F(s) ds,

where γ is a real number so that the contour path of integration is in the region of convergence of F(s). An alternative formula for the inverse Laplace transform is given by Post's inversion formula.


Region of convergence
If f is a locally integrable function (or more generally a Borel measure locally of bounded variation), then the Laplace transform F(s) of f converges provided that the limit

exists. The Laplace transform converges absolutely if the integral

exists (as a proper Lebesgue integral). The Laplace transform is usually understood as conditionally convergent, meaning that it converges in the former but not necessarily the latter sense.

The set of values for which F(s) converges absolutely is either of the form Re(s) > a or else Re(s) ≥ a, where a is an extended real constant, −∞ ≤ a ≤ ∞. (This follows from the dominated convergence theorem.) The constant a is known as the abscissa of absolute convergence, and depends on the growth behavior of f(t). Analogously, the two-sided transform converges absolutely in a strip of the form a < Re(s) < b, possibly including the lines Re(s) = a or Re(s) = b. The subset of values of s for which the Laplace transform converges absolutely is called the region of absolute convergence, or the domain of absolute convergence. In the two-sided case, it is sometimes called the strip of absolute convergence. The Laplace transform is analytic in the region of absolute convergence.

Similarly, the set of values for which F(s) converges (conditionally or absolutely) is known as the region of conditional convergence, or simply the region of convergence (ROC). If the Laplace transform converges (conditionally) at s = s₀, then it automatically converges for all s with Re(s) > Re(s₀). Therefore the region of convergence is a half-plane of the form Re(s) > a, possibly including some points of the boundary line Re(s) = a. In the region of convergence Re(s) > Re(s₀), the Laplace transform of f can be expressed by integrating by parts as the integral

$$F(s) = (s - s_0)\int_0^\infty e^{-(s - s_0)t}\beta(t)\,dt,\qquad \beta(u) = \int_0^u e^{-s_0 t}f(t)\,dt.$$

That is, in the region of convergence F(s) can effectively be expressed as the absolutely convergent Laplace transform of some other function. In particular, it is analytic. A variety of theorems, in the form of Paley–Wiener theorems, exist concerning the relationship between the decay properties of f and the properties of the Laplace transform within the region of convergence.

In engineering applications, a function corresponding to a linear time-invariant (LTI) system is stable if every bounded input produces a bounded output. This is equivalent to the absolute convergence of the Laplace transform of the impulse response function in the region Re(s) ≥ 0. As a result, LTI systems are stable provided the poles of the Laplace transform of the impulse response function have negative real part.
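The stability criterion above can be sketched with a computer algebra system; the transfer function below is a hypothetical example chosen for illustration, not one taken from the text:

```python
import sympy as sp

s = sp.symbols('s')
# Hypothetical LTI transfer function (illustrative choice, not from the text)
H = 1 / ((s + 2) * (s + 5))
poles = sp.solve(sp.denom(H), s)            # poles at s = -2 and s = -5
stable = all(sp.re(p) < 0 for p in poles)   # every pole in Re(s) < 0 => BIBO stable
print(sorted(poles), stable)
```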

Properties and theorems


The Laplace transform has a number of properties that make it useful for analyzing linear dynamical systems. The most significant advantage is that differentiation and integration become multiplication and division, respectively, by s (similarly to logarithms changing multiplication of numbers into addition of their logarithms). Because of this property, the Laplace variable s is also known as an operator variable in the L domain: either the derivative operator or (for s⁻¹) the integration operator. The transform turns integral equations and differential equations into polynomial equations, which are much easier to solve. Once solved, use of the inverse Laplace transform reverts to the time domain. Given the functions f(t) and g(t), and their respective Laplace transforms F(s) and G(s):

the following table is a list of properties of the unilateral Laplace transform:


Properties of the unilateral Laplace transform (time domain ⟷ s domain):

- Linearity: a f(t) + b g(t) ⟷ a F(s) + b G(s). Can be proved using basic rules of integration.
- Frequency differentiation: t f(t) ⟷ −F′(s). F′ is the first derivative of F.
- Frequency differentiation (general form): tⁿ f(t) ⟷ (−1)ⁿ F⁽ⁿ⁾(s), the nth derivative of F(s).
- Differentiation: f′(t) ⟷ s F(s) − f(0). f is assumed to be a differentiable function, and its derivative is assumed to be of exponential type. This can then be obtained by integration by parts.
- Second differentiation: f″(t) ⟷ s² F(s) − s f(0) − f′(0). f is assumed twice differentiable and the second derivative to be of exponential type. Follows by applying the differentiation property to f′(t).
- General differentiation: f⁽ⁿ⁾(t) ⟷ sⁿ F(s) − sⁿ⁻¹ f(0) − ⋯ − f⁽ⁿ⁻¹⁾(0). f is assumed to be n-times differentiable, with nth derivative of exponential type. Follows by mathematical induction.
- Frequency integration: f(t)/t ⟷ ∫ₛ^∞ F(u) du. This is deduced using the nature of frequency differentiation and conditional convergence.
- Integration: ∫₀ᵗ f(τ) dτ = (u ∗ f)(t) ⟷ F(s)/s. u(t) is the Heaviside step function; note that (u ∗ f)(t) is the convolution of u(t) and f(t).
- Time scaling: f(at) ⟷ (1/a) F(s/a), for a > 0.
- Frequency shifting: e^{at} f(t) ⟷ F(s − a).
- Time shifting: f(t − a) u(t − a) ⟷ e^{−as} F(s), for a > 0. u(t) is the Heaviside step function.
- Multiplication: f(t) g(t) ⟷ (1/2πi) ∫_{c−i∞}^{c+i∞} F(σ) G(s − σ) dσ. The integration is done along the vertical line Re(σ) = c that lies entirely within the region of convergence of F.
- Convolution: (f ∗ g)(t) ⟷ F(s) · G(s). f(t) and g(t) are extended by zero for t < 0 in the definition of the convolution.
- Complex conjugation: f*(t) ⟷ F*(s*).
- Cross-correlation: (f ⋆ g)(t) ⟷ F*(−s*) · G(s).
- Periodic function: if f(t) is a periodic function of period T, so that f(t) = f(t + T) for all t ≥ 0, then f(t) ⟷ (1/(1 − e^{−Ts})) ∫₀^T e^{−st} f(t) dt. This is the result of the time-shifting property and the geometric series.

Initial value theorem: f(0⁺) = lim_{s→∞} s F(s).

Final value theorem: f(∞) = lim_{s→0} s F(s), if all poles of s F(s) are in the left half-plane.

The final value theorem is useful because it gives the long-term behaviour without having to perform partial fraction decompositions or other difficult algebra. If a function has poles in the right half-plane or on the imaginary axis (e.g. e^t or sin(t), respectively), the behaviour of this formula is undefined.
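Both limit theorems can be checked with sympy; the function f(t) = 1 − e^{−3t}, which settles to 1, is an arbitrary example chosen here:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
f = 1 - sp.exp(-3*t)                               # settles to 1 as t -> oo
F = sp.laplace_transform(f, t, s, noconds=True)    # 1/s - 1/(s + 3)
initial = sp.limit(s*F, s, sp.oo)                  # initial value theorem: f(0+) = 0
final = sp.limit(s*F, s, 0)                        # final value theorem: 1
print(F, initial, final)
```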


Relation to power series


The Laplace transform can be viewed as a continuous analogue of a power series. If a(n) is a discrete function of a positive integer n, then its power series is

$$\sum_{n=0}^{\infty} a(n)\,x^n.$$

Replacing summation with integration, a continuous version of the power series becomes

$$\int_0^\infty f(t)\,x^t\,dt,$$

where the discrete function a(n) is replaced by the continuous one f(t). For this to have nicer convergence properties, we require 0 < x < 1. Changing the base of the power, this becomes

$$\int_0^\infty f(t)\left(e^{\ln x}\right)^t dt,$$

and making the substitution −s = ln x, we obtain the definition of the Laplace transform.

Relation to moments
The quantities

$$\mu_n = \int_0^\infty t^n f(t)\,dt$$

are the moments of the function f. Note that by repeated differentiation under the integral sign,

$$(-1)^n\,\frac{d^n F}{ds^n}(0) = \mu_n.$$

This is of special significance in probability theory, where the moments of a random variable X with density f are given by the expectation values μₙ = E[Xⁿ]. Then the relation

$$\mu_n = (-1)^n\,\frac{d^n}{ds^n}\,E\!\left[e^{-sX}\right]\Big|_{s=0}$$

holds.
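The moment relation can be verified with sympy; f(t) = e^{−t} (the Exp(1) probability density) is an arbitrary choice here, whose nth moment is n!:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
f = sp.exp(-t)                                     # Exp(1) probability density
F = sp.laplace_transform(f, t, s, noconds=True)    # 1/(s + 1)
# mu_n = (-1)^n * d^n F / ds^n  evaluated at s = 0
moments = [(-1)**k * sp.diff(F, s, k).subs(s, 0) for k in range(5)]
print(moments)   # [1, 1, 2, 6, 24] = k! for k = 0..4
```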
Proof of the Laplace transform of a function's derivative


It is often convenient to use the differentiation property of the Laplace transform to find the transform of a function's derivative. This can be derived from the basic expression for a Laplace transform by integration by parts:

$$\mathcal{L}\{f(t)\} = \int_{0^-}^\infty e^{-st}f(t)\,dt = \left[\frac{f(t)e^{-st}}{-s}\right]_{0^-}^\infty - \int_{0^-}^\infty \frac{e^{-st}}{-s}f'(t)\,dt,$$

yielding

$$\mathcal{L}\{f'(t)\} = s\,\mathcal{L}\{f(t)\} - f(0^-),$$

and in the bilateral case,

$$\mathcal{L}\{f'(t)\} = s\int_{-\infty}^\infty e^{-st}f(t)\,dt = s\,\mathcal{L}\{f(t)\}.$$

The general result

$$\mathcal{L}\{f^{(n)}(t)\} = s^n\,\mathcal{L}\{f(t)\} - s^{n-1}f(0^-) - \cdots - f^{(n-1)}(0^-),$$

where f⁽ⁿ⁾ is the nth derivative of f, can then be established with an inductive argument.


Evaluating improper integrals


Let F(s) = L{f(t)}; then (see the table above)

$$\mathcal{L}\left\{\frac{f(t)}{t}\right\} = \int_s^\infty F(u)\,du,$$

or

$$\int_0^\infty \frac{f(t)}{t}\,e^{-st}\,dt = \int_s^\infty F(u)\,du.$$

Letting s → 0, we get the identity

$$\int_0^\infty \frac{f(t)}{t}\,dt = \int_0^\infty F(u)\,du.$$

For example, taking f(t) = cos(at) − cos(bt), so that F(s) = s/(s² + a²) − s/(s² + b²), the identity gives

$$\int_0^\infty \frac{\cos(at) - \cos(bt)}{t}\,dt = \int_0^\infty\left(\frac{u}{u^2+a^2} - \frac{u}{u^2+b^2}\right)du = \ln\frac{b}{a}.$$

Another example is the Dirichlet integral.
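The Dirichlet integral can be checked against this identity with sympy; both sides evaluate to π/2:

```python
import sympy as sp

t, u = sp.symbols('t u', positive=True)
# F(u) = L{sin t}(u) = 1/(u^2 + 1); the identity then yields the Dirichlet integral
F = sp.laplace_transform(sp.sin(t), t, u, noconds=True)
rhs = sp.integrate(F, (u, 0, sp.oo))               # integral of F over (0, oo)
lhs = sp.integrate(sp.sin(t)/t, (t, 0, sp.oo))     # Dirichlet integral, computed directly
print(lhs, rhs)   # pi/2  pi/2
```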

Relationship to other transforms


Laplace–Stieltjes transform

The (unilateral) Laplace–Stieltjes transform of a function g : R → R is defined by the Lebesgue–Stieltjes integral

$$\{\mathcal{L}^*g\}(s) = \int_0^\infty e^{-st}\,dg(t).$$

The function g is assumed to be of bounded variation. If g is the antiderivative of f:

$$g(x) = \int_0^x f(t)\,dt,$$

then the Laplace–Stieltjes transform of g and the Laplace transform of f coincide. In general, the Laplace–Stieltjes transform is the Laplace transform of the Stieltjes measure associated to g. So in practice, the only distinction between the two transforms is that the Laplace transform is thought of as operating on the density function of the measure, whereas the Laplace–Stieltjes transform is thought of as operating on its cumulative distribution function.

Fourier transform

The continuous Fourier transform is equivalent to evaluating the bilateral Laplace transform with imaginary argument s = iω or s = 2πfi:

$$\hat f(\omega) = \mathcal{F}\{f(t)\} = \mathcal{L}\{f(t)\}\big|_{s = i\omega} = F(s)\big|_{s = i\omega} = \int_{-\infty}^\infty e^{-i\omega t}f(t)\,dt.$$

This definition of the Fourier transform requires a prefactor of 1/2π on the inverse Fourier transform. This relationship between the Laplace and Fourier transforms is often used to determine the frequency spectrum of a signal or dynamical system. The above relation is valid as stated if and only if the region of convergence (ROC) of F(s) contains the imaginary axis, σ = 0. For example, the function f(t) = cos(ω₀t) has a Laplace transform F(s) = s/(s² + ω₀²) whose ROC is Re(s) > 0. As s = iω₀ is a pole of F(s), substituting s = iω in F(s) does not yield the Fourier transform of f(t)u(t), which is proportional to the Dirac delta function δ(ω − ω₀).

However, a relation of the form

$$\lim_{\sigma\to 0^+} F(\sigma + i\omega) = \hat f(\omega)$$

holds under much weaker conditions. For instance, this holds for the above example provided that the limit is understood as a weak limit of measures (see vague topology). General conditions relating the limit of the Laplace transform of a function on the boundary to the Fourier transform take the form of Paley–Wiener theorems.

Mellin transform

The Mellin transform and its inverse are related to the two-sided Laplace transform by a simple change of variables. If in the Mellin transform

$$G(s) = \mathcal{M}\{g(\theta)\} = \int_0^\infty \theta^{s-1} g(\theta)\,d\theta$$

we set θ = e^{−t}, we get a two-sided Laplace transform.

Z-transform

The unilateral or one-sided Z-transform is simply the Laplace transform of an ideally sampled signal with the substitution of

$$z = e^{sT},$$

where T = 1/fₛ is the sampling period (in units of time, e.g., seconds) and fₛ is the sampling rate (in samples per second or hertz). Let

$$\Delta_T(t) \;=\; \sum_{n=0}^\infty \delta(t - nT)$$

be a sampling impulse train (also called a Dirac comb) and

$$x_q(t) \;=\; x(t)\,\Delta_T(t) = \sum_{n=0}^\infty x(nT)\,\delta(t - nT)$$

be the sampled representation of the continuous-time x(t). The Laplace transform of the sampled signal x_q(t) is

$$X_q(s) = \int_{0^-}^\infty x_q(t)\,e^{-st}\,dt = \sum_{n=0}^\infty x(nT)\,e^{-nsT}.$$

This is precisely the definition of the unilateral Z-transform of the discrete function x[n] = x(nT),

$$X(z) = \sum_{n=0}^\infty x[n]\,z^{-n},$$

with the substitution of z ← e^{sT}. Comparing the last two equations, we find the relationship between the unilateral Z-transform and the Laplace transform of the sampled signal:

$$X_q(s) = X(z)\big|_{z = e^{sT}}.$$
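This relationship admits a quick numeric check; the signal x(t) = e^{−t} and the values of T and s below are arbitrary choices for illustration:

```python
import math

T = 0.1          # sampling period (arbitrary choice)
s = 2.0          # evaluation point in the s-plane (arbitrary, inside the ROC)
z = math.exp(s * T)                     # substitution z = e^{sT}
# Truncated Laplace transform of the impulse-sampled x(t) = exp(-t)
X_sampled = sum(math.exp(-n * T) * math.exp(-s * n * T) for n in range(1000))
# Closed-form unilateral Z-transform of x[n] = exp(-n*T): 1 / (1 - exp(-T)/z)
X_z = 1.0 / (1.0 - math.exp(-T) / z)
print(X_sampled, X_z)   # the two values agree
```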

The similarity between the Z- and Laplace transforms is expanded upon in the theory of time scale calculus.

Borel transform

The integral form of the Borel transform

$$F(s) = \int_0^\infty f(z)\,e^{-sz}\,dz$$

is a special case of the Laplace transform for f an entire function of exponential type, meaning that

$$|f(z)| \le A e^{B|z|}$$

for some constants A and B. The generalized Borel transform allows a different weighting function to be used, rather than the exponential function, to transform functions not of exponential type. Nachbin's theorem gives necessary and sufficient conditions for the Borel transform to be well defined.

Fundamental relationships

Since an ordinary Laplace transform can be written as a special case of a two-sided transform, and since the two-sided transform can be written as the sum of two one-sided transforms, the theories of the Laplace, Fourier, Mellin, and Z-transforms are at bottom the same subject. However, a different point of view and different characteristic problems are associated with each of these four major integral transforms.

Table of selected Laplace transforms


The following table provides Laplace transforms for many common functions of a single variable. For definitions and explanations, see the Explanatory Notes at the end of the table. Because the Laplace transform is a linear operator: The Laplace transform of a sum is the sum of Laplace transforms of each term.

The Laplace transform of a multiple of a function is that multiple times the Laplace transformation of that function.

Using this linearity, and various trigonometric, hyperbolic, and complex number (etc.) properties and/or identities, some Laplace transforms can be obtained from others more quickly than by using the definition directly. The unilateral Laplace transform takes as input a function whose time domain is the non-negative reals, which is why all of the time-domain functions in the table below are multiples of the Heaviside step function, u(t). The entries of the table that involve a time delay τ are required to be causal (meaning that τ > 0). A causal system is a system where the impulse response h(t) is zero for all time t prior to t = 0. In general, the region of convergence for causal systems is not the same as that of anticausal systems.


Each entry below lists the function, the time-domain form f(t), the s-domain transform F(s), the region of convergence, and a reference or derivation note:

- unit impulse: δ(t) ⟷ 1, all s (by inspection)
- delayed impulse: δ(t − τ) ⟷ e^{−τs} (time shift of unit impulse)
- unit step: u(t) ⟷ 1/s, Re(s) > 0 (integrate unit impulse)
- delayed unit step: u(t − τ) ⟷ e^{−τs}/s, Re(s) > 0 (time shift of unit step)
- ramp: t·u(t) ⟷ 1/s², Re(s) > 0 (integrate unit impulse twice)
- nth power (for integer n): (tⁿ/n!)·u(t) ⟷ 1/s^{n+1}, Re(s) > 0, n > −1 (integrate unit step n times)[3][4]
- qth power (for complex q): (t^q/Γ(q+1))·u(t) ⟷ 1/s^{q+1}, Re(s) > 0, Re(q) > −1
- nth root: t^{1/n}·u(t) ⟷ Γ(1/n + 1)/s^{(1/n)+1}, Re(s) > 0 (set q = 1/n above)
- nth power with frequency shift: (tⁿ/n!)·e^{−αt}·u(t) ⟷ 1/(s + α)^{n+1}, Re(s) > −α (integrate unit step, apply frequency shift)
- delayed nth power with frequency shift: ((t − τ)ⁿ/n!)·e^{−α(t−τ)}·u(t − τ) ⟷ e^{−τs}/(s + α)^{n+1}, Re(s) > −α (integrate unit step, apply frequency shift, apply time shift)
- exponential decay: e^{−αt}·u(t) ⟷ 1/(s + α), Re(s) > −α (frequency shift of unit step; Bracewell 1978, p. 227)
- two-sided exponential decay: e^{−α|t|} ⟷ 2α/(α² − s²), −α < Re(s) < α (Bracewell 1978, p. 227)
- exponential approach: (1 − e^{−αt})·u(t) ⟷ α/(s(s + α)), Re(s) > 0 (unit step minus exponential decay; Williams 1973, p. 88)
- sine: sin(ωt)·u(t) ⟷ ω/(s² + ω²), Re(s) > 0 (Bracewell 1978, p. 227)
- cosine: cos(ωt)·u(t) ⟷ s/(s² + ω²), Re(s) > 0 (Bracewell 1978, p. 227)
- hyperbolic sine: sinh(αt)·u(t) ⟷ α/(s² − α²), Re(s) > |α| (Williams 1973, p. 88)
- hyperbolic cosine: cosh(αt)·u(t) ⟷ s/(s² − α²), Re(s) > |α| (Williams 1973, p. 88)
- exponentially decaying sine wave: e^{−αt} sin(ωt)·u(t) ⟷ ω/((s + α)² + ω²), Re(s) > −α (Bracewell 1978, p. 227)
- exponentially decaying cosine wave: e^{−αt} cos(ωt)·u(t) ⟷ (s + α)/((s + α)² + ω²), Re(s) > −α (Bracewell 1978, p. 227)
- natural logarithm: ln(t)·u(t) ⟷ −(1/s)[ln(s) + γ], Re(s) > 0 (Williams 1973, p. 88)
- Bessel function of the first kind, of order n: Jₙ(ωt)·u(t) ⟷ (√(s² + ω²) − s)ⁿ/(ωⁿ·√(s² + ω²)), Re(s) > 0, n > −1 (Williams 1973, p. 89)
- error function: erf(t)·u(t) ⟷ (1/s)·e^{s²/4}·erfc(s/2), Re(s) > 0 (Williams 1973, p. 89)


Explanatory notes: u(t) represents the Heaviside step function; δ(t) represents the Dirac delta function; Γ(z) represents the Gamma function; γ is the Euler–Mascheroni constant; t, a real number, typically represents time, although it can represent any independent dimension; s is the complex angular frequency, and Re(s) is its real part; α, β, τ, and ω are real numbers; and n is an integer.

s-Domain equivalent circuits and impedances


The Laplace transform is often used in circuit analysis, and simple conversions to the s-domain of circuit elements can be made. Circuit elements can be transformed into impedances, very similar to phasor impedances. Here is a summary of the equivalents: a resistor R transforms to an impedance R; an inductor L transforms to an impedance sL, in series with a voltage source L·i(0⁻) accounting for the initial current; and a capacitor C transforms to an impedance 1/(sC), in series with a voltage source v(0⁻)/s accounting for the initial voltage.

Note that the resistor is exactly the same in the time domain and the s-domain. The sources are put in if there are initial conditions on the circuit elements. For example, if a capacitor has an initial voltage across it, or if the inductor has an initial current through it, the sources inserted in the s-domain account for that. The equivalents for current and voltage sources are simply derived from the transformations in the table above.


Examples: How to apply the properties and theorems


The Laplace transform is used frequently in engineering and physics; the output of a linear time-invariant system can be calculated by convolving its unit impulse response with the input signal. Performing this calculation in Laplace space turns the convolution into a multiplication, the latter being easier to solve because of its algebraic form. For more information, see control theory. The Laplace transform can also be used to solve differential equations and is used extensively in electrical engineering. The Laplace transform reduces a linear differential equation to an algebraic equation, which can then be solved by the formal rules of algebra. The original differential equation can then be solved by applying the inverse Laplace transform. The English electrical engineer Oliver Heaviside first proposed a similar scheme, although without using the Laplace transform; the resulting operational calculus is credited as the Heaviside calculus.

Example 1: Solving a differential equation


In nuclear physics, the following fundamental relationship governs radioactive decay: the number of radioactive atoms N in a sample of a radioactive isotope decays at a rate proportional to N. This leads to the first-order linear differential equation

$$\frac{dN}{dt} = -\lambda N,$$

where λ is the decay constant. The Laplace transform can be used to solve this equation. Rearranging the equation to one side, we have

$$\frac{dN}{dt} + \lambda N = 0.$$

Next, we take the Laplace transform of both sides of the equation:

$$s\tilde N(s) - N_0 + \lambda \tilde N(s) = 0,$$

where

$$\tilde N(s) = \mathcal{L}\{N(t)\}$$

and

$$N_0 = N(0).$$

Solving, we find

$$\tilde N(s) = \frac{N_0}{s + \lambda}.$$

Finally, we take the inverse Laplace transform to find the general solution

$$N(t) = \mathcal{L}^{-1}\{\tilde N(s)\} = N_0\,e^{-\lambda t},$$

which is indeed the correct form for radioactive decay.
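The steps of this worked example can be reproduced with sympy's Laplace-transform machinery:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
lam, N0, Ns = sp.symbols('lambda N_0 N_s', positive=True)
# Transformed equation: s*Ns - N0 + lam*Ns = 0  (Ns stands for the transform of N)
sol = sp.solve(sp.Eq(s*Ns - N0 + lam*Ns, 0), Ns)[0]   # N0/(s + lam)
N = sp.inverse_laplace_transform(sol, s, t)           # N0*exp(-lam*t)
print(sol, N)
```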


Example 2: Deriving the complex impedance for a capacitor


In the theory of electrical circuits, the current flow in a capacitor is proportional to the capacitance and the rate of change in the electrical potential (in SI units). Symbolically, this is expressed by the differential equation

$$i = C\frac{dv}{dt},$$

where C is the capacitance (in farads) of the capacitor, i = i(t) is the electric current (in amperes) through the capacitor as a function of time, and v = v(t) is the voltage (in volts) across the terminals of the capacitor, also as a function of time. Taking the Laplace transform of this equation, we obtain

$$I(s) = C\left(sV(s) - V_0\right),$$

where

$$I(s) = \mathcal{L}\{i(t)\},\qquad V(s) = \mathcal{L}\{v(t)\},$$

and

$$V_0 = v(0).$$

Solving for V(s) we have

$$V(s) = \frac{I(s)}{sC} + \frac{V_0}{s}.$$

The definition of the complex impedance Z (in ohms) is the ratio of the complex voltage V divided by the complex current I while holding the initial state V₀ at zero:

$$Z(s) = \left.\frac{V(s)}{I(s)}\right|_{V_0 = 0}.$$

Using this definition and the previous equation, we find:

$$Z(s) = \frac{1}{sC},$$

which is the correct expression for the complex impedance of a capacitor.
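The algebra can be sketched in sympy (the symbol names are arbitrary; V₀ is set to zero from the start):

```python
import sympy as sp

s, C, I, V = sp.symbols('s C I V', positive=True)
# Transform of i = C dv/dt with v(0) = 0:  I = C*s*V; solve for V, then Z = V/I
V_sol = sp.solve(sp.Eq(I, C*s*V), V)[0]    # I/(C*s)
Z = sp.simplify(V_sol / I)                 # 1/(C*s): impedance of a capacitor
print(Z)
```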

Example 3: Method of partial fraction expansion


Consider a linear time-invariant system with transfer function

$$H(s) = \frac{1}{(s+\alpha)(s+\beta)}.$$

The impulse response is simply the inverse Laplace transform of this transfer function:

$$h(t) = \mathcal{L}^{-1}\{H(s)\}.$$

To evaluate this inverse transform, we begin by expanding H(s) using the method of partial fraction expansion:

$$\frac{1}{(s+\alpha)(s+\beta)} = \frac{P}{s+\alpha} + \frac{R}{s+\beta}.$$

The unknown constants P and R are the residues located at the corresponding poles of the transfer function. Each residue represents the relative contribution of that singularity to the transfer function's overall shape. By the residue theorem, the inverse Laplace transform depends only upon the poles and their residues. To find the residue P, we multiply both sides of the equation by s + α to get

$$\frac{1}{s+\beta} = P + \frac{R(s+\alpha)}{s+\beta}.$$

Then by letting s = −α, the contribution from R vanishes and all that is left is

$$P = \left.\frac{1}{s+\beta}\right|_{s=-\alpha} = \frac{1}{\beta - \alpha}.$$

Similarly, the residue R is given by

$$R = \left.\frac{1}{s+\alpha}\right|_{s=-\beta} = \frac{1}{\alpha - \beta}.$$

Note that

$$R = -\frac{1}{\beta - \alpha} = -P,$$

and so the substitution of R and P into the expanded expression for H(s) gives

$$H(s) = \frac{1}{\beta - \alpha}\left(\frac{1}{s+\alpha} - \frac{1}{s+\beta}\right).$$

Finally, using the linearity property and the known transform for exponential decay (see the exponential decay entry in the table of Laplace transforms, above), we can take the inverse Laplace transform of H(s) to obtain

$$h(t) = \mathcal{L}^{-1}\{H(s)\} = \frac{1}{\beta - \alpha}\left(e^{-\alpha t} - e^{-\beta t}\right),$$

which is the impulse response of the system.
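The expansion and inversion can be cross-checked with sympy; α = 1 and β = 2 are arbitrary concrete values chosen to keep the example simple:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
H = 1 / ((s + 1) * (s + 2))                 # alpha = 1, beta = 2 (arbitrary)
H_pf = sp.apart(H, s)                       # 1/(s + 1) - 1/(s + 2): residues P = 1, R = -1
h = sp.inverse_laplace_transform(H, s, t)   # exp(-t) - exp(-2*t)
print(H_pf, sp.expand(h))
```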

Example 3.2: Convolution


The same result can be achieved using the convolution property, as if the system is a series of filters with transfer functions 1/(s+α) and 1/(s+β). That is, the inverse of

$$H(s) = \frac{1}{(s+\alpha)(s+\beta)} = \frac{1}{s+\alpha}\cdot\frac{1}{s+\beta}$$

is

$$\mathcal{L}^{-1}\!\left\{\frac{1}{s+\alpha}\right\} * \mathcal{L}^{-1}\!\left\{\frac{1}{s+\beta}\right\} = e^{-\alpha t} * e^{-\beta t} = \int_0^t e^{-\alpha x}e^{-\beta(t-x)}\,dx = \frac{e^{-\alpha t} - e^{-\beta t}}{\beta - \alpha}.$$

Example 4: Mixing sines, cosines, and exponentials


Time function Laplace transform

Starting with the Laplace transform

we find the inverse transform by first adding and subtracting the same constant to the numerator:

By the shift-in-frequency property, we have


Finally, using the Laplace transforms for sine and cosine (see the table, above), we have

Example 5: Phase delay


Time function Laplace transform

Starting with the Laplace transform,

we find the inverse by first rearranging terms in the fraction:

We are now able to take the inverse Laplace transform of our terms:

This is just the sine of the sum of the arguments, yielding:

We can apply similar logic to find that


Example 6: Determining structure of astronomical object from spectrum


The wide and general applicability of the Laplace transform and its inverse is illustrated by an application in astronomy. Given the flux density spectrum of an astronomical source of radio-frequency thermal radiation too distant to resolve as more than a point, the method provides information on the spatial distribution of matter in the source; here the transform relates the spectrum (frequency domain) to structure, rather than to the time domain. Assuming certain properties of the object, e.g. spherical shape and constant temperature, calculations based on carrying out an inverse Laplace transformation on the spectrum of the object can produce the only possible model of the distribution of matter in it (density as a function of distance from the center) consistent with the spectrum.[5] When independent information on the structure of an object is available, the inverse Laplace transform method has been found to be in good agreement.

Notes
[1] , , [2] The cumulative distribution function is the integral of the probability density function. [3] Mathematical Handbook of Formulas and Tables (3rd edition), S. Lipschutz, M.R. Spiegel, J. Liu, Schaum's Outline Series, p.183, 2009, ISBN 978-0-07-154855-7 - provides the case for real q. [4] http:/ / mathworld. wolfram. com/ LaplaceTransform. html - Wolfram MathWorld provides the case for complex q [5] On the interpretation of continuum flux observations from thermal radio sources: I. Continuum spectra and brightness contours, M Salem and MJ Seaton, Monthly Notices of the Royal Astronomical Society (MNRAS), Vol. 167, p. 493-510 (1974) (http:/ / adsabs. harvard. edu/ cgi-bin/ nph-data_query?bibcode=1974MNRAS. 167. . 493S& link_type=ARTICLE& db_key=AST& high=) II. Three-dimensional models, M Salem, MNRAS Vol. 167, p. 511-516 (1974) (http:/ / adsabs. harvard. edu/ cgi-bin/ nph-data_query?bibcode=1974MNRAS. 167. . 511S& link_type=ARTICLE& db_key=AST& high=)

References
Modern
Arendt, Wolfgang; Batty, Charles J.K.; Hieber, Matthias; Neubrander, Frank (2002), Vector-Valued Laplace Transforms and Cauchy Problems, Birkhäuser Basel, ISBN 3-7643-6549-8. Bracewell, Ronald N. (1978), The Fourier Transform and its Applications (2nd ed.), McGraw-Hill Kogakusha, ISBN 0-07-007013-X Bracewell, R. N. (2000), The Fourier Transform and Its Applications (3rd ed.), Boston: McGraw-Hill, ISBN 0-07-116043-4. Davies, Brian (2002), Integral transforms and their applications (Third ed.), New York: Springer, ISBN 0-387-95314-0. Feller, William (1971), An introduction to probability theory and its applications. Vol. II., Second edition, New York: John Wiley & Sons, MR 0270403 (http://www.ams.org/mathscinet-getitem?mr=0270403). Korn, G. A.; Korn, T. M. (1967), Mathematical Handbook for Scientists and Engineers (2nd ed.), McGraw-Hill Companies, ISBN 0-07-035370-0. Polyanin, A. D.; Manzhirov, A. V. (1998), Handbook of Integral Equations, Boca Raton: CRC Press, ISBN 0-8493-2876-4. Schwartz, Laurent (1952), "Transformation de Laplace des distributions", Comm. Sém. Math. Univ. Lund [Medd. Lunds Univ. Mat. Sem.] (in French) 1952: 196-206, MR 0052555 (http://www.ams.org/mathscinet-getitem?mr=0052555). Siebert, William McC. (1986), Circuits, Signals, and Systems, Cambridge, Massachusetts: MIT Press, ISBN 0-262-19229-2. Widder, David Vernon (1941), The Laplace Transform, Princeton Mathematical Series, v. 6, Princeton University Press, MR 0005923 (http://www.ams.org/mathscinet-getitem?mr=0005923).

Widder, David Vernon (1945), "What is the Laplace transform?", The American Mathematical Monthly (The American Mathematical Monthly) 52 (8): 419-425, doi: 10.2307/2305640 (http://dx.doi.org/10.2307/2305640), ISSN 0002-9890 (http://www.worldcat.org/issn/0002-9890), JSTOR 2305640 (http://www.jstor.org/stable/2305640), MR 0013447 (http://www.ams.org/mathscinet-getitem?mr=0013447). Williams, J. (1973), Laplace Transforms, Problem Solvers, George Allen & Unwin, ISBN 0-04-512021-8 Takács, J. (1953), "Fourier amplitúdók meghatározása operátorszámítással", Magyar Híradástechnika (in Hungarian) IV (7-8): 93-96


Historical
Deakin, M. A. B. (1981), "The development of the Laplace transform", Archive for the History of the Exact Sciences 25 (4): 343-390, doi: 10.1007/BF01395660 (http://dx.doi.org/10.1007/BF01395660) Deakin, M. A. B. (1982), "The development of the Laplace transform", Archive for the History of the Exact Sciences 26 (4): 351-381, doi: 10.1007/BF00418754 (http://dx.doi.org/10.1007/BF00418754) Euler, L. (1744), "De constructione aequationum", Opera omnia, 1st series 22: 150-161. Euler, L. (1753), "Methodus aequationes differentiales", Opera omnia, 1st series 22: 181-213. Euler, L. (1769), "Institutiones calculi integralis, Volume 2", Opera omnia, 1st series 12, Chapters 3-5. Grattan-Guinness, I (1997), "Laplace's integral solutions to partial differential equations", in Gillispie, C. C., Pierre Simon Laplace 1749-1827: A Life in Exact Science, Princeton: Princeton University Press, ISBN 0-691-01185-0. Lagrange, J. L. (1773), Mémoire sur l'utilité de la méthode, Œuvres de Lagrange 2, pp. 171-234.

External links
Hazewinkel, Michiel, ed. (2001), "Laplace transform" (http://www.encyclopediaofmath.org/index.php?title=p/l057540), Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4 Online Computation (http://wims.unice.fr/wims/wims.cgi?lang=en&+module=tool/analysis/fourierlaplace.en) of the transform or inverse transform, wims.unice.fr Tables of Integral Transforms (http://eqworld.ipmnet.ru/en/auxiliary/aux-inttrans.htm) at EqWorld: The World of Mathematical Equations. Weisstein, Eric W., " Laplace Transform (http://mathworld.wolfram.com/LaplaceTransform.html)", MathWorld. Laplace Transform Module by John H. Mathews (http://math.fullerton.edu/mathews/c2003/LaplaceTransformMod.html) Good explanations of the initial and final value theorems (http://fourier.eng.hmc.edu/e102/lectures/Laplace_Transform/) Laplace Transforms (http://www.mathpages.com/home/kmath508/kmath508.htm) at MathPages Computational Knowledge Engine (http://www.wolframalpha.com/input/?i=laplace+transform+example) allows one to easily calculate Laplace transforms and their inverses.

Dirac delta function



In mathematics, the Dirac delta function, or δ function, is (informally) a generalized function on the real number line that is zero everywhere except at zero, with an integral of one over the entire real line.[1] The delta function is sometimes thought of as an infinitely high, infinitely thin spike at the origin, with total area one under the spike, and physically represents an idealized point mass or point charge. It was introduced by the theoretical physicist Paul Dirac. In the context of signal processing it is often referred to as the unit impulse symbol (or function). Its discrete analog is the Kronecker delta function, which is usually defined on a finite domain and takes values 0 and 1. From a purely mathematical viewpoint, the Dirac delta is not strictly a function, because any extended-real function that is equal to zero everywhere but a single point must have total integral zero. The delta function only makes sense as a mathematical object when it appears inside an integral. While from this perspective the Dirac delta can usually be manipulated as though it were a function, formally it must be defined as a distribution that is also a measure. In many applications, the Dirac delta is regarded as a kind of limit (a weak limit) of a sequence of functions having a tall spike at the origin. The approximating functions of the sequence are thus called "approximate" or "nascent" delta functions.
The Dirac delta function as the limit (in the sense of distributions) of the sequence of zero-centered normal distributions as the variance tends to zero.

Schematic representation of the Dirac delta function by a line surmounted by an arrow. The height of the arrow is usually used to specify the value of any multiplicative constant, which will give the area under the function. The other convention is to write the area next to the arrowhead.


Overview
The graph of the delta function is usually thought of as following the whole x-axis and the positive y-axis. Despite its name, the delta function is not truly a function, at least not a usual one with range in real numbers. For example, the objects f(x) = (x) and g(x) = 0 are equal everywhere except at x = 0 yet have integrals that are different. According to Lebesgue integration theory, if f and g are functions such that f = g almost everywhere, then f is integrable if and only if g is integrable and the integrals of f and g are identical. Rigorous treatment of the Dirac delta requires measure theory or the theory of distributions. The Dirac delta is used to model a tall narrow spike function (an impulse), and other similar abstractions such as a point charge, point mass or electron point. For example, to calculate the dynamics of a baseball being hit by a bat, one can approximate the force of the bat hitting the baseball by a delta function. In doing so, one not only simplifies the equations, but one also is able to calculate the motion of the baseball by only considering the total impulse of the bat against the ball rather than requiring knowledge of the details of how the bat transferred energy to the ball. In applied mathematics, the delta function is often manipulated as a kind of limit (a weak limit) of a sequence of functions, each member of which has a tall spike at the origin: for example, a sequence of Gaussian distributions centered at the origin with variance tending to zero.
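The limiting behaviour described above can be seen numerically: integrating a test function against ever-narrower zero-centered Gaussians converges to the value of the test function at the origin. The test function cos and the widths below are arbitrary choices for illustration:

```python
import math

def nascent_delta(x, eps):
    # Zero-centered Gaussian with standard deviation eps and unit area
    return math.exp(-x * x / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))

def smeared(f, eps, lo=-1.0, hi=1.0, n=200001):
    # Trapezoidal approximation of the integral of f(x) * nascent_delta(x, eps)
    h = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        x = lo + i * h
        w = 0.5 if i in (0, n - 1) else 1.0
        total += w * f(x) * nascent_delta(x, eps) * h
    return total

for eps in (0.1, 0.01, 0.001):
    print(eps, smeared(math.cos, eps))   # tends to cos(0) = 1 as eps -> 0
```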

History
Joseph Fourier presented what is now called the Fourier integral theorem in his treatise Théorie analytique de la chaleur in the form:[2]

$$f(x) = \frac{1}{2\pi}\int_{-\infty}^\infty d\alpha\, f(\alpha)\int_{-\infty}^\infty dp\,\cos\big(p(x - \alpha)\big),$$

which is tantamount to the introduction of the δ-function in the form:

$$\delta(x - \alpha) = \frac{1}{2\pi}\int_{-\infty}^\infty dp\,\cos\big(p(x - \alpha)\big).$$

Later, Augustin Cauchy expressed the theorem using exponentials:

$$f(x) = \frac{1}{2\pi}\int_{-\infty}^\infty e^{ipx}\left(\int_{-\infty}^\infty e^{-ip\alpha}f(\alpha)\,d\alpha\right)dp.$$

Cauchy pointed out that in some circumstances the order of integration in this result was significant.[3] As justified using the theory of distributions, the Cauchy equation can be rearranged to resemble Fourier's original formulation and expose the δ-function as:

$$f(x) = \int_{-\infty}^\infty \left(\frac{1}{2\pi}\int_{-\infty}^\infty e^{ip(x-\alpha)}\,dp\right)f(\alpha)\,d\alpha,$$

where the δ-function is expressed as:

$$\delta(x - \alpha) = \frac{1}{2\pi}\int_{-\infty}^\infty e^{ip(x-\alpha)}\,dp.$$
A rigorous interpretation of the exponential form and the various limitations upon the function f necessary for its application extended over several centuries. The problems with a classical interpretation are explained as follows: The greatest drawback of the classical Fourier transformation is a rather narrow class of functions (originals) for which it can be effectively computed. Namely, it is necessary that these functions decrease sufficiently rapidly to zero (in the neighborhood of infinity) in order to insure the existence of the Fourier integral. For example, the Fourier transform of such simple functions as polynomials does not exist in the classical sense. The extension of the classical Fourier transformation to distributions considerably enlarged the class of

functions that could be transformed and this removed many obstacles. Further developments included generalization of the Fourier integral, "beginning with Plancherel's pathbreaking L2-theory (1910), continuing with Wiener's and Bochner's works (around 1930) and culminating with the amalgamation into L. Schwartz's theory of distributions (1945)...", and leading to the formal development of the Dirac delta function. An infinitesimal formula for an infinitely tall, unit impulse delta function (an infinitesimal version of the Cauchy distribution) explicitly appears in an 1827 text of Augustin-Louis Cauchy. Siméon Denis Poisson considered the issue in connection with the study of wave propagation, as did Gustav Kirchhoff somewhat later. Kirchhoff and Hermann von Helmholtz also introduced the unit impulse as a limit of Gaussians, which also corresponded to Lord Kelvin's notion of a point heat source. At the end of the 19th century, Oliver Heaviside used formal Fourier series to manipulate the unit impulse.[4] The Dirac delta function as such was introduced as a "convenient notation" by Paul Dirac in his influential 1930 book The Principles of Quantum Mechanics. He called it the "delta function" since he used it as a continuous analogue of the discrete Kronecker delta.


Definitions
The Dirac delta can be loosely thought of as a function on the real line which is zero everywhere except at the origin, where it is infinite,

$$\delta(x) = \begin{cases} +\infty, & x = 0 \\ 0, & x \ne 0, \end{cases}$$

and which is also constrained to satisfy the identity

$$\int_{-\infty}^\infty \delta(x)\,dx = 1.$$

This is merely a heuristic characterization. The Dirac delta is not a function in the traditional sense, as no function defined on the real numbers has these properties. The Dirac delta function can be rigorously defined either as a distribution or as a measure.

As a measure
One way to rigorously define the delta function is as a measure, which accepts as an argument a subset A of the real line R, and returns δ(A) = 1 if 0 ∈ A, and δ(A) = 0 otherwise. If the delta function is conceptualized as modeling an idealized point mass at 0, then δ(A) represents the mass contained in the set A. One may then define the integral against δ as the integral of a function against this mass distribution. Formally, the Lebesgue integral provides the necessary analytic device. The Lebesgue integral with respect to the measure δ satisfies

$$\int_{-\infty}^\infty f(x)\,\delta\{dx\} = f(0)$$

for all continuous compactly supported functions f. The measure δ is not absolutely continuous with respect to the Lebesgue measure; in fact, it is a singular measure. Consequently, the delta measure has no Radon–Nikodym derivative: there is no true function for which the property

$$\int_{-\infty}^\infty f(x)\,\delta(x)\,dx = f(0)$$

holds. As a result, the latter notation is a convenient abuse of notation, and not a standard (Riemann or Lebesgue) integral. As a probability measure on R, the delta measure is characterized by its cumulative distribution function, which is the unit step function:[5]

$$H(x) = \begin{cases} 1 & \text{if } x \ge 0 \\ 0 & \text{if } x < 0. \end{cases}$$

This means that H(x) is the integral of the cumulative indicator function 1₍₋∞,ₓ₎ with respect to the measure δ; to wit,

$$H(x) = \int_{\mathbf{R}} \mathbf{1}_{(-\infty,x]}(t)\,\delta\{dt\} = \delta\big((-\infty, x]\big).$$

Thus in particular the integral of the delta function against a continuous function can be properly understood as a Stieltjes integral:

$$\int_{-\infty}^\infty f(x)\,\delta\{dx\} = \int_{-\infty}^\infty f(x)\,dH(x).$$

All higher moments of δ are zero. In particular, its characteristic function and moment-generating function are both equal to one.

As a distribution
In the theory of distributions a generalized function is thought of not as a function itself, but only in relation to how it affects other functions when it is "integrated" against them. In keeping with this philosophy, to define the delta function properly, it is enough to say what the "integral" of the delta function against a sufficiently "good" test function is. If the delta function is already understood as a measure, then the Lebesgue integral of a test function against that measure supplies the necessary integral. A typical space of test functions consists of all smooth functions on R with compact support. As a distribution, the Dirac delta is a linear functional on the space of test functions and is defined by
$\delta[\varphi] = \varphi(0) \qquad (1)$
for every test function φ. For δ to be properly a distribution, it must be "continuous" in a suitable sense. In general, for a linear functional S on the space of test functions to define a distribution, it is necessary and sufficient that, for every positive integer N there is an integer M_N and a constant C_N such that for every test function φ with support contained in [−N, N], one has the inequality

$|S[\varphi]| \le C_N \sum_{k=0}^{M_N} \sup_{x \in [-N,N]} \left|\varphi^{(k)}(x)\right|$
With the δ distribution, one has such an inequality (with C_N = 1) with M_N = 0 for all N. Thus δ is a distribution of order zero. It is, furthermore, a distribution with compact support (the support being {0}). The delta distribution can also be defined in a number of equivalent ways. For instance, it is the distributional derivative of the Heaviside step function. This means that, for every test function φ, one has

$\delta[\varphi] = -\int_{-\infty}^{\infty} \varphi'(x)\,H(x)\,dx$
Intuitively, if integration by parts were permitted, then the latter integral should simplify to
$\int_{-\infty}^{\infty} \varphi(x)\,H'(x)\,dx$
and indeed, a form of integration by parts is permitted for the Stieltjes integral, and in that case one does have
$-\int_{-\infty}^{\infty} \varphi'(x)\,H(x)\,dx = \int_{-\infty}^{\infty} \varphi(x)\,dH(x)$
In the context of measure theory, the Dirac measure gives rise to a distribution by integration. Conversely, equation (1) defines a Daniell integral on the space of all compactly supported continuous functions φ which, by the Riesz representation theorem, can be represented as the Lebesgue integral of φ with respect to some Radon measure.


Generalizations
The delta function can be defined in n-dimensional Euclidean space Rⁿ as the measure such that

$\int_{\mathbf{R}^n} f(x)\,\delta(dx) = f(0)$
for every compactly supported continuous function f. As a measure, the n-dimensional delta function is the product measure of the 1-dimensional delta functions in each variable separately. Thus, formally, with x = (x₁, x₂, ..., x_n), one has

$\delta(x) = \delta(x_1)\,\delta(x_2)\cdots\delta(x_n) \qquad (2)$
The delta function can also be defined in the sense of distributions exactly as above in the one-dimensional case. However, despite widespread use in engineering contexts, (2) should be manipulated with care, since the product of distributions can only be defined under quite narrow circumstances. The notion of a Dirac measure makes sense on any set whatsoever. Thus if X is a set, x₀ ∈ X is a marked point, and Σ is any sigma algebra of subsets of X, then the measure defined on sets A ∈ Σ by

$\delta_{x_0}(A) = \begin{cases} 1 & \text{if } x_0 \in A \\ 0 & \text{if } x_0 \notin A \end{cases}$
is the delta measure or unit mass concentrated at x₀. Another common generalization of the delta function is to a differentiable manifold where most of its properties as a distribution can also be exploited because of the differentiable structure. The delta function on a manifold M centered at the point x₀ ∈ M is defined as the following distribution:

$\delta_{x_0}[\varphi] = \varphi(x_0) \qquad (3)$
for all compactly supported smooth real-valued functions φ on M. A common special case of this construction is when M is an open set in the Euclidean space Rⁿ. On a locally compact Hausdorff space X, the Dirac delta measure concentrated at a point x is the Radon measure associated with the Daniell integral (3) on compactly supported continuous functions φ. At this level of generality, calculus as such is no longer possible; however, a variety of techniques from abstract analysis are available. For instance, the mapping x₀ ↦ δ_{x₀} is a continuous embedding of X into the space of finite Radon measures on X, equipped with its vague topology. Moreover, the convex hull of the image of X under this embedding is dense in the space of probability measures on X.

Properties
Scaling and symmetry
The delta function satisfies the following scaling property for a non-zero scalar α:

$\int_{-\infty}^{\infty} \delta(\alpha x)\,dx = \int_{-\infty}^{\infty} \delta(u)\,\frac{du}{|\alpha|} = \frac{1}{|\alpha|}$

and so

$\delta(\alpha x) = \frac{\delta(x)}{|\alpha|} \qquad (4)$
In particular, the delta function is an even distribution, in the sense that

$\delta(-x) = \delta(x),$

which is homogeneous of degree −1.
Algebraic properties
The distributional product of δ with x is equal to zero:

$x\,\delta(x) = 0$

Conversely, if x f(x) = x g(x), where f and g are distributions, then

$f(x) = g(x) + c\,\delta(x)$
for some constant c.

Translation
The integral of the time-delayed Dirac delta is given by:

$\int_{-\infty}^{\infty} f(t)\,\delta(t-T)\,dt = f(T)$

This is sometimes referred to as the sifting property or the sampling property. The delta function is said to "sift out" the value at t = T. It follows that the effect of convolving a function f(t) with the time-delayed Dirac delta is to time-delay f(t) by the same amount:

$(f * \delta(\cdot - T))(t) = \int_{-\infty}^{\infty} f(\tau)\,\delta(t - T - \tau)\,d\tau = f(t-T)$

(using (4): δ(t − T − τ) = δ(τ − (t − T))). This holds under the precise condition that f be a tempered distribution (see the discussion of the Fourier transform below). As a special case, for instance, we have the identity (understood in the distribution sense)

$\int_{-\infty}^{\infty} \delta(\xi - x)\,\delta(x - \eta)\,dx = \delta(\xi - \eta)$
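The sifting property can be checked numerically by replacing δ(t − T) with a narrow, normalized Gaussian and integrating by quadrature; the following sketch does this for f = cos and T = 1 (the width, grid, and integration window are arbitrary choices, not part of the theory):

```python
import math

def nascent_delta(t, eps):
    """Normalized Gaussian of width eps: integrates to 1, peaks at t = 0."""
    return math.exp(-t * t / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))

def sift(f, T, eps=1e-3, half_width=0.05, n=20001):
    """Approximate the integral of f(t) * delta(t - T) dt by quadrature."""
    h = 2 * half_width / (n - 1)
    total = 0.0
    for i in range(n):
        t = T - half_width + i * h
        total += f(t) * nascent_delta(t - T, eps)
    return total * h

# Integrating cos against the nascent delta "sifts out" the value at t = T.
approx = sift(math.cos, T=1.0)
print(approx, math.cos(1.0))  # the two numbers agree closely
```

As eps shrinks (with the grid refined accordingly), the quadrature approaches f(T) exactly, mirroring the distributional statement.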
Composition with a function


More generally, the delta distribution may be composed with a smooth function g(x) in such a way that the familiar change of variables formula holds, that

$\int_{\mathbf{R}} \delta\bigl(g(x)\bigr)\,f\bigl(g(x)\bigr)\,|g'(x)|\,dx = \int_{g(\mathbf{R})} \delta(u)\,f(u)\,du$

provided that g is a continuously differentiable function with g′ nowhere zero. That is, there is a unique way to assign meaning to the distribution δ∘g so that this identity holds for all compactly supported test functions f. This distribution satisfies δ(g(x)) = 0 if g is nowhere zero, and otherwise if g has a real root at x₀, then

$\delta\bigl(g(x)\bigr) = \frac{\delta(x - x_0)}{|g'(x_0)|}$

It is natural therefore to define the composition δ(g(x)) for continuously differentiable functions g by

$\delta\bigl(g(x)\bigr) = \sum_i \frac{\delta(x - x_i)}{|g'(x_i)|}$

where the sum extends over all roots x_i of g(x), which are assumed to be simple. Thus, for example

$\delta\bigl(x^2 - \alpha^2\bigr) = \frac{1}{2|\alpha|}\bigl[\delta(x + \alpha) + \delta(x - \alpha)\bigr]$
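The root formula for δ(g(x)) can be tested numerically: replace δ by a narrow Gaussian, integrate f(x) δ(g(x)) over a grid, and compare against the sum over roots. A sketch for g(x) = x² − α² (the width, grid, and interval are arbitrary):

```python
import math

def nascent_delta(s, eps):
    """Normalized Gaussian approximation to delta."""
    return math.exp(-s * s / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))

def integral_delta_of_g(f, g, lo, hi, eps=1e-3, n=400001):
    """Quadrature approximation of the integral of f(x) * delta(g(x)) dx."""
    h = (hi - lo) / (n - 1)
    return sum(f(lo + i * h) * nascent_delta(g(lo + i * h), eps)
               for i in range(n)) * h

alpha = 2.0
f = math.exp
lhs = integral_delta_of_g(f, lambda x: x * x - alpha * alpha, -5.0, 5.0)
# Root formula: sum of f(x_i) / |g'(x_i)| over the simple roots +/- alpha.
rhs = (f(alpha) + f(-alpha)) / (2 * alpha)
print(lhs, rhs)  # the two values agree closely
```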

In the integral form the generalized scaling property may be written as

$\int_{-\infty}^{\infty} f(x)\,\delta\bigl(g(x)\bigr)\,dx = \sum_i \frac{f(x_i)}{|g'(x_i)|}$
Properties in n dimensions
The delta distribution in an n-dimensional space satisfies the following scaling property instead:

$\delta(\alpha x) = |\alpha|^{-n}\,\delta(x)$

so that δ is a homogeneous distribution of degree −n. Under any reflection or rotation ρ, the delta function is invariant: δ(ρx) = δ(x). As in the one-variable case, it is possible to define the composition of δ with a bi-Lipschitz function[6] g: Rⁿ → Rⁿ uniquely so that the identity

$\int_{\mathbf{R}^n} \delta\bigl(g(x)\bigr)\,f\bigl(g(x)\bigr)\,|\det g'(x)|\,dx = \int_{g(\mathbf{R}^n)} \delta(u)\,f(u)\,du$
for all compactly supported functions f. Using the coarea formula from geometric measure theory, one can also define the composition of the delta function with a submersion from one Euclidean space to another one of different dimension; the result is a type of current. In the special case of a continuously differentiable function g: Rⁿ → R such that the gradient of g is nowhere zero, the following identity holds

$\int_{\mathbf{R}^n} f(x)\,\delta\bigl(g(x)\bigr)\,dx = \int_{g^{-1}(0)} \frac{f(x)}{|\nabla g(x)|}\,d\sigma(x)$
where the integral on the right is over g⁻¹(0), the (n − 1)-dimensional surface defined by g(x) = 0, with respect to the Minkowski content measure. This is known as a simple layer integral. More generally, if S is a smooth hypersurface of Rⁿ, then we can associate to S the distribution that integrates any compactly supported smooth function g over S:

$\delta_S[g] = \int_S g(s)\,d\sigma(s)$

where σ is the hypersurface measure associated to S. This generalization is associated with the potential theory of simple layer potentials on S. If D is a domain in Rⁿ with smooth boundary S, then δ_S is equal to the normal derivative of the indicator function of D in the distribution sense:

where n is the outward normal. For a proof, see e.g. the article on the surface delta function.


Fourier transform
The delta function is a tempered distribution, and therefore it has a well-defined Fourier transform. Formally, one finds[7]

$\hat{\delta}(\xi) = \int_{-\infty}^{\infty} e^{-2\pi i x \xi}\,\delta(x)\,dx = 1$

Properly speaking, the Fourier transform of a distribution is defined by imposing self-adjointness of the Fourier transform under the duality pairing of tempered distributions with Schwartz functions. Thus $\hat{\delta}$ is defined as the unique tempered distribution satisfying

$\langle \hat{\delta}, \varphi \rangle = \langle \delta, \hat{\varphi} \rangle$

for all Schwartz functions φ. And indeed it follows from this that $\hat{\delta} = 1$. As a result of this identity, the convolution of the delta function with any other tempered distribution S is simply S:

$S * \delta = S$
That is to say that δ is an identity element for the convolution on tempered distributions, and in fact the space of compactly supported distributions under convolution is an associative algebra with identity the delta function. This property is fundamental in signal processing, as convolution with a tempered distribution is a linear time-invariant system, and applying the linear time-invariant system to δ measures its impulse response. The impulse response can be computed to any desired degree of accuracy by choosing a suitable approximation for δ, and once it is known, it characterizes the system completely. See LTI system theory: Impulse response and convolution. The inverse Fourier transform of the tempered distribution f(ξ) = 1 is the delta function. Formally, this is expressed

$\delta(x) = \int_{-\infty}^{\infty} e^{2\pi i x \xi}\,d\xi$
and more rigorously, it follows since

$\langle 1, \hat{f} \rangle = \int_{-\infty}^{\infty} \hat{f}(\xi)\,d\xi = f(0) = \langle \delta, f \rangle$
for all Schwartz functions f. In these terms, the delta function provides a suggestive statement of the orthogonality property of the Fourier kernel on R. Formally, one has

$\int_{-\infty}^{\infty} e^{i 2\pi \xi_1 t}\,\left[e^{i 2\pi \xi_2 t}\right]^*\,dt = \int_{-\infty}^{\infty} e^{-i 2\pi (\xi_2 - \xi_1) t}\,dt = \delta(\xi_2 - \xi_1)$

This is, of course, shorthand for the assertion that the Fourier transform of the tempered distribution

$f(t) = e^{i 2\pi \xi_1 t}$

is

$\hat{f}(\xi_2) = \delta(\xi_1 - \xi_2)$
which again follows by imposing self-adjointness of the Fourier transform. By analytic continuation of the Fourier transform, the Laplace transform of the delta function is found to be

$\int_0^{\infty} \delta(t - a)\,e^{-st}\,dt = e^{-sa}$
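The formal statement that the Fourier transform of δ is the constant 1 can be illustrated with a nascent delta: the transform of a narrow, normalized Gaussian is close to 1 at every frequency, and tends to 1 as the width shrinks. A numerical sketch (the width 1e-3 and the quadrature grid are arbitrary choices):

```python
import cmath
import math

def fourier_of_nascent_delta(xi, eps, n=40001):
    """Numerical Fourier transform (e^{-2 pi i x xi} convention) of the
    Gaussian nascent delta of width eps."""
    half_width = 10 * eps  # the Gaussian is negligible beyond 10 widths
    h = 2 * half_width / (n - 1)
    total = 0.0 + 0.0j
    for i in range(n):
        x = -half_width + i * h
        eta = math.exp(-x * x / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))
        total += eta * cmath.exp(-2j * math.pi * x * xi)
    return total * h

eps = 1e-3
values = [fourier_of_nascent_delta(xi, eps) for xi in (0.0, 1.0, 5.0)]
print(values)  # each value is close to 1 + 0j, matching delta-hat = 1
```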

Distributional derivatives
The distributional derivative of the Dirac delta distribution is the distribution δ′ defined on compactly supported smooth test functions φ by

$\delta'[\varphi] = -\delta[\varphi'] = -\varphi'(0)$

The first equality here is a kind of integration by parts, for if δ were a true function then

$\int_{-\infty}^{\infty} \delta'(x)\,\varphi(x)\,dx = -\int_{-\infty}^{\infty} \delta(x)\,\varphi'(x)\,dx$

The k-th derivative of δ is defined similarly as the distribution given on test functions by

$\delta^{(k)}[\varphi] = (-1)^k\,\varphi^{(k)}(0)$

In particular, δ is an infinitely differentiable distribution. The first derivative of the delta function is the distributional limit of the difference quotients:

$\delta'(x) = \lim_{h \to 0} \frac{\delta(x + h) - \delta(x)}{h}$
More properly, one has

$\delta' = \lim_{h \to 0} \frac{1}{h}\,(\tau_h \delta - \delta)$

where τ_h is the translation operator, defined on functions by τ_h φ(x) = φ(x + h), and on a distribution S by (τ_h S)[φ] = S[τ_{−h} φ]. In the theory of electromagnetism, the first derivative of the delta function represents a point magnetic dipole situated at the origin. Accordingly, it is referred to as a dipole or the doublet function. The derivative of the delta function satisfies a number of basic properties, including:

$\delta'(-x) = -\delta'(x)$
$x\,\delta'(x) = -\delta(x)$[8]
Furthermore, the convolution of δ′ with a compactly supported smooth function f is

$\delta' * f = f'$
which follows from the properties of the distributional derivative of a convolution.
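The defining property δ′[φ] = −φ′(0) can be checked by differentiating a Gaussian nascent delta in closed form and pairing it with a test function by quadrature; the width and grid below are arbitrary choices:

```python
import math

def nascent_delta_prime(x, eps):
    """Derivative of the normalized Gaussian nascent delta of width eps."""
    g = math.exp(-x * x / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))
    return -x / (eps * eps) * g

def pair_with(f, eps=1e-3, half_width=0.02, n=40001):
    """Quadrature approximation of the integral of eta_eps'(x) * f(x) dx."""
    h = 2 * half_width / (n - 1)
    return sum(nascent_delta_prime(-half_width + i * h, eps)
               * f(-half_width + i * h) for i in range(n)) * h

# For f(x) = sin(3x) the pairing should approach -f'(0) = -3.
approx = pair_with(lambda x: math.sin(3 * x))
print(approx)  # close to -3
```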

Higher dimensions
More generally, on an open set U in the n-dimensional Euclidean space Rⁿ, the Dirac delta distribution centered at a point a ∈ U is defined by

$\delta_a[\varphi] = \varphi(a)$

for all φ ∈ S(U), the space of all smooth compactly supported functions on U. If α = (α₁, ..., α_n) is any multi-index and ∂^α denotes the associated mixed partial derivative operator, then the αth derivative ∂^α δ_a of δ_a is given by

$\partial^\alpha \delta_a[\varphi] = (-1)^{|\alpha|}\,\partial^\alpha \varphi(a)$

That is, the αth derivative of δ_a is the distribution whose value on any test function φ is the αth derivative of φ at a (with the appropriate positive or negative sign). The first partial derivatives of the delta function are thought of as double layers along the coordinate planes. More generally, the normal derivative of a simple layer supported on a surface is a double layer supported on that surface, and represents a laminar magnetic monopole. Higher derivatives of the delta function are known in physics as multipoles.

Higher derivatives enter into mathematics naturally as the building blocks for the complete structure of distributions with point support. If S is any distribution on U supported on the set {a} consisting of a single point, then there is an integer m and coefficients c_α such that

$S = \sum_{|\alpha| \le m} c_\alpha\,\partial^\alpha \delta_a$

Representations of the delta function


The delta function can be viewed as the limit of a sequence of functions

$\delta(x) = \lim_{\varepsilon \to 0^+} \eta_\varepsilon(x)$

where η_ε(x) is sometimes called a nascent delta function. This limit is meant in a weak sense: either that

$\lim_{\varepsilon \to 0^+} \int_{-\infty}^{\infty} \eta_\varepsilon(x)\,f(x)\,dx = f(0) \qquad (5)$
for all continuous functions f having compact support, or that this limit holds for all smooth functions f with compact support. The difference between these two slightly different modes of weak convergence is often subtle: the former is convergence in the vague topology of measures, and the latter is convergence in the sense of distributions.

Approximations to the identity


Typically a nascent delta function can be constructed in the following manner. Let η be an absolutely integrable function on R of total integral 1, and define

$\eta_\varepsilon(x) = \varepsilon^{-1}\,\eta\!\left(\frac{x}{\varepsilon}\right)$

In n dimensions, one uses instead the scaling

$\eta_\varepsilon(x) = \varepsilon^{-n}\,\eta\!\left(\frac{x}{\varepsilon}\right)$
Then a simple change of variables shows that η_ε also has integral 1. One shows easily that (5) holds for all continuous compactly supported functions f, and so η_ε converges weakly to δ in the sense of measures. The η_ε constructed in this way are known as an approximation to the identity. This terminology is because the space L¹(R) of absolutely integrable functions is closed under the operation of convolution of functions: f ∗ g ∈ L¹(R) whenever f and g are in L¹(R). However, there is no identity in L¹(R) for the convolution product: no element h such that f ∗ h = f for all f. Nevertheless, the sequence η_ε does approximate such an identity in the sense that f ∗ η_ε → f as ε → 0. This limit holds in the sense of mean convergence (convergence in L¹). Further conditions on η_ε, for instance that it be a mollifier associated to a compactly supported function,[9] are needed to ensure pointwise convergence almost everywhere. If the initial η = η₁ is itself smooth and compactly supported then the sequence is called a mollifier. The standard mollifier is obtained by choosing η to be a suitably normalized bump function, for instance

$\eta(x) = \begin{cases} C\,e^{-\frac{1}{1 - x^2}} & \text{if } |x| < 1 \\ 0 & \text{if } |x| \ge 1 \end{cases}$

In some situations such as numerical analysis, a piecewise linear approximation to the identity is desirable. This can be obtained by taking η₁ to be a hat function. With this choice of η₁, one has

$\eta_\varepsilon(x) = \varepsilon^{-1}\,\max\!\left(1 - \left|\frac{x}{\varepsilon}\right|,\,0\right)$
which are all continuous and compactly supported, although not smooth and so not a mollifier.
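The hat-function approximation to the identity is easy to experiment with: convolving a function with η_ε returns values close to the function itself once ε is small. A sketch (the test function, point, and grid are arbitrary choices):

```python
def hat_nascent_delta(x, eps):
    """Piecewise linear 'hat' nascent delta: triangle of base 2*eps, area 1."""
    return max(1.0 - abs(x / eps), 0.0) / eps

def convolve_at(f, x, eps, n=2001):
    """Approximate (f * eta_eps)(x) by quadrature over the support [-eps, eps]."""
    h = 2 * eps / (n - 1)
    return sum(f(x - t) * hat_nascent_delta(t, eps)
               for t in (-eps + i * h for i in range(n))) * h

f = lambda x: x ** 3 - 2 * x + 1
for eps in (0.5, 0.1, 0.01):
    print(eps, convolve_at(f, 1.0, eps), f(1.0))  # approaches f(1.0) = 0
```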


Probabilistic considerations
In the context of probability theory, it is natural to impose the additional condition that the initial η₁ in an approximation to the identity should be positive, as such a function then represents a probability distribution. Convolution with a probability distribution is sometimes favorable because it does not result in overshoot or undershoot, as the output is a convex combination of the input values, and thus falls between the maximum and minimum of the input function. Taking η₁ to be any probability distribution at all, and letting η_ε(x) = η₁(x/ε)/ε as above will give rise to an approximation to the identity. In general this converges more rapidly to a delta function if, in addition, η has mean 0 and has small higher moments. For instance, if η₁ is the uniform distribution on [−1/2, 1/2], also known as the rectangular function, then:

$\eta_\varepsilon(x) = \frac{1}{\varepsilon}\,\operatorname{rect}\!\left(\frac{x}{\varepsilon}\right)$

Another example is with the Wigner semicircle distribution

$\eta_\varepsilon(x) = \begin{cases} \frac{2}{\pi \varepsilon^2}\,\sqrt{\varepsilon^2 - x^2} & \text{if } |x| \le \varepsilon \\ 0 & \text{if } |x| > \varepsilon \end{cases}$
This is continuous and compactly supported, but not a mollifier because it is not smooth.

Semigroups
Nascent delta functions often arise as convolution semigroups. This amounts to the further constraint that the convolution of η_ε with η_δ must satisfy

$\eta_\varepsilon * \eta_\delta = \eta_{\varepsilon + \delta}$

for all ε, δ > 0. Convolution semigroups in L¹ that form a nascent delta function are always an approximation to the identity in the above sense; however, the semigroup condition is quite a strong restriction. In practice, semigroups approximating the delta function arise as fundamental solutions or Green's functions to physically motivated elliptic or parabolic partial differential equations. In the context of applied mathematics, semigroups arise as the output of a linear time-invariant system. Abstractly, if A is a linear operator acting on functions of x, then a convolution semigroup arises by solving the initial value problem

$\frac{\partial}{\partial t}\,\eta(t, x) = A\,\eta(t, x), \qquad \lim_{t \to 0^+} \eta(t, x) = \delta(x)$

in which the limit is as usual understood in the weak sense. Setting η_ε(x) = η(ε, x) gives the associated nascent delta function. Some examples of physically important convolution semigroups arising from such a fundamental solution include the following.

The heat kernel

The heat kernel, defined by

$\eta_\varepsilon(x) = \frac{1}{\sqrt{2\pi\varepsilon}}\,e^{-\frac{x^2}{2\varepsilon}}$

represents the temperature in an infinite wire at time t > 0, if a unit of heat energy is stored at the origin of the wire at time t = 0. This semigroup evolves according to the one-dimensional heat equation:

$\frac{\partial \eta}{\partial t} = \frac{1}{2}\,\frac{\partial^2 \eta}{\partial x^2}$
In probability theory, η_ε(x) is a normal distribution of variance ε and mean 0. It represents the probability density at time t = ε of the position of a particle starting at the origin following a standard Brownian motion. In this context, the semigroup condition is then an expression of the Markov property of Brownian motion. In higher-dimensional Euclidean space Rⁿ, the heat kernel is

$\eta_\varepsilon(x) = \frac{1}{(2\pi\varepsilon)^{n/2}}\,e^{-\frac{|x|^2}{2\varepsilon}}$

and has the same physical interpretation, mutatis mutandis. It also represents a nascent delta function in the sense that η_ε → δ in the distribution sense as ε → 0.

The Poisson kernel

The Poisson kernel

$\eta_\varepsilon(x) = \frac{1}{\pi}\,\frac{\varepsilon}{\varepsilon^2 + x^2}$

is the fundamental solution of the Laplace equation in the upper half-plane. It represents the electrostatic potential in a semi-infinite plate whose potential along the edge is held fixed at the delta function. The Poisson kernel is also closely related to the Cauchy distribution. This semigroup evolves according to the equation

$\frac{\partial \eta}{\partial t} = -\left(-\frac{\partial^2}{\partial x^2}\right)^{1/2} \eta(t, x)$

where the operator is rigorously defined as the Fourier multiplier

$\widehat{\left(-\frac{\partial^2}{\partial x^2}\right)^{1/2} f}\,(\xi) = |2\pi\xi|\,\hat{f}(\xi)$
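The semigroup law for the heat kernel above, η_s ∗ η_t = η_{s+t}, amounts to the fact that convolving two centered Gaussians adds their variances; this can be confirmed numerically (the times, evaluation point, and grid are arbitrary choices):

```python
import math

def heat_kernel(x, t):
    """One-dimensional heat kernel: centered Gaussian of variance t."""
    return math.exp(-x * x / (2 * t)) / math.sqrt(2 * math.pi * t)

def convolve_at(s, t, x, half_width=20.0, n=40001):
    """Quadrature approximation of (eta_s * eta_t)(x)."""
    h = 2 * half_width / (n - 1)
    return sum(heat_kernel(y, s) * heat_kernel(x - y, t)
               for y in (-half_width + i * h for i in range(n))) * h

s, t, x = 0.7, 1.3, 0.5
print(convolve_at(s, t, x), heat_kernel(x, s + t))  # the two values agree
```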
Oscillatory integrals
In areas of physics such as wave propagation and wave mechanics, the equations involved are hyperbolic and so may have more singular solutions. As a result, the nascent delta functions that arise as fundamental solutions of the associated Cauchy problems are generally oscillatory integrals. An example, which comes from a solution of the Euler-Tricomi equation of transonic gas dynamics, is the rescaled Airy function

$\eta_\varepsilon(x) = \varepsilon^{-1/3}\,\operatorname{Ai}\!\left(\frac{x}{\varepsilon^{1/3}}\right)$
Although it is easy to see, using the Fourier transform, that this generates a semigroup in some sense, it is not absolutely integrable and so cannot define a semigroup in the above strong sense. Many nascent delta functions constructed as oscillatory integrals only converge in the sense of distributions (an example is the Dirichlet kernel below), rather than in the sense of measures. Another example is the Cauchy problem for the wave equation in R1+1:

The solution u represents the displacement from equilibrium of an infinite elastic string, with an initial disturbance at the origin. Other approximations to the identity of this kind include the sinc function (used widely in electronics and telecommunications)

$\eta_\varepsilon(x) = \frac{1}{\pi x}\,\sin\!\left(\frac{x}{\varepsilon}\right)$
and the Bessel function


Plane wave decomposition


One approach to the study of a linear partial differential equation where L is a differential operator on Rn, is to seek first a fundamental solution, which is a solution of the equation

When L is particularly simple, this problem can often be resolved using the Fourier transform directly (as in the case of the Poisson kernel and heat kernel already mentioned). For more complicated operators, it is sometimes easier first to consider an equation of the form

where h is a plane wave function, meaning that it has the form

for some vector . Such an equation can be resolved (if the coefficients of L are analytic functions) by the CauchyKovalevskaya theorem or (if the coefficients of L are constant) by quadrature. So, if the delta function can be decomposed into plane waves, then one can in principle solve linear partial differential equations. Such a decomposition of the delta function into plane waves was part of a general technique first introduced essentially by Johann Radon, and then developed in this form by Fritz John (1955).[10] Choose k so that n+k is an even integer, and for a real number s, put

Then δ is obtained by applying a power of the Laplacian to the integral with respect to the unit sphere measure dω of g(x · ξ) for ξ in the unit sphere Sⁿ⁻¹:

$\delta(x) = \Delta_x^{(n+k)/2} \int_{S^{n-1}} g(x \cdot \xi)\,d\omega_\xi$
The Laplacian here is interpreted as a weak derivative, so that this equation is taken to mean that, for any test function φ,

The result follows from the formula for the Newtonian potential (the fundamental solution of Poisson's equation). This is essentially a form of the inversion formula for the Radon transform, because it recovers the value of (x) from its integrals over hyperplanes. For instance, if n is odd and k=1, then the integral on the right hand side is

where R(ξ, p) is the Radon transform of φ:

$R\varphi(\xi, p) = \int_{x \cdot \xi = p} \varphi(x)\,d\sigma(x)$
An alternative equivalent expression of the plane wave decomposition, from Gel'fand & Shilov (1966-1968, I, 3.10), is

for n even, and


for n odd.

Fourier kernels
In the study of Fourier series, a major question consists of determining whether and in what sense the Fourier series associated with a periodic function converges to the function. The nth partial sum of the Fourier series of a function f of period 2π is defined by convolution (on the interval [−π, π]) with the Dirichlet kernel:

$D_N(x) = \sum_{n=-N}^{N} e^{inx} = \frac{\sin\left(\left(N + \tfrac{1}{2}\right)x\right)}{\sin(x/2)}$

Thus,

$s_N(f)(x) = (D_N * f)(x) = \sum_{n=-N}^{N} a_n\,e^{inx}$

where

$a_n = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(y)\,e^{-iny}\,dy$

A fundamental result of elementary Fourier series states that the Dirichlet kernel tends to a multiple of the delta function as N → ∞. This is interpreted in the distribution sense, that

$s_N(f)(0) = \int_{-\pi}^{\pi} D_N(x)\,f(x)\,\frac{dx}{2\pi} \to f(0)$

for every compactly supported smooth function f. Thus, formally one has

$\delta(x) = \frac{1}{2\pi} \sum_{n=-\infty}^{\infty} e^{inx}$
on the interval [−π, π]. In spite of this, the result does not hold for all compactly supported continuous functions: that is, D_N does not converge weakly in the sense of measures. The lack of convergence of the Fourier series has led to the introduction of a variety of summability methods in order to produce convergence. The method of Cesàro summation leads to the Fejér kernel

$F_N(x) = \frac{1}{N} \sum_{n=0}^{N-1} D_n(x) = \frac{1}{N} \left( \frac{\sin(Nx/2)}{\sin(x/2)} \right)^2$

The Fejér kernels tend to the delta function in the stronger sense that[11]

$\int_{-\pi}^{\pi} F_N(x)\,f(x)\,\frac{dx}{2\pi} \to f(0)$

for every compactly supported continuous function f. The implication is that the Fourier series of any continuous function is Cesàro summable to the value of the function at every point.
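The Fejér-kernel convergence can be observed numerically: for the trigonometric polynomial f(x) = 2 + cos x, the Cesàro means at 0 are exactly 3 − 1/N, which approaches f(0) = 3. A sketch (the test function and quadrature grid are arbitrary choices):

```python
import math

def fejer_kernel(x, N):
    """Fejer kernel F_N(x) = (1/N) * (sin(N x / 2) / sin(x / 2))**2."""
    s = math.sin(x / 2)
    if abs(s) < 1e-12:
        return float(N)  # limiting value at x = 0
    return (math.sin(N * x / 2) / s) ** 2 / N

def cesaro_mean_at_zero(f, N, n=100001):
    """Quadrature for (1/2 pi) * integral of F_N(x) * f(x) over [-pi, pi]."""
    h = 2 * math.pi / (n - 1)
    return sum(fejer_kernel(-math.pi + i * h, N) * f(-math.pi + i * h)
               for i in range(n)) * h / (2 * math.pi)

# For f(x) = 2 + cos(x), the Cesaro mean at 0 is exactly 3 - 1/N.
results = {N: cesaro_mean_at_zero(lambda x: 2 + math.cos(x), N)
           for N in (10, 50, 200)}
print(results)  # values approach f(0) = 3 as N grows
```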


Hilbert space theory


The Dirac delta distribution is a densely defined unbounded linear functional on the Hilbert space L² of square integrable functions. Indeed, smooth compactly supported functions are dense in L², and the action of the delta distribution on such functions is well-defined. In many applications, it is possible to identify subspaces of L² and to give a stronger topology on which the delta function defines a bounded linear functional.

Sobolev spaces

The Sobolev embedding theorem for Sobolev spaces on the real line R implies that any square-integrable function f such that

$\|f\|_{H^1}^2 = \int_{-\infty}^{\infty} |\hat{f}(\xi)|^2 \left(1 + |\xi|^2\right) d\xi < \infty$

is automatically continuous, and satisfies in particular

$|\delta[f]| = |f(0)| \le C\,\|f\|_{H^1}$

Thus δ is a bounded linear functional on the Sobolev space H¹. Equivalently δ is an element of the continuous dual space H⁻¹ of H¹. More generally, in n dimensions, one has δ ∈ H^{−s}(Rⁿ) provided s > n/2.

Spaces of holomorphic functions

In complex analysis, the delta function enters via Cauchy's integral formula, which asserts that if D is a domain in the complex plane with smooth boundary, then

$f(z) = \frac{1}{2\pi i} \oint_{\partial D} \frac{f(\zeta)}{\zeta - z}\,d\zeta, \qquad z \in D$

for all holomorphic functions f in D that are continuous on the closure of D. As a result, the delta function δ_z is represented on this class of holomorphic functions by the Cauchy integral:

$\delta_z[f] = f(z) = \frac{1}{2\pi i} \oint_{\partial D} \frac{f(\zeta)}{\zeta - z}\,d\zeta$
More generally, let H²(∂D) be the Hardy space consisting of the closure in L²(∂D) of all holomorphic functions in D continuous up to the boundary of D. Then functions in H²(∂D) uniquely extend to holomorphic functions in D, and the Cauchy integral formula continues to hold. In particular for z ∈ D, the delta function δ_z is a continuous linear functional on H²(∂D). This is a special case of the situation in several complex variables in which, for smooth domains D, the Szegő kernel plays the role of the Cauchy integral.

Resolutions of the identity

Given a complete orthonormal basis set of functions {φ_n} in a separable Hilbert space, for example, the normalized eigenvectors of a compact self-adjoint operator, any vector f can be expressed as:

$f = \sum_{n=1}^{\infty} \alpha_n\,\varphi_n$

The coefficients {α_n} are found as:

$\alpha_n = \langle \varphi_n, f \rangle$

which may be represented by the notation:

$\alpha_n = \varphi_n^\dagger f$

a form of the bra-ket notation of Dirac.[12] Adopting this notation, the expansion of f takes the dyadic form:

$f = \sum_{n=1}^{\infty} \varphi_n \left(\varphi_n^\dagger f\right)$

Letting I denote the identity operator on the Hilbert space, the expression

$I = \sum_{n=1}^{\infty} \varphi_n\,\varphi_n^\dagger$

is called a resolution of the identity. When the Hilbert space is the space L²(D) of square-integrable functions on a domain D, the quantity:

$\varphi_n(x)\,\varphi_n^*(\xi)$

is an integral operator, and the expression for f can be rewritten as:

$f(x) = \sum_{n=1}^{\infty} \int_D \varphi_n(x)\,\varphi_n^*(\xi)\,f(\xi)\,d\xi$

The right-hand side converges to f in the L² sense. It need not hold in a pointwise sense, even when f is a continuous function. Nevertheless, it is common to abuse notation and write

$f(x) = \int_D \delta(x - \xi)\,f(\xi)\,d\xi$

resulting in the representation of the delta function:

$\delta(x - \xi) = \sum_{n=1}^{\infty} \varphi_n(x)\,\varphi_n^*(\xi)$
With a suitable rigged Hilbert space (Φ, L²(D), Φ*) where Φ ⊆ L²(D) contains all compactly supported smooth functions, this summation may converge in Φ*, depending on the properties of the basis φ_n. In most cases of practical interest, the orthonormal basis comes from an integral or differential operator, in which case the series converges in the distribution sense.

Infinitesimal delta functions


Cauchy used an infinitesimal α to write down a unit impulse, an infinitely tall and narrow Dirac-type delta function δ_α satisfying ∫ F(x) δ_α(x) dx = F(0) in a number of articles in 1827.[13] Cauchy defined an infinitesimal in Cours d'Analyse (1827) in terms of a sequence tending to zero. Namely, such a null sequence becomes an infinitesimal in Cauchy's and Lazare Carnot's terminology. Modern set-theoretic approaches allow one to define infinitesimals via the ultrapower construction, where a null sequence becomes an infinitesimal in the sense of an equivalence class modulo a relation defined in terms of a suitable ultrafilter. The article by Yamashita (2007) contains a bibliography on modern Dirac delta functions in the context of an infinitesimal-enriched continuum provided by the hyperreals. Here the Dirac delta can be given by an actual function, having the property that for every real function F one has ∫ F(x) δ_α(x) dx = F(0), as anticipated by Fourier and Cauchy.

Dirac comb
A so-called uniform "pulse train" of Dirac delta measures, which is known as a Dirac comb, or as the Shah distribution, creates a sampling function, often used in digital signal processing (DSP) and discrete time signal analysis. The Dirac comb is given as the infinite sum, whose limit is understood in the distribution sense,

$\Delta(x) = \sum_{n=-\infty}^{\infty} \delta(x - n)$
which is a sequence of point masses at each of the integers.

A Dirac comb is an infinite series of Dirac delta functions spaced at intervals of T

Up to an overall normalizing constant, the Dirac comb is equal to its own Fourier transform. This is significant because if f is any Schwartz function, then the periodization of f is given by the convolution

$(f * \Delta)(x) = \sum_{n=-\infty}^{\infty} f(x - n)$

In particular,

$\sum_{n=-\infty}^{\infty} f(n) = \sum_{k=-\infty}^{\infty} \hat{f}(k)$

is precisely the Poisson summation formula.
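The Poisson summation formula can be verified directly for a Gaussian, whose Fourier transform is known in closed form; the parameter a and the truncation range below are arbitrary choices:

```python
import math

def gaussian(x, a):
    """f(x) = exp(-pi a x^2); under the e^{-2 pi i x xi} convention its
    Fourier transform is (1/sqrt(a)) * exp(-pi xi^2 / a)."""
    return math.exp(-math.pi * a * x * x)

def gaussian_hat(xi, a):
    return math.exp(-math.pi * xi * xi / a) / math.sqrt(a)

a = 0.4
lhs = sum(gaussian(n, a) for n in range(-50, 51))      # sum of f over integers
rhs = sum(gaussian_hat(k, a) for k in range(-50, 51))  # sum of f-hat over integers
print(lhs, rhs)  # the two sums agree, as the Poisson summation formula asserts
```

Both sums converge so fast that truncating at ±50 already gives agreement to machine precision.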

SokhotskiPlemelj theorem
The Sokhotski-Plemelj theorem, important in quantum mechanics, relates the delta function to the distribution p.v. 1/x, the Cauchy principal value of the function 1/x, defined by

$\left\langle \operatorname{p.v.} \frac{1}{x}, \varphi \right\rangle = \lim_{\varepsilon \to 0^+} \int_{|x| > \varepsilon} \frac{\varphi(x)}{x}\,dx$

Sokhotsky's formula states that

$\lim_{\varepsilon \to 0^+} \frac{1}{x \pm i\varepsilon} = \operatorname{p.v.} \frac{1}{x} \mp i\pi\delta(x)$

Here the limit is understood in the distribution sense, that for all compactly supported smooth functions f,

$\lim_{\varepsilon \to 0^+} \int_{-\infty}^{\infty} \frac{f(x)}{x \pm i\varepsilon}\,dx = \mp i\pi f(0) + \operatorname{p.v.} \int_{-\infty}^{\infty} \frac{f(x)}{x}\,dx$
Relationship to the Kronecker delta


The Kronecker delta δ_ij is the quantity defined by

$\delta_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \ne j \end{cases}$

for all integers i, j. This function then satisfies the following analog of the sifting property: if $(a_i)_{i \in \mathbf{Z}}$ is any doubly infinite sequence, then

$\sum_{i=-\infty}^{\infty} a_i\,\delta_{ik} = a_k$

Similarly, for any real or complex valued continuous function f on R, the Dirac delta satisfies the sifting property

$\int_{-\infty}^{\infty} \delta(x - x_0)\,f(x)\,dx = f(x_0)$
This exhibits the Kronecker delta function as a discrete analog of the Dirac delta function.
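The discrete sifting property is trivial to demonstrate in code (a finite sequence stands in for the doubly infinite one):

```python
def kronecker(i, j):
    """Kronecker delta: 1 if i == j, else 0."""
    return 1 if i == j else 0

# Discrete sifting property: summing a_i * delta_{ik} over i picks out a_k.
a = {i: i * i for i in range(-5, 6)}  # a (here finite) doubly indexed sequence
k = 3
sifted = sum(a[i] * kronecker(i, k) for i in a)
print(sifted, a[k])  # both are 9
```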


Applications
Probability theory
In probability theory and statistics, the Dirac delta function is often used to represent a discrete distribution, or a partially discrete, partially continuous distribution, using a probability density function (which is normally used to represent fully continuous distributions). For example, the probability density function f(x) of a discrete distribution consisting of points x = {x₁, ..., x_n}, with corresponding probabilities p₁, ..., p_n, can be written as

$f(x) = \sum_{i=1}^{n} p_i\,\delta(x - x_i)$

As another example, consider a distribution in which 6/10 of the time returns a standard normal distribution, and 4/10 of the time returns exactly the value 3.5 (i.e. a partly continuous, partly discrete mixture distribution). The density function of this distribution can be written as

$f(x) = 0.6\,\frac{1}{\sqrt{2\pi}}\,e^{-\frac{x^2}{2}} + 0.4\,\delta(x - 3.5)$
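The delta term in such a mixture density carries the discrete probability mass, which shows up in simulation as an atom: a fixed fraction of samples land exactly on 3.5. A sketch (sample size and seed are arbitrary choices):

```python
import random

random.seed(0)

def sample_mixture():
    """Draw from the mixture: standard normal w.p. 0.6, the point 3.5 w.p. 0.4.
    The delta(x - 3.5) term in the density is exactly this discrete atom."""
    if random.random() < 0.6:
        return random.gauss(0.0, 1.0)
    return 3.5

draws = [sample_mixture() for _ in range(100000)]
mass_at_3_5 = sum(1 for x in draws if x == 3.5) / len(draws)
print(mass_at_3_5)  # close to 0.4, the coefficient of delta(x - 3.5)
```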
The delta function is also used in a completely different way to represent the local time of a diffusion process (like Brownian motion). The local time of a stochastic process B(t) is given by

$\ell(x, t) = \int_0^t \delta(x - B(s))\,ds$

and represents the amount of time that the process spends at the point x in the range of the process. More precisely, in one dimension this integral can be written

$\ell(x, t) = \lim_{\varepsilon \to 0^+} \frac{1}{2\varepsilon} \int_0^t \mathbf{1}_{[x - \varepsilon,\, x + \varepsilon]}(B(s))\,ds$

where 1_{[x−ε, x+ε]} is the indicator function of the interval [x−ε, x+ε].

Quantum mechanics
We give an example of how the delta function is expedient in quantum mechanics. The wave function of a particle gives the probability amplitude of finding a particle within a given region of space. Wave functions are assumed to be elements of the Hilbert space L² of square-integrable functions, and the total probability of finding a particle within a given interval is the integral of the magnitude of the wave function squared over the interval. A set {φ_n} of wave functions is orthonormal if they are normalized by

$\langle \varphi_n \mid \varphi_m \rangle = \delta_{nm}$

where δ here refers to the Kronecker delta. A set of orthonormal wave functions is complete in the space of square-integrable functions if any wave function ψ can be expressed as a combination of the φ_n:

$\psi = \sum_n c_n\,\varphi_n$

with $c_n = \langle \varphi_n \mid \psi \rangle$. Complete orthonormal systems of wave functions appear naturally as the eigenfunctions of the Hamiltonian (of a bound system) in quantum mechanics that measures the energy levels, which are called the eigenvalues. The set of eigenvalues, in this case, is known as the spectrum of the Hamiltonian. In bra-ket notation, as above, this equality implies the resolution of the identity:

$I = \sum_n |\varphi_n\rangle\langle\varphi_n|$
Here the eigenvalues are assumed to be discrete, but the set of eigenvalues of an observable may be continuous rather than discrete. An example is the position observable, Qψ(x) = xψ(x). The spectrum of the position (in one dimension) is the entire real line, and is called a continuous spectrum. However, unlike the Hamiltonian, the position operator lacks proper eigenfunctions. The conventional way to overcome this shortcoming is to widen the class of available functions by allowing distributions as well: that is, to replace the Hilbert space of quantum mechanics by an appropriate rigged Hilbert space. In this context, the position operator has a complete set of eigen-distributions, labeled by the points y of the real line, given by

$\varphi_y(x) = \delta(x - y)$

The eigenfunctions of position are denoted by $\varphi_y = |y\rangle$ in Dirac notation, and are known as position eigenstates.

Similar considerations apply to the eigenstates of the momentum operator, or indeed any other self-adjoint unbounded operator P on the Hilbert space, provided the spectrum of P is continuous and there are no degenerate eigenvalues. In that case, there is a set Ω of real numbers (the spectrum), and a collection φ_y of distributions indexed by the elements of Ω, such that

$P\,\varphi_y = y\,\varphi_y$

That is, φ_y are the eigenvectors of P. If the eigenvectors are normalized so that

$\langle \varphi_y, \varphi_{y'} \rangle = \delta(y - y')$

in the distribution sense, then for any test function ψ,

$\psi(x) = \int_\Omega c(y)\,\varphi_y(x)\,dy$

where

$c(y) = \langle \psi, \varphi_y \rangle$

That is, as in the discrete case, there is a resolution of the identity

$I = \int_\Omega |\varphi_y\rangle\,\langle\varphi_y|\,dy$
where the operator-valued integral is again understood in the weak sense. If the spectrum of P has both continuous and discrete parts, then the resolution of the identity involves a summation over the discrete spectrum and an integral over the continuous spectrum. The delta function also has many more specialized applications in quantum mechanics, such as the delta potential models for a single and double potential well.

Structural mechanics
The delta function can be used in structural mechanics to describe transient loads or point loads acting on structures. The governing equation of a simple mass-spring system excited by a sudden force impulse I at time t = 0 can be written

$m\,\ddot{\xi}(t) + k\,\xi(t) = I\,\delta(t)$

where m is the mass, ξ the deflection and k the spring constant. As another example, the equation governing the static deflection of a slender beam is, according to Euler-Bernoulli theory,

$EI\,\frac{d^4 w}{dx^4} = q(x)$

where EI is the bending stiffness of the beam, w the deflection, x the spatial coordinate and q(x) the load distribution. If a beam is loaded by a point force F at x = x₀, the load distribution is written

$q(x) = F\,\delta(x - x_0)$

As integration of the delta function results in the Heaviside step function, it follows that the static deflection of a slender beam subject to multiple point loads is described by a set of piecewise polynomials. Also a point moment acting on a beam can be described by delta functions. Consider two opposing point forces F at a distance d apart. They then produce a moment M = Fd acting on the beam. Now, let the distance d approach the limit zero, while M is kept constant. The load distribution, assuming a clockwise moment acting at x = 0, is written

$q(x) = \lim_{d \to 0} \left( \frac{M}{d}\,\delta(x) - \frac{M}{d}\,\delta(x - d) \right) = M\,\delta'(x)$
Dirac delta function

73

Point moments can thus be represented by the derivative of the delta function. Integration of the beam equation again results in piecewise polynomial deflection.
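The impulse response of the mass–spring system above can be checked numerically. Since the delta impulse transfers momentum I, the response is \xi(t) = (I/(m\omega))\sin(\omega t) with \omega = \sqrt{k/m}. The sketch below approximates \delta(t) by a narrow rectangular pulse of height I/\varepsilon and width \varepsilon, integrates the equation of motion with a classical fourth-order Runge–Kutta step, and compares against the closed form; all parameter values are illustrative choices.

```python
import numpy as np

# m*xi'' + k*xi = I*delta(t), with xi = 0 for t < 0.  The impulse transfers
# momentum I, so xi(t) = (I/(m*omega)) * sin(omega*t), omega = sqrt(k/m).
# delta(t) is replaced by a rectangular pulse of width eps and area 1.

m, k, I = 2.0, 8.0, 3.0
omega = np.sqrt(k / m)
eps = 1e-4                                  # pulse width approximating delta(t)

def force(t):
    return I / eps if 0.0 <= t < eps else 0.0

def rhs(t, s):
    xi, v = s
    return np.array([v, (force(t) - k * xi) / m])

def rk4(t, s, h):
    k1 = rhs(t, s)
    k2 = rhs(t + h / 2, s + h / 2 * k1)
    k3 = rhs(t + h / 2, s + h / 2 * k2)
    k4 = rhs(t + h, s + h * k3)
    return s + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

s = np.array([0.0, 0.0])                    # at rest before the impulse
h_pulse, h_free = 1e-6, 1e-3
for i in range(100):                        # resolve the pulse finely
    s = rk4(i * h_pulse, s, h_pulse)
t = eps                                     # pulse is over
for i in range(1000):                       # free oscillation up to t ~ 1
    s = rk4(t + i * h_free, s, h_free)
t += 1000 * h_free

analytic = (I / (m * omega)) * np.sin(omega * t)
print(s[0], analytic)                       # deflections agree closely
```

The small residual discrepancy comes from the finite pulse width; shrinking eps (with a correspondingly smaller h_pulse) drives the numerical deflection toward the closed-form impulse response.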

Notes
[1] , p. 58.
[2] The original French text can be found here (http://books.google.com/books?id=TDQJAAAAIAAJ&pg=PA525&dq="c'est-à-dire+qu'on+a+l'équation"&hl=en&sa=X&ei=SrC7T9yKBorYiALVnc2oDg&sqi=2&ved=0CEAQ6AEwAg#v=onepage&q="c'est-à-dire qu'on a l'équation"&f=false).
[3] See, for example, Des intégrales doubles qui se présentent sous une forme indéterminée (http://gallica.bnf.fr/ark:/12148/bpt6k90181x/f387).
[4] A more complete historical account can be found in .
[5] . See also for a different interpretation. Other conventions for assigning the value of the Heaviside function at zero exist, and some of these are not consistent with what follows.
[6] Further refinement is possible, namely to submersions, although these require a more involved change-of-variables formula.
[7] In some conventions for the Fourier transform.
[8] The property follows by applying a test function and integrating by parts.
[9] More generally, one only needs = 1 to have an integrable radially symmetric decreasing rearrangement.
[10] See also .
[11] In the terminology of , the Fejér kernel is a Dirac sequence, whereas the Dirichlet kernel is not.
[12] The development of this section in bra-ket notation is found in .
[13] See .

References
Aratyn, Henrik; Rasinariu, Constantin (2006), A Short Course in Mathematical Methods with Maple (http://books.google.com/?id=JFmUQGd1I3IC&pg=PA314), World Scientific, ISBN 981-256-461-6.
Arfken, G. B.; Weber, H. J. (2000), Mathematical Methods for Physicists (5th ed.), Boston, MA: Academic Press, ISBN 978-0-12-059825-0.
Bracewell, R. (1986), The Fourier Transform and Its Applications (2nd ed.), McGraw-Hill.
Córdoba, A., "La formule sommatoire de Poisson", C. R. Acad. Sci. Paris, Series I 306: 373–376.
Courant, Richard; Hilbert, David (1962), Methods of Mathematical Physics, Volume II, Wiley-Interscience.
Davis, Howard Ted; Thomson, Kendall T. (2000), Linear Algebra and Linear Operators in Engineering with Applications in Mathematica (http://books.google.com/?id=3OqoMFHLhG0C&pg=PA344#v=onepage&q), Academic Press, ISBN 0-12-206349-X.
Dieudonné, Jean (1976), Treatise on Analysis, Vol. II, New York: Academic Press [Harcourt Brace Jovanovich Publishers], ISBN 978-0-12-215502-4, MR 0530406.
Dieudonné, Jean (1972), Treatise on Analysis, Vol. III, Boston, MA: Academic Press, MR 0350769.
Dirac, Paul (1958), Principles of Quantum Mechanics (4th ed.), Oxford at the Clarendon Press, ISBN 978-0-19-852011-5.
Driggers, Ronald G. (2003), Encyclopedia of Optical Engineering, CRC Press, ISBN 978-0-8247-0940-2.
Federer, Herbert (1969), Geometric Measure Theory, Die Grundlehren der mathematischen Wissenschaften 153, New York: Springer-Verlag, pp. xiv+676, ISBN 978-3-540-60656-7, MR 0257325.
Gel'fand, I. M.; Shilov, G. E. (1966–1968), Generalized Functions, vols. 1–5, Academic Press.
Hartman, William M. (1997), Signals, Sound, and Sensation (http://books.google.com/books?id=3N72rIoTHiEC), Springer, ISBN 978-1-56396-283-7.
Hewitt, E.; Stromberg, K. (1963), Real and Abstract Analysis, Springer-Verlag.
Hörmander, L. (1983), The Analysis of Linear Partial Differential Operators I, Grundl. Math. Wissenschaft. 256, Springer, ISBN 3-540-12104-8, MR 0717035.
Isham, C. J. (1995), Lectures on Quantum Theory: Mathematical and Structural Foundations, Imperial College Press, ISBN 978-81-7764-190-5.
John, Fritz (1955), Plane Waves and Spherical Means Applied to Partial Differential Equations, Interscience Publishers, New York–London, MR 0075429.
Lang, Serge (1997), Undergraduate Analysis, Undergraduate Texts in Mathematics (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-94841-6, MR 1476913.


Lange, Rutger-Jan (2012), "Potential theory, path integrals and the Laplacian of the indicator", Journal of High Energy Physics 2012 (11): 29–30, arXiv:1302.0864, Bibcode:2012JHEP...11..032L, doi:10.1007/JHEP11(2012)032.
Laugwitz, D. (1989), "Definite values of infinite sums: aspects of the foundations of infinitesimal analysis around 1820", Arch. Hist. Exact Sci. 39 (3): 195–245, doi:10.1007/BF00329867.
Levin, Frank S. (2002), "Coordinate-space wave functions and completeness", An Introduction to Quantum Theory, Cambridge University Press, pp. 109 ff., ISBN 0-521-59841-9.
Li, Y. T.; Wong, R. (2008), "Integral and series representations of the Dirac delta function", Commun. Pure Appl. Anal. 7 (2): 229–247, doi:10.3934/cpaa.2008.7.229, MR 2373214.
de la Madrid, R.; Bohm, A.; Gadella, M. (2002), "Rigged Hilbert Space Treatment of Continuous Spectrum", Fortschr. Phys. 50 (2): 185–216, arXiv:quant-ph/0109154, doi:10.1002/1521-3978(200203)50:2<185::AID-PROP185>3.0.CO;2-S.
McMahon, D. (2005), "An Introduction to State Space", Quantum Mechanics Demystified: A Self-Teaching Guide, New York: McGraw-Hill, p. 108, doi:10.1036/0071455469, ISBN 0-07-145546-9.
van der Pol, Balth.; Bremmer, H. (1987), Operational Calculus (3rd ed.), New York: Chelsea Publishing Co., ISBN 978-0-8284-0327-6, MR 904873.
Rudin, W. (1991), Functional Analysis (2nd ed.), McGraw-Hill, ISBN 0-07-054236-8.
Soares, Manuel; Vallée, Olivier (2004), Airy Functions and Applications to Physics, London: Imperial College Press.
Saichev, A. I.; Woyczyński, Wojbor Andrzej (1997), "Chapter 1: Basic definitions and operations", Distributions in the Physical and Engineering Sciences: Distributional and Fractal Calculus, Integral Transforms, and Wavelets, Birkhäuser, ISBN 0-8176-3924-1.
Schwartz, L. (1950), Théorie des distributions 1, Hermann.
Schwartz, L. (1951), Théorie des distributions 2, Hermann.

Stein, Elias; Weiss, Guido (1971), Introduction to Fourier Analysis on Euclidean Spaces, Princeton University Press, ISBN 0-691-08078-X.
Strichartz, R. (1994), A Guide to Distribution Theory and Fourier Transforms, CRC Press, ISBN 0-8493-8273-4.
Vladimirov, V. S. (1971), Equations of Mathematical Physics, Marcel Dekker, ISBN 0-8247-1713-9.
Weisstein, Eric W., "Delta Function" (http://mathworld.wolfram.com/DeltaFunction.html), MathWorld.
Yamashita, H. (2006), "Pointwise analysis of scalar fields: A nonstandard approach", Journal of Mathematical Physics 47 (9): 092301, doi:10.1063/1.2339017.
Yamashita, H. (2007), "Comment on 'Pointwise analysis of scalar fields: A nonstandard approach' [J. Math. Phys. 47, 092301 (2006)]", Journal of Mathematical Physics 48 (8): 084101, doi:10.1063/1.2771422.


External links
Hazewinkel, Michiel, ed. (2001), "Delta-function" (http://www.encyclopediaofmath.org/index.php?title=p/d030950), Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4.
KhanAcademy.org video lesson (http://www.khanacademy.org/video/dirac-delta-function).
The Dirac Delta function (http://www.physicsforums.com/showthread.php?t=73447), a tutorial on the Dirac delta function.
Video Lectures – Lecture 23 (http://ocw.mit.edu/courses/mathematics/18-03-differential-equations-spring-2010/video-lectures/lecture-23-use-with-impulse-inputs), a lecture by Arthur Mattuck.
Dirac Delta Function (http://planetmath.org/encyclopedia/DiracDeltaFunction.html) on PlanetMath.
The Dirac delta measure is a hyperfunction (http://www.osaka-kyoiku.ac.jp/~ashino/pdf/chinaproceedings.pdf).
"We show the existence of a unique solution and analyze a finite element approximation when the source term is a Dirac delta measure" (http://www.ing-mat.udec.cl/~rodolfo/Papers/BGR-3.pdf).
Non-Lebesgue measures on R: Lebesgue–Stieltjes measure, Dirac delta measure (http://www.mathematik.uni-muenchen.de/~lerdos/WS04/FA/content.html).

Article Sources and Contributors




Fourier transform Source: http://en.wikipedia.org/w/index.php?oldid=595119005 Contributors: 9258fahsflkh917fas, A Doon, A. Pichler, Abecedare, Adam.stinchcombe, Admartch, Adoniscik, Ahoerstemeier, Akbg, Alejo2083, AliceNovak, Alipson, Amaher, AnAj, Andrei Polyanin, Andres, Angalla, Anna Lincoln, Anoko moonlight, Ap, Arctic Kangaroo, Army1987, Arondals, Asmeurer, Astronautameya, Avicennasis, Avoided, AxelBoldt, BD2412, Barak Sh, Bci2, Bdmy, BehzadAhmadi, BenFrantzDale, BigJohnHenry, Bo Jacoby, Bob K, Bobblewik, Bobo192, BorisG, Brews ohare, Bugnot, Bumm13, Burhem, Butala, Bwb1729, CSTAR, Caio2112, Cassandra B, Catslash, Cburnett, CecilWard, Ch mad, Charles Matthews, Chris the speller, ChrisGualtieri, ClickRick, Cmghim925, Complexica, Compsonheir, Coppertwig, CrisKatz, Crisfilax, Cuzkatzimhut, Cyrapas, DX-MON, Da nuke, DabMachine, Daqu, David R. Ingham, DavidCBryant, Dcirovic, Demosta, Dhabih, Discospinster, DmitTrix, Dmmaus, Dougweller, Download, Dr.enh, DrBob, Drew335, Drilnoth, Dysprosia, EconoPhysicist, Ed g2s, Eliyak, Elkman, Enochlau, Epzcaw, Favonian, Feline Hymnic, Feraudyh, Fgnievinski, Fizyxnrd, Forbes72, Formula255, Foxj, Fr33kman, Frappyjohn, Fred Bradstadt, Freiddie, Fropuff, Futurebird, Gaidheal1, Gaius Cornelius, Gareth Owen, Geekdiva, Giftlite, Giovannidimauro, Glenn, GuidoGer, GyroMagician, H2g2bob, HappyCamper, Heimstern, HenningThielemann, Herr Lip, Hesam7, HirsuteSimia, Hrafeiro, Ht686rg90, I am a mushroom, Ianweiner, Igny, Iihki, Ivan Shmakov, Iwfyita, Jaakobou, Jdorje, Jfmantis, Jhealy, Jko, Joerite, John Cline, JohnBlackburne, JohnOFL, JohnQPedia, Joriki, Jose Brox, Justwantedtofixonething, KHamsun, KYN, Keenan Pepper, Kevmitch, Klallas, Kostmo, Kri, Kunaporn, Larsobrien, Linas, LokiClock, Loodog, Looxix, Lovibond, LucasVB, Luciopaiva, Lupin, M1ss1ontomars2k4, Maine12329, Manik762007, Maschen, MathKnight, Maxim, Mckee, Mct mht, Mecanismo, Metacomet, Michael Hardy, Mikeblas, Mikiemike, Millerdl, Moxfyre, Mr. 
PIM, NTUDISP, Naddy, NameIsRon, NathanHagen, Nbarth, NickGarvey, Nihil, Nishantjr, Njerseyguy, Njm7203, Nk, Nmnogueira, NokMok, NotWith, Nscozzaro, Od Mishehu, Offsure, Oleg Alexandrov, Oli Filth, Omegatron, Oreo Priest, Ouzel Ring, PAR, Pak21, Papa November, Paul August, Pedrito, Pete463251, Petergans, Phasmatisnox, Phils, PhotoBox, PigFlu Oink, Poincarecon, Pol098, Policron, PsiEpsilon, PtDw832, Publichealthguru, Quercus solaris, Quietbritishjim, Quintote, Qwfp, R.e.b., Raffamaiden, Rainwarrior, Rbj, Red Winged Duck, Riesz, Rifleman 82, Rijkbenik, Rjwilmsi, RobertHannah89, Rror, Rs2, Rurz2007, SKvalen, Safenner1, Sai2020, Sandb, Sbyrnes321, SchreiberBike, SebastianHelm, Sepia tone, Sgoder, Sgreddin, Shreevatsa, Silly rabbit, Slawekb, SlimDeli, Smibu, Snigbrook, Snoyes, Sohale, Soulkeeper, SpaceFlight89, Spanglej, Sprocedato, Stausifr, Stevan White, Stevenj, Stpasha, StradivariusTV, Sun Creator, Sunev, Sverdrup, Sylvestersteele, Sawomir Biay, THEN WHO WAS PHONE?, TYelliot, Tabletop, Tahome, TakuyaMurata, TarryWorst, Tetracube, The Thing That Should Not Be, Thenub314, Thermochap, Thinking of England, Tim Goodwyn, Tim Starling, Tinos, Tobias Bergemann, Tobych, TranceThrust, Tunabex, Ujjalpatra, User A1, Vadik wiki, Vasi, Verdy p, VeryNewToThis, VictorAnyakin, Vidalian Tears, Vnb61, Voronwae, WLior, Waldir, Wavelength, Wiki Edit Testing, WikiDao, Wile E. Heresiarch, Writer130, Wwheaton, Ybhatti, YouRang?, Zoz, Zvika, 654 anonymous edits Convolution Source: http://en.wikipedia.org/w/index.php?oldid=593853033 Contributors: 16@r, 1exec1, AhmedFasih, Aitias, Ajb, Akella, Allens, Almwi, Anonymous Dissident, Antandrus, Anthony Appleyard, Ap, Arcenciel, Armin Straub, Arthur Rubin, Auclairde, Avoided, AxelBoldt, Bdiscoe, Bdmy, Belizefan, Ben pcc, BenFrantzDale, Berland, BigHaz, BjarteSorensen, Bob K, Boing! 
said Zebedee, Bombshell, Bracchesimo, Btyner, CRGreathouse, Carolus m, CaseInPoint, Cdnc, Charles Matthews, Chato, Chiqago, Chowbok, Chricho, Cmglee, Coco, Coffee2theorems, Comech, Conversion script, Crisfilax, Cronholm144, CryptoDerk, Cwkmail, Cyp, D1ma5ad, Daniele.tampieri, Dankonikolic, DaveWF, Dbfirs, Dekart, Dicklyon, DirkOliverTheis, Discospinster, Dmcq, Dmmaus, Dmn, Dpbsmith, Drew335, Dysprosia, Dzordzm, Edward Z. Yang, Eleassar, Emc2, Emote, Esoth, Exoneo, Felix0411, Feraudyh, Flavio Guitian, Forderud, Frozenport, Furrykef, Gaba p, GabrielEbner, Gareth Owen, Gene Ward Smith, Giftlite, Giuscarl, Gobonobo, GoingBatty, Greenec3, Greg newhouse, H2g2bob, Hankwang, Headbomb, Helder.wiki, Hellbus, HenningThielemann, Hippasus, Hurricane111, Hyacinth, IMSoP, Illia Connell, Inductiveload, Intelliproject, InverseHypercube, JSKau, Jeff G., JethroElfman, Jevinsweval, Jschuler16, Jshadias, Justin545, Kallikanzarid, Karada, Kazkaskazkasako, Kelly Martin, Kenyon, Kielhorn, Kku, Konradek, Kri, Kurykh, LCS check, Lamoidfl, Larvy, Lautarocarmona, Liotier, Lklundin, Loisel, LokiClock, Loodog, Looxix, Loren Rosen, Lovibond, Lwalt, MFH, Maciekk, Madmath789, MarcLevoy, Mark viking, Matteo.Bertini, Mckee, Mct mht, Melchoir, Michael Hardy, Mild Bill Hiccup, Minna Sora no Shita, Mircha, Mitrut, Miym, Mmendis, Modster, Mojodaddy, Mormegil, Mpatel, MrOllie, Mschlindwein, Mwilde, N.MacInnes, Naddy, Nbarth, Neffk, Ngchen, Noki 6630, Nuwewsco, Odedee, Oleg Alexandrov, Oli Filth, Omegatron, Ortho, Oski, P4lm0r3, PMajer, Patrick, Patrickmclaren, Paul Rako, Perey, Petergans, Phys, Pinethicket, Pion, PowerWill500, Quinnculver, Rade Kutil, Radiodef, RapidR, Rimianika, Rlcook.unm, Robert K S, RobertStar20, Rogerbrent, Rubybrian, SD5, Schneelocke, Seabhcan, SebastianHelm, Shanes, Silly rabbit, Slawekb, Smack the donkey, Smithwn, Stelpa, Steve2011, Stevenj, SuburbaniteFury, Sullivan.t.j, Sawomir Biay, Tarquin, Tblackma222, TheObtuseAngleOfDoom, Thenub314, Tinos, Tobias Bergemann, Tomash, User 
A1, UtherSRG, Visionat, Vuo, Waqas946, Wayne Slam, Wikomidia, WojciechSwiderski, Wwoods, Wxm29, Xe7al, Yaris678, Zoltan808, Zueignung, Zvika, 342 anonymous edits Convolution theorem Source: http://en.wikipedia.org/w/index.php?oldid=594752997 Contributors: Alexweigel, Almwi, AtticusX, AxelBoldt, Bdmy, BeteNoir, Bob K, Bobbyi, Brad7777, Btyner, Charles Matthews, Gaba p, Gene Ward Smith, Giftlite, Gtz, Hanacy, Hu12, Isusprof, Javalenok, Kallikanzarid, Kevmitch, LC, LokiClock, Lupin, Mark viking, MathHisSci, Mckee, Mct mht, Michael Hardy, MrOllie, Mrhota, Nayuki, Nuwewsco, Ohnoitsjamie, Oleg Alexandrov, Oli Filth, OrgasGirl, Ottjes, Pbanta, Phreed, Pthibault, Qorilla, RDBury, Rade Kutil, Slawekb, SpuriousQ, Stevenj, Taw, Taweetham, Thenub314, Thorwald, Tommathew-NJITWILL, Visionat, Wile E. Heresiarch, Yimuyin, ^musaz, 36 anonymous edits Laplace transform Source: http://en.wikipedia.org/w/index.php?oldid=594721669 Contributors: 213.253.39.xxx, A. Pichler, Abb615, Ahoerstemeier, Alansohn, Alejo2083, Alexthe5th, Alfred Centauri, AllHailZeppelin, Alll, AlmostSurely, Amfortas, Andrei Polyanin, Android Mouse, Anonymous Dissident, Anterior1, Ap, Arunib, Ascentury, AugPi, AvicAWB, AxelBoldt, BWeed, Bart133, Bdmy, BehnamFarid, Bemoeial, BenFrantzDale, Bengski68, BigJohnHenry, Blablablob, Bookbuddi, Bplohr, Cburnett, Cfp, Charles Matthews, Chris 73, ChrisGualtieri, Chrislewis.au, Chronulator, Chubby Chicken, Cic, Clark89, Cnilep, Commander Nemet, Conversion script, Cronholm144, Cutler, Cwkmail, Cyp, CyrilB, DVdm, DabMachine, Danpovey, Dantonel, Dillard421, Dissipate, Don4of4, Doraemonpaul, Dragon0128, Drew335, Drilnoth, DukeEgr93, Dysprosia, ESkog, Ec5618, ElBarto, Electron9, Eli Osherovich, Ellywa, Emiehling, Eshylay, F=q(E+v^B), Fblasqueswiki, Fcueto, Fintor, First Harmonic, Flekstro, Fofti, Foom, Fred Bradstadt, Freiddie, Fresheneesz, Futurebird, Gene Ward Smith, Gerrit, Ghostal, Giftlite, Glickglock, Glrx, Gocoolrao, Grafen, GregRM, Guardian of Light, GuidoGer, H2g2bob, 
Haham hanuka, Hair Commodore, HalJor, Haukurth, Headbomb, Hereforhomework2, Hesam7, Humanengr, Incnis Mrsi, Intangir, Isheden, Ivan kryven, Izno, JAIG, JFB80, JPopovic, Janto, Javalenok, Jitse Niesen, Jmnbatista, JohnCD, Johndarrington, JonathonReinhart, Jscott.trapp, Julian Mendez, Jwmillerusa, K.e.elsayed, KSmrq, Kaiserkarl13, Karipuf, Kensaii, Kenyon, Ketiltrout, Kevin Baas, Kiensvay, Kingpin13, KittySaturn, Klilidiplomus, KoenDelaere, Kri, LachlanA, Lambiam, Lantonov, Lbs6380, Le Docteur, Lightmouse, Linas, LokiClock, Looxix, Lupin, M ayadi78, Macl, Maksim-e, Manop, MarkSutton, Mars2035, Martynas Patasius, MathRocks2012, MaxEnt, Maximus Rex, Mecanismo, Mekong Bluesman, Metacomet, Mfe1710, Michael Hardy, MiddaSantaClaus, Mike.lifeguard, Mild Bill Hiccup, Mlewis000, Mohamed F. El-Hewie, Mohqas, Mojo Hand, Moly23, Morpo, Morqueozwald, Mschlindwein, Msiddalingaiah, Msmdmmm, N5iln, Nbarth, Neil Parker, Nein, Netheril96, NevilleDNZ, Nixdorf, NotWith, Nuwewsco, Octahedron80, Ojigiri, Oleg Alexandrov, Oli Filth, Omegatron, Ozweepay, Peter.Hiscocks, Petr Burian, Pgadfor, Phgao, Pokyrek, Pol098, Policron, Popabun888, Prunesqualer, Pyninja, PyonDude, Qwerty Binary, Rbj, Rboesch, Rdrosson, Reaper Eternal, Reedy, Rememberlands, RexNL, Reyk, Rifleman 82, Rjwilmsi, Robin48gx, Rockingani, Rojypala, Ron Ritzman, Rovenhot, Rs2, Salix alba, Salvidrim!, Scforth, Schaapli, SchreyP, Scls19fr, SebastianHelm, Sebbie88, Serrano24, Shay Guy, Sideways713, Sifaka, Silly rabbit, Simamura, Skizzik, Slawekb, SocratesJedi, Solarra, Starwiz, Stevenj, StradivariusTV, Stutts, Swagat konchada, Sawomir Biay, T boyd, Tarquin, Tbsmith, TedPavlic, Tentinator, The Thing That Should Not Be, TheProject, Thegeneralguy, Thumperward, Tide rolls, Tim Starling, Tobias Bergemann, Tobias Hoevekamp, Walter.Arrighetti, Wavelength, Weyes, Wiml, Wknight94, Wyklety, XJaM, Xenonice, Yardleydobon, Yintan, Yunshui, Zhkh2n, Ziusudra, Zvika, ^musaz, 511 anonymous edits Dirac delta function Source: 
http://en.wikipedia.org/w/index.php?oldid=595013684 Contributors: 130.94.122.xxx, 134.132.11.xxx, Aaron north, Abrhm17, Adavis444, AdjustShift, Alexxauw, Ap, Arbitrarily0, Ashok567, AugPi, AxelBoldt, Baxtrom, Bdmy, Bejitunksu, BenFrantzDale, Bender235, Benwing, Bnitin, Bob K, Bob K31416, Brews ohare, Btyner, C S, CBM, Caiyu, Camembert, Centrx, Ceplusplus, Chaohuang, Charles Matthews, Chas zzz brown, ChrisGualtieri, Christopher.Gordon3, Cj67, Classicalecon, Coppertwig, Crasshopper, CronoDAS, Curtdbz, Cyan, Czhangrice, Daniel.Cardenas, Darkknight911, David Eppstein, David Martland, Davidcarfi, Dcirovic, Delta G, Donarreiskoffer, Dpaddy, Dr.enh, Dudesleeper, Eleuther, Emvee, FatUglyJo, Fibonacci, Fred Stober, Fyrael, Galdobr, Genie05, Gianluigi, Giftlite, Haonhien, Headbomb, Henrygb, Heptadecagon, Holmansf, Hwasungmars, Ihatedirac2k13, IkamusumeFan, Inferior Olive, InverseHypercube, Ivanip, JJL, JWroblewski, Jason Quinn, Jfmantis, Jkl, Joel7687, Joelholdsworth, JohnBlackburne, JorisvS, Jshadias, Jujutacular, Justin545, Karl-H, Kidburla, KlappCK, Knucmo2, KorgBoy, Kri, Krishnavedala, Kupirijo, Laurascudder, Le Docteur, Lethe, Light current, Lingwitt, LokiClock, Looxix, Lzur, M gol, MagnaMopus, Mal, Mark.przepiora, MarkSweep, MathKnight, Mboverload, McSly, Meigel, Melchoir, Michael Hardy, Mikez, Mild Bill Hiccup, Mindriot, Moxfyre, Msa11usec, Mrten Berglund, Nbarth, NoFunction, Oleg Alexandrov, Oli Filth, Omegatron, PAR, Papa November, Paul August, Philtime, Plasticup, Policron, Pyrop, Qef, Quondum, Qwfp, RAE, Rattatosk, Rbj, Rex the first, Rkr1991, Rmartin16, Ronhjones, Rumping, Rutgerjanlange, Salgueiro, Sayanpatra, Shapeev, Shervinafshar, SimpsonDG, Sinawiki, Skizzik, Slashme, Slawekb, Smjg, Solomonfromfinland, Spendabuk, Spike Wilbury, Stca74, Steel1943, StefanosNikolaou, SteffenGodskesen, Stevenj, Stevvers, Sullivan.t.j, Sverdrup, Sawomir Biay, TMA, TakuyaMurata, Tardis, Tarquin, The Anome, TheSeven, Thenub314, Thingg, TimothyRias, Tkuvho, Tobias Bergemann, 
TonyMath, Tosha, Treisijs, Tyrrell McAllister, UnameAlreadyTaken, W0lfie, WLior, Wavelength, Wearedeltax, Wtmitchell, Wvbailey, Wyf92, XJaM, XaosBits, Yaris678, Yurik, Zfeinst, Zueignung, Zundark, Zvika, Zytsef, - -, , 323 anonymous edits

Image Sources, Licenses and Contributors




File:Fourier transform time and frequency domains (small).gif Source: http://en.wikipedia.org/w/index.php?title=File:Fourier_transform_time_and_frequency_domains_(small).gif License: Public Domain Contributors: Lucas V. Barbosa Image:Function ocsillating at 3 hertz.svg Source: http://en.wikipedia.org/w/index.php?title=File:Function_ocsillating_at_3_hertz.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Thenub314 Image:Onfreq.svg Source: http://en.wikipedia.org/w/index.php?title=File:Onfreq.svg License: GNU Free Documentation License Contributors: Original: Nicholas Longo, SVG conversion: DX-MON (Richard Mant) Image:Offfreq.svg Source: http://en.wikipedia.org/w/index.php?title=File:Offfreq.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Thenub314 Image:Fourier transform of oscillating function.svg Source: http://en.wikipedia.org/w/index.php?title=File:Fourier_transform_of_oscillating_function.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Thenub314 File:Rectangular function.svg Source: http://en.wikipedia.org/w/index.php?title=File:Rectangular_function.svg License: GNU Free Documentation License Contributors: Aflafla1, Axxgreazz, Bender235, Darapti, Jochen Burghardt, Omegatron, 1 anonymous edits File:Sinc function (normalized).svg Source: http://en.wikipedia.org/w/index.php?title=File:Sinc_function_(normalized).svg License: GNU Free Documentation License Contributors: Aflafla1, Bender235, Jochen Burghardt, Juiced lemon, Krishnavedala, Omegatron, Pieter Kuiper File:Commutative diagram illustrating problem solving via the Fourier transform.svg Source: http://en.wikipedia.org/w/index.php?title=File:Commutative_diagram_illustrating_problem_solving_via_the_Fourier_transform.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:Quietbritishjim File:Comparison convolution correlation.svg Source: http://en.wikipedia.org/w/index.php?title=File:Comparison_convolution_correlation.svg 
License: Creative Commons Attribution-Sharealike 3.0 Contributors: Cmglee File:Convolution3.PNG Source: http://en.wikipedia.org/w/index.php?title=File:Convolution3.PNG License: Public Domain Contributors: Original uploader was Emote at en.wikipedia Later version(s) were uploaded by McLoaf at en.wikipedia. File:Convolution of box signal with itself2.gif Source: http://en.wikipedia.org/w/index.php?title=File:Convolution_of_box_signal_with_itself2.gif License: Creative Commons Attribution-Sharealike 3.0 Contributors: Convolution_of_box_signal_with_itself.gif: Brian Amberg derivative work: Tinos (talk) File:Convolution of spiky function with box2.gif Source: http://en.wikipedia.org/w/index.php?title=File:Convolution_of_spiky_function_with_box2.gif License: Creative Commons Attribution-Sharealike 3.0 Contributors: Convolution_of_spiky_function_with_box.gif: Brian Amberg derivative work: Tinos (talk) File:Halftone, Gaussian Blur.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Halftone,_Gaussian_Blur.jpg License: anonymous-EU Contributors: Alex:D File:S-Domain circuit equivalency.svg Source: http://en.wikipedia.org/w/index.php?title=File:S-Domain_circuit_equivalency.svg License: Public Domain Contributors: Ordoon (original creator) and Flekstro Image:Dirac distribution PDF.svg Source: http://en.wikipedia.org/w/index.php?title=File:Dirac_distribution_PDF.svg License: GNU Free Documentation License Contributors: Original SVG by Omegatron Original PNG version by PAR This version adapted by Qef Image:Dirac function approximation.gif Source: http://en.wikipedia.org/w/index.php?title=File:Dirac_function_approximation.gif License: Public Domain Contributors: Oleg Alexandrov Image:Dirac comb.svg Source: http://en.wikipedia.org/w/index.php?title=File:Dirac_comb.svg License: Creative Commons Zero Contributors: User:Krishnavedala

License


Creative Commons Attribution-Share Alike 3.0 (http://creativecommons.org/licenses/by-sa/3.0/)
