
K. Deergha Rao

Signals and Systems


K. Deergha Rao
Department of Electronics and Communication Engineering, Vasavi College of
Engineering (Affiliated to Osmania University), Hyderabad, Telangana, India
ISBN 978-3-319-68674-5 e-ISBN 978-3-319-68675-2
https://doi.org/10.1007/978-3-319-68675-2
Library of Congress Control Number: 2017958547
Mathematics Subject Classification (2010): 94A12, 94A05, 93C55, 93C20, 35Q93
© Springer International Publishing AG, part of Springer Nature 2018
This work is subject to copyright. All rights are reserved by the Publisher, whether the
whole or part of the material is concerned, specifically the rights of translation, reprinting,
reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other
physical way, and transmission or information storage and retrieval, electronic adaptation,
computer software, or by similar or dissimilar methodology now known or hereafter
developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in
this publication does not imply, even in the absence of a specific statement, that such
names are exempt from the relevant protective laws and regulations and therefore free for
general use.
The publisher, the authors and the editors are safe to assume that the advice and
information in this book are believed to be true and accurate at the date of publication.
Neither the publisher nor the authors or the editors give a warranty, express or implied,
with respect to the material contained herein or for any errors or omissions that may have
been made. The publisher remains neutral with regard to jurisdictional claims in published
maps and institutional affiliations.
Printed on acid-free paper
This book is published under the imprint Birkhäuser, www.birkhauser-science.com, by the
registered company Springer International Publishing AG, part of Springer Nature.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
To
My Parents Dalamma and Boddu,
My Beloved Wife Sarojini,
and
My Mentor Prof. M.N.S. Swamy
Preface
The signals and systems course is not only an important element for undergraduate
electrical engineering students; the fundamentals and techniques of the subject are
essential in all disciplines of engineering. Signals and systems analysis has a long
history, with its techniques and fundamentals found in broad areas of application. The
field of signals and systems is continuously evolving and developing in response to new
problems, such as the development of integrated circuit technology and its applications.
In this book, many illustrative examples are included in each chapter for easy
understanding of the fundamentals and methodologies of signals and systems. An
attractive feature of this book is the inclusion of MATLAB-based examples with codes to
encourage readers to implement exercises on their personal computers in order to become
confident with the fundamentals and to gain more insight into signals and systems. In
addition to the problems that require analytical solutions, MATLAB exercises are
introduced to the reader at the end of some chapters.
This book is divided into eight chapters. Chapter 1 presents an introduction to signals and
systems with a basic classification of signals, elementary operations on signals, and some
real-world examples of signals and systems. Chapter 2 gives the time-domain analysis of
continuous-time signals and systems, and the state-space representation of continuous-time
LTI systems. Fourier analysis of continuous-time signals and systems is covered in
Chapter 3. Chapter 4 deals with the Laplace transform and analysis of continuous-time
signals and systems, and the solution of state-space equations of continuous-time LTI systems
using the Laplace transform. Ideal continuous-time (analog) filters, practical analog filter
approximations and design methodologies, and the design of special-class filters based on
pole-zero placement are discussed in Chapter 5. Chapter 6 discusses the time-domain
representation of discrete-time signals and systems, linear time-invariant (LTI) discrete-
time systems and their properties, characterization of discrete-time systems, and state-
space representation of discrete-time LTI systems. Representation of discrete-time signals
and systems in the frequency domain, representation of sampling in the frequency domain,
reconstruction of a band-limited signal from its samples, and sampling of discrete-time
signals are detailed in Chapter 7. Chapter 8 describes the z-transform and analysis of LTI
discrete-time systems, the solution of state-space equations of discrete-time LTI systems
using the z-transform, and transformations between continuous-time systems and discrete-
time systems.
The salient features of this book are as follows:
Provides an introductory and comprehensive exposure to all aspects of signals and
systems, with clarity and in an easy-to-understand way.
Provides an integrated treatment of continuous-time signals and systems and discrete-
time signals and systems.
Several fully worked numerical examples are provided to help students understand
the fundamentals of signals and systems.
PC-based MATLAB m-files for the illustrative examples are included in this book.
This book is written at an introductory level for undergraduate classes in electrical
engineering and applied sciences that are a prerequisite for upper-level courses, such as
communication systems, digital signal processing, and control systems.
K. Deergha Rao
Hyderabad, India
Contents
1 Introduction
1.1 What is a Signal?
1.2 What is a System?
1.3 Elementary Operations on Signals
1.3.1 Time Shifting
1.3.2 Time Scaling
1.3.3 Time Reversal
1.4 Classification of Signals
1.4.1 Continuous-Time and Discrete-Time Signals
1.4.2 Analog and Digital Signals
1.4.3 Periodic and Aperiodic Signals
1.4.4 Even and Odd Signals
1.4.5 Causal, Noncausal, and Anticausal Signal
1.4.6 Energy and Power Signals
1.4.7 Deterministic and Random Signals
1.5 Basic Continuous-Time Signals
1.5.1 The Unit Step Function
1.5.2 The Unit Impulse Function
1.5.3 The Ramp Function
1.5.4 The Rectangular Pulse Function
1.5.5 The Signum Function
1.5.6 The Real Exponential Function
1.5.7 The Complex Exponential Function
1.5.8 The Sinc Function
1.6 Generation of Continuous-Time Signals Using MATLAB
1.7 Typical Signal Processing Operations
1.7.1 Correlation
1.7.2 Filtering
1.7.3 Modulation and Demodulation
1.7.4 Transformation
1.7.5 Multiplexing and Demultiplexing
1.8 Some Examples of Real-World Signals and Systems
1.8.1 Audio Recording System
1.8.2 Global Positioning System
1.8.3 Location-Based Mobile Emergency Services System
1.8.4 Heart Monitoring System
1.8.5 Human Visual System
1.8.6 Magnetic Resonance Imaging
1.9 Problems
1.10 MATLAB Exercises
Further Reading
2 Continuous-Time Signals and Systems
2.1 The Representation of Signals in Terms of Impulses
2.2 Continuous-Time Systems
2.2.1 Linear Systems
2.2.2 Time-Invariant System
2.2.3 Causal System
2.2.4 Stable System
2.2.5 Memory and Memoryless System
2.2.6 Invertible System
2.2.7 Step and Impulse Responses
2.3 The Convolution Integral
2.3.1 Some Properties of the Convolution Integral
2.3.2 Graphical Convolution
2.3.3 Computation of Convolution Integral Using MATLAB
2.3.4 Interconnected Systems
2.3.5 Periodic Convolution
2.4 Properties of Linear Time-Invariant Continuous-Time System
2.4.1 LTI Systems With and Without Memory
2.4.2 Causality for LTI Systems
2.4.3 Stability for LTI Systems
2.4.4 Invertible LTI System
2.5 Systems Described by Differential Equations
2.5.1 Linear Constant-Coefficient Differential Equations
2.5.2 The General Solution of Differential Equation
2.5.3 Linearity
2.5.4 Causality
2.5.5 Time-Invariance
2.5.6 Impulse Response
2.5.7 Solution of Differential Equations Using MATLAB
2.5.8 Determining Impulse Response and Step Response for a Linear System Described by a Differential Equation Using MATLAB
2.6 Block-Diagram Representations of LTI Systems Described by Differential Equations
2.7 Singularity Functions
2.8 State-Space Representation of Continuous-Time LTI Systems
2.8.1 State and State Variables
2.8.2 State-Space Representation of Single-Input Single-Output Continuous-Time LTI Systems
2.8.3 State-Space Representation of Multi-input Multi-output Continuous-Time LTI Systems
2.9 Problems
2.10 MATLAB Exercises
Further Reading
3 Frequency Domain Analysis of Continuous-Time Signals and Systems
3.1 Complex Exponential Fourier Series Representation of the Continuous-Time Periodic Signals
3.1.1 Convergence of Fourier Series
3.1.2 Properties of Fourier Series
3.2 Trigonometric Fourier Series Representation
3.2.1 Symmetry Conditions in Trigonometric Fourier Series
3.3 The Continuous Fourier Transform for Nonperiodic Signals
3.3.1 Convergence of Fourier Transforms
3.3.2 Fourier Transforms of Some Commonly Used Continuous-Time Signals
3.3.3 Properties of the Continuous-Time Fourier Transform
3.4 The Frequency Response of Continuous-Time Systems
3.4.1 Distortion During Transmission
3.5 Some Communication Application Examples
3.5.1 Amplitude Modulation (AM) and Demodulation
3.5.2 Single-Sideband (SSB) AM
3.5.3 Frequency Division Multiplexing (FDM)
3.6 Problems
Further Reading
4 Laplace Transforms
4.1 The Laplace Transform
4.1.1 Definition of Laplace Transform
4.1.2 The Unilateral Laplace Transform
4.1.3 Existence of Laplace Transforms
4.1.4 Relationship Between Laplace Transform and Fourier Transform
4.1.5 Representation of Laplace Transform in the S-Plane
4.2 Properties of the Region of Convergence
4.3 The Inverse Laplace Transform
4.4 Properties of the Laplace Transform
4.4.1 Laplace Transform Properties of Even and Odd Functions
4.4.2 Differentiation Property of the Unilateral Laplace Transform
4.4.3 Initial Value Theorem
4.4.4 Final Value Theorem
4.5 Laplace Transforms of Elementary Functions
4.6 Computation of Inverse Laplace Transform Using Partial Fraction Expansion
4.6.1 Partial Fraction Expansion of X(s) with Simple Poles
4.6.2 Partial Fraction Expansion of X(s) with Multiple Poles
4.7 Inverse Laplace Transform by Partial Fraction Expansion Using MATLAB
4.8 Analysis of Continuous-Time LTI Systems Using the Laplace Transform
4.8.1 Transfer Function
4.8.2 Stability and Causality
4.8.3 LTI Systems Characterized by Linear Constant Coefficient Differential Equations
4.8.4 Solution of Linear Differential Equations Using Laplace Transform
4.8.5 Solution of Linear Differential Equations Using Laplace Transform and MATLAB
4.8.6 System Function for Interconnections of LTI Systems
4.9 Block-Diagram Representation of System Functions in the S-Domain
4.10 Solution of State-Space Equations Using Laplace Transform
4.11 Problems
4.12 MATLAB Exercises
Further Reading
5 Analog Filters
5.1 Ideal Analog Filters
5.2 Practical Analog Low-Pass Filter Design
5.2.1 Filter Specifications
5.2.2 Butterworth Analog Low-Pass Filter
5.2.3 Chebyshev Analog Low-Pass Filter
5.2.4 Elliptic Analog Low-Pass Filter
5.2.5 Bessel Filter
5.2.6 Comparison of Various Types of Analog Filters
5.2.7 Design of Analog High-Pass, Band-Pass, and Band-Stop Filters
5.3 Effect of Poles and Zeros on Frequency Response
5.3.1 Effect of Two Complex System Poles on the Frequency Response
5.3.2 Effect of Two Complex System Zeros on the Frequency Response
5.4 Design of Specialized Analog Filters by Pole-Zero Placement
5.4.1 Notch Filter
5.5 Problems
Further Reading
6 Discrete-Time Signals and Systems
6.1 The Sampling Process of Analog Signals
6.1.1 Impulse-Train Sampling
6.1.2 Sampling with a Zero-Order Hold
6.1.3 Quantization and Coding
6.2 Classification of Discrete-Time Signals
6.2.1 Symmetric and Anti-symmetric Signals
6.2.2 Finite and Infinite Length Sequences
6.2.3 Right-Sided and Left-Sided Sequences
6.2.4 Periodic and Aperiodic Signals
6.2.5 Energy and Power Signals
6.3 Discrete-Time Systems
6.3.1 Classification of Discrete-Time Systems
6.3.2 Impulse and Step Responses
6.4 Linear Time-Invariant Discrete-Time Systems
6.4.1 Input-Output Relationship
6.4.2 Computation of Linear Convolution
6.4.3 Computation of Convolution Sum Using MATLAB
6.4.4 Some Properties of the Convolution Sum
6.4.5 Stability and Causality of LTI Systems in Terms of the Impulse Response
6.5 Characterization of Discrete-Time Systems
6.5.1 Non-Recursive Difference Equation
6.5.2 Recursive Difference Equation
6.5.3 Solution of Difference Equations
6.5.4 Computation of Impulse and Step Responses Using MATLAB
6.6 Sampling of Discrete-Time Signals
6.6.1 Discrete-Time Down Sampler
6.6.2 Discrete-Time Up-Sampler
6.7 State-Space Representation of Discrete-Time LTI Systems
6.7.1 State-Space Representation of Single-Input Single-Output Discrete-Time LTI Systems
6.7.2 State-Space Representation of Multi-input Multi-output Discrete-Time LTI Systems
6.8 Problems
6.9 MATLAB Exercises
Further Reading
7 Frequency Domain Analysis of Discrete-Time Signals and Systems
7.1 The Discrete-Time Fourier Series
7.1.1 Periodic Convolution
7.2 Representation of Discrete-Time Signals and Systems in Frequency Domain
7.2.1 Fourier Transform of Discrete-Time Signals
7.2.2 Theorems on DTFT
7.2.3 Some Properties of the DTFT of a Complex Sequence x(n)
7.2.4 Some Properties of the DTFT of a Real Sequence x(n)
7.3 Frequency Response of Discrete-Time Systems
7.3.1 Frequency Response Computation Using MATLAB
7.4 Representation of Sampling in Frequency Domain
7.4.1 Sampling of Low-Pass Signals
7.5 Reconstruction of a Band-Limited Signal from Its Samples
7.6 Problems
Further Reading
8 The z-Transform and Analysis of Discrete-Time LTI Systems
8.1 Definition of the z-Transform
8.2 Properties of the Region of Convergence for the z-Transform
8.3 Properties of the z-Transform
8.4 z-Transforms of Some Commonly Used Sequences
8.5 The Inverse z-Transform
8.5.1 Modulation Theorem in the z-Domain
8.5.2 Parseval's Relation in the z-Domain
8.6 Methods for Computation of the Inverse z-Transform
8.6.1 Cauchy's Residue Theorem for Computation of the Inverse z-Transform
8.6.2 Computation of the Inverse z-Transform Using the Partial Fraction Expansion
8.6.3 Inverse z-Transform by Partial Fraction Expansion Using MATLAB
8.6.4 Computation of the Inverse z-Transform Using the Power Series Expansion
8.6.5 Inverse z-Transform via Power Series Expansion Using MATLAB
8.6.6 Solution of Difference Equations Using the z-Transform
8.7 Analysis of Discrete-Time LTI Systems in the z-Transform Domain
8.7.1 Transfer Function
8.7.2 Poles and Zeros of a Transfer Function
8.7.3 Frequency Response from Poles and Zeros
8.7.4 Stability and Causality
8.7.5 Minimum-Phase, Maximum-Phase, and Mixed-Phase Systems
8.7.6 Inverse System
8.7.7 All-Pass System
8.7.8 All-Pass and Minimum-Phase Decomposition
8.8 One-Sided z-Transform
8.8.1 Solution of Difference Equations with Initial Conditions
8.9 Solution of State-Space Equations Using z-Transform
8.10 Transformations Between Continuous-Time Systems and Discrete-Time Systems
8.10.1 Impulse Invariance Method
8.10.2 Bilinear Transformation
8.11 Problems
8.12 MATLAB Exercises
Further Reading
Index
1. Introduction


1.1 What is a Signal?
A signal is defined as any physical quantity that carries information and varies with time,
space, or any other independent variable or variables. The world of science and
engineering is filled with signals: speech, television, images from remote space probes,
voltages generated by the heart and brain, radar and sonar echoes, seismic vibrations,
signals from GPS satellites, signals from human genes, and countless other applications.

1.2 What is a System?


A system is defined mathematically as a transformation that maps an input signal x(t) into
an output signal y(t), as illustrated in Figure 1.1. This can be denoted as

y(t) = ℜ{x(t)}   (1.1)

where ℜ is an operator.

Figure 1.1 Schematic representation of a system

For example, a communication system is itself a combination of a transmitter, a channel,
and a receiver. A communication system takes a speech signal as input and transforms it into
an output signal, which is an estimate of the original input signal.

1.3 Elementary Operations on Signals


In many practical situations, signals related by a modification of the independent variable t
are to be considered. The useful elementary operations on signals, including time shifting,
time scaling, and time reversal, are discussed in the following subsections.
1.3.1 Time Shifting
Consider a signal x(t). If it is time shifted by t0, the time-shifted version of x(t) is
represented by x(t − t0). The two signals x(t) and x(t − t0) are identical in shape but time
shifted relative to each other. If t0 is positive, the signal x(t) is delayed (right shifted) by t0.
If t0 is negative, the signal is advanced (left shifted) by t0. Signals related in this
fashion arise in applications such as sonar, seismic signal processing, radar, and GPS. The
time-shifting operation is illustrated in Figure 1.2. If the signal x(t) shown in Figure 1.2(a)
is shifted by t0 = 2 seconds, x(t − 2) is obtained as shown in Figure 1.2(b), i.e., x(t) is
delayed (right shifted) by 2 seconds. If the signal is advanced (left shifted) by 2 seconds,
x(t + 2) is obtained as shown in Figure 1.2(c).

Figure 1.2 Illustration of time shifting

1.3.2 Time Scaling


The compression or expansion of a signal is known as time scaling. The time-scaling
operation is illustrated in Figure 1.3. If the signal x(t) shown in Figure 1.3(a) is
compressed in time by a factor of 2, x(2t) is obtained as shown in Figure 1.3(b). If the signal
x(t) is expanded by a factor of 2, x(t/2) is obtained as shown in Figure 1.3(c).
Figure 1.3 Illustration of time scaling

1.3.3 Time Reversal


The signal x(−t) is called the time reversal of the signal x(t). It is obtained from
the signal x(t) by a reflection about t = 0. The time-reversal operation is illustrated in
Figure 1.4. The signal x(t) is shown in Figure 1.4(a), and its time-reversed signal x(−t) is
shown in Figure 1.4(b).

Figure 1.4 Illustration of time reversal
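The three elementary operations above can be sketched in code by treating a signal as a function of t and composing it with a change of the independent variable. A minimal Python sketch (the triangular pulse x(t) below is a hypothetical test signal, not a figure from the text):

```python
import numpy as np

# hypothetical test signal: a triangular pulse of unit height centered at t = 0
def x(t):
    t = np.asarray(t, dtype=float)
    return np.maximum(1.0 - np.abs(t), 0.0)

x_delayed  = lambda t: x(t - 2)   # time shifting: delayed (right shifted) by 2 s
x_advanced = lambda t: x(t + 2)   # time shifting: advanced (left shifted) by 2 s
x_compress = lambda t: x(2 * t)   # time scaling: compressed by a factor of 2
x_expanded = lambda t: x(t / 2)   # time scaling: expanded by a factor of 2
x_reversed = lambda t: x(-t)      # time reversal: reflection about t = 0

# the peak of x(t), originally at t = 0, moves to t = 2 when delayed by 2 s
print(x_delayed(2.0))   # 1.0
```

Note that each operation changes only the argument of x; the shape of the waveform itself is untouched.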

Example 1.1 Consider the following signals x(t) and xi(t), i = 1, 2, 3. Express the xi(t) using
only x(t) and its time-shifted, time-scaled, and time-reversed versions.
Solution

1.4 Classification of Signals


Signals can be classified in several ways. Some important classifications of signals are:

1.4.1 Continuous-Time and Discrete-Time Signals


Continuous-time signals are defined for a continuum of values of the independent
variable. In the case of continuous-time signals, the independent variable t is continuous, as
shown in Figure 1.5(a).

Figure 1.5 (a) Continuous-time signal, (b) discrete-time signal

Discrete-time signals are defined only at discrete times; for these signals, the
independent variable n takes on only a discrete set of values, as shown in Figure 1.5(b).

1.4.2 Analog and Digital Signals


An analog signal is a continuous-time signal whose amplitude can take any value in a
continuous range. A digital signal is a discrete-time signal that can only have a discrete set
of values. The process of converting a discrete-time signal into a digital signal is referred
to as quantization.

1.4.3 Periodic and Aperiodic Signals


A signal x(t) is said to be periodic with period T (a positive nonzero value) if it exhibits
periodicity, i.e., x(t + T) = x(t) for all values of t, as shown in Figure 1.6(a). A periodic
signal has the property that it is unchanged by a time shift of T.

Figure 1.6 (a) Periodic signal, (b) aperiodic signal

A signal that does not satisfy the above periodicity property is called an aperiodic
signal. The signal shown in Figure 1.6(b) is an example of an aperiodic signal.
Example 1.2 For each of the following signals, determine whether it is periodic or
aperiodic. If periodic, find the period.
(i) x(t) = 5 sin(2πt)

(ii) x(t) = 1 + cos(4t + 1)

(iii) x(t) = e −2t

(iv)

(v)
Solution

(i) It is a periodic signal with fundamental period T = 2π/2π = 1 second.

(ii) It is periodic with fundamental period T = 2π/4 = π/2 seconds.

(iii) It is aperiodic, since a decaying exponential never repeats itself.

(iv) It is periodic.

(v) Since x(t) is a complex exponential multiplied by a decaying exponential, it is
aperiodic.

Example 1.3 If a continuous-time signal x(t) is periodic, for each of the following
signals, determine whether it is periodic or aperiodic. If periodic, find the period.

(i) x 1(t) = x(2t)


(ii) x 2(t) = x(t/2)
Solution Let T be the period of x(t). Then, we have x(t + T) = x(t).

(i) For x1(t) to be periodic, x1(t + T1) = x1(t), i.e., x(2t + 2T1) = x(2t).

Since x(2(t + T/2)) = x(2t + T) = x(2t), x1(t) is periodic with fundamental period T/2.

As x1(t) is a compressed version of x(t) by half, the period of x1(t) is also compressed
by half.

(ii) For x2(t) to be periodic, x2(t + T2) = x2(t), i.e., x((t + T2)/2) = x(t/2).

Since x2(t + 2T) = x((t + 2T)/2) = x(t/2 + T) = x(t/2) = x2(t), x2(t) is periodic with
fundamental period 2T. As x2(t) is an expanded version of x(t) by two, the period of x2(t)
is also twice the period of x(t).
Proposition 1.1 Let continuous-time signals x1(t) and x2(t) be periodic signals with
fundamental periods T1 and T2, respectively. The signal x(t) that is a linear combination
of x1(t) and x2(t) is periodic if and only if there exist integers m and k such that mT1 = kT2,
that is,

T1/T2 = k/m (a rational number)   (1.2)

The fundamental period of x(t) is given by mT1 = kT2, provided that the values of m and k
are chosen such that the greatest common divisor (gcd) of m and k is 1.
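When T1 and T2 are given as exact fractions, Proposition 1.1 reduces to rational arithmetic. A small Python sketch (the helper name combined_period is ours, not the book's):

```python
from fractions import Fraction

def combined_period(T1: Fraction, T2: Fraction) -> Fraction:
    """Fundamental period of a linear combination of two periodic signals
    whose fundamental periods have a rational ratio T1/T2 = k/m."""
    ratio = T1 / T2                              # exact rational arithmetic
    k, m = ratio.numerator, ratio.denominator    # Fraction guarantees gcd(k, m) == 1
    return m * T1                                # equals k * T2, the common period

# T1 = 1/2 s and T2 = 2/3 s (as in Example 1.4(i)) give a fundamental period of 2 s
print(combined_period(Fraction(1, 2), Fraction(2, 3)))   # 2
```

Using Fraction sidesteps the floating-point question of whether a ratio is "exactly" rational; an irrational ratio simply has no such representation.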
Example 1.4 For each of the following signals, determine whether it is periodic or
aperiodic. If periodic, find the period.

(i) x(t) = 2 cos(4πt) + 3 sin(3πt)

(ii) x(t) = 2 cos(4πt) + 3 sin(10t)


Solution

(i) Let x1(t) = 2 cos(4πt) and x2(t) = 3 sin(3πt).

The fundamental period of x1(t) is T1 = 2π/4π = 1/2 second.

The fundamental period of x2(t) is T2 = 2π/3π = 2/3 second.

The ratio T1/T2 = 3/4 is a rational number. Hence, x(t) is a periodic signal.

The fundamental period of the signal x(t) is 4T1 = 3T2 = 2 seconds.

(ii) Let x1(t) = 2 cos(4πt) and x2(t) = 3 sin(10t).

The fundamental period of x1(t) is T1 = 2π/4π = 1/2 second.

The fundamental period of x2(t) is T2 = 2π/10 = π/5 second.

The ratio T1/T2 = 5/(2π) is not a rational number. Hence, x(t) is an aperiodic signal.

Example 1.5 Consider the signals

Determine whether x 3(t) = x 1(t)x 2(t) is periodic or aperiodic. If periodic, find the period.
Solution Decomposing signals x 1(t) and x 2(t) into sums of exponentials gives

Then,

It is seen that all the complex exponentials are powers of e^{j(π/5)t}. Hence, x3(t) is
periodic, with fundamental period T = 2π/(π/5) = 10 seconds.

1.4.4 Even and Odd Signals


A continuous-time signal is said to be even when x(−t) = x(t) and odd when
x(−t) = −x(t). Even signals are also known as symmetric signals, and odd signals as
antisymmetric signals. Examples of even and odd signals are shown in Figure 1.7(a) and
Figure 1.7(b), respectively.
Figure 1.7 (a) Even signal, (b) odd signal

Any signal can be expressed as the sum of its even and odd parts as

x(t) = xe(t) + xo(t)   (1.3)

The even part and odd part of a signal are

xe(t) = (1/2)[x(t) + x(−t)]   (1.3a)

xo(t) = (1/2)[x(t) − x(−t)]   (1.3b)

Some important properties of even and odd signals are:

(i) Multiplication of an even signal by an odd signal produces an odd signal.


Proof Let y(t) = xe(t)xo(t). Then

y(−t) = xe(−t)xo(−t) = xe(t)[−xo(t)] = −y(t)   (1.4)

Hence, y(t) is an odd signal.

(ii) Multiplication of an even signal by an even signal produces an even signal.


Proof Let y(t) = xe(t)xe(t). Then

y(−t) = xe(−t)xe(−t) = xe(t)xe(t) = y(t)   (1.5)

Hence, y(t) is an even signal.

(iii) Multiplication of an odd signal by an odd signal produces an even signal.


Proof Let y(t) = xo(t)xo(t). Then

y(−t) = xo(−t)xo(−t) = [−xo(t)][−xo(t)] = y(t)

Hence, y(t) is an even signal.


It is seen from Figure 1.7(a) that the even signal is symmetric about the vertical axis, and
hence

∫_{−a}^{a} xe(t) dt = 2 ∫_{0}^{a} xe(t) dt   (1.6)

From Figure 1.7(b), it is also obvious that

∫_{−a}^{a} xo(t) dt = 0   (1.7)

Eqs. (1.6) and (1.7) are valid provided there is no impulse or derivative of an impulse at
the origin. These properties prove to be useful in many applications.
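The decomposition and the properties above can be verified numerically on a symmetric time grid. A Python sketch (the test signal below is a hypothetical example, not one from the text):

```python
import numpy as np

t = np.linspace(-5, 5, 1001)              # symmetric time grid
x = np.exp(-0.5 * t) * np.sin(2 * t)      # hypothetical test signal

# even/odd decomposition, Eqs. (1.3a)-(1.3b); on a symmetric grid,
# reversing the sample order x[::-1] plays the role of x(-t)
xe = 0.5 * (x + x[::-1])
xo = 0.5 * (x - x[::-1])

assert np.allclose(xe + xo, x)            # Eq. (1.3): x(t) = xe(t) + xo(t)
assert np.allclose(xe, xe[::-1])          # xe is even
assert np.allclose(xo, -xo[::-1])         # xo is odd

# property (i): the product of an even and an odd signal is odd
p = xe * xo
assert np.allclose(p, -p[::-1])

# Eq. (1.7): the integral of an odd signal over a symmetric interval vanishes
print(abs(np.sum(xo) * (t[1] - t[0])) < 1e-9)   # True
```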
Example 1.6 Find the even and odd parts of x(t) = e^{j2t}.
Solution From Eq. (1.3), x(t) = xe(t) + xo(t),

where xe(t) = (1/2)[e^{j2t} + e^{−j2t}] = cos 2t and xo(t) = (1/2)[e^{j2t} − e^{−j2t}] = j sin 2t.
Example 1.7 If xe(t) and xo(t) are the even and odd parts of x(t), show that

∫_{−∞}^{∞} x²(t) dt = ∫_{−∞}^{∞} xe²(t) dt + ∫_{−∞}^{∞} xo²(t) dt   (1.8)

Solution

∫_{−∞}^{∞} x²(t) dt = ∫_{−∞}^{∞} [xe(t) + xo(t)]² dt = ∫_{−∞}^{∞} xe²(t) dt + ∫_{−∞}^{∞} xo²(t) dt + 2 ∫_{−∞}^{∞} xe(t)xo(t) dt

Since 2 ∫_{−∞}^{∞} xe(t)xo(t) dt = 0 (the integrand is odd), Eq. (1.8) follows.

Example 1.8 For each of the following signals, determine whether it is even, odd, or
neither (Figure 1.8).

Figure 1.8 Signals of example 1.8

Solution By definition a signal is even if and only if x(t) = x(−t), while a signal is odd if
and only if x(t) = −x(−t).

(a) It is readily seen that x(t) ≠ x(−t) for all t and x(t) ≠ −x(−t) for all t; thus x(t) is
neither even nor odd.
(b) Since x(t) is symmetric about t = 0, x(t) is even.

(c) Since x(t) = −x(−t), x(t) is odd in this case.


1.4.5 Causal, Noncausal, and Anticausal Signal
A causal signal is one that has zero values for negative time, i.e., for t < 0. A signal is
noncausal if it has nonzero values for both negative and positive times. An anticausal
signal has zero values for positive time, i.e., for t > 0. Examples of causal, noncausal, and
anticausal signals are shown in Figures 1.9(a), 1.9(b), and 1.9(c), respectively.

Figure 1.9 (a) Causal signal, (b) noncausal signal, (c) anticausal signal

Example 1.9 Consider the following noncausal continuous-time signal. Obtain its
realization as a causal signal.

Solution

1.4.6 Energy and Power Signals


A signal x(t) with finite energy, which means that its amplitude → 0 as |t| → ∞, is said to
be an energy signal, whereas a signal x(t) with finite and nonzero power is said to be a power
signal. The instantaneous power p(t) of a signal x(t) can be expressed as

p(t) = x²(t)   (1.9)

The total energy of a continuous-time signal x(t) can be defined as

E = ∫_{−∞}^{∞} x²(t) dt   (1.10a)

and, for a complex-valued signal,

E = ∫_{−∞}^{∞} |x(t)|² dt   (1.10b)

Since power is the time average of energy, the average power is defined as
P = lim_{T→∞} (1/2T) ∫_{−T}^{T} |x(t)|² dt. The signal x(t) expressed by Eq. (1.11), which is shown in
Figure 1.10(a), is an example of an energy signal.
Figure 1.10(a), is an example of energy signal.

(1.11)

Figure 1.10(a) Energy signal

The energy of the signal is given by

The signal x(t) shown in Figure 1.10(b) is an example of a power signal. The signal is
periodic with period 2. Hence, averaging x²(t) over an infinitely large time interval is the
same as averaging it over one period, i.e., over 2 seconds. Thus, the average power P is
Figure 1.10(b) Power signal

Thus, an energy signal has finite energy and zero average power, whereas a power
signal has finite power and infinite energy.
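These definitions can be checked numerically. A rough Python sketch for an energy signal x(t) = 2e^{−2|t|} (analytically E = 2) and a power signal x(t) = 4 sin(2πt) (analytically P = A²/2 = 8); the grid lengths and tolerances are arbitrary choices:

```python
import numpy as np

# energy of x(t) = 2*exp(-2|t|): E = ∫ 4 e^{-4|t|} dt = 2 * (4/4) = 2
t = np.linspace(-20, 20, 400001)
x = 2 * np.exp(-2 * np.abs(t))
dt = t[1] - t[0]
E = np.sum(x**2) * dt              # rectangle-rule approximation of Eq. (1.10a)

# average power of x(t) = 4 sin(2πt), averaged over one period (T = 1 s)
tp = np.linspace(0, 1, 100001, endpoint=False)
P = np.mean((4 * np.sin(2 * np.pi * tp))**2)

print(round(E, 3), round(P, 3))    # 2.0 8.0
```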
Example 1.10 Compute the energy and power for the following signals, and determine
whether each signal is an energy signal, a power signal, or neither.

(i) x(t) = 4sin(2πt), −∞ < t < ∞.

(ii) x(t) = 2e −2|t|, −∞ < t < ∞.

(iii)

(iv) x(t) = e −at for real value of a

(v) x(t) = cos (t)

(vi)
Solution (i) x(t) = 4 sin(2πt)

The average power is P = (1/2)(4)² = 8 W.

The energy of the signal is infinite, and its average power is finite; x(t) is a power
signal.

(ii) x(t) = 2e^{−2|t|}

The energy is E = ∫_{−∞}^{∞} 4e^{−4|t|} dt = 2 × (4/4) = 2 J.

The energy of the signal is finite, and its average power is zero; x(t) is an energy
signal.

(iii)
Using L’Hospital’s rule, we see that the power of the signal is zero. That is

The energy of the signal is infinite, and its average power is zero; x(t) is neither an energy
signal nor a power signal.

(iv) x(t) = e −at for real value of a

Using L’Hospital’s rule, we see that the power of the signal is infinite. That is,
The energy of the signal is infinite, and its average power is infinite; x(t) is neither
an energy signal nor a power signal.

(v) x(t) = cos(t)

The average power is P = 1/2 W.

The energy of the signal is infinite, and its average power is finite; x(t) is a power
signal.

(vi)
.

The energy of the signal is infinite and its average power is finite; x(t) is a power
signal.
Example 1.11 Consider the following signals, and determine the energy of each signal
shown in Figure 1.11. How does the energy change when the signal is transformed by time
reversal, sign change, time shifting, or doubling?
Figure 1.11 Signals of example 1.11

Solution

Time reversal, sign change, and time shifting do not affect the signal energy.
Doubling the signal quadruples its energy. Similarly, it can be shown that the energy of
kx(t) is k²Ex.
Proposition 1.2 The power of the sum of two sinusoids of different frequencies is the sum
of the powers of the individual sinusoids, regardless of their phases.
Proof Let us consider a sinusoidal signal x(t) = A cos(Ωt + θ). The power of x(t) is given
by

P = lim_{T→∞} (1/2T) ∫_{−T}^{T} A² cos²(Ωt + θ) dt = A²/2   (1.12)

Thus, a sinusoidal signal of amplitude A has power A²/2 regardless of the values of its
frequency Ω and phase θ.
Now, consider the sum of two sinusoidal signals with distinct frequencies Ω1 ≠ Ω2:

x(t) = A1 cos(Ω1t + θ1) + A2 cos(Ω2t + θ2)

The power of the sum of the two sinusoidal signals is given by

P = lim_{T→∞} (1/2T) ∫_{−T}^{T} A1² cos²(Ω1t + θ1) dt + lim_{T→∞} (1/2T) ∫_{−T}^{T} A2² cos²(Ω2t + θ2) dt + lim_{T→∞} (1/2T) ∫_{−T}^{T} 2A1A2 cos(Ω1t + θ1) cos(Ω2t + θ2) dt

The first and second integrals on the right-hand side are the powers of the two
sinusoidal signals, respectively, and the third integral becomes zero, since

2A1A2 cos(Ω1t + θ1) cos(Ω2t + θ2) = A1A2[cos((Ω1 + Ω2)t + θ1 + θ2) + cos((Ω1 − Ω2)t + θ1 − θ2)]

averages to zero for Ω1 ≠ Ω2. Hence,
P = A1²/2 + A2²/2   (1.13)

This result is easily extended to the sum of any number of sinusoids with distinct frequencies.
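The additivity of sinusoid powers can be checked numerically by averaging over a long window; the amplitudes, frequencies, and phases below are arbitrary choices:

```python
import numpy as np

# a window that is an integer number of periods of every component involved
t = np.linspace(0, 2 * np.pi * 100, 10**6, endpoint=False)
x1 = 2 * np.cos(3 * t + 0.7)     # power A1^2/2 = 2
x2 = 5 * np.cos(4 * t - 1.2)     # power A2^2/2 = 12.5

P1 = np.mean(x1**2)
P2 = np.mean(x2**2)
P = np.mean((x1 + x2)**2)        # cross term averages to zero

print(round(P1, 3), round(P2, 3), round(P, 3))   # 2.0 12.5 14.5
```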

1.4.7 Deterministic and Random Signals


For any given time, the values of a deterministic signal are completely specified, as shown in
Figure 1.12(a). Thus, a deterministic signal can be described mathematically as a function
of time. A random signal takes statistically characterized random values at any given time,
as shown in Figure 1.12(b). Noise is a common example of a random signal.

Figure 1.12 (a) Deterministic signal, (b) random signal

1.5 Basic Continuous-Time Signals


1.5.1 The Unit Step Function
The unit step function is defined as

u(t) = 1 for t ≥ 0, and u(t) = 0 for t < 0   (1.14)

which is shown in Figure 1.13.

Figure 1.13 Unit step function

It should be noted that u(t) is discontinuous at t = 0.

1.5.2 The Unit Impulse Function


The unit impulse function, also known as the Dirac delta function and often referred to
simply as the delta function, is defined by

δ(t) = 0 for t ≠ 0   (1.15a)

∫_{−∞}^{∞} δ(t) dt = 1   (1.15b)

The delta function shown in Figure 1.14(b) can be viewed as the limit of the
rectangular pulse of width Δ and height 1/Δ shown in Figure 1.14(a):

δ(t) = lim_{Δ→0} (1/Δ) rect(t/Δ)   (1.16)

Figure 1.14 (a) Rectangular pulse, (b) unit impulse

As the width Δ → 0, the rectangular function converges to the impulse function δ(t)
with an infinite height at t = 0, while the total area remains constant at one.
Some Special Properties of the Impulse Function
Sampling property
If an arbitrary signal x(t) is multiplied by a shifted impulse function, the product is
given by

x(t)δ(t − t0) = x(t0)δ(t − t0)   (1.17a)

implying that the multiplication of a continuous-time signal and an impulse function
produces an impulse function whose area equals the value of the continuous-time
function at the location of the impulse. Also, it follows that for t0 = 0,

x(t)δ(t) = x(0)δ(t)   (1.17b)

Shifting property

∫_{−∞}^{∞} x(t)δ(t − t0) dt = x(t0)   (1.18)

Scaling property

δ(at) = (1/|a|) δ(t)   (1.19)

The unit impulse function can be obtained by taking the derivative of the unit step
function:

δ(t) = du(t)/dt   (1.20)

The unit step function is obtained by integrating the unit impulse function:

u(t) = ∫_{−∞}^{t} δ(τ) dτ   (1.21)
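The limiting construction of Eq. (1.16) and the shifting property of Eq. (1.18) can be checked numerically by replacing δ(t) with a narrow rectangular pulse; the pulse width, grid, and test signal below are arbitrary choices:

```python
import numpy as np

# rectangular-pulse approximation of δ(t): width Δ, height 1/Δ, unit area
def delta_approx(t, width=1e-3):
    return np.where(np.abs(t) < width / 2, 1.0 / width, 0.0)

t = np.linspace(-1, 3, 400001)
dt = t[1] - t[0]
x = np.cos(t)                 # a hypothetical smooth test signal
t0 = 2.0

# Eq. (1.18): ∫ x(t) δ(t - t0) dt ≈ x(t0) for small Δ
val = np.sum(x * delta_approx(t - t0)) * dt
print(abs(val - np.cos(t0)) < 0.01)   # True
```

The approximation improves as the pulse width shrinks (with the grid refined accordingly), mirroring the Δ → 0 limit.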

1.5.3 The Ramp Function


The ramp function is defined as

r(t) = t for t ≥ 0, and r(t) = 0 for t < 0   (1.22a)

which can also be written as

r(t) = t u(t)   (1.22b)

The ramp function is shown in Figure 1.15.

Figure 1.15 The ramp function


1.5.4 The Rectangular Pulse Function
The continuous-time rectangular pulse function is defined as

rect(t) = 1 for |t| ≤ 1/2, and rect(t) = 0 otherwise   (1.23)

which is shown in Figure 1.16.

Figure 1.16 The rectangular pulse function

1.5.5 The Signum Function


The signum function, also called the sign function, is defined as

sgn(t) = 1 for t > 0, 0 for t = 0, and −1 for t < 0   (1.24)

which is shown in Figure 1.17.

Figure 1.17 The signum function

1.5.6 The Real Exponential Function


A real exponential function is defined as

x(t) = Ae^{σt}   (1.25)

where both A and σ are real. If σ is positive, x(t) is a growing exponential signal. The
signal x(t) is exponentially decaying for negative σ. For σ = 0, the signal x(t) is equal to a
constant. Exponentially decaying and exponentially growing signals are shown in
Figure 1.18(a) and (b), respectively.

Figure 1.18 Real exponential function. (a) Decaying, (b) growing

1.5.7 The Complex Exponential Function


A complex exponential function is defined as

x(t) = Ae^{st}, where s = σ + jΩ   (1.26)

Hence,

x(t) = Ae^{σt}e^{jΩt}   (1.26a)

Using Euler's identity,

e^{jΩt} = cos Ωt + j sin Ωt   (1.27)

Substituting Eq. (1.27) in Eq. (1.26a), we obtain

x(t) = Ae^{σt} cos Ωt + jAe^{σt} sin Ωt   (1.28)

The real cosine and sine functions can be expressed in terms of complex exponentials
by the trigonometric identities

cos Ωt = (1/2)(e^{jΩt} + e^{−jΩt}),  sin Ωt = (1/2j)(e^{jΩt} − e^{−jΩt})

1.5.8 The Sinc Function


The continuous-time sinc function is defined as

sinc(t) = sin(πt)/(πt)
(1.29)

which is shown in Figure 1.19.

Figure 1.19 The sinc function

Example 1.12 State whether the following signals are causal, anticausal, or noncausal.

(a) x(t) = e −2t u(t)


(b) x(t) = tu(t) − t(u(t − 1) + e (1−t) u(t − 1))

(c) x(t) = e t cos (2πt)u(1 − t)


Solution (a)

It is causal since x(t) = 0 for t < 0.

(b)
It is causal since x(t) = 0 for t < 0.
(c)
It is noncausal since x(t) ≠ 0 for t < 0.
Example 1.13 Determine and plot the even and odd components of the following
continuous-time signal

Solution
Example 1.14 Simplify the following expressions:

(a)

(b)

(c)
Solution

(a)
(b)

(c)

1.6 Generation of Continuous-Time Signals Using MATLAB


An exponentially damped sinusoidal signal can be generated using the following
MATLAB command:
x = A*sin(2*pi*f0*t + theta).*exp(-a*t), where a is positive for a decaying
exponential.
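The same construction can be sketched in Python, with NumPy standing in for the MATLAB vector operations (A, f0, theta, and a are the parameters named above; the values match Example 1.15 below):

```python
import numpy as np

# x(t) = A*sin(2*pi*f0*t + theta)*exp(-a*t); a > 0 gives a decaying envelope
A, f0, theta, a = 5.0, 1.0, 0.0, 0.4
t = np.arange(-10.0, 10.0, 0.01)
x = A * np.sin(2 * np.pi * f0 * t + theta) * np.exp(-a * t)

# The envelope A*exp(-a*t) bounds the oscillation at every instant
assert np.all(np.abs(x) <= A * np.exp(-a * t) + 1e-9)
```

For t < 0 the envelope grows, and for t > 0 it decays, which is why the plotted waveform in Figure 1.20 is largest on the left of the time axis.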
Example 1.15 Write a MATLAB program to generate the following exponentially
damped sinusoidal signal.
   x(t) = 5 sin (2πt)e −0.4t    − 10 ≤ t ≤ 10
Solution The following MATLAB program generates the exponentially damped
sinusoidal signal as shown in Figure 1.20.

Figure 1.20 Exponentially damped sinusoidal signal with exponential parameter a = 0.4.

MATLAB program to generate exponentially damped sinusoidal signal


Example 1.16 Generate the unit step function over [−5, 5] using MATLAB.
Solution The following MATLAB program generates the unit step function over [−5,5]
as shown in Figure 1.21.

Figure 1.21 Unit step function

MATLAB program to generate unit step function over [−5,5]

Example 1.17 Generate the following rectangular pulse function rect(t) using
MATLAB:

Solution The following MATLAB program generates the rectangular pulse function as
shown in Figure 1.22.
Figure 1.22 Rectangular pulse function

MATLAB program to generate rectangular pulse function

1.7 Typical Signal Processing Operations


1.7.1 Correlation
Correlation is used to compare a reference signal with one or more other
signals to determine the similarity between them and to extract additional information
based on that similarity. Applications of cross-correlation include cross-spectral analysis,
detection of signals buried in noise, pattern matching, and delay measurement.
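Delay measurement by cross-correlation can be sketched as follows (a Python illustration; the signals, sampling rate, and delay are hypothetical, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000                        # sampling rate in Hz, illustrative
ref = rng.standard_normal(200)   # reference signal

# Received signal: the reference delayed by 50 samples, buried in noise
delay = 50
recv = np.zeros(400)
recv[delay:delay + len(ref)] = ref
recv += 0.5 * rng.standard_normal(len(recv))

# Cross-correlate; the lag of the correlation peak estimates the delay
corr = np.correlate(recv, ref, mode="valid")
est = int(np.argmax(corr))
print(est, est / fs)
```

Even with substantial noise, the correlation peak sits at the true lag, which is the principle behind both radar ranging and the GPS code tracking described later in this chapter.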

1.7.2 Filtering
Filtering is basically a frequency-domain operation. A filter is used to pass a certain band of
frequency components without any distortion and to block other frequency components.
The range of frequencies that is allowed to pass through the filter is called the passband,
and the range of frequencies that is blocked by the filter is called the stopband. A low-pass
filter passes all low-frequency components below a certain specified frequency Ω c , called
the cutoff frequency, and blocks all high-frequency components above Ω c . A high-pass
filter passes all high-frequency components above a certain cutoff frequency Ω c and
blocks all low-frequency components below Ω c . A band-pass filter passes all frequency
components between two cutoff frequencies Ωc1 and Ωc2, where Ωc1 < Ωc2, and blocks all
frequency components below Ωc1 and above Ωc2. A band-stop
filter blocks all frequency components between two cutoff frequencies Ωc1 and Ωc2, where
Ωc1 < Ωc2, and passes all frequency components below Ωc1 and above Ωc2. A notch filter
is a narrow band-stop filter used to suppress a particular
frequency, called the notch frequency.
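The passband/stopband idea can be demonstrated with an ideal (brick-wall) low-pass filter applied in the frequency domain (a Python sketch; the 5 Hz and 200 Hz components and the 50 Hz cutoff are illustrative choices):

```python
import numpy as np

fs = 1000
t = np.arange(0, 1.0, 1 / fs)
# A 5 Hz component (in the passband) plus a 200 Hz component (in the stopband)
x = np.sin(2 * np.pi * 5 * t) + np.sin(2 * np.pi * 200 * t)

# Ideal low-pass: zero every spectral component above the 50 Hz cutoff
X = np.fft.rfft(x)
f = np.fft.rfftfreq(len(x), 1 / fs)
X[f > 50] = 0
y = np.fft.irfft(X, n=len(x))

# The output is close to the 5 Hz component alone
err = np.max(np.abs(y - np.sin(2 * np.pi * 5 * t)))
print(err)
```

Practical filters cannot realize such an abrupt cutoff; they trade it for a transition band, a topic developed in the filter-design chapters.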

1.7.3 Modulation and Demodulation


Transmission media, such as cables and optical fibers, are used for transmission of signals
over long distances; each such medium has a bandwidth that is more suitable for the
efficient transmission of signals in the high-frequency range. Hence, for transmission over
such channels, it is necessary to transform the low-frequency signal to a high-frequency
signal by means of a modulation operation. The desired low-frequency signal is extracted
by demodulating the modulated high-frequency signal at the receiver end.

1.7.4 Transformation
A transform is a representation of signals in the frequency domain, and the inverse
transform converts the signals from the frequency domain back to the time domain. The
transformation enables the spectral analysis of a signal. From the knowledge of the
spectrum of a signal, the bandwidth required to transmit the signal can be determined. The
transform-domain representations provide additional insight into the behavior of the signal
and make it easy to design and implement algorithms, such as those for filtering,
convolution, and correlation.

1.7.5 Multiplexing and Demultiplexing


Multiplexing is used in situations where the transmission medium has a higher
bandwidth than the individual signals. Multiplexing is the process in
which multiple signals, coming from different sources, are combined and transmitted over
a single channel. Multiplexing is performed by a multiplexer placed at the transmitter end.
At the receiving end, a demultiplexer separates the composite signal, performing the
reverse process of multiplexing, and routes the separated signals to their corresponding
receivers or destinations.
In electronic communications, the two basic forms of multiplexing are time-division
multiplexing (TDM) and frequency-division multiplexing (FDM). In time-division
multiplexing, transmission time on a single channel is divided into non-overlapped time
slots. Data streams from different sources are divided into units with same size and
interleaved successively into the time slots. In frequency-division multiplexing (FDM),
numerous low-frequency narrow bandwidth signals are combined for transmission over a
single communication channel. A different frequency is assigned to each signal within the
main channel. Code-division multiplexing (CDM) is a communication networking
technique in which multiple data signals are combined for simultaneous transmission over
a common frequency band.
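The TDM idea of interleaving fixed-size units into time slots can be sketched directly (a Python illustration; the three sources and their sample values are hypothetical):

```python
# Time-division multiplexing sketch: one unit per slot from each source.
sources = [
    [10, 11, 12, 13],   # source A
    [20, 21, 22, 23],   # source B
    [30, 31, 32, 33],   # source C
]

# Multiplexer: interleave one unit from each source per frame
frame = [s[i] for i in range(4) for s in sources]

# Demultiplexer: route every 3rd slot back to its source
recovered = [frame[k::len(sources)] for k in range(len(sources))]
assert recovered == sources
print(frame)
```

The demultiplexer recovers each stream simply by knowing the slot assignment, which is why TDM requires the transmitter and receiver to share a common frame timing.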
1.8 Some Examples of Real-World Signals and Systems
1.8.1 Audio Recording System
An audio recording system shown in Figure 1.23(a) takes an audio or speech as input and
converts the audio signal into an electrical signal, which is recorded on a magnetic tape or
a compact disc. An example of recorded voice signal is shown in Figure 1.23(b).

Figure 1.23 (a) Audio recording system, (b) the recorded voice signal “don’t fail me again”

1.8.2 Global Positioning System


The satellite-based global positioning system (GPS) consists of a constellation of 24
satellites at high altitudes above the earth. Figure 1.24 shows an example of the GPS used
in air, sea, and land navigation. Signals from at least four satellites are required to find the
user position (X, Y, and Z) and the clock bias of the user receiver. The measurements
required in a GPS receiver for position finding are the ranges, i.e., the distances from the GPS
satellites to the user. The ranges are deduced from measured time or phase differences
based on a comparison between the received and receiver-generated signals. To measure
the time, the replica sequence generated in the receiver is compared with the satellite
sequence.

Figure 1.24 A pictorial representation of GPS positioning

The correlator in the user GPS receiver determines which codes are being received, as
well as their exact timing. When the received and receiver-generated sequences are in
phase, the correlator supplies the time delay. Now, the range can be obtained by
multiplying the time delay by the velocity of light. For example, assuming the time delay
as 3 ms (equivalent to 3 blocks of the C/A code of satellite 12), the correlation of satellite
12 producing a peak after 3 ms [Rao06] is shown in Figure 1.25.

Figure 1.25 The correlation of satellite 12 producing a peak

1.8.3 Location-Based Mobile Emergency Services System


Mobile emergency services (MES) refer to the use of mobile positioning technology to
pinpoint mobile users for purposes of providing enhanced wireless emergency dispatch
services (including fire, ambulance, and police) to mobile phone users. In this emergency
service system, the user should have an assisted-GPS-enabled mobile handset. Network
service providers support the Mobile Location Protocol (MLP). The MLP serves as the
interface between a location server and a location services (LCS) client.
Whenever a user requires an emergency service, he or she dials the specified number for
emergency calling. Dialing the emergency service number generates an "emergency
location immediate service (ELIS)."
ELIS is used to retrieve the position of a mobile subscriber that is involved in an
emergency call or has initiated an emergency service in some other way. The service
consists of the following messages: emergency location immediate request (ELIR) and
emergency location immediate answer (ELIA).
When the user has dialed the emergency number, an emergency location immediate request is
sent to the network service provider.
After receiving the emergency location immediate request from the user, the network
service provider extracts the position information, sends an emergency location
immediate answer to the mobile user, and asks the user to select the service
from ambulance, police, and fire services. The mobile user selects the service that he or she
actually needs.
The service provider would find the nearest emergency service center and send an
emergency location report to that center. Whenever an emergency location report is
received, a mark will appear on the corresponding digital map. This mark will indicate the
user’s location. A schematic block diagram of location-based mobile emergency service
system and tracking a mobile user are shown in Figure 1.26(a) and (b), respectively.

Figure 1.26 (a) Schematic block diagram (b) tracking a mobile user of location-based mobile emergency service system

1.8.4 Heart Monitoring System


In cardiac cells of the human body, a small electrical current is produced by the movement
of sodium (Na+) and potassium (K+) ions. The electrical potential generated by these ions
is known as an electrocardiogram (ECG) signal. The ECG signal is used by physicians to
analyze heart conditions. The ECG signal is very small (normally 0.0001 to 0.003 volt).
These signals are within the frequency range of 0.05 to 100 Hz. A typical one cycle ECG
tracing of a normal heartbeat consists of a P wave, a QRS complex, and a T wave as
shown in Figure 1.27. A small U wave is normally visible in 50 to 75% of ECGs.

Figure 1.27 One cycle of ECG signal

The processing of ECG signal yields information, such as amplitude and timing,
required for a physician to analyze a patient’s heart condition. Detection of R-peaks and
computation of R-R interval of an ECG record are important requirements of
comprehensive arrhythmia analysis systems. Heart rate (in beats per minute) is computed as
60/(R-R interval in seconds).
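The heart-rate computation from detected R-peaks can be sketched as follows (a Python illustration; the R-peak times are hypothetical values, not from an actual ECG record):

```python
# Heart rate (beats per minute) from the R-R interval of an ECG record.
def heart_rate_bpm(rr_interval_s):
    """60 seconds divided by the R-R interval in seconds."""
    return 60.0 / rr_interval_s

# Hypothetical R-peak times (in seconds) detected in an ECG record
r_peaks = [0.2, 1.0, 1.8, 2.6]
rr = [b - a for a, b in zip(r_peaks, r_peaks[1:])]
rates = [heart_rate_bpm(i) for i in rr]
print(rates)  # each interval is 0.8 s, i.e. about 75 bpm
```

A beat-to-beat sequence of such rates, rather than a single average, is what reveals the heart-rate variability visible in Figure 1.28.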

An ECG signal with variations in heart rate is shown in Figure 1.28.


Figure 1.28 An ECG signal with variations in heart rate

1.8.5 Human Visual System


The human visual system (HVS) can perform a wide range of image processing
operations in a manner superior to anything we are currently able to execute with
computers. To perform such signal processing operations, we have to understand the way
the HVS works.
When the reflection from an object (a light ray) is observed by the eye, it first passes
through the cornea, then through the aqueous humor, the iris, the lens, and the vitreous
humor, finally reaching the retina. The retina consists of photosensitive cells called cones
and rods, which convert the incident light energy into neural signals that
are carried to the brain by the optic nerve (Figure 1.29).

Figure 1.29 Human visual system

1.8.6 Magnetic Resonance Imaging


When a strong oscillating magnetic field is applied at a certain frequency to a certain
part of the human body, the hydrogen atoms in the body emit radio-frequency waves,
which the MRI machine captures to form an image of that part of the body. An
MRI imaging system and an MRI image with a brain tumor are shown in Figure 1.30(a) and
(b), respectively.

Figure 1.30 (a) MRI imaging system, (b) MRI image with brain tumor

1.9 Problems

1. Classify the following continuous-time signals as periodic or aperiodic. If periodic,
determine the period.

(i)

(ii)

(iii)

(iv)

(v) x(t) = e j(4t + π/5)

(vi)
(vii) x(t) = cos(2πt)u(t)

(viii) x(t) = cos 2(t)

2. A periodic signal x 1(t) has a period 2, and another periodic signal x 2(t) has a period
3. Find the fundamental frequency and period for the signal y(t) = x 1(t) + x 2(t).

3. Classify the following continuous-time signals as even or odd signals or neither even
nor odd. Determine the power and energy for each in case of a power or energy signal.

(i) x(t) = (1+ t 2)cos 2(5t)

(ii) x(t) = u(t)

(iii) x(t) = tu(t)

(iv) x(t) = tsin(2t)

(v) x(t) = t + cos(2t)

(vi) x(t) = e −2t sin(2t)

4. Consider the following continuous-time signal:


Determine the values of T for which the signal is

(i) An even function

(ii) An odd function

5. Classify the following continuous-time signals as power or energy signals or neither.


Determine the power and energy for each in case of a power or energy signal.

(i) x(t) = sin(2πt)cos(πt)

(ii) x(t) = tu(t)

(iii) x(t) = e −3t u(t)

(iv) x(t) = e −j3t

6. Determine energy for each of the following signals and comment on the results.
7. What is the energy of the signal x(t) = cx(at − b), where a ≠ 0?

8. Verify that e −ct is neither an energy nor a power signal for a complex value of c with
nonzero real part.

9. Show that the energy of x(t) ± y(t) is E x + E y , if x(t) and y(t) are orthogonal.

10. Derive an expression for the power of the following continuous-time signal x(t) = A 1
cos(Ω1 t + θ 1) + A 2 cos(Ω2 t + θ 2) for Ω1 = Ω2.

11. Determine the power of the signal x(t) = Ae jΩt.

12. Find power for each of the following signals:


(i) x(t) = (5 + 3sin(2t))cos(5t)

(ii) x(t) = 5 cos(5t) cos (10t)


(iii) x(t) = 2sin(5t) cos (10t)

13. Find odd and even components for each of the following signals:
(a) x(t) = u(t)

(b) x(t) = e −at u(t)

14. Evaluate the following expressions:


(i)

(ii)

(iii)

(iv)

(v)

(vi)
1.10 MATLAB Exercises

1. Use MATLAB to generate the continuous-time signal shown in Figure 1.p1.1.

2. Generate and plot each of the following continuous-time signals using MATLAB:
(i) x(t) = 10 sin(2πt) cos (πt − 4) for −10 ≤ t ≤ 10

(ii) x(t) = 2e −0.1t sin(2πt) for −5 ≤ t ≤ 5

Figure p1.1 Signal of MATLAB exercise 1

Further Reading
1. Pierce, J.R., Noll, A.M.: Signals: The Science of Telecommunications. American Library, New Delhi (1960)
2. Lathi, B.P.: Linear Systems and Signals, 2nd edn. Oxford University Press, New York (2005)
3. Mandal, M., Asif, A.: Continuous and Discrete Time Signals and Systems. Cambridge University Press, Cambridge
(2007)

2. Continuous-Time Signals and Systems


K. Deergha Rao1
(1) Department of Electronics and Communication Engineering, Vasavi College of
Engineering (Affiliated to Osmania University), Hyderabad, Telangana, India


This chapter presents time-domain analysis of continuous-time systems. It develops the
representation of signals in terms of impulses. The notions of linearity, time-invariance,
causality, stability, memory, and invertibility are introduced. It is shown that the
input-output relationship for linear time-invariant (LTI) continuous systems is described in
terms of a convolution integral. The differential equation representation of LTI continuous
systems and classical solutions of differential equations are also presented. Next, block-
diagram representation of LTI continuous-time systems is introduced. Furthermore, a brief
discussion on singularity functions is provided. Finally, the state-space representation of
continuous-time LTI systems is described.

2.1 The Representation of Signals in Terms of Impulses


Consider a pulse or staircase approximation x̂(t) to a continuous-time signal x(t) as shown in
Figure 2.1. Then, the approximation signal can be expressed as the sum of all these pulse
signals. Define

δ Δ(t) = 1/Δ for 0 ≤ t < Δ, and δ Δ(t) = 0 otherwise
(2.1)

Figure 2.1 Representation of a signal in terms of impulses

Since δ Δ(t)Δ = 1, x̂(t) can be expressed as

x̂(t) = ∑k=−∞∞ x(kΔ)δ Δ(t − kΔ)Δ
(2.2)

As Δ approaches zero, the above approximation can be written as

x(t) = limΔ→0 ∑k=−∞∞ x(kΔ)δ Δ(t − kΔ)Δ
(2.3)

Also, as Δ → 0, the summation approaches an integral, and the pulse approaches the unit
impulse. Therefore, Eq. (2.3) can be rewritten as

x(t) = ∫−∞∞ x(τ)δ(t − τ)dτ
(2.4)

Thus, a continuous-time signal can be represented as a weighted superposition of shifted
impulses. Here the superposition is an integration, owing to the continuous nature of the input.
The weight x(τ)dτ on the impulse δ(t − τ) is determined from the value of the input signal
x(t) at the time of occurrence of each impulse.
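The staircase construction of Eqs. (2.2)-(2.3) can be illustrated numerically: as the pulse width Δ shrinks, the approximation error vanishes (a Python sketch; the test signal sin(t) and the two pulse widths are illustrative choices):

```python
import numpy as np

def staircase(x, t, step):
    """Pulse approximation x_hat(t): hold the value x(k*step) over each
    interval [k*step, (k+1)*step), as in Eq. (2.2)."""
    k = np.floor(t / step) * step
    return x(k)

x = np.sin
t = np.linspace(0, 2 * np.pi, 10001)

err_coarse = np.max(np.abs(staircase(x, t, 0.5) - x(t)))
err_fine = np.max(np.abs(staircase(x, t, 0.01) - x(t)))
print(err_coarse, err_fine)  # the error shrinks as the pulse width -> 0
```

The shrinking error is the numerical counterpart of the limit Δ → 0 that turns the sum of pulses into the sifting integral of Eq. (2.4).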

2.2 Continuous-Time Systems


2.2.1 Linear Systems
Let x 1(t) and x 2(t) be the inputs applied to a system characterized by the transformation
operator ℜ[], and let y 1(t) and y 2(t) be the system outputs. A linear system should satisfy the
principles of homogeneity and superposition. Hence, the following equations hold for a
linear system:

y 1(t) = ℜ[x 1(t)]
(2.5a)

y 2(t) = ℜ[x 2(t)]
(2.5b)

Principle of homogeneity:

ℜ[ax 1(t)] = ay 1(t)
(2.6a)
ℜ[bx 2(t)] = by 2(t)
(2.6b)

Principle of superposition:

ℜ[x 1(t) + x 2(t)] = y 1(t) + y 2(t)
(2.7)

Linearity:

ℜ[ax 1(t) + bx 2(t)] = ay 1(t) + by 2(t)
(2.8)

where a and b are arbitrary constants.

2.2.2 Time-Invariant System


A system is time invariant if the behavior and characteristics of the system are fixed over
time. A system is time invariant if a time shift in the input signal results in an identical
time shift in the output signal. For example, a time-invariant system should produce y(t − t
0) as the output when x(t − t 0) is the input. Mathematically it can be specified as

ℜ[x(t − t 0)] = y(t − t 0)
(2.9)

Example 2.1 Check for linearity and time-invariance of the following system

Solution Linearity:
Let x 1(t) and x 2(t) be two distinct inputs applied to the system, then

If an input equal to sum of the inputs ax 1(t), bx 2(t),


x(t) = ax 1(t) + bx 2(t) is applied, then

Hence, the system is linear.


Time-invariance:

The output y(t) of the system delayed by t 0 can be written as

For example, for an input x 1(t) = x(t − t 0), the output y1(t) can be written as

Hence, it is a time variant system .


Example 2.2 Check for linearity and time-invariance of the following system:

Solution Linearity:
Let x 1(t) and x 2(t) be two distinct inputs applied to the system, then

If an input equal to sum of the inputs ax 1(t), bx 2(t),x(t) = ax 1(t) + bx 2(t) is applied,
then

Hence, the system is nonlinear.


Time-invariance:

The output y(t) of the system delayed by t 0 can be written as


For example, for an input x 1(t) = x(t − t 0), the output y1(t) can be written as

Hence, it is a time-invariant system .


Example 2.3 Determine if the following continuous-time systems are linear or nonlinear:

(i)

(ii) 2y(t) + 3 = x(t)

(iii)

(iv)
Solution

(i) Let x 1(t) and x 2(t) be two distinct inputs applied to the system, then

If an input equal to sum of the inputs ax 1(t), bx 2(t),


x(t) = ax 1(t) + bx 2(t) is applied, then
Hence, the system is linear.

(ii)

Let x 1(t) and x 2(t) be two distinct inputs applied to the system, then

If an input equal to sum of the inputs ax 1(t), bx 2(t),x(t) = ax 1(t) + bx 2(t) is applied,
then

Hence, the system is nonlinear.

(iii) Let x 1(t) and x 2(t) be two distinct inputs applied to the system, then

If an input equal to sum of the inputs ax 1(t), bx 2(t),


x(t) = ax 1(t) + bx 2(t) is applied, then

Hence, the system is linear.

(iv) Let x 1(t) and x 2(t) be two distinct inputs applied to the system, then
If an input equal to sum of the inputs ax 1(t), bx 2(t),
x(t) = ax 1(t) + bx 2(t) is applied, then

The system is nonlinear.


Example 2.4 Determine if the following continuous-time systems are time invariant or
time variant :
(i) y(t) = x(−t),

(ii)

(iii) y(t) = x(4t)


(iv)

Solution

(i) y(t) = ℜ[x(t)] = x(−t)


The output y(t) of the system delayed by t 0 can be written as

For example, for an input x 1(t) = x(t − t 0), the output y 1(t) can be written as

Hence, it is a time-varying system .


(ii) Let x(t) = δ(t), then
Now, for an input x 1(t) = x(t − 6), the output y 1(t) can be written as

Hence, it is a time-varying system .

(iii) The output y(t) of the system delayed by t 0 can be written as

For example, for an input x 1(t) = x(t − t 0), the output y 1(t) can be written as

Hence, it is a time-varying system.

(iv) y(t) = ℜ[x(t)] = (2 + sin (t))x(t)


The output y(t) of the system delayed by t 0 can be written as

For example, for an input x 1(t) = x(t − t 0), the output y 1(t) can be written as

Hence, it is a time-varying system.


(v)

The output y(t) of the system delayed by t 0 can be written as

For example, for an input x 1(t) = x(t − t 0), the output y 1(t) can be written as

Hence, it is a time-invariant system.


Example 2.5 Consider an LTI system with the response y(t) as shown in Figure 2.2 to the
input signal x(t) = u(t) − u(t − 2).

Figure 2.2 Response y(t) to the input x(t)

Determine and sketch the response of the system to the following inputs:

(i) x 1(t) = x(t) − x(t − 4)

(ii) x 2(t) = x(t) + x(t + 2)


Solution

(i) x 1(t) = x(t) − x(t − 4)


Since it is an LTI system, the response y 1(t) to the input x 1(t) is given by y 1(t) = y
(t) − y(t − 4) as depicted in Figure 2.3.

Figure 2.3 Response y 1(t) to the input x 1(t)

(ii) x 2(t) = x(t) + x(t + 2)


Since it is an LTI system, the response y 2(t) to the input x 2(t) is given by y
2(t) = y(t) + y(t + 2) as depicted in Figure 2.4.

Figure 2.4 Response y 2(t) to the input x 2(t)

2.2.3 Causal System


The causal system generates the output depending upon present and past inputs only. A
causal system is non-anticipatory.

2.2.4 Stable System


When the system produces bounded output for bounded input, then the system is called
bounded-input and bounded-output stable. If the signal is bounded, then its magnitude will
always be finite.

2.2.5 Memory and Memoryless System


The output of a memory system at any specified time depends on the inputs at that
specified time and at other times. Such systems have memory or energy storage elements.
The system is said to be static or memoryless if its output depends upon the present input
only.

2.2.6 Invertible System


A system is said to be invertible if the input can be recovered from its output. Otherwise
the system is noninvertible system.

2.2.7 Step and Impulse Responses


If the input to the system is the unit impulse input δ(t), the system output is called the impulse
response and is denoted by h(t):

h(t) = ℜ[δ(t)]
(2.10)

If the input to the system is a unit step input u(t), then the system output is called the
step response s(t); that is,

s(t) = ℜ[u(t)]
(2.11)

2.3 The Convolution Integral


The output of a system for an input expressed as a weighted superposition as in Eq. (2.4) is
given by

y(t) = ℜ[∫−∞∞ x(τ)δ(t − τ)dτ]
(2.12)

From the linearity property of the system, Eq. (2.12) can be rewritten as

y(t) = ∫−∞∞ x(τ)ℜ[δ(t − τ)]dτ
(2.13)

For a time-invariant system, ℜ[δ(t − τ)] = h(t − τ). Hence, we obtain

y(t) = ∫−∞∞ x(τ)h(t − τ)dτ
(2.14)

Thus, the output y(t) of a linear time-invariant system to an arbitrary input x(t) is
obtained in terms of its response h(t) to the unit impulse input δ(t). Eq. (2.14) is referred to as the
convolution integral and is denoted by the symbol * as

y(t) = x(t) * h(t)
(2.15)
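The convolution integral can be approximated numerically by a Riemann sum (a Python sketch; the pair x(t) = u(t), h(t) = e^{-t}u(t) is an illustrative choice whose convolution has the well-known closed form (1 − e^{−t})u(t)):

```python
import numpy as np

dt = 0.001
t = np.arange(0, 10, dt)
x = np.ones_like(t)        # x(t) = u(t), sampled on t >= 0
h = np.exp(-t)             # h(t) = e^{-t} u(t)

# Riemann-sum approximation of y(t) = integral of x(tau) h(t - tau) dtau:
# the discrete convolution of the sample vectors, scaled by dt
y = np.convolve(x, h)[:len(t)] * dt

y_exact = 1 - np.exp(-t)   # known closed form of u(t) * e^{-t}u(t)
print(np.max(np.abs(y - y_exact)))
```

Halving dt roughly halves the error, reflecting the first-order accuracy of the rectangular-rule sum.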

2.3.1 Some Properties of the Convolution Integral


2.3.1.1 The Commutative Property

x(t) * h(t) = h(t) * x(t)
(2.16)

Proof This property can be proved by a change of variable.


By the definition of the convolution integral

x(t) * h(t) = ∫−∞∞ x(τ)h(t − τ)dτ
(2.17)

Then, substituting v = t − τ,

x(t) * h(t) = ∫−∞∞ x(t − v)h(v)dv = h(t) * x(t)
(2.18)

2.3.1.2 The Distributive Property

x(t) * [h 1(t) + h 2(t)] = x(t) * h 1(t) + x(t) * h 2(t)
(2.19)

Proof By the definition of the convolution integral

x(t) * [h 1(t) + h 2(t)] = ∫−∞∞ x(τ)[h 1(t − τ) + h 2(t − τ)]dτ = x(t) * h 1(t) + x(t) * h 2(t)
(2.20)

2.3.1.3 The Associative Property


[x 1(t) * x 2(t)] * x 3(t) = x 1(t) * [x 2(t) * x 3(t)]
(2.21)

Proof The left-hand side of the property can be expressed by

(2.22)

where x 1(t) * x 2(t) is expressed as a convolution integral. Expanding the second


convolution gives

(2.23)

Reversing the order of integration gives

(2.24)

Similarly, the right-hand side of the property can be written as

(2.25)

Now, it is to be shown that

(2.26)

In the right hand τ 1 integration, let v = τ 1 + τ 2 and dτ 1 = dv.


Then


(2.27)

Next, let u = t − τ 2, so that −dτ 2 = du.


Then

(2.28)

(2.29)

The above right-hand side and left-hand side integrals are the same except for change
of the variables. Hence, the associative property is proved.

2.3.1.4 Convolution with an Impulse

x(t) * δ(t) = x(t)
(2.30)

Proof By definition

x(t) * δ(t) = ∫−∞∞ x(τ)δ(t − τ)dτ
(2.31)

Since δ(t − τ) is an impulse at τ = t, by the sampling property of the impulse,

∫−∞∞ x(τ)δ(t − τ)dτ = x(t)
(2.32)

Hence, x(t) * δ(t) = x(t).
2.3.1.5 Convolution with Delayed Input and Delayed Impulse Response

If y(t) = x(t) * h(t), then

x(t − t 1) * h(t − t 2) = y(t − t 1 − t 2)
(2.33)

Proof By the convolution integral , we have

(2.34)

and

(2.35)

Let τ − t 1 = υ. Then τ = υ + t 1, and Eq. (2.35) becomes

(2.36)

It is observed that replacing t by t − t 1 − t 2 in Eq. (2.34), we obtain Eq. (2.36). Thus,


it is proved that

Example 2.6 Determine the continuous-time convolution of x(t) and h(t) for the
following:

(i) x(t) = u(t)


h(t) = u(t)

(ii) x(t) = u(t − a)


h(t) = u(t − b)

(iii) x(t) = u(t + 1) − u(t − 1)


h(t) = u(t + 1) − u(t − 1)

(iv) x(t) = e −(t−2) u(t − 2)


h(t) = u(t + 2)

(v)

(vi) x(t) = u(t)


h(t) = e −t u(t)

(vii) x(t) = 2(u(t) − u(t − 2))


h(t) = e −t/2 u(t)
Solution

(i)

(ii)
Since u(t)*u(t) = tu(t),
(iii)

as shown in Figure 2.5

Figure 2.5 The convolution of x(t) and h(t)

(iv)

Letting τ 1 = τ – 2,

(v)
Hence, y(t) is a shifted version of x(t) as shown in Figure 2.6.

Figure 2.6 The shifted version of x(t)


(vi)

(vii)

Example 2.7 Consider LTI system with the impulse response h(t); for an input x(t), the
output y(t) is as shown in Figure 2.7.

Figure 2.7 Response y(t) to the input x(t)

Determine the output of the system for an input


x 1(t) = x(t) − x(t − 2)
Solution Since the system is LTI, for an input x(t − 2), the output is y(t − 2) as shown in
Figure 2.8.

Figure 2.8 Response y(t − 2) to the input x(t − 2)

The output y 1(t) for the input x 1(t) = x(t) − x(t − 2) is given by
y 1(t) = y(t) − y(t − 2), which is shown in Figure 2.9.
Figure 2.9 Response y 1(t) to the input x 1(t)

Example 2.8 Consider an LTI system with input and output related through the equation

(i) Determine the impulse response h(t) of the system.

(ii) Determine the output y(t) of the system for the input
x(t) = u(t + 1) − u(t − 3).
Solution

(i)
Let τ 1 = τ − 3, then

Hence,
h(t) = e −(t − 3) u(t − 3)

(ii)
x(t − τ) and h(τ) are shown in Figure 2.10.
Figure 2.10 x(t − τ) and h(τ)

Using Figure 2.10, y(t) can be written as

2.3.2 Graphical Convolution


An understanding of the graphical interpretation of convolution is very useful in computing
the convolution of more complex signals. The stepwise procedure for graphical
convolution is as follows:
Step 1: Keep x(τ) fixed.
Step 2: Invert h(τ) about the vertical axis (t = 0) to obtain h(−τ).
Step 3: Shift h(−τ) along the τ axis by t0 seconds so that the shifted h(−τ)
represents h(t0 − τ).
Step 4: The area under the product of x(τ) and h(t0 − τ) is y(t0), the value of the
convolution at t = t0.
Step 5: Repeat steps 3 and 4 for different positive and negative values of t0 to obtain
y(t) for all values of t.
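The stepwise procedure can be mirrored numerically for a single shift t0 (a Python sketch; the two unit rectangles on [0, 2) are illustrative signals, not taken from the examples below):

```python
import numpy as np

dt = 0.001
tau = np.arange(-5, 5, dt)

# Step 1: keep x(tau) fixed -- here a rectangle on [0, 2), illustrative
x = np.where((tau >= 0) & (tau < 2), 1.0, 0.0)

def h_of(arg):
    """h evaluated on an array of arguments: a rectangle on [0, 2)."""
    return np.where((arg >= 0) & (arg < 2), 1.0, 0.0)

t0 = 1.0
# Steps 2-3: fold h and shift by t0, i.e. form h(t0 - tau)
h_shifted = h_of(t0 - tau)
# Step 4: the area under the product x(tau) * h(t0 - tau) is y(t0)
y_t0 = np.sum(x * h_shifted) * dt
print(y_t0)   # overlap of the two unit rectangles at t0 = 1 is about 1
```

Step 5 amounts to repeating this for a sweep of t0 values, which is exactly what a discrete convolution routine does in one call.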
Example 2.9 Graphically determine the continuous-time convolution of h(t) and x(t) for
the following:

(i)

Solution
To compute y(t) = x(t) * h(t), first h(−τ) is to be obtained by inverting h(τ) about the
vertical axis. Then, the product of x(τ) and h(t − τ) is formed, point by point, and this
product is integrated to compute y(t). Thus, the overlap area between the rectangles
forming x(τ) and h(t − τ) is y(t).
Clearly, y(0) = 0 because there is no overlap between the rectangles forming x(τ) and
h(t − τ) at t = 0. For 0 < t < 8, there is overlap between the rectangles forming x(τ) and
h(t − τ). For t ≥ 8, there is no overlap, and hence, y(8) = 0. These are illustrated in Figure
2.11 with the final result for y(t). The shaded portion represents the overlap area of the
product x(τ) and h(t − τ).
Figure 2.11 Steps in the convolution and the final result

Example 2.10 Determine graphically y(t) = x(t) * h(t) for the following x(t) and h(t)
shown.

Solution
There is no overlap area between x(τ) and h(−τ) at t = 0, y(0) = 0.
For 0 < t < 6, there is overlap between the rectangles forming x(τ) and h(−τ). For t ≥ 6,
there is no overlap, and hence, y(6) = 0. These are illustrated in Figure 2.12 with the final
result for y(t). The shaded portion represents the overlap area of the products x(τ) and
h(t − τ).
Figure 2.12 Steps in the convolution and the final result

Example 2.11 Consider the RC circuit shown in Figure 2.13. Determine the V out (t) for V
in (t) = u(t − 1) − u(t − 2), and assume the time constant RC = 1 sec. Assume the capacitor
is initially discharged.

Figure 2.13 RC circuit

Solution The impulse response of the RC low-pass filter is


The steps involved in the convolution are illustrated in Figure 2.14.

Figure 2.14 Illustration of steps in the convolution

which is shown in Figure 2.15.

Figure 2.15 Times versus V out (t)
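The RC response of Example 2.11 can also be checked numerically (a Python sketch; the impulse response (1/RC)e^{−t/RC}u(t) of the RC low-pass filter is stated in the solution above, and RC = 1 s as given):

```python
import numpy as np

dt = 0.001
t = np.arange(0, 10, dt)

RC = 1.0
h = (1 / RC) * np.exp(-t / RC)                   # RC low-pass impulse response
v_in = np.where((t >= 1) & (t < 2), 1.0, 0.0)    # u(t - 1) - u(t - 2)

# Numerical convolution integral: discrete convolution scaled by dt
v_out = np.convolve(v_in, h)[:len(t)] * dt

# Charging phase: v_out(t) = 1 - exp(-(t - 1)) for 1 <= t <= 2,
# so the peak, reached at t = 2, is 1 - exp(-1), roughly 0.632
peak = np.max(v_out)
print(peak)
```

After t = 2 the capacitor discharges exponentially from the peak, matching the curve in Figure 2.15.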

Example 2.12 If y(t) = x(t) * h(t), then show that


y(2t) = 2x(2t) * h(2t)
Solution

Letting we have

Example 2.13 If x(t) and h(t) are odd signals , then show that
y(t) = x(t) * h(t) is an even signal.
Solution y(t) = x(t) * h(t)

Since x(t) and h(t) are odd signals,

Hence, y(t) is even because y(t) = y(−t).


Example 2.14 Consider an LTI system with the impulse response h(t) = e −t u(t). Find the
system response for the input x(t) = sin 2t u(t).
Solution

Hence,
The above equation can be rewritten as

Therefore,

Example 2.15 If the response of an LTI system to input x(t) is the output y(t), show that
the response of the system to dx(t)/dt is dy(t)/dt, and using this result, determine the impulse
response of an LTI system having the response y(t) = sin 2t for an input x(t) = e −4t u(t).
Solution

Differentiating both sides with respect to t,

For given y(t), and for given

From the sampling property of the impulse function, it is known that x(t)δ(t) = x(0)δ(t).
Since e −4t δ(t) = e −0 δ(t) = δ(t), the derivative can be written as
Let x 1(t) = 4e −4t u(t), then by homogeneity , the corresponding output

Let the corresponding output

As it is an LTI system, if (x 1(t) + x 2(t)) is the input to the system, the corresponding
output is (y 1(t) + y 2(t)).
Since x 1(t) + x 2(t) = 4e −4t − 4e −4t + δ(t) = δ(t), the impulse response h(t) = y 1(t) + y
2(t) = 4 sin 2t + 2 cos 2t

Example 2.16 Consider a continuous-time LTI system with the unit step response s(t):

(i) Deduce that the response y(t) of the system to the input x(t) is

and also show that

(ii) Determine the response of an LTI system with step response

to an input x(t) = e t u(t).


Solution
The step response s(t) is
(i)

Consider the following equivalence.

From the above equivalence, we obtain

Thus,

Since y(t) = x(t) ∗ h(t), if h(t) = δ(t), then y(t) = x(t) because x(t) ∗ δ(t) = x(t); thus, putting
h(t) = δ(t) in the above relation, it becomes

Since

(ii)

Since
Example 2.17 Let h(t) be the triangular pulse and x(t) be the unit impulse train as
shown in Figure 2.16. Determine y(t) = x(t) * h(t) for T = 2.

Figure 2.16 x(t) and h(t) of Example 2.17

Solution

which is shown in Figure 2.17.

Figure 2.17 Time versus y(t)

2.3.3 Computation of Convolution Integral Using MATLAB


MATLAB provides a function conv() that performs a discrete-time convolution of two
discrete-time sequences. A new function convint() that uses conv() to numerically
integrate the continuous-time convolution is as follows.
The computation of convolution of continuous-time signals using MATLAB is illustrated
through the following numerical examples.
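Since the convint() listing itself is not reproduced here, the underlying idea can be sketched in Python: the discrete convolution of the sample vectors, scaled by the sampling step, approximates the continuous-time integral (the function name convint mirrors the MATLAB helper described above; the check against u(t) * u(t) = t u(t) uses the result of Example 2.6):

```python
import numpy as np

def convint(x, h, dt):
    """Numerically integrate the continuous-time convolution:
    the discrete convolution of the sample vectors, scaled by dt."""
    return np.convolve(x, h) * dt

# Check against a known result: u(t) * u(t) = t u(t)
dt = 0.01
t = np.arange(0, 5, dt)
u = np.ones_like(t)
y = convint(u, u, dt)[:len(t)]
print(np.max(np.abs(y - t)))  # small discretization error, on the order of dt
```

The scaling by dt is the essential point: without it, conv() returns a sum, not an approximation of the integral.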
Example 2.18

(i) Verify the result of Example 2.9 using MATLAB.

(ii) Verify the result of Example 2.10 using MATLAB.


Solution

(i) The following MATLAB program 2.1 is used to compute the convolution of x(t) and
h(t) of Example 2.9.

Program 2.1

The output y(t) = x(t) * h(t) of the above program is shown in Figure 2.18. It is
observed to be the same as that shown in Figure 2.11. Thus, it is verified.
Figure 2.18 Time versus y(t)

(ii) The following MATLAB program 2.2 is used to compute the convolution of x(t) and
h(t) of Example 2.10.

Program 2.2

The output y(t) = x(t) * h(t) of the above program is shown in Figure 2.19. It is observed
to be the same as that shown in Figure 2.12. Thus, it is verified.
Figure 2.19 Time versus y(t)

Example 2.19 Consider the RC circuit of Example 2.11 with time constant .
Determine the V out (t) using MATLAB for V in (t) = (u(t − 3) − u(t − 5)). Assume the
capacitor is initially discharged.
Solution The impulse response of the RC circuit is given by

The following MATLAB program 2.3 is used to compute the convolution of v in (t) and
h(t).
Program 2.3

The output V out (t) = V in (t) * h(t) of the above program is shown in Figure 2.20.
Figure 2.20 Time versus V out (t)

2.3.4 Interconnected Systems


2.3.4.1 Cascade Connection of Systems
The system shown in Fig. 2.21 is formed by connecting two systems in cascade. The
impulse responses of the systems are given by h 1(t) and h 2(t), respectively. Let y(t) be the
output of the first system. By the definition of convolution

(2.37)

Figure 2.21 (a) Cascade connection of two systems. (b) Equivalent system

Then, the output of the overall system y(t) is given by

(2.38)

By the associativity property of convolution, Eq. (2.38) can be rewritten as

(2.39)

Hence, the impulse response of the overall system is given by


(2.40)

2.3.4.2 Parallel Connection of Two LTI Systems


The system shown in Fig. 2.22 is formed by connecting two systems in parallel. The
impulse responses of the systems are given by h 1(t) and h 2(t), respectively. Let y 1(t) and
y 2(t) be the outputs of the first system and second system, respectively. By the definition
of convolution

(2.41)

(2.42)

Figure 2.22 (a) Parallel connection of two systems. (b) Equivalent system

Then, the output of the overall system y(t) is given by

(2.43)

By the distributive property of convolution, Eq. (2.43) can be rewritten as

Hence, the impulse response of the overall system is given by

(2.44)

Example 2.20 An LTI system consists of two subsystems in cascade. The impulse
responses of the subsystems are, respectively, given by
Find the impulse response of the overall system.
Solution The overall impulse response of the system is given by

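The cascade and parallel interconnection results of Eqs. (2.40) and (2.44) can also be checked numerically with sampled signals. The Python sketch below uses illustrative values of my own (the book's programs use MATLAB), with discrete convolution standing in for the convolution integral.

```python
def conv(a, b):
    """Discrete convolution, standing in for the convolution integral."""
    y = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            y[i + j] += ai * bj
    return y

def padded_sum(a, b):
    """Element-wise sum after zero-padding to a common length."""
    n = max(len(a), len(b))
    a = a + [0.0] * (n - len(a))
    b = b + [0.0] * (n - len(b))
    return [p + q for p, q in zip(a, b)]

x  = [1.0, 2.0, 0.5, -1.0]           # arbitrary sampled input
h1 = [0.5, 0.25]                     # arbitrary sampled impulse responses
h2 = [1.0, -0.5, 0.25]

cascade  = conv(conv(x, h1), h2)     # two systems in series
combined = conv(x, conv(h1, h2))     # single equivalent system, Eq. (2.40)

parallel = padded_sum(conv(x, h1), conv(x, h2))   # outputs added
merged   = conv(x, padded_sum(h1, h2))            # equivalent system, Eq. (2.44)
```

Both pairs of sequences agree term by term, reflecting the associativity and distributivity of convolution.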
2.3.5 Periodic Convolution


If the signals x 1(t) and x 2(t) are periodic with common period T, it can be easily shown
that the convolution of x 1(t) and x 2(t) does not converge. In such a case, the periodic
convolution of x 1(t) and x 2(t) is defined as

(2.45)

Example 2.21 Let y(t) be the periodic convolution of x 1(t) and x 2(t). Show that

Solution

(i)
Since x 2(t) is periodic with period T, x 2(t + T − τ) = x 2(t − τ)

2.4 Properties of Linear Time-Invariant Continuous-Time


System
2.4.1 LTI Systems With and Without Memory
The output y(t) of a memoryless system depends only on the present input x(t). If the
system is LTI, then the relationship between y(t) and x(t) for a memoryless system is

(2.46)

where c is an arbitrary constant. Since the output of a continuous-time system can be


written as

the corresponding impulse response is h(t) = cδ(t).


Thus, a continuous-time system is memoryless if and only if

(2.47)

2.4.2 Causality for LTI Systems


The output of a continuous-time system can be written as

Since the impulse response h(τ) = 0 for τ < 0 for a causal continuous-time system, the
output of a causal system can be expressed by the following convolution integral:

(2.48)

2.4.3 Stability for LTI Systems


A continuous-time system is BIBO stable if and only if the impulse response is absolutely
integrable, that is,
(2.49)

Example 2.22 Check the stability of the continuous-time systems having the following
impulse responses:

(i) h(t) = e −t u(t)

(ii) h(t) = e −t cos (2t)u(t)

(iii) h(t) is periodic and nonzero


Solution

(i)
indicating that h(t) is absolutely integrable; hence, h(t) is the impulse response of a
stable system.

(ii)
Since e −τ|cos (2τ)| ≤ e −τ is exponentially decaying for 0 ≤ τ ≤ ∞, h(t) is absolutely
integrable, and hence, h(t) is the impulse response of a stable system.

(iii) If h(t) is periodic with period T, then

where N → ∞
Since h(t) is nonzero, the integral of |h(t)| over each period is a nonzero constant, and the sum over infinitely many periods diverges. Hence, h(t) is not absolutely integrable, and
h(t) is the impulse response of an unstable system.
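These three cases can also be examined numerically. The Python sketch below (the grid step, horizon, and the choice sin t for the periodic case are my own) approximates the integral of |h(t)|: the first two integrals settle near a finite value, while the periodic case keeps growing with the horizon.

```python
import math

dt, M = 0.001, 50000                     # step and number of samples (to t = 50)
ts = [k * dt for k in range(M)]

# (i) h(t) = e^{-t} u(t): integral of |h| -> 1 (finite, stable)
I1 = sum(math.exp(-t) for t in ts) * dt

# (ii) h(t) = e^{-t} cos(2t) u(t): |h| <= e^{-t}, so the integral is finite
I2 = sum(abs(math.exp(-t) * math.cos(2 * t)) for t in ts) * dt

# (iii) a nonzero periodic h(t), e.g. sin t: the running integral of |h|
# grows roughly linearly with the horizon and never converges (unstable)
I3 = sum(abs(math.sin(t)) for t in ts) * dt
I3_half = sum(abs(math.sin(t)) for t in ts[: M // 2]) * dt
```

Doubling the horizon roughly doubles I3, which is the numerical signature of a divergent integral.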
Example 2.23 Determine whether each of the following systems is causal and/or stable:

(i) h(t) = e −t u(t − 1)

(ii) h(t) = e −t u(−t + 1)

(iii) h(t) = e −2t u(t + 10)

(iv) h(t) = te −t u(t)

(v) h(t) = e +t u(−t − 1)

(vi) h(t) = e −2|t|


Solution

(i) Causal because h(t) = 0 for t < 0. Stable because .

(ii) Not causal because h(t) ≠ 0 for t < 0. Unstable because .

(iii) Not causal because h(t) ≠ 0 for t < 0. Stable because .

(iv) Causal because h(t) = 0 for t < 0. Stable because .

(v) Not causal because h(t) ≠ 0 for t < 0. Stable because .


(vi) Not causal because h(t) ≠ 0 for t < 0. Unstable because .
2.4.4 Invertible LTI System
A system is invertible if its input x(t) can be recovered from its output y(t) = x(t) * h(t).
The cascade of an LTI system having impulse response h(t) with an LTI inverse system
having impulse response g(t) = h −1 (t) is shown in Figure 2.23.

Figure 2.23 Cascade connection of an LTI system and its inverse

The process of recovering x(t) from x(t) * h(t) is called deconvolution, as it corresponds
to reversing the convolution operation.
The overall impulse response of the invertible system shown in Figure 2.23 is the
convolution of h(t) and g(t). It is required that the output of the invertible system is
equivalent to the input:

(2.50)

implying that

(2.51)

As an example, it is verified that the inverse system for a continuous-time integrator is


a differentiator as follows:

(2.52)

Hence, the input-output relation for the inverse system shown in Figure 2.24 is

(2.53)
Figure 2.24 Input-output relation for the inverse system

Example 2.24

(i) An echo of an auditorium can be modeled as an LTI system with an impulse response
consisting of a train of impulses:

The inverse LTI system with impulse response g(t) is

where g(t) is also an impulse train that is modeled as

Obtain the relationship between h k and g k .


Solution Since y(t) = x(t) * h(t) and x(t) = g(t) * y(t), then

However,

Let n = m + k; then m = n − k, and g(t) * h(t) can be rewritten as


Hence,

Implying that

and so on. Solving the above equations leads to

(ii) Consider the following echo generation model characterized by

where 0 < a < 1 and T is the delay.


Construct the corresponding inverse system and obtain its impulse response.
Solution Assuming y(t) = 0 for t < 0 and x(t) = 0 for t < 0, the impulse response h(t) of
the echo generation system is given by
Thus, h 0 = 1, h 1 = a, and h i = 0 for i ≥ 2.
The inverse system has to obtain x(t) from the output y(t). Hence, the inverse system is
characterized by

and represented as depicted in Figure 2.25.

Figure 2.25 An inverse system

The impulse response g(t) of the inverse system is given by

Hence, g 0 = 1, g 1 = −a.
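Part (ii)'s inverse can be checked numerically by treating the impulse trains as tap weights at the delays kT. Unrolling the recursion x(t) = y(t) − ax(t − T) gives the inverse impulse-train weights g k = (−a) k , whose first two values are the g 0 = 1 and g 1 = −a obtained above. The Python sketch below (the value of a is my own illustrative choice) verifies that convolving these weights with the echo taps h 0 = 1, h 1 = a recovers a single unit impulse.

```python
# Tap-weight check of the echo canceller; a = 0.6 is an arbitrary
# illustrative choice satisfying 0 < a < 1.
a, N = 0.6, 12
h = [1.0, a] + [0.0] * (N - 2)       # echo taps: h0 = 1, h1 = a, rest zero
g = [(-a) ** k for k in range(N)]    # inverse taps from unrolling the recursion

# Discrete convolution of the tap sequences; a perfect inverse gives
# 1 at delay 0 and 0 at every later delay.
d = [sum(g[m] * h[k - m] for m in range(k + 1)) for k in range(N)]
```

Each later term cancels exactly because (−a) k + a(−a) k−1 = 0.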
Example 2.25 Check y(t) = x(2t) for causality and invertibility.
Solution

At time t = 1

indicating that the value of y(t) at time t = 1 depends on x(t) at the future time t = 2. Therefore,
y(t) = x(2t) is not causal.
y(t) = x(2t) is invertible, since x(t) can be recovered as x(t) = y(t/2);

2.5 Systems Described by Differential Equations


2.5.1 Linear Constant-Coefficient Differential Equations
A general Nth-order linear constant-coefficient differential equation is given by

(2.54)

where coefficients a n and b n are real constants. The order N refers to the highest
derivative of y(t) in Eq. (2.54). For example, consider the RC circuit of
Example 2.11, with the input x(t) and the output y(t) = v o (t). If the current flowing through the
RC circuit is i(t), using Kirchhoff’s voltage law, we write

(2.55)

which can be rewritten as

(2.56)

Since substituting in Eq. (2.56), we obtain the following


first-order constant-coefficient differential equation

(2.57)

relating the voltage across the capacitor y(t) and the input x(t).
Example 2.26 Find the differential equation relating the current y(t) and the input voltage
x(t) for the RLC circuit shown in Figure 2.26 assuming R = 3 Ohms, L = 1 Henry, and

Figure 2.26 RLC circuit


Solution Using Kirchhoff’s voltage law, we write the following loop equation for the
given RLC circuit:

For R = 3, L = 1, and the above equation becomes

Differentiating this equation, we obtain

Example 2.27 Find the differential equation relating the input voltage and the
output voltage for the operational amplifier circuit shown in Figure 2.27.

Figure 2.27 Operational amplifier circuit

Solution

Rewriting the current node equations , we get

which can be rewritten as


Substituting the above equation for , the input-output relation can be written as

which is rewritten as

2.5.2 The General Solution of Differential Equation


The general solution of Eq. (2.54) for a particular input x(t) is given by

(2.58)

where y c (t) is called the complementary solution and y p (t) is called the particular
solution . The complementary solution y c (t) is obtained by setting x(t) = 0 in Eq. (2.54).
Thus y c (t) is the solution of the following homogeneous differential equation

(2.59)

Example 2.28 Consider the RC circuit of Example 2.11 with time constant RC = 1 sec.
Determine the voltage across the capacitor for an input x(t) = e − 2 t u (t). Assume the
capacitor is initially discharged.
Solution As the time constant RC = 1, the input x(t) and the output of the RC
circuit are related by
The particular solution for the exponential input is of the form

Substituting y p (t) in the above differential equation, we get

Solving for A, we obtain A = −1 and

To obtain complementary solution, let us assume

Substituting this into

yields

Thus, k = −1 and

Now,

At t = 0, y(0) = B − 1.
Since the capacitor is initially discharged, y(0) = 0, and we obtain B = 1.
Hence, the voltage across the capacitor is given by

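The resulting closed form, y(t) = (e −t − e −2t )u(t), can be sanity-checked numerically. The Python sketch below (Euler step size is my own choice; this is not one of the book's programs) marches the differential equation dy/dt + y = e −2t forward from y(0) = 0 and compares against the closed form at t = 2.

```python
import math

# Forward-Euler integration of dy/dt + y = e^{-2t}, y(0) = 0  (RC = 1)
dt = 1e-4
y = 0.0
for k in range(20000):                  # march from t = 0 to t = 2
    t = k * dt
    y += dt * (math.exp(-2 * t) - y)    # dy/dt = x(t) - y(t), x(t) = e^{-2t}

closed_form = math.exp(-2.0) - math.exp(-4.0)   # y(t) = e^{-t} - e^{-2t} at t = 2
```

The numerical solution agrees with the closed form to within the first-order Euler error.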
2.5.3 Linearity
The system specified by Eq. (2.54) is linear only if all of the initial conditions are zero.
For instance, in the Example 2.11, if the capacitor is not assumed to be discharged
initially, then
A linear system has the property that zero input produces zero output.
However, if we let x(t) = 0, then

(2.60)

Thus, this system is nonlinear if y(0) ≠ 0.


If the capacitor is assumed to be discharged initially, then
Then for x(t) = 0,

(2.61)

and the system is linear.

2.5.4 Causality
A linear system described by Eq. (2.54) is causal when it is initially relaxed. This implies that
if x(t) = 0 for t ≤ t 0, then y(t) = 0 for t ≤ t 0; thus, the response for t > t 0 is determined with the initial
conditions

(2.62a)

(2.62b)
2.5.5 Time-Invariance
For a linear causal system, initial rest also implies time-invariance.
For example, consider the system described by

(2.63)

Let y 1(t) be the response to an input x 1(t) and

(2.64)

so that

(2.65)

and

(2.66)

Now, let x 2(t) = x 1(t − τ) and y 2 (t) be the corresponding response. From Eq. (2.64),
we get

(2.67)

Then y 2(t) should satisfy

(2.68)

and

(2.69)

From Eq. (2.65), we write


By letting y 2(t) = y 1(t − τ), we obtain from Eq. (2.66)

Eqs. (2.68) and (2.69) are satisfied, and thus the system is time-invariant.

2.5.6 Impulse Response


The impulse response h(t) of the continuous-time LTI system described by Eq. (2.54)
satisfies the differential equation

(2.70)

with the initial rest condition.


For example, let us consider the RC circuit of Example 2.11 with the time constant
RC = 1 sec described by the following differential equation:

The impulse response h(t) should satisfy the differential equation

Then, the complementary solution h c (t) satisfies

To obtain complementary solution, let us assume


Substituting this into

yields

Thus, k = −1 and

The particular solution h p (t) is zero since h p (t) cannot contain δ(t). Thus,

To find the constant B, substituting h(t) = Be −t u(t) into

yields

such that B = 1 and hence


Example 2.29 Consider the RL circuit shown in Figure 2.28:

Figure 2.28 RL circuit

(i) Determine the impulse response.

(ii) Determine the step response .


Solution Writing the loop equation using Kirchhoff’s voltage law assuming i(t) is the
current flowing through the circuit, we obtain

But

Substituting in the above equation, we get

which can be rewritten as


(i) The impulse response h(t) should satisfy the differential equation

Then, the complementary solution h c (t) satisfies

To obtain complementary solution, let us assume

Substituting this into

yields

Thus, and

The particular solution h p (t) is zero since h p (t) cannot contain δ(t). Thus,

To find the constant B, substituting into


yields

such that and hence

(ii) The step response s(t) is given by

2.5.7 Solution of Differential Equations Using MATLAB


The response y(t) of a system described by Eq. (2.54) for an input x(t) can be determined
by using the MATLAB command dsolve('eqn1','eqn2', …), which accepts symbolic
equations representing ordinary differential equations and initial conditions. Several
equations or initial conditions may be grouped together, separated by commas, in a single
input argument.
Example 2.30 Verify the result of Example 2.28 using MATLAB.
Solution The relation between the output y(t) and the input x(t) is related by

The output response y(t) is determined and displayed by the MATLAB commands
y = dsolve('Dy+y=exp(-2*t)', 'y(0)=0', 't');
disp(['y(t) = (', char(y), ')u(t)']);

The displayed output is

Example 2.31 Consider the RLC circuit of Example 2.26 and determine the current y(t)
for the input voltage x(t) = 10e−3t u(t) where the initial inductor current is zero and the
initial capacitor voltage is equal to 5 volts.
Solution The relation between the output y(t) and the input x(t) is related by

The current y(t) is determined and displayed by the MATLAB commands


y = dsolve('D2y+3*Dy+2*y=-30*exp(-3*t)', 'y(0)=0', 'Dy(0)=5', 't');
disp(['y(t) = (', char(y), ')u(t)']);

The displayed y(t) is

2.5.8 Determining Impulse Response and Step Response for a


Linear System Described by a Differential Equation Using
MATLAB
In general, for a system described by Eq. (2.54), the impulse response can be determined
by using the following MATLAB command:

For example, for the differential equation given by


we use the following MATLAB program 2.4 to determine impulse response and step
response .
Program 2.4

The impulse response and step response obtained from the above program are shown
in Figure 2.29(a) and (b), respectively.
Figure 2.29 (a) impulse response , (b) step response

2.6 Block-Diagram Representations of LTI Systems Described


by Differential Equations
The block diagram of a continuous system describes how the internal operations are
ordered, whereas the differential equation description gives only the input and output
relation. Hence, the block diagram is a more detailed representation of continuous-time
systems than the differential equation description. Integrators are preferred to
differentiators in the block-diagram representation of continuous-time systems as the
integrators can be easily built from analog components and noise in a system will be
smoothed out.
Let us define the following three basic elements used in the block-diagram
representation of continuous-time systems: the adder, the scalar multiplier, and the
integrator (Figure 2.30).

Figure 2.30 Block-diagram representation of basic elements (a) adder, (b) multiplier, (c) integrator

Consider the system described by Eq. (2.54), which is repeated here for convenience
assuming M = N:

(2.71)

If it is assumed that the system is at rest, then the Nth integral of , and

the Nth integral of is . Hence, taking the Nth integral of Eq. (2.71), we
obtain the integral description of the system as

(2.72)

Since y (0)(t) = y(t), Eq. (2.72) can be rewritten as

(2.73)

The direct form I and the direct form II implementations of Eq. (2.73) are shown in
Figure 2.31(a) and (b), respectively.
Figure 2.31 Block-diagram representation for continuous-time system described by integral Eq. (2.73) (a) direct form I
, (b) direct form II

Example 2.32 Obtain the block-diagram representation of a system described by

Solution The block-diagram representation in direct form II for the second-order continuous-time system described by the above differential equation is shown in Figure 2.32.

Figure 2.32 Block-diagram representation in direct form II

2.7 Singularity Functions


The unit impulse δ(t) is one of a class of signals known as singularity functions. Consider
an LTI system for which the input and the output are related by

(2.74)

The impulse response of this system is the derivative of the unit
impulse, which is referred to as the unit doublet u 1(t).
By the convolution representation of LTI systems, we represent

(2.75)

for any input signal x(t). Similarly, for an LTI system described by

(2.76)

we obtain

(2.77)

where u 2(t) = u 1(t) ∗ u 1(t) is the second derivative of unit impulse.


Thus, u k (t) for k > 0 is the kth derivative of the unit impulse and is the impulse
response of an LTI system that takes the kth derivative of the input:
(2.78)

If we consider a system described by Eq. (2.75) with input x(t) = 1, then

(2.79)

Hence, the unit doublet has zero area.


In addition to these singularity functions, there are the successive integrals of the unit impulse
function. It is known that

(2.80)

and hence the unit step function is the impulse response of an integrator, and we have

(2.81)

Similarly, the impulse response of a system consisting of two integrators in cascade


can be denoted by u −2(t), which can be expressed as the convolution of u(t) with itself:

(2.82)

Since u(t) = 0 for t < 0 and u(t) = 1 for t ≥ 0, u −2(t) can be expressed as

(2.83)

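The convolution of the unit step with itself can be illustrated numerically: convolving sampled unit steps and scaling by the step size yields the unit ramp tu(t). A short Python sketch (grid choices are my own):

```python
# Sampled u(t) on [0, 4) with step dt; convolving it with itself and
# scaling by dt approximates the unit ramp t*u(t).
dt, N = 0.01, 400
ramp = [0.0] * (2 * N - 1)
for i in range(N):
    for j in range(N):
        ramp[i + j] += dt              # u[i] * u[j] * dt with all samples equal to 1
# ramp[k] approximates t at t = k*dt (valid before the truncation point t = 4)
```

For instance, the sample at t = 2 comes out close to 2, as the ramp predicts.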
Example 2.33 For a system if y(t) = x(t) * h(t), show that

(i)

(ii)
Solution

(i)

(ii)

2.8 State-Space Representation of Continuous-Time LTI


Systems
2.8.1 State and State Variables
The state of a system at time t 0 is the minimal information that is sufficient to
determine the state and the output of the system for all times t ≥ t 0, given the system
input at all times t ≥ t 0. The variables that contain this information are called state
variables.

2.8.2 State-Space Representation of Single-Input Single-Output


Continuous-Time LTI Systems
Consider a single-input single-output continuous-time LTI system described by the
following Nth-order differential equation:

(2.84)

where y(t) is the system output and u(t) is the system input.
Define the following useful set of state variables

(2.85)

Taking derivatives of the first N − 1 state variables of the above, we get

(2.86)
Rearranging Eq. (2.84) and using Eq. (2.86), we obtain

(2.87)

(2.88)

Denoting ,

Eqs. (2.86), (2.87), and (2.88) can be written in matrix form as

(2.89a)

(2.89b)

Define an N × 1 vector called the state vector as

(2.90)

The derivative of X(t) becomes


(2.91)

More compactly Eqs. (2.89a) and (2.89b) can be written as

(2.92)

(2.93)

where

Eqs. (2.92) and (2.93) are called the N-dimensional state-space representation or state
equations of the system.
In general, the state equations of a system are described by

(2.94)

(2.95)

Example 2.34 Obtain the state-space representation of a system described by the


following differential equation:

Solution The order of the differential equation is three. Hence, the three state variables
are
The first derivatives of the state variables are

The state-space representation in matrix form is given by

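The companion-form construction of Eqs. (2.89a) and (2.89b) can be captured in a short routine. The Python sketch below uses illustrative coefficients of my own (a 0 = 2, a 1 = 3, a 2 = 1), not those of the example, for a normalized ODE y (N) + a N−1 y (N−1) + … + a 0 y = b 0 u.

```python
def companion_form(a, b0=1.0):
    """State matrices for y^(N) + a[N-1] y^(N-1) + ... + a[0] y = b0 * u,
    with state variables x1 = y, x2 = y', ..., xN = y^(N-1)."""
    N = len(a)
    A = [[0.0] * N for _ in range(N)]
    for i in range(N - 1):
        A[i][i + 1] = 1.0              # x_k' = x_{k+1}, as in Eq. (2.86)
    A[N - 1] = [-c for c in a]         # last row moves the ODE terms across
    B = [0.0] * (N - 1) + [b0]         # input drives only the last state
    C = [1.0] + [0.0] * (N - 1)        # output y = x1
    D = 0.0
    return A, B, C, D

# A third-order system with illustrative coefficients a0 = 2, a1 = 3, a2 = 1
A, B, C, D = companion_form([2.0, 3.0, 1.0])
```

The resulting A has ones on the superdiagonal and the negated coefficients in the last row, matching the matrix form of Eqs. (2.92) and (2.93).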
Example 2.35 Obtain the state-space representation for the electrical circuit shown in
Figure 2.33 considering , i1, and as state variables and v c1 as output y(t).

Figure 2.33 Third-order electrical circuit

Solution The state variables for the circuit are


From the relationship between the voltage and current i 1, we obtain

Kirchhoff’s voltage equation around the closed loop gives

This equation can be rewritten as

The current i 3 = x 3 and the current

By Kirchhoff’s current law

implying that

Hence,

The voltage is taken as the output y(t):


The state-space representation of the circuit is given by

Example 2.36 Obtain state-space representation of the circuit shown in Figure 2.34
considering the current through the inductor and voltage across the capacitor as state
variables and voltage across the capacitor as the output y(t).

Figure 2.34 Electrical circuit of Example 2.36

Solution The state variables for the circuit are

Since the voltage across the capacitor is equal to the voltage across the series inductor and
resistor branch,
we obtain

This equation can be rewritten as


Kirchhoff’s voltage equation around the closed loop gives

Hence

By Kirchhoff’s current law

Therefore

This equation can be rewritten as

The voltage is taken as the output y(t):

The state-space representation of the circuit is given by


2.8.3 State-Space Representation of Multi-input Multi-output
Continuous-Time LTI Systems
The state-space representation of a continuous-time system with m inputs, l outputs, and
N state variables can be expressed as

(2.96)

(2.97)

where

2.9 Problems

1. Check the following for linearity and time-invariance:


(i)

(ii) y(t) = tx(t)

(iii) y(t) = t 2 x 2(t)

(iv)
(v) y(t) = ln (x(t))

(vi) y(t) = x(t) + constant

2. Determine which of the following systems are linear and which are nonlinear:
(i)

(ii)

(iii)

(iv)

3. An amplifier has the output y(t) = cos (ωt). If y(t) is limited by clipping, resulting in
the clipped output y c (t), check the clipping operation for linearity and time-invariance.

4. An input signal x(t) and two possible outputs of a linear time-invariant system are
shown in Figure P2.1. Which outputs are possible given that the system is linear and
time invariant?

Figure P2.1 Input signal and two possible outputs of problem 4



5. If x(t) = u(t + 1) − u(t − 1), compute (x * x * x)(t) and sketch.

6. Determine graphically the convolution y(t) = x(t) * h(t) for the following:
(i)

(ii)

(iii)

(iv)

(v)

7. Consider a continuous-time LTI system with the step response

Determine and sketch the output of this system to the input


8. Consider h(t) be the triangular pulse and x(t) be the unit impulse train as shown in
Figure 2.16 of Example 2.17. Determine y(t) = x(t) * h(t) and sketch it.

(i) for T = 3 (ii) for

9. An LTI system consists of two subsystems in cascade. The impulse responses of the
subsystems are, respectively, given by

Find the impulse response of the overall system.

10. If y(t) = x(t) * h(t), show that the area of the convolution y(t) is the product of the
areas of the signals being convolved, x(t) and h(t), that is,

11. Compute and sketch the periodic convolution of the square-wave signal x(t) shown
in Fig. P2.2 with itself.
Figure P2.2 Square wave signal of problem 11

12. Consider an LTI system with impulse response h(t):


(i) Check for its stability, if h(t) is periodic and nonzero.

(ii) Check for causality of the inverse of the LTI system if h(t) is causal .

(iii) Check for its stability if h(t) is causal.

13. Consider the system described by

Determine the impulse response h(t) of the system.

14. Consider the system described by


(i) Determine the step response of the system.

(ii) Determine the impulse response from the step response.

15. Determine the response y(t) of the OP-Amp circuit shown in Figure P2.3 for an input
x(t) = u(t)

Figure P2.3 OP-Amp circuit of problem 15

16. Determine the impulse response y(t) of the OP-Amp circuit shown in Figure P2.4.
Figure P2.4 OP-Amp circuit of problem 16

17. Determine the impulse response h(t) for a system described by

18. Draw block diagrams for direct form II implementation of the corresponding
systems

(i)

(ii)

(iii) d 2 y/dt 2 − a dy/dt = a dx/dt + ab x(t)

19. For a given signal x(t)


(i) Show that

(ii) Determine the value of


(iii) Find an expression for x(t) ∗ u 2(t) similar to (i).

20. Obtain the state-space representation for the electrical circuit shown
in Figure P2.5, considering i 1, , and as state variables and the current through the
inductor as the output y(t).

Figure P2.5 Electrical circuit of problem 20

2.10 MATLAB Exercises

1. Verify the solution of problem 2 using MATLAB.

2. Write a MATLAB program to compute the convolution of the input x(t) and the
impulse response h(t) shown in Figure M2.1.

Figure M2.1 Input and impulse response of MATLAB exercise 2


3. If x(t) = u(t − 1) − u(t − 2), write a MATLAB program to compute the result y 10 (t)
obtained by convolving ten x(t) functions together, that is,

and comment on the result.

4. Write a MATLAB program to find the impulse response h(t) and step response for a
system described by

Further Reading
1. Oppenheim, A.V., Willsky, A.S.: Signals and Systems. Prentice-Hall, Englewood Cliffs (1983)
2. Hsu, H.: Signals and Systems Schaum’s Outlines, 2nd edn. McGraw-Hill, New York (2011)
3. Kailath, T.: Linear Systems. Prentice-Hall, Englewood Cliffs (1980)
4. Zadeh, L., Desoer, C.: Linear System Theory. McGraw-Hill, New York (1963)
K. Deergha Rao, Signals and Systems, https://doi.org/10.1007/978-3-319-68675-2_3

3. Frequency Domain Analysis of Continuous-


Time Signals and Systems
K. Deergha Rao1
(1) Department of Electronics and Communication Engineering, Vasavi College of
Engineering (Affiliated to Osmania University), Hyderabad, Telangana, India


Continuous-time signals and systems are often conveniently characterized in a
transform domain. This chapter describes the transformations known as the Fourier series and
the Fourier transform, which convert time-domain signals into frequency-domain (or spectral)
representations. The frequency-domain representation of continuous-time signals is
described along with the conditions for the existence of the Fourier series for periodic signals
and of the Fourier transform for nonperiodic signals, together with their properties. Finally, the frequency
response of continuous-time systems is discussed.

3.1 Complex Exponential Fourier Series Representation of the


Continuous-Time Periodic Signals
It is recalled from Chapter 1 that a signal x(t) is periodic if

(3.1)

The fundamental period T 0 is the minimum positive nonzero value of T for which
Eq. (3.1) is satisfied, and Ω 0 = 2π/T 0 is referred to as the fundamental angular frequency.

A sinusoidal signal x(t) = cos (ω 0 t) and the complex exponential signal are
two examples of periodic signals.
The complex exponentials related harmonically are expressed by

(3.2)

The fundamental frequency of each of these signals is a multiple of Ω0, and hence
each is periodic with period T 0 .
Thus, a linear combination of complex exponentials related harmonically can be
written as

(3.3)

Eq. (3.3) is the Fourier series representation of a periodic signal x(t).


The Fourier coefficients a k can be determined as follows:

Multiplying both sides of Eq. (3.3) by , we obtain

(3.4)

Integrating both sides of Eq. (3.4) from 0 to T 0, we have

(3.5)

Interchanging the integration and summation, Eq. (3.5) can be rewritten as

(3.6)

(3.7)

Hence,

(3.8)

Thus, Eq. (3.6) becomes


(3.9)

Therefore, the Fourier coefficients a n are given by

(3.10)

Thus, the Fourier series of a periodic signal is defined by Eq. (3.3), referred to as the
synthesis equation, and Eq. (3.10), referred to as the analysis equation.
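The analysis equation can be exercised numerically. The Python sketch below (my own discretization, not one of the book's programs) approximates Eq. (3.10) by a Riemann sum over one period; for x(t) = cos(Ω 0 t) it recovers a 1 = a −1 = 1/2 and zero for the other coefficients, as Euler's formula predicts.

```python
import cmath, math

T0 = 2.0
w0 = 2 * math.pi / T0                     # fundamental angular frequency

def fourier_coeff(x, n, M=20000):
    """Riemann-sum approximation of a_n = (1/T0) * integral of x(t) e^{-j n w0 t} dt."""
    dt = T0 / M
    acc = 0j
    for k in range(M):
        t = k * dt
        acc += x(t) * cmath.exp(-1j * n * w0 * t) * dt
    return acc / T0

x = lambda t: math.cos(w0 * t)
a1 = fourier_coeff(x, 1)                  # expected 1/2
a0 = fourier_coeff(x, 0)                  # expected 0
a2 = fourier_coeff(x, 2)                  # expected 0
```

Because the integrand is periodic and smooth, the uniform-grid sum over a full period is extremely accurate here.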

3.1.1 Convergence of Fourier Series


The sufficient conditions for guaranteed convergence of Fourier series are the following
Dirichlet conditions:

1. x(t) must be absolutely integrable over any period, that is,

(3.11)

which guarantees that each Fourier coefficient a k has a finite value.

2. x(t) must have a finite number of maxima and minima within any single period.

3. x(t) must have a finite number of discontinuities in any finite interval of time, with each
of these discontinuities being finite.

3.1.2 Properties of Fourier Series


Linearity Property If x 1(t) and x 2(t) are two continuous-time signals with Fourier
series coefficients a n and b n , then the Fourier series coefficients of a linear combination
of x 1(t) and x 2(t), that is, c 1 x 1(t) + c 2 x 2(t), are given by
where c 1 and c 2 are arbitrary constants.
Time Shifting Property  If x(t) is a continuous-time periodic signal with period T 0 and
the Fourier coefficients a n , then the Fourier series coefficients of the x(t − t 0) are given
by .

Proof  Since , the Fourier coefficients a n of a periodic signal x(t) are given by

Let the Fourier series coefficients of x(t − t 0) be

Letting τ = t − t 0

Conjugate Property If x(t) is a continuous-time periodic signal with the Fourier


coefficients a n , then the Fourier series coefficients of the x*(t) are given by .

Proof , then replacing n by −n, we get

Taking the conjugate of both sides of this equation, we have


Symmetry for Real Valued Signal If x(t) is a continuous-time real valued signal with
the Fourier coefficients a n , then

where * stands for the complex conjugate.


Proof From conjugate property, we know that

implies that Re(a n ) = Re(a −n ), i.e., the real part of a n is even, and Im(a n ) = −Im(a −n ),
i.e., the imaginary part of a n is odd.
If x(t) is a continuous-time real and even signal, i.e., x*(t) = x(t) and x(t) = x(−t), it can be
easily shown that a n = a −n and .
Similarly, if x(t) is a continuous-time real and odd signal, i.e., x*(t) = x(t) and x(t) = −x(−t),
it can be easily shown that a n = −a −n and .
Frequency Shifting Property If x(t) is a continuous-time periodic signal with period T 0
and the Fourier coefficients a n , then the Fourier series coefficients of the are
given by a n−K .
Proof The Fourier coefficients a n of a periodic signal x(t) are given by
Let d n be the Fourier coefficients of , then

Thus, it is proved.
Time Reversal Property If x(t) is a continuous-time periodic signal with the Fourier
coefficients a n , then the Fourier series coefficients of the x(−t) are given by a −n .
Time Scaling Property If x(t) is a continuous-time periodic signal with period T 0 and
the Fourier coefficients a n , then the Fourier series coefficients of the x(αt) α > 0 are given
by a n with period .

Proof  Since , the Fourier coefficients a n of a periodic signal x(t) are given by

Let the Fourier series coefficients of x(αt) be , then

Letting τ = αt

Hence, .

Differentiation in Time If x(t) is a continuous-time periodic signal with period T 0 and


the Fourier coefficients a n , then the Fourier series coefficients of the are given by

Proof The Fourier series representation of x(t) is


Differentiating both sides of this equation with respect to t, we obtain

Since

giving the Fourier series representation of . Comparing this with the Fourier
series representation of coefficients a n , it is clear that the Fourier coefficients of are

Integration Property If x(t) is a continuous-time periodic signal with period T 0 and the
Fourier coefficients a n , then the Fourier series coefficients of ∫x(t)dt are given by

Proof The Fourier series representation of x(t) is

Integrating both sides of this equation with respect to t, we obtain

Since
giving the Fourier series representation of ∫x(t)dt. Comparing this with the Fourier
series representation of coefficients a n , it is clear that the Fourier coefficients of ∫x(t)dt
are

Periodic Convolution If x 1(t) and x 2(t) are two continuous-time signals with common
period T 0 and Fourier coefficients a n and b n , respectively, then the Fourier series
coefficients of the convolution integral of x 1(t) and x 2(t) are given by T 0 a n b n .
Proof The periodic convolution integral of two signals with common period T 0 is
defined by

(3.12)

Let c n be the Fourier series coefficients of y(t), then

(3.13)

Letting t − τ = t 1, Eq. (3.13) becomes

Interchanging the order of integration, the above equation can be rewritten as

(3.14)

By definition of Fourier series

Hence, c n = T 0 a n b n . Thus, it is proved.


Multiplication Property
If x 1(t) and x 2(t) are two continuous-time signals with Fourier coefficients a n and b n ,
respectively, then the Fourier series coefficients of the new signal x 1(t)x 2(t) are given by d n = Σ k a k b n−k , implying that signal multiplication in the time domain is equivalent to discrete-time convolution in the frequency domain.
Proof Let d n be the Fourier coefficients of the new signal x 1(t)x 2 (t). By definition of
Fourier series representation of a signal, we have

Parseval’s Theorem If x(t) is a continuous-time signal with period T 0 and Fourier


coefficients a n , then the average power P of x(t) is given by

(3.15)

Proof The average power P of a periodic signal x(t) is defined as

(3.16)

Assuming that x(t) is complex-valued, x(t)x*(t) = |x(t)|², and x*(t) can be expressed in terms of its Fourier series as

(3.17)

Eq. (3.16) can be rewritten as

(3.18)

Substituting Eq. (3.17) in Eq. (3.18), we obtain

(3.19)
Interchanging the order of integration, Eq. (3.19) can be rewritten as

(3.20)

By definition of the Fourier series

Thus, the average power of x(t) is P = Σ n |a n |².

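Parseval's theorem can also be checked numerically. In this sketch (ours; the ±1 square wave and the truncation of the sum at |n| < 300 are assumptions), the time-domain power of a unit square wave is 1, and the truncated coefficient sum approaches it:

```python
import numpy as np

def trapezoid(y, t):
    return np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2

T0 = 2.0
t = np.linspace(0.0, T0, 20001)
x = np.sign(np.sin(2 * np.pi * t / T0))      # +/-1 square wave, so P = 1 exactly
w0 = 2 * np.pi / T0

P_time = trapezoid(np.abs(x) ** 2, t) / T0

P_freq = 0.0
for n in range(-299, 300):
    a_n = trapezoid(x * np.exp(-1j * n * w0 * t), t) / T0
    P_freq += abs(a_n) ** 2

assert abs(P_time - 1.0) < 1e-3
assert abs(P_freq - P_time) < 1e-2           # small gap from truncating the sum
```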
The properties of continuous-time Fourier series are summarized in Table 3.1.


Table 3.1 Some properties of continuous-time Fourier series

Property: Periodic signal → Fourier series coefficients

Linearity: c 1 x 1(t) + c 2 x 2(t) → c 1 a n + c 2 b n

Time shifting: x(t − t 0) → e −jnΩ 0 t 0 a n

Time reversal: x(−t) → a −n

Conjugate: x*(t) → a* −n

Symmetry: x(t) real → a −n = a* n

x e (t) (x(t) real) → Re[a n ]

x o (t) (x(t) real) → jIm[a n ]

Frequency shifting: e jKΩ 0 t x(t) → a n−K

Time scaling: x(αt), α > 0 (periodic with period T 0/α) → a n

Differentiation in time: dx(t)/dt → jnΩ 0 a n

Integration: ∫x(t)dt → a n /(jnΩ 0) (finite valued and periodic only if a 0 = 0)

Periodic convolution: ∫ T 0 x 1(τ)x 2(t − τ)dτ → T 0 a n b n

Multiplication: x 1(t)x 2(t) → Σ k a k b n−k

Half-Wave Symmetry
If the two halves of one period of a periodic signal are of identical shape, except that one
is the negation of the other, the periodic signal is said to have a half-wave symmetry.
Formally, if x(t) is a periodic signal with period T 0, then x(t) has half-wave symmetry if

Example 3.1 Prove that Fourier series representation of a periodic signal with half-wave
symmetry has no even-numbered harmonics.
Proof A periodic signal x(t) with half-wave symmetry is given by

For 

Substituting in the above equation, we get

By using time shifting property, we obtain


Since

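A numerical illustration of this result (ours; the parabolic half-period shape is an arbitrary assumption) builds a half-wave symmetric signal and confirms that its even-numbered harmonics vanish:

```python
import numpy as np

def trapezoid(y, t):
    return np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2

T0 = 2.0
t = np.linspace(0.0, T0, 40001)
tau = np.where(t < T0 / 2, t, t - T0 / 2)
g = tau * (1 - tau)                          # arbitrary shape on half a period
x = np.where(t < T0 / 2, g, -g)              # enforce x(t + T0/2) = -x(t)

w0 = 2 * np.pi / T0
def a(n):
    return trapezoid(x * np.exp(-1j * n * w0 * t), t) / T0

for n in (2, 4, 6):
    assert abs(a(n)) < 1e-8                  # even-numbered harmonics vanish
assert abs(a(1)) > 1e-2                      # odd harmonics remain
```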
Example 3.2 Find Fourier series of the following periodic signal with half-wave
symmetry as shown in Figure 3.1.

Figure 3.1 Periodic signal with half-wave symmetry

Solution The period T 0 = 6 and

Fourier coefficients are given by

For odd n

Example 3.3 Find Fourier series of the following periodic signal with half-wave
symmetry (Figure 3.2)
Figure 3.2 Periodic signal with half-wave symmetry

Solution The period T 0 = 8 and

Fourier coefficients are given by

For odd n

Example 3.4 Consider the periodic signal x(t) given by

(i) Determine the fundamental period and frequency of x(t)

(ii) Show that x(t) is a real signal


(iii) Find the average power of the signal
Solution

(i) x(t) is the sum of two periodic signals with periods T 1 and T 2. The ratio T 1/T 2 = 2/3 is a rational number.

The fundamental period of the signal x(t) is 3T 1 = 2T 2 = 2π. The fundamental


frequency .

(ii) x(t) has an exponential Fourier series representation of the form

where

and a n = 0 for all other n. It is noticed that a −n = a* n for all n. Hence, x(t) is a real signal.

(iii) The average power of the signal x(t) is

By Parseval’s theorem,
Example 3.5 Find the Fourier series coefficients for each of the following signals:

(i) x(t) = cos(Ω0 t) + sin (2Ω0 t)

(ii) x(t) = 2 cos(Ω0 t) + sin2 (2Ω0 t)


Solution

(i)

a n = 0 for all other n.

(ii)

a n = 0 for all other n.


Example 3.6 Find Fourier series coefficients of the following continuous-time periodic
signal and plot the magnitude and phase spectrum of it:
Solution

Since Ω 0 = 2π

a n = 0 for all other n.


The magnitudes of Fourier coefficients are

The magnitude spectrum and phase spectrum are shown in Figures 3.3 and 3.4,
respectively.

Figure 3.3 Magnitude spectrum of x(t)


Figure 3.4 Phase spectrum of x(t)

Since x(t) is real-valued, its magnitude spectrum is even and its phase spectrum is odd.
Example 3.7

(i) Obtain x(t) for the following non-zero Fourier series coefficients of a continuous-time
real valued periodic signal x(t) with fundamental period of 8.

(ii) Consider a continuous periodic signal with the following magnitude spectra shown
in Figure 3.5. Find the DC component and average power of the signal.

Figure 3.5 Magnitude spectrum of a signal

Solution (i)

(ii) The DC component is given by


By using Parseval’s relation, the average power is computed as

Example 3.8 Which of the following signals cannot be represented by the Fourier series?
(i) x(t) = 4 cos(t) + 6 cos(t)
(ii) x(t) = 3 cos(πt) + 6 cos(t)
(iii) x(t) = cos(t) + 0.75
(iv) x(t) = 2 cos(3πt) + 3 cos(7πt)
(v) x(t) = e −|t| sin (5πt)
Solution

(i) x(t) = 4 cos(t) + 6 cos(t) is periodic with period 2π.

(ii) x(t) = 3 cos(πt) + 6 cos(t)


The first term has period T 1 = 2π/π = 2.

The second term has period T 2 = 2π.

The ratio T 1/T 2 = 1/π is not a rational number. Hence, x(t) is not a periodic signal.

(iii) x(t) = cos(t) + 0.75 is periodic with period 2π.


(iv) x(t) = 2 cos(3πt) + 3 cos(7πt)
The first term has period T 1 = 2π/(3π) = 2/3.

The second term has period T 2 = 2π/(7π) = 2/7.

The ratio T 1/T 2 = 7/3 is a rational number. Hence, x(t) is a periodic signal.

(v) Due to the decaying exponential factor e −|t|, the signal is not periodic, so a Fourier series cannot be defined for it.

Hence, (ii) and (v) cannot be represented by a Fourier series. The remaining three signals are periodic, so they can be represented by a Fourier series.
Example 3.9 Find the Fourier series of a periodic square wave with period T 0 defined
over one period by

Solution For n=0,

For n ≠ 0,
Since ,

Example 3.10 Find the Fourier series of the periodic signal shown in Figure 3.6.

Figure 3.6 Periodic signal

Solution The period T 0 = 2.

For n=0,

For n ≠ 0,

Since e −jnπ + e jnπ = 2(−1) n


3.2 Trigonometric Fourier Series Representation
The trigonometric Fourier series representation of a periodic signal x(t) is expressed by

(3.21)

The Fourier coefficients a n and b n are given by

(3.22a)

(3.22b)

3.2.1 Symmetry Conditions in Trigonometric Fourier Series


If x(t) is an even periodic signal, then

(3.23a)

(3.23b)

b n = 0 for all n.
If x(t) is an odd periodic signal, then a 0 = 0 and a n = 0 for all n; the coefficients b n are given by

(3.24)

Therefore, b n = 0 for every even signal. Hence, the Fourier series of an even signal contains the DC term and cosine terms only, while the Fourier series of an odd signal contains sine terms only.
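These symmetry conditions can be verified numerically. The sketch below (ours; the full-wave rectified cosine is an assumed example, and a n = (2/T 0)∫x cos(nΩ 0 t)dt, b n = (2/T 0)∫x sin(nΩ 0 t)dt are taken as the standard trigonometric coefficient formulas) confirms that every b n vanishes for an even signal:

```python
import numpy as np

def trapezoid(y, t):
    return np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2

T0 = 2 * np.pi
t = np.linspace(0.0, T0, 20001)
x = np.abs(np.cos(t))                       # even periodic signal (rectified cosine)
w0 = 2 * np.pi / T0

a0 = trapezoid(x, t) / T0                   # DC term
def a(n): return 2 * trapezoid(x * np.cos(n * w0 * t), t) / T0
def b(n): return 2 * trapezoid(x * np.sin(n * w0 * t), t) / T0

assert abs(a0 - 2 / np.pi) < 1e-4           # known DC value of |cos t|
for n in (1, 2, 3):
    assert abs(b(n)) < 1e-8                 # even signal: all sine terms vanish
```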
Example 3.11 Find the trigonometric Fourier series representation of the periodic signal
shown in Figure 3.7 with A = 3 and period T 0 = 2π.

Figure 3.7 Periodic signal

Solution The period T 0 = 2π; hence Ω 0 = 2π/T 0 = 1.

The periodic signal x(t) defined over one period is

Since x(t) has odd symmetry, a 0 = 0 and a n = 0

The trigonometric Fourier series representation of x(t) is given by

Example 3.12 Find the trigonometric Fourier series representation of the periodic
triangle wave shown in Figure 3.8 with A = 2 and period T 0 = 2.

Figure 3.8 Periodic triangle wave

Solution The period T 0 = 2; hence Ω 0 = π.

The periodic triangle wave x(t) defined over one period with A=2 is
Since x(t) has odd symmetry, a 0 = 0 and a n = 0

The trigonometric Fourier series representation of x(t) is

Example 3.13 Find the trigonometric Fourier series representation of the following
periodic signal with period T 0 = 2.

Solution The period T 0 = 2; hence Ω 0 = π.

Since x(t) has even symmetry, b n = 0

For n = 1,
For n = 2,

For n = 3

For n = 4,5,6,… .

The trigonometric Fourier series representation of x(t) is

Example 3.14 If the input to a half-wave rectifier is the AC signal x(t) = cos(2πt), find the trigonometric Fourier series representation of the output signal of the rectifier.
Solution The output y(t) of half-wave rectifier is

The sinusoidal input and output of half-wave rectifier are shown in Figure 3.9.
Figure 3.9 Input and output of half-wave rectifier

Since y(t) is a real and even function, its Fourier coefficients are real and even. The
period T 0=1.

For n = 0,

Hence, the DC component is

For n≠0

The trigonometric Fourier series representation of y(t) is

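The known coefficients of the half-wave rectified cosine, a DC value of 1/π and a fundamental cosine amplitude of 1/2, can be confirmed numerically (our sketch; the grid sizes are assumptions):

```python
import numpy as np

def trapezoid(y, t):
    return np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2

T0 = 1.0
t = np.linspace(0.0, T0, 40001)
y = np.maximum(np.cos(2 * np.pi * t), 0.0)      # half-wave rectified cosine

a0 = trapezoid(y, t) / T0
a1 = 2 * trapezoid(y * np.cos(2 * np.pi * t), t) / T0
b1 = 2 * trapezoid(y * np.sin(2 * np.pi * t), t) / T0

assert abs(a0 - 1 / np.pi) < 1e-4               # DC component is 1/pi
assert abs(a1 - 0.5) < 1e-4                     # fundamental cosine amplitude is 1/2
assert abs(b1) < 1e-8                           # y(t) is even: no sine terms
```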

3.3 The Continuous Fourier Transform for Nonperiodic Signals
Consider a nonperiodic signal x(t) as shown in Figure 3.10 (a) with finite duration, i.e.,
x(t) = 0 for |t| >T 1. From this nonperiodic signal, a periodic signal can be constructed
as shown in Figure 3.10(b).

Figure 3.10 (a) Nonperiodic signal (b) periodic signal obtained from (a)

The Fourier series representation of is

(3.25a)

(3.25b)

Since the periodic signal equals x(t) over one period, and since x(t) = 0 outside this interval, we have

(3.26)
Define

(3.27)

Then

(3.28)

and can be expressed in terms of X(jΩ), that is,

(3.29)

As T 0 tends to infinity, Ω 0 tends to zero, the spacing between harmonics vanishes, and the summation becomes an integral; Eq. (3.29) then becomes

(3.30)

Eq. (3.27) is referred to as the Fourier transform of x(t), and Eq. (3.30) is called the
inverse Fourier transform .

3.3.1 Convergence of Fourier Transforms


The sufficient conditions referred to as the Dirichlet conditions for the convergence of
Fourier transform are:

1. x(t) must be absolutely integrable, that is,

(3.31)

2. x(t) must have a finite number of maxima and minima within any finite interval.
3. x(t) must have a finite number of discontinuities within any finite interval, and each of

these discontinuities is finite.

Although the above Dirichlet conditions guarantee the existence of the Fourier
transform for a signal, if impulse functions are permitted in the transform, signals which
do not satisfy these conditions can have Fourier transforms.
Example 3.15 Determine x(0) and X(0) using the definitions of the Fourier transform and
the inverse Fourier transform
Solution By the definition of Fourier transform , we have

Substituting Ω = 0 in this equation, we obtain

By the definition of the inverse Fourier transform, we have

Substituting t=0, it follows that

3.3.2 Fourier Transforms of Some Commonly Used


Continuous-Time Signals
The unit impulse
The Fourier transform of the unit impulse function is given by

(3.32)
implying that the Fourier transform of the unit impulse contributes equally at all frequencies.
Example 3.16 Find the Fourier transform of x(t) = e −bt u(t) for b > 0.
Solution

Example 3.17 Find the Fourier transform of x(t) = 1.


Solution By definition of the inverse Fourier transform and sampling property of the
impulse function, we have

Hence, and thus

Example 3.18 Find the Fourier transforms of the following

(i) sin (Ω0t)

(ii) cos (Ω0t)


Solution

(i)
By the sampling property of the impulse function, we have

Hence,
Thus

Therefore,

(ii)

Example 3.19 Find the Fourier transform of the rectangular pulse signal shown in Figure
3.11

Figure 3.11 Rectangular pulse signal

Solution

Since can be written in terms of the sinc function as

Hence,
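This transform pair can be checked by evaluating the defining integral numerically (our sketch; the pulse half-width T 1 = 1.5 and the test frequencies are assumptions). For a unit-height pulse on [−T 1, T 1], X(jΩ) = 2 sin(ΩT 1)/Ω:

```python
import numpy as np

def trapezoid(y, t):
    return np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2

T1 = 1.5
t = np.linspace(-T1, T1, 200001)                 # pulse is zero outside [-T1, T1]
x = np.ones_like(t)

for W in (0.5, 1.0, 2.0, 5.0):
    X = trapezoid(x * np.exp(-1j * W * t), t)    # direct evaluation of the integral
    assert abs(X - 2 * np.sin(W * T1) / W) < 1e-6
```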
Example 3.20 Consider the Fourier transform X(jΩ) of a signal shown in Figure 3.12.
Find the inverse Fourier transform of it.

Figure 3.12 Fourier transform of a signal

Solution

By the inverse Fourier transform definition, we have

Example 3.21 Determine the Fourier transform of Gaussian signal

Solution

Letting
Letting

Since

Substituting

3.3.3 Properties of the Continuous-Time Fourier Transform


Linearity If x 1(t) and x 2(t) are two continuous-time signals with Fourier transforms X
1(jΩ) and X 2(jΩ), then the Fourier transform of a linear combination of x 1(t) and x 2(t) is
given by

(3.33)

where a 1 and a 2 are arbitrary constants.


Example 3.22 Find the Fourier transform of an impulse train with period T as given by
Solution A periodic signal x(t) with period T is expressed by

Taking the Fourier transform of both sides, we obtain

Using and the linearity property, we have

If x(t) is an impulse train with period T as given by

Since

Symmetry for Real Valued Signal If x(t) is a continuous-time real valued signal with
Fourier transform X(jΩ), then

(3.34)

where * stands for the complex conjugate.


Proof

Since x(t) is real, x*(t) = x(t), and we get

The X(jΩ) can be expressed in rectangular form as

If x(t) is real, then

implying that the real part is an even function of Ω and the imaginary part is an odd
function of Ω.
For real x(t) in polar form

indicating that the magnitude is an even function of Ω and the phase is an odd function
of Ω.
Symmetry for Imaginary Valued Signal If x(t) is a continuous-time imaginary valued
signal with Fourier transform X(jΩ), then

(3.35)

Proof
Since x(t) is purely imaginary, x(t) = −x*(t), and we get

Symmetry for Even and Odd Signals

(i) If x(t) is a continuous-time real-valued signal with even symmetry, then

(3.36a)

(ii) If x(t) is a continuous-time real-valued signal with odd symmetry, then

(3.36b)

Proof Since x(t) is real, x(t) = x*(t), and since x(t) has even symmetry, x(t) = x(−t); hence we get

Letting τ =−t, we obtain


The condition X*(jΩ) = X(jΩ) implies that the imaginary part of X(jΩ) is zero. Therefore, if x(t) is real-valued and has even symmetry, then X(jΩ) is real.
Similarly, for real-valued x(t) having odd symmetry, it can be shown that X*(jΩ) = −X(jΩ) and X(jΩ) is purely imaginary.
Time Shifting If x(t) is a continuous-time signal with Fourier transform X(jΩ), then the Fourier transform of x(t − t 0), the delayed version of x(t), is given by

(3.37)

Proof

Letting τ = t−t 0 , we obtain

Therefore, time shifting leaves the magnitude spectrum unchanged but introduces a phase shift −Ωt 0 in the transform, which is a linear function of Ω.
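A numerical check of the time-shifting property (ours; the Gaussian pulse, the shift t 0 = 2.5, and the grid are assumptions):

```python
import numpy as np

def trapezoid(y, t):
    return np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2

def ft(x, t, W):
    # direct numerical evaluation of the Fourier transform integral at frequency W
    return trapezoid(x * np.exp(-1j * W * t), t)

t = np.linspace(-20.0, 20.0, 200001)
x = np.exp(-t ** 2)                    # Gaussian pulse (effectively zero at the ends)
t0 = 2.5
x_shift = np.exp(-(t - t0) ** 2)       # x(t - t0)

for W in (0.3, 1.0, 2.0):
    lhs = ft(x_shift, t, W)
    rhs = np.exp(-1j * W * t0) * ft(x, t, W)
    assert abs(lhs - rhs) < 1e-8
    assert abs(abs(lhs) - abs(ft(x, t, W))) < 1e-8   # magnitude spectrum unchanged
```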
Example 3.23 Find the Fourier transform of δ(t−t 0)
Solution

Hence,

Frequency Shifting If x(t) is a continuous-time signal with Fourier transform X(jΩ),


then the Fourier transform of the signal is given by
(3.38)

Proof

Thus, multiplying a signal x(t) by a complex exponential in the time domain corresponds to a shift in the frequency domain.
Time and Frequency Scaling If x(t) is a continuous-time signal with Fourier transform
X(jΩ), then the Fourier transform of the signal x(at) is given by

(3.39)

where a is a real constant.


Proof

Letting τ = at, we obtain

Thus,

Differentiation in Time If x(t) is a continuous-time signal with Fourier transform


X(jΩ), then the Fourier transform of the is given by
(3.40)

Proof By the definition of the inverse Fourier transform, it is known that

Differentiating both sides of this equation with respect to t, we obtain

Taking the Fourier transform of both sides, we get

Thus, differentiation in the time domain corresponds to multiplication by jΩ in the frequency domain.
By repeated application of this property, we obtain

Example 3.24 Determine the Fourier transform of x(t) = u(t)


Solution Decomposing the unit step function into even and odd components, it is written
as

where the even component and the odd component

Hence,
Thus,

Therefore,

Example 3.25 Determine the Fourier transform of

(i) x(t) = sin(Ω0t)u(t)

(ii) x(t) = cos(Ω0t)u(t)


Solution (i)

since

By frequency shifting property, we get

(ii)
Since

By frequency shifting property, we get

Example 3.26 Determine the Fourier transform of

(i)

(ii)
Solution (i)

Let x 1 (t) = e −bt u(t)

By frequency shifting property


Similarly,

Hence,

(ii)

Differentiation in Frequency If x(t) is a continuous-time signal with Fourier transform


X(jΩ), then the Fourier transform of −jtx(t) is given by

(3.41)

Proof By the definition of the Fourier transform, it is known that

Differentiating this equation with respect to Ω, we have


implying that

Thus, differentiation in the frequency domain corresponds to multiplication by −jt in the time domain.
Equivalently, the Fourier transform of tx(t) can be expressed as j dX(jΩ)/dΩ.

Example 3.27 Find the Fourier transform of the following continuous-time signal:

Solution For

For n = 2, x(t) = te −bt u(t)


By differentiation in frequency property,

For
For

Thus

for

Integration If x(t) is a continuous-time signal with Fourier transform X(jΩ), then the
Fourier transform of the is given by

(3.42)

Proof Letting and differentiating both sides, we obtain

Now taking the Fourier transform of both sides, it yields


Hence

Parseval’s theorem If x(t) is a continuous-time signal with Fourier transform X(jΩ),


then the energy E of x(t) is given by

(3.43)

where |X(jΩ)|2 is called the energy density spectrum.


Proof The energy E of x(t) is defined as

(3.44)

Assuming that x(t) is complex-valued, x(t)x*(t) = |x(t)|², and x*(t) can be expressed in terms of its Fourier transform as

(3.45)

Eq. (3.44) can be rewritten as

(3.46)

Substituting Eq (3.45) in Eq. (3.46), we obtain

(3.47)
Interchanging the order of integration , Eq. (3.47) can be rewritten as

(3.48)

By definition of the Fourier transform

Thus,

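Parseval's theorem can be checked for x(t) = e −bt u(t), whose energy is 1/(2b) and whose transform is 1/(b + jΩ) (our numerical sketch; the truncation limits are assumptions):

```python
import numpy as np

def trapezoid(y, t):
    return np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2

b = 2.0
t = np.linspace(0.0, 20.0, 200001)
x = np.exp(-b * t)                       # x(t) = e^{-bt} u(t), t >= 0

E_time = trapezoid(x ** 2, t)            # should equal 1/(2b)

W = np.linspace(-2000.0, 2000.0, 400001)
spectrum = 1.0 / (b ** 2 + W ** 2)       # |X(jW)|^2 with X(jW) = 1/(b + jW)
E_freq = trapezoid(spectrum, W) / (2 * np.pi)

assert abs(E_time - 1 / (2 * b)) < 1e-6
assert abs(E_freq - E_time) < 1e-3       # small gap from truncating the W axis
```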
Example 3.28 Consider a signal x(t) with its Fourier transform given by

(i) Determine the energy of the signal x(t)

(ii) Find x(t)


Solution

(i) By Parseval’s theorem , the energy E of x(t) is given by


(ii)

can be written as

where

which are depicted as

Since

By linearity property,

The Convolution Property


If x 1(t) and x 2(t) are two continuous-time signals with Fourier transforms X 1(jΩ) and X
2(jΩ), then the Fourier transform of the convolution integral of x 1(t) and x 2(t) is given by

(3.49)
Hence, convolution of two signals x 1(t) and x 2(t) in the time domain corresponds to the product of their frequency spectra.
Proof By the definition of convolution integral ,

(3.50)

Taking the Fourier transform of Eq. (3.50), we obtain

(3.51)

Interchanging the order of integration, Eq. (3.51) can be rewritten as

(3.52)

By the time-shifting property, the Fourier transform of x 2(t − τ) is e −jΩτ X 2(jΩ).

Hence, Eq. (3.52) becomes

(3.53)

By the definition of continuous-time Fourier transform,

Thus

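The convolution property can be checked numerically (our sketch; the rectangular and Gaussian test signals and the sampling step are assumptions). Sampled on a common grid, the Riemann-sum approximation of the transform of the convolution factors into the product of the individual transforms:

```python
import numpy as np

def trapezoid(y, t):
    return np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2

def ft(x, t, W):
    return trapezoid(x * np.exp(-1j * W * t), t)

dt = 1e-3
t = np.arange(-2.0, 2.0 + dt, dt)
x1 = np.where(np.abs(t) <= 0.5, 1.0, 0.0)          # rectangular pulse
x2 = np.exp(-t ** 2)                               # Gaussian

y = np.convolve(x1, x2) * dt                       # approximates the convolution integral
ty = np.arange(len(y)) * dt + 2 * t[0]             # time axis of the linear convolution

for W in (0.5, 1.0, 3.0):
    assert abs(ft(y, ty, W) - ft(x1, t, W) * ft(x2, t, W)) < 1e-3
```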
Example 3.29 Determine the Fourier transform of the triangular output signal y(t) of an
LTI system as shown in Figure 3.13.
Figure 3.13 Triangular output signal

Solution A triangular signal can be represented as the convolution of two rectangular pulse signals x 1(t) and x 2(t) defined by

By definition,

By the convolution property, Y(jΩ) the Fourier transform of y(t) is given by

Example 3.30 Consider an LTI continuous-time system with the impulse response
. Find the output y(t) of the system for an input

Solution Since
shown as

The output y(t) of the system is given by

By convolution property, we have

which is depicted as

Therefore,

Duality property
For a given Fourier transform pair
By interchanging the roles of time and frequency, a new Fourier transform pair is
obtained as

For example, the duality exists between the Fourier transform pairs of Examples 3.19
and 3.20 as given by

The Modulation Property


Due to the duality between the time domain and the frequency domain, multiplication in the time domain corresponds to convolution in the frequency domain.
If x 1(t) and x 2(t) are two continuous-time signals with Fourier transforms X 1(jΩ) and
X 2(jΩ), then the Fourier transform of the product of x 1(t) and x 2(t) is given by

(3.54)

This can be easily proved by the duality property. Eq. (3.54) is called the modulation property, since the multiplication of two signals often implies amplitude modulation.
Example 3.31 Find the Fourier transform of

Solution Let and x 2(t) = x(t)


Example 3.32 Let y(t) be the convolution of two signals x 1(t) and x 2(t) defined by

Determine the Fourier transform of y(t).


Solution By dual property, the Fourier transform of sinc(t) is given by

The Fourier transform of x 1(t) is given by

The Fourier transform of x 2(t) is given by

By modulation property

By convolution property
There is no overlap between the two transforms X 1(jΩ) and X 2(jΩ), and hence

Therefore,

Example 3.33 Determine the value of

Solution Since the Fourier transform of sinc (2t) is , using Parseval’s theorem ,
Example 3.34 Consider a signal x(t) with its Fourier transform given by

Find the Fourier transform the system output y(t) given by

Solution By the definition of the inverse Fourier transform, we have

Hence,

By modulation property , we obtain

The Fourier transform properties of continuous-time signals are summarized in Table 3.2, and basic Fourier transform pairs are listed in Table 3.3.
Table 3.2 Some properties of continuous-time Fourier transforms

Property Aperiodic signal Fourier transform


Linearity: a 1 x 1(t) + a 2 x 2(t) → a 1 X 1(jΩ) + a 2 X 2(jΩ)

Time shifting: x(t − t 0) → e −jΩt 0 X(jΩ)

Symmetry: x(t) real → X(−jΩ) = X*(jΩ)

x e (t) (x(t) real) → Re[X(jΩ)]

x o (t) (x(t) real) → jIm[X(jΩ)]

Time reversal: x(−t) → X(−jΩ)

Frequency shifting: e jΩ 0 t x(t) → X(j(Ω − Ω 0))

Time and frequency scaling: x(at) → (1/|a|)X(jΩ/a)

Differentiation in time: dx(t)/dt → jΩX(jΩ)

Differentiation in frequency: −jtx(t) → dX(jΩ)/dΩ

Integration: ∫x(τ)dτ (from −∞ to t) → X(jΩ)/(jΩ) + πX(0)δ(Ω)

Convolution property: x 1(t) ∗ x 2(t) → X 1(jΩ)X 2(jΩ)

Modulation property: x 1(t)x 2(t) → (1/(2π))[X 1(jΩ) ∗ X 2(jΩ)]

Table 3.3 Basic Fourier transform pairs

Signal → Fourier transform

δ(t) → 1

1 → 2πδ(Ω)

e jΩ 0 t → 2πδ(Ω − Ω 0)

sin(Ω 0 t) → (π/j)[δ(Ω − Ω 0) − δ(Ω + Ω 0)]

cos(Ω 0 t) → π[δ(Ω − Ω 0) + δ(Ω + Ω 0)]

Periodic signal with period T 0 (e.g., a periodic square wave with x(t) = x(t + T 0)) and Fourier series coefficients a n → 2π Σ n a n δ(Ω − nΩ 0)

δ(t − t 0) → e −jΩt 0

u(t) → 1/(jΩ) + πδ(Ω)

sgn(t) → 2/(jΩ)

3.4 The Frequency Response of Continuous-Time Systems


As in Chapter 2, the input-output relation of a useful class of continuous-time LTI systems
satisfies the linear constant coefficient differential equation

(3.55)

where coefficients a n and b n are real constants.


From convolution property, it is known that

(3.56)

which can be rewritten as

(3.57)

Applying Fourier transform to both sides of Eq. (3.55), we obtain

(3.58)

By linearity property, Eq. (3.58) becomes

(3.59)

From the differentiation property, Eq. (3.59) can be rewritten as

(3.60)

which can be rewritten as

(3.61)

Thus, the frequency response of a continuous-time LTI system is given by


(3.62)

The function H(jΩ) is a rational function being a ratio of polynomials in (jΩ).
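Since H(jΩ) is a ratio of polynomials in jΩ, it can be evaluated directly from the coefficient lists. The sketch below uses a hypothetical first-order system dy/dt + 3y(t) = 2x(t), our own example rather than one from the text, for which H(jΩ) = 2/(jΩ + 3):

```python
import numpy as np

# Hypothetical system (our example): dy/dt + 3 y(t) = 2 x(t)
num = [2.0]            # numerator coefficients of (jW)^k, highest power first
den = [1.0, 3.0]       # denominator coefficients

W = np.linspace(-50.0, 50.0, 1001)
H = np.polyval(num, 1j * W) / np.polyval(den, 1j * W)   # H(jW) on a frequency grid

assert abs(H[len(W) // 2] - 2.0 / 3.0) < 1e-12     # DC gain H(j0) = 2/3
assert np.all(np.abs(H) <= 2.0 / 3.0 + 1e-12)      # first-order low-pass: |H| peaks at DC
```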

3.4.1 Distortion During Transmission


Eq. (3.56) implies that an input signal x(t) transmitted through the system is changed into an output signal y(t), where X(jΩ) and Y(jΩ) are the spectra of the input and output signals and H(jΩ) is the frequency response of the system.
During transmission, the input amplitude spectrum |X(jΩ)| is changed to |X(jΩ)||H(jΩ)|. Similarly, the input phase spectrum ∠X(jΩ) is changed to ∠X(jΩ) + ∠H(jΩ). Thus, an input spectral component of frequency Ω is modified in amplitude by the factor |H(jΩ)| and shifted in phase by the angle ∠H(jΩ).
In general, therefore, the output waveform differs from the input waveform; the transmission through the system introduces distortion.
Example 3.35 Consider an LTI system described by the following differential equation

Find the Fourier transform of the impulse response of the system.


Solution Apply the Fourier transform on both sides of the differential equation, then we
obtain

The Fourier transform of the impulse response H(jΩ) is given by

Example 3.36 Find the frequency response H(jΩ) of the following circuit
Solution

Taking the Fourier transform of both sides of this equation, we obtain

3.5 Some Communication Application Examples


3.5.1 Amplitude Modulation (AM) and Demodulation
Amplitude Modulation
In amplitude modulation, the amplitude of the carrier signal c(t) is varied in accordance with the baseband signal (message signal) m(t), also known as the modulating signal.
The AM signal is given by

(3.63)

(3.63a)

For convenience, if it is assumed that θ c = 0


(3.64)

(3.65)

(3.66)

(3.67)

(3.68)

Eq. (3.68) implies that AM shifts the message spectrum so that it is centered at ±Ω c . The message signal m(t) can be recovered if Ω c > Ω m , so that the replica spectra do not overlap. AM modulation in the frequency domain is illustrated in Figure 3.14.

Figure 3.14 Amplitude modulation

Amplitude Demodulation
The message signal m(t) can be extracted by multiplying the AM signal s(t) by the same
carrier c(t) and passing the resulting signal through a low-pass filter as shown in Figure
3.15.

Figure 3.15 Amplitude demodulation

(3.69)

The amplitude demodulation in frequency domain is illustrated in Figure 3.16.


Figure 3.16 Illustration of amplitude demodulation in frequency domain
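The whole modulation and demodulation chain can be sketched numerically (our illustration; the 5 Hz message, 500 Hz carrier, sample rate, and the ideal FFT-mask low-pass filter are all assumptions):

```python
import numpy as np

fs = 10000.0                                  # sample rate (Hz), our choice
t = np.arange(0, 1.0, 1 / fs)
m = np.cos(2 * np.pi * 5 * t)                 # message signal, 5 Hz
c = np.cos(2 * np.pi * 500 * t)               # carrier, well above the message band

s = m * c                                     # amplitude modulation
v = s * c                                     # demodulation: multiply by the carrier,
                                              #   v = m/2 + (m/2) cos(2*pi*1000*t)

# ideal low-pass filter implemented by zeroing FFT bins above 50 Hz
V = np.fft.fft(v)
f = np.fft.fftfreq(len(v), 1 / fs)
V[np.abs(f) > 50.0] = 0.0
m_rec = 2 * np.real(np.fft.ifft(V))           # factor 2 restores the amplitude

assert np.max(np.abs(m_rec - m)) < 1e-6       # message recovered
```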

3.5.2 Single-Sideband (SSB) AM


Double-sideband modulation was used in Section 3.5.1. By removing the upper sideband with a low-pass filter of cutoff frequency Ω c , or the lower sideband with a high-pass filter of cutoff frequency Ω c , single-sideband modulation, which requires only half the bandwidth, can be used. The frequency-domain representation of single-sideband modulation is shown in Figure 3.17. However, single-sideband modulation requires nearly ideal filters, which increases the transmitter cost.
Figure 3.17 Single-sideband AM

3.5.3 Frequency Division Multiplexing (FDM)


In frequency-division multiplexing, multiple signals are transmitted over a single wideband channel using a single transmitting antenna. Different carriers with adequate separation are used to modulate each of these signals so that there is no overlap between the spectra of the modulated signals. The modulated signals are summed before being sent to the antenna. At the receiver, a specific signal is recovered by extracting the corresponding frequency band with a band-pass filter. The FDM spectra for three modulated signals are shown in Figure 3.18.

Figure 3.18 Illustration of FDM for three signals

3.6 Problems

1. Find the exponential Fourier series representation for each of the following signals:
(i) x(t) = cos(Ω0 t)

(ii) x(t) = sin(Ω0 t)

(iii)
(iv) x(t) = sin2(t)

(v) x(t) = cos(6t) + sin (4t)

(vi)

2. What signal will have the following Fourier series coefficients

3. If x 1 (t) and x 2(t) are periodic signals with fundamental period T 0, find the Fourier
series representation of x(t) = x 1(t)x 2(t).

4. Consider the periodic signal x(t) given by

Determine the trigonometric Fourier series representation of the signal x(t).

5. Find the Fourier series of a periodic signal x(t) with period 3 defined over one period
by
6. Find the Fourier series of a periodic signal x(t) with period 6 defined over one period
by

7. Find Fourier series of the periodic signal shown in Figure P3.1.

Figure P3.1 Periodic signal of problem 7

8. Determine the exponential Fourier series representation of the periodic signal


depicted in Figure P3.2

Figure P3.2 Periodic signal of problem 8

9. Determine trigonometric Fourier series representation of the signal shown in Figure


P3.3

Figure P3.3 Periodic signal of problem 9


10. Plot the magnitude and phase spectrum of the periodic signal shown in Figure P3.4

Figure P3.4 Periodic signal of problem 10

11. Find the Fourier transform of

12. Find the Fourier transform of

13. Find the Fourier transform of rectangular pulse given by

14. Find the Fourier transform of

15. Find the Fourier transform of the complex sinusoidal pulse given by

16. Find Fourier transform of the following signal shown in Figure P3.5
Figure P3.5 Signal x(t)

17. Consider the following two signals x(t) and y(t) as shown in Figure P3.6 . Determine
Fourier transform y(t) using the Fourier transform of x(t), time shifting property, and
differentiation property

Figure P3.6 Signals x(t) and y(t)

18. Find the inverse Fourier transform of

19. Find the inverse Fourier transform of

20. Consider the following communication system shown in Figure P3.7 to transmit two
signals simultaneously over the same channel.
Figure P3.7 Communication system

Plot the spectra of x(t), y(t), and z(t) for given the following spectra of the two input
signal shown in Figure P3.8.

Figure P3.8 Spectra of x 1(t) and x 2(t)

21. Determine y(t) of an LTI system with input


and the following H(jΩ) depicted in Figure P3.9 where H(jΩ) is H(jnΩ 0)
evaluated at frequency nΩ0.

Figure P3.9 H(jΩ) of LTI system of problem 21

22. Sketch amplitude single-sideband modulation and demodulation if the message


signal m(t) = cos(Ω m t).

Further Reading
1. Lanczos, C.: Discourse on Fourier Series. Oliver Boyd, London (1966)
2. Körner, T.W.: Fourier Analysis. Cambridge University Press, Cambridge (1989)
3. Walker, P.L.: The Theory of Fourier Series and Integrals. Wiley, New York (1986)
4. Churchill, R.V., Brown, J.W.: Fourier Series and Boundary Value Problems, 3rd edn. McGraw-Hill, New York (1978)
5. Papoulis, A.: The Fourier Integral and Its Applications. McGraw-Hill, New York (1962)
6. Bracewell, R.N.: Fourier Transform and Its Applications, rev, 2nd edn. McGraw-Hill, New York (1986)
7. Morrison, N.: Introduction to Fourier Analysis. Wiley, New York (1994)
8. Lathi, B.P.: Linear Systems and Signals, 2nd edn. Oxford University Press, New York (2005)
9. Oppenheim, A.V., Willsky, A.S.: Signals and Systems. Prentice-Hall, Englewood Cliffs, NJ (1983)
© Springer International Publishing AG, part of Springer Nature 2018
K. Deergha Rao, Signals and Systems, https://doi.org/10.1007/978-3-319-68675-2_4

4. Laplace Transforms
K. Deergha Rao1
(1) Department of Electronics and Communication Engineering, Vasavi College of
Engineering (Affiliated to Osmania University), Hyderabad, Telangana, India


The Laplace transform is a generalization of the Fourier transform of a continuous time
signal. The Laplace transform converges for signals for which the Fourier transform does
not. Hence, the Laplace transform is a useful tool in the analysis and design of continuous
time systems. This chapter introduces the bilateral Laplace transform , the unilateral
Laplace transform , the inverse Laplace transform, and properties of the Laplace transform
. Also, in this chapter, the LTI systems, including the systems represented by the linear
constant coefficient differential equations , are characterized and analyzed using the
Laplace transform. Further, the solution of state-space equations of continuous time LTI
systems using Laplace transform is discussed.

4.1 The Laplace Transform


4.1.1 Definition of Laplace Transform
The Laplace transform of a signal x(t) is defined as

(4.1)

The complex variable s is of the form s = σ + jΩ, with real part σ and imaginary part Ω. The Laplace transform defined by Eq. (4.1) is called the bilateral Laplace transform.

4.1.2 The Unilateral Laplace Transform


The unilateral Laplace transform plays an important role in the analysis of causal systems
described by constant coefficient linear differential equations with initial conditions.
The unilateral Laplace transform is mathematically defined as
(4.2)

The difference between Eqs. (4.1) and (4.2) lies in the lower limit of integration. The bilateral Laplace transform depends on the entire signal, whereas the unilateral Laplace transform depends only on the signal for t ≥ 0, i.e., it treats x(t) as a right-sided signal with x(t) = 0 for t < 0.

4.1.3 Existence of Laplace Transforms


The Laplace transform is said to exist if the magnitude of the transform is finite, that is,
|X(s)| < ∞.
Piecewise continuous A function x(t) is piecewise continuous on a finite interval
a ≤ t ≤ b, if x is continuous on [a,b], except possibly at finitely many points at each of
which x has a finite left and right limit.
Sufficient Condition
The sufficient condition for existence of the Laplace transform is that if x(t) is piecewise continuous on (0, ∞) and there exist constants k and M such that |x(t)| ≤ Me kt , then X(s) exists for ℜe(s) > k.
Proof As x(t) is piecewise continuous on (0, ∞), x(t)e −st is integrable on (0, ∞).

(4.3)

For ℜe(s) > k, |ℒ{x(t)}| < ∞.

4.1.4 Relationship Between Laplace Transform and Fourier


Transform
When the complex variable s is purely imaginary, i.e., s = jΩ, Eq. (4.1) becomes

(4.4)

Eq. (4.4) is the Fourier transform of x(t), that is,

(4.5)

If s is not purely imaginary, Eq. (4.1) can be written as


(4.6)

Eq. (4.6) can be rewritten as

(4.7)

The right hand side of Eq. (4.7) is the Fourier transform of x(t)e −σt. Thus, the Laplace
transform can be interpreted as the Fourier transform of x(t) after multiplication by a real
exponential signal.
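This relationship is easy to confirm numerically (our sketch; the choice x(t) = e −t u(t), for which X(s) = 1/(s + 1), and the evaluation point s = 0.5 + j2 are assumptions):

```python
import numpy as np

def trapezoid(y, t):
    return np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2

t = np.linspace(0.0, 60.0, 600001)
x = np.exp(-t)                                   # x(t) = e^{-t} u(t)

sigma, W = 0.5, 2.0
s = sigma + 1j * W

# Laplace transform integral (x is zero for t < 0)
X_laplace = trapezoid(x * np.exp(-s * t), t)

# Fourier transform of x(t) e^{-sigma t}, evaluated at Omega = W
X_fourier = trapezoid((x * np.exp(-sigma * t)) * np.exp(-1j * W * t), t)

assert abs(X_laplace - 1.0 / (s + 1.0)) < 1e-6   # matches X(s) = 1/(s + 1)
assert abs(X_laplace - X_fourier) < 1e-9         # the two integrals coincide
```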

4.1.5 Representation of Laplace Transform in the S-Plane


The Laplace transform is a ratio of polynomials in the complex variable, which can be
represented by

(4.8)

where N(s) is the numerator polynomial and D(s) is the denominator polynomial. Eq. (4.8) is referred to as a rational transform. The roots of the numerator polynomial are referred to as the zeros of X(s), because X(s) = 0 at those values of s. The roots of the denominator polynomial are called the poles of X(s), as X(s) = ∞ at those values of s. A rational Laplace transform can be specified by marking the locations of its poles and zeros by × and ○ in the s-plane; this is called the pole-zero plot of the Laplace transform. For a signal, the Laplace transform converges for a range of values of s. This range is referred to as the region of convergence (ROC), which is indicated as a shaded region in the pole-zero plot.

4.2 Properties of the Region of Convergence


Property 1 The ROC of X(s) consists of strips parallel to the jΩ axis.
The ROC of X(s) contains the values of s = σ + jΩ for which the Fourier transform of
x(t)e^{−σt} converges. Thus, convergence depends only on the real part σ of s, not on the
frequency Ω. Hence, the ROC of X(s) consists of strips parallel to the jΩ axis.
Property 2 The ROC of a rational Laplace transform does not contain any poles.
In the ROC, X(s) must be finite for all s; since X(s) is infinite at a pole, Eq. (4.1)
does not converge at a pole. Hence, the ROC contains no poles.
Property 3 ROC is the entire s-plane for a finite duration x(t) , if there is at least
one value of s for which the Laplace transform converges.
Proof A finite duration signal is zero outside a finite interval, say T₁ ≤ t ≤ T₂, as shown
in Figure 4.1. Let us assume that x(t)e^{−σt} is absolutely integrable for some value σ = σ₁
such that

$$\int_{T_1}^{T_2} |x(t)|e^{-\sigma_1 t}\,dt < \infty \qquad (4.9)$$

Figure 4.1 Finite duration signal

Then, the line ℜe(s) = σ₁ is in the ROC. For ℜe(s) = σ₂ also to be in the ROC, it is
required that

$$\int_{T_1}^{T_2} |x(t)|e^{-\sigma_2 t}\,dt < \infty \qquad (4.10)$$

If σ₂ > σ₁, then e^{−σ₂t} decays more rapidly, and over the finite interval T₁ ≤ t ≤ T₂ on
which x(t) is nonzero, the maximum value of e^{−(σ₂−σ₁)t} is e^{−(σ₂−σ₁)T₁}.

Hence,

$$\int_{T_1}^{T_2} |x(t)|e^{-\sigma_2 t}\,dt \le e^{-(\sigma_2-\sigma_1)T_1}\int_{T_1}^{T_2} |x(t)|e^{-\sigma_1 t}\,dt \qquad (4.11)$$

The RHS of Eq. (4.11) is bounded, and hence so is the LHS. Thus, ℜe(s) > σ₁ must also
be in the ROC. Similarly, if σ₂ < σ₁, it can be shown that x(t)e^{−σ₂t} is absolutely
integrable. Hence, the ROC is the entire s-plane.
Property 4: If the ROC of a right-sided signal contains the line ℜe(s) = σ₁, then all
values of s for which ℜe(s) > σ₁ are also in the ROC.
Proof For a right-sided signal, x(t) = 0 prior to some finite time t 1 as shown in Figure
4.2
Figure 4.2 Right-sided signal

If the Laplace transform converges for some value σ = σ₁, then

$$\int_{t_1}^{\infty} |x(t)|e^{-\sigma_1 t}\,dt < \infty \qquad (4.12)$$

If x(t) is right sided, then for σ₂ > σ₁,

$$\int_{t_1}^{\infty} |x(t)|e^{-\sigma_2 t}\,dt \le e^{-(\sigma_2-\sigma_1)t_1}\int_{t_1}^{\infty} |x(t)|e^{-\sigma_1 t}\,dt < \infty \qquad (4.13)$$

For σ₂ > σ₁, x(t)e^{−σ₂t} is absolutely integrable, as e^{−σ₂t} decays faster than e^{−σ₁t}
as t → ∞.

Thus, all values of s with ℜe(s) > σ₁ are also in the ROC.
Property 5: If the ROC of a left-sided signal contains the line ℜe(s) = σ₁, then all
values of s for which ℜe(s) < σ₁ are also in the ROC.

Proof For a left-sided signal, x(t) = 0 after some finite time t₂, as shown in Figure 4.3.
This can be proved easily with the same argument as that of Property 4.

Figure 4.3 Left-sided signal

Property 6: If the ROC of a two-sided signal contains the line ℜe(s) = σ₀, then the ROC
consists of a strip in the s-plane that includes that line.
Proof A two-sided signal is of infinite duration for both t > 0 and t < 0, as shown in
Figure 4.4(a).

Figure 4.4 (a) Two-sided signal (b) Right-sided signal. (c) Left-sided signal. (d) ROC of the two-sided signal

Let us choose an arbitrary time t₀ that divides the signal into the sum of a right-sided
signal and a left-sided signal, as shown in Figure 4.4(b) and (c). The Laplace transform of x(t)
converges for the values of s for which the transforms of both the right-sided and the left-sided
components converge. It is known from Property 4 that the ROC of the Laplace transform of the
right-sided component X_r(s) consists of a half-plane ℜe(s) > σ_r for some value σ_r; and from
Property 5, the ROC of the left-sided component X_l(s) consists of a half-plane ℜe(s) < σ_l for
some value σ_l. The overlap of these two half-planes is the ROC of the two-sided signal x(t), as
shown in Figure 4.4(d), under the assumption that σ_r < σ_l. If σ_r ≥ σ_l, there is no overlap;
in this case, X(s) does not exist even though X_r(s) and X_l(s) individually exist.

4.3 The Inverse Laplace Transform


From Eq. (4.6), it is known that the Laplace transform X(σ + jΩ) of a signal x(t) is given
by

$$X(\sigma + j\Omega) = \int_{-\infty}^{\infty}\left[x(t)e^{-\sigma t}\right]e^{-j\Omega t}\,dt = \mathcal{F}\{x(t)e^{-\sigma t}\} \qquad (4.14)$$

Applying the inverse Fourier transform to the above relationship, we obtain

$$x(t)e^{-\sigma t} = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(\sigma + j\Omega)e^{j\Omega t}\,d\Omega \qquad (4.15)$$

Multiplying both sides of Eq. (4.15) by e^{σt}, it follows that

$$x(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(\sigma + j\Omega)e^{(\sigma + j\Omega)t}\,d\Omega \qquad (4.16)$$

As s = σ + jΩ and σ is a constant, ds = j dΩ.

Substituting s = σ + jΩ and ds = j dΩ in Eq. (4.16) and changing the variable of integration
from Ω to s, we arrive at the following inverse Laplace transform:

$$x(t) = \frac{1}{2\pi j}\int_{\sigma - j\infty}^{\sigma + j\infty} X(s)e^{st}\,ds \qquad (4.17)$$

4.4 Properties of the Laplace Transform


Linearity If x 1(t) and x 2(t) are two signals with Laplace transforms X 1(s) and X 2(s)
and ROCs R 1 and R 2, respectively, then the Laplace transform of a linear combination of
x 1(t) and x 2(t) is given by

$$\mathcal{L}\{a_1x_1(t) + a_2x_2(t)\} = a_1X_1(s) + a_2X_2(s) \qquad (4.18)$$

whose ROC is at least (R 1 ∩ R 2), a 1 and a 2 being arbitrary constants.


Proof

(4.19)

(4.20)

The result concerning the ROC follows directly from the theory of complex variables
concerning the convergence of a sum of two convergent series.
Time Shifting If x(t) is a signal with Laplace transform X(s) and ROC R, then for any
constant t₀ ≥ 0, the Laplace transform of x(t − t₀) is given by

$$\mathcal{L}\{x(t-t_0)\} = e^{-st_0}X(s) \qquad (4.21)$$

whose ROC is the same as that of X(s).


Proof

(4.22)

Substituting τ = t – t 0,

(4.23)

Shifting in the s-domain If x(t) is a signal with Laplace transform X(s) and ROC R,
then the Laplace transform of the signal e^{s₀t}x(t) is given by

$$\mathcal{L}\{e^{s_0 t}x(t)\} = X(s - s_0) \qquad (4.24)$$

whose ROC is R + ℜe(s₀), that is, R shifted by ℜe(s₀)


Proof

(4.25)

Time Scaling If x(t) is a signal with Laplace transform X(s) and ROC R, then the
Laplace transform of x(at) for any nonzero real constant a is given by

$$\mathcal{L}\{x(at)\} = \frac{1}{|a|}X\!\left(\frac{s}{a}\right) \qquad (4.26)$$

whose ROC is the scaled region aR


Proof

(4.27)

Letting τ = at; dt = dτ/a


Then,

(4.28)

Differentiation in the Time Domain If x(t) is a signal with the Laplace transform X (s)
and ROC R, then

$$\mathcal{L}\left\{\frac{dx(t)}{dt}\right\} = sX(s) \qquad (4.29)$$

with ROC containing R.


Proof This property can be proved by differentiating both sides of the inverse Laplace
transform expression

$$x(t) = \frac{1}{2\pi j}\int_{\sigma - j\infty}^{\sigma + j\infty} X(s)e^{st}\,ds$$

Then,

$$\frac{dx(t)}{dt} = \frac{1}{2\pi j}\int_{\sigma - j\infty}^{\sigma + j\infty} sX(s)e^{st}\,ds \qquad (4.30)$$

From the above expression, it can be stated that the inverse Laplace transform of sX(s)
is dx(t)/dt.

Differentiation in the s-Domain If x(t) is a signal with the Laplace transform X(s), then

$$\mathcal{L}\{tx(t)\} = -\frac{dX(s)}{ds} \qquad (4.31)$$
Proof From the definition of the Laplace transform,

$$X(s) = \int_{-\infty}^{\infty} x(t)e^{-st}\,dt$$

Differentiating both sides of the above equation with respect to s, we get

$$\frac{dX(s)}{ds} = \int_{-\infty}^{\infty} (-t)x(t)e^{-st}\,dt = -\mathcal{L}\{tx(t)\} \qquad (4.32)$$

with ROC = R
Division by t
If x(t) is a signal with Laplace transform X(s), then

$$\mathcal{L}\left\{\frac{x(t)}{t}\right\} = \int_{s}^{\infty} X(p)\,dp \qquad (4.33)$$

provided that lim_{t→0} x(t)/t exists.

Proof Let x₁(t) = x(t)/t; then x(t) = t x₁(t).

By using the differentiation in the s-domain property,

$$-\frac{dX_1(s)}{ds} = \mathcal{L}\{tx_1(t)\} = X(s) \qquad (4.34)$$

which can be rewritten as

$$dX_1(s) = -X(s)\,ds \qquad (4.35)$$

Integrating both sides of Eq. (4.35) from s to ∞ yields

$$X_1(s) = \mathcal{L}\left\{\frac{x(t)}{t}\right\} = \int_{s}^{\infty} X(p)\,dp \qquad (4.36)$$

Integration If x(t) is a signal with Laplace transform X(s) and ROC R, then
$$\mathcal{L}\left\{\int_{-\infty}^{t} x(\tau)\,d\tau\right\} = \frac{X(s)}{s} \qquad (4.37)$$

with ROC containing R ∩ ℜe(s) > 0.


Proof This property can be proved by integrating both sides of the inverse Laplace
transform expression

$$x(t) = \frac{1}{2\pi j}\int_{\sigma - j\infty}^{\sigma + j\infty} X(s)e^{st}\,ds$$

Then,

$$\int_{-\infty}^{t} x(\tau)\,d\tau = \frac{1}{2\pi j}\int_{\sigma - j\infty}^{\sigma + j\infty} \frac{X(s)}{s}\,e^{st}\,ds$$

Consequently, the inverse Laplace transform of X(s)/s is ∫₋∞ᵗ x(τ)dτ.

Convolution in the Time Domain If x 1(t) and x 2(t) are two signals with Laplace
transforms X 1(s) and X 2(s) with ROCs R 1 and R 2, respectively, then

$$\mathcal{L}\{x_1(t) * x_2(t)\} = X_1(s)X_2(s) \qquad (4.38)$$

with ROC containing R 1 ∩ R 2


Proof

$$\mathcal{L}\{x_1(t)*x_2(t)\} = \int_{-\infty}^{\infty}\left[\int_{-\infty}^{\infty} x_1(\tau)x_2(t-\tau)\,d\tau\right]e^{-st}\,dt \qquad (4.39)$$

Changing the order of integration, Eq. (4.39) can be rewritten as

$$\mathcal{L}\{x_1(t)*x_2(t)\} = \int_{-\infty}^{\infty} x_1(\tau)\left[\int_{-\infty}^{\infty} x_2(t-\tau)e^{-st}\,dt\right]d\tau \qquad (4.40)$$

Let t₁ = t − τ; dt₁ = dt;

$$\mathcal{L}\{x_1(t)*x_2(t)\} = \int_{-\infty}^{\infty} x_1(\tau)e^{-s\tau}X_2(s)\,d\tau = X_1(s)X_2(s) \qquad (4.41)$$

The ROC includes R₁ ∩ R₂ and is larger if pole-zero cancellation occurs.
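The convolution property can be verified numerically for a concrete pair of signals. The short Python sketch below (an illustrative check, not part of the text) compares the numerically computed transform of x₁(t) * x₂(t) with the product X₁(s)X₂(s) for x₁(t) = e^{−t}u(t) and x₂(t) = e^{−2t}u(t), whose convolution is (e^{−t} − e^{−2t})u(t), evaluated at a real s inside both ROCs.

```python
import math

def laplace_numeric(x, s, T=40.0, n=100000):
    """Trapezoidal approximation of integral_0^T x(t) e^{-st} dt."""
    dt = T / n
    total = 0.0
    for k in range(n + 1):
        t = k * dt
        w = 0.5 if k in (0, n) else 1.0
        total += w * x(t) * math.exp(-s * t)
    return total * dt

# x1(t) = e^{-t} u(t), x2(t) = e^{-2t} u(t); their convolution is (e^{-t} - e^{-2t}) u(t).
s = 1.0
lhs = laplace_numeric(lambda t: math.exp(-t) - math.exp(-2 * t), s)
rhs = laplace_numeric(lambda t: math.exp(-t), s) * laplace_numeric(lambda t: math.exp(-2 * t), s)
print(lhs, rhs)   # both ~ 1/6
```

At s = 1, X₁(s)X₂(s) = (1/2)(1/3) = 1/6, which the left-hand side matches to within the integration error.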


Convolution in the Frequency Domain
If x 1(t) and x 2(t) are two signals with Laplace transforms X 1(s) and X 2(s), then

$$\mathcal{L}\{x_1(t)x_2(t)\} = \frac{1}{2\pi j}\int_{\sigma - j\infty}^{\sigma + j\infty} X_1(p)X_2(s-p)\,dp \qquad (4.42)$$

Proof Let x(t) = x 1(t)x 2(t)


with Laplace transforms X₁(s) and X₂(s) and regions of convergence ℜe(s) > σ₁ and
ℜe(s) > σ₂, respectively; then

(4.43)

According to the inverse integral,

(4.44)

Substituting this relationship in Eq. (4.43), it follows that

(4.45)

Permuting the sequence of integration, we obtain

(4.46)

where
(4.47)

This integral converges for ℜe(s − p) > σ₂. Substituting Eq. (4.47) in Eq. (4.46)
yields the following, proving the property:

(4.48)

The above properties of the Laplace transform are summarized in Table 4.1.
Table 4.1 Some properties of the Laplace transform

Property | Signal | Laplace transform | ROC

Linearity | a₁x₁(t) + a₂x₂(t) | a₁X₁(s) + a₂X₂(s) | At least R₁ ∩ R₂

Time shifting | x(t − t₀) | e^{−st₀}X(s) | Same as R

Shifting in the s-domain | e^{s₀t}x(t) | X(s − s₀) | Shifted version of R (R + ℜe(s₀))

Time scaling | x(at) | (1/|a|)X(s/a) | Scaled version of R (aR)

Differentiation in the time domain | dx(t)/dt | sX(s) | At least R

Differentiation in the s-domain | −tx(t) | dX(s)/ds | R

Integration | ∫₋∞ᵗ x(τ)dτ | X(s)/s | R ∩ {ℜe(s) > 0}

Convolution | x₁(t) * x₂(t) | X₁(s)X₂(s) | At least R₁ ∩ R₂

4.4.1 Laplace Transform Properties of Even and Odd Functions


Even Property
If x(t) is an even function such that x(t) = x(–t), then X(s) = X(−s).
Proof

(4.49)
Consider

(4.50)

Let t 1 = −t; then,

(4.51)

Since x(t) = x(−t), ℒ{x(t)} = ℒ{x(−t)}.

Thus, X(s) = X(−s).
Odd Property
If x(t) is an odd function such that x(t) = −x(−t), then X(s) = −X(−s).
Proof

(4.52)

Consider

(4.53)

Let t 1 = −t; then

(4.54)

Since x(t) = −x(−t), ℒ{x(t)} = −ℒ{x(−t)}.

Thus, X(s) = −X(−s).

4.4.2 Differentiation Property of the Unilateral Laplace Transform

Most of the properties of the bilateral transform tabulated in Table 4.1 also hold for the
unilateral transform. The differentiation property, however, is different for the unilateral
Laplace transform, as it requires that x(t) = 0 for t < 0 and that x(t) contain no impulses
or higher-order singularities at t = 0.
If x(t) is a signal with unilateral Laplace transform X(s), then the unilateral Laplace
transform of dx(t)/dt can be found by using integration by parts as

$$\mathcal{L}\left\{\frac{dx(t)}{dt}\right\} = sX(s) - x(0^+) \qquad (4.55)$$

Applying this a second time yields the unilateral Laplace transform of d²x(t)/dt² as given by

$$\mathcal{L}\left\{\frac{d^2x(t)}{dt^2}\right\} = s^2X(s) - sx(0^+) - \dot{x}(0^+) \qquad (4.56)$$

where ẋ(0⁺) is dx(t)/dt evaluated at t = 0⁺.

Similarly, applying this a third time yields the unilateral Laplace transform

$$\mathcal{L}\left\{\frac{d^3x(t)}{dt^3}\right\} = s^3X(s) - s^2x(0^+) - s\dot{x}(0^+) - \ddot{x}(0^+) \qquad (4.57)$$

where ẍ(0⁺) is d²x(t)/dt² evaluated at t = 0⁺.

Continuing this procedure for the nth time, the unilateral Laplace transform of dⁿx(t)/dtⁿ is
given by

$$\mathcal{L}\left\{\frac{d^nx(t)}{dt^n}\right\} = s^nX(s) - s^{n-1}x(0^+) - s^{n-2}\dot{x}(0^+) - \cdots - x^{(n-1)}(0^+) \qquad (4.58)$$

Example 4.1 Determine whether each of the following Laplace transforms corresponds to an
even or an odd time function, and comment on the ROCs.

(a)

(b)
(c) Comment on the ROCs
Solution

(a)
Hence, the corresponding x(t) is an odd function.

(b)
Hence, the corresponding x(t) is an even function.

(c) The ROCs for (a) and (b) are shown in Figure 4.5(a) and (b), respectively. From
Figure 4.5(a) and (b), it can be stated that for the time function to be even or odd, the
ROC must be two sided.

Figure 4.5 (a) ROC of . (b) ROC of

Example 4.2 A real and even signal x(t) with its Laplace transform X(s) has four poles
with one pole located at , with no zeros in the finite s-plane and X(0) = 16. Find X(s).

Solution Since X(s) has four poles with no zeros in the finite s-plane, it is of the form

As x(t) is real, the poles of X(s) must occur as conjugate reciprocal pairs.
Hence, and X(s) becomes

Since x(t) is even, X(s) must also be even, and hence the poles must be symmetric
about the jΩ axis. Therefore, .
Thus,

Assuming that the given pole location is that of p 1, that is, ,

We obtain

when s = 0, , and therefore K = 1.

Hence,

4.4.3 Initial Value Theorem


For a signal x(t) with Laplace transform X(s) and x(t) = 0 for t < 0,

$$x(0^+) = \lim_{s\to\infty} sX(s) \qquad (4.59)$$

Proof To prove the theorem, let the following integral first be evaluated by using
integration by parts
(4.60)

As s tends to ∞, Eq. (4.60) can be expressed as

(4.61)

As the integration is over t and independent of s, the calculation of the limit and the
integration can be permuted, provided that the integral converges uniformly. If ℒ{x(t)} exists, then

(4.62)

is valid. Hence, we get

(4.63)

4.4.4 Final Value Theorem


For a signal x(t) with Laplace transform X(s) and x(t) = 0 for t < 0,

$$\lim_{t\to\infty} x(t) = \lim_{s\to 0} sX(s) \qquad (4.64)$$

provided that all poles of sX(s) lie in the left half of the s-plane.

Proof To prove this, the following integration is to be evaluated

(4.65)

Again one can permute the sequence of determining the limit and the integration
provided the integral converges. The result is

(4.66)

and after integration it follows that


(4.67)

Therefore,

(4.68)
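Both theorems can be sanity-checked numerically by evaluating sX(s) at a very large and a very small real s. The Python fragment below does this for x(t) = (1 − e^{−2t})u(t), whose transform X(s) = 2/(s(s + 2)) satisfies x(0⁺) = 0 and lim_{t→∞} x(t) = 1; the signal and transform are chosen here purely for illustration.

```python
# Initial/final value checks for x(t) = (1 - e^{-2t}) u(t),
# whose Laplace transform is X(s) = 2 / (s (s + 2)).
def X(s):
    return 2.0 / (s * (s + 2.0))

# Initial value theorem: lim_{s->inf} s X(s) should equal x(0+) = 0.
s_big = 1e6
initial = s_big * X(s_big)

# Final value theorem: lim_{s->0} s X(s) should equal lim_{t->inf} x(t) = 1.
# (All poles of s X(s) lie in the left half-plane, so the theorem applies.)
s_small = 1e-9
final = s_small * X(s_small)

print(initial, final)   # ~0 and ~1
```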

4.5 Laplace Transforms of Elementary Functions


Unit Impulse Function The unit impulse function is defined by

$$\delta(t) = 0 \ \text{for } t \neq 0, \qquad \int_{-\infty}^{\infty} \delta(t)\,dt = 1 \qquad (4.69)$$

By definition, the Laplace transform of δ (t) can be written as

$$\mathcal{L}\{\delta(t)\} = \int_{-\infty}^{\infty} \delta(t)e^{-st}\,dt = 1 \qquad (4.70)$$

The ROC is the entire s-plane .


Unit Step Function The unit step function is defined by

$$u(t) = \begin{cases} 1, & t \ge 0 \\ 0, & t < 0 \end{cases} \qquad (4.71)$$

The Laplace transform of u(t) by definition can be written as

$$\mathcal{L}\{u(t)\} = \int_{0}^{\infty} e^{-st}\,dt = \frac{1}{s} \qquad (4.72)$$

Hence, the ROC for X(s) is ℜe(s) > 0.


Example 4.3 Find the Laplace transform of x(t) = δ(t – t 0).
Solution By using the time shifting property, we get
The ROC is the entire s-plane.
Example 4.4 Find the Laplace transforms of the following:
(i) x(t) = −e −αt u(−t) (ii) x(t) = e αt u(−t) (iii) x(t) = e −αt u(t)
(iv) x(t) = e αt u(t)
Solution

(i)
Because u(−t) = 1 for t < 0 and u(−t) = 0 for t > 0,

The ROC for X(s) is ℜe(s) < − α

(ii)
Because u(−t) = 1 for t < 0 and u(−t) = 0 for t > 0,

The ROC for X (s) is ℜe(s) < α

(iii) Let x 1(t) = u(t), then

By using the shifting in the s-domain property, we get


The ROC for X(s) is ℜe(s) > − α

(iv) Let x 1(t) = u(t), then

By using the shifting in the s-domain property, we get

The ROC for X(s) is ℜe(s) > α.


Example 4.5 Find the Laplace transform of

Solution Let x 1(t) = u(t), then

and for n = 2, x 2(t) = tu(t).


By using the differentiation in the s-Domain property, we get

Similarly, for n = 3, .

Again, by using the differentiation in the s-domain property, we get


In general, .

Example 4.6 Find the Laplace transform of

Solution Let , then

By using the shifting in the s-domain property, we get

Example 4.7 Find the Laplace transform of x(t) = sin ωt u(t)


Solution

Using the shifting in the s-domain property, we get

Therefore,
Example 4.8 Find the Laplace transform of x(t) = cos ωt u(t).
Solution

Using the shifting in the s-domain property, we get

Therefore,

Example 4.9 Find the Laplace transform of x(t) = e −αtsin ωt u(t).


Solution Let x 1(t) = sin ωt u(t), then

By using the shifting in the s-domain property, we get

Example 4.10 Find the Laplace transform of x(t) = e −αtcos ωt u(t).


Solution Let x 1(t) = cos ωt u(t), then

By using the shifting in the s-domain property, we get


Example 4.11 Find the Laplace transform of

Solution

Example 4.12 Find the Laplace transform of

Solution

Example 4.13 Consider the signal x(t) = e t u(t) + 2e 2t u(t).

(a) Does the Fourier transform of this signal converge?

(b) For which of the following values of α does the Fourier transform of x(t)e^{−αt}
converge?

(i) α = 1
(ii) α = 2.5

(c) Determine the Laplace transform X(s) of x(t).


Sketch the location of the poles and zeros of X(s) and the ROC .
Solution

(a) The Fourier transform of the signal does not converge as x(t) is not absolutely
integrable due to the rising exponentials.

(b) (i) For α = 1, x(t) e −αt = u(t) + 2e t u(t).


Although the growth rate has been slowed, the Fourier transform still does not
converge.
(ii) For α = 2.5, x(t) e −αt = e −1.5t u(t) + 2e −0.5t u(t), the Fourier transform
converges

(c) The Laplace transform of x(t) is

and its pole-zero plot and ROC are as shown in Figure 4.6.

Figure 4.6 Pole-zero plot and ROC


It is noted that if α > 2, s = α + jΩ is in the region of convergence and, as shown in
part (b)(ii), the Fourier transform converges.
Example 4.14 The Laplace transform H(s) of the impulse response h(t) for an LTI
system is given by

Determine the system output y(t) for all t if the input x(t) is given by x(t) = e −3t/2 + 2e
−t for all t.

Solution From the convolution integral,

Let x(t) = e αt , then

can be recognized as H(s)|s = α .

Hence, if x(t) = e αt , then

Using linearity and superposition, it can be recognized that if x(t) = e −3t/2 + 2e −t, then

So that

Example 4.15 The output y(t) of an LTI system is


for an input

Determine the corresponding input for an output

Solution For the input


x(t) = (2 + 4e −3t )u(t), the Laplace transform is

The corresponding output has the Laplace transform

Hence,

Now, the output


y 1(t) = (1 – e −t – te −t )u(t) has the Laplace transform

Hence, the Laplace transform of the corresponding input is

The inverse Laplace transform of X 1 (s) gives


4.6 Computation of Inverse Laplace Transform Using Partial Fraction Expansion
The inverse Laplace transform of a rational function X(s) can be easily computed by using
the partial fraction expansion .

4.6.1 Partial Fraction Expansion of X(s) with Simple Poles


Consider a rational Laplace transform X(s) of the form

$$X(s) = \frac{N(s)}{D(s)} = \frac{N(s)}{(s-p_1)(s-p_2)\cdots(s-p_n)} \qquad (4.73)$$

where the order of N(s) is less than the order of the denominator polynomial D(s), and
the poles p₁, p₂, …, pₙ are distinct.
The rational Laplace transform X(s) can be expanded using partial fraction expansion
as

$$X(s) = \frac{k_1}{s-p_1} + \frac{k_2}{s-p_2} + \cdots + \frac{k_n}{s-p_n} \qquad (4.74)$$

The coefficients k₁, k₂, …, kₙ are called the residues of the partial fraction expansion.
The residues are computed as

$$k_i = (s-p_i)X(s)\big|_{s=p_i} \qquad (4.75)$$

With the known values of the coefficients k₁, k₂, …, kₙ, the inverse transform of each
term can be determined depending on the location of each pole relative to the ROC.
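For X(s) = N(s)/∏ⱼ(s − pⱼ) with distinct poles, the residue formula of Eq. (4.75) reduces to kᵢ = N(pᵢ)/∏_{j≠i}(pᵢ − pⱼ). A minimal Python sketch of this computation is shown below; `residues_simple` is a hypothetical helper name, given only to illustrate the formula.

```python
def residues_simple(num_coeffs, poles):
    """Residues k_i = N(p_i) / prod_{j != i} (p_i - p_j)
    for X(s) = N(s) / prod_j (s - p_j) with distinct poles."""
    def N(s):
        v = 0j
        for c in num_coeffs:       # coefficients in decreasing powers of s
            v = v * s + c          # Horner evaluation of N(s)
        return v
    ks = []
    for i, p in enumerate(poles):
        d = 1 + 0j
        for j, q in enumerate(poles):
            if j != i:
                d *= (p - q)
        ks.append(N(p) / d)
    return ks

# X(s) = (s + 1) / ((s + 2)(s + 3))  ->  k1/(s + 2) + k2/(s + 3)
ks = residues_simple([1, 1], [-2, -3])
print(ks)   # residues -1 at s = -2 and 2 at s = -3
```

So X(s) = −1/(s + 2) + 2/(s + 3), and each term is inverted according to the ROC.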

4.6.2 Partial Fraction Expansion of X(s) with Multiple Poles


Consider a rational Laplace transform X(s) with repeated poles of the form

$$X(s) = \frac{N(s)}{(s-p_1)^r(s-p_2)\cdots(s-p_n)} \qquad (4.76)$$

with a pole of multiplicity r at s = p₁.
X(s) with multiple poles can be expanded as

$$X(s) = \frac{k_{11}}{s-p_1} + \frac{k_{12}}{(s-p_1)^2} + \cdots + \frac{k_{1r}}{(s-p_1)^r} + \frac{k_2}{s-p_2} + \cdots + \frac{k_n}{s-p_n} \qquad (4.77)$$

The coefficients k₂, …, kₙ can be computed using the residue formula of Section 4.6.1.
The residues k₁₁, k₁₂, …, k₁r are computed as

$$k_{1r} = (s-p_1)^r X(s)\big|_{s=p_1} \qquad (4.78)$$

$$k_{1(r-1)} = \frac{d}{ds}\left[(s-p_1)^r X(s)\right]\Big|_{s=p_1} \qquad (4.79)$$

$$k_{1(r-2)} = \frac{1}{2!}\frac{d^2}{ds^2}\left[(s-p_1)^r X(s)\right]\Big|_{s=p_1} \qquad (4.80)$$

and so on.
Example 4.16 Find the time function x(t) for each of the following Laplace transform
X(s)

(a)

(b)

(c)

(d)
Solution
(a)
Using partial fraction expansion,

Solving for k 1 and k 2, k 1 = − 1; k 2=2.

Thus,

Using Table 4.2, we obtain


Table 4.2 Elementary functions and their Laplace transforms

Signal Laplace transform ROC

δ(t) 1 All s

u(t) ℛe(s) > 0

−u(−t) ℛe(s) > 0

δ(t − t 0) All s

e −αt u(t) ℛe(s) > − α

−e −at u(−t) ℛe(s) < − α

ℛe(s) > 0

ℛe(s) > − α

sin ωt u(t) ℜe(s) > 0

cos ωt u(t) ℜe(s) > 0

e −αt sin ωt u(t) ℜe(s) > − α

e −αt cos ωt u(t) ℜe(s) > − α


x(t) = −e −3t u(t) + 2e −4t u(t)

(b)
Using partial fraction expansion,

Solving for k 1 and k 2, k 1 = −1; k 2 = 1.

Using partial fraction expansion,

Solving for k 1, k 11, and k12

Thus,

Using Table 4.2, we obtain


(c)
Using partial fraction expansion,

Solving for k 11 and k12, we get

Hence,

Using Table 4.2, we obtain

(d)
Using partial fraction expansion,

Solving for k 1 and k2, k 1 = 2; k 2 = −1.

Thus,

Using Table 4.2, we obtain


Example 4.17 Find the inverse Laplace transform of the following

Solution

Using Table 4.2, we obtain

Example 4.18 Determine x(t) for the following conditions if X(s) is given by

(a) x(t) is right sided

(b) x(t) is left sided

(c) x(t) is both sided


Solution Using partial fraction, X(s) can be written as
(a)
If x(t) is right sided,

The ROC is to the right of the rightmost pole as shown in Figure 4.7(a).

Figure 4.7 (a) ROC of right-sided x(t) (b) ROC of left-sided x(t) (c) ROC of both-sided x(t)

(b) If x(t) is left sided,

The ROC is to the left of the leftmost pole as shown in Figure 4.7(b)

(c) If x(t) is two sided,

Hence, if x(t) is chosen as,


The ROC is as shown in Figure 4.7(c)
Example 4.19 Determine x(t) first for the following and verify the initial and final value
theorems .

(i)

(ii)
Solution

(i)

(ii)

4.7 Inverse Laplace Transform by Partial Fraction Expansion Using MATLAB

The MATLAB command residue can be used to find the inverse transform via partial
fraction expansion. To find the partial fraction decomposition of X(s), we must first
enter the numerator polynomial coefficients and the denominator polynomial coefficients
as vectors.
The following MATLAB statement determines the residues (k), poles (p), and direct
terms (const) of the partial fraction expansion of X(s):

[k, p, const] = residue(N, D)

where N is the vector of the numerator polynomial coefficients in decreasing order and
the vector D contains the denominator polynomial coefficients in decreasing order.
Example 4.20 Find the inverse Laplace transform of the following using MATLAB

Solution The following MATLAB statements are used to find the inverse Laplace transform of
the given X(s):

N = [1 1]; % coefficients of the numerator polynomial in decreasing order

D = [1 5 6]; % coefficients of the denominator polynomial in decreasing order

[k, p, const] = residue(N, D); % computes residues, poles, and direct terms
Execution of the above statements gives the following output
k =
2.0000
−1.0000
p =
−3.0000
−2.0000
Thus, the partial fraction decomposition of X(s) is

After getting the partial fraction expansion of X(s), the following MATLAB
statements are used to obtain x(t), that is, the inverse Laplace transform of X(s).
syms s t
X=2/(s+3)-1/(s+2);
ilaplace(X)
Execution of the above three MATLAB statements gives
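For readers without MATLAB, the same residue workflow can be sketched in plain Python. The fragment below assumes the transform of Example 4.20 is X(s) = (s + 1)/(s² + 5s + 6), which is consistent with the residues and poles printed above; it finds the poles with the quadratic formula, computes the residues, and evaluates the right-sided inverse transform x(t) = k₁e^{p₁t} + k₂e^{p₂t}.

```python
import math, cmath

# Pure-Python analogue of the MATLAB residue workflow for
# X(s) = (s + 1) / (s^2 + 5 s + 6) (assumed transform of Example 4.20).
a, b, c = 1.0, 5.0, 6.0                       # denominator coefficients
disc = cmath.sqrt(b * b - 4 * a * c)
p1, p2 = (-b + disc) / (2 * a), (-b - disc) / (2 * a)   # poles

N = lambda s: s + 1                           # numerator polynomial
k1 = N(p1) / (p1 - p2)                        # residue at p1
k2 = N(p2) / (p2 - p1)                        # residue at p2

# Right-sided inverse transform: x(t) = k1 e^{p1 t} + k2 e^{p2 t}, t >= 0
def x(t):
    return (k1 * cmath.exp(p1 * t) + k2 * cmath.exp(p2 * t)).real

print(p1, p2, k1, k2)   # poles -2 and -3, residues -1 and 2
```

This reproduces x(t) = 2e^{−3t} − e^{−2t} for t ≥ 0, matching the ilaplace result.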

4.8 Analysis of Continuous-Time LTI Systems Using the Laplace Transform

4.8.1 Transfer Function
It was stated in Chapter 2 that a continuous-time LTI system can be completely
characterized by its impulse response h(t). The output signal y(t) of an LTI system is
related to the input signal x(t) by the convolution

$$y(t) = x(t) * h(t) = \int_{-\infty}^{\infty} x(\tau)h(t-\tau)\,d\tau \qquad (4.81)$$

By using the convolution property, we get

$$Y(s) = H(s)X(s) \qquad (4.82)$$

indicating that the Laplace transform of the output signal y(t) is the product of the Laplace
transforms of the impulse response h(t) and the input signal x(t). The transform H(s) is
called the transfer function or the system function and is expressed as

$$H(s) = \frac{Y(s)}{X(s)} \qquad (4.83)$$

The roots of the denominator of the transfer function are called poles, and the roots of the
numerator are called zeros. The locations where the transfer function becomes infinite (the
poles) determine the boundaries of the region of convergence.
Example 4.21 Obtain the system function of the following Sallen-Key low-pass filter
circuit.
Solution For the circuit shown in Figure 4.8, the following relations can be established:
Figure 4.8 Sallen-Key low-pass filter circuit

Rewriting the current node equations, we get

Which can be rewritten as

Substituting the above Eq. for , the input-output relation can be written as

Thus,

Hence, the system function is given by

4.8.2 Stability and Causality


Stable LTI System
A continuous-time LTI system is stable if and only if the impulse response is absolutely
integrable, that is,

$$\int_{-\infty}^{\infty} |h(t)|\,dt < \infty \qquad (4.84)$$

The Laplace transform of the impulse response is known as the system function, which
can be written as

$$H(s) = \int_{-\infty}^{\infty} h(t)e^{-st}\,dt \qquad (4.85)$$

A continuous-time LTI system is stable if and only if the ROC of its transfer function
includes the imaginary axis (the line in the complex plane where the real part of s is zero).
Causal LTI System
A continuous-time LTI system is causal if its output y(t) depends only on the present and
past values of the input x(t), not on future values. Hence, h(t) = 0 for t < 0.
For a causal system, the system function can be written as

$$H(s) = \int_{0}^{\infty} h(t)e^{-st}\,dt \qquad (4.86)$$

With s = σ + jΩ, the requirement that h(t)e^{−σt} be absolutely integrable for the
convergence of H(s) leads to the following convergence condition:

$$\int_{0}^{\infty} |h(t)|e^{-\sigma t}\,dt < \infty \qquad (4.87)$$

Any sufficiently large value of σ satisfies the above condition. Thus, the ROC is the region
to the right of a vertical line that passes through the point ℜe(s) = σ, as shown in Figure 4.9.
Figure 4.9 ROC of a causal LTI system

In particular, if H(s) = N(s)/D(s) is rational, then the system is causal if and only if its
ROC is the half-plane to the right of the rightmost pole and the order of the
numerator N(s) is no greater than that of the denominator D(s), so that the ROC is a right-
sided half-plane without any poles (even at s → ∞).
Stable and Causal LTI System
As the ROC of a causal system lies to the right of the rightmost pole, and the ROC of a
stable system must include the jΩ axis, the rightmost pole of a causal and stable system
must lie in the left half of the s-plane. Hence, all the poles of such a system lie in the left
half of the s-plane (the real parts of all poles are negative, ℜe(s_p) < 0 for all poles s_p),
as shown in Figure 4.10.

Figure 4.10 ROC of a stable and causal LTI system
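This pole-location test is easy to automate: find the roots of the denominator polynomial of H(s) and check that every real part is negative. The Python sketch below uses a simple Durand–Kerner iteration as the root finder (an illustrative implementation; in practice a library routine such as MATLAB's roots would be used); `is_stable_causal` is a hypothetical helper name.

```python
def poly_roots(coeffs, iters=200):
    """Durand-Kerner root finder; coefficients in decreasing powers of s."""
    n = len(coeffs) - 1
    c = [ci / coeffs[0] for ci in coeffs]     # normalize to a monic polynomial
    def p(s):
        v = 0j
        for ci in c:
            v = v * s + ci                    # Horner evaluation
        return v
    roots = [(0.4 + 0.9j) ** k for k in range(n)]   # standard initial guesses
    for _ in range(iters):
        for i in range(n):
            d = 1 + 0j
            for j in range(n):
                if j != i:
                    d *= roots[i] - roots[j]
            roots[i] -= p(roots[i]) / d
    return roots

def is_stable_causal(den_coeffs):
    """A causal LTI system is stable iff all poles have negative real part."""
    return all(r.real < 0 for r in poly_roots(den_coeffs))

print(is_stable_causal([1, 3, 2]))    # poles -1, -2 -> stable
print(is_stable_causal([1, -1, -6]))  # poles 3, -2  -> unstable
```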

Stable and Causal Inverse LTI System


For a causal and stable system, the poles must lie in the left half of the s-plane. The poles
of the inverse system are the zeros of the original system. Therefore, for the inverse system
also to be causal and stable, the zeros of the original system must lie in the left half of the
s-plane.
Example 4.22 Consider a LTI system with the system function

Show the pole-zero locations of the system and ROCs for the following:

(a) the system is causal

(b) the system is stable , noncausal

(c) the system is neither causal nor stable


Solution

(a) The system function can be rewritten as

Since the ROC of a causal system is to the right of the rightmost pole, the pole-
zero plot and ROC of the system are shown in Figure 4.11.

Figure 4.11 Pole-zero plot and ROC of the causal system


(b) For a stable system, the ROC should include the imaginary axis. The pole-zero plot
and the ROC are shown in Figure 4.12.

Figure 4.12 Pole-zero plot and ROC of the noncausal, stable system

(c) Since the system is neither causal nor stable, the ROC neither includes the
imaginary axis nor lies to the right of the rightmost pole. Hence, the pole-zero plot
and the ROC are as shown in Figure 4.13.

Figure 4.13 Pole-zero plot and ROC of the system neither causal nor stable

4.8.3 LTI Systems Characterized by Linear Constant-Coefficient Differential Equations

The system function of a system characterized by a linear constant-coefficient differential
equation can be obtained by exploiting the properties of the Laplace transform. The Laplace
transform converts a differential equation into an algebraic equation in the s-domain,
making it easy to find the time-domain solution of the differential equation.
Consider a general linear constant-coefficient differential equation of the form

$$\sum_{k=0}^{N} a_k \frac{d^k y(t)}{dt^k} = \sum_{k=0}^{M} b_k \frac{d^k x(t)}{dt^k} \qquad (4.88)$$

Taking the Laplace transform of both sides of the equation and making repeated use of the
differentiation and linearity properties, we obtain

$$Y(s)\sum_{k=0}^{N} a_k s^k = X(s)\sum_{k=0}^{M} b_k s^k \qquad (4.89)$$

Thus, the system function is given by

$$H(s) = \frac{Y(s)}{X(s)} = \frac{\sum_{k=0}^{M} b_k s^k}{\sum_{k=0}^{N} a_k s^k} \qquad (4.90)$$

The system function of a system characterized by a differential equation is rational, with
the roots of the numerator polynomial as zeros and the roots of the denominator polynomial
as poles.
Eq. (4.90) does not specify any ROC, since a differential equation by itself does not
constrain the region of convergence. However, with additional knowledge of stability
and causality, the ROC can be specified.
Example 4.23 Consider a continuous LTI system described by the following differential
equation .

(a) Determine the system function

(b) Determine h(t) for each of the following:


(i) the system is stable , noncausal

(ii) the system is causal


(iii) the system is neither causal nor stable
Solution Taking the Laplace transform of both sides of the given differential equation,
we obtain

The above relation can be rewritten as

Now, the system function is given by

The pole-zero plot for the system function is shown in Figure 4.14.

Figure 4.14 Pole-zero plot of the system function

(b) The partial fraction expansion of H(s) yields


(i) For H(s) to be stable, noncausal, the ROC is −2 < ℜe(s) < 3.
Hence,

(ii) For the system to be causal , the ROC is ℜe(s) > 3.


Therefore, .

(iii) For the system to be neither causal nor stable , the ROC is ℜe(s) < − 2.
Hence,

4.8.4 Solution of Linear Differential Equations Using Laplace Transform

The stepwise procedure to solve a linear differential equation is as follows:
Step 1: Take the Laplace transform of both sides of the equation.
Step 2: Simplify the resulting algebraic equation to obtain Y(s) in the s-domain.
Step 3: Find the inverse transform of Y(s) to obtain y(t), the solution of the
differential equation.
Example 4.24 Find the solution of the following differential equation using Laplace
transform:

Solution
Step 1: Taking the Laplace transform of both sides of the given differential equation,

Step 2: Simplifying the expression for Y(s),


Step 3: Expanding Y(s) using partial fraction expansion,

Comparing the numerator polynomial with the numerator polynomial of Y(s) of step 2,
we get

Solving for k1 and k2,

Thus,

Inverse transform of Y(s) gives the solution of the differential equation as

Example 4.25 Find the solution of the following differential equation using Laplace
transform

Solution
Step 1: Taking the Laplace transform of both sides of the given differential equation,
Step 2: Simplifying the expression for Y(s),

Step 3: Expanding Y(s) using partial fraction expansion,

Comparing the numerator polynomial with the numerator polynomial of Y(s) of step 2,
we get

Solving for k11, k12, and k13,


k11 = −2, k12 = 0, and k13 = 2.
Thus,

Inverse transform of Y(s) gives the solution of the differential equation as

Example 4.26 Consider the following RLC circuit with R = 5 Ω, L = 1 H, and
C = 0.25 F.
(a) Determine the differential equation relating and .

(b) Obtain using Laplace transform for with ,


(Figure 4.15).

Figure 4.15 Series RLC circuit

Solution

(a) By applying Kirchhoff’s voltage law, we can arrive at the following differential
equation:

It is known that , hence the above differential equation can be rewritten as

For given values of R, L, and C, the differential equation becomes

(b) For given vin,


Taking the Laplace transform of both sides of the given differential equation,
Simplifying the expression for V c (s),

Expanding V c (s) using partial fraction expansion,

Solving for k11, k12, and k2, we obtain

, , .

Thus,

The inverse transform of V c (s) gives v c (t) as

Example 4.27 Consider the following parallel RLC circuit with R = 1 Ω and L = 1 H.

(a) Determine the differential equation relating I s and I L .

(b) Obtain zero-state response for I L (t) using Laplace transform for I s (t) = e −3t u(t).
(c) Obtain zero-input response for I L (t) using Laplace transform with I L (0) = 1 (Figure

4.16).

Figure 4.16 Parallel RLC circuit

Solution

(a) The I s and I L are related by the following differential equation

(b) For the given I s ,

Applying the unilateral Laplace transform to the above differential equation, we obtain

Since I L (0) = 0 for zero state,

By using partial fraction expansion, I L (s) can be expanded as


The inverse unilateral Laplace transform gives

(c) For the zero-input response, I s (t) = 0 and given that I L (0) = 1, we have to find the
solution of the following differential equation for zero-input response

The unilateral Laplace transform of the above differential equation is

Thus,

The inverse transform of I L (s) is zero-input response given by

4.8.5 Solution of Linear Differential Equations Using Laplace Transform and MATLAB

Example 4.28 Find the solution of the linear differential equation considered in Example
4.24 using MATLAB.
Solution The following MATLAB statements are used to find the solution of the
differential equation considered in Example 4.24
Execution of the above MATLAB statements gives the solution of the differential
equation as

Example 4.29 Find the solution of the linear differential equation considered in Example
4.25 using MATLAB.
Solution The following MATLAB statements are used to find the solution of the
differential equation considered in Example 4.25

Execution of the above MATLAB statements gives the solution of the differential
equation as

4.8.6 System Function for Interconnections of LTI Systems


Series Combination of Two LTI Systems
The impulse response of the series combination of two LTI systems is

$$h(t) = h_1(t) * h_2(t) \qquad (4.91)$$

and from the convolution property of the Laplace transform, the associated system function is
(Figure 4.17)

$$H(s) = H_1(s)H_2(s) \qquad (4.92)$$
Figure 4.17 Series combination of two LTI systems

Parallel Combination of Two LTI Systems


The impulse response of the parallel combination of two LTI systems is

$$h(t) = h_1(t) + h_2(t) \qquad (4.93)$$

and from the linearity property of the Laplace transform, the associated system function is
(Figure 4.18)

$$H(s) = H_1(s) + H_2(s) \qquad (4.94)$$

Figure 4.18 Parallel combination of two LTI systems

4.9 Block-Diagram Representation of System Functions in the S-Domain

Consider the system function

(4.95)

Let us define the following basic elements for addition, multiplication, differentiation,
and integration in the s-domain (Figure 4.19)
Figure 4.19 Block-diagram representation basic elements (a) adder, (b) multiplier, (c) differentiator, (d) integrator

The block-diagram representation of the above system function can be obtained as the
interconnection of these basic elements similar to the block-diagram representation of
differential equations in the time domain carried out in Section 2.​6 of Chapter 2. The
block-diagram representation of the system function is shown in Figure 4.20.

Figure 4.20 Block-diagram representation of the system function

Example 4.30 Is the system represented by the following block diagram stable? (Figure
4.21)
Figure 4.21 Block-diagram representation of a second-order system function

Solution Let the signal at the bottom node of the block diagram be denoted by E(s).
Then we have the following relations

Eliminating the auxiliary signal E(s), we get

From the system function H(s) found in the previous part, we see that the poles of the
system are at s = −1. Since the system is given to be causal, and the rightmost pole of the
system lies to the left of the imaginary axis, the system is stable.
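The stability test used here can be automated: compute the poles as the roots of the denominator polynomial and check that every pole lies strictly in the left half-plane. A short Python sketch (the denominator (s + 1)^2 below is an illustrative stand-in for a double pole at s = −1, not the exact polynomial of this example):

```python
import numpy as np

# A causal LTI system is stable iff all poles lie strictly in the left half-plane.
den = [1.0, 2.0, 1.0]            # (s + 1)^2: double pole at s = -1 (illustrative)
poles = np.roots(den)
stable = bool(np.all(poles.real < 0))
print(poles, stable)
```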

4.10 Solution of State-Space Equations Using Laplace


Transform
For convenience, the state-space equations from Chapter 2 are repeated here:

(4.96)

(4.97)

Taking the Laplace transform of both sides of Eq. (4.96), we obtain

(4.98)

Eq. (4.98) can be rewritten as


(4.99)

where I is the identity matrix.


From Eq. (4.99), we get

(4.100a)

(4.100b)

Taking the inverse Laplace transform of both sides of Eq. (4.100b) yields

(4.101)

(4.102)

By using the convolution theorem, we obtain

(4.103)

Thus,

(4.104)
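The solution just derived, the zero-input term e^{At}X(0) plus the zero-state convolution term, can be checked numerically with a matrix exponential and compared against a general-purpose simulator. The matrices below are illustrative (not those of any example in the text), and the sketch uses Python with scipy rather than MATLAB; for a unit-step input and invertible A, the convolution integral reduces to A^{-1}(e^{At} − I)B.

```python
import numpy as np
from scipy.linalg import expm
from scipy.signal import StateSpace, lsim

# Hypothetical state-space model (A, B, C, D chosen for illustration only)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
x0 = np.array([1.0, 0.0])
t = 1.0

# Zero-input response: e^{At} x(0)
x_zi = expm(A * t) @ x0
# Zero-state response for a unit-step input: A^{-1}(e^{At} - I)B (A invertible here)
x_zs = (np.linalg.inv(A) @ (expm(A * t) - np.eye(2)) @ B).ravel()
x_total = x_zi + x_zs

# Cross-check against scipy's LTI simulator
tt = np.linspace(0.0, t, 2001)
_, y, xs = lsim(StateSpace(A, B, C, D), U=np.ones_like(tt), T=tt, X0=x0)
print(np.allclose(x_total, xs[-1], atol=1e-6))
```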

Example 4.31 Consider the electrical circuit given in Example 2.​36. Find if
under an initially relaxed condition.
Solution
Since the circuit is initially relaxed, e^{At} X(0) = 0.
Therefore,

Since

integration by parts gives

This equation can be written as


which can be written as

Hence,

4.11 Problems

1. Find the Laplace transforms of the following:


(i) x(t) = e^{−2t} u(t) + e^{3t} u(−t)

(ii) x(t) = e^{t} u(t) + e^{−3t} u(−t)

2. Find the Laplace transform of .

3. Find the Laplace transform of .

4. Show that the ROC for the Laplace transform of a noncausal signal is the region to
the left of a vertical line in the s-plane.
5. Find the Laplace transform of a periodic signal x(t) with period T, that is, x(t + T) = x(t).

6. By first determining x(t), verify the final value theorem for the following, with
comments:

7. The transfer function of an LTI system is

Find the impulse response, the region of convergence, and the value of α for which the
system is causal and stable.

8. Consider a continuous-time LTI system described by the following differential equation


(a) Determine the system function

(b) Determine h(t) for each of the following:

(i) The system is causal

(ii) The system is stable

(iii) The system is neither causal nor stable

9. Find the solution of the following differential equation using Laplace transform

10. Consider the following RC circuit with R = 2 Ω and C = 0.5 F.



(a) Determine the differential equation relating and .

(b) Obtain using Laplace transform for with .

11. Consider the following RLC circuit.


Obtain y(t) using Laplace transform for x(t)=u(t).

12. Determine the differential equation characterizing the system represented by the
following block diagram
13. Consider the following state-space representation of a system. Determine the system
output y(t) with the initial state condition .

4.12 MATLAB Exercises

1. Write a MATLAB program for the magnitude response of a Sallen-Key low-pass filter.

2. Verify the solution of problem 8 using MATLAB .


3.
Verify the solution of problem 9 using MATLAB.

Further Reading
1. Doetsch, G.: Introduction to the theory and applications of the Laplace transformation with a table of Laplace
transformations. Springer, New York (1974)

2. LePage, W.R.: Complex variables and the Laplace transforms for engineers. McGraw-Hill, New York (1961)
3. Oppenheim, A.V., Willsky, A.S.: Signals and systems. Prentice-Hall, Englewood Cliffs (1983)
4. Hsu, H.: Signals and systems, 2nd edn. Schaum’s Outlines, McGraw-Hill, New York (2011)
5. Kailath, T.: Linear systems. Prentice-Hall, Englewood Cliffs (1980)
6. Zadeh, L., Desoer, C.: Linear system theory. McGraw-Hill, New York (1963)
© Springer International Publishing AG, part of Springer Nature 2018
K. Deergha Rao, Signals and Systems, https://doi.org/10.1007/978-3-319-68675-2_5

5. Analog Filters
K. Deergha Rao1
(1) Department of Electronics and Communication Engineering, Vasavi College of
Engineering (Affiliated to Osmania University), Hyderabad, Telangana, India


Filtering is an important aspect of signal processing. It allows desired frequency
components of a signal to pass through the system without distortion and suppresses the
undesired frequency components. One of the most important steps in the design of a filter
is to obtain a realizable transfer function H(s), satisfying the given frequency response
specifications. In this chapter, the design of analog low-pass filters is first described.
Second, frequency transformations for transforming analog low-pass filter into band-pass ,
band-stop , or high-pass analog filters are considered. The design of analog filters is
illustrated with numerical examples. Further, the design of analog filters using MATLAB
is demonstrated with a number of examples. Also, the design of special filters by pole and
zero placement is illustrated with examples.

5.1 Ideal Analog Filters


An ideal filter passes a signal for one set of frequencies and completely rejects the rest
of the frequencies.
Low-Pass Filter
The frequency response of an ideal analog low-pass filter H LP (Ω) that passes a signal for
Ω in the range −Ω c ≤ Ω ≤ Ω c can be expressed by

(5.1)

The frequency Ω c is called the cutoff frequency.


The impulse response of the ideal low-pass filter corresponds to the inverse Fourier
transform of the frequency response shown in Figure 5.1.
Figure 5.1 Frequency response of ideal low-pass filter

Hence,

(5.2)

The sinc function is defined as

(5.3)

Therefore, using the sinc function, we can express Eq. (5.2) as

(5.4)

Thus,

(5.5)

The impulse response for Ω c = 200 Hz is shown in Figure 5.2.


Figure 5.2 Impulse response for Ω c = 200 Hz

The filter bandwidth is proportional to Ω c , and the width of the main lobe of the impulse
response is proportional to 1/Ω c . The impulse response therefore becomes narrower as the bandwidth increases.
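The closed-form impulse response h(t) = sin(Ω c t)/(πt) = (Ω c /π)·sinc(Ω c t/π) can be cross-checked by numerically evaluating the inverse Fourier integral of the ideal brick-wall response. A Python sketch (Ω c = 200 rad/s is used purely as an illustrative number; note that numpy's sinc is the normalized sin(πx)/(πx)):

```python
import numpy as np
from scipy.integrate import trapezoid

Wc = 200.0                      # cutoff (rad/s), illustrative value
t = np.linspace(-0.1, 0.1, 401)

# Closed form: h(t) = sin(Wc t)/(pi t) = (Wc/pi) * sinc(Wc t / pi)
h_formula = (Wc / np.pi) * np.sinc(Wc * t / np.pi)

# Numerical inverse Fourier transform of H(jW) = 1 on |W| <= Wc, 0 elsewhere
W = np.linspace(-Wc, Wc, 20001)
h_numeric = trapezoid(np.exp(1j * np.outer(t, W)), W, axis=1).real / (2 * np.pi)

print(np.max(np.abs(h_formula - h_numeric)))
```

At t = 0 both give h(0) = Ω c /π, the peak visible in Figure 5.2.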

High-Pass Filter
The system shown in Figure 5.3 is generally used to obtain a high-pass filter from a
low-pass filter. The frequency response of an ideal analog high-pass filter H HP (Ω) that passes
a signal for |Ω| > Ω c can be expressed by

(5.6)

and is shown in Figure 5.4. The frequency Ω c is called the cutoff frequency.

Figure 5.3 System to obtain a high-pass filter from low-pass filter


Figure 5.4 Frequency response of ideal high-pass filter

From Figure 5.3, the frequency response of the ideal high-pass filter can also be expressed
as

(5.7)

Therefore, the impulse response of an ideal high-pass filter is given by the inverse
Fourier transform of Eq. (5.7).
Hence, the impulse response of the ideal high pass filter is given by

(5.8)

Band-Pass Filter
The frequency response of an ideal band-pass filter can be expressed by

(5.9)

which is shown in Figure 5.5.

Figure 5.5 Frequency response of ideal band-pass filter

From Figure 5.5, the frequency response of the ideal band-pass filter can be expressed as

(5.10)
Thus, the impulse response of an ideal band-pass filter is

(5.11)

Band-Stop Filter
The band-stop filter can be realized as a parallel combination of a low-pass filter with cutoff
frequency Ωc1 and a high-pass filter with cutoff frequency Ωc2. The frequency response of an ideal
band-stop filter can be expressed by

(5.12)

which is shown in Figure 5.7.


From Figure 5.6, the frequency response of the ideal band-stop filter can also be expressed
as

(5.13)

Figure 5.6 System with summation of high-pass filter and low-pass filter

Figure 5.7 Frequency response of ideal band-stop filter


Therefore, the impulse response of an ideal band-stop filter is given by the inverse
Fourier transform of Eq. (5.13).
Hence, the impulse response of the ideal band-stop filter is given by

(5.14)

5.2 Practical Analog Low-Pass Filter Design


A number of approximation techniques for the design of analog low-pass filters are well
established in the literature. The design of analog low-pass filter using Butterworth ,
Chebyshev I, Chebyshev II (inverse Chebyshev), and elliptic approximations is discussed
in this section.

5.2.1 Filter Specifications


The specifications for an analog low-pass filter with tolerances are depicted in Figure 5.8,
where
Ω p - Passband edge frequency
Ω s - Stopband edge frequency
δ p - Peak ripple value in the passband
δ s - Peak ripple value in the stopband
Peak passband ripple in dB = α p = −20 log10(1 – δ p ) dB
Minimum stopband ripple in dB = α s = −20 log10 (δ s ) dB

Peak ripple value in passband

Peak ripple value in stopband


Figure 5.8 Specifications of a low-pass analog filter

5.2.2 Butterworth Analog Low-Pass Filter


The magnitude-square response of an Nth-order analog low-pass Butterworth filter is
given by

(5.15)

Two parameters completely characterize a Butterworth low-pass filter: the 3 dB cutoff
frequency Ω c and the order N. These are determined from the specified band edge
frequencies Ω p and Ω s , the peak passband ripple α p , and the minimum stopband
attenuation α s . The first (2N − 1) derivatives of |H a (jΩ)|2 at Ω = 0 are equal to zero.
Thus, the Butterworth low-pass filter is said to have a maximally flat magnitude at Ω = 0.
The gain in dB is given by 10log10|H a (jΩ)|2. At Ω = Ω c , the gain
is 10log10(0.5) = − 3 dB; therefore, Ω c is called the 3 dB cutoff frequency. The loss in dB
in a Butterworth filter is given by

(5.16)

For Ω = Ω p , the passband attenuation is given by

(5.17)
For Ω = Ω s , the stopband attenuation is

(5.18)

Eqs.(5.17) and (5.18) can be rewritten as

(5.19)

(5.20)

From Eqs.(5.19) and (5.20), we obtain

(5.21)

Eq. (5.21) can be rewritten as

(5.22)

From Eq. (5.22), solving for N we get

(5.23)

Since the order N must be an integer, the value obtained is rounded up to the next higher
integer. This value of N is used in either Eq. (5.19) or Eq. (5.20) to determine the 3 dB
cutoff frequency Ω c . In practice, Ω c is determined from Eq. (5.20), which exactly satisfies the
stopband specification at Ω s , while the passband specification is exceeded with a safe
margin at Ω p . We know that |H(jΩ)|2 may be evaluated by letting s = jΩ in H(s)H(−s),
which may be expressed as
(5.24)

If Ω c = 1, the magnitude response |H N (jΩ)| is called the normalized magnitude


response. Now, we have

(5.25)

where

(5.26)

Since |s k | = 1, we can conclude that there are 2N poles placed on the unit circle in the
s-plane. The normalized transfer function can be formed as

(5.27)

where p l , for l = 1, 2, …, N, are the left-half s-plane poles. The complex poles occur in
conjugate pairs.
For example, in the case of N = 2, from Eq. (5.26), we have

The poles in the left half of the s-plane are

Hence,
and

In the case of N = 3,

The left half of s-plane poles are

Hence

and

The following MATLAB Program 5.1 can be used to obtain the Butterworth
normalized transfer function for various values of N.
Program 5.1 Analog Butterworth Low-Pass Filter Normalized Transfer Function
N=input('enter order of the filter');
[z,p,k]=buttap(N); % determines poles and zeros
disp('Poles are at');disp(p);
[num,den]=zp2tf(z,p,k);
% Print coefficients in powers of s
disp('Numerator polynomial');disp(num);
disp('Denominator polynomial');disp(den);
sos=zp2sos(z,p,k); % determines coefficients of second-order sections
The normalized Butterworth polynomials generated from the above program for typical
values of N are tabulated in Table 5.1.
Table 5.1 List of normalized Butterworth polynomials

N Denominator of H N (s)

1 S + 1

3 (s + 1)(s 2 + s + 1)

4 (s 2 + 0.76537s + 1)(s 2 + 1.8477s + 1)

5 (s + 1) (s 2 + 0.61803s + 1)(s 2 + 1.61803s + 1)

7 (s + 1) (s 2 + 1.80194s + 1)(s 2 + 1.247s + 1)(s 2 + 0.445s + 1)
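The factors in Table 5.1 can be reproduced programmatically. The sketch below uses scipy.signal's buttap, the Python counterpart of the MATLAB buttap used in Program 5.1, as a cross-check: for N = 3, multiplying out (s + 1)(s^2 + s + 1) gives s^3 + 2s^2 + 2s + 1, which matches the polynomial built from the computed poles.

```python
import numpy as np
from scipy.signal import buttap

# N = 2: denominator should be s^2 + sqrt(2) s + 1
z2, p2, k2 = buttap(2)
den2 = np.real(np.poly(p2))

# N = 3: (s + 1)(s^2 + s + 1) = s^3 + 2s^2 + 2s + 1
z3, p3, k3 = buttap(3)
den3 = np.real(np.poly(p3))

print(den2, den3)
```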

The magnitude response of the normalized Butterworth low-pass filter for some
typical values of N is shown in Figure 5.9. From this figure, it can be seen that the
response monotonically decreases both in the passband and the stopband as Ω increases.
As the filter order N increases, the magnitude responses both in the passband and the
stopband are improved with a corresponding decrease in the transition width. Since the
normalized transfer function corresponds to Ω c = 1, the transfer function of the low-pass
filter corresponding to the actual Ω c can be obtained by replacing s by (s/Ω c ) in the
normalized transfer function.
Figure 5.9 Magnitude response of typical Butterworth low-pass filter

Example 5.1 Design a Butterworth analog low-pass filter with 1 dB passband ripple,
passband edge frequency Ω p = 2000π rad/sec, stopband edge frequency Ω s = 10,000π 
rad/sec, and a minimum stopband ripple of 40 dB.
Solution Since α s = 40 dB, α p = 1 dB, Ω p = 2000π, and Ω s = 10,000π,

Hence from (5.23),

Since the order must be an integer, we choose N = 4.


The normalized low-pass Butterworth filter for N = 4 can be formulated as

From Eq. (5.20), we have


The transfer function for Ω c = 9935 can be obtained by replacing s by (s/Ω c ) =
(s/9935) in H N (s).

5.2.3 Chebyshev Analog Low-Pass Filter


Type 1 Chebyshev Low-Pass Filter
The magnitude-square response of an Nth-order analog low-pass Type 1 Chebyshev filter
is given by

(5.28)

where T N (Ω) is the Chebyshev polynomial of order N

(5.29)

The loss in dB in a Type 1 Chebyshev filter is given by

(5.30)

For Ω = Ω p , T N (Ω) = 1, and the passband attenuation is given by

(5.31)

From Eq. (5.31), ε can be obtained as


(5.32)

For Ω = Ω s the stopband attenuation is

(5.33)

Since (Ω s /Ω p ) > 1, the above equation can be written as

(5.34)

Substituting Eq. (5.32) for ε in the above equation and solving for N, we get

(5.35)

We choose N to be the lowest integer satisfying (5.35). In determining N using the
above equation, it is convenient to evaluate cosh−1(x) by applying the identity
cosh−1(x) = ln(x + √(x2 − 1)).

The poles of the normalized Type 1 Chebyshev filter transfer function lie on an ellipse
in the s-plane and are given by

(5.36)

(5.37)

Also, the normalized transfer function is given by

(5.38)
where

(5.39a)

and

(5.39b)

As an illustration, consider the case of N = 2 with a passband ripple of 1 dB. From Eq.
(5.32), we have

Hence

Therefore, from (5.39a), the poles of the normalized Chebyshev transfer function are
given by

Hence

Also, from (5.39b), we have

Thus for N = 2, with a passband ripple of 1 dB, the normalized Chebyshev transfer
function is
Similarly for N = 3, for a passband ripple of 1 dB, we have

Thus,

Also, from (5.39b),

Hence, the normalized transfer function of the Type 1 Chebyshev low-pass filter for N = 3
is given by

The following MATLAB Program 5.2 can be used to form the Type 1 Chebyshev
normalized transfer function for a given order and passband ripple.
Program 5.2 Analog Type 1 Chebyshev Low-Pass Filter Normalized Transfer Function
N=input('enter order of the filter');
Rp=input('enter passband ripple in dB');
[z,p,k]=cheb1ap(N,Rp); % determines poles and zeros
disp('Poles are at');disp(p);
[num,den]=zp2tf(z,p,k);
% Print coefficients in powers of s
disp('Numerator polynomial');disp(num);
disp('Denominator polynomial');disp(den);
The normalized Type 1 Chebyshev polynomials generated from the above program for
typical values of N and passband ripple of 1 dB are tabulated in Table 5.2.
Table 5.2 List of normalized Type 1 Chebyshev transfer functions for passband ripple = 1 dB

N Denominator of H N (s) H 0

1 S + 1.9652 1.9652

2 s 2 + 1.0977s + 1.1025 0.98261

3 s 3 + 0.98834s 2 + 1.2384s + 0.49131 0.49131

4 s 4 + 0.95281s 3 + 1.4539s 2 + 0.74262s + 0.27563 0.24565

5 s 5 + 0.93682s 4 + 1.6888s 3 + 0.9744s 2 + 0.58053s + 0.12283 0.12283

The typical magnitude responses of a Type 1 Chebyshev low-pass filter for N = 3, 5, and 8
with 1 dB passband ripple are shown in Figure 5.10. From this figure, it is seen that the Type
1 Chebyshev low-pass filter exhibits equiripple behavior in the passband with a monotonic
decrease in the stopband.

Figure 5.10 Magnitude response of typical Type 1 Chebyshev low-pass filter with 1 dB passband ripple

Example 5.2 Design a Type 1 Chebyshev analog low-pass filter for the specifications
given in Example 5.1.
Solution Since α s = 40 dB, α p = 1 dB, Ω p = 2000π, and Ω s = 10,000π,

Since the order of the filter must be an integer, we choose the next higher integer value 3
for N. The normalized Type 1 Chebyshev low-pass filter for N = 3 with a passband ripple
of 1 dB is given from Table 5.2 as

The transfer function for Ω p = 2000π is obtained by substituting s = (s/Ω p ) =


(s/2000π) in H N (s):

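Example 5.2 can be cross-checked in Python with scipy.signal, whose cheb1ord and cheb1ap mirror the MATLAB functions used in this section (a sketch, with the Table 5.2 coefficients compared at roughly three-decimal accuracy):

```python
import numpy as np
from scipy.signal import cheb1ord, cheb1ap

wp, ws = 2000 * np.pi, 10000 * np.pi
N, wn = cheb1ord(wp, ws, gpass=1, gstop=40, analog=True)
print(N)   # order 3, as chosen in the example

# Normalized N = 3, 1 dB-ripple prototype; compare with Table 5.2
z, p, k = cheb1ap(3, 1)
den = np.real(np.poly(p))
print(den, k)
```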
Type 2 Chebyshev Filter


The squared-magnitude response of Type 2 Chebyshev low-pass filter, which is also
known as the inverse Chebyshev filter, is given by

(5.40)

The order N can be determined using Eq. (5.35). The Type 2 Chebyshev filter has both
poles and zeros, and the zeros lie on the jΩ axis. The normalized Type 2 Chebyshev
low-pass filter, or the normalized inverse Chebyshev filter (normalized to Ω s = 1), may be
formed as

(5.41)

where

(5.42a)

(5.42b)
(5.42c)

(5.42d)

(5.42e)

(5.42f)

For example, if we consider N = 3 with a stopband ripple of 40 dB, then from (5.42e),

Hence,

Using (5.42c) and (5.42d), we have

Hence
Thus, from (5.42b), the poles are

Also, using (5.42a), the zeros are given by

Finally, from (5.42f),

Therefore, the normalized Type 2 Chebyshev low-pass filter for N = 3 with a stopband
ripple of 40 dB is given by

The following MATLAB Program 5.3 can be used to form the Type 2 Chebyshev
normalized transfer function for a given order and stopband ripple.
Program 5.3 Analog Type 2 Chebyshev Low-Pass Filter Normalized Transfer Function
N=input('enter order of the filter');
Rs=input('enter stopband attenuation in dB');
[z,p,k]=cheb2ap(N,Rs); % determines poles and zeros
disp('Poles are at');disp(p);
[num,den]=zp2tf(z,p,k);
% Print coefficients in powers of s
disp('Numerator polynomial');disp(num);
disp('Denominator polynomial');disp(den);
The normalized Type 2 Chebyshev transfer functions generated from the above program
for typical values of N with a stopband ripple of 40 dB are tabulated in Table 5.3.
Table 5.3 List of normalized Type 2 Chebyshev transfer functions for stopband ripple = 40 dB

Order N H N (s)

The typical magnitude responses of a Type 2 Chebyshev low-pass filter for N = 4 and 7
with 20 dB stopband ripple are shown in Figure 5.11. From this figure, it is seen that the Type 2
Chebyshev low-pass filter exhibits monotonicity in the passband and equiripple behavior in the
stopband.

Figure 5.11 Magnitude response of typical Type 2 Chebyshev low-pass filter with 20 dB stopband ripple

Example 5.3 Design a Type 2 Chebyshev low-pass filter for the specifications given in
Example 5.1.
Solution The order N is chosen as 3, as in Example 5.2, since the equation for order
finding is the same for both Type 1 and Type 2 Chebyshev filters. The normalized Type 2
Chebyshev low-pass filter for N = 3 with a stopband ripple of 40 dB has already been
found earlier and is given by
For Ω s = 10,000π, the corresponding transfer function can be obtained by substituting
s = (s/Ω s ) = (s/10000π) in the above expression for H N (s). Thus, the required filter
transfer function is

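Example 5.3 can likewise be verified in Python: cheb2ord reproduces the order N = 3 (the order formula is shared with Type 1), and cheb2ap confirms that the normalized Type 2 prototype has purely imaginary zeros, as stated above. This scipy-based sketch mirrors MATLAB's cheb2ord/cheb2ap.

```python
import numpy as np
from scipy.signal import cheb2ord, cheb2ap

wp, ws = 2000 * np.pi, 10000 * np.pi
N, wn = cheb2ord(wp, ws, gpass=1, gstop=40, analog=True)
print(N)   # order 3

z, p, k = cheb2ap(3, 40)       # normalized, 40 dB stopband ripple
print(np.allclose(z.real, 0))  # zeros lie on the jΩ axis
```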
5.2.4 Elliptic Analog Low-Pass Filter


The square-magnitude response of an elliptic low-pass filter is given by

(5.43)

where U N (x) is the Jacobian elliptic function of order N and ε is a parameter related to the
passband ripple. In an elliptic filter, a constant k, called the selectivity factor, representing
the sharpness of the transition region is defined as

(5.44)

A large value of k represents a wide transition band, while a small value indicates a
narrow transition band.
For a given set of Ω p , Ω s , α p , and α s , the filter order can be estimated using the
formula

(5.45)

where ρ can be computed using


(5.46)

(5.47)

(5.48)

The following MATLAB Program 5.4 can be used to form the normalized elliptic
transfer function for a given filter order, passband ripple, and stopband attenuation. The
normalized passband edge frequency is set to 1.
Program 5.4 Analog Elliptic Low-Pass Filter Normalized Transfer Function
N=input('enter order of the filter');
Rp=input('enter passband ripple in dB');
Rs=input('enter stopband attenuation in dB');
[z,p,k]=ellipap(N,Rp,Rs); % determines poles and zeros
disp('Poles are at');disp(p);
[num,den]=zp2tf(z,p,k);
% Print coefficients in powers of s
disp('Numerator polynomial');disp(num);
disp('Denominator polynomial');disp(den);
The normalized elliptic transfer functions generated from the above program for typical
values of N and stopband ripple of 40 dB are tabulated in Table 5.4.
Table 5.4 List of normalized elliptic transfer functions for passband ripple = 1 dB and stopband ripple = 40 dB

Order N H N (s)

4
5

The magnitude response of a typical elliptic low-pass filter is shown in Figure 5.12,
from which it can be seen that it exhibits equiripple in both the passband and the stopband.

Figure 5.12 Magnitude response of typical elliptic low-pass filter with 1 dB passband ripple and 30 dB stopband ripple

Example 5.4 Design an elliptic analog low-pass filter for the specifications given in the
Example 5.1.
Solution

and

Substituting these values in Eq. (5.46) and Eq. (5.47), we get

and hence
Choose N = 3. Then, for N = 3, a passband ripple of 1 dB, and a stopband ripple of
40 dB, the normalized elliptic transfer function is as given in Table 5.4. For Ω p = 2000π,
the corresponding transfer function can be obtained by substituting s = (s/Ω p ) = (s/2000π)
in the expression for H N (s). Thus, the required filter transfer function is

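The elliptic order estimate of Example 5.4 can be confirmed with scipy.signal.ellipord, the Python counterpart of MATLAB's ellipord (a cross-check sketch; the order formula (5.45) and scipy's implementation agree here):

```python
import numpy as np
from scipy.signal import ellipord, ellipap

wp, ws = 2000 * np.pi, 10000 * np.pi
N, wn = ellipord(wp, ws, gpass=1, gstop=40, analog=True)
print(N)   # order 3, matching the example

# Normalized N = 3 elliptic prototype with 1 dB / 40 dB ripples
z, p, k = ellipap(3, 1, 40)
```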
5.2.5 Bessel Filter


The Bessel filter is a class of all-pole filters that provides a linear phase response in the
passband and is characterized by the transfer function

(5.49)

where the coefficients a n are given by

(5.50)

The magnitude responses of a third-order Bessel filter and Butterworth filter are
shown in Figure 5.13 and the phase responses of the same filters with the same order are
shown in Figure 5.14. From these figures, it is seen that the magnitude response of the
Bessel filter is poorer than that of the Butterworth filter, whereas the phase response of the
Bessel filter is more linear in the passband than that of the Butterworth filter.
Figure 5.13 Magnitude response of a third-order Bessel filter and Butterworth filter

Figure 5.14 Phase response of a third-order Bessel filter and Butterworth filter
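The phase-linearity advantage visible in Figure 5.14 can be quantified by the group delay −dφ/dΩ: over the passband, the third-order Bessel prototype's delay is far flatter than the Butterworth's. A Python sketch using scipy's besselap/buttap prototypes (the frequency grid and the peak-to-peak flatness measure are illustrative choices, not from the text):

```python
import numpy as np
from scipy.signal import besselap, buttap, freqs, zpk2tf

def group_delay(zpk, w):
    # Group delay -d(phase)/dw, evaluated numerically on the grid w
    b, a = zpk2tf(*zpk)
    _, h = freqs(b, a, w)
    return -np.gradient(np.unwrap(np.angle(h)), w)

w = np.linspace(0.01, 1.0, 500)          # normalized passband (illustrative grid)
gd_bessel = group_delay(besselap(3), w)
gd_butter = group_delay(buttap(3), w)
# Peak-to-peak delay variation: much smaller for the Bessel prototype
print(float(np.ptp(gd_bessel)), float(np.ptp(gd_butter)))
```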

5.2.6 Comparison of Various Types of Analog Filters


The magnitude response and phase response of the normalized Butterworth , Chebyshev
Type 1, Chebyshev Type 2, and elliptic filters of the same order are compared with the
following specifications:

The following MATLAB program is used to generate the magnitude and phase responses
for these specifications.
Program 5.5 Magnitude and Phase Responses of Analog Filters of Order 8 with a
Passband Ripple of 1 dB and a Stopband Ripple of 35 dB
clear all;clc;
[z,p,k]=buttap(8);
[num1,den1]=zp2tf(z,p,k);[z,p,k]=cheb1ap(8,1);
[num2,den2]=zp2tf(z,p,k);[z,p,k]=cheb2ap(8,35);
[num3,den3]=zp2tf(z,p,k);[z,p,k]=ellipap(8,1,35);
[num4,den4]=zp2tf(z,p,k);
omega=[0:0.01:5];
h1=freqs(num1,den1,omega);h2=freqs(num2,den2,omega);
h3=freqs(num3,den3,omega);h4=freqs(num4,den4,omega);
ph1=unwrap(angle(h1));ph2=unwrap(angle(h2));
ph3=unwrap(angle(h3));ph4=unwrap(angle(h4));
figure(1),plot(omega,20*log10(abs(h1)),'-');hold on
plot(omega,20*log10(abs(h2)),'--');hold on
plot(omega,20*log10(abs(h3)),':');hold on
plot(omega,20*log10(abs(h4)),'-.');
xlabel('Normalized frequency');ylabel('Gain,dB');axis([0 5 -80 5]);
legend('Butterworth','Chebyshev Type 1','Chebyshev Type 2','Elliptic');hold off
figure(2),plot(omega,ph1,'-');hold on
plot(omega,ph2,'--');hold on
plot(omega,ph3,':');hold on
plot(omega,ph4,'-.')
xlabel('Normalized frequency');ylabel('Phase,radians');axis([0 5 -8 0]);
legend('Butterworth','Chebyshev Type 1','Chebyshev Type 2','Elliptic');
The magnitude and phase responses for the above specifications are shown in Figure 5.15.
The magnitude response of the Butterworth filter decreases monotonically in both the passband
and the stopband, with a wider transition band. The magnitude response of the Chebyshev Type
1 filter exhibits ripples in the passband, whereas that of the Chebyshev Type 2 filter is
approximately the same as that of the Butterworth filter. The transition bands of the
Type 1 and Type 2 Chebyshev filters are the same, but narrower than that of the Butterworth
filter. The elliptic filter exhibits an equiripple magnitude response in both the passband
and the stopband, with a transition width smaller than that of the Chebyshev Type 1 and
Type 2 filters. However, the phase response of the elliptic filter is more nonlinear in the
passband than those of the Butterworth and Chebyshev filters. If linear phase in the passband
is a stringent requirement, the Bessel filter is preferred, at the cost of a poorer
magnitude response.

Figure 5.15 A comparison of various types of analog low-pass filters : (a) magnitude response and (b) phase response
Another way of comparing the various filters is in terms of the order of the filter
required to satisfy the same specifications. Consider a low-pass filter that meets the
passband edge frequency of 450 Hz, stopband edge frequency of 550 Hz, passband ripple
of 1 dB, and stopband ripple of 35 dB. The orders of the Butterworth, Chebyshev Type 1,
Chebyshev Type 2, and elliptic filters are computed for the above specifications and listed
in Table 5.5. From this table, we can see that elliptic filter can meet the specifications with
the lowest filter order.
Table 5.5 Comparison of orders of various types of filters

Filter Order

Butterworth 24

Chebyshev Type 1 9

Chebyshev Type 2 9

Elliptic 5
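Table 5.5 can be regenerated with scipy.signal's order-estimation routines, the Python counterparts of MATLAB's buttord, cheb1ord, cheb2ord, and ellipord, shown here as a cross-check sketch for the 450 Hz / 550 Hz, 1 dB / 35 dB specifications:

```python
import numpy as np
from scipy.signal import buttord, cheb1ord, cheb2ord, ellipord

wp, ws = 2 * np.pi * 450, 2 * np.pi * 550   # band edges in rad/s
rp, rs = 1, 35                               # ripples in dB

orders = {
    'Butterworth': buttord(wp, ws, rp, rs, analog=True)[0],
    'Chebyshev Type 1': cheb1ord(wp, ws, rp, rs, analog=True)[0],
    'Chebyshev Type 2': cheb2ord(wp, ws, rp, rs, analog=True)[0],
    'Elliptic': ellipord(wp, ws, rp, rs, analog=True)[0],
}
print(orders)  # matches Table 5.5: 24, 9, 9, 5
```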

5.2.7 Design of Analog High-Pass, Band-Pass, and Band-Stop


Filters
The analog high-pass, band-pass, and band-stop filters can be designed using analog
frequency transformations . In this design process, first, the analog prototype low-pass
filter specifications are derived from the desired specifications of the analog filter using
suitable analog-to-analog transformation. Next, by using the specifications so obtained, a
prototype low-pass filter is designed. Finally, the transfer function of the desired analog
filter is determined from the transfer function of the prototype analog low-pass transfer
function using the appropriate analog-to-analog frequency transformation. The low-pass to
low-pass , low-pass to high-pass , low-pass to band-pass , and low-pass to band-stop
analog transformations are considered next.
Low Pass to Low Pass
Let Ω p = 1 and be the passband edge frequencies of the normalized prototype low-
pass filter and the desired low-pass filter , as shown in Figure 5.16. The transformation
from the prototype low pass to the required low pass must convert and
. The transformation such as achieves the above

transformation for any positive value of k. If k is chosen to be , then gets


transformed to Ω p = 1, and . Since Ω p = 1 is the passband edge
frequency for the normalized Type I Chebyshev and elliptic low-pass filters , we have the
design equations for these filters as
(5.51a)

Figure 5.16 Low-pass to low-pass frequency transformation . (a) Prototype Low-pass filter frequency response. (b)
Low-pass filter frequency response

Also, the transfer function for these filters is related to the corresponding
normalized low-pass transfer function H N (s) by

(5.51b)

However, in the case of a Butterworth filter, since Ω = 1 corresponds to the cutoff


frequency of the filter, the transfer function for the Butterworth filter is related to
the normalized low-pass Butterworth transfer function H N (s) by

(5.51c)
where is the cutoff frequency of the desired Butterworth filter and is given by Eq.
(5.19). For similar reasons, the transfer function for the Type 2 Chebyshev filter is
related to the normalized transfer function H N (s) by

(5.51d)

Low Pass to High Pass (Figure 5.17)

Figure 5.17 Low-pass to high-pass frequency transformation . (a) Prototype low-pass filter frequency response. (b)
High-pass filter frequency response

Let the passband edge frequencies of the prototype low-pass and the desired high-pass
filters be Ω p = 1 and , as shown in Figure 5.17. The transformation from prototype low

pass to the desired high pass must transform and . The


transformation such as achieves the above transformation for any
positive value of k. By transforming , the constant k can be determined as

.Thus, design equations are

(5.52a)

and the desired transfer function is related to the low-pass transfer function H N (s)
by

(5.52b)

The above equations (5.52a) and (5.52b) hold for all filters except the Butterworth and
Type 2 Chebyshev filters. For the Butterworth filter,

(5.53a)

(5.53b)

For Type 2 Chebyshev filter, the design equations are

(5.53c)

and

(5.53d)

Example 5.5 Design a Butterworth analog high-pass filter for the following
specifications:
Passband edge frequency: 30.777 Hz
Stopband edge frequency: 10 Hz
 Passband ripple: 1 dB
 Stopband ripple: 20 dB
Solution For the prototype analog low-pass filter, we have
Substituting these values in Eq. (5.23), the order of the filter is given by

Hence, we choose N = 3. From Table 5.1, the third-order normalized Butterworth low-
pass filter transfer function is given by

Substituting the values of Ω s and N in Eq. (5.20), we obtain

Solving for Ω c , we get Ω c = 1.4309.


The analog transfer function of the low-pass filter is obtained from the above transfer
function by substituting ; hence,

From the above transfer function, the analog transfer function of the high-pass filter
can be obtained by substituting

Example 5.6 Design a Butterworth analog high-pass filter for the specifications of
Example 5.5 using MATLAB
Solution The following MATLAB code fragments can be used to design H HP (s):
[N,Wn]=buttord(1,3.0777,1,20,'s');
[B,A]=butter(N,Wn,'s');
[num,den]=lp2hp(B,A,3.0777);
The transfer function H LP (s) of the analog low-pass filter can be obtained by displaying
numerator and denominator coefficient vectors B and A and is given by

The transfer function H HP (s) of the analog high-pass filter can be obtained by
displaying numerator and denominator coefficient vectors num and den and is given by

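The same high-pass design can be sketched in Python with scipy.signal, whose buttord, butter, and lp2hp mirror the MATLAB calls above (the normalized prototype specifications Ω p = 1 and Ω s = 3.0777 are taken from Example 5.5; this is a cross-check, not the book's code):

```python
from scipy.signal import buttord, butter, lp2hp

# Prototype low-pass specs from Example 5.5 (normalized)
N, Wn = buttord(1, 3.0777, gpass=1, gstop=20, analog=True)
print(N)   # order 3, as in the example

B, A = butter(N, Wn, analog=True)   # low-pass H_LP(s)
num, den = lp2hp(B, A, wo=3.0777)   # high-pass H_HP(s)
```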
Low Pass to Band Pass


The prototype low-pass and the desired band-pass filters are shown in Figure 5.18. In this
figure, is the lower passband edge frequency, the upper passband edge frequency,

the lower stopband edge frequency, and the upper stopband edge frequency of the

desired band-pass filter. Let us denote by B p the bandwidth of the passband and by
the geometric mean between the passband edge frequencies of the band-pass filter , i.e.,

(5.54a)

(5.54b)
Figure 5.18 Low-pass to band-pass frequency transformation . (a) Prototype low-pass filter frequency response. (b)
Band-pass filter frequency response

Now, consider the transformation

(5.55)

As a consequence of this transformation, it is seen that


transform to the frequencies Ω = −∞, −1, 0, +1, and ∞, respectively, for the normalized
low-pass filter . Also, the transformation (5.55) transforms the frequencies
, respectively, where
(5.56)

and

(5.57)

In order to satisfy the stopband requirements and to have symmetry of the stopband
edges in the low-pass filter, we choose Ω s to be the min{|A 1|, |A 2|}. Thus, the spectral
transformation (5.55) leads to the following design equations for the normalized low-pass
filter (except in the case of the Type 2 Chebyshev filter):

(5.58a)

where A 1 and A 2 are given by (5.56) and (5.57), respectively, and the desired
band-pass transfer function can be obtained from the normalized low-pass transfer
function H N (s) using (5.55). In the case of the Type 2 Chebyshev filter, the equation
corresponding to (5.58a) is

(5.58b)

Example 5.7 Design a Butterworth analog band-pass filter for the following
specifications:
Lower passband edge frequency: 41.4 Hz
Upper passband edge frequency: 50.95 Hz
Lower stopband edge frequency: 7.87 Hz
Upper stopband edge frequency: 100 Hz
Passband ripple: 2 dB
Stopband ripple: 10 dB
Solution We have
For the prototype analog low-pass filter, Ω p = 1, Ω s = min {|A 1|, |A 2|} = 8.26, α p
= 2 dB, and α s = 10 dB.
Substituting these values in Eq. (5.23), the order of the filter is given by

Let us choose N = 1.
The transfer function of the first-order normalized Butterworth low-pass filter is given
by

Substituting the values of Ω s and N in Eq. (5.20), we obtain

(8.26/Ω c )² = 10^(0.1 × 10) − 1 = 9

Solving for Ω c , we get Ω c = 2.7533.


The analog transfer function of the low-pass filter can be obtained from the above transfer function by substituting s → s/Ω c = s/2.7533.
To arrive at the analog transfer function of the band-pass filter, the variable s in the above normalized transfer function is to be replaced by (s² + Ω 0²)/(B p s).
Example 5.8 Design a band-pass Butterworth filter for the specifications of Example 5.7 using MATLAB.
Solution The following MATLAB code fragments can be used to design H BP (s):
Bandwidth = bw = 50.95 − 41.4 = 9.55.
[N,Wn]=buttord(1,8.26,2,10,'s');
[B,A]=butter(N,Wn,'s');
[num,den]=lp2bp(B,A,45.9271,9.55); % [num,den]=lp2bp(B,A,Ω o ,bw);
The transfer function H LP (s) of the analog low-pass filter can be obtained by displaying
numerator and denominator coefficient vectors B and A. It is given by

The transfer function H BP (s) of the analog band-pass filter can be obtained by
displaying numerator and denominator coefficient vectors num and den. It is given by

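For readers without MATLAB, the order computation performed by buttord in Examples 5.8 and 5.10 can be cross-checked with the standard Butterworth order formula of Eq. (5.23). The sketch below is plain Python; the helper name is an illustrative choice, not the book's code.

```python
import math

def butterworth_order(omega_p, omega_s, alpha_p, alpha_s):
    """Minimum Butterworth order giving at most alpha_p dB attenuation at
    omega_p and at least alpha_s dB attenuation at omega_s."""
    num = math.log10((10 ** (0.1 * alpha_s) - 1) / (10 ** (0.1 * alpha_p) - 1))
    den = 2 * math.log10(omega_s / omega_p)
    return math.ceil(num / den)

# Example 5.7/5.8: Omega_p = 1, Omega_s = 8.26, 2 dB passband, 10 dB stopband
print(butterworth_order(1, 8.26, 2, 10))      # 1

# Example 5.9/5.10: Omega_s = 6.5397, 3 dB passband, 15 dB stopband
print(butterworth_order(1, 6.5397, 3, 15))    # 1
```

Both calls reproduce the first-order designs obtained with buttord.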
Low Pass to Band Stop


The prototype low-pass and the desired band-stop filters are shown in Figure 5.19. In this figure, Ω p1 is the lower passband edge frequency, Ω p2 the upper passband edge frequency, Ω s1 the lower stopband edge frequency, and Ω s2 the upper stopband edge frequency of the desired band-stop filter. Let us now consider the transformation

s → ks/(s² + Ω 0²)  (5.59)

where Ω 0 is the geometric mean of the stopband edge frequencies of the band-stop filter, i.e.,

Ω 0 = √(Ω s1 Ω s2)  (5.60)
Figure 5.19 Low-pass to band-stop frequency transformation. (a) Prototype low-pass filter frequency response. (b)
Band-stop filter frequency response

As a consequence of this transformation, it is seen that the frequency Ω 0 is transformed to the frequency Ω = ∞ for the normalized low-pass filter. Now, we transform the lower stopband edge frequency Ω s1 to the stopband edge frequency Ω s of the normalized low-pass filter; hence,

(5.61a)

where B s = Ω s2 − Ω s1 is the bandwidth of the stopband. Also, the upper stopband edge frequency Ω s2 is transformed to
(5.61b)

Hence, the constant k is given by

(5.61c)

As a consequence, the passband edge frequencies are transformed to

(5.62a)

and

(5.62b)

In order to satisfy the passband requirement as well as to satisfy the symmetry


requirement of the passband edge of the normalized low-pass filter, we have to choose the
higher of |A 1| and |A 2| as Ω p . Since for the normalized filter (except for the case of the Type 2 Chebyshev filter), Ω p = 1, we have to choose Ω s to be the lower of {|A 1|, |A 2|}. Hence,
the design equations for the normalized low-pass filter (except for the Type 2 Chebyshev)
(Figure 5.19) are

(5.63a)

where

(5.63b)

and the transfer function of the required band-stop filter is


(5.63c)

For the Type 2 Chebyshev filter , Eq. (5.63a) would be replaced by

(5.63d)

Example 5.9 Design an analog band-stop Butterworth filter with the following
specifications :
Lower passband edge frequency: 22.35 Hz
Upper passband edge frequency: 447.37 Hz
Lower stopband edge frequency: 72.65 Hz
Upper stopband edge frequency: 137.64 Hz
Passband ripple: 3 dB
Stopband ripple: 15 dB
Solution From Eq. (5.63b), we have

Now using (5.63a), we get the specifications for the normalized analog low-pass filter to be Ω p = 1 and Ω s = 6.5397.
Substituting these values in Eq. (5.23), the order of the filter is given by

We choose N = 1. The transfer function of the first-order normalized Butterworth low-pass filter is H N (s) = 1/(s + 1).
Substituting the values of Ω s and N in Eq. (5.20), we obtain

(6.5397/Ω c )² = 10^(0.1 × 15) − 1 = 30.6228
Solving for Ω c , we get Ω c = 1.1818. The analog transfer function of the low-pass filter is obtained from H N (s) by substituting s → s/Ω c = s/1.1818.
To arrive at the analog transfer function of the band-stop filter , we use, in the above
expression, the low-pass to band-stop transformation given by (5.63c), namely,

to obtain

Example 5.10 Design a band-stop Butterworth filter for the specifications of Example 5.9 using MATLAB.
Solution The following MATLAB code fragments can be used to design H BS (s):
Bandwidth = bw = 447.37 − 22.35 = 425.02.
[N,Wn]=buttord(1,6.5397,3,15,'s');
[B,A]=butter(N,Wn,'s');
[num,den]=lp2bs(B,A,100,425.02); % [num,den]=lp2bs(B,A,Ω o ,bw);
The transfer function H LP (s) of the analog low-pass filter can be obtained by displaying
numerator and denominator coefficient vectors B and A and is given by
The transfer function H BS (s) of the analog band-stop filter can be obtained by
displaying numerator and denominator coefficient vectors num and den and is given by

5.3 Effect of Poles and Zeros on Frequency Response


The frequency response of a system can be obtained by evaluating H(s) on the imaginary axis, that is, at s = jΩ for all values of Ω.

5.3.1 Effect of Two Complex System Poles on the Frequency Response
Consider the following system function with complex poles

(5.64)

with placement of poles as shown in Figure 5.20(a). The magnitude response of the
system with pole locations shown in Figure 5.20(a) is given by

(5.65)

and is shown in Figure 5.20(b); its phase response is shown in Figure 5.20(c).
Figure 5.20 (a) Pole locations of H(s). (b) Magnitude response. (c) Phase response

5.3.2 Effect of Two Complex System Zeros on the Frequency Response
Consider the following system function with complex zeros

(5.66)

with placement of zeros as shown in Figure 5.21(a). The magnitude response of the system with zero locations shown in Figure 5.21(a) is given by

(5.67)

and is shown in Figure 5.21(b); its phase response is shown in Figure 5.21(c).
Figure 5.21 (a) Zero locations of H(s). (b) Magnitude response. (c) Phase response

5.4 Design of Specialized Analog Filters by Pole-Zero Placement
There are certain specialized filters often used in signal processing applications in addition to the filters designed in the previous sections. These specialized filters can be designed directly by the placement of poles and zeros.

5.4.1 Notch Filter


The notch filter removes a single frequency f 0 , called the notch frequency. The notch filter can be realized with two zeros placed at s = ±jΩ 0 . Such a filter, however, does not have unity gain at zero frequency, and the notch will not be sharp. By placing two poles close to the two zeros on the semicircle as shown in Figure 5.22(a), the notch can be made sharp, with unity gain at zero frequency, as shown in Figure 5.22(b).
Figure 5.22 (a) Placing two poles close to the two zeros on the semicircle. (b) Magnitude response of (a)

Example 5.11 Design a second-order notch filter to suppress 50 Hz hum in an audio signal.
Solution Choose Ω 0 = 100π. Place zeros at s = ±jΩ 0 and poles at −Ω 0 cos θ ± jΩ 0 sin θ. Then, the transfer function of the second-order notch filter is given by

H(s) = (s² + Ω 0²)/(s² + 2Ω 0 cos θ s + Ω 0²)
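The design can be sanity-checked by evaluating H(s) on the jΩ axis. The Python sketch below uses the transfer function form derived above; the pole angle θ = 1.55 rad is an arbitrary illustrative choice that places the poles close to the zeros for a sharp notch.

```python
import math

OMEGA0 = 100 * math.pi   # 50 Hz notch frequency in rad/s
THETA = 1.55             # illustrative pole angle: poles near the zeros at +/- j*OMEGA0

def notch(s):
    """H(s) = (s^2 + W0^2) / (s^2 + 2*W0*cos(theta)*s + W0^2)."""
    num = s * s + OMEGA0 ** 2
    den = s * s + 2 * OMEGA0 * math.cos(THETA) * s + OMEGA0 ** 2
    return num / den

print(abs(notch(0)))             # unity gain at dc
print(abs(notch(1j * OMEGA0)))   # ~0: the 50 Hz component is removed
print(abs(notch(2j * OMEGA0)))   # ~1: frequencies away from the notch pass
```

Moving the poles away from the zeros (smaller θ) widens the notch, trading selectivity for transition sharpness.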
5.5 Problems

1. Test the impulse response of an ideal low-pass filter for the following properties:
(i) Real valued

(ii) Even

(iii) Causal
2. Consider the first-order RC circuit shown in the figure below

(i) Determine H(Ω), the transfer function from v s to v c . Sketch the magnitude and phase of H(Ω).

(ii) What is the cutoff frequency for H(Ω)?

(iii) Consider the following system:

(a) Draw the corresponding RC circuit and determine H(Ω), the transfer
function from v to vs. Sketch the magnitude and phase of H(Ω).

(b) What is the corresponding cutoff frequency?

3. Design a continuous time low-pass filter with the following transfer function

with the following specifications


Find the range of values of α that meets the specifications.

4. Consider the following system

If H(Ω) is an ideal band-pass filter, determine for what values of α it will act as an ideal band-stop filter.

5. Design an elliptic analog high-pass filter for the specifications of Example 5.5

Further Reading
1. Raut, R., Swamy, M.N.S.: Modern Analog Filter Analysis and Design: A Practical Approach. Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim, Germany (2010)
2. Antoniou, A.: Digital Filters: Analysis and Design. McGraw Hill Book Co., New York (1979)
3. Parks, T.W., Burrus, C.S.: Digital Filter Design. Wiley, New York (1987)
4. Temes, G.C., Mitra, S.K. (eds.): Modern Filter Theory and Design. Wiley, New York (1973)
5. Vlach, J.: Computerized Approximation and Synthesis of Linear Networks. Wiley, New York (1969)
6. Mitra, S.K.: Digital Signal Processing. McGraw-Hill, New York (2006)
7. Chen, C.T.: Digital Signal Processing: Spectral Computation and Filter Design. Oxford University Press, New York/Oxford (2001)
© Springer International Publishing AG, part of Springer Nature 2018
K. Deergha Rao, Signals and Systems, https://doi.org/10.1007/978-3-319-68675-2_6

6. Discrete-Time Signals and Systems


K. Deergha Rao1
(1) Department of Electronics and Communication Engineering, Vasavi College of
Engineering (Affiliated to Osmania University), Hyderabad, Telangana, India


Discrete-time signals are obtained by the sampling of continuous-time signals . Digital
signal processing deals basically with discrete-time signals, which are processed by
discrete-time systems. The characterization of discrete-time signals as well as discrete-time systems in the time domain is required to understand the theory of digital signal processing. In this chapter, time-domain sampling and the fundamental concepts of
discrete-time signals as well as discrete-time systems are considered. First, the sampling
process of analog signals is described. Next, basic discrete-time sequences and the classification of discrete-time signals are presented. The input-output characterization of linear time-invariant (LTI) systems by means of the convolution sum is described. Further, sampling of
discrete-time signals is introduced. Finally, the state-space representation of discrete-time
LTI systems is described.

6.1 The Sampling Process of Analog Signals


6.1.1 Impulse-Train Sampling
The acquisition of an analog signal at discrete-time intervals is called sampling. Mathematically, the sampling process can be treated as the multiplication of a continuous-time signal x(t) by a periodic impulse train p(t) of unit amplitude with period T. For example, consider an analog signal x a (t) as shown in Figure 6.1(a) and a periodic impulse train p(t) of unit amplitude with period T as in Figure 6.1(b). The train p(t) is referred to as the sampling function, the period T as the sampling period, and the fundamental frequency ω T = (2π/T) as the sampling frequency in radians. The sampled version x p (t) is shown in Figure 6.1(c).
Figure 6.1 (a) Continuous-time signal, (b) pulse train, (c) sampled version of (a)

In the time domain, we have

x p (t) = x a (t)p(t)  (6.1)

where

p(t) = ∑_{n=−∞}^{∞} δ(t − nT)  (6.1a)

x p (t) is the impulse train with the amplitudes of the impulses equal to the samples of x
a (t) at intervals T, 2T, 3T, … .

Therefore, the sampled version x p (t) mathematically can be represented as

x p (t) = ∑_{n=−∞}^{∞} x a (nT) δ(t − nT)  (6.2)

6.1.2 Sampling with a Zero-Order Hold


In Section 6.1.1, the sampling process establishes the fact that a band-limited signal can be uniquely represented by its samples. In a practical setting, it is difficult to generate and transmit narrow, large-amplitude pulses that approximate impulses. Hence, it is more
convenient to implement the sampling process using a zero-order hold. It samples analog
signal at a given sampling instant and holds the sample value until the succeeding
sampling instant. A block diagram representation of the analog-to-digital conversion
(ADC) process is shown in Figure 6.2. The amplitude of each signal sample is quantized into one of 2^b levels, where b is the number of bits used to represent a sample in the ADC. The discrete amplitude levels are encoded into distinct binary words of length b bits.

Figure 6.2 A block diagram representation of an analog-to-digital conversion process.

A sequence of samples x(n) is obtained from an analog signal x a (t) according to the relation

x(n) = x a (nT),  −∞ < n < ∞  (6.3)

In Eq. (6.3), T is the sampling period, and its reciprocal, F T = 1/T, is called the sampling frequency, in samples per second. Half the sampling frequency, F T /2, is referred to as the Nyquist frequency.
Sampling Theorem
The sampling theorem states that an analog signal must be sampled at a rate at least twice the highest frequency contained in the signal. This means that

F T ≥ 2f max  (6.4)

where f max is the maximum frequency component of the analog signal. The frequency 2f max is called the Nyquist rate.
For example, to sample a speech signal containing frequencies up to 3 kHz, the required minimum sampling rate is 6 kHz, that is, 6000 samples per second. To sample an audio signal having frequencies up to 22 kHz, the required minimum sampling rate is 44 kHz, that is, 44,000 samples per second.
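The consequence of violating the sampling theorem can be shown numerically (a Python sketch with illustrative values): a 3 kHz sinusoid sampled at only 4 kHz, below its 6 kHz Nyquist rate, produces exactly the same samples as a lower-frequency alias.

```python
import math

FS = 4000.0        # sampling rate in Hz, deliberately below the Nyquist rate
F = 3000.0         # analog signal frequency in Hz
F_ALIAS = F - FS   # -1000 Hz: the frequency the samples actually represent

for n in range(16):
    t = n / FS
    s_orig = math.sin(2 * math.pi * F * t)
    s_alias = math.sin(2 * math.pi * F_ALIAS * t)
    assert math.isclose(s_orig, s_alias, abs_tol=1e-9)

print("3 kHz sampled at 4 kHz is indistinguishable from a -1 kHz alias")
```

Raising the sampling rate to 6 kHz or above removes the ambiguity, in agreement with Eq. (6.4).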
A signal whose energy is concentrated in a frequency band f L < |f| < f H is often referred to as a band-pass signal, and the sampling of such signals is generally referred to as band-pass sampling. To prevent aliasing, the band-pass continuous-time signal can be sampled at a rate greater than twice its highest frequency f H :

F T > 2f H  (6.5)

The bandwidth of the band-pass signal is defined as

B = f H − f L  (6.6)

Consider the case in which the highest frequency contained in the signal is an integer multiple of the bandwidth, that is,

f H = mB  (6.7)

The sampling frequency is then to be selected to satisfy the condition

F T = 2f H /m = 2B  (6.8)

6.1.3 Quantization and Coding


Quantization and coding are the two primary steps involved in the process of A/D conversion. Quantization is a nonlinear and non-invertible process that rounds the given amplitude x(n) = x(nT) to an amplitude x k taken from a finite set of values at time t = nT. Mathematically, the output of the quantizer is defined as

x q (n) = Q[x(n)] = x k  (6.9)

The procedure of the quantization process is depicted as

The possible outputs of the quantizer (i.e., the quantization levels) are indicated by x 1 , x 2 , …, x L , where L stands for the number of intervals into which the signal amplitude is divided. For uniform quantization, the levels are equally spaced:

x k+1 − x k = Δ,  k = 1, 2, …, L − 1  (6.10)
where Δ is the quantizer step size.
The coding process in an ADC assigns a unique binary number to each quantization level. For L levels, at least L different binary numbers are needed. With a word length of n bits, 2^n distinct binary numbers can be represented. Then, the step size or the resolution of the A/D converter is given by

Δ = A/2^n  (6.11)

where A is the range of the quantizer.


Quantization Error
Consider an n-bit ADC sampling an analog signal x(t) at a sampling frequency of F T , as shown in Figure 6.3(a). The mathematical model of the quantizer is shown in Figure 6.3(b). The
power spectral density of the quantization noise with an assumption of uniform probability
distribution is shown in Figure 6.3(c).

Figure 6.3 (a) Quantizer, (b) mathematical model, (c) power spectral density of quantization noise

If the quantization error is uniformly distributed in the range (−Δ/2, Δ/2) as shown in Figure 6.3(b), the mean value of the error is zero, and the variance (the quantization noise power) is given by

σ e ² = Δ²/12  (6.12)

Substituting Δ = A/2^n , the quantization noise power can be expressed as

σ e ² = A² 2^(−2n)/12  (6.13)

The effect of the additive quantization noise on the desired signal can be quantified by evaluating the signal-to-quantization noise (power) ratio (SQNR), defined as

SQNR = 10 log 10 (P x /σ e ²) dB  (6.14)

where P x is the signal power and σ e ² is the quantization noise power.
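The dependence of SQNR on word length can be demonstrated with a small experiment (a pure-Python sketch with an illustrative full-scale sine input): the measured SQNR of a b-bit uniform quantizer tracks the familiar rule of thumb 6.02b + 1.76 dB that follows from Eqs. (6.12)-(6.14).

```python
import math

def measured_sqnr_db(bits, num_samples=8192):
    """Uniformly quantize a full-scale sine over [-1, 1) with a mid-rise
    quantizer of 2**bits levels and return the measured SQNR in dB."""
    step = 2.0 / (2 ** bits)          # quantizer range A = 2
    sig_pow = err_pow = 0.0
    for n in range(num_samples):
        x = math.sin(2 * math.pi * n / 127.0)
        xq = (math.floor(x / step) + 0.5) * step   # nearest mid-rise level
        sig_pow += x * x
        err_pow += (x - xq) ** 2
    return 10 * math.log10(sig_pow / err_pow)

for b in (4, 8, 12):
    print(b, round(measured_sqnr_db(b), 2), round(6.02 * b + 1.76, 2))
```

Each additional bit buys roughly 6 dB of SQNR, since the noise power falls by a factor of 4 while the signal power is unchanged.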

6.2 Classification of Discrete-Time Signals


6.2.1 Symmetric and Anti-symmetric Signals
A real-valued signal x(n) is said to be symmetric if it satisfies the condition

x(n) = x(−n) for all n  (6.15a)

An example of a symmetric sequence is shown in Figure 6.4.

Figure 6.4 An example of symmetric sequence

On the other hand, a signal x(n) is called anti-symmetric if it satisfies the condition

x(n) = −x(−n) for all n  (6.15b)

An example of anti-symmetric sequence is shown in Figure 6.5.

Figure 6.5 An example of anti-symmetric sequence

6.2.2 Finite and Infinite Length Sequences


A signal is said to be of finite length or duration if it is defined only for a finite time interval

N 1 ≤ n ≤ N 2  (6.16)

The length of the sequence is N = N 2 − N 1 + 1. Thus, a finite sequence of length N


has N samples. A discrete-time sequence consisting of N samples is called an N-point
sequence. Any finite sequence can be viewed as an infinite length sequence by adding
zero-valued samples outside the range (N 1, N 2). Also, an infinite length sequence can be
truncated to produce a finite length sequence.

6.2.3 Right-Sided and Left-Sided Sequences


A right-sided sequence is an infinite sequence x(n) for which x(n) = 0 for n < N 1, where N
1 is a positive or negative integer. If N 1 ≥ 0, the right-sided sequence is said to be causal.
Similarly, if x(n) = 0 for n > N 2, where N 2 is a positive or negative integer, then the
sequence is called a left-sided sequence. Also, if N 2 ≤ 0, then the sequence is said to be
anti-causal.

6.2.4 Periodic and Aperiodic Signals


A sequence x(n) is periodic with period N if x(n) = x(n + N) for all n, where N is a positive integer. The smallest value of N for which x(n) = x(n + N) holds is referred to as the fundamental period. A sequence is called aperiodic if it is not periodic. An example of a periodic
sequence is shown in Figure 6.6.
Figure 6.6 An example of a periodic sequence

Proposition 6.1 A discrete-time sinusoidal sequence x(n) = A sin (ω 0 n + θ) is periodic if and only if ω 0 /2π is a rational number.
A rational number is defined as the ratio of two integers. For the given periodic signal x(n) = A sin (ω 0 n + θ), its fundamental period N is obtained from the following relationship:

ω 0 N = 2πm, where m is an integer

The fundamental period of a discrete-time sinusoidal sequence satisfying Proposition 6.1 is calculated by setting m equal to the smallest integer that results in an integer value for N.
The fundamental period of a discrete-time complex exponential sequence can also be calculated by applying Proposition 6.1.
Example 6.1 Determine if the discrete-time sequences are periodic:

(i)

(ii) x(n) = sin2n

(iii)

(iv)
Solution
(i) The value of ω 0 in x(n) is . Since is a rational number, it is periodic
discrete-time sequence. The fundamental period of x(n) is given by For
m = 1, . Hence, is periodic with fundamental period N = 8,

(ii) x(n) = sin 2n is aperiodic, because ω 0 N = 2N = 2πm cannot be satisfied by any integers m and N, since π is irrational.

(iii) is periodic and cos 2n is aperiodic. Since the sum of a periodic and an aperiodic signal is aperiodic, the signal is aperiodic.

(iv) The value of ω 0 in x(n) is Since is a rational number, it is periodic


discrete-time sequence. The fundamental period of x(n) is given by For

m = 5, Hence, is periodic with fundamental period


N = 16.
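Proposition 6.1 can be turned directly into a short routine (a Python sketch using the standard fractions module; the helper name is illustrative): express ω 0 /2π as a fraction m/N in lowest terms, and the reduced denominator is the fundamental period.

```python
from fractions import Fraction

def fundamental_period(w0_over_2pi):
    """Fundamental period N of A*sin(w0*n + theta), where w0/(2*pi) is
    given as an exact Fraction; the reduced denominator is the period."""
    return Fraction(w0_over_2pi).denominator

# w0/(2*pi) = 1/8  -> N = 8, consistent with Example 6.1(i) (m = 1)
print(fundamental_period(Fraction(1, 8)))    # 8
# w0/(2*pi) = 5/16 -> N = 16, consistent with Example 6.1(iv) (m = 5)
print(fundamental_period(Fraction(5, 16)))   # 16
```

For sin 2n, the ratio ω 0 /2π = 1/π is irrational and has no exact Fraction representation, which is precisely why that sequence is aperiodic.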

6.2.5 Energy and Power Signals


The total energy of a signal x(n), real or complex, is defined as

E = ∑_{n=−∞}^{∞} |x(n)|²  (6.17)

By definition, the average power of an aperiodic signal x(n) is given by

P = lim_{N→∞} [1/(2N + 1)] ∑_{n=−N}^{N} |x(n)|²  (6.18a)

The signal is referred to as an energy signal if the total energy of the signal satisfies
the condition 0 < E < ∞. It is clear that for a finite energy signal, the average power P is
zero. Hence, an energy signal has zero average power. On the other hand, if E is infinite,
then P may be finite or infinite. If P is finite and nonzero, then the signal is called a power
signal. Thus, a power signal is an infinite energy signal with finite average power.
The average power of a periodic sequence x(n) with a period N is given by

P = (1/N) ∑_{n=0}^{N−1} |x(n)|²  (6.18b)

Hence, periodic signals are power signals.


Example 6.2 Determine whether the sequence x(n) = a n u(n) is an energy signal or a power signal or neither for the cases (a) |a| < 1, (b) |a| = 1, and (c) |a| > 1.
Solution For x(n) = a n u(n), E is given by E = ∑_{n=0}^{∞} |a|^{2n}.

(a) For |a| < 1, E = ∑_{n=0}^{∞} |a|^{2n} = 1/(1 − |a|²) < ∞, and P = 0.
The energy E is finite and the average power P is zero. Hence, the signal x(n) = a n
u(n) is an energy signal for |a| < 1.

(b) For |a| = 1, E = ∑_{n=0}^{∞} 1 = ∞, while P = lim_{N→∞} (N + 1)/(2N + 1) = 1/2.

The energy E is infinite, and the average power P is finite . Hence, the signal x(n) = a n
u(n) is a power signal for |a| = 1.

(c) For |a| > 1, E = ∑_{n=0}^{∞} |a|^{2n} = ∞, and P is also infinite, since |a|^{2n} grows without bound.
The energy E is infinite and also the average power P is infinite. Hence, the signal
x(n) = a n u(n) is neither an energy signal nor a power signal for |a| > 1.
Example 6.3 Determine whether the following sequences
(i) x(n) = e –n u(n), (ii) x(n) = e n u(n), (iii) x(n) = nu(n), and (iv) x(n) = cosπn u(n)
are energy or power signals or neither energy nor power signals.
Solution

(i) x(n) = e −n u(n). Hence, E and P are given by

The energy E is finite and the average power P is zero. Hence, the signal x(n) = e −n
u(n) is an energy signal.
(ii) x(n) = e +n u(n). Therefore, E and P are given by

The energy E is infinite and also the average power P is infinite. Hence, the signal
x(n) = e n u(n) is neither an energy signal nor a power signal.

(iii) x(n) = nu(n). Hence, E and P are given by

The energy E is infinite and also the average power P is infinite. Hence, the signal
x(n) = nu(n) is neither an energy signal nor a power signal.

(iv) x(n) = cos πn u(n). Since cos πn = (−1)ⁿ, E and P are given by

The energy E is infinite and the average power P is finite (P = 1/2). Hence, the signal x(n) = cos πn u(n) is a power signal.
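The classifications above can be confirmed numerically by computing partial sums of E and the windowed average power (a Python sketch; a = 0.5 and the window size are illustrative choices):

```python
def energy_and_power(x, N):
    """Partial energy sum and average power of x(n) over the window -N..N,
    approximating Eqs. (6.17) and (6.18a)."""
    energy = sum(abs(x(n)) ** 2 for n in range(-N, N + 1))
    return energy, energy / (2 * N + 1)

a = 0.5
x_decay = lambda n: a ** n if n >= 0 else 0.0   # a^n u(n) with |a| < 1
x_step = lambda n: 1.0 if n >= 0 else 0.0       # u(n)

E1, P1 = energy_and_power(x_decay, 2000)
print(E1, P1)   # E approaches 1/(1 - a^2) = 4/3 (finite), P -> 0: energy signal

E2, P2 = energy_and_power(x_step, 2000)
print(E2, P2)   # E grows without bound, P -> 1/2: power signal
```

Widening the window leaves E1 essentially unchanged while E2 keeps growing, which is exactly the distinction between energy and power signals.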

6.3 Discrete-Time Systems


A discrete-time system is defined mathematically as a transformation that maps an input sequence x(n) into an output sequence y(n). This can be denoted as

y(n) = ℜ[x(n)]  (6.19)

where ℜ is an operator.

6.3.1 Classification of Discrete-Time Systems


Linear Systems
A system is said to be linear if and only if it satisfies the following conditions:

ℜ[a x(n)] = a ℜ[x(n)]  (6.20)

ℜ[x 1(n) + x 2(n)] = ℜ[x 1(n)] + ℜ[x 2(n)]  (6.21)

where a is an arbitrary constant and y 1(n) and y 2(n) are the responses of the system when
x 1(n) and x 2(n) are the respective inputs. Equations (6.20) and (6.21) represent the
homogeneity and additivity properties, respectively.
The above two conditions can be combined into one representing the principle of
superposition as

(6.22)

where a and b are arbitrary constants.


Example 6.4 Check for linearity of the following systems described by the following
input-output relationships :

(i)

(ii) y(n) = x 2(n)

(iii) y(n) = x(n − n 0), where n 0 is an integer constant


Solution

(i) The outputs y 1(n) and y 2(n) for inputs x 1(n) and x 2(n) are, respectively, given by
The output y(n) due to an input x(n) = ax 1(n) + bx 2(n) is then given by

Hence the system described by is a linear system.

(ii) The outputs y 1(n) and y 2(n) for inputs x 1(n) and x 2(n) are given by

The output y(n) due to an input x(n) = ax 1(n) + bx 2(n) is then given by

Therefore, the system y(n) = x 2(n) is not linear.

(iii) The outputs y 1(n) and y 2(n) for inputs x 1(n) and x 2(n), respectively, are given by

The output y(n) due to an input x(n) = ax 1(n) + bx 2(n) is then given by
Hence, the system y(n) = x(n − n 0) is linear.
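The superposition check of Eq. (6.22) lends itself to an automated numerical test (a Python sketch; the probe inputs, constants, and tolerance are arbitrary illustrative choices):

```python
def is_linear(system, length=16, a=2.0, b=-3.0):
    """Numerically test a*x1(n) + b*x2(n) -> a*y1(n) + b*y2(n) on two
    fixed probe inputs; `system` maps a list to a list of equal length."""
    x1 = [float((-1) ** n * n) for n in range(length)]
    x2 = [float(n * n - 4) for n in range(length)]
    x = [a * u + b * v for u, v in zip(x1, x2)]
    lhs = system(x)
    y1, y2 = system(x1), system(x2)
    rhs = [a * u + b * v for u, v in zip(y1, y2)]
    return all(abs(p - q) < 1e-9 for p, q in zip(lhs, rhs))

square = lambda x: [v * v for v in x]      # y(n) = x^2(n): fails the test
delay2 = lambda x: [0.0, 0.0] + x[:-2]     # y(n) = x(n - 2): passes

print(is_linear(square), is_linear(delay2))   # False True
```

A passing test is evidence, not proof, of linearity; the analytical argument above remains the definitive check.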
Time-Invariant Systems
A time-invariant system (shift invariant system) is one in which the internal parameters do
not vary with time. If y 1(n) is output to an input x 1(n), then the system is said to be time
invariant if, for all n 0, the input sequence x 1(n) = x(n − n 0) produces the output sequence
y 1(n) = y(n − n 0), i.e.,

ℜ[x(n − n 0)] = y(n − n 0)

where n 0 is a positive or negative integer.


Example 6.5 Check for time-invariance of the system defined by
Solution From the given equation, the output y(n) of the system delayed by n 0 can be written as

For example, for an input x 1(n) = x(n − n 0), the output y1(n) can be written as

Substitution of the change of variables k 1 = k − n 0 in the above summation yields

Hence, it is a time-invariant system.


Example 6.6 Check for time-invariance of the down-sampling system with a factor of 2,
defined by the relation

Solution For an input x 1(n) = x(n − n 0), the output y 1(n) of the compressor system can be written as
From given equation,

Comparing the above two equations, it can be observed that y 1(n) ≠ y(n − n 0). Thus,
the down-sampling system is not time invariant.
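A numerical version of this shift test can likewise be sketched in Python (the probe input and the shift n 0 are illustrative); it flags the down-sampler as time varying and a simple delay as time invariant:

```python
def is_time_invariant(system, n0=3, length=24):
    """Compare system(x(n - n0)) with y(n - n0); sequences are lists
    starting at n = 0, zero-padded when shifted."""
    x = [float((n % 5) - 2) for n in range(length)]
    shifted_x = [0.0] * n0 + x[:-n0]          # x(n - n0)
    y = system(x)
    shifted_y = [0.0] * n0 + y[:-n0]          # y(n - n0)
    return system(shifted_x) == shifted_y

downsample = lambda x: [x[2 * n] for n in range(len(x) // 2)]  # y(n) = x(2n)
delay1 = lambda x: [0.0] + x[:-1]                              # y(n) = x(n - 1)

print(is_time_invariant(delay1))      # True
print(is_time_invariant(downsample))  # False
```

The comparison treats samples shifted out of the finite window as zeros, which is adequate for a finite-length check.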
Causal System
A system is said to be causal if its output at time instant n depends only on the present and
past input values, but not on the future input values.
For example, a system defined by

is not causal , as the output at time instant n depends on future values of the input. But,
the system defined by

is causal, since its output at time instant n depends only on the present and past values
of the input.
Stable System
A system is said to be stable if and only if every bounded-input sequence produces a
bounded-output sequence. The input x(n) is bounded if there exists a fixed positive finite
value β x such that

|x(n)| ≤ β x < ∞ for all n  (6.23)

Similarly, the output y(n) is bounded if there exists a fixed positive finite value β y such that

|y(n)| ≤ β y < ∞ for all n  (6.24)

and this type of stability is called bounded-input bounded-output (BIBO) stability.


Example 6.7 Check for stability of the system described by the following input-output
relation

Solution Assume that the input x(n) is bounded such that |x(n)| ≤ β x < ∞ for all n

Hence, y(n) is bounded and the system is stable .


Example 6.8 Check for stability, causality , linearity , and time-invariance of the system
described by ℜ[x(n)] = (−1) n x(n)
Solution This transformation outputs the current value of x(n) multiplied by either +1 or −1.
It is stable, since it does not change the magnitude of x(n) and hence satisfies the conditions for bounded-input bounded-output stability.
It is causal, because each output depends only on the current value of x(n).

Hence, it is linear.

Therefore, it is not time invariant.


Example 6.9 Check for stability, causality , linearity , and time-invariance of the system
described by ℜ[x(n)] = x(n 2)
Solution It is stable, since if x(n) is bounded, x(n²) is also bounded.
It is not causal, since, for example, if n = 4, then the output depends upon a future input because y(4) = ℜ[x(4)] = x(16).
Therefore, it is linear.

Hence, it is not time invariant.

6.3.2 Impulse and Step Responses


Let the input signal x(n) be transformed by the system to generate the output signal y(n). This transformation operation is given by

y(n) = ℜ[x(n)]  (6.25)

If the input to the system is a unit sample sequence (i.e., the impulse input δ(n)), then the system output is called the impulse response and is denoted by h(n). If the input to the system is a unit step sequence u(n), then the system output is called its step response. In the next section, we show that a linear time-invariant discrete-time system is characterized by its impulse response or step response.

6.4 Linear Time-Invariant Discrete-Time Systems


Linear time-invariant systems have significant signal processing applications, and hence it
is of interest to study the properties of such systems.

6.4.1 Input-Output Relationship


An arbitrary sequence x(n) can be expressed as a weighted linear combination of unit sample sequences:

x(n) = ∑_{k=−∞}^{∞} x(k) δ(n − k)  (6.26)

Now, the discrete-time system response y(n) is given by

y(n) = ℜ[x(n)] = ℜ[∑_{k=−∞}^{∞} x(k) δ(n − k)]  (6.27)

From the principle of superposition, the above equation can be written as

y(n) = ∑_{k=−∞}^{∞} x(k) ℜ[δ(n − k)]  (6.28)

Let the response of the system due to the input δ(n − k) be h k (n), that is, h k (n) = ℜ[δ(n − k)].
Then, the system response y(n) for an arbitrary input x(n) is given by y(n) = ∑_{k=−∞}^{∞} x(k) h k (n).
Since δ(n − k) is a time-shifted version of δ(n) and the operator ℜ is time invariant, the response h k (n) is the time-shifted version of the impulse response h(n); hence, h k (n) = h(n − k). Thus,

y(n) = ∑_{k=−∞}^{∞} x(k) h(n − k)  (6.29)

The above equation for y(n) is commonly called the convolution sum and is represented by

y(n) = x(n) ∗ h(n)  (6.29a)

where the symbol * stands for convolution. The discrete-time convolution operates on
the two sequences x(n) and h(n) to produce the third sequence y(n).
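Eq. (6.29) translates directly into code. The Python sketch below is a plain double-loop implementation for finite causal sequences (the MATLAB function conv used in Section 6.4.3 computes the same result):

```python
def convolve(x, h):
    """Linear convolution y(n) = sum_k x(k) h(n-k) of finite sequences
    x and h, both assumed to start at n = 0."""
    y = [0.0] * (len(x) + len(h) - 1)   # output length N1 + N2 - 1
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):     # keep h's index in range
                y[n] += x[k] * h[n - k]
    return y

print(convolve([6, -3], [-3, 6, 3]))       # [-18.0, 45.0, 0.0, -9.0]
print(convolve([2, -1, 0, 0], [-1, 2, 1]))
```

The two calls reproduce the results of Examples 6.12 and 6.14.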
Example 6.10 Determine the discrete convolution of the following sequences for large values of n:

Solution
For large n, tends to zero and hence,

Example 6.11 Determine discrete convolution of the following two finite duration
sequences:

Solution The impulse response h(n) = 0 for n < 0; hence, the given system is causal. Also, x(n) = 0 for n < 0; therefore, x(n) is a causal sequence:

6.4.2 Computation of Linear Convolution


Matrix Method
If the input x(n) is of length N 1 and the impulse response h(n) is of length N 2 , then the convolution sequence is of length N 1 + N 2 − 1. Thus, the linear convolution given by Eq. (6.29) can be written in matrix form as
(6.30)

The following example illustrates the above procedure for computation of linear
convolution.
Example 6.12 Find the convolution of the sequences x(n) = {6, −3} and h (n) = {−3, 6, 
3}.
Solution Using Eq. (6.30), the linear convolution of x(n) and h(n) is given by

Thus,

Graphical Method for Computation of Linear Convolution


Evaluation of the convolution sum at any sample n consists of the following four operations:

(i) Time reversing or reflecting of the sequence h(k) about k = 0 sample to give h (–k).

(ii) Shifting the sequence h(–k) to the right by n samples to obtain h(n – k).

(iii) Forming the product x(k)h(n – k) sample by sample for the desired value of n.
(iv) Summing the product over the index k in y(n) for the desired value of n.

The length of the convolution sum sequence y(n) is N 1 + N 2 − 1, where N 1 is the length of the sequence x(n) and N 2 is the length of the sequence h(n).
Example 6.13 Compute the convolution of the sequences of Example 6.12 using the
graphical method.
Solution The sequences x(n) and h(n) are shown in Figure 6.7, and the steps of the convolution are illustrated in Figure 6.8.

Figure 6.7 Sequences x(n) and h(n)


Figure 6.8 Convolution of sequences x(n) and h(n)

6.4.3 Computation of Convolution Sum Using MATLAB


The MATLAB function conv(a,b) can be used to compute the convolution sum of two sequences a and b, as illustrated in the following example.
Example 6.14 Compute convolution sum of the sequences x(n) = {2,−1,0,0} and h
(n) = {−1,2,1}, using MATLAB.
Program 6.1 Illustration of convolution
a=[ 2 -1 0 0 ];% first sequence
b=[-1 2 1];% second sequence
c=conv(a,b);% convolution of first sequence and second
sequence
len=length(c)-1;
n=0:1:len;
stem(n,c)
xlabel('Time index n'); ylabel('Amplitude');
axis([0 5 -3 5])

Figure 6.9 Sequence generated by the convolution

6.4.4 Some Properties of the Convolution Sum


Starting with the convolution sum given by (6.29a), namely, y(n) = x(n) ∗ h(n), we can establish the following properties:

1) The convolution sum obeys the commutative law:

x(n) ∗ h(n) = h(n) ∗ x(n)  (6.31a)

2) The convolution sum obeys the associative law:

[x(n) ∗ h 1(n)] ∗ h 2(n) = x(n) ∗ [h 1(n) ∗ h 2(n)]  (6.31b)

3) The convolution sum obeys the distributive law:

x(n) ∗ [h 1(n) + h 2(n)] = x(n) ∗ h 1(n) + x(n) ∗ h 2(n)  (6.31c)

Let us now interpret the above relations physically.

1) The commutative law shows that the output is the same if we interchange the roles of
the input and the impulse response. This is illustrated in Figure 6.10.

Figure 6.10 Interpretation of the commutative law

2) To interpret the associative law, we consider a cascade of two systems whose impulse responses are h 1(n) and h 2(n). Then y 1(n) = x(n) ∗ h 1(n) if x(n) is the input to the system with the impulse response h 1(n). If y 1(n) is now fed as the input to the system with impulse response h 2(n), then the overall system output is given by

y(n) = y 1(n) ∗ h 2(n) = [x(n) ∗ h 1(n)] ∗ h 2(n) = x(n) ∗ [h 1(n) ∗ h 2(n)]  (6.32)

by the associative law.

This equivalence is shown in Figure 6.11. Hence, if two systems with impulse responses h 1(n) and h 2(n) are cascaded, then the overall system impulse response is given by

h(n) = h 1(n) ∗ h 2(n)  (6.33)

Figure 6.11 Interpretation of the associative law

This can be generalized to a number of LTI systems in cascade.

3) We now consider the distributive law given by (6.31c). This can be interpreted as two LTI systems connected in parallel, with the overall impulse response h(n) of the parallel combination given by

h(n) = h 1(n) + h 2(n)  (6.34)

This is illustrated in Figure 6.12.

Figure 6.12 Interpretation of distributive law
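For finite sequences, all three laws can be confirmed by direct computation (a self-contained Python sketch with arbitrary test sequences):

```python
def convolve(x, h):
    """Direct evaluation of the convolution sum for finite sequences."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]
    return y

def add(a, b):
    """Elementwise sum of sequences, zero-padding the shorter one."""
    m = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0.0) + (b[i] if i < len(b) else 0.0)
            for i in range(m)]

x, h1, h2 = [1.0, 2.0, -1.0], [3.0, 0.0, 1.0], [-2.0, 1.0]

assert convolve(x, h1) == convolve(h1, x)                                 # (6.31a)
assert convolve(convolve(x, h1), h2) == convolve(x, convolve(h1, h2))     # (6.31b)
assert convolve(x, add(h1, h2)) == add(convolve(x, h1), convolve(x, h2))  # (6.31c)
print("commutative, associative, and distributive laws hold")
```

The checks here use integer-valued floats, so all arithmetic is exact; with general floating-point data, a small tolerance would be needed in place of equality.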

Example 6.15 Consider the system shown in Figure 6.13 with h(n) being real. If y
2(n) = y 1(–n), find the overall impulse response h 1(n) that relates y 2(n) to x(n).

Figure 6.13 Input-output relations for Example 6.15

Solution

From Figure 6.13, we have the following relations :


y 1(n) = y(−n) ∗ h(n)

Hence, the overall impulse response is h 1(n) = h(n) ∗ h(−n).


Example 6.16 Consider the cascade interconnection of three causal LTI systems as
shown in Figure 6.14. The impulse response h 2(n) is given by

and the overall impulse response h(n) = {1,5,10,11,8,4,1}. Determine the impulse
response h 1(n).

Figure 6.14 Interconnection of three causal LTI systems

Solution Let the overall impulse response of the cascaded system be h(n). Hence,

h(n) = h 1(n) ∗ h 2(n) ∗ h 2(n)

Since convolution is associative,

Let h3(n) = h2(n) * h2(n).


Since h 2(n) is nonzero for n = 0 and 1 only, h 3(n) can be written as

Therefore,
Thus, we obtain

Now, h(n) is nonzero in the interval 0 to 6 and h 3(n) is nonzero in the interval 0 to 2:

Hence, h 1(n) will be nonzero in the interval 0 to 4. Then, we have

Let h 1(n) = {a1, a2, a3, a4, a5}.


Therefore, we have

Thus,
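The coefficients found above can be verified by deconvolution. The Python/NumPy sketch below assumes h2(n) = {1, 1} (a sequence nonzero only for n = 0 and 1, consistent with the statement above) and recovers h1(n) by polynomial division, since convolution of sequences corresponds to multiplication of their associated polynomials:

```python
import numpy as np

h = np.array([1.0, 5.0, 10.0, 11.0, 8.0, 4.0, 1.0])  # overall impulse response
h2 = np.array([1.0, 1.0])          # assumed: h2(n) = u(n) - u(n - 2)
h3 = np.convolve(h2, h2)           # h3(n) = h2(n) * h2(n) = {1, 2, 1}

# h(n) = h1(n) * h3(n): recover h1(n) by polynomial (de)convolution
h1, remainder = np.polydiv(h, h3)
assert np.allclose(remainder, 0)                   # division is exact
assert np.allclose(h1, [1.0, 3.0, 3.0, 2.0, 1.0])  # h1(n) = {1, 3, 3, 2, 1}
assert np.allclose(np.convolve(h1, h3), h)         # consistency check
```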
6.4.5 Stability and Causality of LTI Systems in Terms of the
Impulse Response
The output of an LTI system can be expressed as

For bounded input x(n)

we have

(6.35)

It is seen from (6.35) that y(n) is bounded if and only if the sum Σ_k |h(k)| is bounded. Hence,
the necessary and sufficient condition for stability is that

S = Σ_{k=−∞}^{∞} |h(k)| < ∞
(6.36)

The output y(n0) of a causal LTI system can be expressed as

For a causal system, the output at n = n0 should not depend on future inputs.
Hence, in the above equation, h(k) = 0 for k < 0.
Thus, it is clear that for causality of an LTI system, its impulse response sequence must satisfy

h(n) = 0 for n < 0
(6.37)

Example 6.17 Check for the stability of the systems with the following impulse
responses :
(i) Ideal delay, h(n) = δ(n − n d ); (ii) forward difference , h(n) = δ(n + 1) − δ(n).
(iii) Backward difference, h(n) = δ(n) − δ(n − 1); (iv) h(n) = u(n).
(v) h(n) = a n u(n), where |a| < 1, and (vi) h(n) = a n u(n), where |a| ≥ 1.
Solution Given impulse responses of the systems, stability of each system can be tested
by computing the sum

In case of (i), (ii), and (iii), it is clear that S < ∞. As such, the systems corresponding to
(i), (ii), and (iii) are stable.
For the impulse response given in (iv), the system is unstable since

This is an example of an infinite-duration impulse response (IIR) system.


In the case of (v), S = Σ_{n=0}^{∞} |a|^n = 1/(1 − |a|) for |a| < 1, so S < ∞ and the system is stable. This is an
example of a stable IIR system.
Finally, in case of (vi), |a| ≥ 1, and the sum is infinite, making the system unstable.
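The stability test S = Σ|h(n)| < ∞ can be explored numerically with partial sums; a Python sketch (illustrative, with the truncation length 2000 chosen arbitrarily):

```python
import numpy as np

def stability_sum(h, n_terms=2000):
    """Partial sum S = sum of |h(n)| over n = 0 .. n_terms - 1."""
    n = np.arange(n_terms)
    return np.sum(np.abs(h(n)))

# Case (v): h(n) = a^n u(n), |a| < 1 -- partial sums approach 1 / (1 - |a|)
a = 0.5
S = stability_sum(lambda n: a ** n)
assert abs(S - 1.0 / (1.0 - abs(a))) < 1e-6        # stable

# Case (iv): h(n) = u(n) -- partial sums grow without bound
S_u = stability_sum(lambda n: np.ones_like(n, dtype=float))
assert S_u == 2000.0                               # grows linearly: unstable
```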
Example 6.18 Check the following systems for causality :
(i) (ii)

(iii) , (iv) , and (v) h(n) = u(n + 1) − u(n)

Solution (i) h(n) = 0 for n < 0; hence the system is causal .


(ii) h(n) ≠ 0 for n < 0; hence the system is not causal.
(iii) h(n) ≠ 0 for n < 0; thus, the system is not causal.

(iv) ; hence h(n) ≠ 0 for n < 0; so, the system is not causal.

(v) h(n) = u(n + 1) − u(n) is nonzero only at n = −1; since h(n) ≠ 0 for n < 0, the system is not causal.
Example 6.19 Check the following systems for stability :

(i) , (ii) h(n) = u(n + 2) − u(n − 5), (iii) h(n) = 5 n u(−n − 3), (iv)

, and (v)

Solution

(i) The system is stable , since

(ii) h(n) = u(n + 2) – u(n – 5). The system is stable, since S is finite.

(iii) h(n) = 5^n u(−n − 3). Hence, S = Σ_{n=−∞}^{−3} 5^n = Σ_{m=3}^{∞} (1/5)^m = 1/100 < ∞. Therefore, the system is
stable.

(iv)
Summing |h(n)| over all positive n, we see that S tends to infinity . Hence, the system
is not stable.

(v)
Since |h(n)| is bounded above by an absolutely summable sequence, S < ∞. Hence the system is stable.

6.5 Characterization of Discrete-Time Systems


Discrete-time systems are characterized in terms of difference equations. An important
class of LTI discrete-time systems is one that is characterized by a linear difference
equation with constant coefficients. Such a difference equation may be of two types,
namely, non-recursive and recursive.

6.5.1 Non-Recursive Difference Equation


A non-recursive LTI discrete-time system is one that can be characterized by a linear
constant-coefficient difference equation of the form

y(n) = Σ_{m=−∞}^{∞} b_m x(n − m)
(6.38)

where the b_m's are constants. By assuming causality, the above equation can be written as

y(n) = Σ_{m=0}^{∞} b_m x(n − m)
(6.39)

In addition, if x(n) = 0 for n < 0 and b_m = 0 for m > N, then Eq. (6.39) becomes

y(n) = Σ_{m=0}^{N} b_m x(n − m)
(6.40)

Thus an LTI, causal , non-recursive system can be characterized by an Nth-order linear


non-recursive difference equation . The Nth-order non-recursive difference equation has a
finite impulse response (FIR). Therefore, an FIR filter is characterized by a non-recursive
difference equation .

6.5.2 Recursive Difference Equation


The response of a discrete-time system depends on the present and previous values of the
input as well as the previous values of the output. Hence, a linear time-invariant, causal,
recursive discrete-time system can be represented by the following Nth-order linear
recursive difference equation:

y(n) + Σ_{m=1}^{N} a_m y(n − m) = Σ_{m=0}^{M} b_m x(n − m)
(6.41)

where a m and b m are constants. An Nth-order recursive difference equation has an infinite
impulse response. Hence, an infinite impulse response (IIR) filter is characterized by a
recursive difference equation.
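The FIR/IIR distinction can be demonstrated by applying a unit impulse to one difference equation of each type; a Python/SciPy sketch with illustrative coefficients (chosen here, not taken from the text):

```python
import numpy as np
from scipy.signal import lfilter

imp = np.r_[1.0, np.zeros(19)]                     # unit impulse delta(n)

# Non-recursive (FIR): y(n) = x(n) + 0.5 x(n-1) + 0.25 x(n-2)
h_fir = lfilter([1.0, 0.5, 0.25], [1.0], imp)
assert np.count_nonzero(h_fir) == 3                # finite impulse response

# Recursive (IIR): y(n) - 0.5 y(n-1) = x(n), so h(n) = 0.5^n u(n)
h_iir = lfilter([1.0], [1.0, -0.5], imp)
assert np.allclose(h_iir, 0.5 ** np.arange(20))    # never dies out exactly
```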
Example 6.20 An initially relaxed LTI system was tested with an input signal x
(n) = u(n) and found to have a response as shown in Table 6.1.
Table 6.1 Response of an LTI system for an input x(n) = u(n)

n    1 2 3 4 5 … 100 …

y(n) 1 2 4 6 10 … 10 …


(i) Obtain the impulse response of the system.

(ii) Deduce the difference equation of the system.


Solution

(i) From Table 6.1, it can be observed that the response y(n) for an input x(n) = u(n) is
given by

Similarly, for an input x(n) = u(n − 1), the response y(n − 1) is given by

For an input x(n) = u(n) − u(n − 1) = δ(n), the response of the LTI system is the impulse response
h(n) given by

(ii) The difference equation is given by

Hence, the difference equation of the system can be written as

6.5.3 Solution of Difference Equations


A general linear constant coefficient difference equation can be expressed as

(6.42)

The solution of the difference equation is the output response y(n). It is the sum of two
components which can be computed independently as

(6.43a)

where y c (n) is called the complementary solution and y p (n) is called the particular
solution.
The complementary solution y c (n) is obtained by setting x(n) = 0 in Eq. (6.42). Thus y
c (n) is the solution of the following homogeneous difference equation:

(6.43b)

where a0 = 1. To solve the above homogeneous difference equation, let us assume that

yc(n) = λ^n
(6.43c)

where the subscript c indicates the solution to the homogeneous difference equation.
Substituting y c (n) in Eq. (6.43b), the following equation can be obtained:

(6.44)

which takes the form:

(6.45)

The above equation is called the characteristic equation, which has N roots, denoted
λ1, λ2, …, λN. If the N roots are distinct, then the complementary solution can be expressed as

yc(n) = α1 λ1^n + α2 λ2^n + ⋯ + αN λN^n
(6.46)

where α 1, α 2, … … , α N are constants which can be obtained from the specified initial
conditions of the discrete-time system. For multiple roots, the complementary solution y c
(n) assumes a different form. In the case when the root λ 1 of the characteristic equation is
repeated m times, but λ 2, … … , λ N are distinct, then the complementary solution y c (n)
assumes the form

(6.47)

In case the characteristic equation has complex roots λ1, λ2 = a ± jb, the
complementary solution takes the form yc(n) = (a² + b²)^{n/2} (C1 cos nθ + C2 sin nθ), where
θ = tan⁻¹(b/a) and C1 and C2 are constants.
We now look at the particular solution yp(n) of Eq. (6.42). The particular solution
yp(n) is any solution that satisfies the difference equation for the specific input signal x(n),
for n ≥ 0, i.e.,

(6.48)

The procedure for finding the particular solution yp(n) assumes that yp(n) takes the same
form as x(n). Thus, if x(n) is a constant, then yp(n) is also assumed to be a constant; if
x(n) is a sinusoidal sequence, then yp(n) is assumed to be a sinusoidal sequence, and so on.
In order to find out the overall solution, the complementary and particular solutions
must be added. Hence,

(6.49)

Example 6.21 Determine the impulse response (i.e., the response to x(n) = δ(n)) of a discrete-time
system characterized by the following difference equation:

y(n) + 2y(n − 1) − 3y(n − 2) = x(n)
(6.50)

Solution First, we determine the complementary solution by setting x(n) = 0 and y(n) =
λ^n in Eq. (6.50), which gives

λ^n + 2λ^{n−1} − 3λ^{n−2} = 0

Hence, the zeros of the characteristic polynomial λ² + 2λ − 3 are λ1 = −3 and λ2 = 1.
Therefore, the complementary solution is of the form

yc(n) = α1(−3)^n + α2(1)^n
(6.51)

For the impulse x(n) = δ(n), x(n) = 0 for n > 0 and x(0) = 1. Substituting these relations in
Eq. (6.50) and assuming that y(−1) = 0 and y(−2) = 0, we get

y(0) + 2y(−1) − 3y(−2) = x(0) = 1

i.e., y(0) = 1. Similarly, y(1) + 2y(0) − 3y(−1) = x(1) = 0 yields y(1) = −2.
Thus, from Eq. (6.51), we get

Solving these two equations, we obtain α 1 = 3/4 and α 2 = 1/4.


Since x(n) = 0 for n > 0, there is no particular solution. Hence, the impulse response is
given by

h(n) = [(3/4)(−3)^n + (1/4)] u(n)
(6.52)
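The result can be cross-checked by iterating the difference equation y(n) + 2y(n − 1) − 3y(n − 2) = x(n) directly; scipy.signal.lfilter does exactly this with zero initial conditions. A Python sketch:

```python
import numpy as np
from scipy.signal import lfilter

n = np.arange(8)
imp = np.r_[1.0, np.zeros(7)]                      # x(n) = delta(n)

# y(n) + 2 y(n-1) - 3 y(n-2) = x(n)  ->  b = [1], a = [1, 2, -3]
h_num = lfilter([1.0], [1.0, 2.0, -3.0], imp)

# Closed-form impulse response: h(n) = (3/4)(-3)^n + 1/4, n >= 0
h_closed = 0.75 * (-3.0) ** n + 0.25
assert np.allclose(h_num, h_closed)
```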

Example 6.22 A discrete-time system is characterized by the following difference


equation:

y(n) + 5y(n − 1) + 6y(n − 2) = x(n)
(6.53)

Determine the step response of the system, i.e., the response to x(n) = u(n).


Solution For the given difference equation, the total solution is given by y(n) = yc(n) + yp(n).
First, we determine the complementary solution by setting x(n) = 0 and y(n) = λ^n in
Eq. (6.53), which gives

λ^n + 5λ^{n−1} + 6λ^{n−2} = 0

Hence, the zeros of the characteristic polynomial λ² + 5λ + 6 are λ1 = −3 and λ2 = −2.
Therefore, the complementary solution is of the form

yc(n) = α1(−3)^n + α2(−2)^n

The particular solution for the step input is of the form yp(n) = K. For n ≥ 2, substituting
yp(n) = K and x(n) = 1 in Eq. (6.53), we get K + 5K + 6K = 1, i.e., K = 1/12.

Therefore, the solution for given difference equation is

(6.54)

For n = 0, Eq. (6.53) becomes

y(0) + 5y(−1) + 6y(−2) = x(0)

Assuming y(−1) = y(−2) = 0, we get y(0) = x(0) = 1, and for
n = 1, y(1) + 5y(0) + 6y(−1) = x(1) = 1, i.e., y(1) = −4.
Then, from Eq. (6.54), we get

α1 + α2 + 1/12 = 1 and −3α1 − 2α2 + 1/12 = −4

Solving these equations, we arrive at α1 = 9/4 and α2 = −4/3.

Then, the step response is given by

y(n) = [(9/4)(−3)^n − (4/3)(−2)^n + 1/12] u(n)
(6.55)
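As in the previous example, the closed-form answer, with α1 = 9/4 and α2 = −4/3 obtained from the two equations above, can be cross-checked against a direct recursion of the difference equation; a Python/SciPy sketch:

```python
import numpy as np
from scipy.signal import lfilter

n = np.arange(10)
step = np.ones(10)                                 # x(n) = u(n)

# y(n) + 5 y(n-1) + 6 y(n-2) = x(n)  ->  b = [1], a = [1, 5, 6]
y_num = lfilter([1.0], [1.0, 5.0, 6.0], step)

# Closed form: y(n) = (9/4)(-3)^n - (4/3)(-2)^n + 1/12, n >= 0
y_closed = 2.25 * (-3.0) ** n - (4.0 / 3.0) * (-2.0) ** n + 1.0 / 12.0
assert np.allclose(y_num, y_closed)
```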

Example 6.23 A discrete-time system is characterized by the following difference


equation:

y(n) − 2y(n − 1) + y(n − 2) = x(n) − x(n − 1)
(6.56)

Determine the response y(n), n ≥ 0 when the system input is x(n) = (–1) n u(n) and the
initial conditions are y(−1) = 1 and y(−2) = −1.
Solution For the given difference equation, the total solution is given by

First, determine the complementary solution by setting x(n) = 0 and y(n) = λ n in Eq.
(6.56); this gives

Hence, the zeros of the characteristic polynomial λ 2 − 2λ + 1 are λ 1 = λ 2 = 1.


Since the root is repeated, the complementary solution is of the form

yc(n) = (1)^n (α1 + nα2)

The particular solution for the input x(n) = (−1)^n u(n) is of the form

yp(n) = K(−1)^n u(n)

Substituting x(n) = (−1)^n u(n) and yp(n) = K(−1)^n u(n) in Eq. (6.56), we get

K(−1)^n u(n) − 2K(−1)^{n−1} u(n − 1) + K(−1)^{n−2} u(n − 2) = (−1)^n u(n) − (−1)^{n−1} u(n − 1)

For n = 2, the above equation becomes K + 2K + K = 2, i.e., K = 1/2.

Therefore, the particular solution is given by

Then, the total solution for given difference equation is


(6.57)

For n = 0, Eq. (6.56) becomes

Using the initial conditions y(–1) = 1 and y(–2) = –1, we get y(0) = 4.
Then, for n = 1, from Eq. (6.56), we get y(1) = 5. Thus, we get from Eq. (6.57)

Solving these two equations, we arrive at α1 = 7/2 and α2 = 2. Thus, the response of
the system for the given input is

y(n) = [7/2 + 2n + (1/2)(−1)^n] u(n)
(6.58)
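Because the initial conditions here are nonzero, a plain filter call is not sufficient; scipy.signal.lfiltic converts the initial output values y(−1) and y(−2) (with zero past inputs) into an equivalent internal filter state. A Python sketch verifying the result:

```python
import numpy as np
from scipy.signal import lfilter, lfiltic

b = [1.0, -1.0]          # right-hand side: x(n) - x(n-1)
a = [1.0, -2.0, 1.0]     # left-hand side: y(n) - 2 y(n-1) + y(n-2)

n = np.arange(10)
x = (-1.0) ** n          # input (-1)^n u(n)

# Encode initial conditions y(-1) = 1, y(-2) = -1 (past inputs are zero)
zi = lfiltic(b, a, y=[1.0, -1.0], x=[0.0, 0.0])
y_num, _ = lfilter(b, a, x, zi=zi)

# Closed form (6.58): y(n) = 7/2 + 2n + (1/2)(-1)^n, n >= 0
y_closed = 3.5 + 2.0 * n + 0.5 * (-1.0) ** n
assert np.allclose(y_num, y_closed)
```

The design choice here is deliberate: lfiltic exists precisely so that nonzero past outputs can be folded into the filter's internal state instead of hand-coding the recursion.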

6.5.4 Computation of Impulse and Step Responses Using


MATLAB
The impulse and step responses of LTI discrete-time systems can be computed using the
MATLAB function

y = filter(b,a,x)

where b and a are the coefficient vectors of the difference equation describing the system, x is
the input data vector, and y is the output vector generated assuming zero initial conditions. The
following example illustrates the computation of the impulse and step responses of an LTI
system.
Example 6.24 Determine the impulse and step responses of a discrete-time system
described by the following difference equation (the coefficients correspond to the b and a
vectors used in Program 6.2):

y(n) + 0.1y(n − 1) − 0.06y(n − 2) = x(n) − 2x(n − 1)
Solution Program 6.2 is used to compute and plot the impulse and step responses, which
are shown in Figure 6.15a and b, respectively.

Figure 6.15 (a) Impulse response and (b) step response for Example 6.24

Program 6.2: Illustration of Impulse and Step Response Computation


clear;clc;
flag=input('enter 1 for impulse response, and 2 for step response');
len=input('enter desired response length=');
b=[1 -2]; % b coefficients of the difference equation
a=[1 0.1 -0.06]; % a coefficients of the difference equation
if flag==1
    x=[1,zeros(1,len-1)];
end
if flag==2
    x=[ones(1,len)];
end
y=filter(b,a,x);
n=0:1:len-1;
stem(n,y)
xlabel('Time index n'); ylabel('Amplitude');

6.6 Sampling of Discrete-Time Signals


It is often necessary to change the sampling rate of a discrete-time signal, i.e., to obtain a
new discrete-time representation of the underlying continuous-time signal of the form
x′(n) = xa(nT′). One approach to obtain the sequence x′(n) from x(n) is to reconstruct xa(t)
from x(n) and then resample xa(t) with period T′ to obtain x′(n). Often, however, this
is not a desirable approach because of the non-ideal analog reconstruction filter, DAC,
and ADC that would be used in a practical implementation. Thus, it is of interest to
consider methods that involve only discrete-time operations.

6.6.1 Discrete-Time Down Sampler


The block diagram representation of a down sampler, also known as a sampling rate
compressor, is depicted in Figure 6.16.

Figure 6.16 Block diagram representation of a down sampler

The down-sampling operation is implemented by defining a new sequence xd(n) in
which every Mth sample of the input sequence is kept and the (M − 1) in-between samples are
removed, i.e., xd(n) is identical to the sequence obtained from xa(t) with a sampling period
T′ = MT:

xd(n) = x(nM)
(6.59)

For example, if x(n) = {2, 6, 3, 0, 1, 2, −5, 2, 4, 7, −1, 1, −2, …},

then xd(n) = {2, 1, 4, −2, …} for M = 4, i.e., M − 1 = 3 samples are discarded between the
retained samples of x(n) to get xd(n).

6.6.2 Discrete-Time Up-Sampler


The block diagram representation of an up-sampler, also called a sampling rate expander
or simply an interpolator, is shown in Figure 6.17.

Figure 6.17 Block diagram representation of an up-sampler

The output of an up-sampler is given by

xe(n) = x(n/L) for n = 0, ±L, ±2L, …, and 0 otherwise
(6.60)

Eq. (6.60) implies that the output of an up-sampler can be obtained by inserting (L − 1)
equidistant zero-valued samples between two consecutive samples of the input sequence
x(n), i.e., xe(n) is identical to the sequence obtained from xa(t) with a sampling period
T′ = T/L. For example, if x(n) = {2, 1, 4, −2, …}, then xe(n) = {2, 0, 0, 0, 1, 0, 0, 0, 4, 0, 0, 0,
−2, 0, 0, 0, …} for L = 4, i.e., L − 1 = 3 zero-valued samples are inserted between the
samples of x(n) to get xe(n).
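Both operations reduce to simple indexing. A Python/NumPy sketch using the example sequence above:

```python
import numpy as np

x = np.array([2, 6, 3, 0, 1, 2, -5, 2, 4, 7, -1, 1, -2])

# Down-sampling by M = 4: keep every Mth sample, x_d(n) = x(nM)
M = 4
x_d = x[::M]
assert np.array_equal(x_d, [2, 1, 4, -2])

# Up-sampling by L = 4: insert L - 1 zeros between consecutive samples
L = 4
x_e = np.zeros(len(x_d) * L, dtype=int)
x_e[::L] = x_d
assert np.array_equal(x_e[:9], [2, 0, 0, 0, 1, 0, 0, 0, 4])
```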

6.7 State-Space Representation of Discrete-Time LTI Systems


6.7.1 State-Space Representation of Single-Input Single-Output
Discrete-Time LTI Systems
Consider a single-input single-output discrete-time LTI system described by the following
Nth-order difference equation:

(6.61)

where y(n) is the system output and u(n) is the system input.
Define the following useful set of state variables:

(6.62)

Then from Eqs. (6.61) and (6.62), we get


(6.63a)

and

(6.63b)

Eqs. (6.63a) and (6.63b) can be written in matrix form as

(6.64a)

(6.64b)

Define an N × 1 vector, called the state vector, as

(6.65)

More compactly Eqs. (6.64a) and (6.64b) can be written as

(6.66a)
(6.66b)

where

Equations (6.66a) and (6.66b) are called the N-dimensional state-space representation, or
state equations, of the discrete-time system.
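The state equations can be simulated by direct iteration. The Python/NumPy sketch below uses an illustrative second-order system (coefficients chosen here, not from the text) with the state choice x1(n) = y(n − 2), x2(n) = y(n − 1), and cross-checks the state-space output against the underlying difference equation:

```python
import numpy as np

# Illustrative 2nd-order system: y(n) + 0.1 y(n-1) - 0.06 y(n-2) = u(n)
a1, a2 = 0.1, -0.06

# Companion form for the state choice x1(n) = y(n-2), x2(n) = y(n-1)
A = np.array([[0.0, 1.0], [-a2, -a1]])
B = np.array([0.0, 1.0])
C = np.array([-a2, -a1])
D = 1.0

u = np.r_[1.0, np.zeros(19)]        # impulse input
q = np.zeros(2)                     # zero initial state
y_ss = []
for un in u:                        # iterate the state equations
    y_ss.append(C @ q + D * un)     # y(n) = C x(n) + D u(n)
    q = A @ q + B * un              # x(n+1) = A x(n) + B u(n)

# Cross-check against a direct recursion of the difference equation
y = np.zeros(22)                    # y[0], y[1] hold y(-2), y(-1) = 0
for k in range(20):
    y[k + 2] = -a1 * y[k + 1] - a2 * y[k] + u[k]
assert np.allclose(y_ss, y[2:])
```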
Example 6.25 Obtain the state-space representation of a discrete-time system described
by the following difference equation:

Solution The order of the difference equation is three, so we choose three state
variables:
Let x1(n) = y(n − 3), x2(n) = y(n − 2), x3(n) = y(n − 1)
Then

The state-space representation in matrix form is given by


6.7.2 State-Space Representation of Multi-input Multi-output
Discrete-Time LTI Systems
The state-space representation of a discrete-time system with m inputs, l outputs, and N
state variables can be expressed as

(6.67a)

(6.67b)

where

6.8 Problems

1. Determine if the following discrete-time signals are periodic:


(i)

(ii)

(iii)

(iv)
(v) x(n) = 6(−1) n

(vi)

(vii)

(viii)

2. Determine if the following discrete-time signals are energy or power signals or


neither. Calculate the energy and power of the signals in each case:

(ix)

(x) x(n) = (−1) n

(xi)

(xii)

(xiii)

3. Determine if the following discrete-time signals are even, odd, or neither even nor
odd:

(i)

(ii)

(iii)

(iv)

4. Check the following for linearity , time-invariance, and causality :

(i) y(n) = 5nx 2(n). (ii) y(n) = x(n)sin2n. (iii) y(n) = e –n x(n + 3)

5. Given the input x(n) = u(n) and the output of a system,

(i) Determine the impulse response h(n)

(ii) Is the system stable ?


(iii) Is the system causal?

6. Check for stability and causality of a system for the following impulse responses:

(i) (ii)

7. Determine if the following signals are periodic, and if periodic, find the period:
(a) sin n (b) e^{jπn/3} (c)

8. Determine the convolution sum of the two sequences:

x1(n) = (3, 2, 1, 2) and x2(n) = (1, 2, 1, 2).

9. Determine the convolution sum of the two sequences x1(n) and x2(n), if
x1(n) = x2(n) = c^n u(n) for all n, where c is a constant.

10. Determine the impulse response (i.e., when x(n) = δ(n) of a discrete-time system
characterized by the following difference equation:

11. A discrete-time system is characterized by the following difference equation:

Determine the step response of the system, i.e., x(n) = u(n), given the initial conditions
y(-1) = 1 and y(-2) = -1.

12. A discrete-time system is characterized by the following difference equation:

Determine the response of the system for x(n) = nu(n) and initial conditions y (-1) = 1
and y(-2) = 0.

13. Determine the response of the system described by the following difference
equation:

14. Obtain the state-space representation of a discrete-time system described by the

following difference equation:

6.9 MATLAB Exercises

1. Using the function impz, write a MATLAB program to determine the impulse
response of a discrete-time system represented by
2. Write a MATLAB program to illustrate down-sampling by an integer factor of 4 of a
sum of two sinusoidal sequences, each of length 50, with normalized frequencies of
0.2 Hz and 0.35 Hz.

3. Write a MATLAB program to illustrate up-sampling by an integer factor of 4 of a sum


of two sinusoidal sequences, each of length 50, with normalized frequencies of 0.2 Hz
and 0.35 Hz.

Further Reading
1. Linden, D.A.A.: Discussion of sampling theorem. Proceedings of the IRE. 47, 1219–1226 (1959)
[Crossref]

2. Proakis, J.G., Manolakis, D.G.: Digital Signal Processing Principles, Algorithms and Applications, 3rd edn. Prentice-
Hall, India (2004)
3. Crochiere, R.E., Rabiner, L.R.: Multirate Digital Signal Processing. Prentice-Hall, Englewood Cliffs (1983)
4. Hsu, H.: Signals and Systems, Schaum's Outlines, 2nd edn. McGraw-Hill, New York (2011)
5. Mandal, M., Asif, A.: Continuous and Discrete Time Signals and Systems. Cambridge University Press,
Cambridge/New York (2007)
© Springer International Publishing AG, part of Springer Nature 2018
K. Deergha Rao, Signals and Systems, https://doi.org/10.1007/978-3-319-68675-2_7

7. Frequency Domain Analysis of Discrete-Time


Signals and Systems
K. Deergha Rao1
(1) Department of Electronics and Communication Engineering, Vasavi College of
Engineering (Affiliated to Osmania University), Hyderabad, Telangana, India


This chapter discusses the transform domain representation of discrete-time sequences by
discrete-time Fourier series (DTFS) and discrete-time Fourier transform (DTFT) in which
a discrete-time sequence is mapped into a continuous function of frequency. We first
obtain the discrete-time Fourier series (DTFS) expansion of a periodic sequence. The
periodic convolution and the properties of DTFS are discussed. The Fourier transform
domain representation of discrete-time sequences are described along with the conditions
for the existence of DTFT and its properties. Later, the frequency response of discrete-
time systems, frequency domain representation of sampling process, and reconstruction of
band-limited signals from its samples are discussed.

7.1 The Discrete-Time Fourier Series


If a sequence x(n) is periodic with period N, then x(n) = x(n + N) for all n. In analogy with
the Fourier series representation of a continuous periodic signal, we can look for a
representation of x(n) in terms of the harmonics corresponding to the fundamental
frequency (2π/N). Hence, we may write x(n) in the form

x(n) = Σ_k a_k e^{j2πkn/N}
(7.1a)

It can easily be verified from Eq. (7.1a) that x(n) = x(n + N). Also, we know that there
are only N distinct values for e^{j2πkn/N}, corresponding to k = 0, 1, …, N − 1, these being 1,
e^{j2πn/N}, …, e^{j2πn(N−1)/N}. Hence, we may rewrite (7.1a) as

x(n) = Σ_{k=0}^{N−1} a_k e^{j2πkn/N}
(7.1b)

It should be noted that the summation can be taken over any N consecutive values of k.
Eq. (7.1b) is called the discrete-time Fourier series (DTFS) of the periodic sequence x(n)
and a k as the Fourier coefficients. We will now obtain the expression for the Fourier
coefficients a_k. It can easily be shown that {e^{j2πkn/N}} is an orthogonal sequence satisfying
the relation

Σ_{n=0}^{N−1} e^{j2πkn/N} e^{−j2πln/N} = N for k = l, and 0 for k ≠ l
(7.2)

Now, multiplying both sides of (7.1b) by e^{−j2πln/N} and summing over n from 0 to
(N − 1), we get

Hence,

a_l = (1/N) Σ_{n=0}^{N−1} x(n) e^{−j2πln/N}
(7.3)

It is common to associate the factor (1/N) with x(n) rather than with a_k. This can be done
by denoting Na_k by X(k); in such a case, we have

x(n) = (1/N) Σ_{k=0}^{N−1} X(k) e^{j2πkn/N}
(7.4)

where the Fourier coefficients X(k) are given by

X(k) = Σ_{n=0}^{N−1} x(n) e^{−j2πkn/N}
(7.5)

It is easily seen that X(k + N ) = X(k), that is, the Fourier coefficient sequence X(k) is
also periodic of period N. Hence, the spectrum of a signal x(n) that is periodic with period
N is also a periodic sequence with the same period. It is also noted that since the Fourier
series of a discrete periodic signal is a finite sequence, the series always converges, and
the Fourier series gives an exact alternate representation of the discrete sequence x(n).
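The analysis and synthesis relations X(k) = Σ_n x(n)e^{−j2πkn/N} and x(n) = (1/N) Σ_k X(k)e^{j2πkn/N} can be implemented directly; the Python/NumPy sketch below verifies that the synthesis sum reconstructs x(n) exactly and that X(k) is periodic with period N:

```python
import numpy as np

N = 8
n = np.arange(N)
x = np.cos(2 * np.pi * n / N) + 0.5          # one period of a periodic sequence

# Analysis (7.5): X(k) = sum over n of x(n) exp(-j 2 pi k n / N)
X = np.array([np.sum(x * np.exp(-2j * np.pi * k * n / N)) for k in range(N)])

# Synthesis (7.4): x(n) = (1/N) sum over k of X(k) exp(j 2 pi k n / N)
k = np.arange(N)
x_rec = np.array([np.sum(X * np.exp(2j * np.pi * k * m / N)) / N
                  for m in range(N)])
assert np.allclose(x_rec, x)                 # exact representation

# X(k) is periodic with period N
X_shift = np.array([np.sum(x * np.exp(-2j * np.pi * (kk + N) * n / N))
                    for kk in range(N)])
assert np.allclose(X_shift, X)
```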

7.1.1 Periodic Convolution


In the case of two periodic sequences x1(n) and x2(n) having the same period N, linear
convolution as defined by Eq. (6.29) does not converge. Hence, we define a different form
of convolution for periodic signals by the relation

y(n) = Σ_{m=0}^{N−1} x1(m) x2(n − m)
(7.6)

The above convolution is called periodic convolution . It may be observed that


y(n) = y(n + N ), that is, the periodic convolution is itself periodic of period N. Some
important properties of the DTFS are given in Table 7.1. In this table, it is assumed that x 1
(n) and x2(n) are periodic sequences having the same period N. The proofs are omitted
here, since they are similar to the ones that will be given in Section 7.2 for the
corresponding properties of the DTFT.
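With the X(k) = Na_k scaling of Eq. (7.5), the DTFS coefficients coincide with the discrete Fourier transform, so the periodic-convolution property (the coefficients of y(n) are X1(k)X2(k)) can be checked with np.fft.fft; a Python sketch with random test sequences:

```python
import numpy as np

N = 8
rng = np.random.default_rng(0)
x1 = rng.standard_normal(N)
x2 = rng.standard_normal(N)

# Periodic convolution (7.6): y(n) = sum over m of x1(m) x2((n - m) mod N)
y = np.array([np.sum(x1 * x2[(n - np.arange(N)) % N]) for n in range(N)])

# Property: the DTFS coefficients of y(n) are X1(k) X2(k)
assert np.allclose(np.fft.fft(y), np.fft.fft(x1) * np.fft.fft(x2))
```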
Table 7.1 Some important properties of DTFS

Property Periodic sequence DTFS coefficients

Linearity ax1(n) + bx2(n) a and b are constants aX1(k) + bX2(k)

Time shifting x(n − m)

Frequency shifting X(k − l )

Periodic convolution X1(k)X2(k)

Multiplication x1(n)x2(n)


x*(n) X*(−k)

x*(−n) X*(k)

Symmetry properties
Example 7.1 Determine the Fourier series representation for the following discrete-time
signal :

Solution

X(0) = X(1) = X(2) = X(4) = X(5) = X(6) = X(8) = X(9) = X(10) = X(11) = X(12) = X(14) =
X(3) = , X(7) =

Example 7.2 A discrete-time signal x(n) is periodic with period 8, and x(n) = n for 0 ≤ n ≤ 7.
Determine its DTFS coefficients X(k).

Solution The sequence is periodic with period N = 8.

Using Eq. (7.5), the DTFS coefficients are computed as

X(0) = 28, X(4) = −4

X(1) = −4 + 9.6569j, X(5) = −4 − 1.6569j

X(2) = −4 + 4j, X(6) = −4 − 4j

X(3) = −4 + 1.6569j, X(7) = −4 − 9.6569j

X(k) = {28, −4 + 9.6569j, −4 + 4j, −4 + 1.6569j, −4, −4 − 1.6569j, −4 − 4j,
−4 − 9.6569j}
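Since Eq. (7.5) is exactly the N-point discrete Fourier transform, the coefficients above can be checked with np.fft.fft; a Python sketch:

```python
import numpy as np

# x(n) = n for 0 <= n <= 7, one period of the length-8 periodic sequence
X = np.fft.fft(np.arange(8))

assert np.isclose(X[0], 28)
assert np.isclose(X[4], -4)
assert np.allclose(X[1], -4 + 9.6569j, atol=1e-4)
assert np.allclose(X[2], -4 + 4j)
assert np.allclose(X[7], np.conj(X[1]))      # real x(n): X(N - k) = X*(k)
```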

7.2 Representation of Discrete-Time Signals and Systems in


Frequency Domain
7.2.1 Fourier Transform of Discrete-Time Signals
The discrete-time Fourier transform (DTFT) of a finite-energy sequence x(n) is defined as

X(e^{jω}) = Σ_{n=−∞}^{∞} x(n) e^{−jωn}
(7.7)

From X(e^{jω}), x(n) can be computed as

x(n) = (1/2π) ∫_{−π}^{π} X(e^{jω}) e^{jωn} dω
(7.8)

Eq. (7.8) is called the inverse Fourier transform.


Convergence of the DTFT
The existence of DTFT of x(n) depends on the convergence of the series in Eq. (7.7). Now,
we look at the condition for convergence.
Let X_K(e^{jω}) = Σ_{n=−K}^{K} x(n) e^{−jωn} denote the partial sum of the weighted complex
exponentials in Eq. (7.7). Then for uniform convergence of X(e^{jω}),

lim_{K→∞} X_K(e^{jω}) = X(e^{jω})
(7.9)

Hence, for uniform convergence of X(e^{jω}), x(n) must be absolutely summable, i.e.,

Σ_{n=−∞}^{∞} |x(n)| < ∞
(7.10)

Then

|X(e^{jω})| = |Σ_{n=−∞}^{∞} x(n) e^{−jωn}| ≤ Σ_{n=−∞}^{∞} |x(n)| < ∞
(7.11)

guaranteeing the existence of X(e^{jω}) for all values of ω. Note that Eq. (7.10) is only a
sufficient condition for the existence of the DTFT, not a necessary one.

7.2.2 Theorems on DTFT


We will now consider some important theorems concerning DTFT that can be used in
digital signal processing. All these properties can be proved using the definition of DTFT.
The following notation is adopted for convenience:

(7.12a)

(7.12b)

Linearity If x1(n) and x2(n) are two sequences with Fourier transforms X 1 (e jω ) and
X2(e jω ), then the Fourier transform of a linear combination of x1(n) and x2(n) is given by

(7.13)

where a1and a2 are arbitrary constants.


Time Reversal If x(n) is a sequence with Fourier transform X(e jω ), then the Fourier
transform of time reversed sequence x(−n) is given by

(7.14)

Time Shifting If x(n) is a sequence with Fourier transform X(e jω ), then the Fourier
transform of the delayed sequence x(n − k), where k an integer, is given by

(7.15)

Therefore, time shifting results in a phase shift in the frequency domain.


Frequency Shifting If x(n) is a sequence with Fourier transform X(e^{jω}), then the
Fourier transform of the sequence e^{jω₀n}x(n) is given by

e^{jω₀n}x(n) ↔ X(e^{j(ω−ω₀)})
(7.16)

Thus, multiplying a sequence x(n) by a complex exponential in the time domain


corresponds to a shift in the frequency domain.
Differentiation in Frequency If x(n) is a sequence with Fourier transform X(e jω ), then
the Fourier transform of the sequence nx(n) is given by
(7.17)

Convolution Theorem If x1(n) and x2(n) are two sequences with Fourier transforms
X1(e jω )and X2(e jω ), then the Fourier transform of the convolution of x1(n) and x2(n) is
given by

(7.18)

Hence, convolution of two sequences x1(n) and x2(n) in the time domain is equal to the
product of their frequency spectra. In the above equation, since X1(e jω ) and X2(e jω ) are
periodic in ω with period 2π, the convolution is a periodic convolution.
Windowing Theorem If x(n) and w(n) are two sequences with Fourier transforms X (e
jω ) and W(e jω ), then the Fourier transform of the product of x(n) and w(n) is given by

(7.19)

The above result is called the windowing theorem.


Correlation Theorem If x1(n) and x2(n) are two sequences with Fourier transforms
X1(e jω ) and X2(e jω ), then the Fourier transform of the correlation of x1(n) and
x2(n) defined by

(7.20a)

is given by

(7.20b)

which is called the cross energy density spectrum of the signals x1(n) and x2(n).
Parseval’s Theorem If x(n) is a sequence with Fourier transform X(e jω ), then the
energy E of x(n) is given by
(7.21)

where |X(e jω )|2 is called the energy density spectrum.


Proof The energy E of x(n) is defined as

(7.22)

using Eq. (7.8).


Interchanging the integration and summation signs, the above equation can be rewritten as

Thus,

(7.23)
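Parseval's theorem can be checked numerically: sampling X(e^{jω}) at N uniformly spaced frequencies amounts to an N-point DFT of the zero-padded sequence, and the mean of |X|² over those samples equals the time-domain energy exactly for a finite-length x(n). A Python/NumPy sketch (the test sequence is arbitrary):

```python
import numpy as np

n = np.arange(6)
x = 0.8 ** n * np.cos(0.3 * n)       # an arbitrary finite-energy sequence

# Time-domain energy: E = sum of |x(n)|^2
E_time = np.sum(np.abs(x) ** 2)

# Frequency-domain energy: (1/2pi) times the integral of |X(e^jw)|^2 over
# one period, evaluated as the mean of |X|^2 at 64 uniform frequencies
# (the 64-point DFT of the zero-padded sequence)
E_freq = np.mean(np.abs(np.fft.fft(x, 64)) ** 2)
assert np.isclose(E_time, E_freq)
```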

The above theorems concerning the DTFT are summarized in Table 7.2.


Table 7.2 Some properties of discrete-time Fourier transforms

Property Sequence DTFT

Linearity a1x1(n) + a2x2(n) a1X1(e jω ) + a2X2(e jω )

Time shifting x(n − k) e−jωkX(e jω )

Time reversal x(−n) X(e−jω)

Frequency shifting

Differentiation in the frequency domain nx(n)

Convolution theorem x1(n) * x2(n) X1(e jω ) X2(e jω )


Windowing theorem x1(n)x2(n) X1(e jω ) ∗ X2(e jω )

Correlation theorem X1(e jω )X2(e−jω)

Using the definitions of the DTFT pair given by (7.7) and (7.8), we may establish the
DTFT pairs for some useful functions. These are given in Table 7.3.
Table 7.3 Some useful DTFT pairs

x(n) DTFT

δ(n)                  1

1 (−∞ < n < ∞)        2π Σ_{k=−∞}^{∞} δ(ω − 2πk)

a^n u(n), |a| < 1     1/(1 − a e^{−jω})

7.2.3 Some Properties of the DTFT of a Complex Sequence


x(n)
From Eq. (7.7), the DTFT of a time reversed sequence x(−n) can be written as

(7.24a)

Similarly, the DTFT of the complex conjugate sequence x*(n) can be expressed as

(7.24b)

From the above two equations, it can be easily shown that


(7.25)

The sequence x(n) can be represented as the sum of a conjugate-symmetric sequence
xe(n) and a conjugate-antisymmetric sequence xo(n) as

(7.26)

where

(7.27)

and

(7.28)

The DTFT X(e jω ) can be split into

(7.29)

where X e (e jω ) and X o (e jω ) are the DTFTs of x e (n) and x o (n), respectively. Using
Eqs. (7.7), (7.25), and (7.27), X e (e jω ) can be expressed as

(7.30)

In a similar way, using Eqs. (7.7), (7.25), and (7.28), Xo(e jω ) can be written as

(7.31)

A complex sequence x(n) can be decomposed into a sum of its real and imaginary parts
as
(7.32)

where

(7.33)

and

(7.34)

The DTFT of x R (n) can be written as

(7.35)

Similarly, the DTFT of jx I (n) can be expressed as

(7.36)

The above properties of the DTFT of a complex sequence are summarized in Table
7.4.
Table 7.4 Some properties of DTFT of a complex sequence

Sequence DTFT

x∗(n) X∗(e−jω)

x∗(−n) X∗(e jω )

x R (n) = Re [x(n)]

jx I (n) = j Im [x(n)]

Re[X(e jω )]
j  Im [X(e jω )]

7.2.4 Some Properties of the DTFT of a Real Sequence x(n)


Since e^{−jωn} = cos ωn − j sin ωn, the DTFT X(e^{jω}) given by Eq. (7.7) can be expressed as

X(e^{jω}) = Σ_{n=−∞}^{∞} x(n) cos ωn − j Σ_{n=−∞}^{∞} x(n) sin ωn
(7.37)

The Fourier transform X(e jω ) is a complex function of ω and can be written as the
sum of the real and imaginary parts as

(7.38)

From Eq. (7.37), the real and imaginary parts of X(e^{jω}) are given by

X_R(e^{jω}) = Σ_{n=−∞}^{∞} x(n) cos ωn
(7.39)

and

X_I(e^{jω}) = −Σ_{n=−∞}^{∞} x(n) sin ωn
(7.40)

Since cos(−ωn) = cosωn and sin(−ωn) = −sinωn, we can obtain the following relations
from Eqs. (7.39) and (7.40):

(7.41a)

(7.41b)

indicating that the real part of DTFT is an even function of ω, while the imaginary part
is an odd function of ω. Thus,

(7.42)
In polar form, X(e jω ) can be written as

(7.43)

where

(7.44)

and

(7.45)

Using the above relations, it can easily be seen that |X(e^{jω})| is an even function of ω,
whereas the phase θ(ω) is an odd function of ω.
Now, the DTFT of x e (n), the even part of the real sequence x(n) is given by

(7.46)

Thus, the DTFT of even part of a real sequence is the real part of X (e jω ).
Similarly, the DTFT of x o (n), the odd part of the real sequence x(n), is given by

(7.47)

Hence, the DTFT of the odd part of a real sequence is jX I (ejω).


The above properties of the DTFT of a real sequence are summarized in Table 7.5.
Table 7.5 Some properties of DTFT of a real sequence
X R (e jω ) = X R (e−jω)

X I (e jω ) = − X I (e−jω)

X(e jω ) = X∗(e−jω)

|X(e jω )| = |X(e−jω)|

∠X(e jω ) = − ∠ X(e−jω)

Example 7.3 A causal LTI system is represented by the following difference equation:

(i) Find the impulse response h(n) of the system, as a function of the parameter a.

(ii) For what range of values of a is the system stable?


Solution

(i) Given

Taking the Fourier transform of both sides of the above equation, we get

From the above relation, we arrive at


From the above equation and time shifting property , the impulse response is given by

(ii) Now,

Thus, the system is stable for |a| < 1.


Example 7.4 Find the impulse response of a system described by the following
difference equation:

Solution Taking the Fourier transform of both sides of the given difference equation, we get

From the above relation, we arrive at

The impulse response h(n) is given by


Example 7.5 Find the DTFT of

Solution Let x1(n) = a n u(n)


The Fourier transform of x 1 (n) is given by

For m = 2,

Using the differentiation property of DTFT, the Fourier transform of na n u(n) is given
by

Using linearity property of the DTFT , the Fourier transform of x(n) is denoted by

For m = 3,

Using the differentiation and linearity properties of DTFT, the Fourier transform of
x(n) is given by
In general, for m = k, the Fourier transform of x(n) is given by

Example 7.6 Let G1(e jω ) denote the DTFT of the sequence g1(n) shown in Figure
7.1(a). Express the DTFT of the sequence g2(n) in Figure 7.1b in terms of G1(e jω ). Do not
evaluate G1(e jω ).

Figure 7.1 (a) Sequence g1(n). (b) Sequence g2(n)

Solution From Figure 7.1(b), g2(n) can be expressed in terms of g1(n) as

Applying DTFT on both sides, we obtain

Example 7.7 Evaluate the inverse DTFT of each of the following DTFTs:

(a)  (b)

Solution
(a)
From Table 7.3,

Hence,

(b)
From Example 7.5,

For m = 2,

Then

Example 7.8 A length-9 sequence x(n) is shown in Figure 7.2


Figure 7.2 A length-9 sequence x(n)

If the DTFT of x(n) is X(e jω ), calculate the following functions without computing
X(e jω ).

(a) X(ej0) (b) X(ejπ) (c)  (d)  (e)

Solution From the given data,


x(−3) = 3, x(−2) = 0, x(−1) = 1, x(0) = −2, x(1) = −3, x(2) = 4, x(3) = 1, x(4) = 0,
x(5) = −1

(a) X(e j0 )
From the definition of Fourier transform ,

(b) X(e jπ )
From the definition of Fourier transform,
(c)
From the definition of inverse Fourier transform,

Hence,

(d)
From Parseval’s theorem,

Hence,

(e)
From the differentiation property and Parseval’s theorem of the DTFT,
Example 7.9

(a) The Fourier transforms of the impulse responses , h1(n) and h2 (n), of two LTI
systems are as shown in Figure 7.3. Find the Fourier transform of the impulse
response of the overall system , when they are connected in cascade.

Figure 7.3 (a) Fourier transform of h1(n) (b) Fourier transform of h2 (n)

(b) The Fourier transforms of the impulse responses h1(n) and h2(n) of two LTI systems
are as shown in Figure 7.4. Find the Fourier transform of the overall system, when
they are connected in parallel.

Figure 7.4 (a) Fourier transform of h1(n) (b) Fourier transform of h2(n)

Solution

(a) The impulse response h(n) of the overall system is given by

Then, by the convolution property of the Fourier transform, the Fourier transform of
the impulse response of the cascade system is given by
The Fourier transform of impulse response of the cascade system is shown in Figure
7.5(a).

(b) The impulse response h(n) of the overall system is given by

Figure 7.5 (a) Fourier transform of the impulse response of the cascade system (b) Fourier transform of the impulse
response of the parallel system

Hence, the Fourier transform of the impulse response of the parallel system is given by

The Fourier transform of the impulse response of the parallel system is shown in
Figure 7.5(b).
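The two interconnection rules used in this example can be checked numerically on arbitrary short impulse responses (a Python sketch, not the book's figures): a cascade convolves the impulse responses, so the transforms multiply, while a parallel connection adds them, so the transforms add.

```python
import numpy as np

# Cascade: h = h1 * h2 (convolution) => H = H1 H2.
# Parallel: h = h1 + h2 (sample-wise) => H = H1 + H2.
h1 = np.array([1.0, 0.5, 0.25])
h2 = np.array([0.3, -0.2])
N = 16                                    # FFT length >= length of h1 * h2
H1, H2 = np.fft.fft(h1, N), np.fft.fft(h2, N)
Hc = np.fft.fft(np.convolve(h1, h2), N)   # cascade
hp = np.zeros(N)
hp[:h1.size] += h1
hp[:h2.size] += h2
Hp = np.fft.fft(hp)                       # parallel
print(np.allclose(Hc, H1 * H2), np.allclose(Hp, H1 + H2))   # True True
```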
7.3 Frequency Response of Discrete-Time Systems
For an LTI discrete-time system with impulse response h(n) and input sequence x(n), the
output y(n) is the convolution sum of x(n) and h(n) given by

(7.48)

To demonstrate the eigenfunction property of complex exponential for discrete-time


systems, consider the input x(n) of the form

(7.49)

Then from Eq. (7.48), the output is given by

(7.50)

The above equation can be rewritten as

(7.51a)

where

(7.51b)

H(e jω ) is called the frequency response of the LTI system whose impulse response is
h(n), e jωn is an eigenfunction of the system, and the associated eigenvalue is H(e jω ). In
general H(e jω ) is complex and is expressed in terms of real and imaginary parts as

(7.52)

where H R (e jω ) and H I (e jω ) are the real and imaginary parts of H(e jω ), respectively.
Furthermore, due to convolution, the Fourier transforms of the system input and output
are related by
(7.53)

where X(e jω ) and Y(e jω ) are the Fourier transforms of the system input and output,
respectively. Thus,

(7.54)

The frequency response function H(e jω ) is also known as the transfer function of the
system. The frequency response function provides valuable information on the behavior of
LTI systems in the frequency domain. However, H(e jω ) is in general a complex function of
the frequency variable ω and is therefore difficult to interpret directly. In polar form, the
frequency response can be written as

(7.55a)

where |H(e jω )|, the amplitude response term, and θ(ω), the phase-response term, are
given by

(7.55b)

(7.55c)

Phase and Group Delays


If the input is a sinusoidal signal given by

(7.56a)

then from Eq. (7.55a), the output is

(7.56b)
The above equation can be rewritten as

(7.57a)

It can be seen that the above equation expresses the phase response as a time
delay, which is called the phase delay and is defined by

(7.57b)

An input signal consisting of a group of sinusoidal components with frequencies


within a narrow interval about ω experiences different phase delays when processed by an
LTI discrete-time system . As such, the signal delay is represented by another parameter
called group delay defined as

(7.57c)
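As a numerical illustration of the group delay (a hypothetical filter, not from the text), a symmetric FIR impulse response has linear phase θ(ω) = −(N − 1)ω/2, so the group delay is the constant (N − 1)/2 samples at every frequency:

```python
import numpy as np

# Symmetric FIR of length N = 5 (linear phase): group delay = (N-1)/2 = 2.
h = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
w = np.linspace(0.1, 1.8, 400)           # frequency grid inside (0, pi)
n = np.arange(h.size)
H = np.array([np.sum(h * np.exp(-1j * wk * n)) for wk in w])
phase = np.unwrap(np.angle(H))
grd = -np.gradient(phase, w)             # numerical -d(theta)/dw
print(round(float(grd.mean()), 3))       # 2.0
```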

Example 7.10 Determine the magnitude and phase response of a system whose impulse
response is given by

Solution For , the frequency response is given by

The magnitude response is given by

The phase response is


The magnitude and phase values are tabulated in Table 7.6 for various values of ω and
plotted in Figure 7.6(a) and (b), respectively.
Table 7.6 Magnitude and phase

ω            0    π/4       π/2       3π/4      π     5π/4     3π/2     7π/4     2π

|H(e jω )|   2    1.3572    0.8944    0.7148    0.67  0.715    0.894    1.3572   2

θ(ω) (deg)   0    −28.675   −26.565   −14.640   0     14.640   26.565   28.675   0

Figure 7.6 (a) Magnitude and (b) phase responses of h(n) of Example 7.10
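The tabulated values can be reproduced directly. Assuming, as they suggest, h(n) = (1/2)^n u(n) so that H(e^{jω}) = 1/(1 − 0.5e^{−jω}), a short NumPy check (not part of the text) gives:

```python
import numpy as np

# H(e^jw) = 1/(1 - 0.5 e^{-jw}) evaluated at the tabulated frequencies.
ws = np.array([0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4, np.pi])
H = 1.0 / (1.0 - 0.5 * np.exp(-1j * ws))
print(np.round(np.abs(H), 4))                # 2.0  1.3572  0.8944  0.7148  0.6667
print(np.round(np.degrees(np.angle(H)), 3))  # 0  -28.675  -26.565  -14.64  0
```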

Example 7.11 Compute the magnitude and phase responses of the impulse responses
given in Figure 7.7, and comment on the results.

Figure 7.7 (a) Impulse response of h 1 (n) (b) impulse response of h 2 (n)

Solution Since h 1 (n) is an even function of time, it has a real DTFT indicating that the
phase is zero, that is, the phase is a horizontal line; h 2 (n) is the right-shifted version of h 1
(n). Hence, from time shifting property of DTFT, the transform of h 2 (n) is obtained by
multiplying the transform of h 1 (n) by e–j2ω. This changes the slope of the phase linearly
and can be verified as follows:
The frequency response of h1(n) is
The magnitude response of H1(e jω ) is

The phase response of H1(e jω ) is zero.


The frequency response of h 2 (n) is

The magnitude response of H2(e jω ) is

The phase response of H2(e jω ) is given by

The magnitude and phase responses of h 1 (n) and h 2 (n) are shown in Figure 7.8(a),
(b), (c), and (d). From the magnitude and phase responses of h 1 (n) and h 2 (n), it is
observed that h 1 (n) has zero phase and h 2 (n) has a linear phase response, whereas both h
1 (n) and h 2 (n) have the same magnitude responses.
Figure 7.8 (a) Magnitude response of h1(n). (b) Phase response of h 1 (n). (c) Magnitude response of h 2 (n). (d) Phase
response of h 2 (n)

Example 7.12 The trapezoidal integration formula is represented by a recursive


difference equation as y(n) – y(n – 1) = 0.5x(n) + 0.5x(n – 1). Determine H(e jω ) of the
trapezoidal integration formula.
Solution Given

Taking Fourier transform on both sides of the above equation, we get


The magnitude response is given by

The phase response is given as follows:

If 0 < ω < π, then both cos ω/2 and sin ω/2 are positive, and hence the phase is .

If π < ω < 2π, then cos ω/2 is negative, but sin ω/2 is positive; hence the phase is .

7.3.1 Frequency Response Computation Using MATLAB


The M-file function freqz(h, w) in MATLAB can be used to determine the values of the
frequency response of an impulse response vector h at a set of given frequency points ω.
Similarly, the M-file function freqz(b, a, w) can also be used to find the frequency
response of a system described by the recursive difference equation with the coefficients
in vectors b and a. From frequency response values, the real and imaginary parts can be
computed using MATLAB functions real and imag, respectively. The magnitude and
phase of the frequency response can be determined using the functions abs and angle as
illustrated in the following examples:
Example 7.13 Determine the magnitude and phase response of a system described by the
difference equation, y(n) = 0.5x(n) + 0.5x(n – 2).
Solution If x(n) = δ(n), then the impulse response h(n) is given by
Hence, h(n) sequence is [0.5 0 0.5]. When this sequence is used in Program 7.1 given
below, the resulting magnitude and phase responses are as shown in Figure 7.9(a) and (b),
respectively.

Figure 7.9 (a) Magnitude response of h(n) sequence. (b) Phase response of h(n) sequence

Program 7.1
clear;clc;
w=0:0.05:pi;
h=exp(j*w); %set h=exp(jw)
num=0.5+0*h.^-1+0.5*h.^-2;
den=1;
%Compute the frequency responses
H=num/den;
%Compute and plot the magnitude response
mag=abs(H);
figure(1),plot(w/pi,mag);
ylabel('Magnitude');xlabel('\omega/\pi');
%Compute and plot the phase responses
ph=angle(H)*180/pi;
figure(2),plot(w/pi,ph);
ylabel('Phase, degrees');
xlabel('\omega/\pi')
Example 7.14 Determine the magnitude and phase responses of a system described by
the following difference equation:

Comment on the frequency response of the system.


Solution The following MATLAB Program 7.2 is used, and the resulting magnitude
and phase responses are shown in Figure 7.10(a) and (b), respectively.
Figure 7.10 (a) Magnitude response (b) phase response

Program 7.2
clear;close all;
num=[0.0534 -0.0009 -0.0009 0.0534];% numerator coefficients
den=[1 -2.1291 1.7834 -0.5435];% denominator coefficients
w=0:pi/255:pi;
%Compute the frequency responses
H=freqz(num,den,w);
%Compute and plot the magnitude response
mag=abs(H);
figure(1),plot(w/pi,mag);
ylabel('Magnitude');xlabel('\omega/\pi');
%Compute and plot the phase responses
ph=angle(H)*180/pi;
figure(2),plot(w/pi,ph);
ylabel('Phase, degrees');xlabel('\omega/\pi');
The frequency response shown in Figure 7.10 characterizes a low-pass filter with
nonlinear phase.
Example 7.15 Determine the magnitude and phase responses of a system described by
the following difference equation:

Comment on the frequency response of the system.


Solution Program 7.2 with variables num = [1 -4 6 -4 1] and den = [1 -3.0538 3.8281
-2.2921 0.5507] is used, and the resultant magnitude and phase responses are shown in
Figure 7.11(a) and (b), respectively. It is observed from this figure that the frequency
response characterizes a narrowband band-pass filter .
Figure 7.11 (a) Magnitude response (b) phase response

Example 7.16 An LTI system is described by the following difference equation:

(a) Find the frequency response H(e jω ) and group delay grd [H(e jω )] of the system.
(b) Determine the difference equation of a new system such that the frequency response
H1(e jω ) of the new system is related to H(e jω ) as H1(e jω ) = H(ej(ω + π)).

Solution (a)

Hence,

Therefore,

(b) By frequency shifting property, e−jπnh(n) ↔ H(ej(ω+π)). Therefore,

Hence, the difference equation of the new system is

7.4 Representation of Sampling in Frequency Domain


As mentioned in Section 6.​1, mathematically, the sampling process involves multiplying a
continuous-time signal x a (t) by a periodic impulse train p(t)
(7.58)

As a consequence, the multiplication process gives an impulse train x p (t), which can
be expressed as

(7.59)

Since x a (t) δ(t – nT) = x a (nT) δ(t – nT), the above reduces to

(7.60)

If we now take the Fourier transform of (7.59), and use the multiplication property of
the Fourier transform, we get

(7.61)

where * denotes the convolution in the continuous-time domain and X p (jΩ), X a (jΩ),
and P(jΩ) are the Fourier transforms of x p (t), x a (t), and p(t), respectively. Since p(t) is
periodic with a period T, it can be expressed as a Fourier series

Since the Fourier transform of e jΩ T t is given by F(jΩ) = 2πδ(Ω – Ω T ), we see


that the Fourier transform of p(t) is given by

(7.62)

where

Substitution of (7.62) in (7.61) yields


(7.63)

Since the convolution of X a (jΩ) with a shifted impulse δ(Ω – kΩ T ) is the shifted
function X a (j(Ω – kΩ T )), the above reduces to

(7.64)

Eq. (7.64) shows that the spectrum of x p (t) consists of an infinite number of shifted
copies of the spectrum of x a (t), and the shifts in frequency are multiples of Ω T ; that is, X
p (jΩ) is a periodic function with a period of Ω T = 2π/T.

Since the continuous Fourier transform of δ(t – nT) is given by

(7.65)

we have from Eq.(7.60) that

(7.66)

Since

and the fact that the DTFT of the sequence x(n) is given by

(7.67)

we obtain

(7.68a)

or equivalently
(7.68b)

Hence, we have from (7.68a) and (7.64) that

(7.69)

On the other hand, the above equation can also be expressed as

(7.70)

From Eq.(7.69) or (7.70), it can be observed that X(e jω ) is obtained by frequency


scaling X p (jΩ) using Ω = ω/T.
As mentioned earlier, the continuous-time Fourier transform X p (jΩ) is periodic with
respect to Ω having a period of Ω T = (2π/T). In view of the frequency scaling, the DTFT
X(e jω ) is also periodic with respect to ω with a period of 2π.

7.4.1 Sampling of Low-Pass Signals


Sampling Theorem
If the highest component of frequency in analog signal x a (t) is Ω m , then x a (t) is
uniquely determined by its samples x a (nT), provided that

(7.71)

where Ω T is called the sampling frequency in radians per second. Eq. (7.71) is often
referred to as the Nyquist condition.
The spectra of the analog signal x a (t) and the impulse train p(t) with a sampling period
T = 2π/Ω T are shown in Figure 7.12(a) and (b), respectively.
Figure 7.12 (a) Spectrum of an analog signal (b) spectrum of the pulse train

Undersampling
If Ω T < 2Ω m , then the signal is undersampled, and the corresponding spectrum X p (jΩ)
is as shown in Figure 7.13. In this figure, the image frequencies centered at Ω T will alias
into the baseband frequencies, and the information of the desired signal is
indistinguishable from its image in the fold-over region.

Figure 7.13 Spectrum of an undersampled signal, showing aliasing (fold-over region). Signals in the fold-over region
are not recoverable
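A concrete instance of aliasing (hypothetical numbers, not from the text): a 7 kHz tone sampled at 10 kHz violates the Nyquist condition and produces exactly the same samples as a 3 kHz tone, since 7 kHz folds over to 10 − 7 = 3 kHz.

```python
import numpy as np

# Samples of a 7 kHz cosine at fs = 10 kHz are identical to those of a
# 3 kHz cosine: the 7 kHz component aliases into the baseband.
fs = 10_000.0
n = np.arange(50)
x_high = np.cos(2 * np.pi * 7000 * n / fs)
x_alias = np.cos(2 * np.pi * 3000 * n / fs)
print(np.allclose(x_high, x_alias))   # True
```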

Oversampling
If Ω T > 2Ω m , then the signal is oversampled, and its spectrum is shown in Figure 7.14.
Its spectrum is the same as that of the original analog signal, but repeats itself at every
multiple of Ω T . The higher-order components centered at multiples of Ω T are called
image frequencies.

Figure 7.14 Spectrum of an oversampled signal

7.5 Reconstruction of a Band-Limited Signal from Its Samples


According to the sampling theorem, samples of a continuous-time band-limited signal
(i.e., its Fourier transform X a (jΩ) = 0 for |Ω| > |Ω m |) taken frequently enough are
sufficient to represent the signal exactly. The original continuous-time signal x a (t) can be
fully recovered by passing the modulated impulse train x p (t) through an ideal low-pass
filter, H LP (jΩ), whose cutoff frequency satisfies Ω m ≤ Ω c ≤ Ω T /2. Consider a low-pass
filter with a frequency response:
(7.72)

Applying the inverse continuous-time Fourier transform to H LP (jΩ), we obtain the


impulse response h LP (t) of the ideal low-pass filter given by

(7.73)

For a given sequence of samples x(n), we can form an impulse train x p (t) in which
successive impulses are assigned an area equal to the successive sequence values, i.e.,

(7.74)

The n th sample is associated with the impulse at t = nT, where T is the sampling period
associated with the sequence x(n). Therefore, the output x a (t) of the ideal low-pass filter
is given by the convolution of x p (t) with the impulse response h LP (t) of the analog low-
pass filter:

(7.75)

Substituting h LP (t) from Eq.(7.73) in Eq. (7.75) and assuming for simplicity that Ω c
= Ω T /2 = π/T, we get

(7.76)

The above expression indicates that the reconstructed continuous-time signal x a (t) is
obtained by shifting in time the impulse response h LP (t) of the low-pass filter by an
amount nT and scaling it in amplitude by the factor x(n) for all integer values of n in the
range –∞ < n < ∞ and then summing up all the shifted versions.
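For Ω c = Ω T /2, the reconstruction sum amounts to sinc interpolation, x a (t) = Σ n x(n) sinc((t − nT)/T) with sinc(u) = sin(πu)/(πu). The following Python sketch (illustrative signal, not from the text) reconstructs a 100 Hz cosine sampled at 1 kHz at instants between the samples:

```python
import numpy as np

# Sinc interpolation of a band-limited signal from its samples.
T = 1e-3                                     # sampling period (1 kHz)
n = np.arange(-200, 201)
x = np.cos(2 * np.pi * 100 * n * T)          # samples of a 100 Hz cosine
t = np.array([0.25e-3, 1.4e-3, 3.33e-3])     # off-grid instants
xr = np.array([np.sum(x * np.sinc((tk - n * T) / T)) for tk in t])
err = np.max(np.abs(xr - np.cos(2 * np.pi * 100 * t)))
print(err)                                   # small; due only to truncating the sum
```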
Example 7.17 Consider the system shown in Figure 7.15(a), where H(e jω ) is an ideal
LTI low-pass filter with a cutoff frequency of π/8 rad, and the spectrum of x a (t) is shown in Figure
7.15(b).
Figure 7.15 (a) Discrete time system (b) spectrum of input x a (t)

(i) What is the maximum value of T to avoid aliasing in the ADC?

(ii) If 1/T = 10 kHz, what will be the spectrum of y r (t)?


Solution

(i) From Figure 7.15(b), Ω m = 10 k π.


The condition to avoid aliasing in the ADC is Ω T ≥ 2Ω m , that is, 2π/T ≥ 20 kπ.

Hence, the maximum value of T is T = 2π/(20 kπ) = 0.0001 sec, and for this T, Ω T = 20 kπ (Figure 7.16)

(ii)
Figure 7.16

7.6 Problems

1. Obtain the DTFS representation of the periodic sequence shown in Figure P7.1
Figure P7.1 Periodic sequence with period N = 5

2. Find the Fourier coefficients in DTFS representation of the sequence

3. Find the DTFT for the following sequences:


(a) x1(n) = u(n) – u(n – 5) (b) x2(n) = α n (u(n) – u(n – 8)), |α| < 1

(c)  (d) x4(n) = |a|nsin ωn, |α| < 1

4. Let G 1 (e jω ) denote the DTFT of the sequence g 1 (n) shown in Figure P7.2(a).
Express the DTFTs of the remaining sequences in Figure P7.2 in terms of G 1 (e jω ).
Do not evaluate G1(e jω ).

Figure P7.2 Sequences g 1 (n), g 2 (n), and g 3 (n)


5. Determine the inverse DTFT of each of the following DTFTs:
(a) H1(e jω ) = 1 + 4 cos ω + 3 cos 2ω

(b) H2(e jω ) = (3 + 2 cos ω + 4 cos (2ω)) cos (ω/2)e−jω/2

(c) H3(e jω ) = e−jω/4


(d) H4(e jω ) = e−jω[1 + 4 cos ω]

6. A continuous-time signal x a (t) has its spectrum X a (jΩ) as shown in Figure P7.3(a).
The signal x a (t) is input to the system shown in Figure P7.3(b). H(e jω ) in Figure
P7.3(b) is an ideal LTI low-pass filter with a cutoff frequency of (π/2). Sketch the
spectrums of x(n), y(n), and y r (t).

Figure P7.3 (a) Spectrum of signal. (b) Signal reconstruction

Further Reading
1. Morrison, N.: Introduction to Fourier Analysis. Wiley, New York (1994)
2. Mitra, S.K.: Digital Signal Processing. McGraw-Hill, New York (2006)
3. Oppenheim, A.V., Schafer, W.: Discrete-Time Signal Processing, 2nd edn. Prentice-Hall, Upper Saddle River (1999)
© Springer International Publishing AG, part of Springer Nature 2018
K. Deergha Rao, Signals and Systems, https://doi.org/10.1007/978-3-319-68675-2_8

8. The z-Transform and Analysis of Discrete


Time LTI Systems
K. Deergha Rao1
(1) Department of Electronics and Communication Engineering, Vasavi College of
Engineering (Affiliated to Osmania University), Hyderabad, Telangana, India


The DTFT may not exist for all sequences due to the convergence condition, whereas the
z-transform exists for many sequences for which the DTFT does not exist. Also, the z-
transform allows simple algebraic manipulations. As such, the z-transform has become a
powerful tool in the analysis and design of digital systems. This chapter introduces the z-
transform, its properties, the inverse z-transform, and methods for finding it. Also, in this
chapter, the importance of the z-transform in the analysis of LTI systems is established.
Further, one-sided z-transform and the solution of state-space equations of discrete-time
LTI systems are presented. Finally, transformations between continuous-time systems and
discrete-time systems are discussed.

8.1 Definition of the z-Transform


The z-transform of an arbitrary discrete-time signal x(n) is defined as

(8.1)

where z is a complex variable. For the existence of the z-transform, Eq. (8.1) should
converge. It is known from the theory of complex variables that if ∑|x(n)z −n | is
convergent, then Eq. (8.1) is convergent. Eq. (8.1) can be rewritten as

(8.2)

By ratio test, the first series is absolutely convergent if


or

(8.3a)

Similarly, the second series in Eq. (8.2) is absolutely convergent if

or

(8.3b)

Thus, in general, Eq. (8.1) is convergent in some annulus

(8.4)

The set of values of z satisfying the above condition is called the region of
convergence (ROC). It is noted that for some sequences r 1 = 0 or r 2 = ∞. In such cases,
the ROC may not include z = 0 or z = ∞, respectively. Also, it is seen that no z-transform
exists if r 1 > r 2.
The complex variable z in polar form may be written as

(8.5)

where r and ω are the magnitude and the angle of z, respectively. Then, Eq. (8.1) can
be rewritten as

(8.6)
When r = 1, that is, on the contour |z| = 1 (the unit circle in the z-plane), Eq. (8.6)
reduces to the DTFT of x(n).
Rational z-Transform
In LTI discrete-time systems, we often encounter a z-transform that is a ratio of
two polynomials in z:

(8.7)

The zeros of the numerator polynomial N(z) are called the zeros of X(z) and those of
the denominator polynomial D(z) as the poles of X(z). The numbers of finite zeros and
poles in Eq. (8.7) are M and N, respectively. For example, the function X(z) = z/((z − 1)(z − 2)) has a
zero at z = 0 and two poles at z = 1 and z = 2.
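For a rational X(z), the zeros and poles are simply the roots of N(z) and D(z). Taking X(z) = z/((z − 1)(z − 2)), which matches the zero at z = 0 and poles at z = 1 and z = 2 cited above, a NumPy check (not part of the text):

```python
import numpy as np

# N(z) = z and D(z) = (z - 1)(z - 2) = z^2 - 3z + 2 as coefficient lists.
num = [1.0, 0.0]           # zero of X(z) at z = 0
den = [1.0, -3.0, 2.0]     # poles of X(z) at z = 1 and z = 2
zeros = np.roots(num)
poles = np.sort(np.roots(den))
print(zeros, poles)        # [0.] [1. 2.]
```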

8.2 Properties of the Region of Convergence for the z-


Transform
The properties of the ROC are related to the characteristics of the sequence x(n). In this
section, some of the basic properties of ROC are considered.
Property 1: ROC should not contain poles.
In the ROC, X(z) should be finite for all z. If there is a pole p in the ROC, then X(z) is
not finite at this point, and the z-transform does not converge at z = p. Hence, ROC cannot
contain any poles.
Property 2: The ROC for a finite duration causal sequence is the entire z -plane
except for z = 0.
A causal finite duration sequence of length N is such that x(n) = 0 for n < 0 and for
n > N − 1. Hence X(z) is of the form

(8.8)

It is clear from the above expression that X(z) is convergent for all values of z except
for z = 0, assuming that x(n) is finite. Hence, the ROC is the entire z-plane except for z = 0
and is shown as shaded region in Figure 8.1.
Figure 8.1 ROC of a finite duration causal sequence

Property 3: The ROC for a noncausal finite duration sequence is the entire z -
plane except for z = ∞.
A noncausal finite duration sequence of length N is such that x(n) = 0 for n ≥ 0 and for
n ≤ −N. Hence, X(z) is of the form

(8.9)

It is clear from the above expression that X(z) is convergent for all values of z except for
z = ∞, assuming that x(n) is finite. Hence, the ROC is the entire z-plane except for z = ∞
and is shown as shaded region in Figure 8.2.

Figure 8.2 ROC of a finite duration noncausal sequence

Property 4: The ROC for a finite duration two-sided sequence is the entire z -
plane except for z = 0 and z = ∞.
A finite duration two-sided sequence of length (N 2 + N 1 + 1) is such that x(n) = 0 for n < −N 1 and for
n > N 2, where N 1 and N 2 are positive. Hence, X(z) is of the form

(8.10)

It is seen that the above series is convergent for all values of z except for z = 0 and
z = ∞.
Property 5: The ROC for an infinite duration right-sided sequence is the exterior
of a circle which may or may not include z = ∞.
For such a sequence, x(n) = 0 for n < N. Hence, X(z) is of the form

(8.11)

If N ≥ 0, then the right-sided sequence corresponds to a causal sequence and the above
series converges if Eq (8.3a) is satisfied, that is,

(8.12)

Hence, in this case the ROC is the region exterior to the circle |z| = r 1 or the region
|z| > r 1 including the point at z = ∞. However, if N is a negative integer, say, N = −N 1,
then the series (8.11) will contain a finite number of terms involving positive powers of z.
In this case, the series is not convergent for z = ∞, and hence the ROC is the exterior of the
circle |z| = r 1 but will not include the point at z = ∞.
As an example of an infinite duration causal sequence, consider

(8.13)

Eq. (8.13) holds only if |r 1z −1| < 1. Hence, the ROC is |z| > r 1. The ROC is indicated
by the shaded region shown in Fig. 8.3 and includes the region |z| > r 1. It can be seen that
X(z) has a zero at z = 0 and pole at z = r 1. The zero is denoted by O and the pole by X.
Figure 8.3 ROC of an infinite duration causal sequence

Property 6: The ROC for an infinite duration left-sided sequence is the interior of
a circle which may or may not include z = 0.
For such a sequence, x(n) = 0 for n > N. Hence, X(z) is of the form

(8.14)

If N < 0, then the left-sided sequence corresponds to a noncausal sequence and the
above series converges if Eq. (8.3b) is satisfied, that is,

(8.15)

Hence, in this case, the ROC is the region interior to the circle |z| = r 2 or the region
|z| < r 2 including the point at z = 0.
However, if N is a positive integer, then the series (8.14) will contain a finite number
of terms involving negative powers of z. In this case, the series is not convergent for z = 0,
and hence the ROC is the interior of the circle |z| = r 2 but will not include the point at
z = 0.
As an example of an infinite duration noncausal sequence, consider

(8.16)

Then,

(8.17)

Hence, the ROC is |z| < r 2, that is, the interior of the circle |z| = r 2. The ROC and the
pole and zero of X(z) are shown in Fig. 8.4.
Figure 8.4 ROC of an infinite duration noncausal sequence

Property 7: The ROC of an infinite duration two-sided sequence is a ring in the z -


plane.
In this case, the z-transform X(z) is of the form

(8.18)

and converges in the region r 1 < |z| < r 2, where r 1 and r 2 are given by (8.3a) and
(8.3b), respectively. As mentioned before, the z-transform does not exist if r 1 > r 2.
As an example, consider the sequence

(8.19)

Then,

(8.20)

where the region of convergence is r1< |z| < r2. Thus, the ROC is a ring with a pole on
the interior boundary and a pole on the exterior boundary of the ring, without any pole in
the ROC. There are two zeros, one being located at the origin and the other in the ROC.
The poles and zeros as well as the ROC are shown in Figure 8.5.
Figure 8.5 ROC of an infinite duration two-sided sequence

Example 8.1 Determine the z-transform and the ROC for the following sequence:

Solution From the definition of the z-transform ,

Thus, the ROC is |z| > 2.


Example 8.2 Determine the z-transform and the ROC for the following sequence:

Solution

Thus, the ROC is

8.3 Properties of the z-Transform


Properties of the z-transform are very useful in digital signal processing. Some important
properties of the z-transform are stated and proved in this section. We will denote in the
following the ROC of X(z) by R(r 1 < |z| < r 2) and those of X 1(z) and X 2(z) by R 1 and R
2, respectively. Also, the region 1/r 2 < |z| < 1/r 1 is denoted by 1/R.

Linearity If x 1(n) and x 2(n) are two sequences with z-transforms X 1(z) and X 2(z) and
ROCs R 1 and R 2, respectively, then the z-transform of a linear combination of x 1(n) and
x 2(n) is given by

(8.21)

whose ROC is at least R 1 ∩ R 2, a 1 and a 2 being arbitrary constants.


Proof
(8.22)

(8.23)

The result concerning the ROC follows directly from the theory of complex variables
concerning the convergence of a sum of two convergent series.
Time Reversal If x(n) is a sequence with z-transform X(z) and ROC R, then the z-
transform of the time reversed sequence x(−n) is given by

(8.24)

whose ROC is 1/R.


Proof From the definition of the z-transform , we have

(8.25)

Hence,

(8.26)

Since X(z) converges for r 1 < |z| < r 2, X(z −1) converges for r 1 < |z −1| < r 2, that is, for
1/r 2 < |z| < 1/r 1. Thus, the ROC of Z [x(−n)] is 1/R.
Time Shifting If x(n) is a sequence with z-transform X(z) and ROC R, then the z-
transform of the delayed sequence x(n − k), k being an integer, is given by

(8.27)

whose ROC is the same as that of X(z) except for z = 0 if k > 0 and z = ∞ if k < 0
Proof
(8.28)

Substituting m = n − k,

(8.29)

(8.30)

It is seen from Eq. (8.30) that in view of the factor z −k, the ROC of Z [x(n − k)] is the
same as that of X(z) except for z = 0 if k > 0 and z = ∞ if k < 0. It is also observed that in
particular, a unit delay in time translates into the multiplication of the z-transform by z −1.
Scaling in the z-Domain If x(n) is a sequence with z-transform X(z), then Z{a n x (n)} =
X(a −1z) for any constant a, real or complex. Also, the ROC of Z{a n x(n)} is | a|R, i.e., | a|
r 1 < |z| < |a|r 2.
Proof

(8.31)

(8.32)

Since the ROC of X(z) is r 1 < |z| < r 2, the ROC of X(a −1z) is given by r 1 < |a –1z| < r
2, that is,

Differentiation in the z-Domain If x(n) is a sequence with z-transform X(z), then

(8.33)
whose ROC is the same as that of X(z).
Proof From the definition,

Differentiating the above equation with respect to z, we get

(8.34)

Multiplying both sides of the above equation by −z, we obtain

(8.35)

which can be rewritten as

(8.36a)

Now, the region of convergence r a < |z| < r b of the sequence nx(n) can be found using
Eqs. (8.3a) and (8.3b).

and

Hence, the ROC of Z[nx(n)] is the same as that of X(z).


By repeated differentiation of Eq. (8.36a), we get the result
(8.36b)

It is to be noted that the ROC of Z[n k x(n)] is also the same as that of X(z).
Convolution of Two Sequences If x 1(n) and x 2(n) are two sequences with z-
transforms X 1(z) and X 2(z), and ROCs R 1 and R 2, respectively, then

(8.37)

whose ROC is at least R 1 ∩ R 2.


Proof

(8.38)

The discrete convolution of x 1(n) and x 2(n) is given by

(8.39)

Hence, the z-transform of the convolution is

(8.40)

Interchanging the order of summation, the above equation can be rewritten

(8.41)

Hence,

(8.42)

Since the right side of Eq. (8.42) is a product of the two convergent sequences X 1(z)
and X 2(z) with ROCs R 1 and R 2, it follows from the theory of complex variables that the
product sequence is convergent at least in the region R 1 ∩ R 2. Hence, the ROC of Z[x
1(n) ∗ x 2(n)] is at least R 1 ∩ R 2.
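For finite-length sequences, the convolution theorem is exactly polynomial multiplication of the coefficient lists, which makes it easy to check numerically (illustrative sequences, not from the text):

```python
import numpy as np

# Z{x1(n) * x2(n)} = X1(z) X2(z): convolving the sequences is the same as
# multiplying their z-transform polynomials (in powers of z^-1).
x1 = np.array([1.0, 2.0, 3.0])     # X1(z) = 1 + 2 z^-1 + 3 z^-2
x2 = np.array([4.0, -1.0])         # X2(z) = 4 - z^-1
y = np.convolve(x1, x2)            # x1(n) * x2(n)
print(y)                           # [ 4.  7. 10. -3.]
print(np.polymul(x1, x2))          # same coefficients: X1(z) X2(z)
```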

Correlation of Two Sequences If x 1(n) and x 2(n) are two sequences with z-transforms
X 1(z) and X 2(z), and ROCs R 1 and R 2, respectively, then

(8.43)

whose ROC is at least R 1 ∩ (1/R 2)


Proof Since

(8.44)

(8.45)

Since the ROC of X 2(z) is R 2, the ROC of X 2(z −1) is 1/R 2 from the property
concerning time reversal . Also, since the ROC of X 1(z) is R 1, it follows from Eq. (8.45)
that the ROC of is at least R 1 ∩ (1/R 2).

Conjugate of a Complex Sequence If x(n) is a complex sequence with the z-transform


X(z), then

(8.46)

with the ROCs of both X(z) and Z[x *(n)] being the same
Proof The z-transform of x*(n) is given by

(8.47)

(8.48)
In the R.H.S. of the above equation, the term in the brackets is equal to X(z ∗).
Therefore, Eq. (8.48) can be written as

(8.49)

It is seen from Eq. (8.49) that the ROC of the z-transform of conjugate sequence is
identical to that of X(z).
Real Part of a Sequence If x(n) is a complex sequence with the z-transform X (z), then

(8.50)

whose ROC is the same as that of X(z).


Proof

(8.51)

Since the z-transform satisfies the linearity property , we can write Eq. (8.51) as

(8.52)

(8.53)

It is clear that the ROC of Z[Re{x(n)}] is the same as that of X(z).


Imaginary Part of a Sequence If x(n) is a complex sequence with the z-transform X(z),
then

(8.54)

whose ROC is the same as that of X(z).


Proof Now
(8.55)

Thus,

(8.56)

Hence,

(8.57)

Again, since the z-transform satisfies the linearity property , we can write Eq. (8.57) as

(8.58)

Again, it is evident that the ROC of the above is the same as that of X(z). The above
properties of the z-transform are all summarized in Table 8.1.
Table 8.1 Some properties of the z-transform

Property                          Sequence                 z-Transform                 ROC

Linearity                         a 1x 1(n) + a 2x 2(n)    a 1X 1(z) + a 2X 2(z)       At least R 1 ∩ R 2

Time shifting                     x(n − k)                 z −kX(z)                    Same as R except for z = 0 if k > 0 and for z = ∞ if k < 0

Time reversal                     x(−n)                    X(z −1)                     1/R

Scaling in the z-domain           a n x(n)                 X(a −1z)                    |a|R

Differentiation in the z-domain   nx(n)                    −z dX(z)/dz                 R

Convolution theorem               x 1(n) ∗ x 2(n)          X 1(z)X 2(z)                At least R 1 ∩ R 2

Correlation theorem               x 1(l) ∗ x 2(−l)         X 1(z)X 2(z −1)             At least R 1 ∩ 1/R 2

Conjugate complex sequence        x ∗(n)                   [X(z ∗)]∗                   R

Real part of a complex sequence   Re[x(n)]                 (1/2)[X(z) + X ∗(z ∗)]      At least R

Imaginary part of a complex sequence  Im[x(n)]             (1/(2j))[X(z) − X ∗(z ∗)]   At least R

Time reversal of a complex conjugate sequence  x ∗(−n)     X ∗(1/z ∗)                  1/R

8.4 z-Transforms of Some Commonly Used Sequences


Unit Sample Sequence The unit sample sequence is defined by

(8.59)

By definition, the z-transform of δ(n) can be written as

(8.60)

It is obvious from (8.60) that the ROC is the entire z-plane.


Unit Step Sequence The unit step sequence is defined by

(8.61)

The z-transform of x(n) by definition can be written as

(8.62)

Hence, the ROC for X(z) is |z| > 1


Example 8.3 Find the z-transform of x(n) = δ(n − k)
Solution By using the time shifting property , we get

(8.63)

The ROC is the entire z-plane except for z = 0 if k is positive and for z = ∞ if k is
negative.
Example 8.4 Find the z-transform of x(n) = −u(−n − 1)
Solution We know that from Eq. (8.62)
Hence, using the time shifting property

(8.64)

Now, using the time reversal property (Table 8.1), we get

Hence,

(8.65)

Example 8.5 Find the z-transform of the sequence x(n) = {b n u(n)}


Solution Let x 1 (n) = u(n). From Eq. (8.62),
Using the scaling property, we get

Example 8.6 Find the z-transform of x(n) = nu(n).

Solution Let x1(n) = u(n). Again, using Eq. (8.62), we have X1(z) = 1/(1 − z^−1), |z| > 1.

Using the differentiation property,

Z[n x1(n)] = −z dX1(z)/dz,

we get

X(z) = z^−1/(1 − z^−1)^2, |z| > 1
Example 8.7 Obtain the z-transform of the sequence x(n) = n^2 u(n).

Solution

Let x(n) = n^2 x1(n), where x1(n) = u(n). Then X(z) = Z[n(n x1(n))].

Using the differentiation property that

Z[n x(n)] = −z dX(z)/dz,

we get

X(z) = −z d/dz [z^−1/(1 − z^−1)^2] = z^−1(1 + z^−1)/(1 − z^−1)^3

The ROC of X(z) is the same as that of u(n), namely, |z| > 1.
Example 8.8 Find the z-transform of x(n) = (sin ωn) u(n).
Solution

We can write x(n) = (1/2j)[e^{jωn} − e^{−jωn}] u(n).

Using the scaling property, we get

Z[e^{±jωn} u(n)] = 1/(1 − e^{±jω} z^−1), |z| > 1

Therefore,

X(z) = (1/2j)[1/(1 − e^{jω} z^−1) − 1/(1 − e^{−jω} z^−1)] = z^−1 sin ω/(1 − 2z^−1 cos ω + z^−2), |z| > 1
Example 8.9 Find the z-transform of x(n) = (cos ωn) u(n).

Solution

We can write x(n) = (1/2)[e^{jωn} + e^{−jωn}] u(n).

Using the scaling property, we get

Z[e^{±jωn} u(n)] = 1/(1 − e^{±jω} z^−1), |z| > 1

Therefore,

X(z) = (1/2)[1/(1 − e^{jω} z^−1) + 1/(1 − e^{−jω} z^−1)] = (1 − z^−1 cos ω)/(1 − 2z^−1 cos ω + z^−2), |z| > 1
Example 8.10 Find the z-transform of the sequence x(n) = [u(n) − u(n − 5)].
Solution

X(z) = Σ_{n=0}^{4} z^−n = (1 − z^−5)/(1 − z^−1)

The ROC is the entire z-plane except for z = 0.


Example 8.11 Determine X(z) for the function x(n) = −a^n u(−n − 1).

Solution From Eq. (8.65), we have Z[−u(−n − 1)] = 1/(1 − z^−1), |z| < 1.

Now using the scaling property (Table 8.1),

X(z) = 1/(1 − az^−1)

Thus the ROC is |z| < |a|.

Example 8.12 Consider a system with input x(n) and output y(n). If its impulse response
h(n) = Ax(L − n), where L is an integer constant, and A is a known constant, find Y(z) in
terms of X(z).
Solution

By the convolution property of the z-transform, we have

Y(z) = H(z)X(z),

where

H(z) = Z[Ax(L − n)] = A Σ_{n=−∞}^{∞} x(L − n) z^−n

Letting n − L = m in the above, we have

H(z) = A z^−L Σ_{m=−∞}^{∞} x(−m) z^−m = A z^−L X(z^−1)

Hence,

Y(z) = A z^−L X(z^−1) X(z)
A list of some commonly used z-transform pairs is given in Table 8.2.
Table 8.2 Some commonly used z-transform pairs

x(n) | X(z) | ROC
δ(n) | 1 | Entire z-plane
u(n) | 1/(1 − z^−1) | |z| > 1
nu(n) | z^−1/(1 − z^−1)^2 | |z| > 1
−a^n u(−n − 1) | 1/(1 − az^−1) | |z| < |a|
−na^n {u(−n − 1)} | az^−1/(1 − az^−1)^2 | |z| < |a|
{cos ωn} u(n) | (1 − z^−1 cos ω)/(1 − 2z^−1 cos ω + z^−2) | |z| > 1
{sin ωn} u(n) | z^−1 sin ω/(1 − 2z^−1 cos ω + z^−2) | |z| > 1
Initial Value Theorem


If a sequence x(n) is causal, i.e., x(n) = 0 for n < 0, then

x(0) = lim_{z→∞} X(z) (8.66)

Proof Since x(n) is causal, its z-transform X(z) can be written as

X(z) = Σ_{n=0}^{∞} x(n) z^−n = x(0) + x(1)z^−1 + x(2)z^−2 + ⋯ (8.67)

Now, taking the limits on both sides,

lim_{z→∞} X(z) = x(0) (8.68)

Hence, the theorem is proved.


Example 8.13 Find the initial value of a causal sequence x(n) if its z-transform X(z) is
given by

Solution The initial value x(0) is given by


8.5 The Inverse z-Transform
The z-transform of a sequence x(n), Z[x(n)], defined by Eq. (8.1), is

X(z) = Σ_{k=−∞}^{∞} x(k) z^−k (8.69)

Multiplying both sides of the above equation by z^{n−1} and integrating both sides on a closed contour C in the ROC of the z-transform X(z) enclosing the origin, we get

∮_C X(z) z^{n−1} dz = ∮_C Σ_{k=−∞}^{∞} x(k) z^{n−k−1} dz (8.70)

Multiplying both sides of Eq. (8.70) by 1/2πj, we arrive at

(1/2πj) ∮_C X(z) z^{n−1} dz = Σ_{k=−∞}^{∞} x(k) (1/2πj) ∮_C z^{n−k−1} dz (8.71)

By the Cauchy integral theorem, we have

(1/2πj) ∮_C z^{n−k−1} dz = 1 for k = n, and 0 for k ≠ n (8.72)

Thus, the inverse z-transform of X(z), denoted by Z^−1[X(z)], is given by

x(n) = (1/2πj) ∮_C X(z) z^{n−1} dz (8.73)

It should be noted that given the ROC and the z-transform X(z), the sequence x(n) is
unique. Table 8.2 can be used in most of the cases for obtaining the inverse transform. We
will consider in Section 8.6 different methods of finding the inverse transform.

8.5.1 Modulation Theorem in the z-Domain


The z-transform of the product of two sequences (real or complex) x1(n) and x2(n) is given by

Z[x1(n)x2(n)] = (1/2πj) ∮_C X1(v) X2(z/v) v^−1 dv (8.74)

where C is a closed contour which encloses the origin and lies in the ROC that is common to both X1(v) and X2(z/v).

Proof Let x(n) = x1(n)x2(n).

The inverse z-transform of X1(z) gives x1(n) as

x1(n) = (1/2πj) ∮_C X1(v) v^{n−1} dv (8.75)

Using Eq. (8.75), we get

x(n) = x2(n) (1/2πj) ∮_C X1(v) v^{n−1} dv (8.76)

Taking the z-transform of Eq. (8.76), we obtain

X(z) = Σ_{n=−∞}^{∞} x(n) z^−n = (1/2πj) ∮_C X1(v) [Σ_{n=−∞}^{∞} x2(n) (z/v)^−n] v^−1 dv (8.77)

Using the scaling property, we have that

Σ_{n=−∞}^{∞} x2(n) (z/v)^−n = X2(z/v)

Hence, Eq. (8.77) becomes

X(z) = (1/2πj) ∮_C X1(v) X2(z/v) v^−1 dv,

which is the required result.

8.5.2 Parseval’s Relation in the z-Domain


If x1(n) and x2(n) are complex-valued sequences, then

Σ_{n=−∞}^{∞} x1(n) x2∗(n) = (1/2πj) ∮_C X1(v) X2∗(1/v∗) v^−1 dv (8.78)

where C is a contour contained in the ROC common to the ROCs of X1(v) and X2∗(1/v∗).

Proof From Eq. (8.77), we have

Hence,

(8.79)

where we have used the result concerning the z-transform of a complex conjugate sequence (see Table 8.1), Z[x2∗(n)] = [X2(z∗)]∗. That is,

Z[x1(n) x2∗(n)] = (1/2πj) ∮_C X1(v) [X2(z∗/v∗)]∗ v^−1 dv (8.80)

Letting z = 1 in Eq. (8.80), we get the relation (8.78).

Hence, the theorem.


If x1(n) = x2(n) = x(n) and the ROC of X(z) includes the unit circle, then the energy of the sequence in the z-domain is

E = Σ_{n=−∞}^{∞} |x(n)|^2 = (1/2πj) ∮_C X(v) X∗(1/v∗) v^−1 dv (8.81)

For the energy of real sequences in the z-domain, the above expression becomes

E = (1/2πj) ∮_C X(v) X(v^−1) v^−1 dv (8.82)

Parseval's relation in the frequency domain is obtained by letting v = e^{jω} in (8.78). Thus,

E = Σ_{n=−∞}^{∞} |x(n)|^2 = (1/2π) ∫_{−π}^{π} |X(e^{jω})|^2 dω (8.83)
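The equality of the time-domain and frequency-domain energies in (8.83) is easy to verify numerically for a finite-length sequence. The sketch below (Python used here as a stand-in for the chapter's MATLAB examples; the test sequence is an arbitrary illustrative choice) checks it with the DFT, whose Parseval identity is the sampled version of the integral above.

```python
import numpy as np

# Finite-length test sequence (any values work; Parseval is an identity)
x = np.array([1.0, -2.0, 3.0, 0.5, -1.5])

# Time-domain energy: sum of |x(n)|^2
energy_time = np.sum(np.abs(x) ** 2)

# Frequency-domain energy: (1/N) * sum of |X(k)|^2, the DFT form of
# (1/2*pi) * integral of |X(e^{jw})|^2 dw
X = np.fft.fft(x)
energy_freq = np.sum(np.abs(X) ** 2) / len(x)

print(energy_time, energy_freq)  # the two energies agree
```

The same check works for any finite-energy sequence; only the DFT length must match the sequence length.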

8.6 Methods for Computation of the Inverse z-Transform


8.6.1 Cauchy’s Residue Theorem for Computation of the
Inverse z-Transform
By Cauchy’s residue theorem, the integral in Eq. (8.73) for rational z-transforms yields Z
−1[X(z)] = x(n) = sum of the residues of the function [X(z)z n−1] at all the poles p enclosed
i
by a contour C that lies in the ROC of X(z) and encloses the origin. The residue at a simple
pole p i is given by

(8.84)

while for a pole p i of multiplicity m, the residue is given by

(8.85)

We will now consider a few examples of finding the inverse z-transform using the
residue method.
Example 8.14 Assuming the sequence x(n) to be causal, find the inverse z-transform of
Solution Since the sequence is causal, we have to consider the poles of X(z)z n−1 for
only n ≥ 0. For n ≥ 0, the function X(z)z n−1 has only one pole at z = 1 of multiplicity 3.
Thus, the inverse z-transform is given by

It should be mentioned that if x(n) were not causal, then X(z)z n−1 would have had a
multiple pole of order n at the origin, and we would have to find the residue of X(z)z n−1 at
the origin to evaluate x(n) for n < 0.
Example 8.15 If x(n) is causal, find the inverse z-transform of

Solution Since the sequence is causal, we have to consider the poles of X(z)z^{n−1} for only n ≥ 0. Hence, we see that for n ≥ 1, X(z)z^{n−1} has two simple poles at 0.8 and −0.4. However, for n = 0, we have an additional pole at the origin. Hence, we evaluate x(0) separately by evaluating the residues of X(z)z^{−1}. Thus,

For n > 0,

Hence for any n ≥ 0,


8.6.2 Computation of the Inverse z-Transform Using the Partial
Fraction Expansion
Partial fraction expansion is another technique that is useful for evaluating the inverse z-
transform of a rational function and is a widely used method. To apply the partial fraction
expansion method to obtain the inverse z-transform, we may consider the z-transform to
be a ratio of two polynomials in either z or in z −1. We now consider a rational function
X(z) as given in Eq. (8.7). It is called a proper rational function if M < N; otherwise, it is
called an improper rational function . An improper rational function can be expressed as a
proper rational function by dividing the numerator polynomial N(z) by its denominator
polynomial D(z) and expressing X(z) in the form

(8.86)

where the order of the polynomial N 1(z) is less than that of the denominator polynomial.
The partial fraction expansion can now be made on N1(z)/D(z). The inverse z-transform of the terms in the sum is obtained from the z-transform pairs (see Table 8.2) and the time-shift property (see Table 8.1).
Let X(z) be a proper rational function expressed as

(8.87)

For simplification, eliminating negative powers, Eq. (8.87) can be rewritten as

(8.88)

Since X(z) is a proper fraction, so is [X(z)/z]. If all the poles p_i are simple, then [X(z)/z] can be expanded in terms of partial fractions as

X(z)/z = Σ_{i=1}^{N} c_i/(z − p_i) (8.89)

where

c_i = [(z − p_i) X(z)/z]|_{z=p_i} (8.90)

If [X(z)/z] has a multiple pole, say at p_j, with a multiplicity of k, in addition to (N − k) simple poles at p_i, then the partial fraction expansion given in Eq. (8.89) has to be modified as follows:

X(z)/z = Σ_{i=1}^{N−k} c_i/(z − p_i) + Σ_{l=1}^{k} c_{jl}/(z − p_j)^l (8.91)

where c_i is still given by (8.90) and the c_{jl} by

c_{jl} = (1/(k − l)!) (d^{k−l}/dz^{k−l}) [(z − p_j)^k X(z)/z]|_{z=p_j}, l = 1, 2, …, k (8.92)

Hence,

X(z) = Σ_{i=1}^{N−k} c_i z/(z − p_i) + Σ_{l=1}^{k} c_{jl} z/(z − p_j)^l (8.93)

The inverse z-transform is then obtained for each of the terms on the right-hand side of (8.91) by the use of Tables 8.1 and 8.2. We will now illustrate the method with a few examples.
Example 8.16 Assuming the sequence x(n) to be right-sided, find the inverse z-transform
of the following:

Solution The given function has poles at z = a and z = b. Since X(z) is a right-sided
sequence, the ROC of X(z) is the exterior of a circle around the origin that includes both
the poles. Now X(z)/z can be expressed in partial fraction expansion as

Hence,
We can now find the inverse transform of each term using Table 8.2 as

Example 8.17 Assuming the sequence x(n) to be causal, find the inverse z-transform of
the following:

Solution Dividing the numerator and denominator by z 2, we can rewrite X(z) as

Each term in the above expansion is a first-order z-transform and can be recognized
easily to evaluate the inverse transform as

Example 8.18 Assuming the sequence x(n) to be causal, determine the inverse z-
transform of the following:

Solution Since X(z)/z can be written in partial fraction expansion as

Solving for A, B, and C, we get A = 0, B = 1, C = 2. Hence, X(z) can be expanded as


Making use of Table 8.2, the inverse z-transform of X(z) can be written as

Example 8.19 If x(n) is a right-handed sequence, determine the inverse z-transform for
the function:

Solution

Now, X(z)/z can be written in partial fraction expansion form as

Solving for A, B, C, and D, we get A = 6, B = 2, C = 8, D = −13. Hence,

Since the sequence is right-handed and the poles of X(z) are located at z = 0, 0.5, and 1, the ROC of X(z) is |z| > 1. Thus, from Table 8.2, we have

Example 8.20 Assuming h(n) to be causal, find the inverse z-transform of


Solution Expanding H(z)/z as

Solving for A, B, and C, we get A = −1.78, B = 0.033, and C = 2.75


Therefore, H(z) can be expanded as

Hence,

8.6.3 Inverse z-Transform by Partial Fraction Expansion Using MATLAB
The M-file residuez can be used to find the partial fraction expansion of a rational z-transform, from which the inverse z-transform follows.
The coefficients of the numerator and denominator polynomials, written in descending powers of z, for Example 8.20 can be entered as
num= [1 -2 1];
den= [1 -0.1 -0.56];
The following MATLAB statement determines the residue (r), poles (p), and direct
terms (k) of the partial fraction expansion of H(z).
[r,p,k]= residuez(num,den);
After execution of the above statements, the residues, poles, and constants obtained are
Residues: 0.0333 2.7524
Poles: 0.8000 –0.7000
Constants: −1.7857
The desired expansion is

H(z) = −1.7857 + 0.0333/(1 − 0.8z^−1) + 2.7524/(1 + 0.7z^−1) (8.94)
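SciPy offers a direct analogue of MATLAB's residuez. The sketch below (Python with scipy.signal assumed available) reproduces the expansion of Example 8.20 and matches the residues, poles, and constant listed above.

```python
import numpy as np
from scipy.signal import residuez

# H(z) = (1 - 2z^-1 + z^-2) / (1 - 0.1z^-1 - 0.56z^-2)  (Example 8.20)
num = [1, -2, 1]
den = [1, -0.1, -0.56]

# r: residues, p: poles, k: direct (constant) terms
r, p, k = residuez(num, den)

# Sort by pole location so the residue-pole pairing is deterministic
order = np.argsort(p.real)
print(np.round(p[order].real, 4))  # poles: -0.7 and 0.8
print(np.round(r[order].real, 4))  # residues: 2.7524 and 0.0333
print(np.round(k.real, 4))         # constant: -1.7857
```

Note that residuez expands in powers of z^−1, exactly as in (8.94), so the residues pair with factors of the form 1/(1 − p z^−1).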

8.6.4 Computation of the Inverse z-Transform Using the Power Series Expansion
The z-transform of an arbitrary sequence defined by Eq. (8.1) implies that X(z) can be expressed as a power series in z^−1 or z. In such an expansion, the coefficient of the term z^−n indicates the value of the sequence x(n). Long division is one way to express X(z) as a power series.
Example 8.21 Assuming h(n) to be causal, find the inverse z-transform of the following:

Solution We obtain the inverse z-transform by long division of the numerator by the
denominator as follows:

Hence, H(z) can be written as

implying that
Example 8.22 Find the inverse z-transform of the following:

Solution We know that power series expansion for log(1 + u) is

Letting u = bz −1, X(z) can be written as

From the definition of z-transform of x(n), we have

Comparing the above two expressions, we get x(n), i.e., the inverse z-transform of
X(z) = log (1 + bz −1) to be

(8.95)

Example 8.23 Find the inverse z-transform of

Solution The sequence is a right-sided causal sequence as the region of convergence is


|z| > |b|. We can use the long division as we did in Example 8.21 to express z/(z−b) as a
series in powers of z −1. Instead, we will use binomial expansion .
Hence,

Example 8.24 Find the inverse z-transform of

Solution Since the region of convergence is |z| < |b|, the sequence is a left-sided
sequence. We can use the long division to obtain z/(z−b) as a power series in z. However,
we will use the binomial expansion.

Hence,

Example 8.25 Using the z-transform, find the convolution of the sequences:

Solution
Step 1: Determine z-transform of individual signal sequences
and

Step 2: Obtain X(z) = X 1(z)X 2(z)

Step 3: Obtain the inverse z-transform of X(z)
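For finite-length sequences, Step 2 is just polynomial multiplication of the coefficient vectors, so the convolution property can be checked numerically. The Python sketch below (the sequences are illustrative choices, not the ones used in Example 8.25) compares np.convolve with the product of the two z-transform polynomials.

```python
import numpy as np

# Two illustrative finite causal sequences (coefficients of z^0, z^-1, ...)
x1 = np.array([1.0, 2.0, 3.0])
x2 = np.array([1.0, 1.0])

# Steps 1-2: X(z) = X1(z)X2(z) is the product of the coefficient polynomials
Xz = np.polymul(x1, x2)

# Step 3: the inverse z-transform of a polynomial in z^-1 is simply its
# coefficient sequence, which must equal the direct convolution
x = np.convolve(x1, x2)

print(Xz)                       # [1. 3. 5. 3.]
print(np.array_equal(Xz, x))    # True
```

This is exactly the statement of the convolution theorem in Table 8.1, restricted to polynomials.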

8.6.5 Inverse z-Transform via Power Series Expansion Using MATLAB
The M-file impz can be used to find the inverse z-transform using the power series
expansion.
The coefficients of the numerator and denominator polynomial for Example 8.21 can
be written as
num = [1 2 1];
den = [1 0.4 -0.12];
The following statement can be run to obtain the coefficients of the inverse z-
transform:
h = impz(num,den);
where h is the vector containing the coefficients of the inverse z-transform. The first 11
coefficients of the inverse z-transform of Example 8.21 obtained after execution of the
above MATLAB statements are
Columns 1 through 9
1.0000 1.6000 0.4800 0 0.0576 -0.0230 0.0161 -0.0092 0.0056
Columns 10 through 11
-0.0034 0.0020
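An equivalent of impz in Python is to drive the difference equation with a unit impulse via scipy.signal.lfilter; the sketch below reproduces the coefficients listed above for Example 8.21.

```python
import numpy as np
from scipy.signal import lfilter

# H(z) = (1 + 2z^-1 + z^-2) / (1 + 0.4z^-1 - 0.12z^-2)  (Example 8.21)
num = [1, 2, 1]
den = [1, 0.4, -0.12]

# Impulse input of length 11 gives the first 11 samples of h(n)
impulse = np.zeros(11)
impulse[0] = 1.0
h = lfilter(num, den, impulse)

# First values: 1.0, 1.6, 0.48, 0, 0.0576, ... matching the impz output above
print(np.round(h, 4))
```

The same lfilter call with a longer impulse vector yields as many terms of the power series expansion as needed.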

8.6.6 Solution of Difference Equations Using the z-Transform


Example 8.26 Determine the impulse response of the system described by the difference
equation:

Assume that the system is relaxed initially.


Solution Let X(z) = Z[x(n)] and Y(z) = Z[y(n)]. Taking z-transform on both sides and
using the time shifting property , we get

Since X(z) = 1, we have

We now take inverse transform of the above and use Table 8.2 to obtain y(n), which is
the impulse response of the system as

Example 8.27 Determine the response y(n), n ≥ 0 of the system described by the second-
order difference equation

for the input x(n) = 4 n u(n)


Solution Applying z-transform to both sides of the equation, we have

Given that x(n) = 4 n u(n), we have


Substituting for X(z) in the expression for Y(z) and simplifying, we get

or

Hence,

By applying inverse z-transforms, we get

Example 8.28 Find the impulse response of the system

Solution Taking z-transforms on both sides of the above equation, and using the fact that Z[δ(n)] = 1, we get

Hence, the impulse response is given by


8.7 Analysis of Discrete-Time LTI Systems in the z-Transform
Domain
8.7.1 Transfer Function
It was stated in Chapter 6 that an LTI system can be completely characterized by its impulse response h(n). The output signal y(n) of an LTI system and the input signal x(n) are related by convolution as

y(n) = Σ_{k=−∞}^{∞} h(k) x(n − k) (8.96)

Taking the z-transform on both sides of the above equation and using the convolution property, we get

Y(z) = H(z)X(z) (8.97)

indicating that the z-transform of the output sequence y(n) is the product of the z-transforms of the impulse response h(n) and the input sequence x(n). The quantities h(n) and H(z) are two equivalent descriptions of a system, in the time domain and the z-domain, respectively. The transform H(z) is called the transfer function or the system function and is expressed as

H(z) = Y(z)/X(z) (8.98a)

Or equivalently,

H(z) = (Σ_{k=0}^{M} b_k z^−k)/(1 + Σ_{k=1}^{N} a_k z^−k) (8.98b)

where the constants a_k and b_k are real.


The above transfer function is a ratio of polynomials in z −1 and, hence, is a rational
transfer function or system function.
Example 8.29 The following are known about a LTI discrete-time system :
y(n) = δ(n) + a(0.25) n u(n) for x(n) = (0.5) n u(n)
(i)

(ii) y(n) = 0 for all n if x(n) = (−2) n for all n


Find the value of the constant a.
Solution It is given that for the input x(n) = (0.5)^n u(n), the output of the LTI system is y(n) = δ(n) + a(0.25)^n u(n). From this fact, the transfer function H(z) is given by

H(z) = Y(z)/X(z) = [1 + a/(1 − 0.25z^−1)](1 − 0.5z^−1)

It is also given that the output y(n) = 0 for the input x(n) = (−2)^n for all n. Since the exponential z^n is an eigenfunction of a discrete-time LTI system, the output to this input is H(−2)(−2)^n. From this, it can be inferred that H(−2) = 0. Using this in the above transfer function, the value of a is calculated to be −1.125.

8.7.2 Poles and Zeros of a Transfer Function


As mentioned earlier, the zeros of a system function H(z) are the values of z for which
H(z) = 0, while the poles are the values of z for which H(z) = ∞. Since H(z) is a rational
transfer function , the number of finite zeros and the number of finite poles are equal to
the degrees of the numerator and denominator polynomials, respectively.
In MATLAB, the tf2zp command can be used to find the zeros, poles, and gain of a rational transfer function. The zplane command can be used to plot the pole-zero plot of a rational transfer function.
Example 8.30 Determine the pole-zero plot using MATLAB for the system described by
the system function

Solution The coefficients of the numerator and denominator polynomial can be written
as
numerator = [0 1 -1];
denominator = [8 -6 1];
The following MATLAB statement yields the zeros, poles, and gain of the system:
[z,p,gain] = tf2zp (numerator, denominator)
zeros, z = 1
poles, p = [0.500 0.250] and gain = 0.1250
The MATLAB command zplane(z,p) plots the poles and zeros as shown in Figure 8.6.

Figure 8.6 Pole-zero plot of Example 8.30
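In Python, scipy.signal.tf2zpk plays the role of MATLAB's tf2zp. The sketch below reproduces the zeros, poles, and gain of Example 8.30.

```python
import numpy as np
from scipy.signal import tf2zpk

# H(z) = (z - 1) / (8z^2 - 6z + 1)  (Example 8.30)
z, p, k = tf2zpk([1, -1], [8, -6, 1])

print(z)                    # single zero at z = 1
print(np.sort(p))           # poles at 0.25 and 0.5
print(k)                    # gain 0.125
```

The gain differs from the leading numerator coefficient because tf2zpk, like tf2zp, normalizes the denominator's leading coefficient to 1.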

8.7.3 Frequency Response from Poles and Zeros


By factorizing the numerator and denominator polynomials of Eq. (8.98b), the transfer function can be written in pole-zero form as

H(z) = K Π_{i=1}^{M} (z − z_i) / Π_{i=1}^{N} (z − p_i) (8.99)

where z_i and p_i are the zeros and poles of H(z). It should be noted that the zeros and poles are either real or occur in conjugate pairs. The frequency response of the system can be obtained by letting z = e^{jω} in the transfer function H(z), that is,

H(e^{jω}) = H(z)|_{z=e^{jω}}

Hence,

H(e^{jω}) = K Π_{i=1}^{M} (e^{jω} − z_i) / Π_{i=1}^{N} (e^{jω} − p_i) (8.100)

The contribution of the zeros and poles to the system frequency response can be visualized from the above expression.
The magnitude of the frequency response can be expressed as

|H(e^{jω})| = |K| Π_{i=1}^{M} |e^{jω} − z_i| / Π_{i=1}^{N} |e^{jω} − p_i| (8.101)

The zeros contribute to pulling down the magnitude of the frequency response,
whereas the poles contribute to pushing up the magnitude of the frequency response. The
size of decrease or increase in the magnitude response depends on how far the zero or the
pole is from the unit circle. A peak in |H(e jω )| appears at the frequency of a pole very
close to the unit circle.
To illustrate this, consider the following example.
Example 8.31 Consider a system with the transfer function

H(z) = (z^2 + 2z + 1)/(1.2z^2 + 1) (8.102)

The numerator and denominator polynomial coefficients in descending powers of z can be written as
num=[1 2 1];
den=[1.2 0 1];
Then, as in Example 8.30, using the MATLAB commands tf2zp and zplane, the pole-zero plot can be obtained as shown in Figure 8.7(a). The magnitude and phase responses of the system are obtained from the same num and den vectors using the MATLAB command freqz and are shown in Figure 8.7(b) and (c), respectively.
Figure 8.7 (a) Pole-zero plot, (b) magnitude response, (c) phase response

Figure 8.7(a) indicates that the system has zeros of order 2 at z = −1 and two poles on
the imaginary axis close to the unit circle. In the magnitude response of Figure 8.7(b), a
peak occurs at ω = π/2. This can be attributed to the fact that the frequency of the poles is
π/2. The magnitude response is small at high frequencies due to the zeros.
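The same observation can be checked numerically. The Python sketch below (scipy.signal.freqz standing in for MATLAB's freqz) evaluates the magnitude response of (8.102) and confirms that its peak sits essentially at ω = π/2, the angle of the poles, while the double zero at z = −1 forces the response toward zero at ω = π.

```python
import numpy as np
from scipy.signal import freqz

# H(z) = (z^2 + 2z + 1) / (1.2z^2 + 1): double zero at z = -1,
# poles on the imaginary axis near the unit circle (angle pi/2)
w, h = freqz([1, 2, 1], [1.2, 0, 1], worN=4096)

peak_w = w[np.argmax(np.abs(h))]
print(peak_w)        # very close to pi/2, the angle of the poles
print(abs(h[-1]))    # nearly zero at high frequencies (zeros at z = -1)
```

The closer the poles are pushed toward the unit circle, the sharper and taller this peak becomes.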

8.7.4 Stability and Causality


The stability of an LTI system can be expressed in terms of the transfer function or the impulse response of the system. It is known from Section 6.4.5 that a necessary and sufficient condition for an LTI system to be BIBO (bounded-input bounded-output) stable is that its impulse response be absolutely summable, i.e.,

Σ_{n=−∞}^{∞} |h(n)| < ∞ (8.103)

The transfer function is

H(z) = Σ_{n=−∞}^{∞} h(n) z^−n (8.104)

so that

|H(z)| ≤ Σ_{n=−∞}^{∞} |h(n)| |z|^−n (8.105)

On the unit circle (i.e., |z| = 1), the above expression becomes

|H(e^{jω})| ≤ Σ_{n=−∞}^{∞} |h(n)| (8.106)

Therefore, for a stable system, the ROC of its transfer function H(z) must include the
unit circle. Thus we have the following theorem.
BIBO Stability Theorem
A discrete LTI system is BIBO stable if and only if the ROC of its system function
includes the unit circle, |z| = 1.
We know from Section 6.4.5 that for a discrete LTI system to be causal h(n) = 0 for
n < 0. Thus, the sequence should be right-sided. We also know from Section 8.2 that the
ROC of a right-sided sequence is the exterior of a circle whose radius is equal to the
magnitude of the pole that is farthest from the origin. At the same time, we also know that
for a right-sided sequence, the ROC may or may not include the point z = ∞. But we know
from Section 8.2 that a causal system cannot have a pole at infinity . Thus, in a causal
system, the ROC should include the point z = ∞. Thus, we may summarize the result for
causality by the following theorem:
Causality Theorem
A discrete LTI system is causal if and only if the ROC of its system function is the exterior
of a circle including z = ∞. An alternate way of stating this result is that a system is causal
if and only if its ROC contains no poles, finite or infinite.
Thus the conditions for stability and causality are quite different. A causal system
could be stable or unstable, just as a noncausal system could be stable or unstable . Also, a
stable system could be causal or noncausal just as an unstable system could be causal or
noncausal. However, we can conclude from the above two theorems that a causal stable
system must have a system function whose ROC is |z| > r, where r < 1. Hence, we can
summarize this result as follows.
Condition for a System to Be Both Causal and Stable
A causal LTI system is BIBO stable if and only if all its poles are within the unit circle.
As a consequence, for a LTI system with a system function H(z) to be stable and
causal , it is necessary that the degree of the numerator polynomial in z not exceed that of
the denominator polynomial. As such, an FIR system is always stable, whereas if an IIR
system is not designed properly, it may be unstable.
Example 8.32 Given the system function

Find the various regions of convergence for H(z), and state whether the system is
stable and/or causal in each of these regions. Also, find the impulse response h(n) in each
case.
Solution The system function can be expressed in partial fraction in the form

The system function has two zeros, viz., , and two poles at Hence,
there are three regions of convergence: (i) , (ii) , and (iii) |z| > 4. Let us
consider each of these regions separately.

(i)
In this region, there are no poles including the origin, but has poles exterior to it.
Hence, the system is noncausal. Also, it is an unstable system, since the ROC does not
include the unit circle. By using Table 8.2, we get
(ii)
This region includes the unit circle and hence the system is stable. However, since the
pole |z| = 4 is exterior to this region, it is noncausal, and the corresponding sequence is
two-sided. Again by using Table 8.2, we have

(iii) |z| > 4


This region does not include the unit circle, and hence the system is unstable.
However, in this region, there are no poles, finite or infinite, and hence, the system is
causal . The impulse response of the system is obtained from H(z) using Table 8.2 as

Example 8.33 The rotational motion of a satellite was described by the difference
equation

Is the system stable? Is the system causal? Justify your answer.


Solution Taking the z-transform on both sides of the given difference equation, we get

The poles of the system are at z = 0.5 ± 0.5j as shown in Figure 8.8
Figure 8.8 Poles of Example 8.33

All poles of the system are inside the unit circle. Hence, the system is stable . It is causal
since the output only depends on the present and past inputs.
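The pole test used here is easy to automate. The sketch below (Python, with the denominator z^2 − z + 0.5 reconstructed from the stated poles 0.5 ± 0.5j; the difference equation itself is not reproduced above, so this polynomial is an assumption) checks that every pole lies inside the unit circle.

```python
import numpy as np

# Denominator with roots 0.5 +/- 0.5j: (z - 0.5)^2 + 0.25 = z^2 - z + 0.5
den = [1.0, -1.0, 0.5]

poles = np.roots(den)
print(poles)                          # complex-conjugate pair 0.5 +/- 0.5j

# BIBO stability for a causal system: all poles strictly inside |z| = 1
stable = np.max(np.abs(poles)) < 1
print(stable)                         # True: |poles| = sqrt(0.5) < 1
```

The same three lines apply to any causal rational system: compute the roots of the denominator and compare the largest magnitude with 1.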
Example 8.34 Consider the difference equation

(a) Determine the possible choices for the impulse response of the system. Each choice
should satisfy the difference equation. Specifically indicate which choice
corresponds to a stable system and which choice corresponds to a causal system.

(b) Can you find a choice which implies that the system is both stable and causal? If not,
justify your answer.

Solution

(a) Taking the z-transform on both sides and using the shifting theorem, we get

The system function H(z) has a zero of order 2 at z = 0 and two poles at z = 1/3, 2.
Hence, there are three regions of convergence, and thus, there are three possible choices
for the impulse response of the system. The regions are
(i) , (ii) , and (iii) R 3: |z| > 2.
The region R 1 is devoid of any poles including the origin and hence corresponds to an
anti-causal system , which is not stable since it does not include the unit circle. Region R 2
does include the unit circle and hence corresponds to a stable system; however, it is not
causal in view of the presence of the pole z = 2. Finally, the region R 3 does not have any
poles including at infinity and hence corresponds to a causal system; however, since R 3
does not include the unit circle, the system is not stable .

(b) There is no ROC that would imply that the system is both stable and causal. Therefore, there is no choice for h(n) which makes the system both stable and causal.

Example 8.35 A system is described by the difference equation

(i) Determine the transfer function and discuss the stability of the system.

(ii) Determine the impulse response h(n) and show that it behaves according to the
conclusion drawn from (i).

(iii) Determine the response when x(n) = 10 for n ≥ 0. Assume that the system is initially
relaxed.

Solution

(i) Taking the z-transforms on both sides of the given equation, we get

Hence,

The pole is at z = −1, that is, on the unit circle. So the system is marginally stable or
oscillatory.

(ii) Since h(n) = 0 for n < 0,

This impulse response confirms that the impulse response is oscillatory.

(iii) Since

Thus,

or

Therefore,

8.7.5 Minimum-Phase, Maximum-Phase, and Mixed-Phase Systems
A causal stable transfer function with all its poles and zeros inside the unit circle is called
a minimum-phase transfer function . A causal stable transfer function with all its poles
inside the unit circle and all the zeros outside the unit circle is called a maximum-phase
transfer function . A causal stable transfer function with all its poles inside the unit circle
and with zeros inside and outside the unit circle is called a mixed-phase transfer function.
For example, consider the systems with the following transfer functions:

(8.107)

(8.108)

(8.109)

The pole-zero plot of the above transfer functions are shown in Figure 8.9 (a), (b), and
(c), respectively. The transfer function H 1(z) has a zero at z = −0.4 and a pole at z = −0.3,
and they are both inside the unit circle. Hence, H 1(z) is a minimum-phase function. The
transfer function H 2(z) has a pole inside the unit circle, at z = −0.5, and a zero at z = −2.5,
outside the unit circle. Thus, H 2(z) is a maximum-phase function . The transfer function H
3(z) has two poles, one at z = −0.3 and the other at z = −0.5, and two zeros one at z = −0.4,
inside the unit circle, and the other at z = −2.5, outside the unit circle. Hence, H 3(z) is a
mixed-phase function.
Figure 8.9 Pole-zero plot of (a) a minimum-phase function, (b) a maximum-phase function, and (c) a mixed-phase
function

8.7.6 Inverse System


Let H(z) be the system function of a linear time-invariant system . Then its inverse system
function H I (z) is defined, if and only if the overall system function is unity when H(z)
and H I (z) are connected in cascade, that is, H(z) H I (z) = 1, implying

(8.110)

In the time domain, this is equivalently expressed as

(8.111)

Example 8.36 A system is described by the following difference equation:


where the constant α > 0. Find the corresponding inverse system function to recover x(n)
from y(n). Check for the stability and causality of the resulting recovery system ,
justifying your answer.
Solution

The corresponding inverse system

The recovery system is both stable and causal, since all the poles of the system H I (z)
are inside the unit circle.
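The cascade condition h(n) ∗ h_I(n) = δ(n) can be demonstrated numerically: filtering with (b, a) and then with the coefficients swapped recovers the input, provided both systems are stable. The sketch below uses an assumed first-order channel y(n) = x(n) + 0.5x(n − 1), an illustrative stand-in for the elided system of Example 8.36.

```python
import numpy as np
from scipy.signal import lfilter

# Assumed distorting system: y(n) = x(n) + 0.5 x(n-1), i.e. H(z) = 1 + 0.5z^-1
b, a = [1.0, 0.5], [1.0]

x = np.array([1.0, -1.0, 2.0, 0.0, 3.0, -2.0])
y = lfilter(b, a, x)

# Inverse system H_I(z) = 1/H(z): swap numerator and denominator.
# Its pole at z = -0.5 lies inside the unit circle, so it is stable and causal.
x_rec = lfilter(a, b, y)

print(np.allclose(x_rec, x))  # True: x(n) is recovered from y(n)
```

If the channel's zero were outside the unit circle, the swapped filter would have a pole outside the unit circle and this direct recovery would be unstable, which is exactly why minimum-phase channels are the ones with stable causal inverses.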

8.7.7 All-Pass System


Consider a causal stable Nth-order transfer function of the form

(8.112)

(8.113)

or

(8.114)

Hence,

(8.115)

and
(8.116)

Therefore,

(8.117)

Thus,

(8.118)

for all values of ω.


In other words, H(z) given by (8.112) passes all the frequencies contained in the input
signal to the system, and hence such a transfer function is an all-pass transfer function ,
and the corresponding system is an all-pass system. It is also seen from (8.112) that if
z = p i is a zero of D(z), then z = (1/p i ) is a zero of M(z). That is, the poles and zeros of an
all-pass function are reciprocal of one another. Since all the poles of H(z) are located
within the unit circle, all the zeros are located outside the unit circle.
If x(n) is the input sequence and y(n) the output sequence for an all-pass system , then

(8.119)

Thus,

(8.120)

Since |H(e jω )| = 1, we get

(8.121)

We know from Parseval’s relation that the output energy of a LTI system is given by

(8.122)
(8.123)

Hence,

(8.124)

Thus, the output energy is equal to the input energy for an all-pass system. Hence, an all-pass system is a lossless system.
Example 8.37 A discrete-time system with poles at z = −0.6 and z = −0.7 and zeros at
z = −1/0.6 and z = −1/0.7 is shown in Figure 8.10. Demonstrate algebraically that
magnitude response is constant.

Figure 8.10 Pole-zero plot of a second-order all-pass system

Solution For the given pole-zero pattern, the system function is given by

H(z) = (0.42 + 1.3z^−1 + z^−2)/(1 + 1.3z^−1 + 0.42z^−2)

Substituting z = e^{jω} in the above transfer function, we get

H(e^{jω}) = e^{−j2ω}(1 + 1.3e^{jω} + 0.42e^{j2ω})/(1 + 1.3e^{−jω} + 0.42e^{−j2ω}),

whose numerator magnitude equals its denominator magnitude, so that |H(e^{jω})| = 1 for all ω.

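The constant magnitude can also be confirmed numerically. In the Python sketch below, the numerator is the reversed denominator, which is exactly the all-pass structure of (8.112) for the poles at z = −0.6 and z = −0.7 of Figure 8.10.

```python
import numpy as np
from scipy.signal import freqz

# Denominator (1 + 0.6z^-1)(1 + 0.7z^-1) = 1 + 1.3z^-1 + 0.42z^-2;
# an all-pass numerator is the same polynomial with coefficients reversed
den = np.array([1.0, 1.3, 0.42])
num = den[::-1]  # [0.42, 1.3, 1.0] -> zeros at -1/0.6 and -1/0.7

w, h = freqz(num, den, worN=1024)
print(np.allclose(np.abs(h), 1.0))  # True: |H(e^{jw})| = 1 at every frequency
```

Reversing the (real) denominator coefficients is the general recipe: it places a zero at the reciprocal of every pole, which is the defining pole-zero pattern of an all-pass function.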
8.7.8 All-Pass and Minimum-Phase Decomposition
Consider an Nth-order mixed-phase system function H(z) with m zeros outside the unit circle and (N − m) zeros inside the unit circle. Then H(z) can be expressed as

(8.125)

where H1(z) is a minimum-phase function, as its N poles and (N − m) zeros are inside the unit circle. Eq. (8.125) can be equivalently expressed as

(8.126)

In the above equation, the factor H 1(z)(1−z −1a 1)(1−z −1a 2)⋯(1−z −1a m ) is also a
minimum-phase function , since |a 1|, |a 2|, …, |a m | are less than 1, the zeros are inside the
unit circle, and the factor is all-pass. Thus, any transfer function
H(z) can be written as

(8.127)

H min(z) has all the poles and zeros of H(z) that are inside the unit circle in addition to
the zeros that are conjugate reciprocals of the zeros of H(z) that are outside the unit circle,
while H ap(z) is an all-pass function that has all the zeros of H(z) that lie outside the unit
circle along with poles to cancel the conjugate reciprocals of the zeros of H (z) that lie
outside the unit circle, which are now contained as zeros in H min(z).
Example 8.38 A signal x(n) is transmitted across a distorting digital channel
characterized by the following system function:
Consider the compensating system shown in Figure 8.11. Find H 1C(z) such that the
overall system function G 1(z) is an all-pass system.

Figure 8.11 Compensating system

Solution

Then,

is an all-pass system .

8.8 One-Sided z-Transform


The unilateral or one-sided z-transform, which is appropriate for problems involving causal signals and systems, is evaluated using only the portion of a signal associated with nonnegative values of the time index (n ≥ 0). In many applications of the z-transform, it is meaningful to assume causality.
Definition
The one-sided z-transform of a signal x(n) is defined as

X⁺(z) = Σ_{n=0}^{∞} x(n) z^−n (8.128)

which depends only on x(n) for (n ≥ 0). It should be mentioned that the two-sided z-
transform is not useful in the evaluation of the output of a non-relaxed system. The one-
sided transform can be used to solve for systems with nonzero initial conditions or for
solving difference equations with nonzero initial conditions . The following special
properties of X + (z) should be noted.
1. The one-sided transform X + (z) of x(n) is identical to the two-sided transform X(z) of

the sequence x(n)u(n). Also, since x(n)u(n) is always causal, its ROC and hence that
of X + (z) are always the exterior of a circle. Hence, it is not necessary to indicate the
ROC of a one-sided z-transform.

2. X + (z) is unique for a causal signal, since such a signal is zero for n < 0.

3. Almost all the properties of the two-sided transform are applicable to the one-sided
transform, one major exception being the shifting property.

Shifting Theorem for X⁺(z) When the Sequence Is Delayed by k

If Z⁺[x(n)] = X⁺(z), then

Z⁺[x(n − k)] = z^−k [X⁺(z) + Σ_{n=1}^{k} x(−n) z^n], k > 0 (8.129)

However, if x(n) is a causal sequence, then the result is the same as in the case of the two-sided transform and

Z⁺[x(n − k)] = z^−k X⁺(z) (8.130)

Proof By definition,

Letting (n−k) = m, the above equation may be written as

which proves (8.129). If the sequence x(n) is causal, then the second term on the right
side of the above equation is zero, and hence we get the result (8.130).
Shifting Theorem for X⁺(z) When the Sequence Is Advanced by k
If Z⁺[x(n)] = X⁺(z), then

Z⁺[x(n + k)] = z^k [X⁺(z) − Σ_{n=0}^{k−1} x(n) z^−n], k > 0 (8.131)

Proof By definition

Letting (n + k) = m, the above equation may be written as

thus establishing the result (8.131).


Final Value Theorem
If a sequence x(n) is causal, i.e., x(n) = 0 for n < 0, then

lim_{n→∞} x(n) = lim_{z→1} (z − 1)X(z) (8.132)

The above limit exists only if the ROC of (z − 1)X(z) includes the unit circle.
Proof Since the sequence x(n) is causal, we can write its z-transform as follows:

(8.133)

Also
(8.134)

Hence, we see that

(8.135)

Thus,

Substituting (8.133) and (8.135) for the L.H.S., we have

Taking the limit as z → 1, we get

Thus,

or

Hence,

It should be noted that the limit exists only if the function (z − 1)X(z) has an ROC that includes the unit circle; otherwise, the system would not be stable, and lim_{n→∞} x(n) would not be finite.
Example 8.39 Find the final value of x(n) if its z-transform X(z) is given by
Solution The final or steady-state value of x(n) is given by

The result can be directly verified by taking the inverse transform of the given X(z).
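The final value theorem can also be illustrated numerically. The sketch below uses the causal sequence x(n) = 1 − (1/2)ⁿ (an assumed example, not the X(z) of Example 8.39), whose z-transform is X(z) = z/(z − 1) − z/(z − 1/2); the limit in (8.132) is approximated by evaluating (z − 1)X(z) just outside z = 1.

```python
# Illustration of the final value theorem (8.132) with the causal sequence
# x(n) = 1 - (1/2)^n, for which X(z) = z/(z-1) - z/(z-1/2) and the
# steady-state value is 1.

def X(z):
    return z / (z - 1) - z / (z - 0.5)

z = 1 + 1e-9                      # approach z = 1 from outside the unit circle
final_value = (z - 1) * X(z)      # approximates lim_{z->1} (z-1) X(z)

# Compare with the time-domain limit of x(n) (0.5**60 is negligibly small)
assert abs(final_value - (1 - 0.5**60)) < 1e-6
print(round(final_value, 6))      # -> 1.0
```

Note that the check is legitimate here because both poles of (z − 1)X(z) after cancellation lie inside the unit circle, so the ROC condition of the theorem is satisfied.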
Example 8.40 The following facts are given about a discrete-time signal x(n) with X(z)
as its z-transform:

(i) x(n) is real and right-sided.

(ii) X(z) has exactly two poles.

(iii) X(z) has two zeros at the origin.

(iv) X(z) has a pole at .

(v) .
Determine X(z) and specify its region of convergence.
Solution It is given that x(n) is real and that X(z) has exactly two poles, with one of the
poles at . Since x(n) is real, the two poles must be complex conjugates of each

other. Thus, the other pole of the system is at . Also, it is given that X(z) has two
zeros at the origin. Therefore, X(z) will have the following form:
for some constant K to be determined. Finally, . Substituting this
in the above equation, we get K = 2. Thus,

Since x(n) is right-sided, its ROC is (note that both poles have magnitude ).

8.8.1 Solution of Difference Equations with Initial Conditions


The one-sided z-transform is very useful in obtaining solutions for difference equations
which have initial conditions. The procedure is illustrated with an example.
Example 8.41 Find the step response of the system

with the initial condition y(−1) = 1.


Solution Taking one-sided z-transforms on both sides of the given equation and using
(8.129), we have

Substituting for X + (z) and y(−1), we have

Hence,
Taking the inverse transform, we get

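The procedure can be cross-checked in a few lines of Python. The sketch below uses an assumed first-order system (not necessarily the one in Example 8.41): y(n) = (1/2)y(n − 1) + x(n) with x(n) = u(n) and y(−1) = 1. The one-sided z-transform with the shifting theorem (8.129) gives the closed form y(n) = 2 − (1/2)^(n+1), which we verify against direct recursion.

```python
# Step response of y(n) = (1/2) y(n-1) + x(n), x(n) = u(n), y(-1) = 1
# (an assumed example system). The one-sided z-transform yields
# Y+(z) = [1/(1 - z^-1) + 1/2] / (1 - (1/2) z^-1], whose inverse is
# y(n) = 2 - (1/2)^(n+1).

def step_response(N, y_init=1.0, a=0.5):
    """Compute y(0)..y(N-1) by direct recursion from y(-1) = y_init."""
    y_prev, out = y_init, []
    for n in range(N):
        y = a * y_prev + 1.0      # x(n) = u(n) = 1 for n >= 0
        out.append(y)
        y_prev = y
    return out

y = step_response(10)
closed = [2 - 0.5**(n + 1) for n in range(10)]
assert all(abs(a - b) < 1e-12 for a, b in zip(y, closed))
print(y[0], y[1])   # -> 1.5 1.75
```

The initial condition enters only through the term (1/2)y(−1) contributed by the shifting theorem; with y(−1) = 0 the result would reduce to the ordinary zero-state step response.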
8.9 Solution of State-Space Equations Using z-Transform


For convenience, the state-space equations of a discrete-time LTI system from Chapter 6
are repeated here:

(8.136a)

(8.136b)

Taking the unilateral z-transform of Eqs. (8.136a) and (8.136b), we obtain

(8.137a)

(8.137b)

where

Eq. (8.137a) can be rewritten as


(8.138)

where I is the identity matrix.


From Eq. (8.138), we get

(8.139)

The inverse one-sided z-transform of Eq. (8.139) yields

(8.140)

(8.141)

By using the convolution theorem, we obtain

(8.142)

Thus,

(8.143)

Substituting Eq. (8.143) into Eq. (8.136b), we get

(8.144)

Example 8.42 Consider an initially relaxed discrete-time system with the following state-
space representation. Find y(n).
Solution

Since the system is initially relaxed, cA^n x(0) = 0.

Hence,
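The output formula (8.144) can be verified numerically. The sketch below uses an assumed 2×2 state-space model (the matrices are illustrative, not those of Example 8.42) and compares the direct state recursion with the zero-state convolution-sum form of the solution.

```python
# Numerical check of the z-transform solution (8.144) for an assumed system:
# x(n+1) = A x(n) + b u(n), y(n) = c x(n) + d u(n), x(0) = 0 (relaxed).
import numpy as np

A = np.array([[0.0, 1.0], [-0.06, 0.5]])   # illustrative, stable matrix
b = np.array([0.0, 1.0])
c = np.array([1.0, 0.0])
d = 0.0
u = lambda n: 1.0                           # unit-step input

# Direct state recursion
x = np.zeros(2)
y_rec = []
for n in range(20):
    y_rec.append(c @ x + d * u(n))
    x = A @ x + b * u(n)

# Zero-state solution: y(n) = d u(n) + sum_{k=0}^{n-1} c A^(n-1-k) b u(k)
y_formula = [d * u(n) + sum(c @ np.linalg.matrix_power(A, n - 1 - k) @ b * u(k)
                            for k in range(n)) for n in range(20)]

assert np.allclose(y_rec, y_formula)
```

For a non-relaxed system the same code would simply add the zero-input term c Aⁿ x(0) to each output sample.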
8.10 Transformations Between Continuous-Time Systems and
Discrete-Time Systems
The transformation of a continuous-time system to a discrete-time system arises in various
situations. In this section, two techniques, namely,

(i) Impulse invariance technique

(ii) Bilinear transformation technique


are discussed for transforming a continuous-time system to a discrete-time system.

8.10.1 Impulse Invariance Method


In this method, the impulse response of an analog filter is uniformly sampled to obtain the
impulse response of the digital filter, and hence this method is called the impulse
invariance method. The process of designing an IIR filter using this method is as follows:
Step 1: Design an analog filter to meet the given frequency specifications. Let H a (s)
be the transfer function of the designed analog filter. We assume for simplicity that H
a (s) has only simple poles. In such a case, the transfer function of the analog filter
can be expressed in partial fraction form as

H_a(s) = Σ_{k=1}^{N} A_k / (s − p_k)
(8.145)

where A_k is the residue of H_a(s) at the pole p_k.


Step 2: Calculate the impulse response h(t) of this analog filter by applying the
inverse Laplace transformation to H_a(s). Hence,

h(t) = Σ_{k=1}^{N} A_k e^(p_k t) u(t)
(8.146)

Step 3: Sample the impulse response of the analog filter with a sampling period T.
Then, the sampled impulse response h(n) can be expressed as

h(n) = h(nT) = Σ_{k=1}^{N} A_k e^(p_k nT) u(n)
(8.147)

Step 4: Apply the z-transform to the sampled impulse response obtained in Step 3 to
form the transfer function of the digital filter, i.e., H(z) = Z[h(n)]. Thus, the transfer
function H(z) for the impulse invariance method is given by

H(z) = Σ_{k=1}^{N} A_k / (1 − e^(p_k T) z^(−1))
(8.148)

This impulse invariant method can be extended for the case when the poles are not
simple.
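Steps 1 to 4 can be sketched in a few lines of Python for an assumed analog transfer function H_a(s) = 1/(s + 1) − 1/(s + 2), i.e., residues A = [1, −1] at poles p = [−1, −2] (values chosen for illustration). By construction, the impulse response of the resulting digital filter must match the sampled analog impulse response h(nT).

```python
# Impulse invariance sketch for an assumed H_a(s) = 1/(s+1) - 1/(s+2).
# Each analog pole p_k maps to the digital pole e^{p_k T}, per (8.148),
# so the digital impulse response equals the sampled analog one, per (8.147).
import math

A = [1.0, -1.0]     # residues A_k (assumed)
p = [-1.0, -2.0]    # simple poles p_k (assumed)
T = 0.1             # sampling period

def h_digital(n):
    """h(n) of H(z) = sum A_k / (1 - e^{p_k T} z^-1): sum A_k (e^{p_k T})^n."""
    return sum(a * math.exp(pk * T)**n for a, pk in zip(A, p))

def h_analog_sampled(n):
    """Sampled analog impulse response h(nT) = sum A_k e^{p_k n T}."""
    return sum(a * math.exp(pk * n * T) for a, pk in zip(A, p))

assert all(abs(h_digital(n) - h_analog_sampled(n)) < 1e-12 for n in range(50))
```

The agreement holds sample by sample; the aliasing issue discussed next concerns the frequency responses, not the sampled impulse response itself.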
Example 8.43 Consider a continuous-time system with the transfer function given below, and determine the transfer function of the corresponding discrete-time system using the impulse invariance method:

Solution The inverse Laplace transform of H(s) yields

Sampling h(t) with sampling period T, we get


Disadvantage of Impulse Invariance Method
The frequency responses of the digital and analog filters are related by

H(e^(jω)) = (1/T) Σ_{k=−∞}^{∞} H_a(j(ω − 2πk)/T)
(8.149)

From Eq. (8.149), it is evident that the frequency response of the digital filter is not
identical to that of the analog filter due to aliasing in the sampling process. If the analog
filter is band-limited with

H_a(jΩ) = 0 for |Ω| ≥ π/T
(8.150)

then the digital filter frequency response is of the form

H(e^(jω)) = (1/T) H_a(jω/T), |ω| ≤ π
(8.151)

In the above expression, if T is small, the gain 1/T of the filter becomes very large. This
can be avoided by introducing a multiplication factor T in the impulse invariant
transformation. In such a case, the transformation would be

h(n) = T h(t)|_(t = nT)
(8.152)

and H(z) would be

H(z) = Σ_{k=1}^{N} T A_k / (1 − e^(p_k T) z^(−1))
(8.153)

Also, the frequency response is

H(e^(jω)) = H_a(jω/T), |ω| ≤ π
(8.154)

Hence, the impulse invariance method is appropriate only for band-limited filters, i.e.,
low-pass and band-pass filters, but is not suitable for high-pass or band-stop filters,
where additional band limiting is required to avoid aliasing. Thus, there is a need for
another mapping method, such as the bilinear transformation technique, which avoids aliasing.

8.10.2 Bilinear Transformation


In order to avoid the aliasing problem mentioned in the case of the impulse invariant
method, we use the bilinear transformation, which is a one-to-one mapping from the s-
plane to the z-plane; that is, it maps a point in the s-plane to a unique point in the z-plane
and vice versa. This is the method that is mostly used in designing an IIR digital filter
from an analog filter. This approach is based on the trapezoidal rule. Consider the bilinear
transformation given by

s = (2/T) (1 − z^(−1)) / (1 + z^(−1))
(8.155)

Then a transfer function H_a(s) in the analog domain is transformed into the digital
domain as

H(z) = H_a(s)|_(s = (2/T)(1 − z^(−1))/(1 + z^(−1)))
(8.156)

Also, from Eq. (8.155), we have

z = (1 + sT/2) / (1 − sT/2)
(8.157)

We now study the mapping properties of the bilinear transformation. Consider a point
s = −σ + jΩ in the left half of the s-plane. Then, from Eq. (8.157),
|z| = |1 + (−σ + jΩ)T/2| / |1 − (−σ + jΩ)T/2| < 1 for σ > 0
(8.158)

Hence, the left half of the s-plane maps into the interior of the unit circle in the z-plane
(see Figure 8.12). Similarly, it can be shown that the right-half of the s-plane maps into the
exterior of the unit circle in the z-plane. For a point z on the unit circle, z = e jω , we have
from Eq. (8.155)

s = (2/T) (1 − e^(−jω)) / (1 + e^(−jω)) = j(2/T) tan(ω/2)
(8.159)

Figure 8.12 Mapping of the s-plane into the z-plane by the bilinear transformation

Thus,

Ω = (2/T) tan(ω/2)
(8.160)

or

ω = 2 tan^(−1)(ΩT/2)
(8.161)

showing that the positive and negative halves of the imaginary axis of the s-plane are
mapped, respectively, into the upper and lower halves of the unit circle in the z-plane. We
thus see that the bilinear transformation avoids the problem of aliasing encountered in the
impulse invariant method, since it maps the entire imaginary axis of the s-plane onto the
unit circle in the z-plane. Further, in view of this mapping, the transformation converts a
stable analog filter into a stable digital filter.
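These mapping properties are easy to verify numerically. The following Python sketch (the value of T and the test points are arbitrary) checks that a left-half-plane point maps inside the unit circle, a right-half-plane point maps outside it, and a point on the jΩ axis lands on the unit circle at the warped frequency ω = 2 tan⁻¹(ΩT/2).

```python
# Numerical checks of the bilinear mapping z = (1 + sT/2)/(1 - sT/2) of (8.157).
import cmath, math

T = 0.5
to_z = lambda s: (1 + s * T / 2) / (1 - s * T / 2)

assert abs(to_z(complex(-1.0, 3.0))) < 1    # left half-plane -> interior
assert abs(to_z(complex(+1.0, 3.0))) > 1    # right half-plane -> exterior

Omega = 4.0
z = to_z(complex(0.0, Omega))
assert abs(abs(z) - 1) < 1e-12              # j-axis point -> unit circle
# angle of z equals the warped digital frequency omega of (8.161)
assert abs(cmath.phase(z) - 2 * math.atan(Omega * T / 2)) < 1e-12
```

The tangent in the warping relation is why analog frequencies must be prewarped before a bilinear-transformation design, as done in Example 8.44 below.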
Example 8.44 Design a low-pass digital filter with 3 dB cutoff frequency at 50 Hz and an
attenuation of at least 10 dB for frequencies larger than 100 Hz. Assume a suitable
sampling frequency.
Solution Assume the sampling frequency as 500 Hz. Then,

Prewarping of the above normalized frequencies yields

Substituting these values in and solving for N, we get

Hence, the order of the Butterworth filter is 2. The normalized low-pass Butterworth
filter for N = 2 is given by

The transfer function H_c(s) corresponding to Ω_c = 0.325 is obtained by replacing s by


(s/Ω_c) = (s/0.325) in the expression for H_N(s); hence,

The digital transfer function H(z) of the desired filter is now obtained by using the
bilinear transformation (8.157) in the above expression:
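As a cross-check of this design, scipy.signal (assumed available; its butter function applies the bilinear transformation with prewarping internally, paralleling the MATLAB commands used in this book's exercises) can design the same second-order filter and confirm the two specifications.

```python
# scipy-based check of the design of Example 8.44: a 2nd-order digital
# Butterworth low-pass filter, fs = 500 Hz, 3 dB cutoff at 50 Hz.
import numpy as np
from scipy import signal

fs = 500.0                                   # assumed sampling frequency, Hz
b, a = signal.butter(2, 50, btype='low', fs=fs)

# Evaluate the magnitude response at the cutoff and stopband test frequencies
w, h = signal.freqz(b, a, worN=[50, 100], fs=fs)
gain_db = 20 * np.log10(np.abs(h))

assert abs(gain_db[0] + 3.0) < 0.2           # about -3 dB at the 50 Hz cutoff
assert gain_db[1] <= -10.0                   # at least 10 dB down at 100 Hz
```

Because of the frequency warping (8.160), the attenuation at 100 Hz comfortably exceeds the analog prototype's estimate, so the second-order design meets the specification with margin.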
8.11 Problems

1. Find the z-transform of the sequence x(n) = {1,2,3,4,5,6,7}.

2. Find the z-transform and ROC of the sequence x(n) tabulated below.
n −2 −1 0 1 2 3 4

x(n) 1 2 3 4 5 6 7

3. Find the z-transform of the signal x(n) = [3(3)^n − 4(2)^n].

4. Find the z-transform of the sequence x(n) = (1/3)^(n−1) u(n−1).

5. Find the z-transform of the sequence

6. Find the z-transform of the following discrete-time signals, and find the ROC for
each.

(i)

(ii)
(iii)

7. Find the z-transform of the sequence x(n) = n a^(n−1) u(n−1).

8. Find the z-transform of the sequence x(n) = (1/4)^(n+1) u(n).

9. Find the z-transform of the signal x(n) = [(4)^(n+1) − 3(2)^(n−1)].

10. Determine the z-transform and the ROC for the following time signals. Sketch the
ROC, poles, and zeros in the z-plane.

(i)

(ii)

11. Find the inverse z-transform of the following, using partial fraction expansions:
(i)

(ii)

(iii)

(iv)
12. Find the inverse z-transform of the following using the partial fraction expansion.
(i)

(ii)

(iii)

13. Determine all the possible signals that can have the following z-transform:

14. Determine the stability of the system with the following transfer function:

15. The transfer function of a system is given as

Specify the ROC of H(z) and determine h(n) for the following conditions:

(i) The system is causal.

(ii) The system is stable.


(iii) Can the given system be both causal and stable?

16. A causal LTI system is described by the following difference equation:

Determine whether the system is an all-pass system.

17. In the system shown in Figure P8.1, if S 1 is a causal LTI system with system
function,

Figure P8.1 Cascade connection of two systems S 1 and S 2

Determine the system function for a system S 2 so that the overall system is an
all-pass system.

18. The transfer function of a system is given by

Determine the response when x(n) = u(n). Assume that the system is initially
relaxed.

19. A causal LTI system is described by the following difference equation:


Is it a stable system? If not, find a noncausal stable impulse response that
satisfies the difference equation.

20. Using the one-sided z-transform, solve the following difference equation:

21. Consider a continuous system with the transfer function

Determine the transfer function and pole-zero pattern for the discrete-time
system by using the impulse invariance technique.

22. Design a low-pass Butterworth filter using Bilinear transformation for the following
specifications:
Passband edge frequency: 1000 Hz
Stopband edge frequency: 3000 Hz
Passband ripple: 2 dB
Stopband ripple: 20 dB
Sampling frequency: 8000 Hz

23. Consider an initially relaxed discrete-time system with the following state-space
representation. Find y(n).
8.12 MATLAB Exercises

1. Write a MATLAB program using the command residuez to find the inverse of the
following by partial fraction expansion:

2. Write a MATLAB program using the command impz to find the inverse of the
following by power series expansion:

3. Write a MATLAB program using the command z-plane to obtain a pole-zero plot for
the following system:

4. Write a MATLAB program using the command freqz to obtain magnitude and phase
responses of the following system:

Further Reading
1. Lyons, R.G.: Understanding Digital Signal Processing. Addison-Wesley, Reading (1997)
2. Oppenheim, A.V., Schafer, R.W.: Discrete-Time Signal Processing, 2nd edn. Prentice-Hall, Upper Saddle River
(1999)
3. Mitra, S.K.: Digital Signal Processing. McGraw-Hill, New York (2006)
4. Hsu, H.: Signals and Systems, Schaum’s Outlines, 2nd edn. McGraw-Hill, New York (2011)

5. Kailath, T.: Linear Systems. Prentice-Hall, Englewood Cliffs (1980)


6. Zadeh, L., Desoer, C.: Linear System Theory. McGraw-Hill, New York (1963)
Index
A
Aliasing
All-pass decomposition
All-pass system
Amplitude
Amplitude demodulation
Amplitude modulation (AM)
Analog filter design
band-pass
band-stop
Butterworth low-pass filter
Chebyshev analog low-pass filter
type 1 Chebyshev low-pass filter
type 2 Chebyshev filter
elliptic analog low-pass filter
high-pass
low-pass
notch
specifications of low-pass filter
transformations
low-pass to band-pass
low-pass to band-stop
low-pass to high-pass
low-pass to low-pass
Analog filter types comparison
Application examples
Associative property

B
Band-pass filter
Band-stop filter
Basic continuous-time signals
complex exponential function
ramp function
real exponential function
rectangular pulse function
signum function
sinc function
unit impulse function
unit step function
Basic sequences
arbitrary
exponential and sinusoidal
unit sample
unit step
Bessel filter
BIBO stability theorem
Bilateral Laplace transform
Bilinear transformation
Block diagram representation
described by differential equations
Butterworth analog low-pass filter

C
Cauchy’s residue theorem
Causal and stable conditions
Causality
Causality for LTI systems
Causality theorem
Characteristic equation
Chebyshev analog low-pass filter
Classification of signals
analog and digital signals
causal, non-causal and anti-causal signals
continuous time and discrete time signals
deterministic and random signals
energy and power signals
even and odd signals
periodic and aperiodic signals
Commutative property
Complex Exponential Fourier Series
Computation of convolution integral using MATLAB
Computation of convolution sum using MATLAB
Computation of linear convolution
graphical method
matrix method
Conjugate of complex sequence
Continuous Fourier Transform
convergence of Fourier transform
Continuous Fourier transform properties
convolution property
differentiation in frequency
differentiation in time
duality
frequency shifting
integration
linearity
modulation
Parseval’s theorem
symmetry properties
time and frequency scaling
time reversal
time shifting
Continuous time signal
complex exponential Fourier series
convergence of Fourier series
properties of Fourier series
trigonometric Fourier series
symmetry conditions
Continuous-time systems
causal system
invertible system
linear systems
memory and memoryless system
stable system
time–invariant system
Convergence of the DTFT
Convolution integral
associative property
commutative property
distributive property
graphical convolution
Convolution of two sequences
Convolution sum
Convolution theorem
Correlation
Correlation of discrete-time signals
Correlation of two sequences
Correlation theorem

D
Direct form I
Direct form II
Discrete-time Fourier series (DTFS)
Fourier coefficients
multiplication
periodic convolution
symmetry properties
Discrete-time Fourier transform (DTFT)
linearity
Discrete time LTI systems in z-domain
Discrete-time signal
Discrete-time signals classification
energy and power signals
finite and infinite length
periodic and aperiodic
right-sided and left-sided
symmetric and anti-symmetric
Discrete-time system characterization
non-recursive difference equation
recursive difference equation
Discrete-time systems classification
causal
linear
stable
time-invariant
Discrete transformation
Distributive property
Down-sampler

E
Elementary operations on signals
Elliptic analog low-pass filter
Energy and power signals
Examples of real world signals and systems
audio recording system
global positioning system
heart monitoring system
human visual system
location-based mobile emergency services system
magnetic resonance imaging
F
Filtering
Final value theorem
Fourier transform
Fourier transform of discrete-time signals
convergence of the DTFT
properties of DTFT
for a complex sequence
for a real sequence
theorems on DTFT
convolution theorem
correlation theorem
differentiation in frequency
frequency shifting
linearity
Parseval’s theorem
time reversal
time shifting
windowing theorem
Frequency division multiplexing (FDM)
Frequency response computation using MATLAB
Frequency response from poles and zeros
Frequency response of continuous time systems
Frequency response of discrete-time systems
computation using MATLAB
Frequency shifting

G
Generation of continuous-time signals using MATLAB
Graphical convolution

H
Half-wave symmetry
High-pass filter

I
Imaginary part of a sequence
Impulse and step responses
Impulse and step responses computation using MATLAB
Impulse response
Impulse step responses
Initial Value Theorem
Input-output relationship
Interconnected systems
Inverse discrete Fourier transform
Inverse Fourier transform
Inverse Laplace transform
partial fraction expansion
partial fraction expansion using MATLAB
partial fraction expansion with multiple poles
partial fraction expansion with simple poles
Inverse system
Inverse z -transform
Cauchy’s residue theorem
modulation theorem
Parseval’s relation
partial fraction expansion
partial fraction expansion using MATLAB
power series expansion
power series expansion using MATLAB

L
Laplace transform
block diagram representation
definition of
existence of
inter connection of systems
properties
convolution in the frequency domain
convolution in the time domain
differentiation in the s-domain
differentiation in the time domain
division by t
final value theorem
initial value theorem
integration
linearity
properties of even and odd functions
shifting in the s-domain
time scaling
time shifting
region of convergence
finite duration signal
left sided signal
right sided signal
strips parallel to the jΩ axis
two sided signal
region of convergence (ROC)
relationship to Fourier transform
representation of Laplace transform in the s-plane
system function
block diagram representation
interconnection of systems
table of properties
transfer Function
unilateral Laplace transform
differentiation property
Linear constant coefficient difference equations
Linear constant-coefficient differential equations
Linearity
Linearity property of the Laplace transform
Low-pass to band-pass
Low-pass filter
LTI discrete-time systems
LTI systems with and without memory

M
Matrix method
Maximum-phase systems
Minimum-phase decomposition
Minimum-phase systems
Mixed-phase systems
Modulation
Modulation and demodulation
Modulation property
Modulation theorem
Multiplexing and demultiplexing

N
Nonperiodic signals
Non-recursive difference equation
Notch filter

O
One-sided z-transform
properties
shifting theorem
final value theorem
solution of linear difference equations
with initial conditions

P
Parseval’s relation
Parseval’s theorem
Partial fraction expansion
Partial fraction expansion using MATLAB
Particular solution
Periodic convolution
Phase and group delays
Poles and zeros
Pole-zero pairing
Pole-zero placement
Power series expansion using MATLAB
Properties of the convolution integral
Properties of the convolution sum
Properties of the impulse function
sampling property
scaling property
shifting property
Proposition

Q
Quantization and coding
Quantization error

R
Rational transfer function
Rational z -transform
Real part of a sequence
Reconstruction of a band-limited signal from its samples
Recursive difference equation
Region of convergence (ROC)
Representation of signals in terms of impulses

S
Sampling
continuous time signals
discrete time signals
frequency domain
Nyquist rate
Sampling frequency
Sampling in frequency-domain
aliasing
over-sampling
sampling theorem
under sampling
Sampling period
Sampling theorem
Scalar multiplication
S-domain
Single-side-band (SSB) AM
Singularity functions
Solution of difference equations
characteristic equation
complementary solution
using MATLAB
particular solution
Solution of linear differential equations using MATLAB
Stability and causality
Stability and causality of LTI systems in terms of the impulse response
Stability for LTI systems
Step and impulse responses
Step response
Systems
all-pass
causal
inverse
maximum-phase
minimum-phase
mixed-phase
stable

T
Theorems on DTFT
Time-invariant
Time reversal
Time scaling
Time shifting
Transfer function
Transformation

U
Under sampling
Unilateral Laplace transform
differentiation property
Unit doublet
Unit impulse response
Unit ramp
Unit sample sequence
Unit step response
Unit step sequence
Unstable system
Up-sampling

W
Windowing theorem

Z
Zero locations
Zero-order hold
Z-transform
definition of
properties of
region of convergence (ROC)
Z-transform properties
conjugate of a complex sequence
convolution of two sequences
correlation of two sequences
differentiation in the z-domain
imaginary part of a sequence
linearity
real part of a sequence
scaling in the z-domain
time reversal
time shifting
Z-transforms of commonly-used sequences
unit step sequence
