
Telecommunication Concepts

MSc in Software Development Dr. Dirk Pesch

Dr. D. Pesch, CIT, 2002

Introduction
Telecommunication systems functionality based on layered network approach (ISO/OSI Model)
From a telecommunications software perspective layers three to seven (network layer to application layer) are most interesting
Main principles associated with telecommunications networking are switching, routing, management and control
Telecommunications software covers areas of protocol design, management, and applications


Telecommunication Networks
No generally accepted taxonomy into which all communication networks fit
Networks can be classified according to
Transmission technology
Scale

Transmission technology
digital v. analogue
point-to-point v. broadcast
circuit-switched v. packet-switched

With regard to the physical appearance of networks, there is no generally accepted taxonomy into which all networks fit. Many different opinions exist and many classifications have been attempted. Here, we follow Andrew Tanenbaum, who proposes to classify networks according to transmission technology and scale.

Transmission technology refers to whether digital or analogue transmission is used. Most modern communication networks, in particular computer communication networks, use digital transmission technology. However, there are many communication networks in operation that use analogue transmission technology. Those networks provide the plain old telephone service (POTS) and also allow computers to interconnect using modem technology, which converts the digital data signal of a computer into an analogue signal that can be transmitted across an analogue telephone network.

A second aspect of transmission technology is whether networks are point-to-point or broadcast networks. Point-to-point networks connect any two network nodes, such as computers, telephone apparatus, switches, routers, or hubs, with a physical connection. This physical connection can be based on copper, fibre, or radio links. To go from source to destination, data will be routed along a path that can involve one or more intermediate machines. Broadcast networks have a single communication channel that is shared by all network nodes. Communication takes place by one node sending data and all or a group of nodes receiving the data. In the first case we talk about broadcasting, in the latter about multicasting.

In order to transmit data from source to destination, point-to-point networks use two different transmission options. The first option establishes a dedicated route between source and destination along which the information flows. This route is made up of dedicated physical links, which are used solely by the communication service in question. This transmission option is called circuit switching. Alternatively, a logical connection can be established along which the information, in the form of packets of data, is transmitted. The logical connection can either use a physical connection that is shared with others, or many different physical connections can be used depending on circumstances. This transmission option is called packet switching. Packet switching offers two transmission services, connection-oriented and connectionless, which are discussed later in this section.

Scale of Networks
Personal Area Networks Local Area Networks Metropolitan Area Networks Wide Area Networks Internetworks

A personal area network (PAN) is a network in which a number of devices attached or in close proximity to the human body are interconnected to form a very small network. A network consisting of a mobile phone, a personal digital assistant and a wireless handsfree set is an example of a PAN. PANs are a very recent invention and are typically wireless networks in which all communicating devices are connected via short-range wireless links. Currently the wireless networking technology being considered for PANs is Bluetooth, but other types of short-range wireless systems may be used in the future.

A local area network (LAN) is usually privately owned and links the devices in a single office, building, or campus. Depending on the needs of an organisation and the type of technology used, a LAN can be as simple as two PCs and a printer in a home office environment, or it can extend throughout the campus of a company and include voice, sound, and video equipment. A LAN is usually up to a few kilometres in size. LANs are distinguished by (1) their size, (2) their transmission technology, and (3) their topology. An example of a LAN is the well-known Ethernet, which is probably the most common LAN technology for office computer networks.

A metropolitan area network (MAN) is basically a bigger version of a LAN and normally uses similar technology. It might cover a group of nearby corporate offices or a city and might be either private or public. A MAN can support both data and voice, and might even be related to the local television network. A MAN just has one or two cables and does not contain switching elements, which simplifies the design. The main reason for distinguishing MANs as a special class of networks is that a standard has been adopted for them. This standard is called DQDB (Distributed Queue Dual Bus) and is specified in IEEE 802.6. This MAN standard is used to provide Switched Multimegabit Data Service (SMDS) to metropolitan areas. It is widely used in North America and also in some European countries such as Germany, where the service is called Datex-M. However, it is expected that Asynchronous Transfer Mode (ATM) technology will replace DQDB in the near future. ATM will provide corporate backbone networks, which are of the size of a MAN.

A WAN consists of end systems, e.g. a computer (host) or even a mobile terminal (mobile phone), and communication subnets. The job of the subnet is to carry data from end system to end system. In most WANs, the subnet consists of transmission lines and switches. Transmission lines, also called circuits, channels, or trunks, move bits between machines. The switching systems are specialised computers, as outlined above.

Many networks exist in the world, e.g. computer networks, packet data networks, circuit-switched telephone networks, mobile radio networks, etc., often with different hardware and software. People connected to one network often want to communicate with people attached to a different one. For example, a person may want to call a friend, who has a mobile phone, from his/her home telephone. This desire requires connecting together different, and frequently incompatible, networks, sometimes by using machines called gateways to make the connection and provide the necessary translation, very much like an interpreter. A collection of interconnected networks is called an internetwork or just internet. NOTE: This should not be confused with the term Internet, which refers to the global computer network using the TCP/IP protocol. However, the origin of the term Internet is in internetworks, which is what the Internet basically is.


Transmission Modes
Simplex Half-Duplex Duplex


Network Topologies
Mesh topology Star topology Tree topology Ring topology Bus topology Hybrid topology Irregular topology

The term topology refers to the way a network is laid out, either physically or logically. Two or more devices connect to a link; two or more links form a topology.


Mesh Topology

In a mesh topology, every device has a dedicated point-to-point link to every other device. The term dedicated means that the link carries information only between the two devices it connects. A fully connected mesh network therefore has n(n-1)/2 physical channels to link n devices. To accommodate that many links, every device on the network must have n-1 input/output (I/O) ports.

A mesh topology offers advantages over other topologies. First, the use of dedicated links guarantees that each connection can carry its data load, thus eliminating the traffic problems that can occur when more than two devices share a common communication channel. Secondly, a mesh topology is robust: if one link fails, it does not incapacitate the rest of the network. Another advantage is privacy or security. When a message travels along a dedicated line, only the intended recipient sees it. Finally, point-to-point links make fault identification and isolation easy. Traffic can be routed to avoid links with suspected problems. The main disadvantages of a mesh are related to the amount of cabling and the number of I/O ports required. This has implications for the amount of hardware required, the available space for cabling and, finally, the overall cost, which can be prohibitive. Therefore, mesh topologies are often only used in backbone networks, or the mesh provides only a partial connection between devices.
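To make the arithmetic above concrete, here is a small illustrative sketch (not part of the original notes) that tabulates the link and port counts for a few values of n:

```python
# Worked example of the mesh-topology formulas above (illustration only):
# a fully connected mesh of n devices needs n*(n-1)/2 links and n-1 ports per device.
def mesh_requirements(n):
    links = n * (n - 1) // 2        # one dedicated link per pair of devices
    ports_per_device = n - 1        # one I/O port per neighbour
    return links, ports_per_device

for n in (4, 8, 16):
    links, ports = mesh_requirements(n)
    print(f"n={n:2d} devices: {links:3d} links, {ports:2d} ports per device")
# n= 4:   6 links,  3 ports; n= 8:  28 links,  7 ports; n=16: 120 links, 15 ports
```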


Star Topology

[Figure: star topology with devices connected to a central hub/switch.]

In a star topology, each device has a dedicated point-to-point link only to a central controller, e.g. a hub or switch. The devices are not linked to each other. Unlike a mesh topology, a star topology does not allow direct traffic between devices. The controller acts as an exchange: if one device wants to send data to another, it sends the data to the controller, which then relays the data to the other connected devices (see figure above). A star topology is less expensive than a mesh topology. In a star, each device needs only one link and one I/O port. This factor also makes it easy to install and reconfigure. Far less cabling needs to be housed, and additions, moves, and deletions involve only one connection. Other advantages include robustness: if one link fails, only the device connected by that link is affected, and no other parts of the network. Fault finding is also easy. An example of a star configuration is an Ethernet LAN with a hub as the central controller.


Tree Topology

[Figure: tree topology with a central switch connected to secondary switches, each serving a group of devices.]

A tree topology is a variation of a star. As in a star, nodes in a tree are linked to a central controller that controls the data traffic in the network. However, not every device is directly connected to the central hub. The majority of devices are connected to a secondary controller that in turn is connected to the central controller. The advantages and disadvantages of a tree topology are generally the same as for the star. The addition of secondary controllers (switches), however, brings two further advantages. First, it allows more devices to be attached to the central switch and can therefore increase the distance a signal can travel between devices. Secondly, it allows the size of the network to be scaled: when the network grows, a single central controller may easily be overloaded by the number of devices connected, whereas in a tree topology the number of devices attached to an individual controller can be kept within limits. However, if the link between a secondary controller and the central controller fails, the entire subtree will be disconnected from the rest of the network.


Bus Topology - Example 1

[Figure: bus topology: one long cable terminated at both ends (cable ends); devices attach to the cable via drop lines and taps.]

The preceding topologies are all examples of point-to-point transmission technology. A bus topology, on the other hand, is an example of a broadcast technology. One long cable is shared by all devices in the network (see figure above). Nodes are connected to the bus by drop lines and taps. A drop line is a connection running between the device and the main cable. A tap is a connector that either splices into the main cable or punctures the sheathing of the cable to create a contact with the metallic core. Due to the electric resistance of the cable, the distance between two adjacent taps is limited. Also, if a device does not regenerate the signal, the overall length of the cable is limited and thus the size of the network.

Advantages of a bus topology include ease of installation. The backbone cable can be laid along the most efficient path, then connected to the devices by drop lines of various lengths. In this way a bus uses less cabling than the previous topologies. Disadvantages include difficult reconfiguration and fault isolation. A bus is usually designed to be optimally efficient at installation. It can therefore be difficult to add new devices. Signal reflections at taps can degrade signal quality. Also, the mechanism which controls the sharing of the single communication channel among a number of nodes can have a limiting effect on the number of devices that can be connected to a bus.


Bus Topology - Example 2

[Figure: a wireless broadcast network in which the ether acts as a logical bus shared by all radio terminals.]

In the previous example of a bus topology, the bus was the physical communication medium for a number of devices. The example above shows a wireless broadcast network, such as CB (citizens band) radio. Here the ether acts as a logical bus: it is the single communication channel that is shared by all radio terminals.


Ring Topology

In a ring topology, each device has a dedicated point-to-point line configuration only with the two devices on either side of it. A signal, e.g. a message, data, or a packet, is passed along the ring in one direction, from device to device, until it reaches its destination. Each device in a ring incorporates a repeater. When a device receives a signal intended for another device, its repeater regenerates the bits and passes them along (see figure above).


Hybrid Topology

[Figure: hybrid topology in which a ring subnetwork and a bus subnetwork are connected through a central switch/hub, forming a star at the higher level.]

Communication networks often combine several of the basic topologies as subnetworks linked together in a larger topology. For example, one department in a college may decide to use a ring topology based on Token Ring LAN technology, whereas another department uses a bus topology with an Ethernet LAN. The two subnets can be connected to each other by a central controller, which may be a hub or a switch. The higher-level topology created in this way is a star topology (see figure above).


Layered Network Architecture


[Figure: a five-layer architecture. Host 1 and Host 2 each contain layers 1 to 5; peer layers communicate via the layer 1 to layer 5 protocols, adjacent layers via the layer 1/2 to layer 4/5 interfaces, and below layer 1 lies the physical transmission medium.]

In order to reduce the design complexity of networks, they are organised as a series of layers or levels, each one built upon the one below it. The number of layers, the name of each layer, the contents of each layer, and the function of each layer differ from network to network. However, in all networks, the purpose of each layer is to offer certain services to higher layers, shielding those layers from the details of how the offered services are actually implemented.

Layer N on one machine carries on a conversation with layer N on another machine. The rules and conventions used in this conversation are collectively known as the layer N protocol. Basically, a protocol is an agreement between the communicating parties on how communication is to proceed. The key elements of a protocol are:
Syntax: includes such things as the data format, coding and signal levels.
Semantics: includes control information for co-ordination and error handling.
Timing: includes speed matching and sequencing.

A five-layer network is illustrated in the slide above. The entities comprising the corresponding layers on different machines are called peers. In other words, it is the peers that communicate using protocols. In reality, no data are directly transferred from layer N on one machine to layer N on another machine. Instead, each layer passes data and control information to the layer immediately below it, until the lowest layer is reached. Below layer 1 is the physical transmission medium through which actual communication occurs.

Between each pair of adjacent layers there is an interface. The interface defines which primitive operations and services the lower layer offers to the upper layer. It is important in the design of a layer to define clean interfaces, so that it is possible to replace the implementation of one layer by a completely different implementation. A set of layers and protocols is called a network architecture. The specification of an architecture must contain enough information to allow an unambiguous implementation of the functionality of each layer in either software or hardware. The details of the implementation and the specification of the interfaces are not part of the architecture, as they are hidden away inside the machines and are not visible to the outside world.

Information Flow and Protocol Hierarchy


[Figure: information flow in the five-layer example. On the source machine, the message M from layer 5 receives header H4 at layer 4; layer 3 splits the result into M1 and M2, each with a header H3; layer 2 adds header H2 and trailer T2 to each piece before layer 1 transmits them. The destination machine reverses the process, stripping headers and trailers as the data moves up.]

The slide above demonstrates how a message is sent from the top (fifth) layer of one machine to the top layer of the other. A message, M, is produced by the protocol entity in layer 5. This entity may be an application process or an entity providing service to an even higher layer. The message is passed on to layer 4, where a header is put in front of the message to identify it. The header includes control information, such as sequence numbers, to allow layer 4 on the destination machine to deliver messages in the right order if the lower layers do not maintain sequence. In some layers, headers also contain sizes, times, and other control information.

The resulting unit of header and message is passed on to layer 3. In many networks there is no real limit to the size of messages transmitted in the layer 4 protocol, but there is nearly always a limit imposed by the layer 3 protocol. Consequently, layer 3 must break up the incoming message into smaller units, packets, pre-pending a layer 3 header to each packet. In the example above, the data passed from layer 4 to layer 3 is split into two parts, M1 and M2. Layer 3 decides which of the outgoing lines to use and passes the packets to layer 2. Layer 2 adds not only a header to each piece, but also a trailer, and gives the resulting unit to layer 1 for physical transmission. At the destination machine the received data moves upward, from layer to layer, with headers being stripped off and the original message M being recreated as the data progresses. None of the headers or trailers of layer N are passed up to layer N+1.

The important aspect to understand about the example in the slide above is the relation between the virtual and actual communication and the difference between protocols and interfaces. The peer processes in layer 4 think of their communication as being horizontal, using the layer 4 protocol. Each one is likely to have a procedure called SendToOtherSide, even though this procedure actually communicates with the lower layer across the layer 3/4 interface and not with the other side. Even though the reader might have the impression that protocols are implemented in software, the lower layers are frequently implemented in hardware. The functionality of layer 1 is almost always implemented in hardware, often in specially designed ASICs.
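As an informal illustration of the encapsulation described above, the following sketch mimics the five-layer example in Python. The header and trailer strings (H4, H3, H2, T2) and the layer 3 size limit are invented for illustration; real protocols use binary header fields.

```python
# Minimal sketch of the encapsulation in the five-layer example above.
# Header/trailer strings (H4, H3, H2, T2) and the size limit are illustrative only.
MAX_LAYER3_PAYLOAD = 8          # assumed layer 3 size limit (bytes)

def layer4_send(message: bytes) -> bytes:
    return b"H4" + message                      # layer 4 prepends its header

def layer3_send(data: bytes) -> list[bytes]:
    # layer 3 splits the data into packets and prepends H3 to each
    parts = [data[i:i + MAX_LAYER3_PAYLOAD]
             for i in range(0, len(data), MAX_LAYER3_PAYLOAD)]
    return [b"H3" + p for p in parts]

def layer2_send(packet: bytes) -> bytes:
    return b"H2" + packet + b"T2"               # layer 2 adds header and trailer

message = b"HELLO-WORLD"                        # M produced by layer 5
frames = [layer2_send(p) for p in layer3_send(layer4_send(message))]
print(frames)   # [b'H2H3H4HELLO-T2', b'H2H3WORLDT2']
```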

Design Issues for Layers


Addressing
Segmentation and re-assembly
Transmission modes
Error control
Flow control
Routing
Multiplexing
Connection and other management

The concept of addressing in a communication architecture is a complex one and covers a number of issues. At least four separate issues need to be discussed: addressing level, addressing scope, connection identifiers, and addressing mode.

Addressing level refers to the level of the communications architecture at which an entity is named, e.g. end system or intermediate system. Such an address is in general a network level address, for example an IP address in the case of TCP/IP or a network service access point (NSAP). In general, an address identifies a service access point (SAP) in the protocol hierarchy of the network architecture. A second issue of addressing is the addressing scope. An IP address is a globally unique address. In an Ethernet LAN, for example, each Ethernet card is identified by an address which is valid in the sub-network where the card is used. The concept of connection identifiers comes into play when connection-oriented data transfer is considered, e.g. virtual circuits. A connection between the two ends of a sub-network, or between two end-systems, is identified by a connection identifier. The addressing mode distinguishes whether uni-cast, multi-cast, or broadcast communication is used, that is, point-to-point or point-to-multipoint connections.

Segmentation and re-assembly takes place when a higher layer passes data packets to a lower layer which has restrictions on the size of the data segments it can send to its peer entity or to the layer below. An example of this is ATM (asynchronous transfer mode) networks. The ATM layer accepts only chunks of 48 bytes from the layer above, because it processes data in the form of cells of 53 bytes each, with a 5 byte header, which the layer adds itself, and a 48 byte payload with data from the higher layer. In order to make sure that the data packets, which have been segmented, arrive in the right order at the receiving entity, a sequencing function is often used. Each segment is assigned a sequence number. The receiving side can then re-assemble the original data packet in the right order. Sequencing is also used for flow control and error control.
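The segmentation, sequencing and re-assembly just described can be sketched in a few lines. The 48-byte segment size is borrowed from the ATM example above; the header format and everything else is simplified for illustration only.

```python
# Sketch of segmentation and re-assembly with sequence numbers (segment size of
# 48 bytes borrowed from the ATM example above; header format is not modelled).
SEGMENT_SIZE = 48

def segment(packet: bytes):
    """Split a packet into (sequence_number, payload) segments."""
    return [(seq, packet[i:i + SEGMENT_SIZE])
            for seq, i in enumerate(range(0, len(packet), SEGMENT_SIZE))]

def reassemble(segments):
    """Re-assemble regardless of arrival order by sorting on sequence number."""
    return b"".join(payload for _, payload in sorted(segments))

packet = bytes(range(256)) * 2               # a 512-byte packet
segments = segment(packet)
segments.reverse()                            # simulate out-of-order arrival
assert reassemble(segments) == packet
print(f"{len(segments)} segments of up to {SEGMENT_SIZE} bytes re-assembled correctly")
```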

Error control is used to guard against loss or damage of data and control information. The level of error control varies depending on the type of data that is being transmitted. Control data, which is essential for the proper operation of the communication system, must not experience any damage or loss during transmission. Therefore, error control mechanisms make sure that the probability of error is very small. On the other hand, if voice is being transmitted, error control need not be as stringent, as voice communication can sustain some damage or loss of information; the human brain is very good at correcting or replacing lost voice information. Two types of error control can be distinguished, forward error control (FEC) and automatic repeat request (ARQ) error control. The first type adds redundancy to the data that is being sent. This adding of redundancy, also called channel coding, is used to detect and also to correct errors in digital data. However, if more errors were introduced than can be corrected, the received data will remain erroneous. This type of error control is frequently used in voice communication. The second type, ARQ mechanisms, is used for error control of data and control information. Some redundancy is added that allows the receiving side to determine whether errors were introduced. If the receiving side detects that the data is not error free, it requests the sending side to repeat the transmission. In this case errors in the sequencing of segmented data are also covered. A combination of FEC and ARQ mechanisms is used in systems where the physical transmission medium is regarded as highly unreliable. This would be the case in all mobile radio systems.

Flow control is a function performed mainly by the receiving end in order to limit the amount or rate of data that is sent by the transmitting entity. Flow control is used to manage and also shape the data traffic in the communication system and to avoid congestion. The simplest form of flow control is a stop-and-wait procedure, in which each data packet must be acknowledged before the next can be sent. More efficient protocols use a sliding window mechanism, such as HDLC-based protocols.

Routing is a function that is used to determine the transmission path between two end systems across a number of subnets. The transmission route that is established depends on a number of factors, such as traffic intensity and congestion, availability of transmission medium, cost of transmission, transmission delay, and reliability of transmission, among others. Routing functions usually reside in layer 3 of the protocol hierarchy. Routing can be static or dynamic. Static routing is used mainly in connection-oriented data transmission, where a physical or virtual connection is established between two end-systems. Dynamic routing is used in connectionless data transmission, where each data packet carries the destination address and can be routed independently of other data packets between the two end systems.

The concept of multiplexing is related to addressing. One form of multiplexing is supported by means of multiple connections into a single system. For example, a number of virtual connections can terminate in one end system. These virtual connections are transmitted over a single physical channel; they are multiplexed into the physical channel. Besides multiplexing of virtual connections into one physical connection, there can also be logical multiplexing of many logical connections into another logical connection. There are several ways in which multiplexing of multiple virtual connections into a physical connection can take place. The most common forms are based on frequency, time or code multiplexing. The concept of multiplexing will be addressed in detail later.

Connection management is used in connection-oriented data transfer, where a connection has to be established before data can be exchanged and released again when the exchange is complete.

Interfaces and Services


Relationship between layers and interfaces

[Figure: layer N passes its (N)-PDU through the (N-1)-SAP at the interface to layer N-1, where it becomes the (N-1)-SDU; layer N-1 adds (N-1)-PCI to form the (N-1)-PDU.]

The function of each layer is to provide a service for the layer above. The active elements in each layer are called entities. An entity can be a software entity (such as a process) or a hardware entity (such as an I/O chip). Entities in the same layer on different systems are called peer entities. The entities in layer N implement a service used by layer N+1. In this case layer N is called the service provider and layer N+1 the service user. Services are available at Service Access Points (SAPs). The layer N SAPs are the places where layer N+1 can access the services offered. Each SAP has an address that uniquely identifies it. As an example, the SAPs in the telephone system are the sockets into which telephone apparatus are plugged, and the SAP addresses are the telephone numbers of these sockets. To call someone, one must know the callee's SAP address.

In order for two layers to exchange information, there has to be an agreed-upon set of rules about the interface. The standard convention in the layered model is that the layer N+1 entity passes its Protocol Data Unit (PDU) to the layer N entity through the layer N SAP. From the point of view of layer N this unit is a Service Data Unit (SDU); the layer N entity adds Protocol Control Information (PCI), needed to perform the operation of the layer protocol, and the combination of PCI and SDU forms the layer N PDU. The SDU may also be accompanied by Interface Control Information (ICI), which may be needed by the layer N entity. In order to transfer the SDU, the layer N entity may fragment it into several pieces, each of which is given a header and sent as a separate PDU, such as a packet.


Connection-Oriented and Connectionless Services

Connection-Oriented Service
modelled after the telephone network
a connection acts like a tube

Connectionless Service
modelled after the postal system
each message (packet, cell) carries the full destination address

Quality of Service

Layers can offer two types of service to the layers above: connection-oriented and connectionless service. To use a connection-oriented service, the service user first requests the establishment of a connection, uses the connection for information exchange, and then releases the connection. The essential aspect of the connection is that it acts like a tube: the sender pushes objects (bits) in one end, and the receiver takes them out in the same order at the other end. In contrast, a connectionless service does not first establish a connection. Each message carries the full destination address and is routed through the system independently of other messages. Normally, the message sent first will arrive first. However, it is possible for messages to overtake each other. With a connection-oriented service this is impossible.

Each service can be characterised by a quality of service. Some services are reliable in the sense that they never lose data. Usually, a reliable service is implemented by having the receiver acknowledge the receipt of each message, so that the sender is sure it has arrived. The acknowledgement process introduces overhead and delays, which are often worth the effort but are sometimes undesirable. An application where such delays are unacceptable is digitised voice or video traffic (in general, any real-time traffic). It is preferable for telephone users to hear some noise in the background than to wait for acknowledgements of delivered voice frames.
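In the Internet protocol suite, TCP provides a connection-oriented service and UDP a connectionless one. The sketch below uses Python's standard socket API on the loopback interface, with both ends in a single process purely for convenience; it is only meant to contrast the two usage patterns, not to model the layer structure.

```python
# Sketch contrasting connection-oriented and connectionless service using the
# standard Python socket API on the loopback interface (both ends in one process).
import socket

# Connectionless (datagram) service: each message carries the full destination address.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))                       # let the OS pick a free port
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"datagram", rx.getsockname())        # no connection set-up
print("UDP received:", rx.recvfrom(1024)[0])

# Connection-oriented service: establish the connection, use it, then release it.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())            # connection establishment
conn, _ = server.accept()
client.sendall(b"stream of bytes")              # data flows through the 'tube'
print("TCP received:", conn.recv(1024))
for s in (tx, rx, client, conn, server):        # connection release / clean-up
    s.close()
```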


Not all applications require connections. For example, electronic junk mail delivery may become common for advertising purposes on the Internet some day. The junk mail sender may not want to go through the trouble of setting up and later tearing down a connection to send just one item to hundreds of users. Furthermore, 100 percent reliability may not be required for this service. All that is needed is a high probability that the junk mail will reach its destination. Unreliable connectionless service is often called datagram service, in analogy with telegram service, which also does not provide an acknowledgement back to the sender. Still another service is the request-reply service. In this service the sender transmits a single datagram containing a request; the reply contains the answer. For example, a query to the local library asking whether Andrew Tanenbaum's book Computer Networks is available falls into this category. The request-reply service is commonly used to implement communication in the client-server model: the client issues a request and the server responds to it. The table below summarises the most common types of services.

Connection-oriented:
Reliable message stream: sequence of pages
Reliable byte stream: remote login, file transfer
Unreliable connection: digitised voice/video
Connectionless:
Unreliable datagram: electronic junk mail
Acknowledged datagram: registered mail
Request-reply: database query


Service Primitives

Service is formally specified by primitives (operations) Four classes of primitives


Request Indication Response Confirm

A service is specified by primitives available to a user or other entity to access the service. These primitives tell the service to perform some action or report on an action taken by a peer entity. One way to classify the service primitives is to divide them into four classes as shown in the table below.

Request: an entity wants the service to do some work
Indication: an entity is to be informed about an event
Response: an entity wants to respond to an event
Confirm: the response to an earlier request has come back


Service Primitives - Example

Connection Establishment
[Figure: connection establishment using service primitives. Layer N in System A issues CONNECT.request and later receives CONNECT.confirm; layer N in System B receives CONNECT.indication and answers with CONNECT.response; the exchange between the systems is carried by the layer N-1 entities.]

To illustrate the use of primitives, consider how a connection between layers in two different systems is established. The initiating entity in layer N of System A requests the underlying layer N-1 to establish a connection by issuing a CONNECT.request primitive. This results in a message being sent by the layer N-1 entity in System A to layer N-1 in System B. The CONNECT service in layer N-1 of System B notifies layer N of the establishment request by issuing a CONNECT.indication. Layer N uses the CONNECT.response primitive to tell layer N-1 whether it wants to accept or reject the proposed connection. The layer N-1 entity in System B sends a message to the layer N-1 entity in System A with the response of the layer N entity in System B. The entity in layer N-1 of System A then informs the requesting layer N entity of the outcome of the connection establishment with a CONNECT.confirm primitive.

Most primitives can have parameters, which specify addresses, service types, maximum message sizes, caller identity, and a reject or accept field. The values of these parameters may differ between the two sides of the connection establishment; a form of negotiation takes place, and the details are part of the protocol. Services can either be confirmed or unconfirmed. In a confirmed service there is a request, indication, response, and confirm. In an unconfirmed service, there is just a request and an indication. An example of a confirmed service is the above connection establishment. An example of an unconfirmed service is data exchange on an established connection, which typically uses the primitives DATA.request and DATA.indication.
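The confirmed CONNECT service above can be mimicked with a few classes. The class and message names below are invented for illustration; a real layer entity would exchange protocol messages over a link rather than call its peer directly.

```python
# Minimal sketch of a confirmed CONNECT service using the four primitives above.
class LayerNMinus1:
    """Service provider: a layer N-1 entity that turns primitives into peer messages."""
    def __init__(self, name):
        self.name, self.peer, self.user = name, None, None

    def connect_request(self):                   # CONNECT.request issued by the service user
        self.peer.deliver("CONNECT")

    def connect_response(self, accept):          # CONNECT.response issued by the service user
        self.peer.deliver("ACCEPT" if accept else "REJECT")

    def deliver(self, message):                  # message arriving from the peer entity
        if message == "CONNECT":
            self.user.connect_indication()       # CONNECT.indication to the service user
        else:
            self.user.connect_confirm(message == "ACCEPT")   # CONNECT.confirm


class LayerN:
    """Service user sitting on top of a layer N-1 entity."""
    def __init__(self, provider, accept_connections=True):
        self.provider, self.accept = provider, accept_connections
        provider.user = self

    def connect_indication(self):
        print(f"{self.provider.name}: CONNECT.indication, responding")
        self.provider.connect_response(self.accept)

    def connect_confirm(self, accepted):
        print(f"{self.provider.name}: CONNECT.confirm, accepted={accepted}")


a, b = LayerNMinus1("System A"), LayerNMinus1("System B")
a.peer, b.peer = b, a
user_a, user_b = LayerN(a), LayerN(b)
a.connect_request()       # confirmed service: request -> indication -> response -> confirm
```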


Relationship of Services to Protocols

Services and protocols are distinct concepts, although they are frequently confused. A service is a set of primitives (operations) that a layer provides to the layer above. The service defines what operations the layer is prepared to perform on behalf of its users, but it says nothing at all about how these operations are implemented. A service relates to an interface between two layers, the Service Access Point (SAP), with the lower layer being the service provider and the upper layer the service user. A protocol, in contrast, is a set of rules governing the format and meaning of the messages, frames, or packets that are exchanged by peer entities within a layer of two different systems. Entities use protocols in order to implement their service definitions. They are free to change their protocols, provided they do not change the service that is visible to the user. In this way the service and the protocol are completely decoupled.

There is a strong analogy with programming languages, in particular object-oriented languages. A service relates to an object: it defines operations that can be performed on the data of an object but does not specify how these operations are implemented. A protocol relates to the implementation of an object's operations and as such is hidden from the user.


The ISO/OSI 7 Layer RM


International Organization for Standardization (ISO) Open Systems Interconnection (OSI) Reference Model
[Figure: the seven OSI layers (Application, Presentation, Session, Transport, Network, Data Link, Physical) on two communicating systems, with a peer protocol at each layer and the physical transmission medium below the physical layer.]


The Internet (TCP/IP) RM


5 Layer Reference Model
Host-to-network layer (layers 1 and 2)
Physical layer Multiple Access sublayer Link layer

Subnet (Internet) layer Transport layer Application layer


The Physical Layer


Transmission of raw bits over a communication channel DAC/ADC Modulation Voltage levels Electrical interfaces Mechanical connections Properties of the physical transmission medium

The physical layer is concerned with transmitting raw bits over a communication channel. The design issues are basically to make sure that when one side sends a 1 bit, it is received as a 1 bit and not as a 0 bit. Typical characteristics of physical layers are how many volts should be used to represent a 1 and how many for a 0, how many microseconds a bit lasts, whether transmission may proceed simultaneously in both directions, how the initial connection is established and how it is torn down when both sides are finished, and how many pins the network connector has and what each pin is used for. The design issues in the physical layer deal largely with mechanical, electrical, and procedural interfaces, and the physical transmission medium, which lies below the physical layer.


The Data Link Layer


Transform a raw data transmission facility into a reliable (error free) link for the network layer Data framing Addressing Flow control Error detection and correction (recovery) Synchronisation Multiple access control (for broadcast/multipoint channels)

The main task of the data link layer is to take the raw transmission facility provided by the physical layer and transform it into a communication line that appears free of undetected transmission errors to the network layer. It accomplishes this task by having the sender break the input data up into data frames (typically a few hundred or a few thousand bytes), transmit the frames sequentially, and process the acknowledgement frames sent back by the receiver. Data link layer frames include control information for synchronisation, link management and error detection and correction. Another issue that arises in the data link layer (and most of the higher layers as well) is flow control. Flow control stops a slow receiver from being drowned in data. This requires some form of traffic regulation mechanism. In the data link layer, flow control and error handling are often integrated. Broadcast networks have an additional issue in the data link layer: how to control access to the shared communication channel. A special sublayer of the data link layer, the medium access sublayer, deals with this problem.


The Network Layer


The network layer controls the operation of the subnet Routing Congestion control Logical addressing Address transformation Interfacing between heterogeneous networks

The network layer is concerned with controlling the operation of the subnet. A key design issue is determining how information is routed from source to destination. Routes can be based on static tables that are wired into the network and rarely change. They can also be determined at the start of each conversation, or they can be highly dynamic and change with every packet in order to reflect the network load. If too many users are using the network at the same time, congestion can result; the control of congestion is also part of the network layer's tasks. When information travels from one network to another to get to its destination, addressing needs to be taken into account. This requires translation of local addresses between the two networks and some form of interfacing between them. The function of accounting also comes into the picture at network boundaries, since all involved operators would like to get a share of the bill. In broadcast networks, the routing problem is simple, so the network layer is often thin or non-existent.


The Transport Layer


Source-to-destination (end-to-end) delivery of the entire information (data stream) End-to-end message delivery across one or more subnets Service-point (port) addressing Segmentation and reassembly Multiplexing Connection control

The basic function of the transport layer is to accept data from the session layer, split it up into smaller units if need be, pass these to the network layer, and ensure that the pieces all arrive correctly at the other end. In this way, the transport layer provides a true end-to-end connection. The lower layers establish connections only to their immediate neighbours, whereas a transport layer connection can span several networks and network layers. Under normal conditions, the transport layer creates a distinct network connection for each transport connection required by the session layer. If the transport connection requires high throughput, however, the transport layer might create multiple connections, dividing the data among the network connections to improve throughput. On the other hand, network connections can be expensive and the transport layer might multiplex several connections onto the same network connection to reduce cost. In all cases the transport layer is required to make multiplexing transparent. The transport layer also determines what kind of service to provide to the session layer. This can be connection-oriented or connectionless. Many hosts allow multiple connections to enter and leave the host. There needs to be some form of service point addressing in order to tell which information belongs to which connection. In order to maintain end-to-end connectivity the transport layer requires functionality to establish, maintain and release connections across the network. This requires some form of naming or addressing. There is also an element of flow control in the transport layer in order to control the data flow across a network with possibly links of higher and lower speed.


The Session Layer


(Only exists in the OSI RM) Establish sessions between users on different machines Session management Dialogue control Token management Synchronisation

The session layer allows users on different machines to establish sessions between them. A session allows ordinary data transport, as does the transport layer, but it also provides enhanced services useful in some applications. A session might be used to allow a user to log into a remote timesharing system or to transfer a file between two machines. One of the services of the session layer is to manage dialogue control. Sessions can allow traffic to go in both directions at the same time, or in only one direction at a time. If half-duplex transmission is used, the session layer keeps track of whose turn it is. A related session service is token management. For some protocols, it is essential that both sides do not attempt the same operation at the same time. To manage these activities, the session layer provides token exchange. Another session service is synchronisation. Consider a two-hour file transfer between two machines with a one-hour mean time between crashes. In order to avoid having to start the whole transfer again from the beginning, the session layer inserts checkpoints at which data transmission can resume after a crash.


The Presentation Layer


(Only exists in the OSI RM) Layer ensures interoperability from a syntactical and semantics point of view Translation Encryption Compression Security

The presentation layer, unlike all lower layers, which are just interested in moving bits around networks reliably, is concerned with syntax and semantics of the information transmitted. The functions provided by the presentation layer include translation of characters between two code systems, for example between ASCII and Unicode, encryption of sensitive data for security purposes, and compression of data in order to reduce bandwidth requirements.


The Application Layer


Enables the user, whether human or software, to access and use the communication network Network virtual terminal File access, transfer, and management Mail services, Directory services Hypertext transfer (world wide web) Control signalling applications in telecommunication networks call/session establishment, maintenance, release call related and independent supplementary services

The application layer contains a variety of protocols that are commonly needed. In computer networks, typical application layer protocols are Telnet, FTP, X.400 messaging, and the X.500 directory service.


A Critique of OSI RM
Pro
The layered concept simplifies design and implementation and the general concept is used in most data and computer communication networks

Cons
The OSI reference model is not a generally suitable model for communication networks
The architecture of many real networks cannot easily be mapped onto the OSI RM
The layer protocols recommended for the OSI RM are too generic and complex for many implementations
The functionality of many layers is not needed in real networks
The OSI model does not deal well with the concept of planes, which is used in many modern data communication networks


A Critique of Internet RM
Pro
Internet protocols are well thought out and can be efficiently implemented
Internet protocols and networks have proven to be extremely useful, and telecommunications is in fact moving towards a unifying adoption of the Internet protocols

Cons
Internet RM is not a general definition of a layered network architecture and as such not suitable to describe any other network
Internet RM is not well defined in terms of service, interface, and protocol and therefore not suitable as a guide to designing new networks
Some layers within the Internet RM do not distinguish between an interface and a layer well enough
Internet RM does not define the functionality of the physical and data link layers well enough for network design


Communication Protocols
Layers in layered network architecture contain peer processes Peer processes
have a common objective, which is achieved through processing and information exchange
communicate through lower layers
consist of an algorithm, which is implemented as a distributed algorithm or protocol

Communication Protocols are distributed algorithms implemented by two or more peer processes to provide a communication facility to higher layers


Problems of Distributed Algorithms

[Figure: the two-army problem. The red army separates blue army 1 and blue army 2, which can communicate only by sending a messenger through the red army's lines.]

As indicated above, a communication protocol is an implementation of a distributed algorithm. In order to gain some insight into the problems associated with distributed algorithms, we examine the above example involving unreliable communication, which has in fact no solution. There are three armies, two coloured blue and one red. The red army separates the two blue armies. If the two blue armies attack at the same time, they win over the red army, but due to the red army's strength, they lose if they attack independently. The only communication between the two blue armies is by sending a messenger through the red army's lines. There is a possibility that the messenger will be captured, causing the message to go undelivered.

The blue armies would like to synchronise their attack at some given time but are unwilling to attack unless assured with certainty that the other will also attack. Thus, the first blue army might send a message saying "Let's attack on Monday noon; please acknowledge if you agree." The second blue army, receiving such a message, might send a return message saying "We agree; please send an acknowledgement if you receive our message." It is not hard to see that this strategy leads to an infinite sequence of messages, with the last army to send a message being unwilling to attack until obtaining a commitment from the other side. It is in fact more surprising that no strategy exists for the two armies to synchronise. One may try to convince oneself that this is the case by going through the situation presented above. What you are likely to encounter in this simple thought experiment is that it is difficult to convince oneself that there is no solution to the problem. This is so because we are generally not used to dealing with distributed decision making based on distributed information. If the above conditions are relaxed so as to require only a high probability of simultaneous attack, the problem can be solved. How?

Fortunately, most problems in real communication networks do not require simultaneous agreement. Typically, what is required is for one peer process to enter a given state with the assurance that the other peer process will eventually enter a corresponding state. Some acknowledgement may be required for this, but a deadlock situation as in the above example is avoided.

Error and Flow Control


Communication links are fundamentally unreliable to a greater or lesser extent
In order to provide a reliable communication facility, mechanisms to detect and correct transmission impairments have to be introduced
Provision of a reliable communication facility will also cause some overhead on top of the actual data that is to be transmitted
The two communicating parties are required to adhere to common rules of communication
Typically the Data Link Control layer provides the means for reliable communication


Data Link Control Layer


[Figure: the network layers on both sides exchange packets; the Data Link Control layers exchange frames (header H, data, trailer T) over the virtual synchronous but unreliable bit pipe provided by the physical layers/interfaces across the communication link.]


Objectives for Data Link Control


Frame Synchronisation Flow control Error detection and correction (error control) Addressing Framing
Control information and user data transmission on the same link

Link Management


Error Control
In order to provide reliable communication we must be able to
detect and correct any transmission errors

How can this be achieved?


Error detection
Add information to the data that will allow the receiver to detect bit errors

Error correction
Add information to the data that allows bit errors to be corrected
Repeat sending the data until it is received error-free


Error Detection
Error detection techniques are based on adding redundancy to data messages Strategy
partition the data into blocks of n bits
depending on the n bit sequence, add k additional check bits according to some algorithm
apply the algorithm at the receiver to detect whether the n bits were received without bit error (see the parity-check sketch below)
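The simplest instance of this strategy is a single parity bit, i.e. n data bits and k = 1 check bit. The sketch below is illustrative only; note that a single even-parity bit detects any odd number of bit errors but not an even number.

```python
# Sketch of the simplest case above: n data bits plus k = 1 check bit (even parity).
def add_even_parity(bits):
    """Append one check bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def parity_ok(codeword):
    """Receiver side: an odd number of 1s means a bit error was detected."""
    return sum(codeword) % 2 == 0

block = [1, 0, 1, 1, 0, 1, 0]           # n = 7 data bits
codeword = add_even_parity(block)       # k = 1 redundant bit appended
print(codeword, parity_ok(codeword))    # -> [1, 0, 1, 1, 0, 1, 0, 0] True

codeword[2] ^= 1                        # introduce a single bit error
print(parity_ok(codeword))              # -> False (error detected)
```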


Error Detection Strategies


Parity Check codes Cyclic Redundancy Check (CRC) codes Block codes
BCH codes other block codes

Convolutional codes


Forward Error Control


Error detection and correction strategy Used in cases where re-transmission is not an option due to real-time constraints Error correction by means of adding redundancy Two main types of FEC
Block codes Convolutional codes


Standard CRC Polynomials


16 bit
CRC-16: P(X) = X^16 + X^15 + X^2 + 1
CRC-CCITT: P(X) = X^16 + X^12 + X^5 + 1

32 bit
CRC-32: P(X) = X^32 + X^26 + X^23 + X^22 + X^16 + X^12 + X^11 + X^10 + X^8 + X^7 + X^5 + X^4 + X^2 + X + 1


Implementation of CRC
Typically CRC checks are implemented in digital logic on integrated circuits together with other DLC and physical layer functions Implementation based on XOR gates + shift register
the register contains n bits, equal to the length of the FCS
up to n XOR gates
the presence or absence of a gate corresponds to a 1 or a 0 in P


Example CRC Shift Register

[Figure: a CRC shift register with five stages C4 to C0 and XOR gates placed according to P; the message is shifted in at IN and the FCS is shifted out at OUT.]

M = 1010001101, P = 110101, FCS = ?

M(X) = X^9 + X^7 + X^3 + X^2 + 1
P(X) = X^5 + X^4 + X^2 + 1
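The FCS left open above can be worked out by modulo-2 long division. The sketch below (not from the notes) performs the division directly rather than modelling the shift register, but it produces the same frame check sequence; hand-checking the division gives 01110.

```python
# Sketch of CRC generation by modulo-2 (XOR) long division for the example above
# (M = 1010001101, P = 110101, so the FCS is len(P) - 1 = 5 bits).
def mod2_remainder(dividend: str, divisor: str) -> str:
    """Binary polynomial long division; returns the remainder as a bit string."""
    bits = [int(b) for b in dividend]
    div = [int(b) for b in divisor]
    for i in range(len(bits) - len(div) + 1):
        if bits[i]:                              # subtract (XOR) the divisor where a 1 leads
            for j, d in enumerate(div):
                bits[i + j] ^= d
    return "".join(str(b) for b in bits[-(len(div) - 1):])

M, P = "1010001101", "110101"
fcs = mod2_remainder(M + "0" * (len(P) - 1), P)  # divide M shifted left by 5 bit positions
print("FCS =", fcs)                              # should print: FCS = 01110
# Receiver check: the transmitted frame M + FCS leaves no remainder.
print("remainder of M+FCS:", mod2_remainder(M + fcs, P))   # should be 00000
```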


Flow Control
Mechanism to control the speed of transmission of data by the sender according to the reception capacity (buffer space) of the receiver
Flow control is based on sequential transmission of frames
Two main types of flow control are used
stop-and-wait sliding window


Stop-and-wait Flow Control


[Figure: stop-and-wait timing between transmitter T and receiver R. Frame transmission starts at t0 and is complete at t0 + 1; the frame reaches the receiver from t0 + a and is fully received at t0 + 1 + a; the acknowledgement is back at the transmitter at t0 + 1 + 2a, where a is the propagation delay normalised to the frame transmission time.]


Sliding-window Flow Control


[Figure: sliding-window flow control with 3-bit sequence numbers (0 to 7, repeating).

Transmitter view: frames already transmitted, the last frame transmitted, and the window of frames that may be transmitted; the window shrinks as frames are sent and expands as acknowledgements are received.

Receiver view: frames already received, the last frame acknowledged, and the window of frames that may be accepted; the window shrinks as frames are received and expands as acknowledgements are sent.]


Sliding-window Flow Control


Utilisation of Sliding Window Flow Control as Function of Window Size
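The plot itself is not reproduced here. For reference, the sketch below evaluates the textbook error-free utilisation formulas that such plots are normally based on, where a is the propagation delay normalised to the frame transmission time and W is the window size; stop-and-wait corresponds to W = 1.

```python
# Sketch of the standard error-free utilisation relationships for flow control:
# a = normalised propagation delay, W = sliding-window size in frames.
def stop_and_wait_utilisation(a: float) -> float:
    return 1 / (1 + 2 * a)                 # one frame delivered per 1 + 2a time units

def sliding_window_utilisation(W: int, a: float) -> float:
    return 1.0 if W >= 2 * a + 1 else W / (2 * a + 1)

for a in (0.1, 1, 10):
    print(f"a={a:5}: stop-and-wait {stop_and_wait_utilisation(a):.2f}, "
          f"W=7 {sliding_window_utilisation(7, a):.2f}, "
          f"W=127 {sliding_window_utilisation(127, a):.2f}")
```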


Automatic Repeat Request


Error correction through retransmission: backward error correction
Three main types of automatic repeat request (ARQ) strategies
Stop-and-wait ARQ Go-back-N ARQ Selective-repeat ARQ

Retransmission based on flow-control mechanism to avoid overloading of receive buffer


Stop-and-Wait ARQ
Based on stop-and-wait flow control
Two types of errors are considered (see the sketch after this slide):
the frame arrives damaged: no ACK is sent, the timer at the transmitter expires and the frame is resent
the ACK from the receiver is damaged and the transmitter resends the same frame; in order to avoid confusion, frames and ACKs are alternately marked 0 and 1

ARQ scheme is simple but not very efficient
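As an illustration of the behaviour just described, here is a minimal simulation sketch (invented for these notes, with a random lossy channel and an implicit timeout): a frame is resent until its alternating sequence bit is acknowledged, and duplicates are suppressed at the receiver.

```python
# Minimal sketch of stop-and-wait ARQ with an alternating 0/1 sequence bit.
# The lossy channel and timeout handling are heavily simplified for illustration.
import random

random.seed(1)
LOSS_PROBABILITY = 0.3

def unreliable_send(item):
    """Return the item, or None if the frame/ACK was damaged or lost."""
    return None if random.random() < LOSS_PROBABILITY else item

def send_messages(messages):
    seq = 0                                        # sender's current sequence bit
    expected = 0                                   # receiver's expected sequence bit
    delivered = []
    for payload in messages:
        while True:                                # retransmit until acknowledged
            frame = unreliable_send((seq, payload))
            ack = None
            if frame is not None:                  # frame arrived undamaged
                if frame[0] == expected:           # new frame: deliver it
                    delivered.append(frame[1])
                    expected ^= 1
                ack = unreliable_send(frame[0])    # the ACK may itself be lost
            if ack == seq:                         # correct ACK: move to next frame
                seq ^= 1
                break
            # otherwise the sender's timer expires and the frame is resent
    return delivered

print(send_messages(["a", "b", "c", "d"]))         # -> ['a', 'b', 'c', 'd']
```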


Stop-and-Wait ARQ - Example


Go-back-N ARQ
Improves efficiency by adopting the sliding-window flow control mechanism
N denotes the length of the sliding window
RR denotes ACK, REJ denotes NACK
Principle (see the sketch below):
When a frame in error is received, the destination sends a REJ and discards the erroneous frame and all following frames until the frame in error is correctly received
Upon receipt of the REJ, the transmitter must retransmit the erroneous frame and all frames that were sent in the meantime
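A rough sketch of this principle follows. It is not a faithful protocol implementation: the channel behaviour is scripted (frame 2 is received damaged exactly once) and acknowledgements are processed only after a full window has been sent, purely so that the go-back behaviour is visible in the trace.

```python
# Minimal sketch of the go-back-N principle: on REJ i the sender goes back and
# retransmits frame i and everything sent after it.
N = 4                                     # window size
frames = ["f0", "f1", "f2", "f3", "f4", "f5"]
damage_once = {2}                         # the receiver sees frame 2 damaged once

delivered, trace = [], []
expected = 0                              # receiver: next in-order frame expected
next_seq = 0                              # sender: first frame of the next window

while expected < len(frames):
    # sender transmits a full window starting at the oldest unacknowledged frame
    window = list(range(next_seq, min(next_seq + N, len(frames))))
    trace.append(f"A sends {window}")
    reject = None
    for seq in window:
        if seq in damage_once:                    # damaged frame arrives at B
            damage_once.discard(seq)
            reject = seq                          # B sends REJ seq
            break                                 # B discards all later frames
        if seq == expected:
            delivered.append(frames[seq])
            expected += 1
    if reject is not None:
        trace.append(f"B sends REJ {reject}")
        next_seq = reject                         # go back: resend from the rejected frame
    else:
        next_seq = expected                       # RR: the window slides forward

print(delivered)        # ['f0', 'f1', 'f2', 'f3', 'f4', 'f5']
print(trace)            # frames 2 and 3 appear in two windows: sent again after the REJ
```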


Go-back-N ARQ Operation


Damaged Frame received
1. A transmits frame i. B detects an error but has received frame (i-1) correctly. B sends REJ i, and A retransmits frame i and all subsequent frames.
2. Frame i is lost in transit. A then sends frame (i+1); B receives it out of order and sends REJ i.
3. Frame i is lost and A sends no further frames. B receives nothing and sends neither RR nor REJ. The timer at A expires and A sends an RR frame with the poll bit P = 1. B replies with an RR indicating the next frame it expects, and A resends frame i.


Go-back-N ARQ Operation


Damaged RR
1. B receives frame i and sends RR (i+1), which is lost. A may receive an RR for a subsequent frame before its timer expires, in which case no error handling is needed. Otherwise A's timer expires and A transmits an RR with the poll bit set, as in the case before. If the RR response from B repeatedly fails, A tries again a number of times and then initiates a link reset.
2. Damaged REJ: A receives a damaged REJ and acts as in the case of a damaged RR.


Go-back-N ARQ


Selective-Reject ARQ Operation


Based on a sliding window flow control mechanism in a similar fashion to go-back-N
Only damaged frames are retransmitted; retransmission is requested by sending SREJ
this is more efficient, but the receiver has to maintain a buffer large enough to save the frames received after an SREJ
the transmitter must be able to send frames out of sequence
the receiver must be able to re-order out-of-sequence frames

A problem occurs with selective-repeat if the window size is too large


Selective-Reject and Small Window Size


The window size may be at most half the range of sequence numbers (see the sketch below)
station A sends frames 0 to 6 to station B
station B receives all 7 frames and cumulatively acknowledges with RR 7
because of noise, RR 7 is lost; A times out and retransmits frame 0
B has already advanced its receive window to accept frames 7, 0, 1, 2, 3, 4, 5. It therefore assumes that frame 7 has been lost and that this is a new frame 0, which it accepts
The problem with this scenario is that there is an overlap between the sending and receiving windows. To overcome the problem, the window size should be no more than half the range of sequence numbers.
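The overlap described above can be checked mechanically. In the sketch below, k is the number of sequence-number bits; the structure is invented purely for illustration.

```python
# Sketch of the window-size rule using the scenario above: k = 3 bit sequence
# numbers (0..7). A window is safe only if the receiver's new window cannot
# overlap with sequence numbers the sender may still retransmit.
k = 3
modulus = 2 ** k

def window(start, size):
    return {(start + i) % modulus for i in range(size)}

for size in (7, modulus // 2):                  # too large vs. maximum safe size
    old = window(0, size)                       # frames 0..size-1 already sent by A
    new = window(size % modulus, size)          # B's receive window after RR size
    overlap = old & new
    print(f"window {size}: overlap {sorted(overlap) or 'none'} "
          f"-> {'ambiguous' if overlap else 'safe'}")
# window 7: overlap [0, 1, 2, 3, 4, 5] -> ambiguous
# window 4: overlap none -> safe
```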


Selective-Reject ARQ


Utilisation for various ARQ Schemes (Pb = 10^-3)


Framing
Information in data and computer communication links is typically sent in chunks of finite size called packets or frames
The task of framing is to flag the start and end of a frame so that the receiving end can identify where successive frames start and end
Three protocols are in use for framing:
character oriented framing
bit oriented framing
length oriented framing


Character-based Framing
Character codes such as ASCII provide binary representations of communication control characters
SYN (synchronous idle) is such a character; it is used when the DLC has nothing to send
STX (start of text) and ETX (end of text) are used to indicate the start and end of a frame
Practical character-oriented framing protocols such as the IBM binary synchronous communication system (BSC) are more complex


Character-based Frame - Example

Frame SYN SYN STX Header Packet ETX CRC SYN

SYN = Synchronous Idle STX = Start of Text ETX = End of Text CRC = Cyclic Redundancy Check


Transparent Mode
The DLE (data link escape) character is inserted to indicate the start of transparent mode
DLE is inserted before STX to indicate the start of a frame
DLE is not inserted if STX or ETX are part of the data field
DLE is also inserted to indicate the appearance of DLE in the data field
DLE STX is the start of a frame; DLE DLE STX is the appearance of DLE STX in the data field
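A minimal sketch of this DLE stuffing follows; the control characters use the usual ASCII codes, and the framing is reduced to DLE STX ... DLE ETX without header or CRC fields.

```python
# Sketch of DLE stuffing in transparent mode: every DLE occurring in the data
# field is doubled, so the receiver can tell data bytes from framing sequences.
DLE, STX, ETX = b"\x10", b"\x02", b"\x03"

def frame(data: bytes) -> bytes:
    stuffed = data.replace(DLE, DLE + DLE)        # escape DLEs inside the data
    return DLE + STX + stuffed + DLE + ETX        # DLE STX ... DLE ETX

def unframe(frame_bytes: bytes) -> bytes:
    body = frame_bytes[len(DLE + STX):-len(DLE + ETX)]
    return body.replace(DLE + DLE, DLE)           # undo the stuffing

payload = b"ABC" + DLE + STX + b"DEF"             # data that happens to contain DLE STX
assert unframe(frame(payload)) == payload
print(frame(payload))
```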

Dr. D. Pesch, CIT, 2002

67

Bit-oriented Framing
In bit-oriented framing a flag, i.e. a known sequence of bits, marks the start and end of a frame. Typically the flag is encoded as 01111110. To avoid the sequence 01111110 appearing within the data field, bit stuffing is used: the transmitter inserts a 0 after each run of five consecutive 1s, and the receiver deletes the 0 that follows a run of five 1s. If a 1 follows a run of five 1s instead, the frame is declared finished. Some DLC implementations use a sequence of seven 1s as an abnormal termination of a frame, and a sequence of fifteen 1s indicates that the link is idle.
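The stuffing rule can be sketched in a few lines. In the fragment below, strings of '0'/'1' characters stand in for the bit stream, and the receiver side is simplified to flag removal plus destuffing (abort and idle sequences are not handled):

```python
FLAG = "01111110"

def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s, then add flags."""
    out, ones = [], 0
    for b in bits:
        out.append(b)
        ones = ones + 1 if b == "1" else 0
        if ones == 5:
            out.append("0")     # stuffed bit
            ones = 0
    return FLAG + "".join(out) + FLAG

def bit_unstuff(frame: str) -> str:
    """Strip the flags and delete the 0 that follows every run of five 1s."""
    assert frame.startswith(FLAG) and frame.endswith(FLAG)
    bits, out, ones, skip = frame[len(FLAG):-len(FLAG)], [], 0, False
    for b in bits:
        if skip:                # this is the stuffed 0 in a well-formed frame
            skip = False
            ones = 0
            continue
        out.append(b)
        ones = ones + 1 if b == "1" else 0
        if ones == 5:
            skip = True
    return "".join(out)

data = "0111111011111100"       # payload that contains the flag pattern itself
assert bit_unstuff(bit_stuff(data)) == data
```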

Dr. D. Pesch, CIT, 2002

68

Frame Sizes
Two frame size options are possible
fixed-size frames and variable-length frames

Fixed size frames


Since not all packets are of constant size, the frame's data field needs some additional bits, called fill, to bring it up to the required length at all times. The problem here is to determine where the data ends and the fill starts.

Variable length frames


require a length field that indicates the length of the packet, usually in multiples of octets. The overhead is similar to the overhead due to bit stuffing.

Dr. D. Pesch, CIT, 2002

69

DLC Protocols for link initialisation


Two typical protocols for DLC link initialisation
a master-slave protocol for link initialisation and a balanced protocol for link initialisation

Master-Slave Protocol
One node is master and the other slave during initialisation. Initialise and disconnect frames and their acknowledgements are sent according to the stop-and-wait ARQ protocol.

Balanced Protocol
Both nodes can act as master and slave at the same time; the balanced protocol consists of two synchronised master-slave protocols running in parallel.

Dr. D. Pesch, CIT, 2002

70

Master-Slave Protocol
Figure: master-slave protocol timeline. The initiating node (A) sends INIT; node B replies ACKI and the link comes up and carries data. To disconnect, A sends DISC; B replies ACKD, the link goes down and is free of data until the next INIT.

Dr. D. Pesch, CIT, 2002

71

Balanced Protocol
Figure: balanced protocol timeline. Node A and node B each send INIT and acknowledge the other's INIT with ACKI to bring the link up, and each send DISC acknowledged by ACKD to bring it down; the two master-slave exchanges run in parallel.

Dr. D. Pesch, CIT, 2002

72

Switching in Telecommunication Networks


Switching was created for the first telephone networks: initially an office switch with a telephone operator (telephonist). Automatic switching was introduced by Strowger; the Strowger (step-by-step) switch was the first circuit switch. Telephone networks use circuit switching; data and computer networks use packet switching (sometimes called cell switching).

Dr. D. Pesch, CIT, 2002

73

Switching Networks
Figure: a simple switching network. End stations (A to F) attach to communicating network nodes (numbered 1 to 7), which are interconnected by transmission links.

For transmission of data beyond a local area, communication is typically achieved by transmitting data from source to destination through a network of intermediate switching nodes. This switched-network design is sometimes used to implement LANs and MANs as well. The switching nodes are not concerned with the content of the data; their purpose is to provide a switching facility that will move the data from node to node until they reach their destination.

The figure in the slide above illustrates a simple switching network. The end devices that wish to communicate may be referred to as stations. The stations may be computers, terminals, telephones, or other communicating devices. We refer to the switching devices whose purpose is to provide communications as nodes, which are connected to each other in some topology by transmission links. Each station attaches to a node, and the collection of nodes is referred to as a communications network. The type of network discussed here, a wide area network, is also referred to as a switched communication network.

Data entering the network from a station are routed to the destination by being switched from node to node. For example, data from station A intended for station F are sent to node 4. Data may then be routed via nodes 5 and 7 or nodes 6 and 7 to the destination. Two quite different technologies are used in wide-area switched networks: circuit switching and packet switching. These technologies differ in the way the nodes switch data from one link to another on the route from source to destination.

Dr. D. Pesch, CIT, 2002

74

Circuit Switching Networks


Dedicated connection path between two stations; one logical connection on each physical connection. Three phases:
circuit establishment
data transfer
circuit disconnection

Circuit establishment: before any signals can be transmitted, an end-to-end (station-to-station) circuit must be established. For example, station A sends a request to node 4 requesting a connection to station E. Typically, the link from A to 4 is a dedicated line, so that part of the connection already exists. Node 4 must find the next leg in a route leading to node 7. Based on routing information and measures of availability and perhaps cost, node 4 selects the link to node 5, allocates a free channel on that link (using FDM or TDM) and sends a message requesting connection to E. So far, a dedicated path has been established from A through 4 to 5. Because a number of stations may be attached to node 4, it must be able to establish internal paths from multiple stations to multiple nodes. The remainder of the process proceeds similarly: node 5 dedicates a channel to node 7 and internally ties that channel to the channel from node 4, and node 7 completes the connection to station E. In completing the connection, a test is made to determine whether E is busy or prepared to accept the connection.

Data transfer: information can now be transmitted from A through the network to E. The data may be analog or digital, depending on the nature of the network. As networks evolve to fully integrated digital networks, the use of digital (binary) transmission for both voice and data is becoming the dominant method. Generally, the connection is full-duplex.

Circuit disconnection: after some period of data transfer, the connection is terminated, usually by the action of one of the two stations. Signals must be propagated to nodes 4, 5, and 7 to de-allocate the dedicated channel resources.

Circuit switching can be rather inefficient: channel capacity is dedicated for the duration of a connection, even if no data is being transferred. For a voice connection, utilisation may be rather high, but it is still well below 100%. For a terminal-to-computer connection, the capacity may be idle most of the time. However, after circuit establishment the network is virtually transparent to the user, and the delay is at a minimum, consisting only of signal propagation delays.

Dr. D. Pesch, CIT, 2002 75

Public Circuit Switched Network


A public telecommunication network can be described by
subscribers, the local loop, exchanges (switches), and trunks

Example networks are


the public switched telephone network (PSTN) and the private (automatic) branch exchange (PABX)

The best known example of a circuit-switched network is the public telephone network. This is actually a collection of one or more national networks interconnected to form a global service. Although originally designed and implemented to serve analog telephone subscribers, the network handles an ever increasing amount of data traffic via modem and is gradually being converted to a fully digital network. Another well known application of circuit switching is the private (automatic) branch exchange (PABX), used to interconnect telephones within an office building.

A public telecommunications network consists of four generic architectural components:

Subscribers: the devices that attach to the network. Most subscriber devices on public telecommunications networks are still telephones, but the proportion of data traffic is increasing exponentially.

Local loop: the link between the subscriber and the network, also referred to as the subscriber loop. Almost all local loop connections use twisted-pair wire. The length of a local loop is typically in the range from a few kilometres to a few tens of kilometres. Multiplexing points are often used to bundle individual links.

Exchanges (switches): the switching centres in the network. A switching centre that directly supports subscribers is known as an end office or local exchange (LE). Typically, a local exchange supports up to a few thousand subscribers in a localised area. There are many hundreds of local exchanges across Ireland, so it is impractical for each LE to have a direct link to every other LE in the country. Instead, intermediate switching nodes, called trunk exchanges, are used. Switches that connect only trunk exchanges are often called tandem switches.

Trunks: the branches between exchanges. Trunks carry multiple voice-frequency circuits using either FDM or synchronous TDM.

Dr. D. Pesch, CIT, 2002 76

Public Circuit Switched Network


Figure: structure of a public circuit-switched network. Subscribers attach via the local loop to exchanges (switches), which are interconnected by trunks.

Dr. D. Pesch, CIT, 2002

77

Switching Concepts
Elements of a modern switching node: a digital switch, a network interface, and a control unit, with full-duplex lines to the attached devices.

Switching techniques: space division switching and time division switching.

At the heart of a modern switching node is a digital switch. The function of the digital switch is to provide a transparent signal path between any pair of attached devices. The path is transparent in that it appears to the attached pair of devices that there is a direct connection between them. Typically, the connection must allow full-duplex transmission.

The network interface element represents the functions and hardware needed to connect digital devices, such as data processing devices and digital telephones, to the network. Analog telephones can also be attached if the network interface contains analog-to-digital conversion logic. Trunks to other digital switches carry TDM signals and provide the links for constructing multiple-node networks.

The control unit performs three general tasks. First, it establishes connections. Second, it must maintain each connection; because the digital switch uses time-division principles, this may require ongoing manipulation of the switching elements. Third, it must tear the connection down, either in response to a request from one of the parties or for its own reasons.

An important characteristic of a circuit-switching device is whether it is blocking or non-blocking. Blocking occurs when the network is unable to connect two stations because all possible paths between them are already in use. A blocking network is one in which such blocking is possible. Hence, a non-blocking network permits all stations to be connected (in pairs) at once and grants all possible connection requests as long as the called party is free.

Dr. D. Pesch, CIT, 2002

78

Space Division Switching


Figures: (left) a space division switch (crossbar switch); (right) a three-stage space division switch built from a first stage of 5x2 switches, a second stage of 2x2 switches, and a third stage of 2x5 switches.

Space division switching was originally developed for the analog environment and has been carried over into the digital domain. The fundamental principles are the same whether the switch carries analog or digital signals. As its name implies, a space division switch is one in which the signal paths are physically separated from one another. Each connection requires the establishment of a physical path through the switch that is used solely for the transfer of signals between the two endpoints.

The left figure in the slide above shows a simple crossbar matrix with 5 full-duplex I/O lines. The matrix has 5 inputs and 5 outputs; interconnection is possible between any two lines by enabling the appropriate crosspoint. Therefore, N I/O lines require N^2 crosspoints. This indicates the limitations of the crossbar switch: the number of crosspoints grows with the square of the number of I/O lines; the loss of a crosspoint prevents connection between two particular devices; and the crosspoints are inefficiently utilised, since even if all connections are active only a fraction of the crosspoints is in use.

To overcome these limitations, multiple-stage switches are employed. The right figure in the slide above shows a three-stage switch. This arrangement has several advantages: the number of crosspoints is reduced, increasing crosspoint utilisation, and there is more than one path through the switch between two endpoints, increasing reliability. Of course, a multiple-stage crossbar switch requires a more complex control unit. A consideration with a multistage space division switch is that it may be blocking; it should be clear from the figure above that a single-stage crossbar switch is non-blocking.

Dr. D. Pesch, CIT, 2002

79

Time Division Switching


Figures: (left) a TDM bus switch, in which full-duplex lines from the attached devices connect through gates onto a shared high-speed bus under the direction of a control unit; (right) control of a TDM bus switch, in which control logic cycles through a control memory that enables one input gate and one output gate per time slot for the six attached lines.

Virtually all modern switches use digital time-division techniques for establishing and maintaining circuits. Time-division switching uses input lines based on synchronous time division multiplexing. The slots on the TDM line are manipulated by the control logic to route data from an input line to the dedicated output line. There are a number of variations on this basic concept; here only the concept of TDM bus switching is examined.

The left figure in the above slide shows how TDM can be extended to provide a switching function. Each device attaches to the switch through a full-duplex line. These lines are connected through controlled gates to a high-speed digital bus. Each line is assigned a time slot for providing input. For the duration of the slot, that line's gate is enabled, allowing a small burst of data onto the bus. For that same time slot one of the other gates is enabled for output. During successive time slots, different input/output pairings are enabled, allowing a number of connections to be carried over a shared bus.

The right figure in the above slide suggests how the control for a TDM bus switch can be implemented. Assume that the propagation time on the bus is 0.01 µs and that time on the bus is organised into 30.06 µs frames of six 5.01 µs slots each. A control memory indicates which gates are to be enabled during each time slot; in this example, six words of memory are needed. A controller cycles through the memory at a rate of one cycle every 30.06 µs.
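The control-memory idea can be sketched as a small table that the controller cycles through once per frame, one (input line, output line) pair per slot. The connection pattern and data below are invented for illustration; they are not the configuration shown in the slide.

```python
# Illustrative TDM bus switch: the control memory holds one (input, output)
# pair per time slot; the controller cycles through it once per frame.
# The full-duplex connection pattern (lines 1-3, 2-5, 4-6) is assumed.
control_memory = [(1, 3), (3, 1), (2, 5), (5, 2), (4, 6), (6, 4)]

def run_frame(tx, rx):
    """One pass through the control memory = one TDM frame on the bus."""
    for slot, (src, dst) in enumerate(control_memory):
        burst = tx[src].pop(0) if tx[src] else None   # src gate enabled for input
        if burst is not None:
            rx[dst].append(burst)                     # dst gate enabled for output
        print(f"slot {slot}: line {src} -> line {dst}: {burst}")

tx = {1: ["a1"], 2: ["b1"], 3: ["c1"], 4: [], 5: ["e1"], 6: []}  # waiting to enter
rx = {n: [] for n in range(1, 7)}                                # delivered per line
run_frame(tx, rx)
print(rx)   # line 3 has received a1, line 1 has received c1, line 2 has e1
```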

Dr. D. Pesch, CIT, 2002

80

Control Signalling
The means by which the network is managed and connections are established, maintained and terminated. Control signalling functions include:
Audible communication with the subscriber: ringing tone, dialling tone, busy signal, etc.
Transmission of the dialled number to the switches that will attempt to complete the connection
Transmission of information between switches indicating that a call cannot be completed
Transmission of information between switches indicating that a call has ended
A signal to make a telephone ring
Transmission of information for billing purposes
Transmission of information indicating the status of equipment and trunks
Transmission of information for diagnosing and isolating system faults
Control of special equipment such as satellite channel equipment

Dr. D. Pesch, CIT, 2002

81

Signalling in Circuit-Switched Networks


In-channel, inband: control signals are transmitted in the same band of frequencies used by the voice signals. The simplest technique; it is necessary for call information signals, may be used for other control signals, and can be used over any type of subscriber interface.

In-channel, out-of-band: control signals are transmitted over the same facilities as the voice signals but in a different part of the frequency band. Unlike inband signalling, out-of-band signalling provides continuous supervision for the duration of the connection.

Common channel: control signals are transmitted over signalling channels that are dedicated to control signals and are common to a number of voice channels. Reduces call setup time compared with in-channel methods and is more adaptable to evolving functional needs.

Control signalling needs to be considered in two contexts: signalling between a subscriber and the network, and signalling within the network. Typically, signalling operates differently within these two contexts. The signalling between a telephone or other subscriber device and the local exchange to which it attaches is, to a large extent, determined by the characteristics of the subscriber device and the needs of the human user. Signals within the network are entirely computer to computer. The internal signalling is concerned not only with the management of subscriber calls but also with the management of the network itself. Therefore, mapping between the less complex subscriber signalling techniques and the more complex network signalling techniques must be possible.

Traditional control signalling in circuit-switched networks has been on a per-trunk or in-channel basis. With in-channel signalling, the same channel is used to carry the control signals and the call they relate to. Such signalling originates at the subscriber and follows the same path as the call itself. Two forms of in-channel signalling are in use: inband and out-of-band. Inband signalling uses the same frequency band as the call, and the signals have the same electromagnetic properties; as a result, the information that can be carried is very limited. An advantage, however, is that it is impossible to set up a call on a faulty speech path. Out-of-band signalling uses a narrow band within the 4 kHz speech band that is not used by speech. Signalling is then possible whether or not voice signals are on the line, so continuous supervision of a call is possible. However, an out-of-band scheme needs extra electronics to handle the signalling band.

Dr. D. Pesch, CIT, 2002

82

Common Channel Signalling

Figure: common channel signalling configurations. In associated signalling, the signalling channel follows the speech trunks between the switching points; in non-associated signalling, the signalling messages are routed via separate signalling transfer points.

As public networks become more complex and provide a richer set of services, the drawbacks of in-channel signalling become more apparent. The information transfer rate is quite limited, and with inband signalling it is only available when there are no voice signals on the circuit; out-of-band signalling provides only a very limited bandwidth. With these limitations it is difficult to provide the more complex control messages needed to manage the increasing complexity of evolving network technology. A more powerful approach is required, based on common channel signalling. In this approach the signalling path is physically distinct from the path for voice and other subscriber signals. The common channel can be configured with the bandwidth required to carry control signals for a rich variety of functions. With dropping hardware costs this concept has become so attractive that it is being introduced in all public telecommunication networks.

The control signals are messages passed between switches as well as between a switch and the network management centre. Thus, the control-signalling portion of the network is a distributed computer network carrying short messages. Two modes of operation are used: the associated mode and the non-associated mode. In the associated mode (shown above) the common channel closely tracks the inter-switch trunks along their entire length. The non-associated mode is more complex but more powerful; here the network is augmented by additional nodes known as signal transfer points, and there is no longer a close or simple assignment of control channels to trunk groups. In effect, there are now two separate networks, with links between them, so that the control portion of the network can exercise control over the switching nodes that carry the subscriber calls. This mode is used in modern telecommunication networks, and the control signalling architecture is called Common Channel Signalling System No. 7 (SS#7).

Dr. D. Pesch, CIT, 2002

83

Packet Switching
Data are transmitted in short packets (typically no more than about 1000 bytes). A packet consists of a control part and a data part;
the control part contains the address

Two approaches to switching


datagrams virtual circuits

A key characteristic of circuit-switched networks is that resources within the network are dedicated to a particular call. For voice connections, the resulting circuit will enjoy a high percentage of utilisation because most of the time one party or the other is talking. However, as circuit-switched networks are used more and more for data transmission, two shortcomings become apparent. First, in a typical user/host data connection the line is idle much of the time, so resource usage is inefficient. Second, in a circuit-switched network the connection provides transmission at a constant data rate; both devices must therefore transmit and receive at the same rate, which limits the usefulness of the network for interconnecting a variety of different computing devices and terminals.

Packet switching addresses these problems by transmitting data in short packets of usually no more than about 1000 bytes. If a data stream is longer, it is broken up into a series of packets, each containing a portion of the overall data stream. A packet also contains some control information; at a minimum, this includes the information that the network requires in order to route the packet through the network and deliver it to the destination. At each node en route, the packet is received, stored briefly, and passed on to the next node.

This approach has a number of advantages over circuit switching. Line efficiency is greater, as a single node-to-node link can be dynamically shared by many packets over time: packets are queued up and transmitted as rapidly as possible over the link, whereas with circuit switching time on a node-to-node link is pre-allocated using synchronous TDM.

Dr. D. Pesch, CIT, 2002

84

A packet-switching network can perform data-rate conversion: two stations of different data rates can exchange packets because each connects to its node at its own data rate. When traffic becomes heavy on a circuit-switching network, some calls are blocked; that is, the network refuses to accept additional connection requests until the load decreases. On a packet-switching network, packets are still accepted, but delivery delay increases. Packets can also be transmitted with different priorities: if a node has a number of packets queued for transmission, it can transmit higher-priority packets first, and these will experience less delay than lower-priority packets.

The key question is how a packet-switching network transmits a sequence of packets from source to destination. Two approaches are used in contemporary networks: datagram and virtual circuit.

In the datagram approach each packet is treated independently, with no reference to packets that have already been transmitted. A routing decision therefore has to be made for each packet, and each packet must contain the full address of its destination. It is possible that packets get lost somewhere along the path, and packets may arrive out of sequence if some are routed along a path where transmission takes longer than on others. The advantage of this approach is that no connection establishment is required, so when only a short message needs to be sent, transmission is fast. Each packet is referred to as a datagram; this technique is commonly used in the Internet.

In the virtual circuit approach, a pre-planned route is established before any packets are sent. Because the route is fixed for the duration of the transmission, it is somewhat similar to a circuit in a circuit-switching network and is referred to as a virtual circuit. Each packet now contains a virtual-circuit identifier as well as data. Each node on the pre-established route knows where to direct such packets; no routing decisions are required. The pre-establishment of the route does not mean that this is a dedicated path: a packet is still buffered at each node and queued for output over a line. The difference from the datagram approach is that the node need not make a routing decision for each packet; it is made only once, for all packets using the virtual circuit. The advantages of the virtual circuit approach are that services such as sequencing and error control can be associated with a virtual circuit, and that the load in the network can be controlled better, since the number of virtual circuits per line, and thus the potential load per line, can be limited.

Dr. D. Pesch, CIT, 2002

85

Packet Size

Dr. D. Pesch, CIT, 2002

86

Comparison of Switching Techniques


Circuit switching:
Dedicated transmission path
Continuous transmission of data
Fast enough for interactive use
Messages are not stored
Path established for the entire call
Call setup delay; negligible transmission delay
Busy signal if called party is busy
Overload may block call setup; no delays for established calls
Electromechanical or computerised switching nodes
User responsible for message-loss protection
Usually no speed or code conversion
Fixed-bandwidth transmission
No overhead bits after call setup

Datagram packet switching:
No dedicated path
Transmission of packets
Fast enough for interactive use
Packets may be stored until delivered
Route established for each packet
Packet transmission delay
Sender may be notified if a packet is not delivered
Overload increases packet delay
Small switching nodes (computers)
Network may be responsible for individual packets
Speed and code conversion
Dynamic use of bandwidth
Overhead bits in each packet

Virtual-circuit packet switching:
No dedicated path
Transmission of packets
Fast enough for interactive use
Packets stored until delivered
Route established for the entire call
Call setup delay; packet transmission delay
Sender notified of connection denial
Overload may block call setup; increases packet delay
Small switching nodes (computers)
Network may be responsible for packet sequences
Speed and code conversion
Dynamic use of bandwidth
Overhead bits in each packet

Dr. D. Pesch, CIT, 2002

87

Routing in Wide Area Networks


Routing determines a path from source to destination with respect to certain criteria. Typically, packets are routed along the best path in terms of the smallest number of hops or the smallest delay (desirable in most cases).

Routing based on the smallest delay is influenced by the packet transmission time and by queuing and processing delays.

The goal of all routing procedures is to determine the best path in terms of the smallest number of hops and/or the smallest delay. The smallest delay is typically desirable in most networks, particularly in packet-switched networks, as the queuing delay makes up the bulk of the delay. However, it is often impossible to determine the delay, in particular the queuing delay, as it depends on the load of the nodes and the network. Most routing algorithms therefore attempt to route the packet over a best-guess path. This is achieved by assigning a fixed or variable cost to each link. Approaches to determining link cost include: unit cost, which yields the minimum-hop path; cost inversely proportional to the link data rate, which yields the minimum transmission time and achieves load balancing; cost equal to the average delay experienced, estimated over some interval; and a two-level cost, low when the queue is below a threshold and high when the queue length grows beyond a certain bound.

For virtual circuit routing, a path is chosen at setup time. Although the path chosen may provide minimum delay at setup time, there is no guarantee that this state will prevail throughout the duration of the connection. For datagram routing, routing decisions may be made for each packet individually, and thus the ideal of minimum delay is closer to fulfilment.

Dr. D. Pesch, CIT, 2002

88

Routing in Wide Area Networks


Requirements for routing strategies
Correctness, simplicity, robustness, stability, fairness, optimality, efficiency

Dr. D. Pesch, CIT, 2002

89

Elements of Routing Techniques


Performance criteria: number of hops, cost, delay, throughput

Network information source: none, local, adjacent nodes, nodes along the route, all nodes

Decision time: per packet (datagram) or per session (virtual circuit)

Network information update timing: continuous, periodic, on major load change, on topology change

Decision place: each node (distributed), a central node (centralised), or the originating node (source routing)

The selection of a route is generally based on some performance criterion. The simplest criterion is to choose the minimum-hop route through the network. This is an easily measured criterion and should minimise the consumption of network resources. A generalisation of the minimum-hop criterion is least-cost routing: a cost is associated with each link, and the optimal route is the one with the least total cost.

Routing decisions are made on the basis of some performance criterion. Two key characteristics of the decision are the time and the place at which it is made. Decision time depends on whether the decision is made on a per-packet basis (datagram approach) or on a virtual circuit basis. The term decision place refers to which node or nodes in the network are responsible for the routing decision. Most common is distributed routing, in which each node has the responsibility of selecting an output link for routing packets as they arrive. With centralised routing, the decision is made by some designated node, such as a network control centre. The last approach is source routing, which allows the user to decide upon a route that meets criteria local to that user. The decision time and place are independent design variables for packet-switching networks.

Most routing strategies require that decisions be based on knowledge of the topology of the network, the traffic load, and the link costs. Surprisingly, some strategies use no such information and yet manage to get packets through; flooding and some random strategies are in this category. With distributed routing, an individual node may make use of local information, such as the cost of each outgoing link; each node might also collect information from adjacent nodes, such as the state of congestion they are currently experiencing. Information update timing is a function of both the information source and the routing strategy. If no information is used, there is no requirement for updating. If only local information is used, updating is essentially continuous. For a fixed routing strategy, information is never updated; for an adaptive strategy it is updated from time to time.

Dr. D. Pesch, CIT, 2002 90

Routing in Circuit-Switched Networks


Routing finds a path through more than one switching node depending on a certain set of criteria.
Alternate routing (used in SS7 networks): fixed alternate routing, or dynamic alternate routing (multi-alternate routing, dynamic non-hierarchical routing)

Adaptive routing
Dynamic traffic management

Dr. D. Pesch, CIT, 2002

91

Routing Strategies for Packet-Switched Networks


Fixed routing, flooding, random routing, and adaptive routing (adapting to failure and congestion)

Dr. D. Pesch, CIT, 2002

92

Example Routing Algorithms


Least Cost Algorithms
Most routing algorithms are based on two basic least-cost algorithms: Dijkstra's algorithm and the Bellman-Ford algorithm.

Principle
Given a network of nodes connected by bidirectional links, where each link has a cost associated with it in each direction, define the cost of a path between two nodes as the sum of the costs of the links traversed. For each pair of nodes, find the path with the least cost.

Dr. D. Pesch, CIT, 2002

93

Example Packet-Switched Network


Figure: example packet-switched network used in the routing examples. Numbered nodes are connected by bidirectional links, each labelled with its link cost.

Dr. D. Pesch, CIT, 2002

94

Dijkstra's Algorithm
Find the shortest paths from a given source node to all other nodes by developing the paths in order of increasing path length. The algorithm proceeds in stages; by the kth stage, the shortest paths to the k nodes closest to the source node have been determined. The algorithm is defined formally as follows:
N = set of nodes in the network
s = source node
T = set of nodes so far incorporated by the algorithm
w(i, j) = link cost from node i to node j; w(i, i) = 0; w(i, j) = ∞ if the nodes are not directly connected
L(n) = cost of the least-cost path from node s to node n that is currently known to the algorithm

Dr. D. Pesch, CIT, 2002

95

Dijkstra's Algorithm
Initialisation
T = {s}, i.e. the set of nodes so far incorporated consists of only the source node; L(n) = w(s, n) for n ≠ s, i.e. the initial path costs are simply the link costs from the source

Get Next Node


Find the neighbouring node not in T that has the least-cost path from node s and incorporate that node into T; also incorporate the edge that is incident on that node and on a node in T and that contributes to the path. Formally: find x ∉ T such that L(x) = min over j ∉ T of L(j); add x to T, and add to T the edge incident on x that contributes to L(x)

Update Least-Cost Paths

L(n) = min[ L(n), L(x) + w(x, n) ]  for all n ∉ T

If the latter term is the minimum, the path from s to n is now the path from s to x concatenated with the edge from x to n

The edges collected in this way form a spanning tree of the network
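A direct transcription of the three steps into Python, reusing the lecture's names s, T, w and L (the small example graph at the end is an assumption for illustration, not the network from the earlier slide):

```python
import math

def dijkstra(N, w, s):
    """Least-cost paths from source node s to every other node in N.
    w[(i, j)] is the cost of link i -> j; absent pairs are not directly connected."""
    cost = lambda i, j: 0 if i == j else w.get((i, j), math.inf)
    T = {s}                                   # nodes incorporated so far
    L = {n: cost(s, n) for n in N}            # initial path costs = link costs
    pred = {n: s for n in N if n != s and cost(s, n) < math.inf}

    while len(T) < len(N):
        candidates = [n for n in N if n not in T]
        x = min(candidates, key=lambda n: L[n])       # get next node
        if L[x] == math.inf:
            break                                     # remaining nodes unreachable
        T.add(x)
        for n in candidates:                          # update least-cost paths
            if n != x and L[x] + cost(x, n) < L[n]:
                L[n] = L[x] + cost(x, n)
                pred[n] = x
    return L, pred

# Small example network (assumed for illustration, not the one in the slides):
w = {(1, 2): 2, (2, 1): 2, (1, 3): 5, (3, 1): 5, (2, 3): 1, (3, 2): 1,
     (2, 4): 4, (4, 2): 4, (3, 4): 1, (4, 3): 1}
L, pred = dijkstra({1, 2, 3, 4}, w, s=1)
print(L)      # {1: 0, 2: 2, 3: 3, 4: 4}
print(pred)   # predecessor of each node on its least-cost path from node 1
```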

Dr. D. Pesch, CIT, 2002

96

Dijkstra's Algorithm

Dr. D. Pesch, CIT, 2002

97

Bellman-Ford Algorithm
Find the shortest paths from a given source node subject to the constraint that the paths contain at most one link; then find the shortest paths subject to a constraint of at most two links, and so on. The algorithm is defined formally as follows:
s = source node
w(i, j) = link cost from node i to node j; w(i, i) = 0; w(i, j) = ∞ if the nodes are not directly connected
h = maximum number of links in a path at the current stage of the algorithm
Lh(n) = cost of the least-cost path from node s to node n under the constraint of no more than h links

Dr. D. Pesch, CIT, 2002

98

Bellman-Ford Algorithm
Initialisation
L0(n) = ∞ for all n ≠ s; Lh(s) = 0 for all h

Update
For each successive h ≥ 0 and for each n ≠ s, compute

Lh+1(n) = min over j of [ Lh(j) + w(j, n) ]

Connect n with the predecessor node j that achieves the minimum, and eliminate any connection of n with a different predecessor node formed during an earlier iteration. The path from s to n terminates with the link from j to n.
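The iteration can be transcribed just as directly (the example graph is the same invented one used in the Dijkstra sketch above, so the two algorithms can be seen to agree):

```python
import math

def bellman_ford(N, w, s, max_hops=None):
    """Least-cost paths from s, growing the allowed path length one link at a time.
    Returns the final costs L and the predecessor of each reachable node."""
    cost = lambda j, n: w.get((j, n), math.inf)
    h_max = max_hops if max_hops is not None else len(N) - 1
    L = {n: (0 if n == s else math.inf) for n in N}   # L_0
    pred = {}

    for h in range(h_max):                            # compute L_{h+1} from L_h
        L_next = dict(L)
        for n in N:
            if n == s:
                continue
            best_j = min(N, key=lambda j: L[j] + cost(j, n))
            if L[best_j] + cost(best_j, n) < L_next[n]:
                L_next[n] = L[best_j] + cost(best_j, n)
                pred[n] = best_j                      # connect n to its predecessor
        L = L_next
    return L, pred

# Reusing the illustrative graph from the Dijkstra sketch:
w = {(1, 2): 2, (2, 1): 2, (1, 3): 5, (3, 1): 5, (2, 3): 1, (3, 2): 1,
     (2, 4): 4, (4, 2): 4, (3, 4): 1, (4, 3): 1}
L, pred = bellman_ford({1, 2, 3, 4}, w, s=1)
print(L)   # {1: 0, 2: 2, 3: 3, 4: 4} - same costs as Dijkstra's algorithm
```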

Dr. D. Pesch, CIT, 2002

99

Bellman-Ford Algorithm

Dr. D. Pesch, CIT, 2002

100

Cost Function for Dijkstra and BF

Dr. D. Pesch, CIT, 2002

101

Traffic and Congestion Control


Communication networks are designed for a particular traffic load. If the traffic load increases beyond a certain point, congestion occurs. Traffic management is used to avoid congestion; congestion control is used to resolve a state of congestion.

Dr. D. Pesch, CIT, 2002

102

Congestion Control
A packet-switching network is a network of queues that may become highly loaded or overloaded. High-load and overload situations need to be controlled, for example by:
choking the source with a dedicated control packet
using routing information to influence the packet generation rate
using probe packets to find the least congested route
adding congestion information to packets

Dr. D. Pesch, CIT, 2002

103

Traffic Management/Control
Traffic control is also often called flow control. Traffic management is mainly used in packet-switched networks, where it tries to avoid congestion. Traffic control in circuit-switched networks is based on call blocking.

Dr. D. Pesch, CIT, 2002

104

Objectives of Traffic Control


Limiting delay, limiting buffer overflow, and fairness

For real-time applications, such as voice and video transmission, excessively delayed packets are useless, as they would significantly reduce the quality of the application. For such applications a limited delay is essential and should be the chief concern of traffic management algorithms; for example, such applications may be given a high transmission priority. For other applications, a small average delay per packet is desirable but not crucial. For these applications, network layer traffic management does not necessarily reduce delay; it simply shifts the delay from the network layer to the higher layers. That is, by restricting entrance into the subnet, traffic management keeps packets waiting outside the subnet rather than in the queues inside the subnet. In this way, traffic management avoids wasting subnet resources on packet retransmissions and helps prevent a disastrous traffic jam inside the subnet. Retransmission in this scenario can occur in two ways: the build-up of queues causes buffer overflow and packets are discarded, and slow acknowledgements, caused by excessive delays, can make the source retransmit packets because it mistakenly thinks they were lost.

In certain cases, sessions generating packets at a high rate can capture almost all of the buffer space and exclude slow-rate sources from transmission. To prevent this, a buffer management scheme needs to be implemented. In such a scheme packets are divided into different classes; at each node, separate buffer space is reserved for the different classes, while some buffer space is shared by all classes.

Dr. D. Pesch, CIT, 2002

105

When offered traffic must be cut back in order to avoid congestion, it must be done fairly. The notion of fairness is complicated, however, by the presence of different session (connection) priorities and service requirements. For example, some sessions need a minimum guaranteed rate and a strict upper bound on network delay. Thus, while it is appropriate to consider simple notions of fairness within classes of similar sessions, the notion of fairness between classes is complex and involves the requirements of those classes. In general, real-time sessions would be favoured with respect to delay but may have to suffer some loss of packets, whereas data sessions would have to suffer more delay at the source, with the network making sure that no loss occurs. It can easily be anticipated, however, that fairness is a complex issue, and achieving it can amount to a multi-dimensional optimisation problem.

Dr. D. Pesch, CIT, 2002

106

Functions of Traffic Control


Call or packet blocking (admission control), packet scheduling (window flow control), source rate control (traffic shaping), and network resource allocation

Call or packet blocking is regulated by a traffic management function called admission control. Admission control allows or denies admission to the network based on whether the parameters that the connection requires can be fulfilled; parameters in this case are average and peak data rate, packet delay variation, packet loss rate, and so on.

Packet scheduling is facilitated by a window-based flow control mechanism in much the same way as in data link layer protocols such as HDLC. However, as can be seen with HDLC, this kind of flow control is not well suited to high-speed transmission, since it would require large window sizes to make use of a high data rate. It is also not well suited to wide area networks, where propagation delays are large and waiting for acknowledgement packets reduces the throughput. A further problem is that window-based mechanisms do not regulate the end-to-end delay well and do not guarantee a minimum data rate, which is important for the transmission of real-time services such as voice and video.

Another form of traffic management, more suited to high-speed transmission lines, is rate control. This form of traffic management gives each session or connection a guaranteed data rate commensurate with its needs. This rate should lie within certain limits that depend on the session type.

Dr. D. Pesch, CIT, 2002

107

The main considerations in setting source rates are the delay-throughput trade-off (increasing throughput by setting the rates too high runs the risk of buffer overflow and excessive delay) and fairness (if session rates must be reduced to accommodate new sessions, the rate reduction must be done fairly, while obeying the minimum rate requirement of each session).

Dr. D. Pesch, CIT, 2002

108

Leaky Bucket Scheme


Figure: leaky bucket scheme. Arriving packets wait in a queue of packets without a permit. Permits arrive at a rate of one per 1/r seconds into a permit queue of limited size W (and are turned away if the permit queue is full). A packet that obtains a permit joins the queue of packets with a permit and may enter the network.

In order to implement a session rate of r packets/sec, one could admit only one packet every 1/r seconds. This, however, amounts to a form of time division multiplexing and leads to large delays when the traffic load is bursty. A more appropriate implementation is to admit as many as W packets (W > 1) every W/r seconds. This allows a burst of up to W packets into the network without delay and is better suited to a dynamically changing load. The approach achieves a degree of traffic smoothing and reduces the burstiness for which TDM would cause long delays.

An implementation of this kind of traffic management mechanism is the so-called leaky bucket scheme. An allocation of W packets is given to each session, and a count x of the unused portion of this allocation is kept at the source. Packets from the session are admitted to the network as long as x > 0. In the leaky bucket scheme the count is incremented periodically, every 1/r seconds, up to a maximum of W. Another way to view this scheme is to imagine that for each session there is a queue of packets without a permit and a bucket of permits at the session's source. The packet at the head of the packet queue obtains a permit once one is available in the permit bucket and then joins the set of packets with permits waiting to be transmitted (see the figure in the slide above). Permits are generated at the desired input rate r of the session (one permit every 1/r seconds) as long as the number in the permit bucket does not exceed a certain threshold W. The leaky bucket scheme is used in ATM networks to shape the source data rate so that it maintains the parameters of the agreed traffic contract.
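A minimal sketch of the permit mechanism follows (illustrative only; it credits permits continuously at rate r rather than strictly one per 1/r seconds, which is a common simplification of the same idea).

```python
class LeakyBucket:
    """Permit-based leaky bucket sketch: rate r permits/sec, bucket depth W."""
    def __init__(self, r, W):
        self.r = r
        self.W = W
        self.permits = W          # start with a full allocation
        self.last_time = 0.0

    def admit(self, now):
        """Return True if a packet arriving at time `now` may enter the network."""
        # Credit the permits generated since the last call, capped at W.
        self.permits = min(self.W, self.permits + (now - self.last_time) * self.r)
        self.last_time = now
        if self.permits >= 1:
            self.permits -= 1
            return True
        return False              # no permit: the packet waits for one

bucket = LeakyBucket(r=10, W=5)                  # 10 packets/s, bursts of up to 5
arrivals = [0.00, 0.01, 0.02, 0.03, 0.04, 0.05, 0.30]
print([bucket.admit(t) for t in arrivals])
# -> [True, True, True, True, True, False, True]
```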

Dr. D. Pesch, CIT, 2002

109

Congestion Control
Figure: throughput and delay as functions of offered load, showing the regions of no congestion, mild congestion, and severe congestion; throughput levels off and eventually falls while delay rises steeply as the offered load increases.

Dr. D. Pesch, CIT, 2002

110

Functions of Congestion Control


When congestion occurs, one or more of the following functions are used to resolve congestion
Discard packets
Send control packets
Use routing information
Use end-to-end probe packets
Add congestion information to packets

Dr. D. Pesch, CIT, 2002

111
