
Spring 2012 Master of Computer Application (MCA) Semester VI MC0087 Internetworking with TCP/IP Assignment Set 1 (60 Marks)

---------------------------------------------------------------------------------------------1. Explain the TCP/IP protocol suite. Explain the function of the Network Interface layer, Network layer and Transport layer.

---------------------------------------------------------------------------------------------Ans: The TCP/IP protocol suite maps to a four-layer conceptual model known as the DARPA model, which was named after the U.S. government agency that initially developed TCP/IP. The four layers of the DARPA model are: Application, Transport, Internet, and Network Interface. Each layer in the DARPA model corresponds to one or more layers of the seven-layer OSI model. Figure 1.2 shows the architecture of the TCP/IP protocol suite. The TCP/IP protocol suite has two sets of protocols at the Internet layer: IPv4, also known as IP, is the Internet layer in common use today on private intranets and the Internet. IPv6 is the new Internet layer that will eventually replace the existing IPv4 Internet layer.

[Figure 1.2: Architecture of the TCP/IP protocol suite. The figure showed the Application layer on top of the Transport layer (TCP/UDP), the Internetwork layer (IP, with ICMP and ARP/RARP), and the Network Interface layer at the bottom.]

Network Interface Layer: The Network Interface layer (also called the Network Access layer) sends TCP/IP packets on the network medium and receives TCP/IP packets off the network medium. TCP/IP was designed to be independent of the network access method, frame format, and medium. Therefore, you can use TCP/IP to communicate across differing network types that use LAN technologies, such as Ethernet and 802.11 wireless LAN, and WAN technologies, such as Frame Relay and Asynchronous Transfer Mode (ATM). By being independent of any specific network technology, TCP/IP can be adapted to new technologies. The Network Interface layer of the DARPA model encompasses the Data Link and Physical layers of the OSI model. The Internet layer of the DARPA model does not take advantage of sequencing and acknowledgment services that might be present in the Data Link layer of the OSI model. The Internet layer assumes an unreliable Network Interface layer, and that reliable communication through session establishment and the sequencing and acknowledgment of packets is the responsibility of either the Transport layer or the Application layer.

Internet Layer: The Internet layer is responsible for addressing, packaging, and routing functions. The Internet layer is analogous to the Network layer of the OSI model. The core protocols of the IPv4 Internet layer are the following:

The Address Resolution Protocol (ARP) resolves an Internet layer address to a Network Interface layer address, such as a hardware address.

The Internet Protocol (IP) is a routable protocol that addresses, routes, fragments, and reassembles packets.

The Internet Control Message Protocol (ICMP) reports errors and other information to help you diagnose unsuccessful packet delivery.

The Internet Group Management Protocol (IGMP) manages IP multicast groups.
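To make the Internet layer's addressing and packaging concrete, here is a minimal Python sketch (an illustration, not part of the original text) that unpacks the fixed 20-byte IPv4 header defined by RFC 791:

    import struct

    def parse_ipv4_header(packet: bytes) -> dict:
        """Unpack the fixed 20-byte IPv4 header (options not handled)."""
        (version_ihl, tos, total_len, ident, flags_frag,
         ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", packet[:20])
        return {
            "version": version_ihl >> 4,             # 4 for IPv4
            "header_len": (version_ihl & 0x0F) * 4,  # IHL counts 32-bit words
            "total_length": total_len,
            "ttl": ttl,
            "protocol": proto,                       # 1 = ICMP, 6 = TCP, 17 = UDP
            "src": ".".join(map(str, src)),
            "dst": ".".join(map(str, dst)),
        }

The protocol field is how IP hands a received datagram up to the correct Transport layer protocol, which is discussed next.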

Transport Layer: The Transport layer (also known as the Host-to-Host Transport layer) provides the Application layer with session and datagram communication services. The Transport layer encompasses the responsibilities of the OSI Transport layer. The core protocols of the Transport layer are TCP and UDP. TCP provides a one-to-one, connection-oriented, reliable communications service. TCP establishes connections, sequences and acknowledges packets sent, and recovers packets lost during transmission. In contrast to TCP, UDP provides a one-to-one or one-to-many, connectionless, unreliable communications service. UDP is used when the amount of data to be transferred is small (such as data that would fit into a single packet), when an application developer does not want the overhead associated with TCP connections, or when the applications or upper-layer protocols provide reliable delivery. TCP and UDP operate over both IPv4 and IPv6 Internet layers.
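As a hedged illustration of the TCP/UDP contrast just described, the following Python sketch opens a connection-oriented TCP stream and then sends a connectionless UDP datagram; the host names and the UDP listener are assumed example values:

    import socket

    # TCP: connection-oriented and reliable; connect() performs the
    # three-way handshake, and the stack sequences and acknowledges data.
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.connect(("example.com", 80))
    tcp.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    print(tcp.recv(1024))
    tcp.close()

    # UDP: connectionless and unreliable; each sendto() is one datagram,
    # with no handshake and no delivery guarantee.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(b"ping", ("127.0.0.1", 9999))   # hypothetical local listener
    udp.close()

Note how the TCP calls hide the handshake and acknowledgment machinery, while the UDP call simply hands a single datagram to IP.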

Application Layer: The Application layer allows applications to access the services of the other layers, and it defines the protocols that applications use to exchange data. The Application layer contains many protocols, and more are always being developed. The most widely known Application layer protocols help users exchange information:

The Hypertext Transfer Protocol (HTTP) transfers files that make up pages on the World Wide Web.

The Simple Mail Transfer Protocol (SMTP) transfers mail messages and attachments.

Additionally, the following Application layer protocols help you use and manage TCP/IP networks:

The Domain Name System (DNS) protocol resolves a host name, such as www.cisco.com, to an IP address and copies name information between DNS servers.

The Routing Information Protocol (RIP) is a protocol that routers use to exchange routing information on an IP network.

The Simple Network Management Protocol (SNMP) collects and exchanges network management information between a network management console and network devices such as routers, bridges, and servers.

Windows Sockets and NetBIOS are examples of Application layer interfaces for TCP/IP applications.
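For instance, the sockets-style interface just mentioned exposes DNS name resolution to applications. A minimal Python sketch (the host name is only an example):

    import socket

    # The resolver speaks the DNS protocol to name servers on the
    # application's behalf and returns the addresses behind a name.
    for family, _, _, _, sockaddr in socket.getaddrinfo("www.cisco.com", 80):
        print(family, sockaddr[0])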

---------------------------------------------------------------------------------------------2. Discuss ISDN, X.25, Frame relay and ATM.

---------------------------------------------------------------------------------------------A) Integrated Services Digital Network (ISDN)


This section describes how to use the PPP encapsulation over ISDN point-to-point links. PPP over ISDN is documented by elective RFC 1618. Because the ISDN B-channel is, by definition, a point-to-point circuit, PPP is well suited for use over these links. The ISDN Basic Rate Interface (BRI) usually supports two B-channels with a capacity of 64 kbps each, and a 16 kbps D-channel for control information. B-channels can be used for voice or data, or for just data in a combined way. The ISDN Primary Rate Interface (PRI) can support many concurrent B-channel links and one 64 kbps D-channel. The PPP LCP and NCP mechanisms are particularly useful in this situation for reducing or eliminating manual configuration and facilitating ease of communication between diverse implementations. The ISDN D-channel can also be used for sending PPP packets when suitably framed, but it is limited in bandwidth and often restricts communication links to a local switch. PPP treats ISDN channels as bit- or octet-oriented synchronous links. These links must be full duplex, but can be either dedicated or circuit-switched. PPP presents an octet interface to the physical layer. There is no provision for sub-octets to be supplied or accepted. PPP does not impose any restrictions regarding transmission rate other than that of the particular ISDN channel interface. PPP does not require the use of control signals. When available, using such signals can allow greater functionality and performance. The D-channel interface requires NRZ encoding.

B) X.25

This topic describes the encapsulation of IP over X.25 networks, in accordance with ISO/IEC and CCITT standards. IP over X.25 networks is documented by RFC 1356 (which obsoletes RFC 877). RFC 1356 is a Draft Standard with a status of elective. The substantive changes to the IP encapsulation over X.25 are an increase in the IP datagram MTU size, the X.25 maximum data packet size, the virtual circuit management, and the interoperable encapsulation over X.25 of protocols other than IP between multiprotocol routers and bridges. One or more X.25 virtual circuits are opened on demand when datagrams arrive at the network interface for transmission. Protocol Data Units (PDUs) are sent as X.25 complete packet sequences. That is, PDUs begin on X.25 data packet boundaries, and the M bit (more data) is used to fragment PDUs that are larger than one X.25 data packet in length. In the IP encapsulation, the PDU is the IP datagram. The first octet in the Call User Data (CUD) field (the first data octet in the Call Request packet) is used for protocol de-multiplexing, in accordance with the Subsequent Protocol Identifier (SPI) in ISO/IEC TR 9577. This field contains a one-octet Network-Layer Protocol Identifier (NLPID), which identifies the network-layer protocol encapsulated over the X.25 virtual circuit.

C) Frame relay

The frame relay network provides a number of virtual circuits that form the basis for connections between stations attached to the same frame relay network. The resulting set of interconnected devices forms a private frame relay group, which can be either fully interconnected with a complete mesh of virtual circuits, or only partially interconnected. In either case, each virtual circuit is uniquely identified at each frame relay interface by a Data Link Connection Identifier (DLCI). In most circumstances, DLCIs have strictly local significance at each frame relay interface. Frame relay is documented in RFC 2427, and is expanded in RFC 2590 to allow the transmission of IPv6 packets.

Frame Format: Frames contain the necessary information to identify the protocol carried within the protocol data unit (PDU), thus allowing the receiver to properly process the incoming packet. The format is shown in the layout below. The control field is the Q.922 control field. The UI (0x03) value is used unless it is negotiated otherwise. The use of XID (0xAF or 0xBF) is permitted. The pad field is used to align the data portion (beyond the encapsulation header) of the frame to a two-octet boundary. If present, the pad is a single octet and must have a value of zero. The Network Level Protocol ID (NLPID) field is administered by ISO and the ITU. It contains values for many different protocols, including IP, CLNP, and the IEEE Subnetwork Access Protocol (SNAP). This field tells the receiver what encapsulation or what protocol follows. Values for this field are defined in ISO/IEC TR 9577. An NLPID value of 0x00 is defined within ISO/IEC TR 9577 as the null network layer or inactive set. Because it cannot be distinguished from a pad field, and because it has no significance within the context of this encapsulation scheme, an NLPID value of 0x00 is invalid under the frame relay encapsulation. There is no commonly implemented minimum or maximum frame size for frame relay. A network must, however, support at least a 262-octet maximum. Generally, the maximum will be greater than or equal to 1600 octets, but each frame relay provider will specify an appropriate value for its network.
A frame relay data terminal equipment (DTE) must allow the maximum acceptable frame size to be configurable.

Frame relay frame layout:

Q.922 Address (two octets)
Control (UI = 0x03)
Pad (when required)
NLPID
Data
Frame Check Sequence (two octets)
Flag (0x7E)
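As an illustrative sketch of this encapsulation (an assumption built from the RFC 2427 description above, not text from the original), the following Python function packs the two-octet Q.922 address, the UI control octet, and the NLPID for IP in front of a routed datagram:

    def frame_relay_encapsulate(dlci: int, ip_datagram: bytes) -> bytes:
        """Prefix a routed IP datagram with the RFC 2427 encapsulation header."""
        # Two-octet Q.922 address: 10-bit DLCI, C/R and FECN/BECN/DE zero,
        # EA=0 in the first octet and EA=1 in the last.
        first = ((dlci >> 4) & 0x3F) << 2
        second = ((dlci & 0x0F) << 4) | 0x01
        control = 0x03                      # UI frame
        nlpid = 0xCC                        # NLPID value for IP (ISO/IEC TR 9577)
        return bytes([first, second, control, nlpid]) + ip_datagram

For a routed IP datagram the four header octets are already an even number, so no pad octet is needed.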


Interconnect Issues: There are two basic types of data packets that travel within the frame relay network: routed packets and bridged packets. These packets have distinct formats and must contain an indicator that the destination can use to correctly interpret the contents of the frame. This indicator is embedded within the NLPID and SNAP header information.

---------------------------------------------------------------------------------------------3. Discuss Class Based IP Address. Write short notes on RARP, BOOTP, DHCP

---------------------------------------------------------------------------------------------Ans: B) Dynamic Host Configuration Protocol (DHCP): The Dynamic Host Configuration Protocol (DHCP) provides a framework for passing configuration information to hosts on a TCP/IP network. DHCP is based on the BOOTP protocol, adding the capability of automatic allocation of reusable network addresses and additional configuration options. DHCP messages use UDP port 67, the BOOTP server's well-known port, and UDP port 68, the BOOTP client's well-known port. DHCP participants can interoperate with BOOTP participants. DHCP consists of two components:

1. A protocol that delivers host-specific configuration parameters from a DHCP server to a host.
2. A mechanism for the allocation of temporary or permanent network addresses to hosts.

IP requires the setting of many parameters within the protocol implementation software. Because IP can be used on many dissimilar kinds of network hardware, values for those parameters cannot be guessed at or assumed to have correct defaults. A distributed address allocation scheme based on a polling/defense mechanism, for discovery of network addresses already in use, cannot guarantee unique network addresses, because hosts might not always be able to defend their network addresses. DHCP supports three mechanisms for IP address allocation:

1. Automatic allocation: DHCP assigns a permanent IP address to the host.

2. Dynamic allocation: DHCP assigns an IP address for a limited period of time. Such a network address is called a lease. This is the only mechanism that allows automatic reuse of addresses that are no longer needed by the hosts to which they were assigned.

3. Manual allocation: the host's address is assigned by a network administrator.

RARP (Reverse Address Resolution Protocol)


RARP (Reverse Address Resolution Protocol) is a protocol by which a physical machine in a local area network can request to learn its IP address from a gateway server's Address Resolution Protocol (ARP) table or cache. A network administrator creates a table in a local area network's gateway router that maps the physical machine (Media Access Control, or MAC) addresses to corresponding Internet Protocol addresses. When a new machine is set up, its RARP client program asks the RARP server on the router to send it its IP address. Assuming that an entry has been set up in the router table, the RARP server will return the IP address to the machine, which can store it for future use. RARP is available for Ethernet, Fiber Distributed Data Interface, and token ring LANs.

BOOTP (Bootstrap Protocol)


BOOTP (Bootstrap Protocol) is a protocol that lets a network user be automatically configured (receive an IP address) and have an operating system booted (initiated) without user involvement. The BOOTP server, managed by a network administrator, automatically assigns the IP address from a pool of addresses for a certain duration of time. BOOTP is the basis for a more advanced network management protocol, the Dynamic Host Configuration Protocol (DHCP).

Dynamic Host Configuration Protocol (DHCP)


The Dynamic Host Configuration Protocol (DHCP) is a network configuration protocol for hosts on Internet Protocol (IP) networks. Computers that are connected to IP networks must be configured before they can communicate with other hosts. The most essential information needed is an IP address, a default route, and a routing prefix. DHCP eliminates this manual configuration task for a network administrator. It also provides a central database of devices that are connected to the network and eliminates duplicate resource assignments. In addition to IP addresses, DHCP also provides other configuration information, particularly the IP addresses of local Domain Name System (DNS) servers, network boot servers, or other service hosts. DHCP is used for IPv4 as well as IPv6. While both versions serve much the same purpose, the details of the protocol for IPv4 and IPv6 are sufficiently different that they may be considered separate protocols.[1] Hosts that do not use DHCP for address configuration may still use it to obtain other configuration information. Alternatively, IPv6 hosts may use stateless address autoconfiguration. IPv4 hosts may use link-local addressing to achieve limited local connectivity.

History: DHCP was first defined as a standards-track protocol in RFC 1531 in October 1993, as an extension to the Bootstrap Protocol (BOOTP). The motivation for extending BOOTP was that BOOTP required manual intervention to add configuration information for each client, and did not provide a mechanism for reclaiming disused IP addresses. Many worked to clarify the protocol as it gained popularity, and in 1997 RFC 2131 was released, which remains as of 2011 the standard for IPv4 networks. DHCPv6 is documented in RFC 3315. RFC 3633 added a DHCPv6 mechanism for prefix delegation. DHCPv6 was further extended to provide configuration information to clients configured using stateless address autoconfiguration in RFC 3736. The BOOTP protocol itself was first defined in RFC 951 as a replacement for the Reverse Address Resolution Protocol (RARP). The primary motivation for replacing RARP with BOOTP was that RARP was a data link layer protocol. This made implementation difficult on many server platforms, and required that a server be present on each individual network link. BOOTP introduced the innovation of a relay agent, which allowed the forwarding of BOOTP packets off the local network using standard IP routing, so that one central BOOTP server could serve hosts on many IP subnets.[2]

Technical overview: Dynamic Host Configuration Protocol automates network-parameter assignment to network devices from one or more DHCP servers. Even in small networks, DHCP is useful because it makes it easy to add new machines to the network. When a DHCP-configured client (a computer or any other network-aware device) connects to a network, the DHCP client sends a broadcast query requesting necessary information from a DHCP server. The DHCP server manages a pool of IP addresses and information about client configuration parameters such as the default gateway, the domain name, the name servers, other servers such as time servers, and so forth. On receiving a valid request, the server assigns the computer an IP address, a lease (the length of time the allocation is valid), and other IP configuration parameters, such as the subnet mask and the default gateway. The query is typically initiated immediately after booting, and must complete before the client can initiate IP-based communication with other hosts. Upon disconnection, the IP address is returned to the pool for use by another computer. This way, many different computers can use the same IP address within minutes of each other, which can make it difficult to attribute network abuse to a single user. Depending on implementation, the DHCP server may have three methods of allocating IP addresses:

Dynamic allocation: A network administrator assigns a range of IP addresses to DHCP, and each client computer on the LAN is configured to request an IP address from the DHCP server during network initialization. The request-and-grant process uses a lease concept with a controllable time period, allowing the DHCP server to reclaim (and then reallocate) IP addresses that are not renewed.

Automatic allocation: The DHCP server permanently assigns a free IP address to a requesting client from the range defined by the administrator. This is like dynamic allocation, but the DHCP server keeps a table of past IP address assignments, so that it can preferentially assign to a client the same IP address that the client previously had.

Static allocation: The DHCP server allocates an IP address based on a table with MAC address/IP address pairs, which are manually filled in (perhaps by a network administrator). Only requesting clients with a MAC address listed in this table will be allocated an IP address. This feature (which is not supported by all DHCP servers) is variously called Static DHCP Assignment (by DD-WRT), fixed-address (by the dhcpd documentation), Address Reservation (by Netgear), DHCP reservation or Static DHCP (by Cisco/Linksys), and IP reservation or MAC/IP binding (by various other router manufacturers).

Technical details: DHCP uses the same two ports assigned by IANA for BOOTP: destination UDP port 67 for sending data to the server, and UDP port 68 for data to the client. DHCP communications are connectionless in nature. DHCP operations fall into four basic phases: IP discovery, IP lease offer, IP request, and IP lease acknowledgement. These phases are often abbreviated as DORA (Discovery, Offer, Request, Acknowledgement). DHCP clients and servers on the same subnet initially communicate via UDP broadcasts. If the client and server are on different subnets, a DHCP Helper or DHCP Relay Agent may be used. Clients requesting renewal of an existing lease may communicate directly via UDP unicast, since the client already has an established IP address at that point.
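As a hedged sketch of the first DORA step, the following Python fragment builds and broadcasts a minimal DHCPDISCOVER. The transaction ID and MAC address are made-up example values, and binding UDP port 68 typically requires administrator privileges:

    import socket
    import struct

    def build_discover(mac: bytes, xid: int = 0x12345678) -> bytes:
        """Minimal DHCPDISCOVER per RFC 2131 (illustrative values only)."""
        msg = struct.pack("!BBBBIHH4s4s4s4s16s64s128s",
                          1, 1, 6, 0,        # op=BOOTREQUEST, htype=Ethernet, hlen=6, hops=0
                          xid, 0, 0x8000,    # transaction id, secs, broadcast flag
                          b"\0" * 4, b"\0" * 4, b"\0" * 4, b"\0" * 4,
                          mac.ljust(16, b"\0"), b"\0" * 64, b"\0" * 128)
        msg += b"\x63\x82\x53\x63"           # magic cookie
        msg += b"\x35\x01\x01"               # option 53: message type = DISCOVER
        msg += b"\xff"                       # end of options
        return msg

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.bind(("", 68))                      # DHCP client port (needs privileges)
    sock.sendto(build_discover(b"\xde\xad\xbe\xef\x00\x01"),
                ("255.255.255.255", 67))     # DHCP server port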

---------------------------------------------------------------------------------------------4. Explain the acknowledgement and retransmission process in TCP. What is congestion? Mention a few algorithms to overcome congestion.

---------------------------------------------------------------------------------------------Ans: TCP sends data in variable-length segments. Sequence numbers are based on a byte count. Acknowledgments specify the sequence number of the next byte that the receiver expects to receive. Consider that a segment gets lost or corrupted. In this case, the receiver will acknowledge all further well-received segments with an acknowledgment referring to the first byte of the missing packet. The sender will stop transmitting when it has sent all the bytes in the window. Eventually, a timeout will occur and the missing segment will be retransmitted.

Consider an example where a window size of 1500 bytes and segments of 500 bytes are used. A problem now arises, because the sender knows that segment 2 is lost or corrupted, but does not know anything about segments 3 and 4. The sender should at least retransmit segment 2, but it could also retransmit segments 3 and 4 (because they are within the current window). It is possible that segment 3 has been received and we do not know about segment 4: it might have been received, but its ACK did not reach us yet, or it might be lost. Alternatively, segment 3 was lost, and we received the ACK 1500 on the reception of segment 4. Each TCP implementation is free to react to a timeout as its implementers choose. It can retransmit only segment 2, but in the second case we will be waiting again until segment 3 times out. In this case, we lose all of the throughput advantages of the window mechanism. Or TCP might immediately resend all of the segments in the current window. Whatever the choice, maximal throughput is lost. This is because the ACK does not contain a second acknowledgment sequence number indicating the actual frame received.
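The next paragraph describes how TCP adapts its retransmission timeout to measured round-trip times. A minimal Python sketch of such an estimator follows; the ALPHA and BETA constants and the one-second floor are assumptions in the spirit of commonly used defaults (RFC 6298), not values from this text:

    # Smoothed round-trip-time estimator (illustrative sketch).
    ALPHA, BETA = 1 / 8, 1 / 4

    srtt = None     # smoothed RTT
    rttvar = None   # RTT variation

    def on_rtt_sample(rtt: float) -> float:
        """Feed one measured RTT (seconds); return the retransmission timeout."""
        global srtt, rttvar
        if srtt is None:
            srtt, rttvar = rtt, rtt / 2
        else:
            rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - rtt)
            srtt = (1 - ALPHA) * srtt + ALPHA * rtt
        return srtt + max(1.0, 4 * rttvar)

    for sample in (0.20, 0.25, 0.22, 0.80):
        print(f"RTT={sample:.2f}s -> RTO={on_rtt_sample(sample):.2f}s")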

Variable Timeout Intervals: Each TCP should implement an algorithm to adapt the timeout values to the round-trip time of the segments. To do this, TCP records the time at which a segment was sent and the time at which the ACK is received. A weighted average is calculated over several of these round-trip times, to be used as a timeout value for the next segment or segments to be sent. This is an important feature, because delays can vary in an IP network, depending on multiple factors, such as the load of an intermediate low-speed network or the saturation of an intermediate IP gateway.

Establishing a TCP connection: Before any data can be transferred, a connection has to be established between the two processes. One of the processes (usually the server) issues a passive OPEN call, the other an active OPEN call. The passive OPEN call remains dormant until another process tries to connect to it by an active OPEN. As shown in the figure, three TCP segments are exchanged over the network. This whole process is known as a three-way handshake. Note that the exchanged TCP segments include the initial sequence numbers from both sides, to be used on the subsequent data transfers. Closing the connection is done implicitly by sending a TCP segment with the FIN bit (no more data) set. Because the connection is full duplex (that is, there are two independent data streams, one in each direction), the FIN segment closes the data transfer in one direction only. The other process will now send the remaining data it still has to transmit and also end with a TCP segment in which the FIN bit is set. The connection is deleted (status information on both sides) after the data stream is closed in both directions. The following is a list of the different states of a TCP connection:

1. LISTEN: Awaiting a connection request from another TCP layer.
2. SYN-SENT: A SYN has been sent, and TCP is awaiting the responding SYN.
3. SYN-RECEIVED: A SYN has been received, a SYN has been sent, and TCP is awaiting an ACK.
4. ESTABLISHED: The three-way handshake has been completed.
5. FIN-WAIT-1: The local application has issued a CLOSE. TCP has sent a FIN, and is awaiting an ACK or a FIN.
6. FIN-WAIT-2: A FIN has been sent, and an ACK received. TCP is awaiting a FIN from the remote TCP layer.
7. CLOSE-WAIT: TCP has received a FIN, and has sent an ACK. It is awaiting a close request from the local application before sending a FIN.
8. CLOSING: A FIN has been sent, a FIN has been received, and an ACK has been sent. TCP is awaiting an ACK for the FIN that was sent.
9. LAST-ACK: A FIN has been received, and an ACK and a FIN have been sent. TCP is awaiting an ACK.
10. TIME-WAIT: FINs have been received and ACKed, and TCP is waiting two MSLs to remove the connection from the table.
11. CLOSED: Imaginary; this indicates that a connection has been removed from the connection table.

TCP Congestion Control Algorithms

One big difference between TCP and UDP is the congestion control algorithm. The TCP congestion algorithm prevents a sender from overrunning the capacity of the network (for example, slower WAN links). TCP can adapt the sender's rate to network capacity and attempt to avoid potential congestion situations. In order to understand the difference between TCP and UDP, understanding the basic TCP congestion control algorithms is very helpful. Several congestion control enhancements have been added and suggested to TCP over the years. This is still an active and ongoing research area, but modern implementations of TCP contain four intertwined algorithms as basic Internet standards:

Slow start
Congestion avoidance
Fast retransmit
Fast recovery

Slow Start: Old implementations of TCP start a connection with the sender injecting multiple segments into the network, up to the window size advertised by the receiver. Although this is OK when the two hosts are on the same LAN, problems can arise if there are routers and slower links between the sender and the receiver. Some intermediate routers cannot handle it, packets get dropped, retransmissions result, and performance is degraded.

The algorithm to avoid this is called slow start. It operates by observing that the rate at which new packets should be injected into the network is the rate at which the acknowledgments are returned by the other end. Slow start adds another window to the sender's TCP: the congestion window, called cwnd. When a new connection is established with a host on another network, the congestion window is initialized to one segment (for example, the segment size announced by the other end, or the default, typically 536 or 512). Each time an ACK is received, the congestion window is increased by one segment. The sender can transmit up to the lower value of the congestion window and the advertised window. The advertised window is flow control imposed by the receiver: the former is based on the sender's assessment of perceived network congestion, while the latter is related to the amount of available buffer space at the receiver for this connection.

The sender starts by transmitting one segment and waiting for its ACK. When that ACK is received, the congestion window is incremented from one to two, and two segments can be sent. When each of those two segments is acknowledged, the congestion window is increased to four. This provides exponential growth, although it is not exactly exponential, because the receiver might delay its ACKs, typically sending one ACK for every two segments that it receives. At some point, the capacity of the IP network (for example, slower WAN links) can be reached, and an intermediate router will start discarding packets. This tells the sender that its congestion window has gotten too large.

Congestion avoidance: The assumption of the algorithm is that packet loss caused by damage is very small (much less than 1%). Therefore, the loss of a packet signals congestion somewhere in the network between the source and destination. There are two indications of packet loss: a timeout occurs, or duplicate ACKs are received.

Congestion avoidance and slow start are independent algorithms with different objectives. But when congestion occurs, TCP must slow down its transmission rate of packets into the network and invoke slow start to get things going again. In practice, they are implemented together. Congestion avoidance and slow start require that two variables be maintained for each connection: a congestion window, cwnd, and a slow start threshold size, ssthresh.

The combined algorithm operates as follows:

1. Initialization for a given connection sets cwnd to one segment and ssthresh to 65535 bytes.
2. The TCP output routine never sends more than the lower value of cwnd or the receiver's advertised window.
3. When congestion occurs (timeout or duplicate ACK), one half of the current window size is saved in ssthresh. Additionally, if the congestion is indicated by a timeout, cwnd is set to one segment.
4. When new data is acknowledged by the other end, cwnd is increased. The way it increases depends on whether TCP is performing slow start or congestion avoidance: if cwnd is less than or equal to ssthresh, TCP is in slow start; otherwise, TCP is performing congestion avoidance.

Slow start continues until TCP is halfway to where it was when congestion occurred (since it recorded half of the window size that caused the problem in step 3), and then congestion avoidance takes over. Slow start has cwnd begin at one segment, incremented by one segment every time an ACK is received. As mentioned earlier, this opens the window exponentially: send one segment, then two, then four, and so on. Congestion avoidance dictates that cwnd be incremented by segsize*segsize/cwnd each time an ACK is received, where segsize is the segment size and cwnd is maintained in bytes. This is a linear growth of cwnd, compared to slow start's exponential growth. The increase in cwnd should be at most one segment each round-trip time (regardless of how many ACKs are received in that round-trip time), while slow start increments cwnd by the number of ACKs received in a round-trip time. Many implementations incorrectly add a small fraction of the segment size (typically the segment size divided by 8) during congestion avoidance. This is wrong and should not be emulated in future releases. The combined behavior is sketched below.
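Here is a hedged Python sketch of that bookkeeping; sizes are in bytes, and the 1460-byte segment size is an assumption:

    SEGSIZE = 1460

    cwnd = SEGSIZE          # step 1: start at one segment
    ssthresh = 65535        # step 1: initial threshold

    def on_new_ack():
        """Step 4: exponential growth below ssthresh, linear above it."""
        global cwnd
        if cwnd <= ssthresh:
            cwnd += SEGSIZE                      # slow start: +1 segment per ACK
        else:
            cwnd += SEGSIZE * SEGSIZE // cwnd    # congestion avoidance: ~1 segment/RTT

    def on_timeout():
        """Step 3: remember half the window, then restart slow start."""
        global cwnd, ssthresh
        ssthresh = max(cwnd // 2, 2 * SEGSIZE)
        cwnd = SEGSIZE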

Fast Retransmit: Fast retransmit avoids having TCP wait for a timeout to resend lost segments. A modification to the congestion avoidance algorithm was proposed in 1990. Before describing the change, realize that TCP can generate an immediate acknowledgement (a duplicate ACK) when an out-of-order segment is received. This duplicate ACK should not be delayed. The purpose of this duplicate ACK is to let the other end know that a segment was received out of order and to tell it what sequence number is expected. Because TCP does not know whether a duplicate ACK is caused by a lost segment or just a reordering of segments, it waits for a small number of duplicate ACKs to be received. It is assumed that if there is just a reordering of the segments, there will be only one or two duplicate ACKs before the reordered segment is processed, which will then generate a new ACK. If three or more duplicate ACKs are received in a row, it is a strong indication that a segment has been lost. TCP then performs a retransmission of what appears to be the missing segment, without waiting for the retransmission timer to expire.

Fast recovery: After fast retransmit sends what appears to be the missing segment, congestion avoidance, but not slow start, is performed. This is the fast recovery algorithm. It is an improvement that allows high throughput under moderate congestion, especially for large windows. The reason for not performing slow start in this case is that the receipt of the duplicate ACKs tells TCP more than just that a packet has been lost. Because the receiver can only generate the duplicate ACK when another segment is received, that segment has left the network and is in the receiver's buffer. That is, there is still data flowing between the two ends, and TCP does not want to reduce the flow abruptly by going into slow start. The fast retransmit and fast recovery algorithms are usually implemented together as follows:

1. When the third duplicate ACK in a row is received, set ssthresh to one half the current congestion window, cwnd, but no less than two segments. Retransmit the missing segment. Set cwnd to ssthresh plus three times the segment size. This inflates the congestion window by the number of segments that have left the network and that the other end has cached (3).
2. Each time another duplicate ACK arrives, increment cwnd by the segment size. This inflates the congestion window for the additional segment that has left the network. Transmit a packet, if allowed by the new value of cwnd.
3. When the next ACK arrives that acknowledges new data, set cwnd to ssthresh (the value set in step 1). This ACK is the acknowledgement of the retransmission from step 1, one round-trip time after the retransmission. Additionally, this ACK acknowledges all the intermediate segments sent between the lost packet and the receipt of the first duplicate ACK. This step is congestion avoidance, because TCP is down to one half the rate it was at when the packet was lost.
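Continuing the congestion-window sketch above, the three steps can be expressed as follows (a hedged illustration; the retransmission call is a hypothetical hook):

    SEGSIZE = 1460          # carried over from the previous sketch
    cwnd = SEGSIZE
    ssthresh = 65535
    dup_acks = 0

    def on_duplicate_ack():
        global cwnd, ssthresh, dup_acks
        dup_acks += 1
        if dup_acks == 3:                        # step 1: fast retransmit
            ssthresh = max(cwnd // 2, 2 * SEGSIZE)
            # resend_missing_segment()           # hypothetical send hook
            cwnd = ssthresh + 3 * SEGSIZE        # inflate for the 3 cached segments
        elif dup_acks > 3:                       # step 2: another segment has left
            cwnd += SEGSIZE

    def on_ack_of_new_data():
        global cwnd, dup_acks
        if dup_acks >= 3:
            cwnd = ssthresh                      # step 3: deflate; congestion avoidance
        dup_acks = 0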

---------------------------------------------------------------------------------------------5. Differentiate between permanent and transient host groups. What is IGMP snooping? Bring out the differences between the two PIM modes.

---------------------------------------------------------------------------------------------Ans: IP multicasting is defined as the transmission of an IP datagram to a "host group", a set of zero or more hosts identified by a single IP destination address. A multicast datagram is delivered to all members of its destination host group with the same "best-efforts" reliability as regular unicast IP datagrams, i.e. the datagram is not guaranteed to arrive at all members of the destination group or in the same order relative to other datagrams. The membership of a host group is dynamic; that is, hosts may join and leave groups at any time. There is no restriction on the location or number of members in a host group, but membership in a group may be restricted to only those hosts possessing a private access key. A host may be a member of more than one group at a time. A host need not be a member of a group to send datagrams to it.

A host group may be permanent or transient. A permanent group has a well-known, administratively assigned IP address. It is the address, not the membership of the group, that is permanent; at any time a permanent group may have any number of members, even zero. A transient group, on the other hand, is assigned an address dynamically when the group is created, at the request of a host. A transient group ceases to exist, and its address becomes eligible for reassignment, when its membership drops to zero.

The creation of transient groups and the maintenance of group membership information is the responsibility of "multicast agents", entities that reside in internet gateways or other special-purpose hosts. There is at least one multicast agent directly attached to every IP network or subnetwork that supports IP multicasting. A host requests the creation of new groups, and joins or leaves existing groups, by exchanging messages with a neighboring agent. Multicast agents are also responsible for internetwork delivery of multicast IP datagrams. When sending a multicast IP datagram, a host transmits it to a local network multicast address which identifies all neighboring members of the destination host group. If the group has members on other networks, a multicast agent becomes an additional recipient of the local multicast and relays the datagram to agents on each of those other networks, via the internet gateway system. Finally, the agents on the other networks each transmit the datagram as a local multicast to their own neighboring members of the destination group.

Level 2 (full support for IP multicasting) allows a host to create, join and leave host groups, as well as send IP datagrams to host groups. It requires implementation of the Internet Group Management Protocol (IGMP) and extension of the IP and local network service interfaces within the host. All of the following sections of this memo are applicable to level 2 implementations.

RFC 988, page 10: Within the IP module, the membership management operations are supported by the Internet Group Management Protocol (IGMP), specified in Appendix I. As well as having messages corresponding to each of the operations specified above, IGMP also specifies a "deadman timer" procedure whereby hosts periodically confirm their memberships with the multicast agents. The IP module must maintain a data structure listing the IP addresses of all host groups to which the host currently belongs, along with each group's loopback policy, access key, and timer variables. This data structure is used by the IP multicast transmission service to know which outgoing datagrams to loop back, and by the reception service to know which incoming datagrams to accept. The purpose of IGMP and the management interface operations is to maintain this data structure.

RFC 988, page 13: The Internet Group Management Protocol (IGMP) is used between IP hosts and their immediate neighbor multicast agents to support the creation of transient groups, the addition and deletion of members of a group, and the periodic confirmation of group membership. IGMP is an asymmetric protocol and is specified here from the point of view of a host, rather than a multicast agent.

IGMP snooping is the process of listening to Internet Group Management Protocol (IGMP) network traffic. IGMP snooping, as implied by the name, is a feature that allows a network switch to listen in on the IGMP conversation between hosts and routers. By listening to these conversations, the switch maintains a map of which links need which IP multicast streams. Multicasts may be filtered from the links which do not need them.

Purpose: A switch will, by default, flood multicast traffic to all the ports in a broadcast domain (or the VLAN equivalent). Multicast can cause unnecessary load on host devices by requiring them to process packets they have not solicited. When purposefully exploited, this is known as one variation of a denial-of-service attack. IGMP snooping is designed to prevent hosts on a local network from receiving traffic for a multicast group they have not explicitly joined. It provides switches with a mechanism to prune multicast traffic from links that do not contain a multicast listener (an IGMP client). IGMP snooping allows a switch to forward multicast traffic only to the links that have solicited it. Essentially, IGMP snooping is a layer 2 optimization for the layer 3 IGMP. IGMP snooping takes place internally on switches and is not a protocol feature. Snooping is therefore especially useful for bandwidth-intensive IP multicast applications such as IPTV.

Standard status: IGMP snooping, although an important technique, overlaps two standards organizations, namely the IEEE, which standardizes Ethernet switches, and the IETF, which standardizes IP multicast. This means that even today there is no clear owner of this technique. This is why RFC 4541 on IGMP snooping only has the status Informational[1], despite actually being referenced in other standards work, such as RFC 4903, as normative.

Implementation options: Proxy reporting: IGMP snooping with proxy reporting or report suppression actively filters IGMP packets in order to reduce load on the multicast router: joins and leaves heading upstream to the router are filtered so that only the minimal quantity of information is sent. The switch tries to ensure the router has only a single entry for the group, regardless of how many active listeners there are. If there are two active listeners in a group and the first one leaves, the switch determines that the router does not need this information, since it does not affect the status of the group from the router's point of view. However, the next time there is a routine query from the router, the switch will forward the reply from the remaining host, to prevent the router from believing there are no active listeners. It follows that with active IGMP snooping, the router will generally only know about the most recently joined member of the group.

IGMP querier: In order for IGMP, and thus IGMP snooping, to function, a multicast router must exist on the network and generate IGMP queries. The tables created for snooping (holding the member ports for each multicast group) are associated with the querier. Without a querier the tables are not created and snooping will not work. Furthermore, IGMP general queries must be unconditionally forwarded by all switches involved in IGMP snooping. Some IGMP snooping implementations include full querier capability. Others are able to proxy and retransmit queries from the multicast router.
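As a small illustration of the host side of this machinery, the following Python sketch joins a multicast group; the IP_ADD_MEMBERSHIP option triggers the IGMP membership report that a snooping switch listens for. The group address and port are arbitrary example values:

    import socket
    import struct

    GROUP, PORT = "239.1.2.3", 5000   # assumed transient group address

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    # ip_mreq: group address plus local interface (0.0.0.0 = any)
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    print(sock.recvfrom(1500))        # blocks until a group datagram arrives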

---------------------------------------------------------------------------------------------6. Discuss the common attacks against network security. Explain the several roles involved in the payment process defined by the SET specification.

---------------------------------------------------------------------------------------------Ans: Common Types of Network Attacks

Without security measures and controls in place, your data might be subjected to an attack. Some attacks are passive, meaning information is monitored; others are active, meaning the information is altered with intent to corrupt or destroy the data or the network itself. Your networks and data are vulnerable to any of the following types of attacks if you do not have a security plan in place.

Eavesdropping: In general, the majority of network communications occur in an unsecured or "cleartext" format, which allows an attacker who has gained access to data paths in your network to "listen in" on or interpret (read) the traffic. When an attacker is eavesdropping on your communications, it is referred to as sniffing or snooping. The ability of an eavesdropper to monitor the network is generally the biggest security problem that administrators face in an enterprise. Without strong encryption services that are based on cryptography, your data can be read by others as it traverses the network.

Data Modification: After an attacker has read your data, the next logical step is to alter it. An attacker can modify the data in the packet without the knowledge of the sender or receiver. Even if you do not require confidentiality for all communications, you do not want any of your messages to be modified in transit. For example, if you are exchanging purchase requisitions, you do not want the items, amounts, or billing information to be modified.

Identity Spoofing (IP Address Spoofing): Most networks and operating systems use the IP address of a computer to identify a valid entity. In certain cases, it is possible for an IP address to be falsely assumed; this is known as identity spoofing. An attacker might also use special programs to construct IP packets that appear to originate from valid addresses inside the corporate intranet. After gaining access to the network with a valid IP address, the attacker can modify, reroute, or delete your data. The attacker can also conduct other types of attacks, as described in the following sections.

Password-Based Attacks: A common denominator of most operating system and network security plans is password-based access control. This means your access rights to a computer and network resources are determined by who you are, that is, your user name and your password. Older applications do not always protect identity information as it is passed through the network for validation. This might allow an eavesdropper to gain access to the network by posing as a valid user. When an attacker finds a valid user account, the attacker has the same rights as the real user. Therefore, if the user has administrator-level rights, the attacker also can create accounts for subsequent access at a later time. After gaining access to your network with a valid account, an attacker can do any of the following:

Obtain lists of valid user and computer names and network information.

Modify server and network configurations, including access controls and routing tables.

Modify, reroute, or delete your data.

Denial-of-Service Attack: Unlike a password-based attack, the denial-of-service attack prevents normal use of your computer or network by valid users. After gaining access to your network, the attacker can do any of the following:

Randomize the attention of your internal Information Systems staff so that they do not see the intrusion immediately, which allows the attacker to make more attacks during the diversion.

Send invalid data to applications or network services, which causes abnormal termination or behavior of the applications or services.

Flood a computer or the entire network with traffic until a shutdown occurs because of the overload.

Block traffic, which results in a loss of access to network resources by authorized users.

Man-in-the-Middle Attack: As the name indicates, a man-in-the-middle attack occurs when someone between you and the person with whom you are communicating is actively monitoring, capturing, and controlling your communication transparently. For example, the attacker can re-route a data exchange. When computers are communicating at low levels of the network layer, the computers might not be able to determine with whom they are exchanging data. Man-in-the-middle attacks are like someone assuming your identity in order to read your message. The person on the other end might believe it is you, because the attacker might be actively replying as you to keep the exchange going and gain more information. This attack is capable of the same damage as an application-layer attack, described later in this section.

Compromised-Key Attack: A key is a secret code or number necessary to interpret secured information. Although obtaining a key is a difficult and resource-intensive process for an attacker, it is possible. After an attacker obtains a key, that key is referred to as a compromised key. An attacker uses the compromised key to gain access to a secured communication without the sender or receiver being aware of the attack. With the compromised key, the attacker can decrypt or modify data, and try to use the compromised key to compute additional keys, which might allow the attacker access to other secured communications.

Sniffer Attack

A sniffer is an application or device that can read, monitor, and capture network data exchanges and read network packets. If the packets are not encrypted, a sniffer provides a full view of the data inside the packet. Even encapsulated (tunneled) packets can be broken open and read unless they are encrypted and the attacker does not have access to the key. Using a sniffer, an attacker can do any of the following:

Analyze your network and gain information to eventually cause your network to crash or to become corrupted.

Read your communications.

Application-Layer Attack: An application-layer attack targets application servers by deliberately causing a fault in a server's operating system or applications. This results in the attacker gaining the ability to bypass normal access controls. The attacker takes advantage of this situation, gaining control of your application, system, or network, and can do any of the following:

Read, add, delete, or modify your data or operating system.

Introduce a virus program that uses your computers and software applications to copy viruses throughout your network.

Introduce a sniffer program to analyze your network and gain information that can eventually be used to crash or to corrupt your systems and network.

Abnormally terminate your data applications or operating systems.

Disable other security controls to enable future attacks.

----------------------------------------------------------------------------------------------

Spring 2012 Master of Computer Application (MCA) Semester VI MC0087 Internetworking with TCP/IP Assignment Set 2 (60 Marks) ---------------------------------------------------------------------------------------------1. List common terms and concepts in TCP/IP and explain them. Explain IEEE 802 Local Area network.

---------------------------------------------------------------------------------------------Ans:

Transmission Control Protocol


Transmission Control Protocol is the transport layer protocol used by most Internet applications, like Telnet, FTP and HTTP. It is a connection-oriented protocol. This means that two hosts, one a client, the other a server, must establish a connection before any data can be transferred between them. TCP provides reliability. An application that uses TCP knows that data it sends is received at the other end, and that it is received correctly. TCP uses checksums on both headers and data. When data is received, TCP sends an acknowledgement back to the sender. If the sender does not receive an acknowledgement within a certain timeframe, the data is re-sent. TCP includes mechanisms for ensuring that data which arrives out of sequence is put back into the order it was sent. It also implements flow control, so a sender cannot overwhelm a receiver with data.

TCP sends data using IP, in blocks which are called segments. The length of a segment is decided by the protocol. Each segment contains 20 bytes of header information in addition to the IP header. The TCP header starts with 16-bit source and destination port number fields. As with UDP, these fields specify the application layers that have sent and are to receive the data. An IP address and a port number taken together uniquely identify a service running on a host, and the pair is known as a socket. Next in the header comes a 32-bit sequence number. This number identifies the position in the data stream that the first byte of data in the segment should occupy. The sequence number enables TCP to maintain the data stream in the correct order even though segments may be received out of sequence. The next field is a 32-bit acknowledgement field, which is used to convey back to the sender that data has been received correctly. If the ACK flag is set, which it normally is, this field contains the position of the next byte of data that the sender of the segment expects to receive. In TCP there is no need for every segment of data to be acknowledged. The value in the acknowledgement field is interpreted as "all data up to this point received OK". This saves bandwidth when data is all being sent one way, by reducing the need for acknowledgement segments. If data is being sent in both directions simultaneously, as in a full duplex connection, then acknowledgements involve no overhead, as a segment carrying data one way can contain an acknowledgement for data sent the other way. Next in the header is a 16-bit field containing a header length and flags. TCP headers can include optional fields, so the length can vary from 20 to 60 bytes. The flags are: URG, ACK (which we have already mentioned), PSH, RST, SYN and FIN. We shall look at some of the other flags later. The header contains a field called the window size, which gives the number of bytes the receiver can accept. Then there is a 16-bit checksum, covering both header and data. Finally (before the optional data) there is a field called the urgent pointer. When the URG flag is set, this value is treated as an offset to the sequence number. It identifies the start of data in the stream that must be processed urgently. This data is often called out-of-band data. An example of its use is when a user presses the break key to interrupt the output from a program during a Telnet session.

Making a connection: Before any data can be sent between two hosts using TCP, a connection must be established. One host, called the server, listens out for connection requests. The host requesting a connection is called the client. To request a connection, a client sends a TCP segment specifying its own port number and the port that it wants to connect to. The SYN (synchronize sequence numbers) flag is set, and the client's initial data sequence number is specified. To grant the connection, the server responds with a segment in which the header contains its own initial data sequence number. The SYN and ACK flags are set. To acknowledge receipt of the client's data sequence number, the acknowledgement field contains that value plus one. To complete the connection establishment protocol, the client acknowledges the server's data sequence number by sending back a segment with the ACK flag set and the acknowledgement field containing the server's data sequence number plus one. Using TCP, segments are only sent between client and server if there is data to flow. No status polling takes place. If the communication line goes down, neither end will be aware of the failure until data needs to be sent. In practice, an application timeout would usually terminate the connection if a certain interval elapsed without any activity occurring. However, it is possible to continue a failed session as if nothing had happened if you can bring the connection up again. (Note that this is only true if your ISP gives you a fixed IP address. If IP addresses are allocated dynamically when you log on, you won't be able to resume the connection, because your socket, which, as we mentioned earlier, is comprised of your IP address and port number, would be different.)
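As a hedged illustration of the header layout described above (an example, not part of the original text), this Python sketch unpacks the 20-byte fixed TCP header and its six flags:

    import struct

    def parse_tcp_header(segment: bytes) -> dict:
        """Unpack the fixed 20-byte TCP header (options not handled)."""
        (sport, dport, seq, ack, off_flags,
         window, checksum, urgptr) = struct.unpack("!HHIIHHHH", segment[:20])
        return {
            "src_port": sport, "dst_port": dport,
            "seq": seq,                            # 32-bit sequence number
            "ack": ack,                            # next byte expected, if ACK set
            "header_len": (off_flags >> 12) * 4,   # data offset in 32-bit words
            "flags": {name: bool(off_flags & bit) for name, bit in
                      [("URG", 0x20), ("ACK", 0x10), ("PSH", 0x08),
                       ("RST", 0x04), ("SYN", 0x02), ("FIN", 0x01)]},
            "window": window,                      # receiver's advertised window
            "urgent_ptr": urgptr,
        }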
Data Transmission: Once a connection has been made, data can be sent. TCP is a sliding window protocol, which means that there is no need to wait for one segment to be acknowledged before another can be sent. Acknowledgements are sent only if required immediately, or after a certain interval has elapsed. This makes TCP an efficient protocol for bulk data transfers. One example of when an acknowledgement is sent immediately is when the sender has filled the receiver's input buffer. Flow control is implemented using the window size field in the TCP header. In the segment containing the acknowledgement, the window size would be set to zero.

When the receiver is once more able to accept data, a second acknowledgement is sent, specifying the new window size. Such an acknowledgement is called a window update.

When an interactive Telnet session is taking place, a single character typed at the keyboard could be sent in its own TCP segment. Each character could then be acknowledged by a segment coming the other way. If the characters typed are echoed by the remote host, then a further pair of segments could be generated, the first by the remote host and the second, its acknowledgement, by the Telnet client. Thus, a single typed character could result in four IP packets, each containing 20 bytes of IP header, 20 bytes of TCP header and just one byte of data, being transmitted over the Internet. TCP has some features to try to make things a bit more efficient. An acknowledgement delay of anything up to 500 ms can be specified, in the hope that within that time some data will need to be sent the other way, and the acknowledgement can piggyback along with it. The inefficiency of sending many very small segments is reduced by something called the Nagle algorithm. This states that a TCP segment containing less data than the receiver's advertised window size can only be sent if the previous segment has been acknowledged. Small amounts of data are aggregated until either they equal the window size, or the acknowledgement for the previous segment is received. The slower the connection, the longer the period during which data can be aggregated, and thus fewer separate TCP segments will be sent over a busy link.

Error Correction: An important advantage of TCP over UDP is that it is a reliable data transport protocol. It can detect whether data has been successfully received at the other end and, if it hasn't been, TCP can take steps to rectify the situation. If all else fails, it can inform the sending application of the problem so that it knows that the transmission failed. The most common problem is that a TCP segment is lost or corrupted. TCP deals with this by keeping track of the acknowledgements for the data it sends. If an acknowledgement is not received within an interval determined by the protocol, the data is sent again. The interval that TCP will wait before re-transmitting data is dependent on the speed of the connection. The protocol monitors the time it normally takes to receive an acknowledgement and uses this information to calculate the period for the retransmission timer. If an acknowledgement is not received after re-sending the data once, it is sent repeatedly, at ever-increasing intervals, until either a response is received or (usually) an application timeout value is exceeded. As already mentioned, TCP implements flow control using the window size field in the header. A potential deadlock situation arises if a receiver stops the data flow by setting its window size to zero and the window update segment that is meant to start data flowing again is lost. Each end of the connection would then be stalled, waiting for the other to do something. Acknowledgements are not themselves ACKed, so the retransmission strategy would not resolve the problem in this case. To prevent deadlock from occurring, TCP sends out window probe messages at regular intervals to query the receiver about its window size.

Closing a Connection: When the time comes to close a TCP connection, each direction of data flow must be closed down separately. One end of the connection sends a segment in which the FIN (finished sending data) flag is set.
Closing a Connection

When the time comes to close a TCP connection, each direction of data flow must be closed down separately. One end of the connection sends a segment in which the FIN (finished sending data) flag is set. The receipt of this segment is acknowledged, and the receiving end notifies its application that the other end has closed that half of the connection. The receiver can, if it wishes, continue to send data in the other direction. Normally, however, the receiving application would instruct TCP to close the other half of the connection using an identical procedure.

Network Time Protocol

A network time service is one of the simplest possible Internet applications. It tells you the time as a 32-bit value giving the number of seconds that have elapsed since midnight on 1st January 1900. Time servers use the well-known port number 37. When your time client opens UDP port 37 on the server, the server responds by sending four bytes of time information. For such a simple transaction UDP is perfectly adequate, though as it happens many time servers support connections using TCP as well. TCP's built-in reliability is of little use in this application, because by the time the protocol decides that the message may have been lost and re-sends it, the information it contained will be out of date. UDP is the most suitable protocol for real-time applications like this, and for others such as audio, video and network gaming.
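A complete client for this port-37 time service fits in a few lines. The sketch below uses Python's standard socket module; time.example.net is a placeholder host, and the 2208988800-second constant is the offset between the protocol's 1900 epoch and the 1970 Unix epoch:

    # Sketch of a Time-protocol (port 37) client over UDP.
    # "time.example.net" is a placeholder; few public servers still run this service.
    import socket, struct, datetime

    UNIX_EPOCH_OFFSET = 2208988800   # seconds between 1900-01-01 and 1970-01-01

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(5)                           # UDP gives no delivery guarantee
    s.sendto(b"", ("time.example.net", 37))   # any datagram triggers a reply
    data, _ = s.recvfrom(4)                   # four bytes: seconds since 1900
    (secs_since_1900,) = struct.unpack("!I", data)
    print(datetime.datetime.fromtimestamp(secs_since_1900 - UNIX_EPOCH_OFFSET,
                                          datetime.timezone.utc))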
Simple Network Management Protocol

A slightly more complex UDP application is the Simple Network Management Protocol (SNMP). It allows applications to glean information about how various elements of the network are performing, and to control the network by means of commands sent over it rather than by physical configuration of equipment. In SNMP there are two distinct components, the SNMP manager and SNMP agents. A manager can communicate with many agents. Typically, the SNMP manager would be an application running on the network manager's console, while agents run on user workstations and in hubs, routers and other pieces of network hardware. All communication is between the manager and an agent; agents don't communicate with each other. Communication may be infrequent and sporadic, and the amount of information exchanged small. Usually a command sent by the manager will generate just a single response.

SNMP uses UDP. This avoids the overhead of having to maintain connections between the SNMP manager and each agent. Because the communication protocol consists essentially of a request for data and a reply containing the data requested, UDP's lack of reliability is not a problem. Reliability is easily implemented within the SNMP manager by re-sending a request if no response is received within a certain period. The main function of SNMP is to allow the manager to get information from tables maintained by the agents. The tables are known as the Management Information Base (MIB). The MIB is divided into groups, each containing information about a different aspect of the network. Examples of the information that the MIB may contain include the name, type and speed of a network interface; a component's physical location and the contact person for it; and statistics such as the number of packets sent and the number that were undeliverable.

Object IDs

Data is addressed using object IDs. These are written as sequences of numbers separated by periods, rather like long IP addresses. Each number, going from left to right, represents a node in a tree structure, with related information being grouped in one branch of the tree. There are standardized object IDs for commonly used items of information, and also a section for vendor-specific information. The assignment of object IDs is controlled by the Internet Assigned Numbers Authority (IANA).

Most SNMP messages have a fixed format. In a typical transaction, an SNMP manager will send a UDP datagram to port 161 on a host running an SNMP agent. The datagram has fields for the type of message (in this case a get-request message), the transaction ID (which will be echoed in the response so that the manager can match up requests with the data received), and a list of object ID/value pairs. In the get-request message the object IDs specify the information requested and the value fields are empty. The agent will respond with a datagram in which the message type field is get-response. An error status field will indicate whether the request has been fulfilled, or whether an error such as a request for a non-existent object ID occurred. The same list of object ID/value pairs as in the get-request message will be returned, but with the value fields filled in.
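The request/response pairing, and the manager-side retry that substitutes for UDP's missing reliability, can be sketched as a toy model. This is not an SNMP encoder (real messages are BER-encoded UDP datagrams); the OIDs and values below are invented for illustration:

    # Toy model of an SNMP get-request/get-response exchange (no BER encoding).
    agent_mib = {                                # invented example OIDs and values
        "1.3.6.1.2.1.1.5.0": "router-1",         # a system-name style entry
        "1.3.6.1.2.1.2.2.1.10.1": 8_413_557,     # an interface-counter style entry
    }

    def agent_handle(msg):
        """Agent side: answer a get-request with a get-response."""
        values = {oid: agent_mib.get(oid) for oid in msg["oids"]}
        error = any(v is None for v in values.values())
        return {"type": "get-response", "request_id": msg["request_id"],
                "error": "noSuchName" if error else None, "values": values}

    def manager_get(oids, retries=3):
        """Manager side: re-send if no reply arrives (UDP reliability substitute)."""
        request = {"type": "get-request", "request_id": 42, "oids": oids}
        for _ in range(retries):
            reply = agent_handle(request)        # in reality: a UDP datagram to port 161
            if reply and reply["request_id"] == request["request_id"]:
                return reply                     # matching ID: this is our answer
        raise TimeoutError("no response from agent")

    print(manager_get(["1.3.6.1.2.1.1.5.0"]))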
Message types

There are five types of message in SNMP version 1. Apart from get-request and get-response there is set-request, used by the SNMP manager to initialize a value, and get-next-request. The latter is a bit like listing a directory with a wildcard file spec, in that it returns a list of all the available object IDs in a particular group. The fifth message type, trap, is used by SNMP agents to signal events to the SNMP manager. These messages are sent to UDP port 162. Trap messages have a format of their own, which includes a trap type field indicating the type of event being signaled: for example, the agent initializing itself, or the network device being turned off. There is a vendor-specific trap type which allows vendors to define traps for events of their own choosing.

One problem with SNMP version 1 is that the maximum size of a message is 512 bytes. This limit was chosen so that the UDP datagram in which it is sent falls within the limit (576 bytes) that all TCP/IP transports are guaranteed to pass. The error status value will indicate if the information requested is too big. Typically, this can occur when asking for text-based information, which is returned as strings of up to 255 characters in length. SNMP version 2 adds two new message types: get-bulk-request provides a way to retrieve larger amounts of data than version 1 can handle, and inform-request allows SNMP managers to communicate with one another. SNMP version 2 also adds security features which can be used to help ensure that information is passed only to agents authorised to receive it.
Telnet

Telnet is a terminal emulation application that enables a workstation to connect to a host over a TCP/IP link and interact with it as if it were a directly connected terminal. It is a client/server application. The server runs on a host on which applications are running, and passes information between the applications and the Telnet clients. The well-known port number for Telnet servers is TCP port 23. Telnet clients must convert the user data between the form in which it is transmitted and the form in which it is displayed. This is the difficult part of the application, the terminal emulation, and it has little to do with the Telnet protocol itself. Telnet protocol commands are principally used to allow the client and server to negotiate the display options, because Telnet clients and servers don't make assumptions about each other's capabilities.

TCP provides the reliability for Telnet, so neither the client nor the server need be concerned about re-sending data that is lost, nor about error checking. This makes the Telnet protocol very simple. There is no special format for TCP segments that contain commands; they simply form part of the data stream. Data is sent, usually as 7-bit ASCII, in TCP packets (which, you may recall, are called segments). A byte value of 255, called IAC (interpret as command), means that the bytes which follow are to be treated as Telnet commands and not user data. It is immediately followed by a byte that identifies the command itself, and then a value. Many commands are fixed length, so the byte after that, if not another IAC, would be treated as user data. To send the byte 255 as data, two consecutive bytes of value 255 are used. Some commands, such as those that include text values, are variable length. These are implemented using the sub-option begin (SB) and sub-option end (SE) command bytes, which enclose the variable-length data like parentheses.

Negotiation

The principal Telnet commands used to negotiate the display options when a client connects to a server are WILL (sender wants to enable this option), WONT (sender wants to disable this option), DO (sender wants the receiver to enable this option) and DONT (sender wants the receiver to disable this option). To see how this works, consider an example. You start your Telnet client, which is configured to emulate a VT220 terminal, and connect to a host. The client sends WILL <terminal-type> (where <terminal-type> is the byte value representing the terminal type option) to say that it wants to control what terminal type to use. The server will respond with DO <terminal-type> to show that it is happy for the client to control this option. Next the server will send SB <terminal-type> <send> SE. This is an invitation to the client to tell the server what its terminal type is: <send> is a byte that means "send the information". The client responds with SB <terminal-type> <is> VT220 SE (<is> is a byte that indicates that the requested information follows), and so the server is informed of the terminal emulation that the client will be using.

Client and server will negotiate various other options at the start of a connection, and certain options may also be changed during the Telnet session. The echo option determines whether or not characters that are sent by the client are echoed on the display, and by which end.
If characters typed at the terminal are to be echoed back by the host application, the Telnet server will send WILL <echo> to the client, which will agree by sending DO <echo>. This option can be changed during a session (for example, to suppress the display of password characters).

Transmission mode

Another Telnet option to be negotiated is the transmission mode. The usual mode is character-at-a-time mode, where each character typed at the terminal is echoed back by the server unless the host application specifically turns echoing off. You can tell when character-at-a-time mode is being used because there is a delay between a key being pressed and the character appearing in the terminal window. The main alternative is line mode. In this mode the client displays the characters typed and provides line editing capabilities for the user; only completed lines are sent to the server. Line mode is used by some mainframe terminal emulations. Again, it is possible to switch modes during a Telnet session if it is required to interact with an application running on the host that responds to single keystrokes rather than whole lines of input.
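The command encoding described above is easy to demonstrate. The following sketch separates negotiation commands from user data in a received buffer; the byte values are the standard ones from the Telnet specification, but the parser is deliberately simplified and assumes each command sequence arrives complete:

    # Sketch: splitting Telnet option negotiation out of the data stream.
    IAC, SE, SB, WILL, WONT, DO, DONT = 255, 240, 250, 251, 252, 253, 254

    def parse(stream: bytes):
        data, i = bytearray(), 0
        while i < len(stream):
            b = stream[i]
            if b != IAC:
                data.append(b); i += 1
            elif stream[i + 1] == IAC:               # escaped 255: literal data byte
                data.append(IAC); i += 2
            elif stream[i + 1] in (WILL, WONT, DO, DONT):
                print("negotiation:", stream[i + 1], "option", stream[i + 2])
                i += 3                               # fixed length: IAC, command, option
            elif stream[i + 1] == SB:                # variable-length sub-option
                end = stream.index(bytes([IAC, SE]), i) + 2
                print("sub-option:", stream[i + 2 : end - 2])
                i = end
            else:
                i += 2                               # other two-byte command
        return bytes(data)

    # IAC DO <echo> (option 1) followed by ordinary user data:
    print(parse(bytes([IAC, DO, 1]) + b"hello"))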

The urgent flag and urgent pointer in a TCP segment come into use when a Telnet terminal user presses the Break key to interrupt a process on the host. Break is converted by the Telnet client into two Telnet commands which are sent to the server: IP (interrupt process) followed by DO <timing mark> (again, we use angle brackets to indicate a byte representing an option). The server responds to the latter with WILL <timing mark> followed by a DM (data mark) command. The urgent pointer is set to point to the DM command byte, so even if flow control has halted the transmission of normal data this command will still be received. Data mark is a synchronization marker which causes any queued data up to that point to be discarded.

Most of the data that passes between client and server during a Telnet session is user input and application data. The important thing to realize is that Telnet does not package up this data with additional headers or control information: it is simply passed directly to TCP. One side effect of this is that you can use a Telnet client to talk to other TCP applications that use ASCII-based protocols simply by connecting to the appropriate port. Though it might not normally be sensible to do this (it's a hard way to read your email on a POP3 server, for example), it can be a useful troubleshooting tool.

Finger

Finger is a simple example of a TCP/IP application that uses an ASCII-based protocol. A Finger server is a program that supplies information to a requesting client. The information supplied usually relates to the user accounts on a host, though many ISPs use Finger servers to provide status information. The well-known Finger port is TCP port 79. A Finger client opens the Finger port and then sends a request, which is either a null string or a user name. The server responds by sending some text and closing the connection. If a null string was sent, you may receive information about all users known to the system; a user name will return information about that specific user. The Finger protocol was invented before anyone thought of spam. For obvious reasons, most organizations no longer run Finger servers, or else they have them reply with a standard message whatever the request. From our perspective the point of interest is that the protocol is pure ASCII text, as you can verify by connecting to a Finger server using a Telnet client.
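Because the protocol is a single ASCII request followed by a text reply, a working client is tiny. In the sketch below, host.example.com and alice are placeholders:

    # Sketch of a Finger client; host and user name are placeholders.
    import socket

    def finger(host, user=""):
        with socket.create_connection((host, 79), timeout=10) as s:
            s.sendall(user.encode("ascii") + b"\r\n")   # empty string asks about all users
            chunks = []
            while chunk := s.recv(4096):                # server closes when done
                chunks.append(chunk)
        return b"".join(chunks).decode("ascii", "replace")

    print(finger("host.example.com", "alice"))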
File Transfer Protocol

Telnet allows you to interact with an application running on a remote computer, but it has no facility for copying a file from that computer's hard disk to yours, nor for uploading files to the remote system. That function is carried out using the File Transfer Protocol (FTP). The FTP specification caters for several different file types, structures and transfer modes, but in practice FTP implementations recognize either text files or binary files. Text files are converted from their native format to 7-bit ASCII, with each line terminated by a carriage-return/line-feed pair, for transmission. They are converted back to the native text file format by the FTP client. FTP therefore provides a cross-platform transfer mechanism for text files. Binary files are transmitted exactly as-is. Data is transferred as a continuous stream of bytes. The TCP transport protocol provides all the reliability, making sure that data that is lost is re-sent and checking that it is received correctly. It is worth noting that error detection uses a simple 16-bit checksum, so the probability of undetected errors is higher than with a file transfer protocol like Zmodem, which uses a 32-bit CRC.

FTP is unusual compared to other TCP applications in that it uses two TCP connections. A control connection is made to the well-known FTP port number 21, and this is used to send FTP commands and receive replies. A separate data connection is established whenever a file or other information is to be transferred, and closed when the data transfer has finished. Keeping data and commands separate makes life easier for the client software, and means that the control connection is always free to send an ABOR (abort) command to terminate a lengthy data transfer. FTP commands are sent in plain 7-bit ASCII, and consist of a command of up to four characters followed by zero or more parameters (those familiar with text-mode FTP clients like the one supplied with Microsoft Windows may find it curious that FTP commands are not the same as the commands given to the FTP client). The replies consist of a three-digit number followed by an optional text explanation, for example, 250 CWD command successful. The numbers are for easy interpretation by FTP client software; the explanations are for the benefit of the user.

It is instructive to see what happens during a simple FTP session. When you connect to the FTP server (TCP port 21) it sends its welcome message prefixed by the numeric code 220. The FTP client prompts you for your username, which it then sends using the FTP command USER username. The server may respond with 331 Need password for username.
The client detects this, prompts you for the password and sends it to the server using the command PASS password. If the password is correct the client will receive the response 230 Access granted. The next thing you might do is type DIR, to list the current directory on the server. This command to the client results in two FTP commands being issued to the server. The first, PORT x,x,x,x,y1,y2, tells the server the IP address (x.x.x.x) and port number (y1 * 256 + y2) to use for the data connection. The port number is one in the range 1024 to 4999, a range used for ephemeral connections (those that are used briefly for some specific purpose). The second, LIST, causes the server to open the specified port, send the directory list, and close it again.

Downloading

The sequence for downloading a file is very similar to that for obtaining a directory list. First, a PORT command is used to specify the data connection port, and then the command RETR filename is sent to specify the file to be retrieved. The server opens the data port and sends the data, which the client writes to the hard disk. The server closes the TCP connection to the data port when the file transfer has finished, which is the signal to the client to close the newly created file.

Although you are unlikely to have to write your own client or server for any of these protocols, the descriptions above offer some useful insights into the working of Internet applications. Perhaps the most striking thing about Internet protocols is how simple they are. Because the lower protocol levels take care of reliability, routing and physical transfer matters, the application protocol need concern itself only with things relating to the application. This is the beauty of using a layered protocol stack.
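The y1,y2 arithmetic in the PORT command is easy to get wrong, so a small sketch may help; the address and port below are invented:

    # Sketch: building and interpreting the FTP PORT argument.
    def make_port_arg(ip, port):
        # e.g. ("192.168.0.5", 3500) -> "192,168,0,5,13,172"
        return ",".join(ip.split(".") + [str(port // 256), str(port % 256)])

    def parse_port_arg(arg):
        parts = arg.split(",")
        return ".".join(parts[:4]), int(parts[4]) * 256 + int(parts[5])

    print(make_port_arg("192.168.0.5", 3500))    # 3500 = 13 * 256 + 172
    print(parse_port_arg("192,168,0,5,13,172"))  # ('192.168.0.5', 3500)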

Simple Mail Transfer Protocol: SMTP

Simple Mail Transfer Protocol (SMTP) is one of the most venerable of the Internet protocols. Designed in the early 1980s, its function is purely and simply to transfer electronic mail across and between networks and other transport systems. As such, its use need not be restricted to systems that use TCP/IP. Any communications system capable of handling lines of up to 1,000 7-bit ASCII characters could be used to carry messages using SMTP. On a TCP/IP network, however, TCP provides the transport mechanism.

Application Protocol Reply Codes

Many Internet application layer protocols which are based on ASCII text commands use a system of replies in which an initial three-digit code provides the essential status information. Each digit has a particular meaning, as shown below.

1st digit:
1xx  Positive Preliminary Reply. Command accepted, but the server awaits further confirmation (continue or abort).
2xx  Positive Completion Reply. Command completed. Awaiting next command.
3xx  Positive Intermediate Reply. Command accepted, but the server awaits further information (such as a password).
4xx  Transient Negative Completion Reply. Command not accepted due to a temporary error condition (such as an HTTP server busy). The command may be tried again later.
5xx  Permanent Negative Completion Reply. Command not accepted due to a permanent error condition.

2nd digit:
x0x  Syntax Error. For example, command unimplemented, or valid but incorrect in the circumstances.
x1x  Information. The text following the code contains the answer to an information request.
x2x  Connections. Reply relates to the communications channel.
x5x  Server. Reply relates to the state of the server.

The third digit is used to distinguish individual messages.

In SMTP the sender is the client, but a client may communicate with many different servers. Mail can be sent directly from the sending host to the receiving host, requiring a separate TCP connection to be made for each copy of each message. However, few mail recipients run their own SMTP servers. It is more usual for the destination of an SMTP message to be a server that serves a group of users, such as an Internet domain. The server receives all mail intended for its users and then allows them to collect it using POP3 (Post Office Protocol version 3) or some other mail protocol. Similarly, most SMTP clients send messages to a single "smart host" server, whose job it is to relay those messages on to their eventual recipients.

An SMTP transaction begins when the sending client opens a TCP connection with the receiver using the well-known port number 25. The server acknowledges the connection by sending back a message of the form 220 SMTP Server Ready. SMTP uses a similar format of replies to FTP, which we looked at previously. The three-digit code is all the client software needs to tell if everything is going OK; the text is there to help the humans who might be troubleshooting a problem by analyzing a log of the transaction. The box "Application Protocol Reply Codes" provides more information about message reply codes. An SMTP relay server might refuse a connection by sending back a message with a 421 Service not available reply code. For example, an Internet Service Provider's SMTP server, provided for use by its subscribers to relay outgoing mail, might refuse a connection from a host whose IP address indicates that it is not a subscriber of that ISP. The basic SMTP protocol has no form of access control (the way it can be used to relay messages would make this impractical), so this is about the only way ISPs can prevent non-subscribers such as spammers from using their mail servers to send out messages.

Having received the correct acknowledgement, the sender signs on to the server by sending the string HELO hostname, where HELO is the sign-on command and hostname is the name of the host. As we will see, the hostname is used in the Received: header which the server adds to the message when it sends it on its way. This information allows the recipient to trace the path taken by the message.

Sending

Once the sender gets a 250 OK acknowledgement it can start sending messages. The protocol is extremely simple. All the sender has to do is say who the message is from, who it is to, and supply the contents of the message. Who a message is from is specified with the command MAIL FROM: <address>. This command also tells the receiver that it is about to receive a new message, so it knows to clear out its list of recipients. The address in the angle brackets (which are required) is the return path for the message. The return path is the address to which any error report, such as would be generated if the message is undeliverable, is sent. It is valid for the return path to be null, as in MAIL FROM: <>. This is typically used when sending an error report. A null return path means that no delivery failure report is required. Its main purpose is to avoid the situation in which delivery failure messages continually shuttle back and forth because both sender and recipient addresses are unreachable.

The recipients of a message are defined using the command RCPT TO: <address>. Each address is enclosed in angle brackets.
A message may have many recipients, and an RCPT TO: command is sent for each one. It is the RCPT TO: command, not anything in the message headers, that results in a message arriving at its destination. In the case of blind carbon copies or list server messages the recipient address will not appear in the headers at all. Each recipient is acknowledged with a 250 OK reply. A recipient may also be rejected with a 550 reply code; this depends on how the server has been configured. Dial-up ISP SMTP relay servers may accept every RCPT TO: command, even if the address specified is invalid, because the server doesn't know that the address is invalid until it does a DNS lookup on it. However, a mail server intended to receive messages for local users or a specific domain would reject mail for addresses that are not at that domain. Other replies may be received in response to RCPT TO: commands as a result of the SMTP server being helpful. If an address is incorrect but the server knows the correct address, it could respond with 251 User not local; will forward to <address> or 551 User not local; please try <address>. Note the different reply codes signifying whether the server has routed the message or not. These replies aren't common, and a mail client may simply treat the 551 response as an error rather than try to parse the alternative address out of the reply text. For the sake of completeness it should be pointed out that RCPT TO: commands may specify routes, not merely addresses, expressed in a form such as RCPT TO: <@hosta,@hostb:user@hostc>. Today this capability is rarely needed.

Message text

Once all the recipients have been specified, all that remains is for the sender to send the message itself. First it sends the command DATA, and then waits for a reply like 354 Start mail input; end with <CRLF>.<CRLF>. The message is then sent as a succession of lines of text. No acknowledgement is received for each line, though the sender needs to watch for a reply that indicates an error condition. The end of the message is, as indicated by the reply shown above, a period (full stop) on a line of its own. Thus, one of the simplest but most essential things that a mail client must do is ensure that a line containing a single period does not appear in the actual text. The end of the message is acknowledged with 250 OK. It's worth noting that SMTP isn't in the least bit interested in the content of the message. It could be absolutely anything, though strictly speaking it should not contain any characters with ASCII values in the range 128 to 255, and lines of text may not exceed 1,000 characters. There is no requirement for the headers to show the same sender and recipient addresses that were used in the SMTP commands, which makes it easy to make a message appear to have come from someone other than the true sender.

Tracking

When a message is relayed by the server, it inserts a Received: header at the start of the message showing the identity of the host that sent the message, its own host name, and a time stamp. Each SMTP server that a message passes through adds its own Received: header, so it is possible to track the path taken by a message. Although this information doesn't identify the sender, it may shed some light on where the message came from. After the 250 OK that acknowledges the end of the message, the sender can start again with a new message by sending a new MAIL FROM: command, or it can sign off from the server using QUIT. A 221 reply will be received in response to the QUIT command. SMTP servers should support two further commands for a minimum implementation: NOOP does nothing but should provoke a 250 OK reply, and RSET aborts the current message transaction. Other commands such as HELP are really only of interest to those trying to communicate with SMTP servers interactively.
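Python's standard smtplib module speaks exactly this dialogue (HELO/EHLO, MAIL FROM, RCPT TO, DATA) and applies the leading-period transparency rule automatically. A minimal sketch, with placeholder host and addresses:

    # Sketch of the transaction above using Python's standard smtplib.
    # Host and addresses are placeholders.
    import smtplib

    message = ("From: alice@example.org\r\n"
               "To: bob@example.net\r\n"
               "Subject: test\r\n"
               "\r\n"
               "Hello over SMTP.\r\n")

    with smtplib.SMTP("mail.example.org", 25) as server:
        server.set_debuglevel(1)   # print the command/reply dialogue on stderr
        server.sendmail("alice@example.org", ["bob@example.net"], message)

Running it with the debug level set shows the numeric replies (220, 250, 354, 221) discussed above as the library issues each command.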

IEEE 802 standards

A set of network standards developed by the IEEE. They include:
IEEE 802.1: Standards related to network management.
IEEE 802.2: General standard for the data link layer in the OSI Reference Model. The IEEE divides this layer into two sublayers: the logical link control (LLC) layer and the media access control (MAC) layer. The MAC layer varies for different network types and is defined by standards IEEE 802.3 through IEEE 802.5.
IEEE 802.3: Defines the MAC layer for bus networks that use CSMA/CD. This is the basis of the Ethernet standard.
IEEE 802.4: Defines the MAC layer for bus networks that use a token-passing mechanism (token bus networks).
IEEE 802.5: Defines the MAC layer for token-ring networks.
IEEE 802.6: Standard for Metropolitan Area Networks (MANs).

---------------------------------------------------------------------------------------------2. Explain IP datagram format. Explain fragmentation field of IP header in detail.

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------3. What is CIDR? Explain. An organization is granted the block 211.17.180.0/24. The administrator wants to create 32 subnets. a. Find the subnet mask. b. Find the number of addresses in each subnet. c. Find the first and last addresses in subnet 1. d. Find the first and last addresses in subnet 32.

Ans: CIDR (Classless Inter-Domain Routing, sometimes known as supernetting) is a way to allocate and specify the Internet addresses used in inter-domain routing more flexibly than with the original system of Internet Protocol (IP) address classes. As a result, the number of available Internet addresses has been greatly increased. CIDR is now the routing system used by virtually all gateway hosts on the Internet's backbone network, and the Internet's regulating authorities now expect every Internet service provider (ISP) to use it for routing.

The original Internet Protocol defines IP addresses in four major classes of address structure, Classes A through D. Each of these classes allocates one portion of the 32-bit Internet address format to a network address and the remaining portion to the specific host machines within the network specified by the address. One of the most commonly used classes is (or was) Class B, which allocates space for up to 65,534 host addresses. A company that needed more than 254 host machines but far fewer than 65,534 would essentially be "wasting" most of the block of addresses allocated. For this reason, the Internet was, until the arrival of CIDR, running out of address space much more quickly than necessary. CIDR effectively solved the problem by providing a new and more flexible way to specify network addresses in routers. (With the new version of the Internet Protocol, IPv6, a 128-bit address is possible, greatly expanding the number of possible addresses on the Internet. However, it will be some time before IPv6 is in widespread use.)

Using CIDR, each IP address has a network prefix that identifies either an aggregation of network gateways or an individual gateway. The length of the network prefix is also specified as part of the IP address and varies depending on the number of bits that are needed (rather than any arbitrary class assignment structure). A destination IP address or route that describes many possible destinations has a shorter prefix and is said to be less specific; a longer prefix describes a destination gateway more specifically. Routers are required to use the most specific, or longest, network prefix in the routing table when forwarding packets. A CIDR network address looks like this: 192.30.250.0/18. The "192.30.250.0" is the network address itself and the "/18" says that the first 18 bits are the network part of the address, leaving the last 14 bits for specific host addresses. CIDR lets one routing table entry represent an aggregation of networks that exist in the forward path and don't need to be specified individually on that particular gateway, much as the public telephone system uses area codes to channel calls toward a certain part of the network. This aggregation of networks in a single address is sometimes referred to as a supernet.

CIDR is supported by the Border Gateway Protocol, the prevailing exterior (inter-domain) gateway protocol. (The older Exterior Gateway Protocol and the original Routing Information Protocol do not support CIDR.) CIDR is also supported by the OSPF interior (intra-domain) gateway protocol.

Block 211.17.180.0/24 subnetted into 32 subnets:

a) The given /24 block has 256 addresses (0-255). Dividing 256 by 32 shows that each subnet will have 8 addresses. Using binary, determine the netmask that gives 8 addresses per subnet. Since 0 is the first value in a range of 8 addresses, we want zero through 7 (000-111), which confirms that only 3 bits are required to represent the addresses within each of the 32 subnets. The pattern xxxxxyyy illustrates the binary split of the final octet: the "xxxxx" represents the additional 5 bits used to define the network portion of the address, and "yyy" is the host portion. The resulting netmask is therefore /24 + 5 = /29, or 255.255.255.248:
11111111.11111111.11111111.11111000 = 255.255.255.248

b) The subnet mask above leaves 3 bits for the host portion of the address. Binary 111 = decimal 7, so each subnet covers a range of 0-7, i.e. 8 addresses per subnet.

c) The 5 network bits enumerate the subnets:
00000yyy = 1st subnet in range
00001yyy = 2nd subnet in range
00010yyy = 3rd subnet in range
00011yyy = 4th subnet in range
...
11100yyy = 29th subnet in range
11101yyy = 30th subnet in range
11110yyy = 31st subnet in range
11111yyy = 32nd subnet in range

The first and last addresses in subnet 1 (00000yyy):
00000000 = first address in subnet 1 (decimal 0): 211.17.180.0/29
00000111 = last address in subnet 1 (decimal 7): 211.17.180.7/29

d) The first and last addresses in subnet 32 (11111yyy):
11111000 = first address in subnet 32 (decimal 248): 211.17.180.248/29
11111111 = last address in subnet 32 (decimal 255): 211.17.180.255/29
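The arithmetic above can be checked with Python's standard ipaddress module:

    # Verifying the worked answer with the standard library.
    import ipaddress

    block = ipaddress.ip_network("211.17.180.0/24")
    subnets = list(block.subnets(prefixlen_diff=5))   # 2**5 = 32 subnets

    print(subnets[0].netmask)               # 255.255.255.248  (the /29 mask)
    print(subnets[0].num_addresses)         # 8 addresses per subnet
    print(subnets[0][0], subnets[0][-1])    # 211.17.180.0  211.17.180.7
    print(subnets[31][0], subnets[31][-1])  # 211.17.180.248  211.17.180.255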

---------------------------------------------------------------------------------------------4. What is Enhanced Interior Gateway Routing Protocol? Explain the salient features of EIGRP and OSPF.

---------------------------------------------------------------------------------------------Ans: The Enhanced Interior Gateway Routing Protocol (Enhanced IGRP) is a routing protocol developed by Cisco Systems and introduced with Software Release 9.21 and Cisco Internetworking Operating System (Cisco IOS) Software Release 10.0. Enhanced IGRP combines the advantages of distance vector protocols, such as IGRP, with the advantages of link-state protocols, such as Open Shortest Path First (OSPF). Enhanced IGRP uses the Diffusing Update Algorithm (DUAL) to achieve convergence quickly.

Enhanced IGRP includes support for IP, Novell NetWare, and AppleTalk.

Enhanced IGRP Network Topology

Enhanced IGRP uses a nonhierarchical (or flat) topology by default. Enhanced IGRP automatically summarizes subnet routes of directly connected networks at a network number boundary. This automatic summarization is sufficient for most IP networks. See the section "Enhanced IGRP Route Summarization" later in this chapter for more detail.

Enhanced IGRP Addressing

The first step in designing an Enhanced IGRP network is to decide how to address the network. In many cases, a company is assigned a single NIC address (such as a Class B network address) to be allocated in a corporate internetwork. Bit-wise subnetting and variable-length subnetwork masks (VLSMs) can be used in combination to save address space. Enhanced IGRP for IP supports the use of VLSMs.

Enhanced IGRP Route Summarization

With Enhanced IGRP, subnet routes of directly connected networks are automatically summarized at network number boundaries. In addition, a network administrator can configure route summarization at any interface with any bit boundary, allowing ranges of networks to be summarized arbitrarily.

Enhanced IGRP Route Selection

Routing protocols compare route metrics to select the best route from a group of possible routes. The following factors are important to understand when designing an Enhanced IGRP internetwork. Enhanced IGRP uses the same vector of metrics as IGRP: separate metric values are assigned for bandwidth, delay, reliability and load. By default, Enhanced IGRP computes the metric for a route by using the minimum bandwidth of each hop in the path and adding a media-specific delay for each hop. The metrics used by Enhanced IGRP are as follows:

Bandwidth: Bandwidth is deduced from the interface type. It can be modified with the bandwidth command.
Delay: Each media type has a propagation delay associated with it. Modifying delay is very useful to optimize routing in networks with satellite links. It can be modified with the delay command.
Reliability: Reliability is dynamically computed as a rolling weighted average over five seconds.
Load: Load is dynamically computed as a rolling weighted average over five seconds.
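Assuming the default K-values Cisco documents for this composite metric (K1 = K3 = 1, K2 = K4 = K5 = 0), the calculation reduces to a bandwidth term plus a delay term. A sketch with invented interface figures:

    # Default EIGRP composite metric: 256 * (10^7 / min_bw_kbps + sum_delay_10us).
    # Assumes the default K-values (K1 = K3 = 1, K2 = K4 = K5 = 0); figures invented.
    def eigrp_metric(bandwidths_kbps, delays_usec):
        bw_term = 10**7 // min(bandwidths_kbps)   # slowest hop dominates
        delay_term = sum(delays_usec) // 10       # delay in tens of microseconds
        return 256 * (bw_term + delay_term)

    # Two hops: a 1544-kbps T1 link and a 10-Mbps Ethernet link.
    print(eigrp_metric([1544, 10000], [20000, 1000]))   # 2195456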

When Enhanced IGRP summarizes a group of routes, it uses the metric of the best route in the summary as the metric for the summary.

Enhanced IGRP Convergence

Enhanced IGRP implements a new convergence algorithm known as DUAL (Diffusing Update Algorithm). DUAL uses two techniques that allow Enhanced IGRP to converge very quickly. First, each Enhanced IGRP router stores its neighbors' routing tables. This allows the router to use a new route to a destination instantly if another feasible route is known. If no feasible route is known based upon the routing information previously learned from its neighbors, a router running Enhanced IGRP becomes active for that destination and sends a query to each of its neighbors asking for an alternate route to the destination. These queries propagate until an alternate route is found. Routers that are not affected by a topology change remain passive and do not need to be involved in the query and response.

A router using Enhanced IGRP receives full routing tables from its neighbors when it first communicates with them. Thereafter, only changes to the routing tables are sent, and only to routers that are affected by the change. A successor is a neighboring router that is currently being used for packet forwarding, provides the least-cost route to the destination, and is not part of a routing loop. Information in the routing table is based on feasible successors. Feasible successor routes can be used in case the existing route fails; they provide the next least-cost path without introducing routing loops. The routing table keeps a list of the computed costs of reaching networks. The topology table keeps a list of all routes advertised by neighbors. For each network, the router keeps the real cost of getting to that network and also keeps the advertised cost from its neighbor. In the event of a failure, convergence is instant if a feasible successor can be found. A neighbor is a feasible successor if it meets the feasibility condition set by DUAL: its advertised distance to the destination must be less than the router's current feasible distance, which guarantees that the path through that neighbor is loop-free.
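A minimal sketch of that feasibility test, with invented costs:

    # Sketch of DUAL's feasibility condition. For each neighbor we know the cost
    # it advertises to the destination and our total cost through it; figures invented.
    neighbors = {
        "R2": {"advertised": 10, "total": 25},   # current successor
        "R3": {"advertised": 20, "total": 30},
        "R4": {"advertised": 40, "total": 45},
    }

    feasible_distance = min(n["total"] for n in neighbors.values())   # 25, via R2

    # Feasibility condition: advertised distance < current feasible distance.
    feasible_successors = [name for name, n in neighbors.items()
                           if n["advertised"] < feasible_distance]
    print(feasible_successors)   # ['R2', 'R3']; R4 (40 >= 25) might loop back through us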
Enhanced IGRP Network Scalability

Network scalability is limited by two factors: operational issues and technical issues. Operationally, Enhanced IGRP provides easy configuration and growth. Technically, Enhanced IGRP uses resources at less than a linear rate with the growth of a network.

Memory

A router running Enhanced IGRP stores all routes advertised by neighbors so that it can adapt quickly to alternate routes. The more neighbors a router has, the more memory it uses. Enhanced IGRP's automatic route aggregation bounds routing table growth naturally; additional bounding is possible with manual route aggregation.

CPU

Enhanced IGRP uses the DUAL algorithm to provide fast convergence. DUAL recomputes only the routes affected by a topology change. DUAL is not computationally complex, so it does not require a lot of CPU.

Bandwidth

Enhanced IGRP uses partial updates. Partial updates are generated only when a change occurs; only the changed information is sent, and only to the routers affected. Because of this, Enhanced IGRP is very efficient in its usage of bandwidth. Some additional bandwidth is used by Enhanced IGRP's HELLO protocol to maintain adjacencies between neighboring routers.

Enhanced IGRP Security

Enhanced IGRP is available only on Cisco routers. This prevents accidental or malicious routing disruption caused by hosts in a network. In addition, route filters can be set up on any interface to prevent learning or propagating routing information inappropriately.

OSPF

OSPF is an Interior Gateway Protocol (IGP) developed for use in Internet Protocol (IP)-based internetworks. As an IGP, OSPF distributes routing information between routers belonging to a single autonomous system (AS). An AS is a group of routers exchanging routing information via a common routing protocol. The OSPF protocol is based on shortest-path-first, or link-state, technology. Two design activities are critically important to a successful OSPF implementation:
Definition of area boundaries
Address assignment

Ensuring that these activities are properly planned and executed will make all the difference in an OSPF implementation. Each is addressed in more detail in the discussions that follow.

OSPF Network Topology

OSPF works best in a hierarchical routing environment. The first and most important decision when designing an OSPF network is to determine which routers and links are to be included in the backbone and which are to be included in each area. There are several important guidelines to consider when designing an OSPF topology:

The number of routers in an area: OSPF uses a CPU-intensive algorithm. The number of calculations that must be performed given n link-state packets is proportional to n log n. As a result, the larger and more unstable the area, the greater the likelihood of performance problems associated with routing protocol recalculation. Generally, an area should have no more than 50 routers. Areas with unstable links should be smaller.

The number of neighbors for any one router: OSPF floods all link-state changes to all routers in an area. Routers with many neighbors have the most work to do when link-state changes occur. In general, any one router should have no more than 60 neighbors.

The number of areas supported by any one router: A router must run the link-state algorithm for each link-state change that occurs in every area in which the router resides. Every area border router is in at least two areas (the backbone and one other area). In general, to maximize stability, one router should not be in more than three areas.

Designated router selection: In general, the designated router and backup designated router on a local-area network (LAN) have the most OSPF work to do. It is a good idea to select routers that are not already heavily loaded with CPU-intensive activities to be the designated router and backup designated router. In addition, it is generally not a good idea to select the same router to be the designated router on many LANs simultaneously.

Backbone Considerations

Stability and redundancy are the most important criteria for the backbone. Keeping the size of the backbone reasonable increases stability, because every router in the backbone must recompute its routes after every link-state change. Keeping the backbone small reduces the likelihood of a change and reduces the amount of CPU cycles required to recompute routes. As a general rule, each area (including the backbone) should contain no more than 50 routers. If link quality is high and the number of routes is small, the number of routers can be increased. Redundancy is important in the backbone to prevent partition when a link fails. Good backbones are designed so that no single link failure can cause a partition. OSPF backbones must be contiguous, and all routers in the backbone should be directly connected to other backbone routers.

OSPF includes the concept of virtual links. A virtual link creates a path between two area border routers (an area border router is a router that connects an area to the backbone) that are not directly connected. A virtual link can be used to heal a partitioned backbone. However, it is not a good idea to design an OSPF network to require the use of virtual links. The stability of a virtual link is determined by the stability of the underlying area, and this dependency can make troubleshooting more difficult. In addition, virtual links cannot run across stub areas (see the section "Backbone-to-Area Route Advertisement," later in this chapter, for a detailed discussion of stub areas). Avoid placing hosts (such as workstations, file servers or other shared resources) in the backbone area. Keeping hosts out of the backbone area simplifies internetwork expansion and creates a more stable environment.

Area Considerations

Individual areas must be contiguous. In this context, a contiguous area is one in which a continuous path can be traced from any router in an area to any other router in the same area. This does not mean that all routers must share a common network medium. It is not possible to use virtual links to connect a partitioned area. Ideally, areas should be richly connected internally to prevent partitioning. The two most critical aspects of area design are:
Determining how the area is addressed
Determining how the area is connected to the backbone

Areas should have a contiguous set of network and/or subnet addresses. Without a contiguous address space, it is not possible to implement route summarization. The routers that connect an area to the backbone are called area border routers. Areas can have a single area border router or multiple area border routers. In general, it is desirable to have more than one area border router per area, to minimize the chance of the area becoming disconnected from the backbone. When creating large-scale OSPF internetworks, the definition of areas and the assignment of resources within areas must be done with a pragmatic view of your internetwork. The following general rules will help ensure that your internetwork remains flexible and provides the kind of performance needed to deliver reliable resource access:

Consider physical proximity when defining areas: If a particular location is densely connected, create an area specifically for nodes at that location.

Reduce the maximum size of areas if links are unstable: If your internetwork includes unstable links, consider implementing smaller areas to reduce the effects of route flapping. Whenever a route is lost or comes online, each affected area must converge on a new topology, and the Dijkstra algorithm will run on all the affected routers. By segmenting your internetwork into smaller areas, you can isolate unstable links and deliver more reliable overall service.

OSPF Addressing and Route Summarization

Address assignment and route summarization are inextricably linked when designing OSPF internetworks. To create a scalable OSPF internetwork, you should implement route summarization. To create an environment capable of supporting route summarization, you must implement an effective hierarchical addressing scheme. The addressing structure that you implement can have a profound impact on the performance and scalability of your OSPF internetwork. The following sections discuss OSPF route summarization and three addressing options:
Separate network numbers for each area
Network Information Center (NIC)-authorized address areas created using bit-wise subnetting and VLSM
Private addressing, with a "demilitarized zone" (DMZ) buffer to the official Internet world

Note: You should keep your addressing scheme as simple as possible, but be wary of oversimplifying your address assignment scheme. Although simplicity in addressing saves time later when operating and troubleshooting your network, taking shortcuts can have severe consequences. In building a scalable addressing environment, use a structured approach. If necessary, use bit-wise subnetting, but make sure that route summarization can be accomplished at the area border routers.

OSPF Route Summarization

Route summarization is extremely desirable for a reliable and scalable OSPF internetwork. The effectiveness of route summarization, and of your OSPF implementation in general, hinges on the addressing scheme that you adopt. Summarization in an OSPF internetwork occurs between each area and the backbone area, and it must be configured manually. When planning your OSPF internetwork, consider the following issues:

Be sure that your network addressing scheme is configured so that the range of subnets assigned within an area is contiguous.

Create an address space that will permit you to split areas easily as your network grows. If possible, assign subnets according to simple octet boundaries. If you cannot assign addresses in an easy-to-remember and easy-to-divide manner, be sure to have a thoroughly defined addressing structure. If you know how your entire address space is assigned (or will be assigned), you can plan for changes more effectively.

Plan ahead for the addition of new routers to your OSPF environment. Be sure that new routers are inserted appropriately as area, backbone, or border routers. Because the addition of new routers creates a new topology, inserting them can cause unexpected routing changes (and possibly performance changes) when your OSPF topology is recomputed.

Separate Address Structures for Each Area

One of the simplest ways to allocate addresses in OSPF is to assign a separate network number for each area. With this scheme, you create a backbone and multiple areas, and assign a separate IP network number to each area. The following are some clear benefits of assigning separate address structures to each area:
Address assignment is relatively easy to remember.
Configuration of routers is relatively easy and mistakes are less likely.
Network operations are streamlined because each area has a simple, unique network number.

Bit-Wise Subnetting and VLSM

Bit-wise subnetting and variable-length subnetwork masks (VLSMs) can be used in combination to save address space. Consider a hypothetical network where a Class B address is subdivided using an area mask and distributed among 16 areas.

Route Summarization Techniques

Route summarization is particularly important in an OSPF environment because it increases the stability of the network. If route summarization is being used, routes that change within an area do not need to be re-advertised into the backbone or into other areas. Route summarization addresses two important questions of route information distribution:

What information does the backbone need to know about each area? The answer to this question focuses attention on area-to-backbone routing information.

What information does each area need to know about the backbone and other areas? The answer to this question focuses attention on backbone-to-area routing information.

Area-to-Backbone Route Advertisement

There are several key considerations when setting up your OSPF areas for proper summarization:
OSPF route summarization occurs in the area border routers.
OSPF supports VLSM, so it is possible to summarize on any bit boundary in a network or subnet address.

OSPF requires manual summarization. As you design the areas, you need to determine summarization at each area border router.

Backbone-to-Area Route Advertisement

There are four potential types of routing information in an area:

Default: If an explicit route cannot be found for a given IP network or subnetwork, the router will forward the packet to the destination specified in the default route.

Intra-area routes: Explicit network or subnet routes must be carried for all networks or subnets inside an area.

Inter-area routes: Areas may carry explicit network or subnet routes for networks or subnets that are in this AS but not in this area.

External routes: When different ASs exchange routing information, the routes they exchange are referred to as external routes.

In general, it is desirable to restrict routing information in any area to the minimal set that the area needs. There are three types of areas, defined in accordance with the routing information that is used in them:

Non-stub areas: Non-stub areas carry a default route, static routes, intra-area routes, inter-area routes and external routes. An area must be a non-stub area when it contains a router that uses both OSPF and any other protocol, such as the Routing Information Protocol (RIP); such a router is known as an autonomous system border router (ASBR). An area must also be a non-stub area when a virtual link is configured across it. Non-stub areas are the most resource-intensive type of area.

Stub areas: Stub areas carry a default route, intra-area routes and inter-area routes, but they do not carry external routes. Stub areas are recommended for areas that have only one area border router, and they are often useful in areas with multiple area border routers. See "Controlling Inter-area Traffic," later in this chapter, for a detailed discussion of the design trade-offs in areas with multiple area border routers. There are two restrictions on the use of stub areas: virtual links cannot be configured across them, and they cannot contain an ASBR.

Stub areas without summaries: Software releases 9.1(11), 9.21(2), and 10.0(1) and later support stub areas without summaries, allowing you to create areas that carry only a default route and intra-area routes. Stub areas without summaries do not carry inter-area routes or external routes. This type of area is recommended for simple configurations where a single router connects an area to the backbone.

OSPF Route Selection

When designing an OSPF internetwork for efficient route selection, consider three important topics:
Tuning OSPF metrics
Controlling inter-area traffic
Load balancing in OSPF internetworks

Tuning OSPF Metrics

The default value for OSPF metrics is based on bandwidth. The following characteristics show how OSPF metrics are generated:
Each link is given a metric value based on its bandwidth. The metric for a specific link is the inverse of the bandwidth for that link.
Link metrics are normalized to give Fast Ethernet a metric of 1.
The metric for a route is the sum of the metrics for all the links in the route.

Note: In some cases, your network might implement a media type that is faster than the fastest default media configurable for OSPF (Fast Ethernet). An example of a faster media type is ATM. By default, a faster media type will be assigned a cost equal to the cost of a Fast Ethernet link: a link-state metric cost of 1. Given an environment with both Fast Ethernet and a faster media type, you must manually configure link costs so that the faster link has the lower metric: configure the Fast Ethernet link with a cost greater than 1, and the faster link with a cost less than the assigned Fast Ethernet cost. Use the ip ospf cost interface configuration command to modify link-state cost. When route summarization is enabled, OSPF uses the metric of the best route in the summary.
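A sketch of this default cost derivation, using a 100-Mbps reference bandwidth (which is what gives Fast Ethernet a cost of 1):

    # Sketch: default OSPF link cost = reference bandwidth / link bandwidth,
    # with a floor of 1. Reference of 100 Mbps normalizes Fast Ethernet to 1.
    REFERENCE_BPS = 100_000_000

    def ospf_cost(link_bps):
        return max(1, REFERENCE_BPS // link_bps)

    for name, bps in [("56-kbps serial", 56_000), ("T1", 1_544_000),
                      ("10-Mbps Ethernet", 10_000_000),
                      ("Fast Ethernet", 100_000_000),
                      ("155-Mbps ATM", 155_000_000)]:
        print(name, ospf_cost(bps))   # the ATM link also floors at cost 1 by default

The last line illustrates the note above: any media faster than the reference bandwidth defaults to the same cost as Fast Ethernet, so its cost must be adjusted manually.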

There are two forms of external metrics: type 1 and type 2. Using an external type 1 metric results in routes adding the internal OSPF metric to the external route metric. External type 2 metrics do not add the internal metric to external routes. The external type 1 metric is generally preferred. If you have more than one external connection, either metric can affect how multiple paths are used. Controlling Inter-area Traffic When an area has only a single area border router, all traffic that does not belong in the area will be sent to the area border router. In areas that have multiple area border routers, two choices are available for traffic that needs to leave the area: Use the area border router closest to the originator of the traffic. (Traffic leaves the area as soon as possible.) Use the area border router closest to the destination of the traffic. (Traffic leaves the area as late as possible.) If the area border routers inject only the default route, the traffic goes to the area border router that is closest to the source of the traffic. Generally, this behavior is desirable because the backbone typically has higher bandwidth lines available. However, if you want the traffic to use the area border router that is nearest the destination (so that traffic leaves the area as late as possible), the area border routers should inject summaries into the area instead of just injecting the default route. Most network designers prefer to avoid asymmetric routing (that is, using a different path for packets that are going from A to B than for those packets that are going from B to A.) It is important to understand how routing occurs between areas to avoid asymmetric routing. Load Balancing in OSPF Internetworks Internetwork topologies are typically designed to provide redundant routes in order to prevent a partitioned network. Redundancy is also useful to provide additional bandwidth for high traffic areas. If equal-cost paths between nodes exist, Cisco routers automatically load balance in an OSPF environment. OSPF Convergence One of the most attractive features about OSPF is the ability to quickly adapt to topology changes. There are two components to routing convergence: Detection of topology changes---OSPF uses two mechanisms to detect topology changes. Interface status changes (such as carrier failure on a serial link) is the first mechanism. The second mechanism is failure of OSPF to receive a hello packet from its neighbor within a timing window called a dead timer. Once this timer expires, the router assumes the neighbor is down. The dead timer is configured using the ip ospf dead-interval

OSPF Network Scalability

Your ability to scale an OSPF internetwork depends on your overall network structure and addressing scheme. As outlined in the preceding discussions of network topology and route summarization, adopting a hierarchical addressing environment and a structured address assignment are the most important factors in determining the scalability of your internetwork. Network scalability is affected by both operational and technical considerations:

Operationally, OSPF networks should be designed so that areas do not need to be split to accommodate growth. Address space should be reserved to permit the addition of new areas.
Technically, scaling is determined by the utilization of three resources: memory, CPU, and bandwidth.

Memory

An OSPF router stores all of the link states for all of the areas that it is in. In addition, it can store summaries and externals. Careful use of summarization and stub areas can reduce memory use substantially.

CPU

An OSPF router uses CPU cycles whenever a link-state change occurs. Keeping areas small and using summarization dramatically reduces CPU use and creates a more stable environment for OSPF.

Bandwidth

OSPF sends partial updates when a link-state change occurs, and the updates are flooded to all routers in the area. In a quiet network, OSPF is a quiet protocol; even in a network with substantial topology changes, OSPF minimizes the amount of bandwidth used.

OSPF Security

Two kinds of security are applicable to routing protocols:

Controlling the routers that participate in an OSPF network

OSPF contains an optional authentication field, and all routers within an area must agree on its value. Because OSPF is a standard protocol available on many platforms, including some hosts, using the authentication field prevents the inadvertent startup of OSPF on an uncontrolled platform in your network and reduces the potential for instability.

Controlling the routing information that routers exchange

All routers must have the same data within an OSPF area. As a result, it is not possible to use route filters in an OSPF network to provide security.

----------------------------------------------------------------------------------------------
5. Discuss in detail the RSVP message format. Discuss the approaches for connecting Intserv networks with Diffserv networks.

----------------------------------------------------------------------------------------------
6. Clearly differentiate between recursive and iterative DNS query. Discuss the various steps in domain name resolution. Explain various steps involved in SMTP mail transaction flow.

---------------------------------------------------------------------------------------------Ans:
Recursive queries

When a client system sends a recursive query to a local name server, that name server must return the IP address for the friendly name entered, indicate that it cannot find an address, or return an error saying that the requested name does not exist. Name servers do not refer the client that issued a recursive query to other DNS servers; when answering recursive queries, the originating client never receives address information directly from any DNS server other than the local name server. Typically, the local name server first checks DNS data from its own boot file, cache, database, or reverse lookup file. If it cannot obtain the answer from those local sources, it may contact other DNS servers for assistance using iterative queries and then pass the information it receives back to the client that originated the name resolution request.

Iterative queries

In iterative queries, name servers return the best information they have. Although a DNS server may not know the IP address for a given friendly name, it might know the IP address of another name server likely to have it, so it sends that information back. The response to an iterative query can be likened to a DNS server saying, "I don't have the IP address you seek, but the name server at 10.1.2.3 can tell you." Here is one example in which a local name server uses iterative queries to resolve an address for a client:

1. The local name server receives a name resolution request from a client system for a friendly name (such as www.techrepublic.com).
2. The local name server checks its records. If it finds the address, it returns it to the client; if no address is found, it proceeds to the next step.
3. The local name server sends an iterative request to a root (".") name server.
4. The root name server provides the local name server with the address of the appropriate top-level domain (.com, .net, and so on) server.
5. The local name server sends an iterative query to the top-level domain server.
6. The top-level domain server replies with the IP address of the name server that manages the friendly name's domain (such as techrepublic.com).
7. The local name server sends an iterative request to the friendly name's domain name server.
8. The friendly name's domain name server provides the IP address for the friendly name (www.techrepublic.com) being sought.
9. The local name server passes that IP address to the client.

It seems complicated, but the process completes in a matter of moments. If the name does not exist, the client receives a name error (NXDOMAIN) response instead of an address.

DNS Name Servers and Name Resolution

The preceding two sections describe the Domain Name System's hierarchical name space, and the authorities that manage it and are responsible for name registration. These two elements, the name space and name registration, are the more intangible parts of the name system, which define how it is created and managed. The tangible aspect of the name system is the set of software and hardware that enables its primary active function: name resolution. This is the specific task that allows a name system to replace cumbersome numeric addresses with easy-to-use text names. Name resolution is the part of DNS that generally gets the most attention, because it is the portion of the system that most people work with on a daily basis. DNS uses a very capable client/server name resolution method that makes use of a distributed database of name information.
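To illustrate the difference programmatically, the following Python sketch performs a simplified iterative walk from a root server downward, much as the local name server does on the client's behalf in the steps above. It assumes the third-party dnspython package and network access; 198.41.0.4 is the well-known address of a.root-servers.net. A client issuing a recursive query would instead simply call socket.getaddrinfo and let its configured resolver do all of this.

    # Simplified iterative resolution, assuming the third-party
    # "dnspython" package (pip install dnspython). Error handling,
    # CNAME chasing, and glue-less referrals are deliberately omitted.
    import dns.message
    import dns.query
    import dns.rdatatype

    def iterate(name: str, server: str = "198.41.0.4") -> str:
        """Follow referrals from a root server until an A record is found."""
        for _ in range(10):  # referral depth limit for safety
            query = dns.message.make_query(name, dns.rdatatype.A)
            response = dns.query.udp(query, server, timeout=5)
            # An A record in the answer section means we have reached a
            # server that is authoritative for the name.
            for rrset in response.answer:
                if rrset.rdtype == dns.rdatatype.A:
                    return rrset[0].address
            # Otherwise this is a referral: take a next-server address
            # from the glue records in the additional section ("the name
            # server at X can tell you") and descend one level.
            for rrset in response.additional:
                if rrset.rdtype == dns.rdatatype.A:
                    server = rrset[0].address
                    break
            else:
                raise RuntimeError("referral without glue; not handled here")
        raise RuntimeError("too many referrals")

    print(iterate("www.techrepublic.com"))

Each pass through the loop corresponds to one iterative query in the numbered walk above: root, then top-level domain server, then the domain's own name server.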

The most commonly used implementation of the DNS name resolution process is, of course, the one used for the Internet itself, which resolves many billions of name requests every day. In this section I explain in detail the concepts and operation of the DNS name resolution function. The section is broken into three subsections. The first two cover the two key software elements that work together to implement the DNS client/server name resolution function: the first describes DNS name servers and how they represent, manage, and provide data when resolution is invoked; the second describes DNS clients, called resolvers, how they initiate resolution, and the steps involved in the resolution process. A third subsection ties together the information about name servers and resolvers by examining the message exchange between these units and describing the formats of messages, resource records, and DNS master files.

SMTP Mail Transaction Flow

Although mail commands and replies are rigidly defined, the exchange can easily be followed in Fig. 8.2. All exchanged commands, replies, and data are text lines delimited by a <CRLF>. All replies have a numeric code at the beginning of the line. The steps of this flow are:

1. The sender SMTP establishes a TCP connection with the destination SMTP and then waits for the server to send a 220 Service ready message, or a 421 Service not available message when the destination is temporarily unable to proceed.

2. HELO (an abbreviation for hello) is sent, to which the receiver identifies itself by sending back its domain name. The sender SMTP can use this to verify that it contacted the right destination SMTP. The sender SMTP can substitute an EHLO command in place of the HELO command. A receiver SMTP that does not support service extensions responds with a 500 Syntax error, command unrecognized message; the sender SMTP then retries with HELO, or, if it cannot transmit the message without one or more service extensions, it sends a QUIT message. If a receiver SMTP supports service extensions, it responds with a multiline 250 OK message that includes a list of the service extensions it supports.

3. The sender now initiates the start of a mail transaction by sending a MAIL command to the receiver. This command contains the reverse-path, which can be used to report errors. Note that a path can be more than just the user mailbox@host domain name pair; it can also contain a list of routing hosts, for example when the mail passes a mail bridge or when explicit routing information is provided in the destination address. If accepted, the receiver replies with 250 OK.

4. The second step of the actual mail exchange consists of providing the server SMTP with the destinations for the message; there can be more than one recipient. This is done by sending one or more RCPT TO:<forward-path> commands. Each of them receives a reply of 250 OK if the destination is known to the server, or 550 No such user here if it is not.

5. When all RCPT commands have been sent, the sender issues a DATA command to notify the receiver that the message contents will follow. The server replies with 354 Start mail input, end with <CRLF>.<CRLF>. Note the ending sequence that the sender should use to terminate the message data.
6. The client now sends the data line by line, ending with the 5-character sequence <CRLF>.<CRLF>, upon which the receiver acknowledges with 250 OK, or with an appropriate error message if anything went wrong.

7. At this point the client has several possible actions:

If the client has no more messages to send, it can end the connection with a QUIT command, which is answered with a 221 Service closing transmission channel reply.

If the sender has no more messages to send but is ready to receive messages (if any) from the other side, it can issue the TURN command. The two SMTPs then switch their sender/receiver roles, and the sender (previously the receiver) can send messages by starting with step 3.

If the sender has another message to send, it returns to step 3 and sends a new MAIL command.
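The low-level methods of Python's standard smtplib module map almost one-to-one onto the steps above. This is a minimal sketch only: mail.example.com and both addresses are placeholders, and real code would add error handling (or simply use sendmail or send_message rather than issuing the commands by hand).

    # Sketch of the transaction flow above using Python's smtplib.
    # The host and addresses are placeholders, not real systems.
    import smtplib

    server = smtplib.SMTP("mail.example.com", 25)   # step 1: connect, expect 220
    server.ehlo("client.example.com")               # step 2: EHLO (or server.helo())
    code, reply = server.mail("alice@example.com")  # step 3: MAIL FROM, expect 250
    print(code, reply)
    code, reply = server.rcpt("bob@example.org")    # step 4: RCPT TO, expect 250 or 550
    print(code, reply)
    code, reply = server.data(                      # steps 5-6: DATA, 354, body, 250
        b"Subject: test\r\n\r\nHello from a minimal SMTP sketch.\r\n"
    )
    print(code, reply)
    server.quit()                                   # step 7: QUIT, expect 221

Note that smtplib's data() method appends the terminating <CRLF>.<CRLF> sequence itself, so the caller supplies only the message content.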

Fig. 8.2 illustrates the normal transmission of a single message from a client to a server. Additionally, we provide a textual scenario in Fig. 8.3.

Fig. 8.2: SMTP: Normal SMTP data flow

In the previous description, we mentioned only the most important commands; these must be recognized in every SMTP implementation. Other commands exist but are optional: the RFC standards do not require that every SMTP entity implement them. However, they provide very useful functions such as relaying, forwarding, mailing lists, and so on.
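Optional capabilities are advertised in the multiline EHLO response described in step 2, and smtplib records them so a client can check for an extension before relying on it. Again, the host name below is a placeholder.

    # Discovering a server's optional service extensions via EHLO,
    # using Python's standard smtplib; the host is a placeholder.
    import smtplib

    server = smtplib.SMTP("mail.example.com", 25)
    server.ehlo()                        # multiline 250 reply lists extensions
    print(server.esmtp_features)         # e.g. {'size': '35882577', '8bitmime': ''}
    if server.has_extn("size"):          # test for an extension before using it
        print("Server advertises SIZE:", server.esmtp_features["size"])
    server.quit()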

----------------------------------------------------------------------------------------------
