
Master of Computer Application (MCA) Semester 6 MC0087 Internetworking with TCP/IP 4 Credits (Book ID: B1008)

Assignment Set 1 (60 Marks)

1. What is fragmentation? Explain its significance.

Ans: When an IP datagram travels from one host to another, it can pass through different physical networks. Each physical network has a maximum frame size, called the maximum transmission unit (MTU), which limits the length of a datagram that can be placed in one physical frame. IP implements a process to fragment datagrams that exceed the MTU: the process creates a set of datagrams, each within the maximum size, and the receiving host reassembles the original datagram.

IP requires that each link support a minimum MTU of 68 octets. This is the sum of the maximum IP header length (60 octets) and the minimum possible length of data in a non-final fragment (8 octets). If any network provides a lower value than this, fragmentation and reassembly must be implemented in the network interface layer, transparently to IP. IP implementations are not required to handle unfragmented datagrams larger than 576 bytes, but in practice most implementations accommodate larger values. An unfragmented datagram has an all-zero fragmentation information field; that is, the more fragments (MF) flag bit is zero and the fragment offset is zero.

The following steps fragment a datagram:
1. The don't fragment (DF) flag bit is checked to see if fragmentation is allowed. If the bit is set, the datagram is discarded and an ICMP error is returned to the originator.
2. Based on the MTU value, the data field is split into two or more parts. All newly created data portions, except the last one, must have a length that is a multiple of 8 octets.
3. Each data portion is placed in an IP datagram. The headers of these datagrams are minor modifications of the original:
- The more fragments flag bit is set in all fragments except the last.
- The fragment offset field in each fragment is set to the location this data portion occupied in the original unfragmented datagram, measured in 8-octet units.
- If options were included in the original datagram, the high-order bit of the option type byte determines whether this information is copied to all fragment datagrams or only to the first. For example, source route options are copied in all fragments.
- The header length field, the total length field, and the header checksum field of each new datagram are set or recalculated.
4. Each of these fragments is now forwarded as a normal IP datagram. IP handles each fragment independently: the fragments can traverse different routers to the intended destination and can be fragmented further if they pass through networks specifying a smaller MTU.

At the destination host, the data is reassembled into the original datagram. The identification field set by the sending host is used together with the source and destination IP addresses in the datagram; fragmentation does not alter this field. To reassemble the fragments, the receiving host allocates a storage buffer when the first fragment arrives and also starts a timer. When subsequent fragments of the datagram arrive, their data is copied into the buffer at the location indicated by the fragment offset field. When all fragments have arrived, the complete original unfragmented datagram is restored and processing continues as for unfragmented datagrams. If the timer expires while fragments remain outstanding, the datagram is discarded. The initial value of this timer is called the IP datagram time to live (TTL) value. It is implementation-dependent, and some implementations allow it to be configured. The netstat command can be used on some IP hosts to list the details of fragmentation.
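To make the arithmetic above concrete, the following sketch (assuming a fixed 20-byte header with no options; the function and variable names are invented for illustration) computes the fragment lengths, 8-octet offsets and more fragments (MF) flag for a payload crossing a link with a given MTU:

```python
# Sketch: planning IPv4 fragments for a payload, assuming a fixed 20-byte header.
# This is an illustration of the offset arithmetic, not a full IP implementation.

def fragment_plan(total_payload_len, mtu, header_len=20):
    """Return a list of (offset_in_8_octet_units, fragment_payload_len, more_fragments)."""
    max_payload = mtu - header_len      # room left for data in each fragment
    max_payload -= max_payload % 8      # all but the last fragment carry a multiple of 8 octets
    fragments = []
    offset = 0
    while offset < total_payload_len:
        remaining = total_payload_len - offset
        size = min(max_payload, remaining)
        more = (offset + size) < total_payload_len
        fragments.append((offset // 8, size, more))
        offset += size
    return fragments

# Example: a 4000-byte payload crossing an Ethernet-sized link (MTU 1500)
for off, size, mf in fragment_plan(4000, 1500):
    print(f"offset={off:4d} (x8 = {off*8:4d} bytes)  len={size:4d}  MF={int(mf)}")
```

For a 4000-byte payload and an MTU of 1500, this yields three fragments carrying 1480, 1480 and 1040 data bytes at offsets 0, 185 and 370 (in 8-octet units).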

2. Briefly discuss the functions of the transport layer.

Ans: The transport layer accepts data from the session layer, breaks it into packets, and delivers these packets to the network layer. It is the responsibility of the transport layer to guarantee the successful arrival of data at the destination device. It provides an end-to-end dialog: the transport layer at the source device communicates directly with the transport layer at the destination device, using message headers and control messages for this purpose. It separates the upper layers from the low-level details of data transmission and ensures efficient delivery. The OSI model provides a connection-oriented service at the transport layer, and the transport layer is responsible for determining the type of service that is to be provided to the upper layer. Normally it delivers packets in the same order in which they were sent; however, it can also facilitate the transmission of isolated messages, for which there is no guarantee (for example, in broadcast networks) that they are delivered to the destination devices in the same order as they were sent from the source. If the network layer does not provide adequate services for data transmission, the transport layer compensates: data loss due to poor network service is handled at the transport layer, which checks for any packets that are lost or damaged along the way.
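The end-to-end service described here can be seen from an application's point of view with an ordinary TCP socket: the application hands the transport layer a stream of bytes and relies on it for ordered, reliable delivery. The sketch below is illustrative only; the host name and the echo port 7 are placeholder assumptions, not a service guaranteed to exist.

```python
# Minimal sketch of the transport layer's end-to-end service using a TCP socket.
# The host and port are placeholders for an echo-style service.
import socket

HOST, PORT = "example.com", 7  # hypothetical echo service

with socket.create_connection((HOST, PORT), timeout=5) as s:
    s.sendall(b"hello transport layer")   # TCP segments, orders and retransmits as needed
    reply = s.recv(1024)                  # data arrives in order, or an error is raised
    print(reply)
```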

3. What is CIDR? Explain.

Ans: CIDR (Classless Inter-Domain Routing, sometimes known as supernetting) is a way to allocate and specify the Internet addresses used in inter-domain routing more flexibly than with the original system of Internet Protocol (IP) address classes. As a result, the number of available Internet addresses has been greatly increased. CIDR is now the routing system used by virtually all gateway hosts on the Internet's backbone network, and the Internet's regulating authorities now expect every Internet service provider (ISP) to use it for routing.

The original Internet Protocol defines IP addresses in four major classes of address structure, Classes A through D. Each of these classes allocates one portion of the 32-bit Internet address format to a network address and the remaining portion to the specific host machines within the network specified by the address. One of the most commonly used classes is (or was) Class B, which allocates space for up to 65,534 host addresses. A company that needed more than 254 host machines but far fewer than the 65,534 possible host addresses would essentially be "wasting" most of the block of addresses allocated. For this reason, the Internet was, until the arrival of CIDR, running out of address space much more quickly than necessary. CIDR effectively solved the problem by providing a new and more flexible way to specify network addresses in routers. (With a new version of the Internet Protocol, IPv6, a 128-bit address is possible, greatly expanding the number of possible addresses on the Internet. However, it will be some time before IPv6 is in widespread use.)

Using CIDR, each IP address has a network prefix that identifies either an aggregation of network gateways or an individual gateway. The length of the network prefix is also specified as part of the IP address and varies depending on the number of bits that are needed (rather than any arbitrary class assignment structure). A destination IP address or route that describes many possible destinations has a shorter prefix and is said to be less specific. A longer prefix describes a destination gateway more specifically. Routers are required to use the most specific, or longest, network prefix in the routing table when forwarding packets.

A CIDR network address looks like this: 192.30.250.0/18. The "192.30.250.0" is the network address itself and the "18" says that the first 18 bits are the network part of the address, leaving the last 14 bits for specific host addresses. CIDR lets one routing table entry represent an aggregation of networks that exist in the forward path and do not need to be specified individually on that particular gateway, much as the public telephone system uses area codes to channel calls toward a certain part of the network. This aggregation of networks in a single address is sometimes referred to as a supernet. CIDR is supported by the Border Gateway Protocol (BGP), the prevailing exterior (inter-domain) gateway protocol. (The older exterior gateway protocols, Exterior Gateway Protocol and Routing Information Protocol, do not support CIDR.) CIDR is also supported by the OSPF interior (intra-domain) gateway protocol.

Example: the block 211.17.180.0/24 subnetted into 32 subnets.

a) The given /24 block has 256 addresses (0-255). Dividing 256 by 32 shows that each subnet will have 8 addresses. Using binary, determine the netmask that yields 8 addresses per network. Since 0 is the first value in a range of 8 addresses, we want zero through 7 (000-111), which confirms that only 3 bits are required to represent the addresses within each of the 32 subnets. The pattern xxxxxyyy illustrates the last octet of each address: "xxxxx" represents the additional 5 bits used to extend the network portion of the address, and "yyy" represents the host portion. The resulting netmask therefore has 24 + 5 = 29 network bits, i.e. /29 or 255.255.255.248:

11111111.11111111.11111111.11111000 = 255.255.255.248

b) The subnet mask above leaves 3 bits for the host portion of the IP address. Binary 111 = decimal 7, so the range 0-7 gives 8 addresses per subnet.

c) In the pattern xxxxxyyy, the x bits select the subnet and the y bits select the host:

00000yyy = first subnet in range
00001yyy = second subnet in range
00010yyy = third subnet in range
00011yyy = fourth subnet in range
...
11100yyy = 29th subnet in range
11101yyy = 30th subnet in range
11110yyy = 31st subnet in range
11111yyy = 32nd subnet in range

The first and last addresses in subnet 1:
00000000 = first address in subnet 1 (decimal 0): 211.17.180.0/29
00000111 = last address in subnet 1 (decimal 7): 211.17.180.7/29

The first and last addresses in subnet 32:
11111000 = first address in subnet 32 (decimal 248): 211.17.180.248/29
11111111 = last address in subnet 32 (decimal 255): 211.17.180.255/29
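The same subnetting result can be checked with Python's standard ipaddress module; this is just a verification sketch, not part of the assignment's own working:

```python
# Sketch: enumerating the 32 /29 subnets of 211.17.180.0/24 with the standard
# ipaddress module (values match the worked example above).
import ipaddress

block = ipaddress.ip_network("211.17.180.0/24")
subnets = list(block.subnets(new_prefix=29))   # 2**(29-24) = 32 subnets, 8 addresses each

print(len(subnets))                            # 32
print(subnets[0])                              # 211.17.180.0/29   (addresses .0 - .7)
print(subnets[-1])                             # 211.17.180.248/29 (addresses .248 - .255)
print(subnets[0].netmask)                      # 255.255.255.248
```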

Q.4. What is congestion? Mention a few algorithms to overcome congestion.


Ans: Congestion occurs when the load offered to a network (or to a particular router or link) exceeds its capacity, so that queues build up, delays grow, and packets are dropped. TCP is the popular transport protocol for best-effort traffic in the Internet. However, TCP is not well suited for many applications, such as streaming multimedia, because TCP congestion control algorithms introduce large variations in the congestion window size (and corresponding large variations in the sending rate). Such variability in the sending rate is not acceptable to many multimedia applications. Hence, many multimedia applications are built over UDP and use no congestion control at all. The absence of congestion control in applications built over UDP may lead to congestion collapse on the Internet. In addition, the UDP flows may starve any competing TCP flows. To overcome these adverse effects, congestion control needs to be incorporated into all applications using the Internet, whether at the transport layer or provided by the application itself. Furthermore, the congestion control algorithms must be TCP-friendly, i.e. the TCP-friendly flows should not gain more throughput than competing TCP flows in the long run. Thus, in recent years, many researchers have focused on developing TCP-friendly transport protocols suitable for applications that currently use UDP. In this direction, the IETF has been working on a protocol called the Datagram Congestion Control Protocol (DCCP), which provides an unreliable datagram service with congestion control. DCCP is designed to use any suitable TCP-friendly congestion control algorithm.

With a multitude of TCP-friendly congestion control algorithms available, some important questions that need to be answered are: What are the strengths and weaknesses of the various TCP-friendly algorithms? Is there a single algorithm which is uniformly superior to the others? The first step in answering these questions is to study the short-term and long-term behavior of these algorithms. Although the goal of all TCP-friendly algorithms is to emulate the behavior of TCP in the long term, these algorithms may have an adverse impact in the short term on competing TCP flows. Since TCP-friendly algorithms are designed for smoother sending rates than TCP, they may react slowly to new connections that share a common bottleneck link. Such a slower response may have a deleterious effect on TCP flows. For example, a TCP connection suffering losses in its slow start phase may enter the congestion avoidance phase with a small window, and consequently obtain less throughput than other competing flows. Hence, a detailed study of the short-term (transient) behavior of TCP-friendly flows is required in addition to a study of their long-term behavior. Studies of this kind examine the transient behavior of three TCP-friendly congestion control algorithms: general AIMD congestion control, TFRC (equation-based congestion control) and binomial congestion control. Prior work has studied the transient behavior of these algorithms when RED queues are used at the bottleneck link; however, as droptail queues are still widely used in practice, their transient behavior with droptail queues also needs to be studied. Past work has also identified a certain unfairness of AIMD and binomial congestion control algorithms towards TCP with droptail queues.

A few algorithms used to overcome (or control) congestion are:
A. TCP's own congestion control (slow start, congestion avoidance, AIMD window adaptation)
B. Equation-based congestion control (TFRC)
C. General AIMD-based congestion control algorithms
D. Binomial congestion control algorithms
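As a hedged illustration of the AIMD idea underlying TCP and the general AIMD algorithms listed above, the sketch below applies the additive-increase/multiplicative-decrease rule once per round-trip time; the function name and the toy loss pattern are invented for illustration.

```python
# Sketch of the AIMD (additive-increase, multiplicative-decrease) rule used by
# TCP-style congestion control; alpha and beta are the general-AIMD parameters
# (TCP itself corresponds roughly to alpha = 1 segment, beta = 0.5).

def aimd_update(cwnd, loss_detected, alpha=1.0, beta=0.5):
    """Return the new congestion window (in segments) after one RTT."""
    if loss_detected:
        return max(1.0, cwnd * beta)   # multiplicative decrease on congestion
    return cwnd + alpha                # additive increase otherwise

# toy trace: grow for 10 RTTs, suffer one loss, then keep growing
cwnd = 1.0
for rtt in range(15):
    cwnd = aimd_update(cwnd, loss_detected=(rtt == 10))
    print(f"RTT {rtt:2d}: cwnd = {cwnd:.1f}")
```

Smoother TCP-friendly schemes (TFRC, binomial algorithms) replace this sawtooth with gentler increase/decrease rules while aiming for the same long-term throughput.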

Q.5. Explain the following with respect to Transport Protocols: a. Ports and Sockets b. User Datagram Protocol (UDP) c. Transmission Control Protocol (TCP)

Ans: Ports: Each process that wants to communicate with another process identifies itself to the TCP/IP protocol suite by one or more ports. A port is a 16-bit number, used by the host-to-host protocol to identify to which higher-level protocol or application program (process) it must deliver incoming messages. As some higher-level programs are themselves protocols standardized in the TCP/IP protocol suite, such as TELNET and FTP, they use the same port number in all TCP/IP implementations. Those "assigned" port numbers are called well-known ports and the standard applications well-known services. The well-known ports are controlled and assigned by the Internet Assigned Numbers Authority (IANA) and on most systems can only be used by system processes or by programs executed by privileged users. The assigned well-known ports occupy port numbers in the range 0 to 1023. The ports with numbers in the range 1024-65535 are not controlled by the IANA and on most systems can be used by ordinary user-developed programs. Confusion due to two different applications trying to use the same port numbers on one host is avoided by writing those applications to request an available port from TCP/IP. Because this port number is dynamically assigned, it may differ from one invocation of an application to the next. UDP, TCP and ISO TP-4 all use the same "port principle", and, to the extent possible, the same port numbers are used for the same services on top of UDP, TCP and ISO TP-4.
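The "request an available port" behaviour described above corresponds to binding a socket to port 0, as in this small sketch (the loopback address is used purely for illustration):

```python
# Sketch: asking the OS for an available (ephemeral) port by binding to port 0,
# which is how applications avoid clashing over fixed port numbers.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))          # port 0 = "pick any free port for me"
print(s.getsockname())            # e.g. ('127.0.0.1', 54831) - dynamically assigned
s.close()
```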

Sockets: A socket is a special type of file handle which is used by a process to request network services from the operating system. A socket address is the triple:

{protocol, local-address, local-process}

In the TCP/IP suite, for example: {tcp, 193.44.234.3, 12345}

A conversation is the communication link between two processes. An association is the 5-tuple that completely specifies the two processes that comprise a connection:

{protocol, local-address, local-process, foreign-address, foreign-process}

In the TCP/IP suite, for example, {tcp, 193.44.234.3, 1500, 193.44.234.5, 21} could be a valid association.

A half-association is either:

{protocol, local-address, local-process} or {protocol, foreign-address, foreign-process} which specify each half of a connection.

The half-association is also called a socket or a transport address. That is, a socket is an end point for communication that can be named and addressed in a network.

The socket interface is one of several application programming interfaces (APIs) to the communication protocols. Designed to be a generic communication programming interface, it was first introduced by the 4.2BSD UNIX system. Although it has not been standardized, it has become a de facto industry standard. 4.2BSD allowed two different communication domains: Internet and UNIX. 4.3BSD has added the Xerox Network System (XNS) protocols and 4.4BSD will add an extended interface to support the ISO OSI protocols.
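A connected socket makes the association 5-tuple visible from a program; in the sketch below the remote host and port are placeholder assumptions used only to illustrate getsockname() and getpeername():

```python
# Sketch: a connected TCP socket exposes the association 5-tuple -
# {protocol, local-address, local-port, foreign-address, foreign-port}.
import socket

with socket.create_connection(("example.com", 80), timeout=5) as s:
    local = s.getsockname()    # (local-address, local-port)   - our half-association
    remote = s.getpeername()   # (foreign-address, foreign-port) - the other half
    print("association:", ("tcp", *local, *remote))
```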
User Datagram Protocol (UDP): The User Datagram Protocol (UDP) is a transport layer protocol defined for use with the IP network layer protocol. It is defined by RFC 768, written by Jon Postel. It provides a best-effort datagram service to an End System (IP host). The service provided by UDP is an unreliable service that provides no guarantees for delivery and no protection from duplication (e.g. if this arises due to software errors within an Intermediate System (IS)). The simplicity of UDP reduces the overhead of using the protocol, and its services may be adequate in many cases.

UDP provides a minimal, unreliable, best-effort, message-passing transport to applications and upper-layer protocols. Compared to other transport protocols, UDP and its UDP-Lite variant are unique in that they do not establish end-to-end connections between communicating end systems. UDP communication consequently does not incur connection establishment and teardown overheads, and there is minimal associated end-system state. Because of these characteristics, UDP can offer a very efficient communication transport to some applications, but it has no inherent congestion control or reliability. Since UDP provides no congestion control of its own, on many platforms applications can send UDP datagrams at the line rate of the link interface, which is often much greater than the available path capacity; doing so would contribute to congestion along the path, so applications need to be designed responsibly.

One increasingly popular use of UDP is as a tunneling protocol, where a tunnel endpoint encapsulates the packets of another protocol inside UDP datagrams and transmits them to another tunnel endpoint, which decapsulates the UDP datagrams and forwards the original packets contained in the payload. Tunnels establish virtual links that appear to directly connect locations that are distant in the physical Internet topology, and can be used to create virtual (private) networks. Using UDP as a tunneling protocol is attractive when the payload protocol is not

supported by middleboxes that may exist along the path, because many middleboxes support UDP transmissions. UDP does not provide any communications security. Applications that need to protect their communications against eavesdropping, tampering, or message forgery therefore need to separately provide security services using additional protocol mechanisms.

Protocol Header
A computer may send UDP packets without first establishing a connection to the recipient. A UDP datagram is carried in a single IP packet and is hence limited to a maximum payload of 65,507 bytes for IPv4 and 65,527 bytes for IPv6. The transmission of large IP packets usually requires IP fragmentation. Fragmentation decreases communication reliability and efficiency and should therefore be avoided. To transmit a UDP datagram, a computer completes the appropriate fields in the UDP header (PCI) and forwards the data together with the header for transmission by the IP network layer.

The UDP protocol header consists of 8 bytes of Protocol Control Information (PCI), made up of four fields, each 2 bytes in length:

Source Port: UDP packets from a client use this as a service access point (SAP) to indicate the session on the local client that originated the packet. UDP packets from a server carry the server SAP in this field.

Destination Port: UDP packets from a client use this as a service access point (SAP) to indicate the service required from the remote server. UDP packets from a server carry the client SAP in this field.

UDP Length: The number of bytes comprising the combined UDP header information and payload data.

UDP Checksum: A checksum to verify that the end-to-end data has not been corrupted by routers or bridges in the network or by the processing in an end system. The algorithm to compute the checksum is the standard Internet checksum algorithm. Because the checksum covers the IP addresses, port numbers and protocol number, it allows the receiver to verify that it was the intended destination of the packet; because it covers the size field, it verifies that the packet is not truncated or padded. It therefore protects an application against receiving corrupted payload data in place of, or in addition to, the data that was sent. In cases where this check is not required, the value 0x0000 is placed in this field, in which case the data is not checked by the receiver.

Like other transport protocols, the UDP header and data are not processed by Intermediate Systems (IS) in the network, and are delivered to the final destination in the same form as originally transmitted. At the final destination, the UDP protocol layer receives packets from the IP network layer. These are checked using the checksum (when it is non-zero, this checks correct end-to-end operation of the network service) and all invalid PDUs are discarded. UDP does not make any provision for error reporting if the packets are not delivered. Valid data are passed to the appropriate session layer protocol identified by the source and destination port numbers (i.e. the session service access points). UDP and UDP-Lite may also be used for multicast and broadcast, allowing senders to transmit to multiple receivers.
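A minimal sketch of the connectionless, message-oriented service described above, using two UDP sockets on the loopback interface (the addresses and message text are arbitrary):

```python
# Sketch: a minimal UDP exchange on the loopback interface - no connection setup,
# each sendto() carries one self-contained datagram.
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))              # ephemeral port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"one datagram, no connection setup", addr)

data, peer = receiver.recvfrom(2048)         # the whole datagram arrives, or nothing does
print(data, "from", peer)

sender.close()
receiver.close()
```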

Transmission Control Protocol (TCP): The Transmission Control Protocol (TCP) is a connection-oriented, reliable protocol. It provides a reliable transport service between pairs of processes executing on End Systems (ES), using the network layer service provided by the IP protocol.

Figure: TCP providing reliable data transfer to FTP over an IP network using Ethernet.

TCP is stream oriented; that is, TCP protocol entities exchange streams of data. Individual bytes of data (e.g. from an application or session layer protocol) are placed in memory buffers and transmitted by TCP in transport Protocol Data Units (for TCP these are usually known as "segments"). The reliable, flow-controlled TCP service is much more complex than UDP, which only provides a best-effort service. To implement the service, TCP uses a number of protocol timers that ensure reliable and synchronised communication between the two End Systems.

For most networks, approximately 90% of current traffic uses this transport service. It is used by such applications as telnet, the World Wide Web (WWW), ftp and electronic mail. The transport header contains a Service Access Point which indicates the application protocol being carried (e.g. 23 = Telnet; 25 = Mail; 69 = TFTP; 80 = WWW (http)). The port numbers associated with these services generally have the same value as those used for UDP services (a full list of assigned port numbers is maintained by the IANA).
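The well-known port numbers quoted above can be looked up programmatically from the local services database; some entries may be missing on a given system, hence the guard in this sketch:

```python
# Sketch: looking up well-known service ports with the standard socket module
# (the numbers correspond to the examples above).
import socket

for name in ("telnet", "smtp", "tftp", "http"):
    try:
        print(name, "->", socket.getservbyname(name))
    except OSError:
        print(name, "-> not listed in this system's services database")
```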

6. With a diagram, explain the components of a VoIP networking system. Ans:

IP Telephony Server(s): This is the heart of the IP Telephony system, which provides complete call control, dial plan control and all the basic voice applications. (In smaller systems, all the functionalities of the application servers mentioned below can also be bundled into this server.)

Application Servers: Sometimes applications like IVR (Interactive Voice Response / Auto Attendant), call recording, voice mail and database integration need to be hosted on separate servers, especially for larger VoIP installations.

IP Phones: These phones connect directly to the IP network (RJ-45 based UTP cables) and provide all the voice functionalities hitherto provided by analog phones, such as caller ID display, speaker phone, speed dial keys, memory, etc.

Soft Phones: These are basically software utilities that have all the telephony functions but use the computer and a headset with microphone to make and receive calls.

Wi-Fi Phones / Dual-Mode Cell Phones: Wi-Fi phones are based on IP technology; they connect to the wireless network and act as mobile extensions. Certain cell phones come with Wi-Fi adaptors and can be used as Wi-Fi phones (if the manufacturer supports this). Cell phones can also connect to the IP Telephony server through 3G/CDMA networks for making a VoIP call.

Analog Telephony Adapters (ATA): These are specialised devices that connect to the LAN at one end and to FXO (analog trunk) or FXS (analog extension) ports at the other end.

PRI Cards: These are used to connect PRI/E1/T1 trunk lines to IP Telephony servers. Usually they connect directly to a PCI/PCI Express slot in the server.

Computer IP Network: An IP-based computer network is used to carry the voice signals across the enterprise and sometimes even to remote locations.

Some practical considerations: IP phones are much more expensive than analog phones. Voice call quality over IP networks depends on a number of parameters, such as the configuration of the right QoS parameters, latency, jitter and available bandwidth across the network. IP networks need to be built with sufficient redundancy and security for continuous availability of IP Telephony services; if there is a DoS attack on the network, for example, the telephones become inactive along with the computers. Scaling of IP Telephony systems needs to be planned properly, failing which the IP Telephony server may not be able to handle high concurrent call loads. There are hardware- and licence-based restrictions on the maximum number of concurrent calls that a single server can handle and on the maximum number of end points that can connect to a single server.

Master of Computer Application (MCA) Semester 6 MC0087 Internetworking with TCP/IP 4 Credits (Book ID: B1008)
Assignment Set 2 (60 Marks)

1. Explain the following with respect to Internetworking protocols: a. Internet Protocol (IP) b. Internet Control Message Protocol (ICMP) c. Address Resolution Protocol (ARP)

Ans: a. IP (Internet Protocol) is the primary network protocol used on the Internet, developed in the 1970s. On the Internet and many other networks, IP is often used together with the Transmission Control Protocol (TCP) and referred to jointly as TCP/IP.

IP supports unique addressing for computers on a network. Most networks use the Internet Protocol version 4 (IPv4) standard that features IP addresses four bytes (32 bits) in length. The newer Internet Protocol version 6 (IPv6) standard features addresses 16 bytes (128 bits) in length. Data on an Internet Protocol network is organized into packets. Each IP packet includes both a header (that specifies source, destination, and other information about the data) and the message data itself. IP functions at layer 3 of the OSI model. It can therefore run on top of different data link interfaces including Ethernet and Wi-Fi.
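As an illustration of the IPv4 header fields mentioned above, the following sketch unpacks a 20-byte sample header with the standard struct module (the byte string is a made-up example, and the checksum is not verified):

```python
# Sketch: unpacking the fixed 20-byte IPv4 header (no options) with struct.
import struct
import socket

sample = bytes.fromhex(
    "4500003c1c4640004006b1e6c0a80001c0a800c7"  # illustrative header bytes only
)

ver_ihl, tos, total_len, ident, flags_frag, ttl, proto, checksum, src, dst = \
    struct.unpack("!BBHHHBBH4s4s", sample)

print("version:", ver_ihl >> 4, "header length:", (ver_ihl & 0x0F) * 4, "bytes")
print("total length:", total_len, "TTL:", ttl, "protocol:", proto)   # protocol 6 = TCP
print("source:", socket.inet_ntoa(src), "destination:", socket.inet_ntoa(dst))
```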
b. ICMP is a network protocol useful in Internet Protocol (IP) network management and administration. ICMP is a required element of IP implementations. ICMP is a control protocol, meaning that it does not carry application data, but rather information about the status of the network itself. ICMP can be used to report:

- errors in the underlying communications of network applications
- availability of remote hosts
- network congestion

Perhaps the best-known example of ICMP in practice is the ping utility, which uses ICMP to probe remote hosts for responsiveness and to measure the overall round-trip time of the probe messages. ICMP also supports traceroute, which can identify the intermediate "hops" between a given source and destination. c. ARP: Short for Address Resolution Protocol, a network layer protocol used to convert an IP address into a physical address (called a DLC address), such as an Ethernet address. A host wishing to obtain a physical address broadcasts an ARP request onto the TCP/IP network. The host on the network that has the IP address in the request then replies with its physical hardware address.

There is also Reverse ARP (RARP), which can be used by a host to discover its IP address. In this case, the host broadcasts its physical address and a RARP server replies with the host's IP address.

Q. 3. Differentiate between NAT and NAPT.

Ans: Network Address Translation (NAT) is the process that modifies the IP address in the header of an IP packet while it is travelling through a routing device. NAT allows one set of IP addresses to be used for traffic within a LAN (Local Area Network) and another set of IP addresses to be used for outside traffic. One-to-one translation of IP addresses is provided by the simplest form of NAT. NAPT (Network Address and Port Translation) is an extension of NAT that allows many IP addresses to be mapped onto a single IP address. This is done with the help of the TCP and UDP port information in the outgoing traffic.

NAT: Network Address Translation modifies the IP address in the header of an IP packet while it is travelling through a routing device, allowing one set of IP addresses to be used for traffic within a LAN and another set of IP addresses for outside traffic. NAT has several advantages. It improves the security of a LAN since it provides the option to hide internal IP addresses. Furthermore, as the internal IP addresses are only used internally, they will not cause any conflicts with IP addresses used in other organizations. Also, using a single Internet connection for all the computers in a LAN is made possible by NAT. NAT works with the use of a NAT box, which is situated at the interface where the LAN is connected to the Internet. It contains a set of valid IP addresses and is responsible for performing the IP address translations.

NAPT: NAPT (Network Address and Port Translation) is used to map a set of private IP addresses onto a single public IP address or a small group of public IP addresses. NAPT is also referred to as PAT (Port Address Translation), IP masquerading, NAT overload and many-to-one NAT. In NAPT, many IP addresses are mapped onto a single IP address, which would cause ambiguity when routing the returned packets. To avoid this problem, NAPT makes use of the TCP/UDP port information in the outgoing traffic and maintains a translation table, which allows the returned packets to be routed correctly back to the requester.

Difference between NAT and NAPT: NAT modifies the IP address in the header of an IP packet while it is travelling through a routing device, and allows a different set of IP addresses to be used for traffic within a LAN than the set used for outside traffic. NAPT is a special kind of NAT in which multiple private IP addresses are mapped onto a single public IP address or a small group of public IP addresses, so NAPT involves a many-to-one translation of IP addresses. NAPT is the most widely used form of NAT, and therefore most of the time NAPT is simply referred to as NAT.
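A toy sketch of the NAPT translation table idea described above: many private (address, port) pairs share one public address by being assigned distinct public ports. All addresses, ports and names here are invented for illustration.

```python
# Sketch of a NAPT translation table: rewrite the source port so that many
# private hosts can share one public address, and remember the mapping so
# that return traffic can be routed back to the right host.

PUBLIC_IP = "203.0.113.10"          # hypothetical public address of the NAT box
next_port = 40000
nat_table = {}                       # (private_ip, private_port) -> public_port

def translate_outgoing(private_ip, private_port):
    """Return the (public_ip, public_port) used on the outside for this flow."""
    global next_port
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = next_port   # remember the mapping for return traffic
        next_port += 1
    return PUBLIC_IP, nat_table[key]

def translate_incoming(public_port):
    """Look up which internal host a returning packet belongs to."""
    for (ip, port), pub in nat_table.items():
        if pub == public_port:
            return ip, port
    return None                      # unknown flow - a real NAPT would drop it

print(translate_outgoing("192.168.1.10", 51000))   # ('203.0.113.10', 40000)
print(translate_outgoing("192.168.1.11", 51000))   # ('203.0.113.10', 40001)
print(translate_incoming(40001))                   # ('192.168.1.11', 51000)
```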

4. Discuss the various steps in domain name resolution.

Ans: The domain name resolution process can be summarized in the following steps:

1. A user program issues a request such as the gethostbyname() system call (this particular call asks for the IP address of a host by passing the host name) or the gethostname() system call (which asks for a host name of a host by passing the IP address).

2. The resolver formulates a query to the name server. (Full resolvers have a local name cache to consult first; stub resolvers do not.)

3. The name server checks to see if the answer is in its local authoritative database or cache, and if so, returns it to the client. Otherwise, it queries other available name servers, starting down from the root of the DNS tree or as high up the tree as possible.

4. The user program is finally given a corresponding IP address (or host name, depending on the query) or an error if the query could not be answered. Normally, the program will not be given a list of all the name servers that have been consulted to process the query.

Domain name resolution is a client/server process. The client function (called the resolver or name resolver) is transparent to the user and is called by an application to resolve symbolic high-level names into real IP addresses or vice versa. The name server (also called a domain name server) is the server application providing the translation between high-level machine names and IP addresses. The query/reply messages can be transported by either UDP or TCP.

Q.6. Describe the following mail applications: a. Simple Mail Transfer Protocol b. Post Office Protocol (POP) c. Internet Message Access Protocol (IMAP4)

Ans: a. Simple Mail Transfer Protocol (SMTP): SMTP is a core Internet protocol used to transfer e-mail messages between servers (first defined in RFC 821 in 1982). This contrasts with protocols such as POP3 and IMAP, which are used by messaging clients to retrieve e-mail. SMTP servers look at the destination address of a message and contact the target mail server directly. Of course, this means the Domain Name Service (DNS) has to be configured correctly, otherwise mail could be handed to the wrong server - potentially a big problem because, unless you have encrypted your messages, your e-mail will be in plain text! SMTP was designed to be a reliable message delivery system. Reliable in this case means that a message handled by SMTP is intended to get to its destination or to generate an error message accordingly. This is not the same as a guaranteed delivery service; it just does its best. To guarantee delivery would require all sorts of data exchanges that would add considerable communications overhead, which would be pointless for everyday purposes. SMTP communications are transported by TCP to ensure reliable end-to-end transport. RFC 822 defines the format of SMTP messages. RFC 822 is a straightforward specification that breaks the message into headers and a body separated by a blank line. In the header are a number of keywords and values that define the sending date, sender's address, where replies should go, and so on, while the body contains the data. Sending an SMTP message requires an exchange between the sender and receiver. First, the sending server says "HELO" (honest - SMTP servers are very polite). The sender should announce the domain it is sending from, and the receiver should reply with a completion code of 250 if it is willing to talk.

b. Post Office Protocol (POP): (1) POP is short for Post Office Protocol, a protocol used to retrieve e-mail from a mail server. Most e-mail applications (sometimes called e-mail clients) use the POP protocol, although some can use the newer IMAP (Internet Message Access Protocol). There are two versions of POP. The first, called POP2, became a standard in the mid-80s and requires SMTP to send messages. The newer version, POP3, can be used with or without SMTP. (2) POP is also short for point of presence, an access point to the Internet; ISPs typically have multiple POPs. A point of presence is a physical location, either part of the facilities of a telecommunications provider that the ISP rents or a separate location from the telecommunications provider, that houses servers, routers, ATM switches and digital/analog call aggregators. (3) POP is also short for Programmed Operator, a pseudo-opcode in a virtual machine language executed by an interpretive program; the Programmed Operator instructions provide the ability to define an instruction set for efficient encoding by calling subprograms into primary memory. (4) POP is also short for picture-outside-picture, a feature found on some televisions that allows the user to divide the screen into two same-size pictures, enabling the viewer to watch a second program; compare with picture-in-picture (PIP). Only the first of these meanings is relevant to mail applications.

c. Internet Message Access Protocol (IMAP4): IMAP (Internet Message Access Protocol) is a standard protocol for accessing e-mail from your local server. IMAP (the latest version is IMAP Version 4) is a client/server protocol in which e-mail is received and held for you by your Internet server. You (or your e-mail client) can view just the heading and the sender of a message and then decide whether to download it. You can also create and manipulate multiple folders or mailboxes on the server, delete messages, or search for certain parts of a note or an entire note. IMAP requires continual access to the server during the time that you are working with your mail.

A less sophisticated protocol is Post Office Protocol 3 (POP3). With POP3, your mail is saved for you in a single mailbox on the server. When you read your mail, all of it is immediately downloaded to your computer and, except when previously arranged, no longer maintained on the server. IMAP can be thought of as a remote file server. POP3 can be thought of as a "store-and-forward" service. POP3 and IMAP deal with the receiving of e-mail from your local server and are not to be confused with Simple Mail Transfer Protocol (SMTP), a protocol used for exchanging e-mail between points on the Internet. Typically, SMTP is used for sending only and POP3 or IMAP are used to read e-mail.
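As a hedged illustration of this division of labour (SMTP to send, POP3 to retrieve), the sketch below uses Python's standard smtplib and poplib modules; the server names, account and password are placeholders, not real services.

```python
# Sketch: sending with SMTP and retrieving with POP3 using the standard library.
# mail.example.com, pop.example.com, the account and the password are placeholders.
import smtplib
import poplib
from email.message import EmailMessage

# --- send via SMTP ---
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Hello"
msg.set_content("Sent with SMTP, to be fetched later with POP3 or IMAP.")

with smtplib.SMTP("mail.example.com", 25) as smtp:   # the client greets with EHLO/HELO
    smtp.send_message(msg)

# --- retrieve via POP3 ---
pop = poplib.POP3("pop.example.com", 110)
pop.user("bob@example.com")
pop.pass_("placeholder-password")
count, size = pop.stat()                             # number of messages and mailbox size
print(f"{count} message(s), {size} bytes on the server")
pop.quit()
```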
