
CareerAces

Operating System

Definitions
o A program that manages the hardware and software resources of a computer.
o It is the first thing that is loaded into memory when one turns the computer on.
o A program that acts as an intermediary between a user of a computer and the computer hardware.
o Software that manages the sharing of the resources of a computer and provides programmers with an interface used to access those resources.
o An operating system processes system data and user input, and responds by allocating and managing tasks and internal system resources as a service to the users and programs of the system.
o Most operating systems come with an application that provides a user interface for managing the operating system, such as a command line interpreter or graphical user interface.

Interface
o Command Line Interface (CLI)
o Graphical User Interface (GUI)

Types of System Calls
o Process control
o File management
o Device management
o Information maintenance
o Communications
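These categories correspond to the thin wrappers most languages expose over kernel system calls. A minimal sketch in Python (POSIX assumed, since fork is used; the file name demo.txt is illustrative):

```python
# Minimal sketch (POSIX assumed): each call below ultimately maps to a
# kernel system call from one of the categories listed above.
import os

pid = os.getpid()                                    # information maintenance (getpid)
fd = os.open("demo.txt", os.O_CREAT | os.O_WRONLY)   # file management (open)
os.write(fd, b"written via a system call\n")         # file management (write)
os.close(fd)                                         # file management (close)

child = os.fork()                                    # process control (fork) - POSIX only
if child == 0:
    os._exit(0)                                      # process control (exit)
else:
    os.waitpid(child, 0)                             # process control (wait)
```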

Virtual Machine

System Boot
o The operating system must be made available to the hardware so the hardware can start it.
o The bootstrap loader is a small piece of code which locates the kernel, loads it into memory, and starts it.

Kernel

The kernel's primary purpose is to manage the computer's resources and allow other programs to run and use the resources like the CPU, memory and the I/O devices in the computer.

The facilities provided by the kernel are:
o Memory management - The kernel has full access to the system's memory and must allow processes to safely access this memory as they require it.
o Device management - To perform useful functions, processes need access to the peripherals connected to the computer, which are controlled by the kernel through device drivers.
o System calls - To actually perform useful work, a process must be able to access the services provided by the kernel.

Types of Kernel:
o Monolithic kernels - Every part which is to be accessed by most programs and which cannot be put in a library is in kernel space:
- Device drivers
- Scheduler
- Memory handling
- File systems
- Network stacks
o Microkernels - In microkernels, only the parts which really require a privileged mode are in kernel space:
- Inter-process communication
- Basic scheduling
- Basic memory handling
- Basic I/O primitives

File Systems
o File systems are an integral part of any operating system with the capacity for long-term storage.
o Two distinct parts of a file system:
- The mechanism for storing files and
- The directory structure into which they are organized.
Implementation strategy

Contiguous allocation

o The first implementation strategy was that of contiguous allocation.
o The layout of each file is in contiguous disk blocks.
o Used in VM/CMS - an old IBM interactive system.
o Quick and easy calculation of the block holding data - just an offset from the start of the file.
o For sequential access, almost no seeks are required. Even direct access is fast - just seek and read; only one disk access.
o Where is the best place to put a new file?
o Problems when a file gets bigger - the whole file may have to be moved!
o External fragmentation. Compaction may be required, and it can be very expensive.

Linked allocation
o The next implementation strategy was that of linked allocation.
o All files are stored in fixed-size blocks, linked together like a linked list.
o No more variable-sized file allocation problems. Everything takes place in fixed-size chunks, which makes allocation a lot easier.
o No more external fragmentation. No need to compact or relocate files.
o Potentially terrible performance for direct-access files - you have to follow pointers from one disk block to the next!
o Even sequential access is less efficient than for contiguous files, because it may generate long seeks between blocks.
o Reliability - if you lose one pointer, you have big problems.

FAT allocation

o The next implementation strategy is FAT (File Allocation Table) allocation.
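As a rough illustration of the idea (not the real on-disk FAT layout), the sketch below keeps the "next block" links in a table rather than in the blocks themselves; the disk size and file length are made up:

```python
# Minimal sketch of FAT-style allocation: the table, not the data blocks,
# holds the "next block" links, so a file is a chain of table entries.
FREE, EOF = -2, -1
fat = [FREE] * 16              # one entry per disk block

def allocate(num_blocks):
    """Chain free blocks together and return the index of the first one."""
    chain = [i for i, e in enumerate(fat) if e == FREE][:num_blocks]
    if len(chain) < num_blocks:
        raise MemoryError("disk full")
    for cur, nxt in zip(chain, chain[1:]):
        fat[cur] = nxt
    fat[chain[-1]] = EOF
    return chain[0]

def read_chain(start):
    """Follow the FAT links to list every block of a file in order."""
    blocks, cur = [], start
    while cur != EOF:
        blocks.append(cur)
        cur = fat[cur]
    return blocks

start = allocate(4)
print(read_chain(start))       # e.g. [0, 1, 2, 3]
```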

3rd Class Process, Threads


Process

The term "process" was first used by the designers of the MULTICS in 1960's. The process has been given many definitions for instance o A program in Execution. o An asynchronous activity. o The 'animated sprit' of a procedure in execution. o The entity to which processors are assigned. o The 'dispatch able' unit. o There is no universally agreed upon definition, but the definition "Program in Execution" seem to be most frequently used. Process is not the same as program. A process is more than a program code. A process is an 'active' entity as oppose to program which consider being a 'passive' entity. o A process is the unit of work in a system. In Process model, all software on the computer is organized into a number of sequential processes. A process includes PC, registers, and variables. Conceptually, each process has its own virtual CPU. In reality, the CPU switches back and forth among processes

Process states
o New - The process is being created.
o Running - Instructions are being executed.
o Waiting / Blocked - The process is waiting for some event to occur.
o Ready - The process is waiting to be assigned to a processor.
o Terminated - The process has finished execution.

Process Control Block (PCB)

A process in an operating system is represented by a data structure known as a process control block (PCB) or process descriptor. The PCB contains important information about the specific process, including:
o The current state of the process, i.e. whether it is ready, running, or waiting.
o A unique identification of the process in order to track the concerned process.
o A pointer to the parent process.
o A pointer to the child process (if it exists).
o CPU scheduling information - the priority of the process.
o Pointers to locate the memory of the process.
o A register save area.
o The processor it is running on.
o Accounting information.
o I/O status information.
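A minimal sketch of a PCB as a plain data structure; the field names are illustrative, and real kernels (for example Linux's task_struct) keep far more state:

```python
# Minimal sketch of a PCB; field names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PCB:
    pid: int                                        # unique identification of the process
    state: str = "new"                              # new / ready / running / waiting / terminated
    parent_pid: Optional[int] = None                # pointer to the parent process
    children: list = field(default_factory=list)    # pointers to child processes
    priority: int = 0                               # CPU scheduling information
    program_counter: int = 0                        # part of the register save area
    registers: dict = field(default_factory=dict)   # register save area
    open_files: list = field(default_factory=list)  # I/O status information
    cpu_time_used: float = 0.0                      # accounting information

pcb = PCB(pid=42, parent_pid=1, priority=5)
pcb.state = "ready"
print(pcb)
```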

Different types of Processes


o Cooperating process - A process which can affect or be affected by other processes is called a co-operating process.
o Concurrent process - A sequential computer program consists of a series of instructions to be executed one after another. A concurrent program consists of several sequential programs to be executed in parallel. Each of the concurrently executing sequential programs is called a process.
o Independent process - An independent process cannot affect or be affected by the execution of another process.

Thread
o Every process has at least one thread of execution.
o What threads add to the process model is to allow multiple executions to take place in the same process environment, to a large degree independent of one another.
o Although a thread must execute in some process, the thread and its process are different concepts and can be treated separately. Processes are used to group resources together; threads are the entities scheduled for execution on the CPU.
o The threads of a process share an address space, open files, and other resources, while processes share physical memory, disks, printers, and other resources.
o Because threads have some of the properties of processes, they are sometimes called lightweight processes.
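A brief sketch of several threads running inside one process and sharing its address space (the shared results list):

```python
# Minimal sketch: three threads inside one process share the same address
# space, so they can all append to the same list directly.
import threading

results = []                      # lives in the single shared address space

def work(name):
    results.append(f"{name} ran in thread {threading.get_ident()}")

threads = [threading.Thread(target=work, args=(f"t{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("\n".join(results))
```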

Three processes with one thread

One process with three threads

Processes vs Threads
Similarities
o Like processes, threads share the CPU, and only one thread is active (running) at a time.
o Like processes, threads within a process execute sequentially.
o Like processes, a thread can create children.
o And like a process, if one thread is blocked, another thread can run.
Differences
o Unlike processes, threads are not independent of one another.
o Unlike processes, all threads can access every address in the task.
o Unlike processes, threads are designed to assist one another. Note that processes might or might not assist one another, because processes may originate from different users.

Multitasking, Multiprogramming & Multithreading

Context Switch

To give each process on a multiprogrammed machine a fair share of the CPU, a hardware clock generates interrupts periodically. This allows the operating system to schedule all processes in main memory (using a scheduling algorithm) to run on the CPU at equal intervals. Each time a clock interrupt occurs, the interrupt handler checks how much time the currently running process has used. If it has used up its entire time slice, then the CPU scheduling algorithm (in the kernel) picks a different process to run. Each switch of the CPU from one process to another is called a context switch.

4th Class CPU Scheduling and IPC


CPU Scheduling
o The objective of multiprogramming is to have some process running at all times, in order to maximize CPU utilization.
o CPU scheduling is the basis of multiprogrammed operating systems. By switching the CPU among processes, the operating system can make the computer more productive.
o Scheduling schemes can be divided into two parts:
o Non-preemptive scheduling schemes and
o Preemptive scheduling schemes.

Non-preemptive scheduling scheme

Non-preemptive scheduling is a scheme where once a process has control of the CPU no other processes can preemptively take the CPU away. The process retains the CPU until either it terminates or enters the waiting state.

There are two algorithms that can be used for non-preemptive scheduling:
o First-Come, First-Served (FCFS)
o Shortest-Job-First (SJF)

o First-Come, First-Served (FCFS)

The first process to request the CPU is the one that is allocated the CPU first, and it is very simple to implement. It can be managed using a First-In, First-Out (FIFO) queue. When the CPU is free, it is allocated to the first process waiting in the FIFO queue. Once that process has finished, the scheduler goes back to the queue and selects the first job in it.

o Shortest-Job-First (SJF)

In this scheduling scheme the process with the shortest next CPU burst gets the CPU first. By moving all the short jobs ahead of the longer jobs, the average waiting time is decreased. However, it is generally impossible to know the length of the next CPU burst in advance.
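A rough sketch comparing average waiting time under FCFS and SJF, assuming all jobs arrive at time 0 and using made-up burst lengths:

```python
# Minimal sketch: average waiting time under FCFS vs SJF for CPU bursts
# that all arrive at time 0 (burst lengths are illustrative assumptions).
def average_waiting_time(bursts):
    total_wait, elapsed = 0, 0
    for burst in bursts:
        total_wait += elapsed     # this job waited for every job before it
        elapsed += burst
    return total_wait / len(bursts)

bursts = [24, 3, 3]                           # arrival order = FCFS order
print(average_waiting_time(bursts))           # FCFS: 17.0
print(average_waiting_time(sorted(bursts)))   # SJF:   3.0
```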

Preemptive scheduling scheme

Preemptive scheduling is the second scheduling scheme. In preemptive scheduling there is no guarantee that the process using the CPU will keep it until it is finished, because the running task may be interrupted and rescheduled by the arrival of a higher priority process. There are two common preemptive scheduling algorithms:
o Round Robin (RR) and
o Shortest Remaining Time First (SRTF)

Round-Robin (RR)
o The Round-Robin scheduling scheme is similar to FCFS except that preemption is added. In the RR scheme the CPU picks a process from the ready queue and sets a timer to interrupt after one time quantum. One of two things may then happen:
o The process may need less than one time quantum to execute.
o The process may need more than one time quantum.

In the first case, when the process is allocated the CPU it executes and, because the time it requires is less than one time quantum, it gives up the CPU freely. This causes the scheduler to select another process from the ready queue. In the second case, a process that needs more than one time quantum must wait for its next turn: each process is given only one time quantum at a time. The only way for a process to keep the CPU for more than one time quantum is if it is the only process left. Otherwise, after one time quantum the process is interrupted by the timer and moved to the end of the ready queue, and the next process in line is allocated the CPU for one time quantum.
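A minimal sketch of the RR dispatch order, using made-up burst lengths and a quantum of 3:

```python
# Minimal sketch of Round-Robin: processes cycle through a ready queue and
# each gets at most one time quantum per turn (burst lengths are made up).
from collections import deque

def round_robin(bursts, quantum):
    queue = deque((name, need) for name, need in bursts.items())
    order = []
    while queue:
        name, need = queue.popleft()
        order.append((name, min(quantum, need)))
        if need > quantum:                 # not finished: back of the queue
            queue.append((name, need - quantum))
    return order

print(round_robin({"P1": 10, "P2": 4, "P3": 6}, quantum=3))
# [('P1', 3), ('P2', 3), ('P3', 3), ('P1', 3), ('P2', 1), ('P3', 3), ('P1', 3), ('P1', 1)]
```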

Shortest Remaining Time First ( SRTF ) o In the Shortest Remaining Time First (SRTF) algorithm, the process that is running is compared to the processes in the ready queue. If a process in the ready queue is shorter than the process running, then the running task is preempted and the CPU is given to the shorter process until it is finished.

Inter Process Communication (IPC)

Since processes frequently need to communicate with other processes, there is a need for well-structured communication among processes without using interrupts. IPC is a set of techniques for the exchange of data among multiple processes, which may be running on one or more computers connected by a network. IPC techniques are divided into methods for message passing, synchronization, shared memory, and remote procedure calls (RPC). The method of IPC used may vary based on the bandwidth and latency of communication between the processes, and the type of data being communicated. IPC may also be referred to as inter-thread communication or inter-application communication. It is a mechanism for processes to communicate and to synchronize their actions.
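A minimal message-passing sketch: a parent and a child process exchange data over a pipe (Python's multiprocessing.Pipe stands in for an OS-level pipe):

```python
# Minimal sketch of message-passing IPC: a parent and child process
# exchange data over a pipe instead of sharing memory.
from multiprocessing import Process, Pipe

def child(conn):
    msg = conn.recv()                     # blocks until the parent sends
    conn.send(f"child got: {msg}")
    conn.close()

if __name__ == "__main__":
    parent_end, child_end = Pipe()
    p = Process(target=child, args=(child_end,))
    p.start()
    parent_end.send("hello")
    print(parent_end.recv())              # -> "child got: hello"
    p.join()
```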

Process synchronization
Process synchronization is the task of organizing the access of several concurrent processes to shared (i.e. jointly used) resources without causing any conflicts. The shared resources are most often memory locations (shared data) or some hardware. Process synchronization can be divided into two subcategories:
o Synchronizing competing processes: Several processes compete for one exclusive resource. This is solved by one of the mutual exclusion mechanisms.
o Synchronizing cooperating processes: Several processes have to notify each other of their progress so that, for example, common results can be passed on. The producer-consumer problem is an example. This problem is often solved with semaphores or messaging.
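A sketch of that producer-consumer case solved with semaphores; threads stand in for the cooperating processes, and the buffer size of 4 is an arbitrary choice:

```python
# Minimal sketch of the producer-consumer problem with semaphores.
import threading
from collections import deque

buffer = deque()
empty = threading.Semaphore(4)     # free slots in the bounded buffer
full = threading.Semaphore(0)      # items available to consume
mutex = threading.Lock()           # mutual exclusion on the buffer itself

def producer():
    for item in range(8):
        empty.acquire()            # wait for a free slot
        with mutex:
            buffer.append(item)
        full.release()             # signal the consumer

def consumer():
    for _ in range(8):
        full.acquire()             # wait for an item
        with mutex:
            item = buffer.popleft()
        empty.release()            # free the slot
        print("consumed", item)

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```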

Race conditions

In operating systems, processes that are working together often share some common storage (main memory, a file, etc.) that each process can read and write. When two or more processes are reading or writing some shared data and the final result depends on who runs precisely when, we have a race condition.

Avoiding race conditions
The key to preventing trouble involving shared storage is to find some way to prohibit more than one process from reading and writing the shared data simultaneously. The part of the program where the shared memory is accessed is called the critical section.
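A small demonstration of a race condition on shared data, and of protecting the critical section with a lock (the iteration counts are arbitrary; the lost updates in the unsafe version are probabilistic):

```python
# Minimal sketch: two threads increment a shared counter. Without mutual
# exclusion updates can be lost; the locked version protects the critical section.
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    global counter
    for _ in range(n):
        tmp = counter          # read the shared value
        counter = tmp + 1      # write it back - another thread may have run in between

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:             # critical section: one thread at a time
            counter += 1

for fn in (unsafe_increment, safe_increment):
    counter = 0
    threads = [threading.Thread(target=fn, args=(100_000,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(fn.__name__, counter)   # unsafe may print less than 200000, safe prints 200000
```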

Deadlock
A set of processes is in a deadlock state if each process in the set is waiting for an event that can be caused only by another process in the set.

Conditions for Deadlock o Mutual exclusion o Hold and wait o No preemption o Circular wait

Deadlock Prevention
o Elimination of the Mutual Exclusion condition
o Elimination of the Hold and Wait condition
o Elimination of the No-preemption condition
o Elimination of the Circular Wait condition
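One common way to eliminate the Circular Wait condition is to impose a fixed global order on resource acquisition; a minimal sketch, with threads standing in for processes:

```python
# Minimal sketch of circular-wait prevention: both threads acquire the two
# locks in the same global order, so a hold-and-wait cycle cannot form.
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()

def transfer(name):
    # Always take lock_a before lock_b (a fixed ordering breaks circular wait).
    with lock_a:
        with lock_b:
            print(f"{name} holds both resources")

threads = [threading.Thread(target=transfer, args=(f"t{i}",)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```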

Virtual memory

Virtual memory is a hardware-supported technique whereby the system appears to have more memory than it actually does. This is done by time-sharing the physical memory and storing parts of memory on disk when they are not actively being used.

Cache memory

Cache memory is random access memory (RAM) that a computer microprocessor can access more quickly than it can access regular RAM. As the microprocessor processes data, it looks first in the cache memory; if it finds the data there (from a previous reading of data), it does not have to do the more time-consuming read from the larger main memory.

Paging
o Paging is an aspect of virtual memory addressing whereby relatively inactive pages can be temporarily removed from physical memory if necessary.
o If these pages have been modified, they must be saved to a temporary storage area on disk, called a paging file or swap space.
o The operation of writing one inactive page or a cluster of inactive memory pages to disk is called a page-out, and the corresponding operation of reading them in again later, when one of the pages is referenced, is called a page-in.

Paging operation
o The basic idea is to allocate physical memory to processes in fixed-size chunks called page frames.
o Inside the machine, the address space of an application is broken up into fixed-size chunks called pages.
o When a process generates an address, it is dynamically translated to the physical page frame which holds the data for that page.
o So, a virtual address now consists of two pieces: a page number and an offset within that page.
o To access a piece of data at a given address, the system automatically does the following:
o Extracts the page number.
o Extracts the offset.
o Translates the page number to a physical page frame id.
o Accesses the data at that offset in the physical page frame.

o The system performs the translation using a page table. The page table is a linear array, indexed by virtual page number, that gives the physical page frame containing that page. The translation steps are:
o Extract the page number.
o Extract the offset.
o Check that the page number is within the address space of the process.
o Look up the page number in the page table.
o Add the offset to the resulting physical page frame number.
o Access the memory location.
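A minimal sketch of these translation steps, assuming a 4 KB page size and a made-up linear page table:

```python
# Minimal sketch of virtual-to-physical translation with a linear page table
# (page size, table contents and the virtual address are illustrative).
PAGE_SIZE = 4096
page_table = [7, 3, None, 12]        # virtual page -> physical frame (None = not mapped)

def translate(virtual_address):
    page_number = virtual_address // PAGE_SIZE    # extract page number
    offset = virtual_address % PAGE_SIZE          # extract offset
    if page_number >= len(page_table):
        raise MemoryError("address outside process address space")
    frame = page_table[page_number]               # look up in the page table
    if frame is None:
        raise MemoryError("page fault: page not in physical memory")
    return frame * PAGE_SIZE + offset             # frame base + offset

print(hex(translate(0x1ABC)))        # virtual page 1, offset 0xABC -> frame 3 -> 0x3abc
```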

Virtual to Physical memory

Fragmentation

Fragmentation occurs in a dynamic memory allocation system when many of the free blocks are too small to satisfy any request.

External fragmentation: External fragmentation happens when a dynamic memory allocation algorithm allocates some memory and a small piece is left over that cannot be effectively used. If too much external fragmentation occurs, the amount of usable memory is drastically reduced. Total memory space exists to satisfy a request, but it is not contiguous.

Internal fragmentation: Internal fragmentation is the space wasted inside allocated memory blocks because of restrictions on the allowed sizes of allocated blocks. Allocated memory may be slightly larger than requested memory; this size difference is memory internal to a partition, but it is not being used.

Thrashing
o If the number of frames allocated to a low-priority process falls below the minimum number required by the computer architecture, then we must suspend the execution of this low-priority process.
o We should then page out all of its remaining pages and free all of its allocated frames. This provision introduces a swap-in, swap-out level of intermediate CPU scheduling.
o If a process does not have the number of frames it needs to support pages in active use, it will quickly page fault. The only option remaining for the process is to replace some active page with the page that requires a frame. However, since all of its pages are in active use, it must replace a page that will be needed again right away. Consequently, it quickly faults again and again, which means replacing pages that it must bring back in immediately. This high paging activity is called thrashing.
o The system can detect thrashing by evaluating the level of CPU utilization as compared to the level of multiprogramming. It can be eliminated by reducing the level of multiprogramming.

Real-Time Operating System (RTOS)

o Real-time computing (RTC) is the study of hardware and software systems which are subject to a "real-time constraint", i.e., operational deadlines from event to system response.
o A real-time system is one in which the correctness of the computations depends not only upon the logical correctness of the computation but also upon the time at which the result is produced. If the timing constraints of the system are not met, system failure is said to have occurred.

Networking

Q. What is the OSI model? List its layers.


OSI (Open Systems Interconnection) Model


Standards ensure that varying devices and products can communicate with each other over any network. This set of standards is called a model. The International Standards Organization (ISO) created an industry-wide model, or framework, for defining the rules networks should employ to ensure reliable communications. This model was termed OSI. The network model is broken into layers, with each layer having a distinctive job in the communication process.
o Grouping into layers reduces the complexity of implementing the network architecture.
o Provides compatibility and allows multi-vendor integration.
o Facilitates modularization and allows a developer to swap in changes at a particular layer without affecting the other layers.


OSI Model Overview

Application (Upper) Layers: Application, Presentation, Session
Data Flow Layers: Transport, Network, Data Link, Physical

OSI Model
(Figure: the seven-layer OSI stack connecting LAN 1 and LAN 2, top to bottom:)
o Application
o Presentation
o Session
o Transport
o Network
o Data Link
o Physical


Q. Explain the working of each layer.


o Application - File transfer, email, remote login
o Presentation - Text, sound; ASCII (syntax layer)
o Session - Establish / manage connection
o Transport - End-to-end control and error checking (ensure complete data transfer): TCP
o Network - Routing and forwarding; addressing: IP
o Data Link - Two-party communication: Ethernet
o Physical - How to transmit the signal; coding; hardware means of sending and receiving data on a carrier

The Application Layer


o Provides the user interface.
o Connects the user to the network.
o Provides file transfer service, mail service, etc.

The Presentation Layer

o Encodes and decodes data.
o Determines the format and structure of data.
o Compresses and decompresses data.
o Encrypts and decrypts data.

The Session Layer


o Establishes and maintains connections.
o Manages upper layer errors.
o Handles remote procedure calls.
o Synchronizes communicating nodes.


The Transport Layer


o Takes action to correct faulty transmissions.
o Controls the flow of data.
o Acknowledges successful receipt of data.
o Fragments and reassembles data.


The Network Layer


o Moves information to the correct address.
o Assembles and disassembles packets.
o Addresses and routes data packets.
o Determines the best path for moving data through the network.


The Data Link Layer


o Controls access to the communication channel.
o Controls the flow of data.
o Organizes data into logical frames.
o Identifies a specific computer on the network.
o Detects errors.


The Physical Layer


o Provides electrical and mechanical interfaces for a network.
o Specifies the type of media used to connect network devices.


Q. What is TCP/IP?


o The Transmission Control Protocol (TCP) is one of the core protocols of the Internet Protocol Suite.
o TCP is one of the two original components of the suite, complementing the Internet Protocol (IP), and therefore the entire suite is commonly referred to as TCP/IP.
o TCP is the protocol that major Internet applications rely on, applications such as the World Wide Web, e-mail, and file transfer.
o TCP provides a point-to-point channel for applications that require reliable communications.
o The Hypertext Transfer Protocol (HTTP) and the File Transfer Protocol (FTP) are examples of applications that require a reliable communication channel.
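A minimal sketch of TCP's reliable point-to-point channel: a tiny echo server and client on localhost (the port number 50007 is an arbitrary choice):

```python
# Minimal sketch: a TCP echo server and client talking over localhost.
import socket, threading

HOST, PORT = "127.0.0.1", 50007

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind((HOST, PORT))
srv.listen(1)

def echo_once():
    conn, _ = srv.accept()              # three-way handshake completes here
    with conn:
        conn.sendall(conn.recv(1024))   # echo the bytes back reliably

threading.Thread(target=echo_once, daemon=True).start()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"hello over TCP")
    print(cli.recv(1024))               # b'hello over TCP'
srv.close()
```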


Q. What is a connectionless and a connection-oriented protocol?


Connection-Oriented
o Connection-oriented means that when devices communicate, they perform handshaking to set up an end-to-end connection.
o Connection-oriented systems can only work in bi-directional communications environments.
o To negotiate a connection, both sides must be able to communicate with each other. This will not work in a unidirectional environment.

Connectionless
o Connectionless means that no effort is made to set up a dedicated end-to-end connection.


Q. What is the difference between a switch and a hub?


o A hub is a multiport repeater: it broadcasts information it receives on any port to all other ports, and hence is called non-intelligent or dumb.

o A hub works on the Physical layer whereas a switch works on the Data Link layer. A hub-based network is a single collision domain, whereas in a switch-based network the switch divides the network into multiple collision domains. A switch also maintains a MAC address table.


A simple example:
Hub - Think of a postman with a letter to deliver to a row of houses; none of the houses have numbers, so he has to visit each house and ask the owner if the letter is for them.
Switch - All the houses are numbered, so the postman knows where to go, and doesn't have to bother any other homeowners.

Q. What is DHCP?


The Dynamic Host Configuration Protocol (DHCP) is an automatic configuration protocol used on IP networks. Computers that are connected to IP networks must be configured before they can communicate with other computers on the network. DHCP allows a computer to be configured automatically, eliminating the need for intervention by a network administrator. It also provides a central database for keeping track of computers that have been connected to the network. This prevents two computers from accidentally being configured with the same IP address.

Q. What is the difference between a router and a gateway?


Difference between a Router and a Gateway
In simpler terms, a router is like an elevator in a building. It can take you to any floor [destination] and back again [source]. This would work with any routable protocol [TCP/IP, IPX, DECnet, ...].
Your first door to the elevator is your gateway. This is all your PC needs to know, since the router will take it from there and make sure it gets to where you want and back again. You can access the world by going through that first door [gateway].

Q. What is the PDU of the "Network layer" and the "Data Link layer"?


The PDU (Protocol Data Unit) for the Network layer is the "Packet", and the PDU for the Data Link layer is the "Frame".


Q. Why do we use a crossover cable to connect similar devices?


o For similar devices such as PC-to-PC, the PC's NIC uses pins 1,2 for transmission and pins 3,6 for reception. If we don't use a crossover cable, the two devices cannot transfer data.

o In the case of a switch/hub, its ports receive data on pins 1,2 and transmit on pins 3,6.

o That is why we use a straight-through cable for dissimilar hosts and a crossover cable for similar hosts.



Q. What are the associated TCP/IP protocols and their services?


Associated TCP/IP Protocols & Services

o HTTP - This protocol, the core of the World Wide Web, facilitates retrieval and transfer of hypertext (mixed media) documents. Stands for the HyperText Transfer Protocol.
o Telnet - A remote terminal emulation protocol that enables clients to log on to remote hosts on the network.
o SNMP - Used to remotely manage network devices. Stands for the Simple Network Management Protocol.
o DNS - Provides meaningful names like achilles.mycorp.com for computers, to replace numerical addresses like 123.45.67.89. Stands for the Domain Name System.
o SLIP/PPP - SLIP (Serial Line Internet Protocol) and PPP (Point-to-Point Protocol) encapsulate IP packets so that they can be sent over a dial-up phone connection to an access provider's modem.
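As a quick illustration of the DNS entry above, this sketch resolves a host name to a numerical address (it needs network access; example.com is just an illustrative name):

```python
# Minimal sketch of the DNS service: name -> numerical address.
import socket

print(socket.gethostbyname("example.com"))   # e.g. '93.184.216.34'
```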
