
PART-A

Q.1 Enlist a few objectives of the file management system.


Ans. A file is a named collection of related information recorded on secondary
storage. From the user's point of view, a file is the smallest allotment of logical
secondary storage. A file management system determines:
• The way a user or application may access files
• The file services provided, so that the programmer does not need to develop file
management software

Objectives:
• Meet the data management needs and requirements of the user
• Guarantee that the data in the file are valid
• Optimize performance
• Provide I/O support for a variety of storage device types
• Minimize or eliminate the potential for lost or destroyed data
• Provide a standardized set of I/O interface routines
• Provide I/O support for multiple users

Q.2 Discuss the type of directory structure implemented in Linux. Is it the same as
the one implemented in MS Windows? If not, what type of directory structure is
implemented in Windows?
Ans. Linux retains UNIX's standard file-system model: a single hierarchical
(tree-structured) directory rooted at /, into which every mounted file system is
grafted. In UNIX, a file does not have to be an object stored on the disk or fetched
over a network from a remote file server. Rather, UNIX files can be anything capable
of handling the input or output of a stream of data: device drivers can appear as
files, and inter-process communication channels or network connections also look
like files to the user.
The Linux kernel handles all these types of file by hiding the implementation
details of any single file type behind a layer of software, the virtual file
system (VFS).
Difference between the Linux and Windows directory structures:
• Linux mounts all file systems into one hierarchical tree with a single root (/).
• Windows also implements a tree-structured (hierarchical) directory, but it is not
identical to the Linux one: Windows keeps a separate directory tree for each
volume, and each tree is identified by a drive letter such as C: or D:.
Q.3. “MS DOS uses a simple and efficient disk-space allocation method called
FAT, which is a variation on Linked Allocation Method.” Discuss the variation
brought by FAT into the basic Linked allocation method?

Ans. An important variation on the linked allocation method is the use of a file
allocation table (FAT).
It is a simple but efficient method of disk-space allocation and is used by the
MS-DOS and OS/2 operating systems. A section of the disk at the beginning of each
partition is set aside to contain the table. The table has one entry for each disk
block and is indexed by block number. The FAT is used much as a linked list is: the
directory entry contains the block number of the first block of the file, and the
table entry indexed by that block number contains the block number of the next
block in the file. This chain continues until the last block, which has a special
end-of-file value as its table entry.
Unused blocks are indicated by a table value of 0. Allocating a new block to a file
is simply a matter of finding the first 0-valued table entry and replacing the
previous end-of-file value with the address of the new block; the 0 is then
replaced with the end-of-file value.
The FAT allocation scheme can result in a significant number of disk-head seeks
unless the FAT is cached: the disk head must move to the start of the partition to
read the FAT and find the location of the block itself, and in the worst case both
moves occur for each of the blocks. A benefit is that random-access time is
improved, because the disk head can find the location of any block by reading the
information in the FAT.
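The chaining described above can be sketched with a toy in-memory FAT (the block
numbers, table size, and end-of-file marker here are hypothetical, not the real
on-disk MS-DOS layout):

```python
EOF_MARK = -1  # special end-of-file table value (real FATs use reserved codes)
FREE = 0       # unused blocks are indicated by a 0 table value

# fat[i] holds the block number that follows block i in its file
fat = [FREE] * 16
# a file whose directory entry points at block 4; its chain is 4 -> 7 -> 2
fat[4], fat[7], fat[2] = 7, 2, EOF_MARK

def file_blocks(first_block):
    """Follow the FAT chain from the directory entry's first block to EOF."""
    blocks, b = [], first_block
    while b != EOF_MARK:
        blocks.append(b)
        b = fat[b]
    return blocks

def allocate_block(last_block):
    """Grow a file: find the first 0-valued entry and re-link the chain."""
    new = fat.index(FREE, 1)   # first free block (skip block 0 of the partition)
    fat[last_block] = new      # previous end-of-file entry now points at it
    fat[new] = EOF_MARK        # the new block becomes the end of the file
    return new

print(file_blocks(4))          # [4, 7, 2]
allocate_block(2)
print(file_blocks(4))          # [4, 7, 2, 1]
```

Note how growing the file touches only two table entries, which is what makes
allocation in FAT cheap.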
PART-B

Q.4. Differentiate between the concepts of Buffering, Caching and Spooling.


Ans.
BUFFERING:-
Buffering is a method of overlapping the I/O of a job with that same job's
computation. It temporarily stores input or output data in an attempt to better
match the speeds of two devices, such as a fast CPU and a slow disk drive. If, for
example, the CPU writes information to the buffer, it can continue its computation
while the disk drive stores the information. The operating system uses buffers in
main memory to hold disk data. These buffers are also used as a cache, to improve
the I/O efficiency for files that are shared by applications or that are being
written and reread rapidly.

SPOOLING:-
Spooling refers to the process of transferring data by placing it in a temporary
working area where another program may access it for processing at a later point
in time.
The most common spooling application is print spooling:
Documents formatted for printing are stored in a buffer (usually an area on a
disk) by a fast processor, then retrieved and printed by a relatively slower
printer at its own rate. As soon as the fast processor has written the document to
the spool device, it has finished with the job and is fully available for other
processes.
Without spooling, a word processor would be unable to continue until printing
finished. Without spooling, most programs would be relegated to patterns of fast
processing and long waits, an inefficient paradigm.

CACHE:-
Small memories on or near the CPU can operate much faster than the much larger
main memory. Cache memory is a component that transparently stores data so that
future requests for that data can be served faster. A cache is a region of fast
memory that holds copies of data; access to the cached copy is more efficient than
access to the original. For instance, the instructions of the currently running
process are stored on disk, cached in physical memory, and copied again into the
CPU's secondary and primary caches.
The difference between a buffer and a cache is that a buffer may hold the only
existing copy of a data item, whereas a cache, by definition, just holds a copy,
on faster storage, of an item that resides elsewhere. For instance, to preserve
copy semantics and to enable efficient scheduling of disk I/O, the operating
system uses buffers in main memory to hold disk data, and the same buffers may
also serve as a cache.

Cache and spool also differ: a spool is a buffer that holds output for a device,
such as a printer, that cannot accept interleaved data streams. Although a printer
can serve only one job at a time, several applications may wish to print their
output concurrently, without having their current output mixed together. The
operating system solves this problem by intercepting all output to the printer:
each application's output is spooled to a separate disk file.
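The printer-spooling behaviour can be sketched with a queue and a worker thread
(the document names are hypothetical, and a real spooler would write spool files
to disk rather than keep them in memory):

```python
import queue
import threading
import time

spool = queue.Queue()   # the spool: whole documents, never interleaved
printed = []

def printer_daemon():
    """The 'slow' printer drains the spool at its own rate, one job at a time."""
    while True:
        doc = spool.get()
        if doc is None:          # sentinel: shut the printer down
            break
        time.sleep(0.01)         # simulate the slow device
        printed.append(doc)

t = threading.Thread(target=printer_daemon)
t.start()

# Applications "print" by enqueueing a whole document and returning at once;
# they are free for other work long before the printer finishes.
for doc in ["report.txt", "letter.txt"]:
    spool.put(doc)
spool.put(None)
t.join()
print(printed)   # ['report.txt', 'letter.txt']
```

The queue is what keeps the two jobs' output from being mixed: each document
enters the spool as one unit.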

Q.5. Compute the total head movement using FCFS and SSTF algorithms
(assuming the head is initially positioned at track 26) for the following
disk accesses (in order): 26,37,100,14,88,33,99,12.

Ans.
FCFS ALGORITHM: the requests are serviced in arrival order, 26, 37, 100, 14, 88,
33, 99, 12 (the head is already at track 26, so the first request costs no
movement).

BETWEEN 26 AND 37: 37-26=11
BETWEEN 37 AND 100: 100-37=63
BETWEEN 100 AND 14: 100-14=86
BETWEEN 14 AND 88: 88-14=74
BETWEEN 88 AND 33: 88-33=55
BETWEEN 33 AND 99: 99-33=66
BETWEEN 99 AND 12: 99-12=87

SO TOTAL HEAD MOVEMENTS IN FCFS ARE
(11+63+86+74+55+66+87) = 442

SSTF ALGORITHM: the nearest pending request is always serviced next, giving the
service order 26, 33, 37, 14, 12, 88, 99, 100.

BETWEEN 26 AND 33: 33-26=7
BETWEEN 33 AND 37: 37-33=4
BETWEEN 37 AND 14: 37-14=23
BETWEEN 14 AND 12: 14-12=2
BETWEEN 12 AND 88: 88-12=76
BETWEEN 88 AND 99: 99-88=11
BETWEEN 99 AND 100: 100-99=1

SO TOTAL HEAD MOVEMENTS IN SSTF ALGORITHM ARE
(7+4+23+2+76+11+1) = 124
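The two totals can be checked mechanically; a minimal sketch of both schedulers:

```python
def fcfs(head, requests):
    """First-come first-served: service requests in arrival order."""
    total = 0
    for track in requests:
        total += abs(track - head)
        head = track
    return total

def sstf(head, requests):
    """Shortest-seek-time-first: always service the nearest pending track."""
    pending, total = list(requests), 0
    while pending:
        nearest = min(pending, key=lambda t: abs(t - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

requests = [26, 37, 100, 14, 88, 33, 99, 12]
print(fcfs(26, requests))   # 442
print(sstf(26, requests))   # 124
```

Both functions agree with the hand calculation above: 442 for FCFS and 124 for
SSTF.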

Q.6. Briefly demonstrate the different levels of RAID structure. Also discuss why
RAID systems are considered more reliable (and, at the same time, expensive).
Ans. Mirroring in mass-storage structures provides high reliability, but it is
expensive. Striping provides high data-transfer rates, but it does not improve
reliability. Numerous schemes to provide redundancy at lower cost by combining
disk striping with "parity" bits have been proposed. These schemes have different
cost-performance trade-offs and are classified into levels called RAID LEVELS.
RAID systems are considered more reliable because the redundant information lets
the array survive, and rebuild after, the failure of a disk; they are expensive
because extra disks are needed to hold that redundancy.

THE FOLLOWING ARE THE RAID LEVELS:

RAID LEVEL 0 (NON-REDUNDANT STRIPING): RAID level 0 refers to disk arrays with
striping at the level of blocks, but without any redundancy (such as mirroring or
parity bits).

RAID LEVEL 1 (MIRRORED DISKS): This level refers to disk mirroring.


RAID LEVEL 2 (MEMORY-STYLE ERROR-CORRECTING CODES): This is also known as the
memory-style error-correcting-code (ECC) organization. Memory systems have long
implemented error detection using parity bits. The idea of ECC can be used
directly in disk arrays via striping of bytes across disks.

RAID LEVEL 3 (BIT-INTERLEAVED PARITY): This level improves on level 2 by noting
that, unlike memory systems, disk controllers can detect whether a sector has been
read correctly, so a single parity bit can be used for error correction as well as
for detection.

RAID LEVEL 4 (BLOCK-INTERLEAVED PARITY): This level uses block-level striping, as
in RAID 0, and in addition keeps a parity block on a separate disk for the
corresponding blocks from N other disks. If one of the disks fails, the parity
block can be used with the corresponding blocks from the other disks to restore
the blocks of the failed disk.
RAID LEVEL 5 (BLOCK-INTERLEAVED DISTRIBUTED PARITY): This level differs from level
4 by spreading data and parity among all N+1 disks, rather than storing data on N
disks and parity on one disk. For each block, one of the disks stores the parity
and the others store data.
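The block-parity idea behind RAID levels 4 and 5 can be sketched as a bytewise
XOR (the disk contents below are hypothetical):

```python
def xor_blocks(blocks):
    """Bytewise XOR of equal-sized blocks; this is the RAID parity function."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# Corresponding blocks on three data disks, plus their parity block.
data = [b"disk0data", b"disk1data", b"disk2data"]
parity = xor_blocks(data)

# Suppose disk 1 fails: XOR-ing the surviving blocks with the parity block
# reconstructs the lost block exactly, because d0 ^ d2 ^ (d0 ^ d1 ^ d2) = d1.
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt == data[1])   # True
```

This is also why RAID tolerates only one failed disk per parity group: with two
blocks missing, the XOR equation has two unknowns.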

RAID LEVEL 0+1: This is a combination of RAID levels 0 and 1: RAID 0 provides the
performance, while RAID 1 provides the reliability. It generally provides better
performance than RAID 5.
