- A set of hosts and devices that are connected into a single loop.
- A cost-effective way to connect up to 126 devices and hosts into a single network.
- Each device on the loop must obtain an Arbitrated Loop Physical Address (AL_PA).

[Figure: an arbitrated loop connecting Host A and Host B with storage Devices C, D and E]
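Arbitration on the loop is priority-based: when several ports contend, the one with the numerically lowest AL_PA wins, and address 0x00 (the highest priority) is reserved for the fabric's FL_Port, which is one reason the loop tops out at 126 ordinary devices. A minimal sketch of that rule, with made-up addresses:

```python
# Toy model of FC-AL arbitration: the contending port with the
# numerically lowest AL_PA has the highest priority and wins the loop.

def arbitrate(contending_al_pas):
    """Return the AL_PA that wins arbitration (lowest value wins)."""
    if not contending_al_pas:
        raise ValueError("no ports are arbitrating")
    return min(contending_al_pas)

# A host at AL_PA 0x01 competes with two devices and wins.
assert arbitrate([0xE8, 0x01, 0x72]) == 0x01
```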
- Devices are connected in a many-to-many topology using switches.

[Figure: a switched fabric connecting hosts and devices, including Device C and Device D]
- A switch is a more complex storage fabric device that provides the full Fibre Channel bandwidth to each port independently.
- Switches allow ports to be configured in either an arbitrated loop or a switched-mode fabric.
- Each port runs at full Fibre Channel speed (1 Gbit/sec or 2 Gbit/sec today).
- Switches support 16, 32, 64 or even 128 ports.
- Several manufacturers such as Brocade and McData provide a range of switches for different deployment configurations.
- Switches can be interconnected to provide a scalable storage fabric.
[Figure: a core backbone switch fabric provides a very high-performance storage area interconnect; edge switches connect departmental storage and servers (datacenter servers and data stores) into a common fabric]
- Ideally, all devices and hosts would be SAN-aware and all would interoperate in a single fabric.
- In practice, many hosts and storage components are already deployed using different interconnect technologies.
- To allow these types of devices to play in a storage fabric environment, a wide variety of bridge and router devices allow the technologies to interoperate:
  SCSI-to-Fibre bridges and routers allow parallel SCSI devices (SCSI-2 and SCSI-3) to connect into a switched SAN fabric.
- iSCSI is a device interconnect that uses IP as the communications mechanism, layering the SCSI protocol on top of IP; it allows devices to connect into a switched SAN fabric (SCSI over IP).
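Layering works because the SCSI command set itself is transport-neutral. As an illustration (a sketch that builds only the SCSI command block, not a full iSCSI PDU), the snippet below packs a READ(10) CDB in Python; an iSCSI session would carry these same 10 bytes in a command PDU over TCP/IP, while a Fibre Channel fabric would carry them in an FCP frame:

```python
import struct

def read10_cdb(lba: int, blocks: int) -> bytes:
    """Pack a SCSI READ(10) command descriptor block.

    Layout: opcode 0x28, flags, 4-byte logical block address,
    group number, 2-byte transfer length, control byte.
    """
    return struct.pack(">BBIBHB", 0x28, 0x00, lba, 0x00, blocks, 0x00)

cdb = read10_cdb(lba=2048, blocks=8)
assert len(cdb) == 10 and cdb[0] == 0x28
```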
- SCSI-to-Fibre Channel bridge:

[Figure: a storage controller with CPU, memory and cache sits between a Fibre Channel switch and shelves of physical disk drives; the storage controller exposes logical devices to the hosts]
- All areas must be considered, including:
  No single point of failure (cables or components such as switches, HBAs or storage controllers).
  Highly available storage controller solutions from storage vendors have redundant components and can tolerate many different kinds of failures.
  Transparent and dynamic path detection and failover at the host.
  Built-in hot-swap and hot-plug for all components, from HBAs to switches and controllers.
- Standard network topology designs:
  Multiple independent fabrics
  Federated fabrics
  Core backbone
- Each device or host is connected to multiple independent fabrics.

[Figure: a host connected to two switches, one per independent fabric]
- Multiple switches are connected together into a federated fabric.
- Individual hosts and devices are connected to at least two switches.

[Figure: inter-switch links join the switches into a single highly available fabric]
- The core backbone is a way to scale out a federated fabric environment.

[Figure: a core backbone switch fabric provides a very high-performance storage area interconnect; edge switches connect departmental storage and servers (datacenter servers and data stores) into a common fabric]
[Figure: zoning example — Hosts A, B and C and Storage 1 and 2 attached through a switch, partitioned into zones]
- Storage controller in multiple zones:

[Figure: Storage 1 is a member of both Zone A and Zone B; individual LUNs in Zone B are protected using the fine-granularity security provided by the storage controller]
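Zoning behaves like a membership test: an initiator can reach a target only if some zone contains both. A toy sketch, with zone and device names that echo the figure but are otherwise illustrative:

```python
# Toy zoning table. A host can address a storage port only when both
# appear together in at least one zone; names are illustrative.
zones = {
    "ZoneA": {"HostA", "Storage1"},
    "ZoneB": {"HostB", "HostC", "Storage1"},
}

def can_access(initiator: str, target: str) -> bool:
    return any(initiator in members and target in members
               for members in zones.values())

assert can_access("HostA", "Storage1")   # Storage1 is in both zones
assert not can_access("HostA", "HostB")  # no zone contains both
```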
- Storage controllers typically provide LUN-level access controls that enable an administrator to restrict access to a given LUN to one or more hosts.
- LUN masking is a host-based mechanism that "hides" specific LUNs from applications.
- LUN masking is a policy-driven software security and access control mechanism enforced at the host.
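Since LUN masking is policy-driven software at the host, it can be pictured as a filter applied before LUNs are surfaced to applications. A sketch with invented host names and LUN numbers:

```python
# Host-side LUN masking: a policy maps each host to the LUNs it may
# see; everything else is hidden from applications on that host.
mask_policy = {
    "hostA": {0, 1},
    "hostB": {2},
}

def visible_luns(host: str, discovered_luns) -> list:
    """Filter the LUNs reported by the fabric down to the allowed set."""
    allowed = mask_policy.get(host, set())
    return sorted(lun for lun in discovered_luns if lun in allowed)

assert visible_luns("hostA", range(4)) == [0, 1]
assert visible_luns("hostC", range(4)) == []  # unlisted hosts see nothing
```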
- Different vendors provide a wide range of tools for:
  setting up
  configuring
  monitoring
  managing
- The logical devices presented by the controller to the storage fabric are some composite of the real physical devices in the storage cabinet.
- Logical devices can be materialized from the storage pool with specific attributes such as "must survive a single failure" or "have xyz performance characteristics".
- The storage controller is then free to store the data associated with the logical devices anywhere.
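The request/placement split described above can be sketched as a tiny allocator: the administrator states attributes, and the controller picks any pool that satisfies them. Pool names and sizes here are invented:

```python
# Toy storage virtualization: logical devices are materialized from
# whichever pool satisfies the requested attributes; placement of the
# actual blocks is the controller's business, not the requester's.
pools = [
    {"name": "mirrored",      "survives_failure": True,  "free_gb": 500},
    {"name": "non-redundant", "survives_failure": False, "free_gb": 2000},
]

def materialize(size_gb: int, must_survive_failure: bool) -> dict:
    for pool in pools:
        if pool["free_gb"] >= size_gb and (
                pool["survives_failure"] or not must_survive_failure):
            pool["free_gb"] -= size_gb
            return {"size_gb": size_gb, "pool": pool["name"]}
    raise RuntimeError("no pool satisfies the requested attributes")

lun = materialize(100, must_survive_failure=True)
assert lun["pool"] == "mirrored"
```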
[Figure: storage virtualization. The controller builds up pools of storage blocks with various attributes (e.g. a mirrored pool and a non-redundant RAID-5 pool); the backing store for these pools is the physical disk drives in the storage cabinet. Logical disks or LUNs are carved out by the system administrator as required, and the controller takes care of placement. The LUNs exposed to hosts appear as disk drives in Disk Administrator and can be combined and used by a volume manager to host file systems that applications then use. Multiple independent LUNs can be created that map to the same blocks as another LUN, and snapshots can be created that share data with the original disk and store only differential changes.]
- Targeted Reset
- Single Storage Bus Configurations
- Shared Disk versus Shared-Nothing
- SAN versus NAS
- Devices can be failed over in the event of failures.
- Server clusters implement a challenge/response mechanism that can detect dead or hung server nodes.
- A hung node may not have crashed, so the storage fabric is unaware that the server is not responding.
- Disk reservations are periodically broken by other nodes in the cluster using SCSI bus reset commands.
- This feature requires no administration to enable it; if the device driver supports the targeted reset functions, they are used automatically.
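The interplay of reservations and resets can be modelled as a toy state machine (a sketch only; the real cluster uses SCSI reserve/release plus bus or targeted resets driven by a timer):

```python
# Toy disk-reservation challenge/response. A healthy owner re-reserves
# the disk each round; a challenger breaks the reservation (the real
# cluster issues a SCSI bus reset or targeted reset) and, if the owner
# fails to re-arm it, takes ownership.
class Disk:
    def __init__(self):
        self.owner = None

    def reserve(self, node: str) -> bool:
        if self.owner in (None, node):
            self.owner = node
            return True
        return False               # someone else holds the reservation

    def reset(self):
        self.owner = None          # reservation broken by the reset

disk = Disk()
assert disk.reserve("node1")       # node1 owns the quorum disk
assert not disk.reserve("node2")   # challenge fails while node1 is alive
disk.reset()                       # node1 hung and never re-reserved
assert disk.reserve("node2")       # node2 takes over the disk
```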
- In a SAN environment, the goal is to centralize all storage into a single fabric accessible through a single port.
- In Windows Server 2003, Cluster server has a switch that, when enabled, allows any disk on the system, regardless of the bus it is on, to be eligible as a cluster-managed disk.
- This feature is enabled by setting the following registry key:
  HKLM\SYSTEM\CurrentControlSet\Services\ClusSvc\Parameters\ManageDisksOnSystemBuses = 0x01
- The feature is gated behind a registry key to ensure that it is not accidentally enabled by customers who do not understand the implications of this configuration.
- A single storage bus configuration MUST have device drivers that support the targeted reset functionality previously defined.
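The key can also be set programmatically; a sketch using Python's standard winreg module (Windows-only and must run elevated; the path and value are exactly those given above). This is a configuration fragment rather than portable code:

```python
# Sketch: set ManageDisksOnSystemBuses = 0x01 via the Windows registry
# API. Only meaningful on a Windows Server cluster node, run elevated.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Services\ClusSvc\Parameters"

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH,
                        0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "ManageDisksOnSystemBuses", 0,
                      winreg.REG_DWORD, 0x01)
```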
- Physical view of cluster topologies:

[Figure: physical view — in both topologies the cluster disks are physically attached to a common infrastructure, such as a shared storage fabric]
- Application view of cluster topologies:

[Figure: shared-nothing cluster — only the application on one node can access a given disk at any point in time; other cluster nodes have a path to the disk but do not use it until the application is failed over. Shared-disk cluster — applications running on both nodes can simultaneously access the data on the same disk]
- Network attached storage (NAS) is built using standard network components such as Ethernet or other LAN technologies.
- The application servers access storage using file system functions such as open file, read file, write file, close file, etc.
- These higher-level functions are encapsulated in protocols such as CIFS, NFS or AppleShare and run across standard IP-based connections.
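The distinction between file-level access (the NAS model) and block-level access (the SAN model) can be demonstrated with a short sketch, where a temporary file stands in for both the exported file share and the raw disk:

```python
import os
import tempfile

# File-level access (NAS model): the client names a file and calls
# open/read/write/close; the server's file system manages the blocks.
path = os.path.join(tempfile.mkdtemp(), "report.txt")
with open(path, "w") as f:
    f.write("quarterly numbers")
with open(path) as f:
    assert f.read() == "quarterly numbers"

# Block-level access (SAN model): the client addresses raw bytes by
# offset on what it sees as a local disk; here the same file stands in
# for the block device.
with open(path, "rb") as dev:
    dev.seek(10)                     # byte offset 10 on the "device"
    assert dev.read(7) == b"numbers"
```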
[Figure: application servers access NAS appliances over an Ethernet LAN]

- Hybrid NAS and SAN solution:

[Figure: a NAS head serves file protocols (e.g. NFS/CIFS) over the Ethernet LAN while its back-end storage is attached through a storage fabric]
                       SAN                                NAS
Application server     Block-level access                 File-level access
access methods
Communication          SCSI over Fibre Channel            CIFS, NFS, AppleShare
protocol                                                  (over IP)
Network physical       Typically storage-specific         General-purpose LAN,
technology             (e.g. Fibre Channel) but may       e.g. Gigabit Ethernet
                       be high-speed Ethernet
Example storage        Compaq StorageWorks SAN            Network Appliance NetApp
vendors                family, EMC Symmetrix              Filers, Maxtor NAS xxxx,
                                                          Compaq TaskSmart N-series