
m What is a cluster
m Server Clusters: Storage Area Networks
m Management
m Cluster Service Features in Windows Server 2003
m Practice: Using Microsoft Virtual Server 2005 to Create and Configure a Two-Node Microsoft Windows Server 2003 Cluster
m A cluster is a group of computers (called nodes) that function as a single system to provide high availability and fault tolerance for applications or services
m Windows Server 2003 servers can participate in a cluster configuration through the use of the Cluster service
m All nodes of the cluster use a shared disk: an external disk or disk subsystem that is accessible to all nodes through SCSI (2 nodes) or Fibre Channel (more than 2 nodes)
All data is stored on the shared disk or an external disk subsystem (e.g. Exchange databases)
m A cluster can be
Active/Active
Active/Passive (Failover, Failback)
m The maximum number of cluster nodes in Windows Server 2003 Enterprise and Datacenter editions is 8
m Every cluster node must have two network interfaces
One network interface for cluster communication, called the private NIC
One network interface for client access, called the public NIC
m The private NIC is used for the heartbeat communication (cluster communication)
A heartbeat is much like a ping; it is used to test whether the other cluster node is still available
If the heartbeat fails, the failover process occurs
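To make the failover trigger concrete, here is a minimal Python sketch of the heartbeat loop described above; the probe interval, the miss threshold, and the two callbacks are illustrative assumptions, not the actual Cluster service internals.

import time

HEARTBEAT_INTERVAL = 1.2   # seconds between probes over the private NIC (assumed value)
MISSED_LIMIT = 3           # consecutive misses before declaring the partner dead (assumed value)

def monitor(partner_alive, start_failover):
    # partner_alive() stands in for the ping; start_failover() takes over the resources
    missed = 0
    while True:
        if partner_alive():
            missed = 0                  # a healthy heartbeat resets the counter
        else:
            missed += 1
            if missed >= MISSED_LIMIT:
                start_failover()        # partner presumed down: begin failover
                return
        time.sleep(HEARTBEAT_INTERVAL)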
m A storage area network (SAN) is a set of interconnected devices (e.g. disks and tapes) and servers connected to a common communication and data transfer infrastructure such as fibre channel
m Storage fabric: the common communication and data transfer mechanism for a given deployment
m Fibre Channel Topologies
m Host Bus Adapters
m Hubs, Switches, Routers and Bridges
m Storage Components
m Highly Available Solutions
m Point-to-point
m Fibre Channel Arbitrated Loop (FC-AL)
m Switched Fibre Channel Fabrics (FC-SW).
m A simple way to connect two (and only two)
devices directly together.
m It is the fibre channel equivalent of direct
attached storage (DAS)
[Diagram: a host connected point-to-point to a storage device]
m A set of hosts and devices that are connected into a single loop
m A cost-effective way to connect up to 126 devices and hosts into a single network
m Each device on the loop must obtain an Arbitrated Loop Physical Address (AL_PA)

[Diagram: Host A, Host B and Devices C, D, E connected in a single arbitrated loop]
m Devices are connected in a many-to-many topology using fibre channel switches
m When a host or device communicates with another host or device, the switches route the frames between source and target, so each connection receives the full fibre channel bandwidth with no shared media
[Diagram: Hosts A-D connected through switches (the fibre channel fabric) to Devices E-I]
m FC-AL versus Switched Fabric
FC-AL pros:
Low cost
Loops are easily expanded and combined, with up to 126 hosts and devices
Easy for vendors to develop
FC-AL cons:
Difficult to deploy
Maximum 126 devices
Devices share media, thus lower overall bandwidth
Switched Fabric pros:
Easy to deploy
Supports 16 million hosts and devices
Devices communicate at full wire speed, no shared media
Switches provide fault isolation and re-routing
Switched Fabric cons:
Difficult for vendors to develop
Interoperability issues between components from different vendors
Switches can be expensive
m An HBA is an interface card that resides inside a server or a host computer
m It is the functional equivalent of the NIC in a traditional Ethernet network
m All traffic to the storage fabric or loop goes through the HBA
m HBAs, with the exception of older Compaq cards and early Tachyon-based cards, have supported both FC-AL and fabric since 1999
m Hubs
m Switches
m Routers and Bridges
m Hubs are the simplest form of fibre channel device; they are used to connect devices and hosts into arbitrated loop configurations
m Hubs have 4, 8, 12 or 16 ports, allowing up to 16 devices and hosts to be attached; the bandwidth on a hub is shared (half-duplex)
[Diagram: Host A, Host B, Device C and Device D attached to a fibre channel hub]
m A switch is a more complex storage fabric device that provides the full fibre channel bandwidth to each port independently
m Switches allow ports to be configured in either an arbitrated loop or a switched fabric mode
m Each port runs at full fibre channel speed (1 Gbit/s or 2 Gbit/s today)
m Switches support 16, 32, 64 or even 128 ports
m Several manufacturers, such as Brocade and McData, provide a range of switches for different deployment configurations
m Switches can be interconnected to provide a scalable storage fabric
[Diagram: a core backbone switch fabric provides a very high-performance storage area interconnect; edge switches connect departmental storage and servers into a common fabric]
m Ideally, all devices and hosts would be SAN-aware and would interoperate in a single fabric
m In practice, many hosts and storage components are already deployed using different interconnect technologies
m To allow these devices to participate in a storage fabric environment, a wide variety of bridge and router devices let the technologies interoperate
SCSI-to-fibre bridges and routers allow parallel SCSI devices (SCSI-2 and SCSI-3) to connect into a switched SAN fabric
m iSCSI is a device interconnect that uses IP as the communications mechanism, layering the SCSI protocol on top of IP (SCSI over IP)
m SCSI-to-Fibre Channel bridge
[Diagram: parallel SCSI devices attached through a SCSI-to-Fibre Channel bridge to the storage fabric]
m Storage devices such as disk and tape are attached to the storage fabric using a storage controller, e.g. an EMC Symmetrix or a Compaq StorageWorks RAID controller
m IBM would refer to these types of components as Fibre RAID controllers
[Diagram: disk and tape devices attached to the storage fabric through a storage controller]
m A storage controller is a box that houses a set of
disks and provides a single (potentially
redundant and highly available) connection to
a SAN fabric
m Disks in this type of controller appear as
individual devices that map directly to the
individual spindles housed in the controller
m Controllers offer a wide variety of RAID levels
such as RAID 1, RAID 5, RAID 0+1
m The controller presents a virtual view of highly available storage devices to the hosts, called logical units (LUNs)
[Diagram: inside a storage controller: CPU, memory and redundant controller boards on a backplane connect the fibre channel fabric ports to shelves of physical disk drives; the physical drives back the logical devices the controller exposes to the storage fabric]
m All areas must be considered, including:
No single point of failure (cables or components such as switches, HBAs or storage controllers)
Highly available storage controller solutions from storage vendors, with redundant components that can tolerate many different kinds of failures
Transparent and dynamic path detection and failover at the host
Built-in hot-swap and hot-plug for all components, from HBAs to switches and controllers
m Standard network topology designs:
Multiple independent fabrics
Federated fabrics
Core Backbone
m Each device or host is connected to multiple fabrics
[Diagram: hosts and devices connected to two independent switch fabrics]
m Multiple switches are connected together
m Individual hosts and devices are connected to at least two switches
[Diagram: inter-switch links join the fabric switches into a single highly available fabric]
m The core backbone is a way to scale out a federated fabric environment
[Diagram: a core backbone switch fabric provides a very high-performance storage area interconnect; edge switches connect departmental storage and servers into a common fabric]
m Zoning
m LUN Masking
m Management Tools
m Virtualized View of Storage
m Zoning is a mechanism, implemented at the switch level, that provides an isolation boundary
m A port (either a host adapter port or a storage controller port) can be configured as part of a zone; only ports in a given zone can communicate with other ports in that zone
m Access control is implemented by the switches in the fabric, so a host adapter cannot spoof the zones that it is in and gain access to data for which it has not been configured
m Zoning is not only a security feature; it also limits the traffic flow within a given SAN environment
[Diagram: Zone A contains Host A, Host B and Storage 1; Zone B contains Host C and Storage 2; all ports attach to the same switch]
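The access rule just described, i.e. two ports may communicate only if they share a zone, can be sketched as the membership check below; the zone names and ports are taken from the diagram.

zones = {
    "Zone A": {"Host A", "Host B", "Storage 1"},
    "Zone B": {"Host C", "Storage 2"},
}

def can_communicate(port1, port2):
    # The fabric permits traffic only between ports that share at least one zone
    return any(port1 in members and port2 in members for members in zones.values())

assert can_communicate("Host A", "Storage 1")      # same zone: allowed
assert not can_communicate("Host C", "Storage 1")  # no common zone: blocked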
m A storage controller can be placed in multiple zones
[Diagram: the storage controller sits in both Zone A and Zone B; individual LUNs are protected using the fine-granularity security provided by the storage controller]
m Storage controllers typically provide LUN-level
access controls that enable an administrator to
restrict access to a given LUN to one or more
hosts
m LUN masking is a host-based mechanism that "hides" specific LUNs from applications
m LUN masking is a policy-driven software security and access control mechanism enforced at the host
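A minimal sketch of that host-side policy: a mask records which LUNs each host may surface, and discovery filters out everything else. The host names and LUN numbers are illustrative.

visible_luns = {     # policy: which LUNs each host is allowed to see
    "node1": {0, 1},
    "node2": {0, 2},
}

def discovered_luns(host, all_luns):
    # Only unmasked LUNs are surfaced to volume managers and applications
    return sorted(lun for lun in all_luns if lun in visible_luns.get(host, set()))

print(discovered_luns("node1", range(4)))  # [0, 1]: LUNs 2 and 3 stay hidden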
m Different vendors provide a wide range of tools for:
Setting up
Configuring
Monitoring
Managing
m Virtualized view of storage
m The logical devices presented by the controller
to the storage fabric are some composite of the
real physical devices in the storage cabinet
m Logical devices can be materialized from that storage pool with specific attributes, such as "must survive a single failure" or "have xyz performance characteristics"
m The storage controller is then free to store the data associated with the logical devices anywhere
[Diagram: storage virtualization inside the controller]
LUNs exposed to hosts appear as disk drives in Disk Administrator and can then be combined and used by a volume manager to host file systems that applications use
Multiple independent LUNs can be created that map to the same blocks as another LUN
Snapshots can be created that share data with the original disk and store only the differential changes
Logical disks or LUNs are carved out by the system administrator as required; the controller takes care of the mapping
The controller builds up pools of storage blocks with various attributes (mirrored, RAID-5, non-redundant); the backing store for these pools is the physical disk drives in the storage cabinet
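To make the pooling idea concrete, here is a sketch of how a controller might map a logical block address (LBA) onto physical spindles for a simple striped pool; the stripe size and spindle count are illustrative, and real controllers use far richer layouts (mirroring, RAID-5 parity, snapshots).

STRIPE_BLOCKS = 128   # blocks per stripe unit (assumed)
SPINDLES = 4          # physical disks backing the pool (assumed)

def map_lba(lba):
    # Translate a logical block address to (spindle, physical block offset)
    stripe_unit = lba // STRIPE_BLOCKS       # which stripe unit the LBA falls in
    spindle = stripe_unit % SPINDLES         # stripe units rotate across spindles
    offset = (stripe_unit // SPINDLES) * STRIPE_BLOCKS + lba % STRIPE_BLOCKS
    return spindle, offset

print(map_lba(0))    # (0, 0)
print(map_lba(130))  # (1, 2): the second stripe unit lands on spindle 1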
m Targeted Reset
m Single Storage Bus Configurations
m Shared Disk Versus Shared-Nothing
m SAN Versus NAS
m Devices can be failed over in the event of failures
m Server clusters implement a challenge/response mechanism that can detect dead or hung server nodes
m A hung node may not have crashed, so the storage fabric is unaware that the server is not responding
m Reservations are periodically broken by other nodes in the cluster using SCSI bus reset commands
m This feature requires no administration to enable it; if the device driver supports the targeted reset functions, they are used automatically
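A simplified sketch of the challenge/response sequence: a challenger breaks the disk reservation with a reset, then waits a grace period during which a healthy owner would re-reserve. The class, method names and timing are illustrative stand-ins, not the real SCSI commands or Cluster service constants.

import time

class SharedDisk:
    def __init__(self):
        self.owner = None
    def reserve(self, node):   # stand-in for a SCSI reserve command
        if self.owner in (None, node):
            self.owner = node
            return True
        return False
    def reset(self):           # stand-in for a bus or targeted reset
        self.owner = None

def challenge(disk, challenger, grace=2.0):
    # Break the reservation, then give a live owner time to defend it
    disk.reset()
    time.sleep(grace)                # a healthy owner re-reserves during this window
    return disk.reserve(challenger)  # succeeds only if the owner was dead or hung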
m In a SAN environment, the goal is to centralize all
storage into a single fabric accessible through a single
port
m In Windows Server 2003, the Cluster service has a switch that, when enabled, allows any disk on the system, regardless of the bus it is on, to be eligible as a cluster-managed disk
m This feature is enabled by setting the following registry value:
HKLM\SYSTEM\CurrentControlSet\Services\ClusSvc\Parameters\ManageDisksOnSystemBuses = 0x01
m The feature is gated behind a registry key to ensure that it is not accidentally enabled by customers who do not understand the implications of this configuration
m A single storage bus configuration MUST have device
drivers that support the targeted reset functionality
previously defined
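For reference, a minimal sketch (Python's standard winreg module, Windows only, run elevated) of setting the registry value shown above; on a real cluster this should only be done once the targeted-reset requirement just described is met.

import winreg

path = r"SYSTEM\CurrentControlSet\Services\ClusSvc\Parameters"
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, path, 0,
                        winreg.KEY_SET_VALUE) as key:
    # DWORD 0x01 makes disks on any bus eligible as cluster-managed disks
    winreg.SetValueEx(key, "ManageDisksOnSystemBuses", 0, winreg.REG_DWORD, 0x01)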
m Physical view of cluster topologies
[Diagram: in a shared-nothing cluster each node has its own disks; in a shared-disk cluster the disks are physically attached to a common infrastructure such as fibre channel]
m Application view of cluster topologies
[Diagram: in a shared-nothing cluster, only the application on one node can access a given disk at any point in time; the other cluster nodes have a path to the disk but do not use it until the application is failed over. In a shared-disk cluster, applications running on both nodes can simultaneously access the data on the same disk]
m Network attached storage (NAS) is built using
standard network components such as Ethernet
or other LAN technologies
m The application servers access storage using file system functions such as open file, read file, write file, close file, etc.
m These higher-level functions are encapsulated
in protocols such as CIFS, NFS or AppleShare
and run across standard IP-based connections
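The contrast between the two access models can be sketched in a few lines: file-level access names a path and lets the NAS device handle the file system, while block-level access addresses raw device offsets. Both paths below are hypothetical, and the raw-device read assumes Windows and administrator rights.

BLOCK = 512

# File-level access (NAS style): the remote server resolves names, locks and metadata
with open(r"\\nas01\share\mail\store.edb", "rb") as f:   # hypothetical UNC path
    header = f.read(4096)

# Block-level access (SAN style): the host reads raw, sector-aligned blocks;
# any file system on top is the host's own responsibility
with open(r"\\.\PhysicalDrive2", "rb") as disk:          # hypothetical LUN exposed as a raw device
    disk.seek(100 * BLOCK)                               # jump to block 100
    data = disk.read(BLOCK)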
      
      

[Diagram: application servers access NAS devices over a general-purpose LAN (e.g. Ethernet) using file-level protocols]
m Hybrid NAS and SAN solution
 
 

[Diagram: clients access a NAS file server over the LAN (e.g. Ethernet); the NAS server's own storage is attached through a SAN storage fabric]
  
?pplaon Blok-level aess Fle-level aess
erver
?ess
meods
ommnao over Fbre F , "F ( ?pple are
n proool annel 
( over P&
"eork ypally sorage- )eneral prpose L?"
pysal spef eg )gab erne
enology (eg Fbre-annel&
b may be g-
speed erne
xample ompaq "eork ?pplane "e?pp Flers(
orage orageWorks ?" Maxor "? xxxx( ompaq
Vendors famly( M ask mar "-seres
ymmerx
m Cluster Service Features in Windows Server 2003
[Slide content not recoverable]