1. What is a fabric?
Ans: A fabric is a virtual space in which all storage nodes communicate with each other,
potentially over long distances. It can be created with a single switch or a group of switches
connected together. Each switch contains a unique domain identifier which is used in the address
schema of the fabric. To identify the nodes in a fabric, 24-bit Fibre Channel addressing is used.
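The 24-bit address breaks down into three one-byte fields: domain ID, area ID, and port ID. A
minimal Python sketch of that decomposition (the sample address is made up):

    def decode_fcid(fcid: int) -> dict:
        """Split a 24-bit Fibre Channel address into its three one-byte fields."""
        return {
            "domain": (fcid >> 16) & 0xFF,  # domain ID of the switch
            "area": (fcid >> 8) & 0xFF,     # area ID (commonly a switch port)
            "port": fcid & 0xFF,            # port / arbitrated-loop address
        }

    print(decode_fcid(0x0A1B2C))  # {'domain': 10, 'area': 27, 'port': 44}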
Fabric services: When a device logs into a fabric, its information is maintained in a database. The
common services found in a fabric are:
Login Service
Name Service
Fabric Controller
Management Server
Fabric Management: Monitoring and managing the switches is a daily activity for most SAN
administrators. Activities include accessing the specific management software for monitoring
purposes and zoning.
2. What is ISL?
Ans: Switches are connected to each other in a fabric using Inter-Switch Links (ISLs).
3. What is a switched fabric?
Ans: Switched Fabric - Each device has a unique dedicated I/O path to the device it is
communicating with. This is accomplished by implementing a fabric switch.
4. What is LUN Migration?
Ans: LUN Migration Information:
•LUN Migration provides the ability to migrate data from one LUN to another dynamically.
•The target LUN assumes the identity of the source LUN.
•The source LUN is unbound when migration process is complete.
•Host access to the LUN can continue during the migration process.
•The target LUN must be the same size or larger than the source.
•The source and target LUNs do not need to be the same RAID type or disk type (FC<->ATA).
•Both LUNs and metaLUNs can be sources and targets.
•Individual component LUNs in a metaLUN cannot be migrated independently - the entire
metaLUN must be migrated as a unit.
•The migration process can be throttled.
•Reserved LUNs cannot be migrated.
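A minimal Python sketch of the migration rules above; the function and parameter names are
illustrative, not a CLARiiON API:

    def can_migrate(source_gb, target_gb, source_is_reserved, is_meta_component):
        """Apply the basic LUN Migration rules listed above (illustrative only)."""
        if source_is_reserved:      # reserved LUNs cannot be migrated
            return False
        if is_meta_component:       # component LUNs move only as a whole metaLUN
            return False
        return target_gb >= source_gb  # target must be the same size or larger

    print(can_migrate(100, 200, False, False))  # True
    print(can_migrate(100, 50, False, False))   # False - target too small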
5. What is heterogeneous?
Ans: A network that includes computers and other devices from different manufacturers. For
example, local-area networks (LANs) that connect PCs with Apple Macintosh computers are
heterogeneous.
6. What is zoning? What are the different types of zoning?
Ans: There are several configuration layers involved in granting nodes the ability to
communicate with each other:
Members - Nodes within the SAN which can be included in a zone.
Zones - A set of members that can access each other. A port or a node can be a member of
multiple zones.
Zone Sets - A group of zones that can be activated or deactivated as a single entity in either a
single unit or a multi-unit fabric. Only one zone set can be active at one time per fabric. Can also
be referred to as a Zone Configuration.
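The membership rules can be pictured with a small Python model; the zone names and member
identifiers here are hypothetical:

    # Only one zone set is active per fabric; a node may appear in many zones.
    zones = {
        "zone_db": {"hba_wwn_1", "array_port_a"},
        "zone_app": {"hba_wwn_2", "array_port_a"},
    }
    active_zone_set = {"zone_db", "zone_app"}

    def can_communicate(a, b):
        """Two members communicate only if some zone in the active set holds both."""
        return any(a in zones[z] and b in zones[z] for z in active_zone_set)

    print(can_communicate("hba_wwn_1", "array_port_a"))  # True
    print(can_communicate("hba_wwn_1", "hba_wwn_2"))     # False - no shared zone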
15. How will you check server compatibility when installing a new box?
Ans: Check the compatibility matrix in the E-Lab Interoperability Navigator on powerlink.emc.com.
20. What are DAS, NAS, and SAN?
DAS: In a Direct Attached Storage (DAS) environment, servers connect directly to the disk array,
typically via a SCSI interface. The same connectivity port on the disk array cannot be shared
between multiple servers. Clients connect to the servers through the Local Area Network (LAN).
The distance between the server and the disk array is governed by SCSI limitations. With the
advent of Storage Area Networks and the Fibre Channel interface, this method of disk array
access is becoming less prevalent.
NAS: In a Network Attached Storage (NAS) environment, NAS devices access the disks in an
array via direct connection or through external connectivity. The NAS heads are optimized for
file serving and are set up to export/share file systems. Servers called NAS clients access these
file systems over the Local Area Network (LAN) to run applications. The clients connect to these
servers over the LAN as well.
SAN: In a Storage Area Network (SAN) environment, servers access the disk array through a
dedicated network. The SAN consists of Fibre Channel switches that provide connectivity
between the servers and the disk array. In this model, multiple servers can access the same Fibre
Channel port on the disk array. The distance between the server and the disk array can also be
greater than that permitted in a direct attached SCSI environment. Clients communicate with the
servers over the Local Area Network (LAN).
21. What is DPE2?
Ans: All CX-series models now ship with the new UltraPoint Disk Array Enclosure (DAE2P).
30. Which storage arrays and SAN devices are supported by EMC ECC?
Ans:
Storage arrays:
EMC Symmetrix
EMC CLARiiON
EMC Centera
EMC Celerra and Network Appliance NAS servers
EMC Invista
Hitachi Data Systems (including the HP and Sun resold versions)
HP Storageworks
IBM ESS
SMI-S (Storage Management Initiative Specification) compliant arrays
SAN Devices:
EMC Connectrix
Brocade
McData
Cisco
Inrange (CNT)
IBM Blade Server (IBM-branded Brocade models only)
Dell Blade Server (Dell-branded Brocade models only)
JBOD: JBOD is an acronym for “just a bunch of disks”. The drives in a JBOD array can be
independently addressed and accessed by the server.
DISK ARRAY: Disk arrays extend the concept of JBODs by improving performance and
reliability. They have multiple host I/O ports, which enables connecting multiple hosts to the
same disk array. Array management software allows the partitioning or segregation of array
resources, so that a disk or group of disks can be allocated to each of the hosts. Typically they
have controllers that can perform RAID (Redundant Array of Independent Disks) calculations.
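As a rough illustration of the arithmetic such RAID controllers perform, here is a toy
usable-capacity calculator; it assumes equal-size drives and ignores the extra space real arrays
reserve:

    def usable_gb(raid_level, drives, drive_gb):
        """Usable capacity for a few common RAID levels (equal-size drives)."""
        if raid_level == "raid0":
            return drives * drive_gb          # striping only, no redundancy
        if raid_level == "raid1":
            return (drives // 2) * drive_gb   # mirrored pairs
        if raid_level == "raid5":
            return (drives - 1) * drive_gb    # one drive's worth of parity
        raise ValueError(raid_level)

    print(usable_gb("raid5", 16, 300))  # 4500 GB usable out of 4800 GB raw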
32. What is BCV?
Ans: The most fundamental element of TimeFinder/Mirror is a specially defined volume called a
Business Continuity Volume. A BCV is a Symmetrix volume with special attributes that allows it
to be attached to another Symmetrix Logical Volume within the same Symmetrix as the next
available mirror. It must be of the same size, type, and emulation (for mainframe 3380/3390) as
the device which it will mirror. Each BCV has its own host address and Symmetrix device
number.
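A one-function sketch of the BCV pairing constraint above (illustrative names, not a Symmetrix
API):

    def can_pair_bcv(std_size_gb, bcv_size_gb, std_emulation, bcv_emulation):
        """A BCV must match the standard volume's size and emulation."""
        return std_size_gb == bcv_size_gb and std_emulation == bcv_emulation

    print(can_pair_bcv(8.43, 8.43, "3390", "3390"))  # True
    print(can_pair_bcv(8.43, 17.0, "3390", "3390"))  # False - sizes differ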
Types of zones:
– Port Zoning (Hard Zoning)
Port-to-Port traffic
Ports can be members of more than one zone
Each HBA only “sees” the ports in the same zone
If a cable is moved to a different port, zone has to be modified
– WWN based Zoning (Soft Zoning)
Access is controlled using WWN
WWNs defined as part of a zone “see” each other regardless of the
switch port they are plugged into
HBA replacement requires the zone to be modified
– Hybrid zones (Mixed Zoning)
Contain ports and WWNs
Port Zoning Advantages: More secure, simplified HBA replacement
Disadvantages: Reconfiguration when cables move
WWPN Zoning Advantages: Flexibility, easier reconfiguration, troubleshooting
Disadvantages: Spoofing, HBA replacement requires zone changes
Forced flushing: If the data in the write cache continues to increase, forced flushing will occur at a
point where there is not enough room in write cache for the next write I/O to fit. Write I/Os to
the array will be halted until enough data is flushed to make sufficient room available for the
next I/O. This process continues until the forced flushing plus the scheduled (watermark)
flushing creates enough room for normal caching to resume. Throughout this process, write
cache remains enabled.
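A simplified Python sketch of this behavior; the watermark percentages are hypothetical, chosen
only to show the escalation from idle to scheduled to forced flushing:

    def flush_state(cache_used_pct, low_wm=60, high_wm=80):
        """Map write-cache utilization to a flushing regime (illustrative)."""
        if cache_used_pct >= 100:
            return "forced flush: hold incoming writes until space is freed"
        if cache_used_pct >= high_wm:
            return "high-watermark flushing: flush aggressively"
        if cache_used_pct >= low_wm:
            return "low-watermark flushing: flush in the background"
        return "idle: no flushing needed"

    for pct in (50, 70, 90, 100):
        print(pct, "->", flush_state(pct))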
Clones: Full Synchronous copies of LUNs within the same array. Can be used as a point-in-time,
FULL copy of a LUN through the fracture process, or as a data recovery solution in the event of a
failure.
SRDF add-on solutions:
SRDF/Star - Multi-point replication
SRDF/CG - Consistency Groups
SRDF/AR - Automated Replication
SRDF/CE - Cluster Enabler
Modes:
Synchronous Replication
Semi-synchronous Replication
Adaptive Copy Replication
Asynchronous Replication
Connectivity options:
RLD: Remote Link Director
RFD: Remote Fibre Director
GigE Remote Directors and MPCD (Multiprotocol Channel Director)
Communication is peer-to-peer.
SAN Copy can use any CLARiiON SP ports to copy data, provided the port is not being used for
MirrorView connections. Multiple sessions can share the same port. You choose which ports SAN
Copy sessions use through switch zoning.
There are three basic methods of communication using Fibre Channel infrastructure:
– Point to point (P-to-P)
A direct connection between two devices
– Fibre Channel Arbitrated Loop (FC-AL)
A daisy chain connecting two or more devices
– Fabric connect (FC-SW)
Multiple devices connected via switching technologies
Connectivity Notes
• 'Connects' is the max number of NICs/HBAs that can connect to the array, regardless of
the number of hosts.
FC RAID Group Limits
• The max number of FC drives per RAID Group is 16, making the largest possible current
FC RAID group size 4.8 TB (raw) (16 x 300GB on a CX-Series array).
• All sizes are raw.
ATA RAID Group Limits
• The max number of ATA drives per RAID Group is 16, making the largest possible
current ATA RAID group size 5.12TB (raw) (16 x 320GB on a CX-Series array).
• All sizes are raw and assume that all DAEs, other than the first one, are ATA.
• The 250GB drives are 7200RPM SATA - the 320GB drives are 5400RPM PATA.
• All sizes do not include the FC drives in the first DAE.
• The first DAE must be FC and contains at least 5 drives. All other DAEs can be ATA or a
mix of ATA & FC.
• 250GB SATA and 320GB PATA drives can be mixed in the same DAE.
iSCSI Information
• iSCSI arrays have 1Gb copper Ethernet FE ports instead of Fibre Channel FE ports.
• You cannot mix Fibre Channel and iSCSI ports on the same array.
• Refer to the EMC Support Matrix for supported host iSCSI connectivity.
• The iSCSI ports and the 10/100 host management ports can be on the same IP subnet.
• iSNS is supported.
• IPSEC is not supported natively on the arrays.
• Both standard NICs as well as iSCSI HBAs (e.g. QLogic QLA4010) are supported for host
access.
• PowerPath (V4.3.1 or later) supports multi-pathing and load balancing for iSCSI arrays.
• MirrorView/S/A and SAN Copy are not supported on iSCSI arrays.
• Direct Gigabit Ethernet attach is supported.
• 10/100 NIC connections direct to the iSCSI array are not supported, except to the
management port.
• RADIUS is not currently supported.
• Gigabit Ethernet Jumbo frames are not currently supported.
• The CLARiiON storage systems allow one login per iSCSI name per SP port.
• When using the Microsoft iSCSI Initiator all NICs in the same host will use the same
iSCSI name. The name will identify the host and the individual NICs will not be
identifiable. This behavior allows one login per server to each array SP port.
• When using Qlogic iSCSI adapters each HBA will have unique iSCSI names. The name
will identify the individual HBA in the host. This behavior allows one login per HBA to
each array SP port.
• When using physically separated networks, each network MUST use a unique subnetwork
address to allow proper routing of traffic. This type of configuration is always required
for direct connect environments, and is also applicable whenever dedicated subnets are
used for the data paths.
• A single host cannot mix iSCSI HBAs and NICs to connect to the same CLARiiON array.
• A single host cannot mix iSCSI HBAs and NICs to connect to different CLARiiON arrays.
• Hosts with iSCSI HBAs and separate hosts with NICs can connect to the same Array.
• A single host cannot attach to a Fibre Channel array and an iSCSI array at the same time.
• A single host cannot attach to a CLARiiON CX iSCSI array and a CLARiiON AX iSCSI array
at the same time.
• A single host can attach to CLARiiON CX iSCSI arrays and Symmetrix iSCSI arrays when
there is common network configuration, failover software, and driver support for both
platforms.
• A single host can attach to CLARiiON CX iSCSI arrays and IP/FC switches to CLARiiON CX
Fibre Channel arrays when there is common network configuration, failover software,
and driver support for both platforms.
• Using the OSCG definition of Fan-in (server to storage system), a server can be connected
to a max of 4 CLARiiON storage systems (iSCSI and FC).
• Target array addresses and names can be configured manually in the Initiators, or iSNS
can be used to configure them dynamically.
• Support is provided for up to 4 HBAs or 4 NICs in one host connecting to one CX500i
array.
• Currently it is not possible to boot a Windows system using an iSCSI disk volume
provided by the Microsoft iSCSI Initiator. The only currently supported method for
booting a Windows system using iSCSI is via a supported iSCSI HBA.
• Dynamic disks are not supported on an iSCSI session using the Microsoft iSCSI Initiator.
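The unique-subnet rule for physically separated networks noted above can be checked with the
Python standard library; the addresses below are made up:

    from ipaddress import ip_interface

    path_a = ip_interface("192.168.10.5/24")  # NIC/HBA on physical network A
    path_b = ip_interface("192.168.20.5/24")  # NIC/HBA on physical network B

    # Separate physical networks must use distinct subnets so traffic routes cleanly.
    assert path_a.network != path_b.network, "paths share a subnet"
    print(path_a.network, path_b.network)  # 192.168.10.0/24 192.168.20.0/24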
metaLUN Information
• metaLUNs form an abstract LUN that is presented to the host as a single piece of storage
but consists of 2 or more 'back end' LUNs
• The use of metaLUNs is optional. The capability is available in the base FLARE upgrade
R12.
• metaLUNs and traditional LUNs can be mixed on the same array.
• metaLUNs are created from an initial LUN referred to as the 'base' LUN.
• The metaLUN takes on the characteristics of the base LUN when it is created (WWN,
Nice name, etc.), which can be modified by the user.
• Creation of a metaLUN is dynamic - the creation process is functionally transparent to
any hosts accessing the base LUN.
• FC and ATA LUNs cannot be mixed in the same metaLUN.
• Ownership of the back-end LUNs that make up a metaLUN will all be moved to the
same SP as the base LUN.
• All LUNs that make up a metaLUN become private.
• Destroying a metaLUN destroys all the LUNs that make up that metaLUN.
• If a LUN uses SnapView, MirrorView or SAN Copy it must be removed from those
applications before it can be expanded using metaLUNs.
• metaLUN components do not count against the max LUN count for an array; however,
they have their own limits (see below).
• metaLUNs can be striped or concatenated.
• Striping Considerations
o All striped LUNs must be the same size and RAID type.
o Striping will generally provide better performance since more spindles are
available.
o If a new LUN is added to a striped metaLUN, all data on the existing LUNs will
be restriped.
o The new space will not be available until re-striping occurs.
o For optimal performance LUNs should be in different RAID groups (spindles).
• Concatenation Considerations
o Any LUN types can be concatenated together except for R0 LUNs.
o R0 LUNs can only be concatenated with other R0 LUNs.
o Concatenation occurs by adding components to a base or existing metaLUN
LUN.
o A component is a collection of one or more LUNs identical in RAID type and size
that are striped together.
o The space added by concatenating a LUN is available immediately for use.
o You can only add LUNs to the last component in a metaLUN. You cannot insert
LUNs into the chain of component LUNs.
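A toy Python model of the capacity rules above (illustrative only, not FLARE behavior):

    def striped_capacity(component_gb):
        """Striped components must be identical in size (and RAID type)."""
        assert len(set(component_gb)) == 1, "striped LUNs must be the same size"
        return sum(component_gb)  # existing data is restriped across all LUNs

    def concatenated_capacity(component_gb):
        """Concatenated components may differ; new space is usable immediately."""
        return sum(component_gb)

    print(striped_capacity([100, 100, 100]))      # 300
    print(concatenated_capacity([100, 50, 200]))  # 350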
metaLUN Configuration
Item                         CX700   CX500   CX300
Max metaLUNs                 1024    512     256
LUNs per component           32      32      16
Concat. components/metaLUN   16      8       8
LUNs in metaLUNs             512     256     128
LUN Migration Information
• LUN Migration provides the ability to migrate data from one LUN to another
dynamically.
• The target LUN assumes the identity of the source LUN.
• The source LUN is unbound when migration process is complete.
• Host access to the LUN can continue during the migration process.
• The target LUN must be the same size or larger than the source.
• The source and target LUNs do not need to be the same RAID type or disk type (FC<->ATA).
• Both LUNs and metaLUNs can be sources and targets.
• Individual component LUNs in a metaLUN cannot be migrated independently - the
entire metaLUN must be migrated as a unit.
• The migration process can be throttled.
• Reserved LUNs cannot be migrated.
Reliability/Availability Features
• All components are dual-redundant and hot swappable (no single point of failure).
• Write cache is protected by a 'vault' area on disk. On a failure the contents are written to
disks (de-staged or dumped). When the failure is corrected the contents are written to the
back-end disks and write cache is re-enabled.
• The de-stage process is supported by batteries during power failures. Write cache will
not be re-enabled until the batteries are sufficiently recharged to support another cache
de-stage.
• The following conditions must be met for write-cache to be enabled:
o There must be a standby power supply present, and it must be fully charged.
o At least 4 vault drives must be present (all 5 if 'Non-HA' option is not selected);
they cannot be faulted or rebuilding.
o The ability to keep write cache enabled when a single vault drive fails is optional
under R12 and later.
o Both storage processors must be present and functional.
o Both power supplies must be present in the DPE/SPE.
o Both fan packs must be present in the DPE/SPE.
o The DPE/SPE and all DAEs must have two non-faulted link control cards (LCC)
each.
• Each data block on a CLARiiON contains 8 bytes of error checking data.
o The 8 bytes consist of an LRC, shed stamp, write stamp, and time stamp.
• SNiiFER runs in the background and continuously checks all data blocks for errors.
• Updates to the array SW are non-disruptive from a host perspective.
• Failure of an SP results in all LUNs owned by that SP being trespassed to the other SP
(assuming PowerPath is running on the host(s) accessing those LUNs).
• Slightly higher (0.0025%) reliability can be achieved using vertical RAID groups rather
than horizontal ones.
• Striping a RAID1 RG across multiple DAEs that include the first DAE (the one containing
the vault drives) is not recommended.
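The write-cache enable checklist earlier in this list reduces to an all-conditions-must-hold test;
a sketch with hypothetical field names:

    def write_cache_can_enable(s):
        """All conditions from the checklist above must hold (illustrative)."""
        return (s["standby_power_charged"]
                and s["healthy_vault_drives"] >= 4   # all 5 unless 'Non-HA' selected
                and s["working_sps"] == 2
                and s["power_supplies"] == 2
                and s["fan_packs"] == 2
                and s["faulted_lccs"] == 0)

    print(write_cache_can_enable({
        "standby_power_charged": True, "healthy_vault_drives": 5,
        "working_sps": 2, "power_supplies": 2, "fan_packs": 2, "faulted_lccs": 0,
    }))  # True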
Security Features
• Navisphere
o Arrays can be configured into domains to control who can manage them.
o Named role-based accounts.
o Roles are Read Only (Monitor), Manager and Security Manager.
o All management communications with array are encrypted with 128-bit SSL.
o All actions performed on an array are logged by username@hostname.
• NaviCLI
o username@hostname is authenticated against privileged list of Navi agent (on
host for pre-FC4700, SP for all others).
o No encryption.
o Password is sent in clear.
o Communicates on TCP/IP port 6389.
Software
• Navisphere
• NaviCLI
• Navisphere Integrator
• Navisphere Analyzer
• LUN Masking
• SnapView
• MirrorView/S
• MirrorView/A
• SAN Copy
• PowerPath
• CLARalert/OnAlert
Navisphere (6.19)
NaviCLI (6.19)
Host-based package for integrating CLARiiON management into 3rd party packages.
LUN Masking
• Connect hosts with different OSs to the same array port (through a switch).
• Hosts and LUNs are combined into Storage Groups.
• Allows assignment of a Storage Group to more than one host (for clustering).
• A Storage Group can be assigned up to 256 LUNs.
• Supports multiple paths to the Storage Group (in conjunction with host-based
PowerPath).
• Disallows changing or deleting the hidden Management storage group.
• Disallows deleting a storage group that has hosts assigned to it.
• Disallows deleting a storage group that has LUNs in it.
• Disallows unbinding a LUN that is in a storage group.
• When activated, changes Default Storage Group to Management Storage Group.
o Management Storage Group is a communications mechanism only (LUN 0 or
LUN Z).
o It never contains any actual LUNs.
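A small Python model of Storage Groups and the 256-LUN limit (hypothetical names, not the
Access Logix API):

    MAX_LUNS_PER_GROUP = 256

    storage_groups = {
        # shared by two hosts, e.g. for clustering
        "sg_oracle": {"hosts": {"host1", "host2"}, "luns": set(range(12))},
    }

    def add_lun(group_name, lun):
        group = storage_groups[group_name]
        if len(group["luns"]) >= MAX_LUNS_PER_GROUP:
            raise ValueError("a storage group holds at most 256 LUNs")
        group["luns"].add(lun)

    add_lun("sg_oracle", 12)
    print(len(storage_groups["sg_oracle"]["luns"]))  # 13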
SnapView (V2.19)
SnapView Notes:
• SnapView is available for the CX300/400/500/600 and CX700.
• The combined total number of BCV or MirrorView images (source or target) cannot
exceed 50 for the CX500 and 100 for the CX700.
• Snapshot persistence across reboots is optional per snap session.
• AdmSnap is an extended command line interface (CLI) for SnapView.
o Communicates in-band with array (via SAN).
o Adds a higher degree of host integration (i.e. cache flushing).
• In order to mount Snapshots/BCVs on the same host as their source, Replication
Manager must be used to properly modify drive signatures.
• BCVs must be fractured before they can be accessed by a host.
• Snap rollback allows a source LUN to be instantly restored from any snapshot of that
source.
• Write changes to a snapshot can be optionally rolled back with the snapshot.
• BCVs can be incrementally updated from source.
• A Clone Group contains a source LUN and all of its clones.
• Instant restore allows a source LUN to be instantly restored from a BCV.
• Snapshots and BCVs can be mounted and written to on an alternate host. Any data in the
original snapshot/BCV is preserved when writes occur.
• If the snapshot/BCV has been written to before performing a snap rollback, the user has
the option of keeping or deleting any writes that have occurred.
• A protected restore option is available to prevent any new writes to the source LUN from
going to the BCV being used during a reverse synchronization process.
• Each source LUN with SnapView sessions requires a minimum of 1 reserved LUN.
Multiple sessions for that LUN can use the same reserved LUN(s).
• A single snap session for multiple LUNs can be created with a single command (GUI or
CLI) to ensure consistent snapshots across LUNs.
o Snap sessions can be both consistent and persistent.
o A max of 16 snap sessions with a single command on a CX600/700 (8 for all other
supported models)
o A max of 32 consistent set operations (BCVs & Snaps combined) can be in
progress simultaneously per storage system.
• BCVs for multiple LUNs can be fractured with a single command (GUI or CLI) to ensure
consistent BCVs across LUNs.
o BCVs in a consistent fracture operation must be in different Clone Groups.
o After the consistent fracture completes, there is no group association between the
BCVs.
o During the consistent fracture operation, if there is a failure on any of the clones,
the consistent fracture will fail on all of the clones.
o If any clones within the set were fractured prior to the failure, SnapView will
resynchronize those clones.
o A max of 16 BCVs can be fractured with a single command on a CX600/700 (8 for
all other supported models)
o A max of 32 consistent set operations (BCVs & Snaps combined) can be in
progress simultaneously per storage system.
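The all-or-nothing semantics of a consistent fracture described above can be sketched as follows
(illustrative, not SnapView internals):

    def consistent_fracture(clones, failing):
        """Fracture every clone or none: one failure aborts the whole set."""
        fractured = []
        for clone in clones:
            if clone in failing:
                # clones fractured before the failure get resynchronized
                return []
            fractured.append(clone)
        return fractured

    print(consistent_fracture(["c1", "c2", "c3"], failing=set()))   # all fracture
    print(consistent_fracture(["c1", "c2", "c3"], failing={"c2"}))  # [] - all fail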
MirrorView/S (V2.19)
MirrorView/A (V2.19)
• Utilizes delta set technology to track changes between transfer cycles. Whatever changes
between cycles is what is transferred.
• Available for CX400/500/600/700.
• Supports mirroring between different CLARiiON models (CX400/500/600/700).
• Supports consistency groups (Note: All LUNs in a consistency group are consistent
relative to each other, not necessarily to the application's view of the data).
• Once mirrors are in a Consistency Group, you cannot fracture, synchronize, promote, or
destroy individual mirrors that are in the Consistency Group.
• All secondary images in a Consistency Group must be on the same remote storage
system.
• CX400/CX500 systems are limited to a max of 8 consistency groups with a max of 8
mirrors per group. CX600/CX700 systems are limited to a max of 16 consistency groups
with a max of 16 relationships per group.
• You can create snapshots of both the source and target LUNs. BCVs of a MirrorView
source/target are not supported.
• Each source LUN with a MirrorView/A session requires a minimum of 1 reserved LUN.
Multiple sessions for that LUN can use the same reserved LUN(s).
• Configuration rules:
o Each mirror can have one primary image and zero or one secondary images. Any
single storage system can have only one image of a mirror.
o A storage system can have mirroring connections to a max of four other storage
systems concurrently. (Mirroring connections are common between synchronous
and asynchronous mirrors.)
o You can configure a max of 50 primary and secondary images on CX400 and
CX500 storage systems and a max of 100 primary and secondary images on
CX600 and CX700 storage systems. The total number of primary and secondary
images on the storage system make up this max number.
o To manage remote mirror configurations, the Navisphere management
workstation must have an IP connection to both the local and remote storage
systems. The connection to the remote storage system should have an effective
bandwidth of at least 128 Kbits/second.
o The local and remote storage systems do not need to be in the same Navisphere
domain.
o You must have the MirrorView/A and Access Logix software installed and
enabled on all storage systems you want to participate in a mirror.
o Requires LUN masking.
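The per-model limits in the configuration rules above lend themselves to a quick validation
sketch (numbers taken from the text; function and variable names hypothetical):

    IMAGE_LIMITS = {"CX400": 50, "CX500": 50, "CX600": 100, "CX700": 100}
    MAX_MIRROR_CONNECTIONS = 4  # concurrent connections to other storage systems

    def mirror_config_ok(model, total_images, connections):
        return (total_images <= IMAGE_LIMITS[model]
                and connections <= MAX_MIRROR_CONNECTIONS)

    print(mirror_config_ok("CX500", 48, 3))   # True
    print(mirror_config_ok("CX700", 120, 2))  # False - image limit exceeded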
PowerPath (4.4)
CLARalert (6.2)
• Components.
o CLARalert
o Navisphere host agent or NaviCLI
• OnAlert is no longer used for CLARalert dial-home.
• Notifications can be sent via dial-up or email.
• Dial-up requires a Windows NT/2000 management station with a modem.
• Email can be sent via either a Windows or Sun/Solaris station.
• Max monitored systems per central monitor is 1000.
Definitions