GA32-0543-11
IBM System Storage N series
Note:
Before using this information and the product it supports, read the general information in “Notices” on page 175.
The following paragraph does not apply to any country (or region) where such provisions are inconsistent with local
law.
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION “AS IS” WITHOUT
WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states (or regions) do
not allow disclaimer of express or implied warranties in certain transactions; therefore, this statement may not apply
to you.
Order publications through your IBM representative or the IBM branch office serving your locality.
© Copyright International Business Machines Corporation 2005, 2008. All rights reserved.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
Contents
Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Floor plan considerations . . . . . . . . . . . . . . . . . . . . . 110
Creating a floor plan . . . . . . . . . . . . . . . . . . . . . 110
Security . . . . . . . . . . . . . . . . . . . . . . . . . . 111
Quad-port GbE Ethernet adapter (copper) (FC 1023) . . . . . . . . . 168
Dual-port SCSI Ultra320 HBA for tape attachment (FC 1024) . . . . . . 168
Dual-port Gigabit Ethernet iSCSI target adapter (copper) (FC 1026) . . . . 168
Quad-port 4-Gbps Fibre Channel HBA for disk attachment (FC 1029) . . . 168
Dual-port 10 GbE Ethernet adapter (FC 1031) . . . . . . . . . . . . 169
Dual-port MetroCluster VI HBA (Models A20/G20 only) (FC 1032). . . . . 169
SnapMirror over Fibre Channel HBA (FC 1033) . . . . . . . . . . . 169
Quad-port 4-Gbps Fibre Channel HBA for tape and disk attachment (FC 1035) . . 170
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
Copyrights . . . . . . . . . . . . . . . . . . . . . . . . . . 175
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . 176
Electronic emission notices . . . . . . . . . . . . . . . . . . . . 177
Federal Communications Commission (FCC) Class A Statement . . . . . 177
Industry Canada Class A Emission Compliance Statement . . . . . . . 177
Avis de conformité à la réglementation d’Industrie Canada . . . . . . . 177
European Union (EU) Electromagnetic Compatibility Directive . . . . . . 177
Australia and New Zealand Class A statement . . . . . . . . . . . . 178
Germany Electromagnetic Compatibility Directive . . . . . . . . . . . 178
People’s Republic of China Class A Electronic Emission Statement . . . . 179
Taiwan Class A warning statement . . . . . . . . . . . . . . . . 179
Japan VCCI Class A ITE Electronic Emission Statement . . . . . . . . 179
Korean Class A Electronic Emission Statement . . . . . . . . . . . 179
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
The following sections define each type of safety notice and provide examples.
The following notices and statements are used in IBM® documents. They are listed
below in order of increasing severity of potential hazards. Follow the links for more
detailed descriptions and examples of the danger, caution, and attention notices in
the sections that follow.
v Note: These notices provide important tips, guidance, or advice.
v “Attention notices” on page xiii: These notices indicate potential damage to
programs, devices, or data.
v “Caution notices” on page xiii: These statements indicate situations that can
be potentially hazardous to you.
v “Danger notices”: These statements indicate situations that can be potentially
lethal or extremely hazardous to you. Safety labels are also attached directly to
products to warn of these situations.
v In addition to these notices, “Labels” on page xii may be attached to the product
to warn of potential hazards.
Danger notices
A danger notice calls attention to a situation that is potentially lethal or extremely
hazardous to people. A lightning bolt symbol accompanies a danger notice to
represent a dangerous electrical condition. A sample danger notice follows.
DANGER
An electrical outlet that is not correctly wired could place
hazardous voltage on metal parts of the system or the devices
that attach to the system. It is the responsibility of the customer
to ensure that the outlet is correctly wired and grounded to
prevent an electrical shock.
To Disconnect:
1. Turn everything OFF (unless instructed otherwise).
2. Remove power cords from the outlet.
3. Remove signal cables from connectors.
4. Remove all cables from devices.
To Connect:
1. Turn everything OFF (unless instructed otherwise).
2. Attach all cables to devices.
3. Attach signal cables to connectors.
4. Attach power cords to outlet.
5. Turn device ON.
Labels
As an added precaution, safety labels are often installed directly on products or
product components to warn of potential hazards.
The actual product safety labels may differ from these sample safety labels:
DANGER
Hazardous voltage, current, or energy levels are present
inside any component that has this label attached.
Caution notices
A caution notice calls attention to a situation that is potentially hazardous to people
because of some existing condition. A caution notice can be accompanied by
different symbols, as in the examples below:
CAUTION:
This product is equipped with a 3-wire (two conductors and
ground) power cable and plug. Use this power cable with a properly
grounded electrical outlet to avoid electrical shock.
CAUTION:
Data processing environments can contain equipment transmitting
on system links with laser modules that operate at greater than
Class 1 power levels. For this reason, never look into the end of an
optical fiber cable or open receptacle.
Attention notices
An attention notice indicates the possibility of damage to a program, device, or
system, or to data. An exclamation point symbol may accompany an attention
notice, but is not required. A sample attention notice follows:
CAUTION:
This product contains a Class 1M laser. Do not view directly with optical
instruments. (C028)
This equipment contains Class 1 laser products, and complies with FDA radiation
Performance Standards, 21 CFR Subchapter J and the international laser safety
standard IEC 825-2.
CAUTION:
Data processing environments can contain equipment transmitting on
system links with laser modules that operate at greater than Class 1
power levels. For this reason, never look into the end of an optical fiber
cable or open receptacle.
Attention: In the United States, use only SFP or GBIC optical transceivers that
comply with the FDA radiation performance standards, 21 CFR Subchapter J.
Internationally, use only SFP or GBIC optical transceivers that comply with IEC
standard 825-1. Optical products that do not comply with these standards may
produce light that is hazardous to the eyes.
Usage restrictions
The optical ports of the modules must be terminated with an optical connector or
with a dust plug.
Rack installation
CAUTION:
v Do not install a unit in a rack where the internal rack ambient
temperatures will exceed the manufacturer’s recommended
ambient temperature for all your rack-mounted devices.
v Do not install a unit in a rack where the air flow is compromised.
Ensure that air flow is not blocked or reduced on any side, front,
or back of a unit used for air flow through the unit.
v Consideration should be given to the connection of the
equipment to the supply circuit so that overloading of the
circuits does not compromise the supply wiring or overcurrent
protection.
v To provide the correct power connection to a rack, refer to the
rating labels located on the equipment in the rack to determine
the total power requirement of the supply circuit.
v This drawer is a fixed drawer and should not be moved for
servicing unless specified by manufacturer. Attempting to move
the drawer partially or completely out of the rack may cause the
rack to become unstable or cause the drawer to fall out of the
rack.
Be cautious of potential safety hazards that are not covered in the safety checks. If
the inspection indicates an unacceptable safety condition, the condition must be
corrected before you can service the machine.
Note: It is the responsibility of the owner of the system to correct any unsafe
condition.
b. Using the appropriate probe, check for 0.1 ohm or less resistance between
the metal frame and the grounding pin on each of the power outlets on
each power distribution bus.
13. Check for the following conditions for each external device that has an
attached power cord:
v Damage to the power cord.
v Use of the correct grounded power cord.
v With the external power cord connected to the device, check for 0.1 ohm or
less resistance between the ground lug on the external power cord plug and
the metal frame of the device.
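The 0.1-ohm criterion used in the resistance checks above can be sketched as a small validation routine (the threshold comes from the checklist; the function names are illustrative):

```python
# Ground-continuity criterion from the safety checklist: a resistance
# reading between the grounding point and the metal frame passes only
# when it is 0.1 ohm or less.
MAX_GROUND_RESISTANCE_OHMS = 0.1

def ground_check_passes(reading_ohms: float) -> bool:
    """Return True when a single resistance reading meets the criterion."""
    return reading_ohms <= MAX_GROUND_RESISTANCE_OHMS

def all_outlets_pass(readings_ohms: list[float]) -> bool:
    """Every outlet's grounding pin on a power distribution bus must pass."""
    return all(ground_check_passes(r) for r in readings_ohms)
```

For example, readings of 0.05 and 0.08 ohm pass, while any reading above 0.1 ohm fails the check.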
14. Close the rear cover of the rack.
15. Perform the power-on procedure for the PDU that is installed in the rack.
For additional information, refer to the documentation for your rack cabinet.
CAUTION:
Make sure that you do the following:
v Before you add or remove drawers, always have the leveling feet lowered
and the front and rear stabilizer installed, or have the rack bolted to the
floor.
v Always install drawers at the bottom of the rack first.
v Always remove drawers from the top of the rack first.
v Always install the heaviest drawers on the bottom of the rack.
v Remove two or three drawers from the top of the rack before you relocate
it.
v Never push on the sides of the rack.
Attention: If the rack has equipment located above EIA location 32, you must
remove the equipment in position 32 and above from the rack before you move it.
Always remove the equipment from the top of the rack first.
For additional information, refer to the documentation for your rack cabinet.
Notice: This mark applies only to countries within the European Union (EU) and
Norway.
In the United States, IBM has established a return process for reuse, recycling, or
proper disposal of used IBM sealed lead acid, nickel cadmium, nickel metal hydride,
and other battery packs from IBM Equipment. For information on proper disposal of
these batteries, contact IBM at 1-800-426-4333. Please have the IBM part number
listed on the battery available prior to your call.
Notice: This mark applies only to countries within the European Union (EU).
Supported features
IBM System Storage N series storage systems and expansion boxes are driven by
NetApp® Data ONTAP® software. Some features described in the product software
documentation are neither offered nor supported by IBM. Please contact your local
IBM representative or reseller for further details.
Information about supported features can also be found at the following Web site:
www.ibm.com/storage/support/nas/
A listing of currently available N series products and features can be found at the
following Web site:
www.ibm.com/storage/nas/
The following types of notices and statements are used in this document:
v Note: These notices provide important tips, guidance, or advice.
v Important: These notices provide information or advice that might help you avoid
inconvenient or problem situations.
v Attention: These notices indicate possible damage to programs, devices, or
data. An attention notice is placed just before the instruction or situation in which
damage could occur.
v Caution: These statements indicate situations that can be potentially hazardous
to you. A caution statement is placed just before the description of a potentially
hazardous procedure step or situation.
v Danger: These statements indicate situations that can be potentially lethal or
extremely hazardous to you. A danger statement is placed just before the
description of a potentially lethal or extremely hazardous procedure step or
situation.
Web sites
IBM maintains pages on the World Wide Web where you can get the latest
technical information and download device drivers and updates.
v For NAS product information, go to the following Web site:
www.ibm.com/storage/nas/
v For NAS support information, go to the following Web site:
www.ibm.com/storage/support/nas/
v For AutoSupport information, go to the following Web site:
www.ibm.com/storage/support/nas/
v For the latest version of N series publications, go to the following Web site:
www.ibm.com/storage/support/nas/
Firmware updates
As with all devices, you should run the latest level of firmware, which is
embedded in Data ONTAP. Any firmware changes are posted to the following Web
site:
www.ibm.com/storage/support/nas/
Note: If you do not see new changes on the Web site, you are running the latest
level of firmware.
Verify that the latest level of firmware is installed on your machine before contacting
IBM for technical support.
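Comparing an installed firmware level against the posted level is a simple dotted-version comparison. A minimal sketch, assuming dotted numeric version strings (the example values in the usage note are hypothetical, not actual posted levels):

```python
def parse_version(version: str) -> tuple[int, ...]:
    """Turn a dotted version string such as '7.2.4' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def is_up_to_date(installed: str, latest: str) -> bool:
    """True when the installed firmware is at or above the posted level."""
    return parse_version(installed) >= parse_version(latest)
```

For instance, an installed "7.2.3" against a posted "7.2.4" would report that an update is needed, while "7.3.0" would not.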
Note: EXN expansion units are not intended for attachment to a gateway.
The term gateway describes IBM N series models that do not contain internal disk
storage or attach to disk storage expansion units. IBM N series gateways attach to
external storage devices on a Storage Area Network (SAN).
The terms system and storage system refer either to a gateway by itself, or to a
filer by itself or with additional disk drives.
Command conventions
You can enter commands on the system console or from any client that can obtain
access to the appliance using a Telnet session. In examples that illustrate
commands executed on a UNIX® workstation, the command syntax and output
might differ, depending on your version of UNIX.
Formatting conventions
The following table lists different character formats used in this guide to set off
special information.
Keyboard conventions
This guide uses capitalization and some abbreviations to refer to the keys on the
keyboard. The keys on your keyboard might not be labeled exactly as they are in
this guide.
If the Reader Comment Form in the back of this manual is missing, you can direct
your mail to:
When you send information to IBM, you grant IBM a nonexclusive right to use or
distribute the information in any way it believes appropriate without incurring any
obligation to you.
Site preparation is the responsibility of the customer; this document provides
the basic information required for that preparation. You may also want to enlist
the help of your Field Technical Support Specialist, marketing representative, or
other support personnel.
Your marketing representative is available to ensure that the hardware and software
that you have chosen will meet your needs.
Planning for the IBM N series storage system consists of these main tasks:
1. Understanding the features and functions of the N series storage system and
selecting the proper feature codes (FCs) for your business as described in
Chapter 2, “IBM N series hardware features,” on page 7 and Chapter 3, “IBM N
series storage system software features,” on page 75.
2. Planning for the physical environment where the equipment will operate. This
planning step includes the physical space, electrical, temperature, humidity,
altitude, air flow, service clearance, and similar requirements as described in
Chapter 4, “Site planning,” on page 87.
3. Planning for cabling depends on the adapter feature codes (FCs) selected as
described in Chapter 5, “Cable planning,” on page 113.
4. If required, planning for a dual-node clustered configuration for high availability
as described in “Clustering” on page 4.
5. Planning for reporting error information to IBM as described in Chapter 6,
“AutoSupport,” on page 119.
Note: EXN expansion units are not intended for attachment to a gateway.
Multiple EXN1000s, each having different SATA disk drive feature codes, may be
attached to the same N series filer on the same Fibre Channel loop.
Multiple EXN2000s and EXN4000s, each having different Fibre Channel disk drive
feature codes, may be attached to the same N series filer on the same Fibre
Channel loop.
For the latest storage expansion unit support information, visit the following Web
site:
www.ibm.com/storage/support/nas/
Intermixing Fibre Channel and SATA disk drives in a supported N series filer
configuration is supported as follows:
v Intermixing Fibre Channel disk expansion units with SATA disk expansion units
on the same loop is not supported.
v EXN4000s or EXN2000s (Fibre Channel disk drives) and EXN1000s (SATA disk
drives) may be attached to the same N series filer only if the Fibre Channel disk
expansion units (EXN4000s or EXN2000s) are on separate loops than the SATA
disk expansion units (EXN1000s).
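The intermixing rule above can be expressed as a per-loop check: a loop may carry Fibre Channel expansion units (EXN2000, EXN4000) or SATA expansion units (EXN1000), but never both. A minimal sketch:

```python
# Expansion units with Fibre Channel disk drives must not share a
# Fibre Channel loop with SATA expansion units.
FIBRE_CHANNEL_UNITS = {"EXN2000", "EXN4000"}  # Fibre Channel disk drives
SATA_UNITS = {"EXN1000"}                      # SATA disk drives

def loop_is_valid(units_on_loop: list[str]) -> bool:
    """A single loop may carry FC units or SATA units, not both."""
    kinds = set(units_on_loop)
    has_fc = bool(kinds & FIBRE_CHANNEL_UNITS)
    has_sata = bool(kinds & SATA_UNITS)
    return not (has_fc and has_sata)
```

So a loop of EXN2000s and EXN4000s is valid, as is a loop of only EXN1000s, but mixing an EXN2000 with an EXN1000 on one loop is not.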
Data ONTAP
N series storage systems are driven by the Data ONTAP operating system. Data
ONTAP is a highly optimized, scalable, and flexible operating system that can
handle mixed SAN and NAS environments. Data ONTAP delivers flexible
management and high availability, ensures business continuance, and provides data
permanence, thereby reducing storage management complexity in your enterprise.
Data ONTAP software integrates seamlessly into UNIX, Windows®, and Web
environments and provides the foundation to build your storage infrastructure and
an enterprise-wide data fabric for mission-critical business applications. The
operating system includes integrated secure access capabilities (SSL, SSH) and
FilerView, a Web-based element manager.
Interoperability
The latest information on software and hardware interoperability can be accessed
at:
www.ibm.com/systems/storage/network/interophome.html
Adapter support
There are no PCI adapter slots on the N3300 and N3700 systems, so no additional
adapter options are supported for those systems.
There is one available PCIe adapter slot per node on the N3600 storage system.
For an A20 model, adapters must be added in pairs, one per node, so that both
nodes are populated with one of the same type of PCIe adapter.
Note: The PCIe adapters supported by the N3600 are described in Appendix E,
“Optional adapter cards supported by the N3600,” on page 145.
Note: The PCI-X adapters supported by the N5200 and N5500 storage systems
are described in Appendix F, “Optional adapter cards supported by N5200
and N5500 systems,” on page 149.
There are three available PCIe adapter slots per node on the N5300 and N5600
storage systems. (A10/G10 models have three available PCIe adapter slots.
A20/G20 models have six available PCIe adapter slots.) Adapters must be added in
pairs, one per node, to an A20/G20 model, so that both nodes are populated with
the same number of each type of PCIe adapters.
Note: The PCIe adapters supported by the N5300 and N5600 storage systems are
described in Appendix G, “Optional adapter cards supported by N5300 and
N5600 systems,” on page 157.
There are five available PCIe adapter slots and three available PCI-X adapter slots
per node on the N7700 and N7900 storage systems. (The sixth PCIe adapter slot
on each N7700 or N7900 node is reserved for the NVRAM6 adapter and is not
available for PCIe adapter card use). Adapters must be added in pairs, one per
node, to an A21/G21 model, so that both nodes are populated with the same
number of each type of PCI-X/PCIe adapters.
Note: The PCIe and PCI-X adapters supported by the N7700 and N7900 storage
systems are described in Appendix H, “Optional adapter cards supported by
N7000 series systems,” on page 163.
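The pairing rule stated for the dual-node (A2x/G2x) models, that both nodes must be populated with the same number of each adapter type, amounts to comparing per-node adapter counts. A minimal sketch (the adapter names in the usage note are illustrative):

```python
from collections import Counter

def pairing_is_valid(node_a_adapters: list[str],
                     node_b_adapters: list[str]) -> bool:
    """On dual-node models, adapters are added in pairs, one per node, so
    both nodes must carry the same number of each adapter type."""
    return Counter(node_a_adapters) == Counter(node_b_adapters)
```

For example, one FC 1004 HBA in each node is a valid pairing; an HBA in only one node is not.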
Clustering
Before beginning hardware planning, the key hardware decision is to decide if the
higher availability obtained by clustering two N series storage system nodes in a
single Model A20 is needed. Clustered configurations are referred to as
active/active configurations.
Two N series storage system nodes can be clustered together for higher availability
using the Cluster Failover (CFO) software feature. Each node continually monitors
its partner, mirroring the data for each other’s NVRAM.
The IBM N3300, N3600, and N3700 systems all contain both clustered nodes in the
same enclosure.
In all IBM N5000 and N7000 series models, a standard cluster contains two nodes,
with each node contained in a different enclosure. Both nodes must be the same N
series model. The two nodes are clustered through an Infiniband (IB) cluster cable
that is attached to the NVRAM5 adapter (for N5200 and N5500 models) or
NVRAM6 adapter (for N5300, N5600 and N7000 series models), which allows one
node to serve data to the disks of its failed partner node.
This chapter also summarizes configuration limits for Fibre Channel and iSCSI in
“Configuration limits for Fibre Channel and iSCSI” on page 63 and rack mounting
information in “Rack mount requirements” on page 73.
www.ibm.com/systems/storage/network/interophome.html
For details about your system's configuration limits for Fibre Channel and iSCSI,
see “Configuration limits for Fibre Channel and iSCSI” on page 63.
N3300 A10
The N3300 Model A10 is designed to provide a single-node storage controller with
iSCSI support, and NFS, CIFS, and FCP support via optional features. The N3300
Model A10 is a 2U storage controller that must be mounted in a standard 19-inch
rack. The base chassis includes:
Note: A minimal configuration with zero disk drives is supported for
attachment to the EXN1000, EXN2000, or EXN4000 expansion units.
The N3300 Model A10 may be upgraded to an N3300 Model A20. The upgrade
from a Model A10 to a Model A20 is a disruptive upgrade.
N3300 A20
The N3300 Model A20 is designed to provide identical function as the N3300 Model
A10, but with the addition of a second processor control module (PCM) and the
Clustered Failover (CFO) licensed function. The Model A20 consists of two PCMs
that are designed to provide failover and failback function, helping improve overall
availability. The Model A20 is a 2U rack-mountable storage controller.
Note: A minimal configuration with zero disk drives is supported for
attachment to the EXN1000, EXN2000, or EXN4000 expansion units.
The maximum raw storage capacity of the N3300 system is determined by the
number of disk drives supported. The N3300 Model A10 and Model A20 each
support a maximum of 68 hard disk drives (12 internal and 56 via storage
expansion units).
Table 1 describes the maximum supported total physical storage capacity for the
N3300 Model A10 and Model A20:
Table 1. N3300 raw storage capacity

Disk enclosure  Disk drive storage capacity       Maximum enclosures  Maximum disk drives  Maximum physical capacity
Internal        144 GB SAS disk drives            n/a                 12                   1.72 TB
Internal        300 GB SAS disk drives            n/a                 12                   3.60 TB
Internal        500 GB SATA disk drives           n/a                 12                   6 TB
Internal        750 GB SATA disk drives           n/a                 12                   9 TB
Internal        1 TB SATA disk drives             n/a                 12                   12 TB
EXN1000         250 GB SATA disk drives           4                   56                   14 TB
EXN1000         500 GB SATA disk drives           4                   56                   28 TB
EXN1000         750 GB SATA disk drives           4                   56                   42 TB
EXN1000         1 TB SATA disk drives             4                   56                   56 TB
EXN2000         144 GB Fibre Channel disk drives  4                   56                   8.06 TB
EXN2000         300 GB Fibre Channel disk drives  4                   56                   16.8 TB
EXN4000         144 GB Fibre Channel disk drives  4                   56                   8.06 TB
EXN4000         300 GB Fibre Channel disk drives  4                   56                   16.8 TB
EXN1000 SATA storage expansion units must not share a Fibre Channel loop with
EXN2000 or EXN4000 Fibre Channel storage expansion units.
The power cord features for the N3300 are listed in Appendix B, “Power cord list for
N series storage systems,” on page 123.
www.ibm.com/systems/storage/network/interophome.html
For details about your system's configuration limits for Fibre Channel and iSCSI,
see “Configuration limits for Fibre Channel and iSCSI” on page 63.
N3600 A10
The Model A10 is designed to provide a single-node storage controller with iSCSI
support, and NFS, CIFS, and FCP support via optional features. The N3600 Model
A10 is a 4U storage controller that must be mounted in a standard 19-inch rack.
The base chassis includes:
v One 2.2 GHz 32-bit processor
v 2 GB random access ECC memory
v Two integrated gigabit Ethernet RJ-45 ports
v Two integrated 4-Gbps small form factor (SFF) Fibre Channel ports
v One serial console port
v One integrated Remote LAN Management (RLM) port
v Redundant hot-swappable, auto-ranging power supplies and cooling fans
v Support for 12 to 20 Serial Attached SCSI (SAS) or SATA disk drives
The Model A10 supports a maximum of one dual-path Fibre Channel loop. The
Model A10 can be upgraded to a maximum of four gigabit Ethernet ports via the
addition of one optional dual-port Ethernet NIC (feature number 1012 or 1013). The
Model A10 may be upgraded to a Model A20. The upgrade from a Model A10 to a
Model A20 is a disruptive upgrade.
N3600 A20
The Model A20 is designed to provide identical function as the N3600 Model A10,
but with the addition of a second processor control module (PCM) and the
Clustered Failover (CFO) licensed function. The Model A20 also supports a
maximum of 104 drives. The Model A20 consists of two PCMs that are designed to
provide failover and failback function, helping improve overall availability. The Model
A20 is a 4U rack-mountable storage controller.
Note: A minimal configuration with zero disk drives is supported for
attachment to the EXN1000, EXN2000, or EXN4000 expansion units.
For the Model A20, the maximum number of additional expansion adapters is two.
The Model A20 supports a maximum of one dual-path Fibre Channel loop. The
Model A20 can be upgraded to a maximum of eight gigabit Ethernet ports via the
addition of two optional dual-port gigabit Ethernet NICs (feature number 1012 or
1013).
The maximum raw storage capacity of the N3600 system is determined by the
number of disk drives supported. The N3600 Model A10 and Model A20 each
support a maximum of 104 hard disk drives (20 internal and 84 via storage
expansion units).
Table 4 describes the maximum supported total physical storage capacity for the
N3600 Model A10 and Model A20:
Table 4. N3600 raw storage capacity

Disk enclosure  Disk drive storage capacity       Maximum enclosures  Maximum disk drives  Maximum physical capacity
Internal        144 GB SAS disk drives            n/a                 20                   2.88 TB
Internal        300 GB SAS disk drives            n/a                 20                   6.00 TB
Internal        500 GB SATA disk drives           n/a                 20                   10 TB
Internal        750 GB SATA disk drives           n/a                 20                   15 TB
Internal        1 TB SATA disk drives             n/a                 20                   20 TB
EXN1000         250 GB SATA disk drives           6                   84                   21.00 TB
EXN1000         500 GB SATA disk drives           6                   84                   42.00 TB
EXN1000         750 GB SATA disk drives           6                   84                   63.00 TB
EXN1000         1 TB SATA disk drives             6                   84                   84 TB
EXN2000         144 GB Fibre Channel disk drives  6                   84                   12.09 TB
EXN2000         300 GB Fibre Channel disk drives  6                   84                   25.20 TB
EXN4000         144 GB Fibre Channel disk drives  6                   84                   12.09 TB
EXN4000         300 GB Fibre Channel disk drives  6                   84                   25.20 TB
EXN1000 SATA storage expansion units must not share a Fibre Channel loop with
EXN2000 or EXN4000 Fibre Channel storage expansion units.
The power cord features for the N3600 are listed in Appendix B, “Power cord list for
N series storage systems,” on page 123.
N3700 A10
The Model A10 is designed to provide a single node filer with NFS, CIFS, iSCSI
and FCP support in a 3U, integrated filer mounted in a standard 19-inch rack. This
base chassis includes redundant hot-plug power supplies with fans and two
integrated 10/100/1000 Ethernet ports.
The N3700 Model A10 can be upgraded to an N3700 Model A20. The
Model A20 is designed to provide identical function as the N3700 Model A10, but
with the addition of the Clustered Failover (CFO) software feature. The N3700
Model A20 consists of two processing nodes in the same enclosure that are
designed to provide failover and failback function, helping improve overall reliability.
Note: Any N3700 drive bays that do not contain hard disk drives must be
populated with drive blank covers (feature 4099).
The N3700 load board feature enables the N3700 (A10 and A20) to operate in a
SATA-only storage environment. If the N3700 load board is ordered, the N3700 is
ordered with no Fibre Channel hard drives and only EXN1000s (SATA drives) are
attached to the storage controller. For more information about the N3700 load
board, see “Connecting expansion units to the N3700” on page 15.
Attention: If your N3700 storage system shipped with load boards, exactly two
N3700 load boards (FC 4020) and 12 HDD blank fillers (FC 4099) are installed in
the system. The two load boards must be installed in bays 0 and 1.
For details about your system's configuration limits for Fibre Channel and iSCSI,
see “Configuration limits for Fibre Channel and iSCSI” on page 63.
Regardless of the CPU module design, all N3700 storage systems offer the same
functionality. Field repairs or upgrades may use the current CPU module design on
any N3700 system.
The early CPU module is easily distinguished from the current CPU module design
by the rear port labeling, shown in Figure 2. The early CPU module uses an
integrated SFP for Fibre Channel Port C (used for third-party devices), and it uses a
special HSS connector for Fibre Channel Port B to connect the N3700 to expansion
units.
The current CPU module design uses pluggable SFP connections for both Fibre
Channel ports. The current CPU module labeling is shown in Figure 3 on page 15.
The SFP required for connections to the Fibre Channel Port C (used for third-party
devices) is included with all N3700s that ship with the current CPU module design.
Note: Any expansion unit drive bays that do not contain hard disk drives must
be populated with drive blank covers (feature 4099).
The N3700 does not support the attachment of mixed expansion unit types. All
expansion units connected to a single N3700 must be of one type: EXN4000,
EXN2000, or EXN1000.
Attention: Depending on the CPU module design of your N3700 storage system,
direct connections to expansion units must be made with either Fibre Channel
copper cables or Fibre Channel optical cables, as described in “Understanding the
differences between early and current N3700 CPU modules” on page 14.
If the objective is to have as much low-cost SATA storage as possible, the
N3700 can be configured with no Fibre Channel disk drives. In order
to configure the N3700 system (the base unit and expansion units) with no Fibre
Channel disk drives, you must order two HDD load boards (FC 4020) and 12 HDD
blank fillers (FC 4099).
Table 7 describes the maximum supported total physical storage capacity for the
N3700 Model A10 and Model A20:
Table 7. N3700 raw storage capacity

Disk enclosure  Disk drive storage capacity       Maximum enclosures  Maximum disk drives (including 14 in N3700)  Maximum physical capacity
EXN4000         144-GB Fibre Channel disk drives  3                   56                                           8 TB
EXN4000         300-GB Fibre Channel disk drives  3                   56                                           16.8 TB
EXN2000         72-GB Fibre Channel disk drives   3                   56                                           4 TB
EXN2000         144-GB Fibre Channel disk drives  3                   56                                           8 TB
EXN2000         300-GB Fibre Channel disk drives  3                   56                                           16.8 TB
EXN1000         250-GB SATA disk drives           3                   56                                           10.5 TB (1)
EXN1000         320-GB SATA disk drives           3                   56                                           13.44 TB (1)
EXN1000         500-GB SATA disk drives           3                   56                                           14 TB (1)
EXN1000         750-GB SATA disk drives           3                   56                                           16 TB (1)
EXN1000         1 TB SATA disk drives             3                   56                                           16 TB (1)

(1) This number does not include the fourteen possible Fibre Channel disk drives
that can be installed in the N3700 base unit. When the capacity of the Fibre
Channel disk drives in the base N3700 unit is added to this number, the total
capacity must not exceed 16.8 TB.
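The 16.8 TB system-wide cap described above can be checked by adding the internal Fibre Channel drive capacity to the expansion-unit capacity. A minimal sketch (the example figures in the usage note are derived from Table 7):

```python
# System-wide raw capacity limit for the N3700 when internal Fibre
# Channel drives are combined with expansion-unit storage.
N3700_MAX_TOTAL_TB = 16.8

def total_within_limit(internal_fc_tb: float, expansion_tb: float) -> bool:
    """Internal FC drive capacity plus expansion-unit capacity must not
    exceed the 16.8 TB system maximum."""
    return internal_fc_tb + expansion_tb <= N3700_MAX_TOTAL_TB
```

For example, 14 internal 300-GB Fibre Channel drives (4.2 TB) plus 10.5 TB of EXN1000 250-GB storage totals 14.7 TB, which is within the limit.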
The power cord features for the N3700 filer are listed in Appendix B, “Power cord
list for N series storage systems,” on page 123.
The N5200 filers are designed to interoperate with products capable of data
transmission in the industry-standard iSCSI, CIFS, FCP, and NFS protocols. These
include the IBM eServer System p, System i (NFS only), System x, and System z
(NFS only) servers. Details and current information on N5200 interoperability
are available at:
www.ibm.com/systems/storage/network/interophome.html
For details about your system's configuration limits for Fibre Channel and iSCSI,
see “Configuration limits for Fibre Channel and iSCSI” on page 63.
N5200 A10
The N5200 Model A10 is designed to provide a single node filer with NFS, CIFS,
FCP and iSCSI support in a 3U filer mounted in a standard 19-inch rack.
The N5200 Model A10 does not include storage in the base chassis. The base
chassis includes:
v One Intel® 2.8 GHz Xeon® processor
v 2 GB of ECC memory
v 512 MB of non-volatile random access memory (NVRAM)
v Four integrated 10/100/1000 Ethernet ports
For the N5200 Model A10, the maximum number of all additional PCI-X adapters is
three.
Note: The PCI-X adapters supported by the N5200 and N5500 storage systems
are described in Appendix F, “Optional adapter cards supported by N5200
and N5500 systems,” on page 149.
The Model A10 can have a maximum of four dual-path (redundant) Fibre Channel
storage loops for attaching storage expansion units (EXN1000, EXN2000, and
EXN4000). On the Model A10, in order to have the maximum number of dual-path
Fibre Channel storage loops, two additional dual-port Fibre Channel HBAs for Disk
Attachment (FC 1004) are needed. The four onboard 2-Gbps Fibre Channel ports
can be configured as either FCP Initiators for attaching disk storage expansion units
(EXN1000, EXN2000, or EXN4000) or as FCP targets for attaching to Fibre
Channel application hosts (attached either directly or through a Fibre Channel
SAN). If some of the onboard Fibre Channel ports are configured as FCP targets,
then more than two dual-port Fibre Channel HBAs for Disk Attachment (FC 1004)
are required to reach the maximum of four dual-path Fibre Channel storage loops
for disk storage expansion units.
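The port arithmetic above can be sketched as follows: each dual-path loop consumes two Fibre Channel ports, so the number of onboard ports left configured as initiators determines how many dual-port FC 1004 HBAs are needed to reach four loops. The helper name and ceiling-division approach are illustrative assumptions, not from this guide:

```python
import math

PORTS_PER_LOOP = 2   # each dual-path (redundant) loop uses two FC ports
ONBOARD_PORTS = 4    # onboard 2-Gbps Fibre Channel ports on the Model A10
HBA_PORTS = 2        # FC 1004 is a dual-port HBA

# hbas_needed is an illustrative helper name, not a term from this guide.
def hbas_needed(target_loops, onboard_as_targets):
    """Dual-port HBAs required for target_loops dual-path loops when
    some onboard ports are reconfigured as FCP targets."""
    initiator_ports = ONBOARD_PORTS - onboard_as_targets
    missing = target_loops * PORTS_PER_LOOP - initiator_ports
    return max(0, math.ceil(missing / HBA_PORTS))

print(hbas_needed(4, 0))  # 2: the two FC 1004 HBAs cited above
print(hbas_needed(4, 2))  # 3: more HBAs once onboard ports become targets
```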
The Model A10 can be upgraded to a maximum of ten 10/100/1000 Ethernet ports
via the addition of three optional dual-port fiber Gigabit Ethernet Network
Interface Cards (NICs) (feature number 1003), or to a maximum of 16
10/100/1000 Ethernet ports via the addition of three optional quad-port copper
Gigabit Ethernet NICs (feature number 1007).
The Model A10 may be upgraded to a Model A20. The upgrade from a Model A10
to a Model A20 is a disruptive upgrade.
N5200 A20
The N5200 Model A20 is designed to provide identical function as the N5200 Model
A10, but with the addition of a second processing node and the Clustered Failover
(CFO) software feature. The Model A20 consists of two processing nodes that are
designed to provide failover and failback function, helping improve overall
availability. For the Model A20, each processing node is a 3U rack-mountable filer.
Therefore, the Model A20 occupies a total of 6U of rack space.
For the N5200 Model A20, the maximum number of all additional PCI-X adapters is
six.
Note: The PCI-X adapters supported by the N5200 and N5500 storage systems
are described in Appendix F, “Optional adapter cards supported by N5200
and N5500 systems,” on page 149.
The Model A20 can have a maximum of eight dual-path (redundant) Fibre Channel
storage loops for attaching storage expansion units (EXN1000, EXN2000, and
EXN4000). On the Model A20, in order to have the maximum number of dual-path
Fibre Channel storage loops, four additional dual-port Fibre Channel HBAs for Disk
Attachment (FC 1004), two per node, are needed. The onboard 2-Gbps Fibre
Channel ports can be configured as either FCP Initiators for attaching disk storage
expansion units (EXN1000, EXN2000, or EXN4000) or as FCP targets for attaching
to Fibre Channel application hosts (attached either directly or through a Fibre
Channel SAN). If some of the onboard Fibre Channel ports are configured as FCP
targets, then more than four dual-port Fibre Channel HBAs for Disk Attachment (FC
1004) are required to reach the maximum of eight dual-path Fibre Channel storage
loops for disk storage expansion units.
The physical proximity of the two processing nodes within a Model A20 (with
respect to each other) is determined by which InfiniBand cluster interconnect cables
are ordered (feature numbers 1037, 1038, 1039, 1040 and 1041). Optical cables
(feature numbers 1040 and 1041) also require feature number 1042.
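The cable-ordering rule above reduces to a simple dependency check: optical interconnect cables (features 1040 and 1041) are valid only when feature 1042 is also on the order. A hedged sketch of that validation (the function name is an assumption, not from the ordering system):

```python
OPTICAL_CABLE_FEATURES = {1040, 1041}  # optical InfiniBand cable features
OPTICAL_PREREQ = 1042                  # required alongside optical cables

# validate_interconnect_order is an illustrative helper name.
def validate_interconnect_order(features):
    """Return True if the cluster interconnect cable selection satisfies
    the optical-cable prerequisite described above."""
    if OPTICAL_CABLE_FEATURES & set(features):
        return OPTICAL_PREREQ in features
    return True

print(validate_interconnect_order([1037]))        # True: copper, no prereq
print(validate_interconnect_order([1040]))        # False: feature 1042 missing
print(validate_interconnect_order([1040, 1042]))  # True
```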
Within a single EXN1000, EXN2000, or EXN4000 expansion unit, all disk drives
must be of a particular type (rotational speed/capacity). Although the original order
for expansion units may contain expansion units with no more than two different
types (rotational speed/capacity) of disk drives, later upgrades to add additional
expansion units do not have to meet this requirement.
The maximum raw storage capacity of the N5200 system varies depending on the
type of disk storage expansion unit (SATA or Fibre Channel) and the capacity of
the disk drives used. Table 10 on page 20 describes the maximum supported total
physical storage capacity for the N5200.
Dual-path Fibre Channel cabling is supported for the N5200 filer. Dual-path Fibre
Channel cabling is designed to improve reliability, availability and serviceability of
the expansion units attached to the storage controller by creating two redundant
paths from each storage controller to each loop of the expansion units. For more
information about using dual-path Fibre Channel cabling, see the Installation and
Setup Instructions that came with your system.
The power cord features for the N5200 filer are listed in Appendix B, “Power cord
list for N series storage systems,” on page 123.
The N5200 gateways are designed to interoperate with products capable of data
transmission in the industry-standard iSCSI, CIFS, FCP and NFS protocols. These
include the IBM eServer System p, System i (NFS only), System x, and System z
(NFS only) servers. Details and current information on N5200 interoperability
are available at:
www.ibm.com/systems/storage/network/interophome.html
For details about your system's configuration limits for Fibre Channel and iSCSI,
see “Configuration limits for Fibre Channel and iSCSI” on page 63.
For more information about planning for your N series gateway system, refer to the
IBM System Storage N series Gateway Planning Guide for your version of Data
ONTAP.
N5200 G10
The N5200 Model G10 is designed to provide a single node gateway with NFS,
CIFS, FCP and iSCSI support in a 3U gateway mounted in a standard 19-inch rack.
The N5200 Model G10 gateway does not include storage in the base chassis. The
base chassis includes:
v One Intel 2.8 GHz Xeon processor
v 2 GB of ECC memory
v 512 MB of non-volatile random access memory (NVRAM)
v Four integrated 10/100/1000 Ethernet ports
v Four integrated 2-Gbps Fibre Channel ports that can be configured as targets or
initiators
v Redundant hot-plug integrated power supplies with fans
v Redundant cooling fans
v Three PCI-X expansion slots for additional Fibre Channel Host Bus Adapters
(HBAs) or Gigabit Ethernet Network Interface Cards (NICs)
v Front LCD message display
For the N5200 Model G10, the maximum number of all additional PCI-X adapters is
three.
Note: The PCI-X adapters supported by the N5200 and N5500 storage systems
are described in Appendix F, “Optional adapter cards supported by N5200
and N5500 systems,” on page 149.
The onboard 2-Gbps Fibre Channel ports can be configured as either FCP initiators
for attaching to SAN storage or as FCP targets for attaching to Fibre Channel
application hosts (attached either directly or through a Fibre Channel SAN).
The Model G10 can be upgraded to a maximum of ten 10/100/1000 Ethernet ports
via the addition of three optional dual-port fiber Gigabit Ethernet Network Interface
N5200 G20
The N5200 Model G20 is designed to provide identical function as the N5200 Model
G10, but with the addition of a second processing node and the Clustered Failover
(CFO) software feature. The Model G20 consists of two processing nodes that are
designed to provide failover and failback function, helping improve overall
availability. For the Model G20, each processing node is a 3U rack-mountable
gateway. Therefore, the Model G20 occupies a total of 6U of rack space.
For the N5200 Model G20, the maximum number of all additional PCI-X adapters is
six.
Note: The PCI-X adapters supported by the N5200 and N5500 storage systems
are described in Appendix F, “Optional adapter cards supported by N5200
and N5500 systems,” on page 149.
The onboard 2-Gbps Fibre Channel ports can be configured as either FCP initiators
for attaching to SAN storage or as FCP targets for attaching to Fibre Channel
application hosts (attached either directly or through a Fibre Channel SAN).
The physical proximity of the two processing nodes within a Model G20 (with
respect to each other) is determined by which InfiniBand cluster interconnect cables
are ordered (feature numbers 1037, 1038, 1039, 1040 and 1041). Optical cables
(feature numbers 1040 and 1041) also require feature number 1042.
See the Interoperability Matrix at the following Web site for supported devices
for your N5000 series gateway system.
www.ibm.com/systems/storage/network/interophome.html
Refer to the documentation for your external storage for additional information.
The power cord features for the N5200 gateway are listed in Appendix B, “Power
cord list for N series storage systems,” on page 123.
The N5300 filers are designed to interoperate with products capable of data
transmission in the industry-standard iSCSI, CIFS, FCP and NFS protocols. These
include the IBM eServer System p, System i (NFS only), System x, and System z
(NFS only) servers. Details and current information on N5300 interoperability
are available at:
www.ibm.com/systems/storage/network/interophome.html
For details about your system's configuration limits for Fibre Channel and iSCSI,
see “Configuration limits for Fibre Channel and iSCSI” on page 63.
N5300 A10
The N5300 Model A10 is designed to provide a single-node filer with NFS, CIFS,
FCP, and iSCSI support in a 3U filer mounted in a standard 19-inch rack. The
N5300 Model A10 does not include disk storage in the base chassis. This base
chassis includes:
v Two 2.4 GHz 64-bit processors
v 4 GB of ECC memory
v 512 MB of non-volatile random access memory (NVRAM)
v Four integrated 10/100/1000 Ethernet ports
v Four integrated 4-Gbps Fibre Channel ports that can be configured as targets or
initiators
v Redundant hot-plug integrated power supplies with fans
v Redundant cooling fans
v Three PCI-Express (PCIe) expansion slots for additional Fibre Channel Host Bus
Adapters (HBAs) or Gigabit Ethernet Network Interface Cards (NICs)
v One integrated Remote LAN Management (RLM) port
v One serial console port
v Front LCD message display
All the adapter card slots of the N5300 are PCI-Express (PCIe) slots. For the
N5300 Model A10, the maximum number of all additional PCIe adapters is three. A
fourth expansion slot is used for the standard (included with the N5300) 512 MB
NVRAM adapter card.
Note: The PCIe adapters supported by the N5300 storage system are described in
Appendix G, “Optional adapter cards supported by N5300 and N5600
systems,” on page 157.
The Model A10 can be upgraded to a maximum of 16 Gigabit Ethernet ports via
the addition of three optional quad-port copper NICs (feature number 1022 or
1023).
The Model A10 may be upgraded to an N5300 Model A20. The upgrade from a
Model A10 to a Model A20 is a disruptive upgrade.
N5300 A20
The N5300 Model A20 is designed to provide identical function as the Model A10,
but with the addition of a second processing node and the Clustered Failover (CFO)
software feature. The Model A20 consists of two processing nodes that are
designed to provide takeover and failback function, helping improve overall
availability. For the Model A20, each processing node is a 3U rack-mountable filer.
Therefore, the N5300 Model A20 occupies a total of 6U of rack space.
All the adapter card slots of the N5300 are PCI-Express (PCIe) slots. For the
N5300 Model A20, the maximum number of all additional PCIe adapters is six.
Note: The PCIe adapters supported by the N5300 storage system are described in
Appendix G, “Optional adapter cards supported by N5300 and N5600
systems,” on page 157.
The Model A20 can be upgraded to a maximum of ten dual-path Fibre Channel
loops (20 4-Gbps Fibre Channel ports) via the addition of six optional Fibre
Channel HBAs for Disk Attachment (feature number 1014). The ten loops will
support a maximum of 252 total disk drives.
The Model A20 can be upgraded to a maximum of 32 Gigabit Ethernet ports via the
addition of six optional quad-port copper Gigabit Ethernet NICs (feature number
1022 or 1023).
Within a single EXN1000, EXN2000, or EXN4000 expansion unit, all disk drives
must be of a particular type (rotational speed/capacity). Although the original order
for expansion units may contain expansion units with no more than two different
types (rotational speed/capacity) of disk drives, later upgrades to add additional
expansion units do not have to meet this requirement.
The maximum physical storage capacity of the N5300 system varies depending on
the type of disk storage expansion unit (SATA or Fibre Channel) and the capacity of
disk drives used. Table 15 describes the maximum supported total physical storage
capacity for the N5300.
Table 15. N5300 raw storage capacity

Disk enclosure   Disk drive storage capacity        Maximum      Maximum      Maximum physical
                                                    enclosures   disk drives  capacity
EXN1000          250-GB SATA disk drives            24           336          84 TB
EXN1000          500-GB SATA disk drives            24           336          168 TB
EXN1000          750-GB SATA disk drives            24           336          252 TB
EXN1000          1 TB SATA disk drives              24           336          336 TB
EXN2000          144-GB Fibre Channel disk drives   24           336          48.384 TB
EXN2000          300-GB Fibre Channel disk drives   24           336          100.8 TB
EXN4000          144-GB Fibre Channel disk drives   24           336          48.384 TB
EXN4000          300-GB Fibre Channel disk drives   24           336          100.8 TB
Dual-path Fibre Channel cabling is supported for the N5300 filer. Dual-path Fibre
Channel cabling is designed to improve reliability, availability and serviceability of
the expansion units attached to the storage controller by creating two redundant
The power cord features for the N5300 filer are listed in Appendix B, “Power cord
list for N series storage systems,” on page 123.
The Model G10 is designed to provide a single node gateway with NFS, CIFS, FCP
and iSCSI support in a 3U gateway mounted in a standard 19-inch rack. The Model
G20 provides an active/active dual-node base unit.
The N5300 gateways are designed to interoperate with products capable of data
transmission in the industry-standard iSCSI, CIFS, FCP and NFS protocols. These
include the IBM eServer System p, System i (NFS only), System x, and System z
(NFS only) servers. Details and current information on N5300 gateway
interoperability are available at:
www.ibm.com/systems/storage/network/interophome.html
For details about your system's configuration limits for Fibre Channel and iSCSI,
see “Configuration limits for Fibre Channel and iSCSI” on page 63.
For more information about planning for your N series gateway system, refer to the
IBM System Storage N series Gateway Planning Guide for your version of Data
ONTAP.
Note: The PCIe adapters supported by the N5300 and N5600 storage systems are
described in Appendix G, “Optional adapter cards supported by N5300 and
N5600 systems,” on page 157.
The Model G10 may be upgraded to a Model G20. The upgrade from a Model G10
to a Model G20 is a disruptive upgrade.
All the adapter card slots of the N5300 are PCI-Express (PCIe) slots. The maximum
number of adapters that may be added to the N5300 Model G20 is six. When
adapter cards are ordered for the Model G20 on the initial order, they must be
ordered and added in pairs, one per node, so that both nodes are populated with
the same number of each type of PCIe adapters.
Note: The PCIe adapters supported by the N5300 and N5600 storage systems are
described in Appendix G, “Optional adapter cards supported by N5300 and
N5600 systems,” on page 157.
The physical proximity of the two processing nodes within a Model G20 (with
respect to each other) is determined by which InfiniBand cluster interconnect cables
are ordered (feature numbers 1037, 1038, 1039, 1040 and 1041). Optical cables
(feature numbers 1040 and 1041) also require feature number 1042.
See the Interoperability Matrix at the following Web site for supported devices for
your N5000 series gateway system.
www.ibm.com/systems/storage/network/interophome.html
Refer to the documentation for your external storage for additional information.
The power cord features for the N5300 gateway are listed in Appendix B, “Power
cord list for N series storage systems,” on page 123.
The N5500 filers are designed to interoperate with products capable of data
transmission in the industry-standard iSCSI, CIFS, FCP and NFS protocols. These
include the IBM Eserver System p, System i (NFS only), System x and System z
(NFS only) servers. Details and current information on N5500 interoperability is
available at:
www.ibm.com/systems/storage/network/interophome.html
For details about your system's configuration limits for Fibre Channel and iSCSI,
see “Configuration limits for Fibre Channel and iSCSI” on page 63.
N5500 A10
The N5500 Model A10 is designed to provide a single-node filer with NFS, CIFS,
FCP, and iSCSI support in a 3U filer mounted in a standard 19-inch rack. The
N5500 filer does not include disk storage in the base chassis. This base chassis
includes:
v Two Intel 2.8 GHz Xeon processors
v 4 GB of ECC memory
v 512 MB of non-volatile random access memory (NVRAM)
v Four integrated 10/100/1000 Ethernet ports
v Four integrated 2-Gbps Fibre Channel ports that can be configured as targets or
initiators
v Redundant hot-plug integrated power supplies with fans
v Redundant cooling fans
v Three PCI-X expansion slots for additional Fibre Channel Host Bus Adapters
(HBAs) or Gigabit Ethernet Network Interface Cards (NICs)
v Front LCD message display
For the N5500 Model A10, the maximum number of all additional PCI-X adapters is
three.
Note: The PCI-X adapters supported by the N5200 and N5500 storage systems
are described in Appendix F, “Optional adapter cards supported by N5200
and N5500 systems,” on page 149.
The N5500 Model A10 can have a maximum of four dual-path (redundant) Fibre
Channel storage loops for attaching storage expansion units (EXN1000, EXN2000,
and EXN4000). On the Model A10, in order to have the maximum number of
dual-path Fibre Channel storage loops, two additional dual-port Fibre Channel
HBAs for Disk Attachment (FC 1004) are needed.
The N5500 Model A10 can be upgraded to a maximum of ten 10/100/1000 Ethernet
ports via the addition of three optional dual-port fiber Gigabit Ethernet
Network Interface Cards (NICs) (feature number 1003), or to a maximum of 16
10/100/1000 Ethernet ports via the addition of three optional quad-port copper
Gigabit Ethernet NICs (feature number 1007).
The Model A10 may be upgraded to an N5500 Model A20. The upgrade from a
Model A10 to a Model A20 is a disruptive upgrade.
N5500 A20
The N5500 Model A20 is designed to provide identical function as the Model A10,
but with the addition of a second processing node and the Clustered Failover (CFO)
software feature. The N5500 Model A20 consists of two processing nodes that are
designed to provide takeover and failback function, helping improve overall
availability. For the Model A20, each processing node is a 3U rack-mountable filer.
Therefore, the Model A20 occupies a total of 6U of rack space.
For the N5500 Model A20, the maximum number of all additional PCI-X adapters is
six.
Note: The PCI-X adapters supported by the N5200 and N5500 storage systems
are described in Appendix F, “Optional adapter cards supported by N5200
and N5500 systems,” on page 149.
The physical proximity of the two processing nodes within a Model A20 (with
respect to each other) is determined by which InfiniBand cluster interconnect cables
are ordered (feature numbers 1037, 1038, 1039, 1040 and 1041). Optical cables
(feature numbers 1040 and 1041) also require feature number 1042.
Within a single EXN1000, EXN2000, or EXN4000 expansion unit, all disk drives
must be of a particular type (rotational speed/capacity). Although the original order
for expansion units may contain expansion units with no more than two different
types (rotational speed/capacity) of disk drives, later upgrades to add additional
expansion units do not have to meet this requirement.
The maximum physical storage capacity of the N5500 system varies depending on
the type of disk storage expansion unit (SATA or Fibre Channel) and the capacity of
disk drives used. Table 20 describes the maximum supported total physical storage
capacity for the N5500.
Table 20. N5500 raw storage capacity

Disk enclosure   Disk drive storage capacity   Maximum      Maximum      Maximum physical
                                               enclosures   disk drives  capacity
EXN1000          250-GB SATA disk drives       24           336          84 TB
EXN1000          320-GB SATA disk drives       24           336          107.52 TB
EXN1000          500-GB SATA disk drives       24           336          168 TB
EXN1000          750-GB SATA disk drives       24           224          168 TB
EXN1000          1 TB SATA disk drives         24           168          168 TB
EXN1000 SATA storage expansion units and EXN2000 or EXN4000 Fibre Channel
storage expansion units must not share Fibre Channel loops. A maximum of six
storage expansion units (EXN1000, EXN2000, or EXN4000) are supported on a
single Fibre Channel loop.
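The six-units-per-loop limit lines up with the drive counts quoted elsewhere in this chapter. The per-enclosure drive count of 14 is not stated directly here but follows from the table figures (24 enclosures, 336 drives), so treat it as a derived assumption in this sketch:

```python
DRIVES_PER_UNIT = 14     # derived: 336 drives / 24 enclosures (Table 20)
MAX_UNITS_PER_LOOP = 6   # stated limit for a single Fibre Channel loop

# A fully populated loop therefore carries 6 x 14 = 84 disk drives.
drives_per_loop = MAX_UNITS_PER_LOOP * DRIVES_PER_UNIT
print(drives_per_loop)   # 84

# Cross-check: 24 enclosures at 14 drives each give the 336-drive maximum,
# spread across 4 fully populated loops.
assert 24 * DRIVES_PER_UNIT == 336
assert 24 // MAX_UNITS_PER_LOOP == 4
```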
Dual-path Fibre Channel cabling is supported for the N5500 filer. Dual-path Fibre
Channel cabling is designed to improve reliability, availability and serviceability of
the expansion units attached to the storage controller by creating two redundant
paths from each storage controller to each loop of the expansion units. For more
information about using dual-path Fibre Channel cabling, see the Installation and
Setup Instructions that came with your system.
The power cord features for the N5500 filer are listed in Appendix B, “Power cord
list for N series storage systems,” on page 123.
For details about your system's configuration limits for Fibre Channel and iSCSI,
see “Configuration limits for Fibre Channel and iSCSI” on page 63.
For more information about planning for your N series gateway system, refer to the
IBM System Storage N series Gateway Planning Guide for your version of Data
ONTAP.
For the N5500 Model G10, the maximum number of all additional PCI-X adapters is
three.
Note: The PCI-X adapters supported by the N5200 and N5500 storage systems
are described in Appendix F, “Optional adapter cards supported by N5200
and N5500 systems,” on page 149.
The onboard 2-Gbps Fibre Channel ports can be configured as either FCP initiators
for attaching to SAN storage or as FCP targets for attaching to Fibre Channel
application hosts (attached either directly or through a Fibre Channel SAN).
N5500 G20
The N5500 Model G20 is designed to provide identical function as the Model G10,
but with the addition of a second processing node and the Clustered Failover (CFO)
software feature. The N5500 Model G20 consists of two processing nodes that are
designed to provide takeover and failback function, helping improve overall
availability. For the Model G20, each processing node is a 3U rack-mountable filer.
Therefore, the Model G20 occupies a total of 6U of rack space.
For the N5500 Model G20, the maximum number of all additional PCI-X adapters is
six.
Note: The PCI-X adapters supported by the N5200 and N5500 storage systems
are described in Appendix F, “Optional adapter cards supported by N5200
and N5500 systems,” on page 149.
The onboard 2-Gbps Fibre Channel ports can be configured as either FCP initiators
for attaching to SAN storage or as FCP targets for attaching to Fibre Channel
application hosts (attached either directly or through a Fibre Channel SAN).
The physical proximity of the two processing nodes within a Model G20 (with
respect to each other) is determined by which InfiniBand cluster interconnect cables
are ordered (feature numbers 1037, 1038, 1039, 1040 and 1041). Optical cables
(feature numbers 1040 and 1041) also require feature number 1042.
See the Interoperability Matrix at the following Web site for supported devices for
your N5000 series gateway system.
www.ibm.com/systems/storage/network/interophome.html
Refer to the documentation for your external storage for additional information.
The power cord features for the N5500 gateway are listed in Appendix B, “Power
cord list for N series storage systems,” on page 123.
Details and current information on N5600 interoperability are available at:
www.ibm.com/systems/storage/network/interophome.html
For details about your system's configuration limits for Fibre Channel and iSCSI,
see “Configuration limits for Fibre Channel and iSCSI” on page 63.
N5600 A10
The N5600 Model A10 is designed to provide a single-node filer with NFS, CIFS,
FCP, and iSCSI support in a 3U filer mounted in a standard 19-inch rack. The
N5600 filer does not include disk storage in the base chassis. This base chassis
includes:
v Two 1.8 GHz 64-bit processors
v 8 GB of DDR-400 ECC memory
v 512 MB of non-volatile random access memory (NVRAM)
v Four integrated 10/100/1000 Ethernet ports
v Four integrated 4-Gbps Fibre Channel ports that can be configured as targets or
initiators
v Redundant hot-plug integrated power supplies with fans
v Redundant cooling fans
v Three PCI-Express (PCIe) expansion slots for additional Fibre Channel Host Bus
Adapters (HBAs) or Gigabit Ethernet Network Interface Cards (NICs)
v One integrated Remote LAN Management (RLM) port
v One serial console port
v Front LCD message display
For the N5600 Model A10, the maximum number of all additional PCIe adapters is
three. A fourth expansion slot is used for the standard (included with the N5600)
512 MB NVRAM adapter card.
Note: The PCIe adapters supported by the N5600 storage system are described in
Appendix G, “Optional adapter cards supported by N5300 and N5600
systems,” on page 157.
The N5600 Model A10 can be upgraded to a maximum of five dual-path Fibre
Channel loops (10 Fibre Channel ports) via the addition of three optional Fibre
Channel HBAs for Disk Attachment (feature number 1014). The five loops will
support a maximum of 420 total disk drives.
The Model A10 can be upgraded to a maximum of 16 Gigabit Ethernet ports via
the addition of three optional quad-port copper NICs (feature number 1022 or
1023).
The Model A10 may be upgraded to an N5600 Model A20. The upgrade from a
Model A10 to a Model A20 is a disruptive upgrade.
For the N5600 Model A20, the maximum number of all additional PCIe adapters is
six.
Note: The PCIe adapters supported by the N5600 storage system are described in
Appendix G, “Optional adapter cards supported by N5300 and N5600
systems,” on page 157.
The Model A20 can be upgraded to a maximum of ten dual-path Fibre Channel
loops (20 4-Gbps Fibre Channel ports) via the addition of six optional Fibre
Channel HBAs for Disk Attachment (feature number 1014).
The Model A20 can be upgraded to a maximum of 32 Gigabit Ethernet ports via the
addition of six optional quad-port copper Gigabit Ethernet NICs (feature number
1022 or 1023).
The physical proximity of the two processing nodes within a Model A20 (with
respect to each other) is determined by which InfiniBand cluster interconnect cables
are ordered (feature numbers 1037, 1038, 1039, 1040 and 1041). Optical cables
(feature numbers 1040 and 1041) also require feature number 1042.
Within a single EXN1000, EXN2000, or EXN4000 expansion unit, all disk drives
must be of a particular type (rotational speed/capacity). Although the original order
for expansion units may contain expansion units with no more than two different
types (rotational speed/capacity) of disk drives, later upgrades to add additional
expansion units do not have to meet this requirement.
Dual-path Fibre Channel cabling is supported for the N5600 filer. Dual-path Fibre
Channel cabling is designed to improve reliability, availability and serviceability of
the expansion units attached to the storage controller by creating two redundant
paths from each storage controller to each loop of the expansion units. For more
information about using dual-path Fibre Channel cabling, see the Installation and
Setup Instructions that came with your system.
The power cord features for the N5600 filer are listed in Appendix B, “Power cord
list for N series storage systems,” on page 123.
The N5600 Model G10 is designed to provide a single node gateway with NFS,
CIFS, FCP and iSCSI support in a 3U gateway mounted in a standard 19-inch rack.
The N5600 Model G20 provides an active/active dual-node base unit.
The N5600 gateways are designed to interoperate with products capable of data
transmission in the industry-standard iSCSI, CIFS, FCP and NFS protocols. These
include the IBM eServer System p, System i (NFS only), System x and System z
(NFS only) servers. The most current information on N5600 gateway interoperability
is available at:
www.ibm.com/systems/storage/network/interophome.html
For details about your system's configuration limits for Fibre Channel and iSCSI,
see “Configuration limits for Fibre Channel and iSCSI” on page 63.
For more information about planning for your N series gateway system, refer to the
IBM System Storage N series Gateway Planning Guide for your version of Data
ONTAP.
All the adapter card slots of the N5600 are PCI-Express (PCIe) slots. For the
N5600 Model G10, the maximum number of all additional PCIe adapters is three.
Note: The PCIe adapters supported by the N5300 and N5600 storage systems are
described in Appendix G, “Optional adapter cards supported by N5300 and
N5600 systems,” on page 157.
The Model G10 may be upgraded to a Model G20. The upgrade from a Model G10
to a Model G20 is a disruptive upgrade.
Note: The PCIe adapters supported by the N5300 and N5600 storage systems are
described in Appendix G, “Optional adapter cards supported by N5300 and
N5600 systems,” on page 157.
The physical proximity of the two processing nodes within a Model G20 (with
respect to each other) is determined by which InfiniBand cluster interconnect cables
are ordered (feature numbers 1037, 1038, 1039, 1040 and 1041). Optical cables
(feature numbers 1040 and 1041) also require feature number 1042.
See the Interoperability Matrix at the following Web site for supported devices for
your N5000 series gateway system.
www.ibm.com/systems/storage/network/interophome.html
Refer to the documentation for your external storage for additional information.
The power cord features for the N5600 gateway are listed in Appendix B, “Power
cord list for N series storage systems,” on page 123.
The IBM System Storage N7700 storage controllers are designed to interoperate
with products capable of data transmission in the industry-standard iSCSI, CIFS,
FCP and NFS protocols. These include the IBM eServer System p, System i (NFS
only), System x, and System z (NFS only) servers.
See the Interoperability Matrix at the following Web site for supported devices for
your N7000 series filer system.
www.ibm.com/systems/storage/network/interophome.html
For details about your system's configuration limits for Fibre Channel and iSCSI,
see “Configuration limits for Fibre Channel and iSCSI” on page 63.
N7700 A11
The N7700 Model A11 is designed to provide a single-node storage controller with
iSCSI support, and NFS, CIFS, and FCP support via optional features. The N7700
is a 6U storage controller that must be mounted in a standard 19-inch rack. The
N7700 storage controller does not include storage in the base chassis. The base
chassis includes:
v Two AMD 2.6 GHz 64-bit Opteron 252 processors, each with 1 MB of level 2
cache
v 16 GB of DDR-333 memory
v 512 MB of non-volatile random access memory (NVRAM)
v Six integrated Gigabit Ethernet RJ-45 ports
v Eight integrated 4-Gbps Fibre Channel ports
For the N7700 Model A11, the maximum number of additional expansion adapters
is eight (three PCI-X and five PCIe). One PCIe expansion slot is used for the
standard (included with the N7700) 512 MB NVRAM adapter card.
Note: The PCIe and PCI-X adapters supported by the N7700 and N7900 storage
systems are described in Appendix H, “Optional adapter cards supported by
N7000 series systems,” on page 163.
The N7700 Model A11 can be upgraded to a maximum of 14 dual-path (loop A and
loop B) Fibre Channel loops (28 4-Gbps Fibre Channel ports) via the addition of
four optional Fibre Channel HBAs for Disk Attachment (feature numbers 1014,
1029, or 1035). The 14 loops will support a maximum of 840 total disk drives. The
Model A11 can be upgraded to a maximum of 14 Gigabit Ethernet ports via the
addition of two optional quad-port copper Gigabit Ethernet Network Interface cards
(NICs) (feature number 1009).
The Model A11 may be upgraded to a Model A21. The upgrade from a Model A11
to a Model A21 is a disruptive upgrade.
N7700 A21
The N7700 Model A21 is designed to provide identical function to the N7700 Model
A11, but with the addition of a second processing node and the Clustered Failover
(CFO) licensed function. The N7700 Model A21 supports a maximum of 840 drives,
32 GB of DDR-333 memory, and ten backend Fibre Channel loops. The Model A21
consists of two nodes that are designed to provide failover and failback function,
helping improve overall availability. For the Model A21, each node is a 6U
rack-mountable storage controller. Therefore, the Model A21 occupies a total of 12U
of rack space.
For the N7700 Model A21, the maximum number of additional expansion adapters
is 16 (six PCI-X and 10 PCIe). Two PCIe expansion slots are used for the standard
(included with the N7700) 512 MB NVRAM adapter cards.
The N7700 Model A21 can be upgraded to a maximum of 14 dual-path (loop A and
loop B) Fibre Channel loops (28 4-Gbps Fibre Channel ports) via the addition of two
optional Fibre Channel HBAs for Disk Attachment (feature number 1014, 1029, or
1035).
The Model A21 can be upgraded to a maximum of 20 Gigabit Ethernet ports via the
addition of four optional quad-port copper Gigabit Ethernet NICs (feature number
1009).
The physical proximity of the two processing nodes within a Model A21 (with
respect to each other) is determined by which Infiniband cluster interconnect cables
are ordered (feature numbers 1037, 1038, 1039, 1040 and 1041). Optical cables
(feature numbers 1040 and 1041) also require feature number 1042.
Within a single EXN1000, EXN2000, or EXN4000 expansion unit, all disk drives
must be of a single type (rotational speed and capacity). The original order for
expansion units may contain expansion units with no more than two different disk
drive types (rotational speed/capacity); later upgrades that add expansion units
are not bound by this limit.
The maximum raw storage capacity of the N7700 system is determined only by the
number of disk drives supported.
Table 31 describes the maximum supported total physical storage capacity for the
N7700.
Table 31. N7700 raw storage capacity

Disk enclosure   Disk drive storage capacity        Maximum enclosures   Maximum disk drives   Maximum physical capacity
EXN1000          250-GB SATA disk drives            84                   840                   210 TB
EXN1000          500-GB SATA disk drives            84                   840                   420 TB
EXN1000          750-GB SATA disk drives            84                   840                   630 TB
EXN1000          1 TB SATA disk drives (1)          84                   840                   840 TB
EXN2000          72-GB Fibre Channel disk drives    84                   840                   60.48 TB
EXN2000          144-GB Fibre Channel disk drives   84                   840                   120.96 TB
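Each capacity figure in Table 31 is simply the maximum drive count multiplied by the per-drive capacity, in decimal units (1 TB = 1000 GB). The following sketch reproduces the table rows; the helper name and the decimal-unit assumption are ours, not from the document:

```python
# Reproduce the N7700 capacity rows of Table 31: 840 drives x per-drive
# capacity, using decimal units (1 TB = 1000 GB).
MAX_DRIVES = 840  # maximum disk drives for the N7700 (Table 31)

def raw_capacity_tb(drive_gb, drives=MAX_DRIVES):
    """Raw (physical) capacity in TB for a full complement of drives."""
    return drives * drive_gb / 1000

# (drive size in GB, maximum physical capacity in TB from Table 31)
for gb, expected_tb in [(250, 210), (500, 420), (750, 630),
                        (1000, 840), (72, 60.48), (144, 120.96)]:
    assert round(raw_capacity_tb(gb), 2) == expected_tb
```

The same arithmetic, with 1176 drives, yields the N7900 figures in Table 36.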
Dual-path Fibre Channel cabling is supported for the N7700 filer. Dual-path Fibre
Channel cabling is designed to improve reliability, availability and serviceability of
the expansion units attached to the storage controller by creating two redundant
paths from each storage controller to each loop of the expansion units. For more
information about using dual-path Fibre Channel cabling, see the Installation and
Setup Instructions that came with your system.
The power cord features for the N7700 filer are listed in Appendix B, “Power cord
list for N series storage systems,” on page 123.
The N7700 gateways are designed to interoperate with products capable of data
transmission in the industry-standard iSCSI, CIFS, FCP and NFS protocols. These
include the IBM eServer System p, System i (NFS only), System x and System z
(NFS only) servers. The most current information on N7700 gateway interoperability
is available at:
www.ibm.com/systems/storage/network/interophome.html
For details about your system's configuration limits for Fibre Channel and iSCSI,
see “Configuration limits for Fibre Channel and iSCSI” on page 63.
For more information about planning for your N series gateway system, refer to the
IBM System Storage N series Gateway Planning Guide for your version of Data
ONTAP.
Note: The PCI adapters supported by the N7700 gateway are described in
Appendix H, “Optional adapter cards supported by N7000 series systems,”
on page 163.
v One serial console port
v Support for a maximum of 840 LUNs
The Model G11 may be upgraded to a Model G21. The upgrade from a Model G11
to a Model G21 is a disruptive upgrade.
The physical proximity of the two processing nodes within a Model G21 (with
respect to each other) is determined by which Infiniband cluster interconnect cables
are ordered (feature numbers 1037, 1038, 1039, 1040 and 1041). Optical cables
(feature numbers 1040 and 1041) also require feature number 1042.
See the Interoperability Matrix at the following Web site for supported devices for
your N7000 series gateway system.
www.ibm.com/systems/storage/network/interophome.html
Refer to the documentation for your external storage for additional information.
The power cord features for the N7700 gateway are listed in Appendix B, “Power
cord list for N series storage systems,” on page 123.
See the Interoperability Matrix at the following Web site for supported devices for
your N7000 series filer system.
www.ibm.com/systems/storage/network/interophome.html
For details about your system's configuration limits for Fibre Channel and iSCSI,
see “Configuration limits for Fibre Channel and iSCSI” on page 63.
N7900 A11
The N7900 Model A11 is designed to provide a single-node storage controller with
iSCSI support, and NFS, CIFS, and FCP support via optional features. The N7900
Model A11 is a 6U storage controller that must be mounted in a standard 19-inch
rack. The N7900 storage controller does not include storage in the base chassis.
The base chassis includes:
v Four AMD 2.6 GHz Opteron 885 64-bit dual-core processors, each with 1 MB of
level 2 cache
v 32 GB of DDR-333 memory
v 2 GB of non-volatile random access memory (NVRAM)
v Six integrated Gigabit Ethernet RJ-45 ports
v Eight integrated 4-Gbps Fibre Channel ports
v Dual redundant hot-plug integrated power supplies and cooling fans
v Three PCI-X expansion slots
For the N7900 Model A11, the maximum number of additional expansion adapters
is eight (three PCI-X and five PCIe). One PCIe expansion slot is used for the
standard (included with the N7900) 2 GB NVRAM adapter card.
Note: The PCIe and PCI-X adapters supported by the N7700 and N7900 storage
systems are described in Appendix H, “Optional adapter cards supported by
N7000 series systems,” on page 163.
The N7900 Model A11 can be upgraded to a maximum of 14 dual-path (loop A and
loop B) Fibre Channel loops (28 4-Gbps Fibre Channel ports) via the addition of
four optional Fibre Channel HBAs for Disk Attachment (feature numbers 1014,
1029, or 1035). The 14 loops will support a maximum of 1176 total disk drives. The
Model A11 can be upgraded to a maximum of 14 Gigabit Ethernet ports via the
addition of two optional quad-port copper Gigabit Ethernet Network Interface cards
(NICs) (feature number 1009).
The Model A11 may be upgraded to a Model A21. The upgrade from a Model A11
to a Model A21 is a disruptive upgrade.
N7900 A21
The N7900 Model A21 is designed to provide identical function to the N7900 Model
A11, but with the addition of a second processing node and the Clustered Failover
(CFO) licensed function. The N7900 Model A21 also supports a maximum of 1176
drives and 14 backend Fibre Channel loops.
The Model A21 consists of two nodes that are designed to provide failover and
failback function, helping improve overall availability. For the Model A21, each node
is a 6U rack-mountable storage controller. Therefore, the Model A21 occupies a
total of 12U of rack space.
For the N7900 Model A21, the maximum number of additional expansion adapters
is 16 (six PCI-X and 10 PCIe). Two PCIe expansion slots are used for the standard
(included with the N7900) 2 GB NVRAM adapter cards.
The N7900 Model A21 can be upgraded to a maximum of 14 dual-path (loop A and
loop B) Fibre Channel loops (28 4-Gbps Fibre Channel ports) via the addition of
four optional Fibre Channel HBAs for Disk Attachment (feature numbers 1014,
1029, or 1035). The Model A21 can be upgraded to a maximum of 20 Gigabit
Ethernet ports via the addition of four optional quad-port copper Gigabit Ethernet
NICs (feature number 1009).
The physical proximity of the two processing nodes within a Model A21 (with
respect to each other) is determined by which Infiniband cluster interconnect cables
are ordered (feature numbers 1037, 1038, 1039, 1040 and 1041). Optical cables
(feature numbers 1040 and 1041) also require feature number 1042.
Within a single EXN1000, EXN2000, or EXN4000 expansion unit, all disk drives
must be of a single type (rotational speed and capacity). The original order for
expansion units may contain expansion units with no more than two different disk
drive types (rotational speed/capacity); later upgrades that add expansion units
are not bound by this limit.
The maximum raw storage capacity of the N7900 system is determined only by the
number of disk drives supported.
Table 36 describes the maximum supported total physical storage capacity for the
N7900.
Table 36. N7900 raw storage capacity

Disk enclosure   Disk drive storage capacity        Maximum enclosures   Maximum disk drives   Maximum physical capacity
EXN1000          250-GB SATA disk drives            84                   1176                  294 TB
EXN1000          500-GB SATA disk drives            84                   1176                  588 TB
EXN1000          750-GB SATA disk drives            84                   1176                  882 TB
EXN1000          1 TB SATA disk drives (1)          84                   1176                  1176 TB
EXN2000          72-GB Fibre Channel disk drives    84                   1176                  84.67 TB
EXN2000          144-GB Fibre Channel disk drives   84                   1176                  169.34 TB
EXN2000          300-GB Fibre Channel disk drives   84                   1176                  352.80 TB
Dual-path Fibre Channel cabling is supported for the N7900 filer. Dual-path Fibre
Channel cabling is designed to improve reliability, availability and serviceability of
the expansion units attached to the storage controller by creating two redundant
paths from each storage controller to each loop of the expansion units. For more
information about using dual-path Fibre Channel cabling, see the Installation and
Setup Instructions that came with your system.
The power cord features for the N7900 filer are listed in Appendix B, “Power cord
list for N series storage systems,” on page 123.
The N7900 gateways are designed to interoperate with products capable of data
transmission in the industry-standard iSCSI, CIFS, FCP and NFS protocols. These
include the IBM eServer System p, System i (NFS only), System x and System z
(NFS only) servers. The most current information on N7900 gateway interoperability
is available at:
www.ibm.com/systems/storage/network/interophome.html
For details about your system's configuration limits for Fibre Channel and iSCSI,
see “Configuration limits for Fibre Channel and iSCSI” on page 63.
For more information about planning for your N series gateway system, refer to the
IBM System Storage N series Gateway Planning Guide for your version of Data
ONTAP.
Note: The PCI adapters supported by the N7900 gateway are described in
Appendix H, “Optional adapter cards supported by N7000 series systems,”
on page 163.
v Support for a maximum of 1176 LUNs
v One serial console port
The N7900 Model G11 can be upgraded to an N7900 Model G21. The upgrade from
a Model G11 to a Model G21 is a disruptive upgrade.
Note: The PCIe adapters supported by the N7900 gateway are described in
Appendix H, “Optional adapter cards supported by N7000 series systems,”
on page 163.
v Infiniband (IB) cluster cable, which attaches to the NVRAM6 adapters and
connects the two processing nodes
v Two serial console ports
v Support for a maximum of 1176 LUNs
When adapter cards are ordered for the Model G21, they must be ordered and
added in pairs, one per node, so that both nodes are populated with the same
number of each type of PCI-X/PCIe adapters.
The physical proximity of the two processing nodes within a Model G21 (with
respect to each other) is determined by which Infiniband cluster interconnect cables
are ordered (feature numbers 1037, 1038, 1039, 1040 and 1041). Optical cables
(feature numbers 1040 and 1041) also require feature number 1042.
See the Interoperability Matrix at the following Web site for supported devices for
your N7000 series gateway system.
www.ibm.com/systems/storage/network/interophome.html
Refer to the documentation for your external storage for additional information.
The power cord features for the N7900 gateway are listed in Appendix B, “Power
cord list for N series storage systems,” on page 123.
The following list provides parameters and definitions for Fibre Channel and iSCSI
configuration limits described in the following sections.
Visible target ports per host (iSCSI)
The maximum number of target iSCSI Ethernet ports a host can see or
access on iSCSI attached controllers.
Visible target ports per host (Fibre Channel)
The maximum number of Fibre Channel adapters a host can see or access
on the attached Fibre Channel controllers.
LUNs per host
The maximum number of LUNs that can be mapped from the controllers to
a single host.
Note: The following configuration limits represent the maximum values that have
been tested. Do not use these limits as sizing guidelines.
Table 41. Host operating system configuration limits for iSCSI and Fibre Channel

Visible target ports per host
    Windows: 16; Linux®: 16; HP-UX: 16; Solaris: 16; AIX®: 16; VMware: 16

LUNs per host
    Windows: 64 (Windows 2000) or 128 (Windows 2003); Linux: 128; HP-UX: 512;
    Solaris: 512; AIX: 128; VMware: 128 (2.x) or 256 (3.x)

Paths per LUN
    Windows: 8; Linux: 4; HP-UX: 8 (more are possible, but pvlinks will only
    utilize 8); Solaris: 16; AIX: 16; VMware: 4 (2.x) or 8 (3.x)

Maximum LUN size
    Windows: 2 TB, or 12 TB with Windows 2003 and the appropriate patches;
    Linux: 2 TB; HP-UX: 2 TB; Solaris: 1023 GB, or 12 TB with Solaris 9, VxVM,
    EFI, or later; AIX: 1 TB, or 12 TB with AIX 5.2 ML7 or later and AIX 5.3
    ML3 or later; VMware: 2 TB
    Note: The maximum LUN sizes are due to restrictions at the operating
    system level, not the storage system level.
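The per-host limits in Table 41 lend themselves to a simple planning check. The sketch below is illustrative only: the limit values are hand-copied from the table for three of the operating systems, and the dictionary and function names are ours, not part of any IBM tool:

```python
# Illustrative check of a planned host configuration against Table 41 limits.
# Limit values transcribed from the table; names are our own.
HOST_LIMITS = {
    "Windows 2003": {"luns_per_host": 128, "paths_per_lun": 8},
    "Linux":        {"luns_per_host": 128, "paths_per_lun": 4},
    "AIX":          {"luns_per_host": 128, "paths_per_lun": 16},
}

def check_host_plan(os_name, luns, paths_per_lun):
    """Return a list of limit violations for a planned host configuration."""
    limits = HOST_LIMITS[os_name]
    problems = []
    if luns > limits["luns_per_host"]:
        problems.append(f"{luns} LUNs exceeds limit of {limits['luns_per_host']}")
    if paths_per_lun > limits["paths_per_lun"]:
        problems.append(f"{paths_per_lun} paths/LUN exceeds limit of "
                        f"{limits['paths_per_lun']}")
    return problems

assert check_host_plan("Linux", luns=64, paths_per_lun=4) == []
assert len(check_host_plan("Linux", luns=256, paths_per_lun=8)) == 2
```

As the note above warns, these are tested maximums, not sizing guidelines, so a plan well under the limits is still preferable.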
Note: The following configuration limits represent the maximum values that have
been tested. Do not use these limits as sizing guidelines.
Table 43. Configuration limits for active/active N7000 series and N5000 series storage
systems

                                        N5200     N5300 and N5500   N5600     N7700     N7900
Parameter                               A20/G20   A20/G20           A20/G20   A21/G21   A21/G21
Maximum number of LUNs per controller   1024      1024              2048      2048      2048
Maximum number of LUNs per volume       1024      1024              2048      2048      2048
Note: The following configuration limits represent the maximum values that have
been tested. Do not use these limits as sizing guidelines.
Table 44. Configuration limits for single-controller N3300 and N3600 systems

Parameter                                                N3300 A10   N3600 A10
Maximum number of LUNs per controller                    1024        1024
Maximum number of LUNs per volume                        1024        1024
Maximum port fan-in                                      16          16
Maximum connected hosts per controller (Fibre Channel)   24          32
Maximum connected hosts per controller (iSCSI)           24          32
Maximum number of igroups per controller                 256         256
Note: The following configuration limits represent the maximum values that have
been tested. Do not use these limits as sizing guidelines.
Table 45. Configuration limits for active/active N3300 and N3600 systems

Parameter                                                                N3300 A20   N3600 A20
Maximum number of LUNs per active/active storage system                  1024        1024
Maximum number of LUNs per volume                                        1024        1024
Maximum port fan-in                                                      16          16
Maximum connected hosts per active/active configuration (Fibre Channel)  24          32
Maximum connected hosts per active/active configuration (iSCSI)          24          32
Maximum number of igroups per active/active configuration                256         256
Maximum number of initiators per igroup                                  256         256
Maximum number of LUN mappings per active/active configuration           4096        4096
Maximum length of LUN path name                                          255         255
Maximum LUN size                                                         12 TB       12 TB
Maximum Fibre Channel queue depth available per port                     737         737
Maximum Fibre Channel target ports per active/active configuration       4           4
Note: The following configuration limits represent the maximum values that have
been tested. Do not use these limits as sizing guidelines.
Table 47. Configuration limits for active/active N3700 systems

Parameter                                                                  N3700 A20
Maximum number of LUNs per active/active storage system                    1024
Maximum number of LUNs per volume                                          1024
Maximum port fan-in                                                        16
Maximum connected hosts per active/active storage system (Fibre Channel)   16
Maximum connected hosts per active/active storage system (iSCSI)           32
Maximum number of igroups per active/active storage system                 256
Maximum number of initiators per igroup                                    256
Maximum number of LUN mappings per controller                              4096
Maximum length of LUN path name                                            255
Maximum LUN size                                                           6 TB
Maximum Fibre Channel queue depth available per port                       491
Maximum Fibre Channel target ports per active/active storage system        2
The EXN1000 SATA expansion unit is shipped with a minimum of five SATA disk
drives, up to a maximum of 14 disk drives (all of the same capacity), in 250-GB,
320-GB, 500-GB, 750-GB, or 1 TB physical capacities.
v With 250-GB SATA hard disk drives, each EXN1000 provides a maximum of 3.5
TB of physical storage capacity.
v With 320-GB SATA hard disk drives, each EXN1000 provides a maximum of 4.48
TB of physical storage capacity.
v With 500-GB SATA hard disk drives, each EXN1000 provides a maximum of 7
TB of physical storage capacity.
v With 750-GB SATA hard disk drives, each EXN1000 provides a maximum of 10.5
TB of physical storage capacity.
v With 1 TB SATA hard disk drives, each EXN1000 provides a maximum of 14 TB
of physical storage capacity.
Note: While the initial order does not have to contain 14 drives, at least five
drives must be installed in the EXN1000 enclosure, with drive blank covers
(feature 4099) covering the remaining drive bays.
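The per-enclosure capacities in the list above are the 14-drive maximum multiplied by the drive size, in decimal units (1 TB = 1000 GB). A quick check, with names of our own choosing:

```python
# Verify the EXN1000 per-enclosure capacities: 14 drives x drive capacity,
# in decimal units (1 TB = 1000 GB).
DRIVES_PER_EXN1000 = 14  # maximum disk drives per EXN1000 enclosure

# (drive size in GB, stated maximum physical capacity in TB)
for drive_gb, expected_tb in [(250, 3.5), (320, 4.48), (500, 7.0),
                              (750, 10.5), (1000, 14.0)]:
    capacity_tb = DRIVES_PER_EXN1000 * drive_gb / 1000
    assert round(capacity_tb, 2) == expected_tb
```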
Attention: 320-GB SATA hard disk drives are not supported for use in EXN1000s
when used in N7700 and N7900 filer configurations.
The power cord features for the EXN1000 are listed in Appendix B, “Power cord list
for N series storage systems,” on page 123.
The EXN2000 Fibre Channel expansion unit is shipped with a minimum of four
Fibre Channel disk drives, up to a maximum of 14 disk drives (all of the same
capacity). Disk drive options for the EXN2000 include 72 GB (10,000 or 15,000
rpm), 144 GB (10,000 or 15,000 rpm), and 300 GB (10,000 rpm).
Note: While the initial order does not have to contain 14 drives, at least four
drives must be installed in the EXN2000 enclosure, with drive blank covers
(feature 4099) covering the remaining drive bays.
The power cord features for the EXN2000 are listed in Appendix B, “Power cord list
for N series storage systems,” on page 123.
The EXN4000 is designed to provide 1-Gbps, 2-Gbps, and 4-Gbps Fibre Channel
disk expansion for the IBM System Storage N series storage controllers. EXN4000s
attached to a 2-Gbps loop must be manually configured for 2-Gbps speed.
Note: The EXN4000 4-Gbps support requires that the N series filer be running
Data ONTAP 7.2.1 or later.
The EXN4000 Fibre Channel expansion unit is shipped with a minimum of four
Fibre Channel disk drives, up to a maximum of 14 disk drives (all of the same
capacity). Disk drive options for the EXN4000 include 144 GB (10,000 or 15,000
rpm) and 300 GB (10,000 rpm or 15,000 rpm).
Note: While the initial order does not have to contain 14 drives, at least four
drives must be installed in the EXN4000 enclosure, with drive blank covers
(feature 4099) covering the remaining drive bays.
The power cord features for the EXN4000 are listed in Appendix B, “Power cord list
for N series storage systems,” on page 123.
Other racks may be used, provided they allow the clearances specified in
Appendix D, “Specifications for IBM and non-IBM racks,” on page 133. N series
storage systems and expansion units do not mount in all IBM racks. It is important
to check clearances when using any rack other than the IBM 7014 or IBM 2101.
Following are descriptions of the supported software protocols for the N series
storage system.
CIFS CIFS allows Microsoft® Windows servers and clients to access data over the
IP network using CIFS file system protocols. Microsoft Windows client
access licenses (CALs) are not required.
CIFS supports an active directory environment.
To enable CIFS support, you must purchase the appropriate feature code.
See “IBM N series host software features” on page 83.
NFS NFS allows UNIX and Linux servers and clients to access data over an IP
network using NFS file system protocols.
To enable NFS support, you must purchase the appropriate feature code.
See “IBM N series host software features” on page 83.
iSCSI iSCSI allows transfer of data between storage and servers in block I/O
formats (iSCSI protocol) across an IP network. iSCSI enables the creation
of IP SANs for optimizing the transfer of database traffic in IP environments.
To enable iSCSI, you must purchase the appropriate feature code. See
“IBM N series host software features” on page 83.
FCP FCP allows transfer of data between storage and servers in block I/O
formats (FCP protocol) across a Fibre Channel network.
To enable FCP, you must purchase the appropriate feature code. See
“IBM N series host software features” on page 83.
Following are descriptions of some of the Data ONTAP software packages available
for the N series storage system as feature codes.
Advanced Single Instance Storage
This feature provides block-level deduplication within the entire flexible
volume on an IBM N series storage controller that has the NearStore
function feature enabled. Advanced Single Instance Storage stores only
unique data blocks in the flexible volume and creates a small amount of
additional metadata in the process. Each block of data has a digital
"signature" that is compared to all other signatures in the flexible volume. If
an exact byte-for-byte block match exists on the flexible volume, the
duplicate block is discarded and its disk space is reclaimed.
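The mechanism described above can be illustrated with a minimal sketch: hash each block to form a signature, and on a signature match confirm byte-for-byte equality before discarding the duplicate. This is our illustration only, not Data ONTAP's implementation; SHA-256 and the 4 KB block size are assumptions:

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative block size, not the product's internal value

def deduplicate(blocks):
    """Return (unique_blocks, refs): each input block becomes an index into
    unique_blocks, so byte-identical blocks are stored only once."""
    unique_blocks = []   # data actually kept
    by_signature = {}    # digital "signature" -> index into unique_blocks
    refs = []            # per-block metadata pointing at the unique copy
    for block in blocks:
        sig = hashlib.sha256(block).digest()
        idx = by_signature.get(sig)
        # On a signature match, confirm an exact byte-for-byte match before
        # discarding the duplicate (guards against hash collisions).
        if idx is not None and unique_blocks[idx] == block:
            refs.append(idx)
        else:
            by_signature[sig] = len(unique_blocks)
            refs.append(len(unique_blocks))
            unique_blocks.append(block)
    return unique_blocks, refs

blocks = [b"A" * BLOCK_SIZE, b"B" * BLOCK_SIZE, b"A" * BLOCK_SIZE]
unique, refs = deduplicate(blocks)
assert len(unique) == 2 and refs == [0, 1, 0]  # duplicate stored once
```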
CFO Installed on a pair of N series storage controllers, this feature is designed to
enable the transfer of data service from an unavailable controller to the
other controller in the cluster. It is designed to deliver a robust and highly
available data service for business-critical environments.
Disk Sanitization
Disk sanitization is the process of physically obliterating data by overwriting
disks with specified byte patterns or random data in a manner that helps
prevent recovery of current data by any known recovery methods. This
feature enables you to carry out disk sanitization by using three successive
byte-overwrite patterns per cycle and a default six cycles per operation.
Notes:
1. Disk sanitization is supported for N series filers only. Disk sanitization is
not supported for gateway models.
2. After the disk sanitization feature has been installed on an N series
storage system, it cannot be uninstalled.
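The overwrite scheme described above (three successive byte patterns per cycle, six cycles by default) can be sketched as follows. The specific pattern bytes and the use of an ordinary file are illustrative assumptions; the actual feature operates on whole disks with its own patterns:

```python
import os

def sanitize(path, patterns=(0x55, 0xAA, 0x3C), cycles=6, chunk=1 << 20):
    """Overwrite a file (standing in for a disk) with three successive byte
    patterns per cycle, for a default of six cycles per operation.
    Pattern bytes here are illustrative, not the product's actual patterns."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(cycles):
            for pattern in patterns:
                f.seek(0)
                remaining = size
                while remaining > 0:
                    n = min(chunk, remaining)
                    f.write(bytes([pattern]) * n)
                    remaining -= n
                f.flush()
                os.fsync(f.fileno())  # force each pass to stable storage

# Demo on a small scratch file standing in for a disk:
import tempfile
with tempfile.NamedTemporaryFile(delete=False) as tf:
    tf.write(b"confidential" * 64)
    scratch = tf.name
sanitize(scratch, cycles=1)
assert open(scratch, "rb").read() == bytes([0x3C]) * (12 * 64)
os.unlink(scratch)
```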
FlexClone™
FlexClone enables near-instant replication of data volumes/sets without
requiring additional storage space at the time of creation. FlexClone allows
an IT administrator to make a backup copy of a database and then modify
and test that copy without affecting the online database and without
taking the online database offline.
iSCSI Protocol
This feature enables the iSCSI Protocol licensed function, which is
designed to provide connectivity to clients that transfer data via the iSCSI
protocol.
LockVault™ Compliance
LockVault Compliance provides records retention rules and management of
unstructured data. LockVault enables IT administrators to “lock” a Snapshot
copy in a non-erasable and non-rewriteable format for compliant retention.
Nightly Snapshot backups save only changed blocks; full backup images
are preserved.
The ComplianceJournal™ tool keeps a log of changes between Snapshot
copies.
Table 51 on page 80, Table 52 on page 81, and Table 53 on page 82 list the
supported software features for all N series models.
Table 51. N series software licensed function indicators (Models 2859 to 2863)
Description 2859 A10 2859 A20 2862 A10 2862 A20 2863 A10 2863 A20
Data ONTAP 6300 6350 7001 7101 7000 7100
CIFS 6302 6352 7002 7102 7002 7102
HTTP 6303 6353 7003 7103 7003 7103
NFS 6304 6354 7004 7104 7004 7104
CFO n/a 6355 n/a 7105 n/a 7105
FlexClone 6311 6361 7011 7111 7011 7111
MultiStore 6320 6370 7020 7120 7020 7120
NearStore 6322 6372 7022 7122 n/a n/a
iSCSI Protocol 6325 6375 7025 7125 7025 7125
SnapMirror/SnapVault Bundle 6329 6379 7029 7129 7029 7129
SnapMirror 6331 6381 7031 7131 7031 7131
SnapLock Compliance n/a n/a n/a n/a 7032 7132
SnapLock Enterprise n/a n/a n/a n/a 7033 7133
SnapMover n/a 6384 n/a 7134 n/a 7134
SnapRestore 6335 6385 7035 7135 7035 7135
SnapVault Primary 6336 6386 7036 7136 7036 7136
Table 53. N series software licensed function indicators (Models 2867 to 2869)
Description 2867 A11 2867 A21 2867 G11 2867 G21 2868 A10 2868 A20 2868 G10 2868 G20 2869 A10 2869 A20 2869 G10 2869 G20
Data ONTAP 7800 7850 7900 7950 6000 6050 6100 6150 6400 6450 6200 6250
CIFS 7801 7851 7901 7951 7601 6051 7701 6151 7401 7451 7501 7551
HTTP 7802 7852 7902 7952 7602 6052 7702 6152 7402 7452 7502 7552
NFS 7803 7853 7903 7953 7603 6053 7703 6153 7403 7453 7503 7553
CFO n/a 7854 n/a 7954 n/a 6054 n/a 6154 n/a 7454 n/a 7554
FlexClone 7805 7855 7905 7955 7605 6055 7705 6155 7404 7455 7504 7555
MultiStore 7806 7856 7906 7956 7606 6056 7706 6156 7405 7456 7505 7556
SnapMirror 7807 7857 7907 7957 7607 6057 7707 6157 7406 7457 7506 7557
SnapLock Compliance 7808 7858 n/a n/a 7608 6058 n/a n/a 7407 7458 n/a n/a
SnapLock Enterprise 7809 7859 7909 7959 7609 6059 7709 6159 7408 7459 7507 7559
SnapMover n/a 7860 7910 7960 n/a 6060 7710 6160 n/a 7460 7508 7560
SnapRestore 7811 7861 7911 7961 7611 6061 7711 6161 7409 7461 7509 7561
SnapVault Primary 7812 7862 7912 7962 7612 6062 7712 6162 7410 7462 7510 7562
SnapVault Secondary 7813 7863 7913 7963 7613 6063 7713 6163 7411 7463 7511 7563
LockVault Compliance 7814 7864 n/a n/a 7614 6064 n/a n/a 7412 7464 n/a n/a
LockVault Enterprise 7815 7865 7915 7965 7615 6065 7715 6165 7413 7465 7513 7565
Hardware specifications
The following sections list the hardware specifications for the following N series
storage systems and expansion units:
v “N3300 and N3600 hardware specifications”
v “N3700 hardware specifications” on page 95
v “N5000 series system hardware specifications” on page 97
v “N7000 series hardware specifications” on page 101
v “EXN1000 hardware specifications” on page 103
v “EXN2000 and EXN4000 hardware specifications” on page 105
CAUTION:
Two people are required to lift the N3300 system during installation. Three
people are required to lift the N3600 system during installation.
Table 55. N3300 and N3600 system hardware specifications
Physical characteristics
Weight 2859-A10, 2859-A20 Active/active: 66 lbs (30 kg) full
Operating temperature maximum range 50° F to 104° F
(10° C to 40° C)
Operating temperature recommended range 68° F to 77° F
(20° C to 25° C)
Nonoperating temperature range -40° F to 140° F
(-40° C to 60° C)
Relative humidity 20 to 80%
noncondensing
Recommended operating temperature relative humidity range 40 to 55%
Maximum wet bulb temperature 28° C (82° F) (2, 3)
Maximum altitude 3050 m (10,000 ft.) (1, 4)
Acoustic level N3300 54 dBA @ 23° C
The following tables list the maximum electrical power for the N3300 and N3600
and the electrical requirements for different configurations of the N3300 and N3600
systems.
Table 57. N3300 and N3600 maximum electrical power
System Maximum electrical power
N3300 100-240 V ac, 10-4 A per node, 50-60 Hz
N3600 100-240 V ac, 12-5 A per node, 50-60 Hz
DANGER
Three people are required to lift the N3700 during installation. Do not
remove the disk drives to reduce the weight.
Operating temperature maximum range 50° F to 104° F
(10° C to 40° C)
Operating temperature recommended range 68° F to 77° F
(20° C to 25° C)
Nonoperating temperature range -40° F to 149° F
(-40° C to 65° C)
Relative humidity 10 to 90%
noncondensing
Recommended operating temperature 40 to 55%
relative humidity range
Maximum wet bulb temperature 28° C (82° F) (2, 3)
Maximum altitude 3050 m (10,000 ft.) (1, 4)
Acoustic level 56.4 dBA @ 23° C
Note: Worst-case indicates a system running with one PSU and high fan speed.
Typical indicates a system running two PSUs on two circuits.
DANGER
The weight of this part or unit is between 32 and 55 kg (70.5 and 121.2 lb).
It takes three persons to safely lift this part or unit. (C010)
Operating temperature maximum range 50° F to 104° F
(10° C to 40° C)
Operating temperature recommended range 68° F to 77° F
(20° C to 25° C)
Nonoperating temperature range -40° F to 65° F
(-40° C to 65° C)
Relative humidity 5 to 95% noncondensing
Recommended operating temperature relative humidity 40 to 55%
range
Maximum wet bulb temperature 28° C (82° F) (2, 3)
Maximum altitude 3050 m (10,000 ft.) (1, 4)
Acoustic level 54 dBA @ 23° C
In the following tables, worst-case indicates a system running with one PSU and
high fan speed. Typical indicates a system running two PSUs on two circuits.
Table 69. N5200 electrical requirements

                              100 to 120 V              200 to 240 V              -40 to -60 V
                              Worst-  Typical single     Worst-  Typical single    Worst-  Typical single
Input voltage                 case    PSU/system         case    PSU/system        case    PSU/system
Input current measured, A     3.39    1.2/2.4            1.77    0.71/1.40         8.2     2.85/5.70
Input power measured, W       336     118/236            329     115/229           328     113/226
Thermal dissipation, BTU/hr   1144    402.5/805          1122    392/783           1118    286/771
Inrush peak, A                38      37                 40      40                n/a     n/a
Maximum electrical power      10 A                       5 A                       n/a
Input power frequency, Hz     50 to 60                                             n/a
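The thermal dissipation rows in the electrical tables are the measured input power converted to BTU/hr (1 W = 3.412 BTU/hr), so the published figures can be cross-checked:

```python
# Cross-check Table 69's thermal dissipation against its worst-case input
# power using the standard conversion 1 W = 3.412 BTU/hr.
W_TO_BTU_HR = 3.412

# (worst-case input power in W, listed thermal dissipation in BTU/hr)
for watts, listed_btu_hr in [(336, 1144), (329, 1122), (328, 1118)]:
    computed = watts * W_TO_BTU_HR
    # allow ~1% slack for rounding in the published figures
    assert abs(computed - listed_btu_hr) / listed_btu_hr < 0.01
```

The same conversion reproduces the N7700 figures in Table 74 (for example, 922 W × 3.412 ≈ 3146 BTU/hr against the listed 3144).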
DANGER
The weight of this part or unit is between 32 and 55 kg (70.5 and 121.2 lb).
It takes three persons to safely lift this part or unit. (C010)
Table 73. N7000 series system physical characteristics and environmental requirements

Physical characteristics
Weight       2866-A11, 2866-G11, 2867-A11, 2867-G11: 54.8 kg (121 lb)
             2866-A21, 2866-G21, 2867-A21, 2867-G21: 109.6 kg (242 lb)
Rack units   2866-A11, 2866-G11, 2867-A11, 2867-G11: 6U
             2866-A21, 2866-G21, 2867-A21, 2867-G21: 12U
Height       2866-A11, 2866-G11, 2867-A11, 2867-G11: 263 mm (10.4 in)
             2866-A21, 2866-G21, 2867-A21, 2867-G21: 526 mm (20.8 in)
Width        446 mm (17.6 in)
Depth        695 mm (27.4 in) without cable management tray
             782 mm (30.8 in) with cable management tray

Clearance dimensions
Front-cooling       All versions   6 in. (15.2 cm)
Front-maintenance   All versions   25 in. (63.5 cm)
Rear-cooling        All versions   12 in. (30.5 cm)
Rear-maintenance    All versions   40 in. (102 cm)
In the following tables, worst-case indicates a system running with one PSU and
high fan speed. Typical indicates a system running two PSUs on two circuits.
Table 74. N7700 electrical requirements

                              100 to 120 V                 200 to 240 V
                              Worst-case   Typical single   Worst-case   Typical single
Input voltage                              PSU/system                    PSU/system
Input current measured, A     9.26         2.75/5.4         4.6          1.4/2.8
Input power measured, W       922          266/531          882          255/509
Thermal dissipation, BTU/hr   3144         906/1812         3008         869/1737
Inrush peak, A                11.6         11.2             22.8         22.8
Maximum electrical power      12 A                          6 A
Input power frequency, Hz     50 to 60
DANGER
Three people are required to lift the EXN1000 during installation.
The following tables list the characteristics and requirements for your hardware.
Table 76. EXN1000 physical characteristics and environmental requirements

Physical characteristics
Weight       With maximum number of disk drives: 77 lbs (35 kg)
             Empty: 50.6 lbs (23 kg)
Rack units   3U
Height       5.25 in. (13.3 cm)
Width        17.6 in. (44.8 cm)
Depth        20 in. (50.9 cm)

Clearance dimensions
Front-cooling       All versions   6 in. (15.3 cm)
Front-maintenance   All versions   25 in. (63.5 cm)
Rear-cooling        All versions   12 in. (30.5 cm)
Rear-maintenance    All versions   12 in. (30.5 cm)
Environmental requirements
Note: Operating at the extremes of the following environmental requirements might
increase the risk of device failure.
Operating temperature maximum range 50° F to 104° F
(10° C to 40° C)
Operating temperature recommended range 68° F to 77° F
(20° C to 25° C)
Nonoperating temperature range -40° F to 149° F
(-40° C to 65° C)
Relative humidity 10 to 90%
noncondensing
Recommended operating temperature 40 to 55%
relative humidity range
Maximum wet bulb temperature 28° C (82° F) (2, 3)
Maximum altitude 3050 m (10,000 ft.) (1, 4)
Acoustic level 56.4 dBA @ 23° C
DANGER
Three people are required to lift the EXN2000 or EXN4000 during
installation.
Table 78. EXN2000 and EXN4000 physical characteristics and environmental requirements
Physical characteristics

Weight       With maximum number of disk drives   77 lbs (35 kg)
             Empty                                50.6 lbs (23 kg)
Rack units   3U
Height       5.25 in. (13.3 cm)
Width        17.6 in. (44.8 cm)
Depth        20 in. (50.9 cm)

Clearance dimensions

Front-cooling        All versions   6 in. (15.3 cm)
Rear-cooling         All versions   12 in. (30.5 cm)
Rear-maintenance     All versions   12 in. (30.5 cm)
Environmental requirements
Note: Operating at the extremes of the following environmental requirements might
increase the risk of device failure.
Operating temperature maximum range        50° F to 104° F (10° C to 40° C)
Operating temperature recommended range    68° F to 77° F (20° C to 25° C)
Nonoperating temperature range             -40° F to 149° F (-40° C to 65° C)
Relative humidity                          10 to 90% noncondensing
Recommended operating relative humidity range  40 to 55%
Maximum wet bulb temperature (notes 2, 3)  28° C (82° F)
Maximum altitude (notes 1, 4)              3050 m (10,000 ft.)
Acoustic level                             56.4 dBA @ 23° C
Note: Worst-case indicates a system running with one PSU and high fan speed.
Typical indicates a system running two PSUs on two circuits.
Rack considerations
The three recommended racks for the N series storage system are:
v IBM 7014 Model T00 (a 36U high rack)
v IBM 7014 Model T42 (a 42U high rack)
v IBM 2101 Model N00 (a 36U high rack)
The N series storage system can also be mounted in some other IBM and non-IBM
racks, provided the rack meets all the requirements specified in Appendix D,
“Specifications for IBM and non-IBM racks,” on page 133.
When clustering N series storage systems, the physical proximity of the cluster
nodes is determined by the InfiniBand (IB) cluster interconnect cables that are
ordered. Feature codes are available for cluster cables ranging in size from 2 m to
30 m. For more information, see Chapter 2, “IBM N series hardware features,” on
page 7.
Power cords with attached plugs are provided for most ac-powered systems. The
power cords are ordered by feature code and at least one power cord must be
specified when ordering the N series storage system. A feature code contains
two cords, one for each power supply. Power cables vary by country and are listed
in Appendix B, “Power cord list for N series storage systems,” on page 123.
Specific voltage information for NAS storage systems and expansion units is
provided in the following tables:
v N3300 storage systems:
– Table 58 on page 89
– Table 59 on page 90
– Table 60 on page 91
Electrical considerations
These topics should be considered before you install a system.
Grounding
For information about grounding your N series storage systems and expansion
units, refer to the Installation and Setup Instructions for your N series products.
To ensure proper grounding, a licensed electrician should check the grounding and
receptacles for conformance with the country electrical codes.
Lightning protection
You should install lightning protection devices in environments such as these:
v An overhead power service supplies the primary power.
v The area is subject to electrical storms or equivalent-type power surges.
Three-phase power
If your rack power distribution uses three-phase power, consult a licensed
electrician to ensure that the loads are properly balanced.
Thermal considerations
When installed in a rack, all components, including the N series storage system,
should have front-to-back air flow. Failure to do so results in a thermal loop, where
the heated exhaust air from one unit is drawn into the air intake of another, which
further heats the air. Eventually a unit will shut down or fail to operate due to high
temperature.
(Figure: recommended hot-aisle/cold-aisle layout. Rack fronts face each other
across a 1220 mm cold aisle of perforated tiles or gratings; rack backs face
each other across a hot aisle, with 2440 mm between rack rows and the air
conditioner supplying cool air to the cold aisle.)
Note: You might need to prepare and analyze several plans before choosing a final
one. If you install more than one N series storage system in more than one
installation stage, prepare a separate plan for each installation stage.
Begin with an accurate drawing of the installation area (blueprints and floor plans
are appropriate). Include the following items in your floor plan:
v Service and operational clearances.
v If the N series storage system will be on a raised floor, consider any objects that
might obstruct cable routing and the height of the raised floor.
v If the N series storage system will not be on a raised floor, consider these
factors:
– Placement of cables to minimize obstruction
– Amount of additional cable required if the cable is indirectly routed between N
series storage system (for example, along the walls or suspended from the
ceiling)
v Location of:
– Power receptacles
– Air conditioning equipment and controls
– File cabinets, desks, and other office equipment
– Room emergency power-off controls
– All entrances, walkways, exits, windows, columns, and pillars
v LAN and telephone connections.
Security
In forming your floor plan, you should consider ways to keep your N series storage
system secure. For security purposes, use these precautions:
v Choose a trusted administrator.
v Place any equipment that may disrupt operations (for example, power supplies)
in a secure location.
v Keep rack cabinet keys in a secure location.
You must plan the type of cable, cable path, and cable length. Consider not only
your current needs but also your anticipated growth and the relocation of personnel.
To assist with the installation of your system, you should note cable paths on your
floor plan.
General considerations
In preparing for cabling, consider the following:
v Where applicable, electrical and physical specifications of cables that you
currently have and plan to use with the N series storage system must be
compatible with the standards mentioned in this manual. If no standard is
specifically mentioned in this manual, the standards for the interface on that
adapter must be met.
v Lengths and paths of cables.
v Communication signal cables should be installed away from power lines or other
sources of electrical interference.
v Labeling of cables and ports you currently have in order to indicate which
devices you want attached to them.
v Electrostatic discharge (ESD) considerations. In particular, unprotected patch
panels, punch blocks, or other intermediate routing or switching devices used in
cabling can allow ESD into the network.
Note: Lightning protection must be provided on any cable that travels outside of
the building in which the system or devices are installed. Contact a cabling
vendor about providing lightning protection for those cables. Fiber-optic
cables do not require lightning protection.
Cable measuring
Accurate measuring of cables is critical to a successful and efficient installation. Do
not guess or estimate your cable lengths.
Some cable lengths are fixed. For example, the InfiniBand (IB) cluster interconnect
cables are fixed length cables that restrict the physical distance of the cluster nodes
in the rack. The power cords, depending on the feature code, are also of fixed
lengths.
For the Ethernet and Fibre Channel cabling, consider the following:
1. Cabling that exits the N series storage system is typically run in a cable
management tray, included with the N series storage system.
2. Avoid sharp bends and do not route cables near rack doors to avoid crimping
cables.
3. Account for raised floor height, if appropriate.
Cable labeling
Cable labels can be used to organize your cabling. The fields on a cable label are:
Room The room number, or other information about the
physical location of the device.
Person The name of the person who uses the device.
Telephone # The nearest telephone number to the device.
Device type This could be a printer, plotter, TTY, or similar
device.
Device ID The device ID is determined when the software is
configured on the system.
Software location code The software location code is the link between the
hardware and software. This code appears in the
software configuration menus and in the hardware
diagnostic menus.
Note: One optical cable feature code order provides two cables. One SFP feature
code order provides two SFPs.
Note: N series gateways do not support the attachment of N series EXN disk
expansion units.
For the latest information on cabling requirements, see Installation and Setup
Instructions and the Hardware and Service Guide for your storage system.
Note: For information about the differences between early and late CPU module
designs, see “Understanding the differences between early and current
N3700 CPU modules” on page 14.
Note: SFP-to-SFP Fibre Channel copper cables may also be used for connections
(to a maximum of three meters) between expansion units.
You must order one cable feature code and two SFP feature codes per expansion
unit.
www.ibm.com/systems/storage/network/interophome.html
Refer to the documentation for your external storage for additional information.
AutoSupport is enabled by default with Data ONTAP 7.1.1 or later when you
configure your N series storage system for the first time.
options autosupport.enable on
AutoSupport can also be enabled through the FilerView Web browser user interface
by selecting Filer > Configure AutoSupport in the left-hand navigation frame, and
then selecting Yes from the drop-down list box next to “AutoSupport Enabled.”
Note: You can disable AutoSupport at any time using the autosupport.enable
option, but you are strongly advised to leave it enabled.
Cluster considerations
The AutoSupport notification messages from an N series storage system in a cluster
are different from the AutoSupport notification messages from a standalone N series
storage system in the following ways:
v The subject line in the AutoSupport messages from a filer in a cluster reads
“Cluster notification” instead of “System notification.”
v The AutoSupport messages from an N series storage system in a cluster contain
information about the N series storage system’s partner, such as the partner
system ID and the partner host name.
v In takeover mode, if you reboot the live N series storage system, two
AutoSupport messages are sent to notify the e-mail recipients of the reboot. The
live N series storage system sends one message; the failed N series storage
system sends the other message.
v The live N series storage system sends an AutoSupport message after it
completes the takeover process.
Note: Total AC wire length = breaker to wall or ceiling outlet + extension cable or
ceiling drop.
The following tables list the recommended conductor size for 2% voltage drop for a
particular distance in feet (taken from the Radio Engineer’s Handbook).
Table 81. 110V, single-phase recommended conductor sizes

110V, single-phase   20A circuit   30A circuit   40A circuit   50A circuit
25 feet              12 AWG        10 AWG        8 AWG         8 AWG
50 feet              8 AWG         6 AWG         6 AWG         4 AWG
75 feet              6 AWG         4 AWG         4 AWG         2 AWG
The following table lists the approximate equivalent wire gauges (American Wire
Gauge (AWG) to Harmonized Cordage).

Table 83. American Wire Gauge to Harmonized Cordage equivalents

AWG                        8      10     12
Harmonized, mm² (note 1)   4.0    2.5    1.5

Note 1: mm² = millimeter squared
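Putting the earlier note and Table 81 together, conductor selection is a two-step lookup: compute the total AC wire length, then read the gauge for that length and circuit rating. A sketch assuming the 110 V single-phase table applies (the dictionary simply transcribes Table 81; names are illustrative):

```python
# Table 81 transcribed: distance in feet -> {circuit amps: recommended AWG}.
CONDUCTOR_AWG_110V = {
    25: {20: 12, 30: 10, 40: 8, 50: 8},
    50: {20: 8, 30: 6, 40: 6, 50: 4},
    75: {20: 6, 30: 4, 40: 4, 50: 2},
}

def recommended_awg(total_wire_ft: float, circuit_amps: int) -> int:
    """Pick the smallest tabulated distance that covers the full run."""
    for dist in sorted(CONDUCTOR_AWG_110V):
        if total_wire_ft <= dist:
            return CONDUCTOR_AWG_110V[dist][circuit_amps]
    raise ValueError("run longer than tabulated; consult a licensed electrician")

# Total AC wire length = breaker to outlet + extension cable or ceiling drop.
total_ft = 30 + 10
print(recommended_awg(total_ft, 30))  # 6 AWG for a 40 ft run on a 30 A circuit
```

Note that the table holds voltage drop to 2%; longer runs or different voltages require an electrician's calculation, not an extrapolation of this lookup.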
www.ibm.com/storage/support/nas/
Weight
Base rack 244 kg 535 lbs
Full rack¹ 816 kg 1795 lbs See “T00 and T42 rack weight distribution and floor loading” on page 137.
Install/air flow Rack airflow requirements are a function of the number and type of drawers installed (see item 5 on
page 135). Refer to the individual drawer specifications.
Service 915mm (36 in.) 915mm (36 in.) 915mm (36 in.) 915mm (36 in.)
Weight
Base rack 261 kg 575 lbs
Full rack¹ 930 kg 2045 lbs See “T00 and T42 rack weight distribution and floor loading” on page
137.
Service clearance Recommended minimum vertical service clearance from floor is 2439 mm or 8 feet.
All other specifications For all other technical information, see “Model T00 rack” on page 134.
(Figure callouts: 915 mm (36 in.) service clearance at both front and rear;
2921 mm (115 in.) overall depth including clearances; rear cover thickness
20 mm (0.8 in.); front cover thickness 58 mm (2.4 in.); side cover thickness
10 mm (0.4 in.), 2x; cable opening 915 mm (36 in.) x 50 mm (2 in.), 310 mm
(12.2 in.) from the front; caster locations shown.)
Figure 5. Service clearances and caster locations for the T00 and T42 racks.
Note: Rack units are large and heavy and are not easily moved. Because
maintenance activities require access at both the front and back, extra room
needs to be allowed. The footprint shows the radius of the swinging doors on
the I/O rack. Figure 6 shows the minimum space required.
(Figure 6: two T00 or T42 racks placed side by side with a minimum
separation of 25.4 mm (1 in.).)
The following table shows the necessary floor-loading specifications for the T00 and
T42 racks when loaded.

                Floor loading
Rack            Raised kg/m2   Non-raised kg/m2   Raised lbs/ft2   Non-raised lbs/ft2
7014-T00 (4)    366.7          322.7              75               66
7014-T00 (5)    734.5          690.6              150.4            141.4
7014-T00 (6)    341            297                70               61
7014-T42 (4)    403            359                82.5             73.5
7014-T42 (5)    825            781                169              160
7014-T42 (6)    341.4          297.5              70               61
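The published figures fold in the specific weight-distribution assumptions from "T00 and T42 rack weight distribution and floor loading"; the underlying arithmetic, though, is simply weight divided by the floor area credited to the rack. A simplified sketch (the function and its parameters are illustrative, not the method used to produce the table):

```python
def floor_loading_kg_m2(rack_weight_kg: float, load_area_m2: float) -> float:
    """Rack weight spread over the floor area credited to the rack
    (footprint plus whatever share of aisle or clearance area the
    site's floor-loading methodology allows)."""
    return rack_weight_kg / load_area_m2

# A fully configured T42 (930 kg) credited with 2.5 m2 of floor area:
print(round(floor_loading_kg_m2(930, 2.5)))  # 372 kg/m2
```

The larger the clearance area a methodology lets you count, the lower the computed loading, which is why the table's raised-floor and non-raised-floor columns differ.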
Rack specifications
All racks used for N series storage system installation must conform to the
specifications in this section. Both the IBM 7014 (Model T00 and Model T42) and
the IBM 2101 Model N00 racks conform, but some other racks, including a few from
IBM, do not.
v The rack or cabinet must meet the EIA Standard EIA-310-D for 19-inch racks.
The front rack opening must be 451 mm ± 0.75 mm (17.75 in. ± 0.03 in.) wide,
and the rail-mounting holes must be 465 mm ± 0.8 mm (18.3 in. ± 0.03 in.) apart
on center (horizontal width between vertical columns of holes on the two
front-mounting flanges and on the two rear-mounting flanges). Rail-mounting
holes must be 7.1 mm ± 0.1 mm (0.28 in. ± 0.004 in.) in diameter.
(Figure: rear view without door, showing the drawer rails, a 494 mm (19.45 in.)
internal width between mounting flanges, and a 719 mm (28.31 in.)
front-to-rear flange spacing.)
The vertical distance between mounting holes must consist of sets of 3 holes
spaced (from bottom to top) 15.9 mm (0.625 in.), 15.9 mm (0.625 in.), and 12.7
mm (0.5 in.) on center (making each 3-hole set of vertical hole spacing 44.45
mm (1.75 in.) apart on center).
(Figure: EIA hole-spacing pattern repeating 15.9 mm, 15.9 mm, 12.7 mm from
bottom to top, with a minimum flange width of 6.75 mm on each side.)
Note: Refer to the sales manual for 7014 or 2101 racks if you want to use PDUs
that are designed for 7014 or 2101 racks. The customer is responsible for
ensuring that the PDU is compatible with the rack or cabinet and assumes
responsibility for any and all agency certifications required.
v The rack or cabinet must be compatible with drawer mounting rails, including a
secure and snug fit of the rail-mounting pins and screws into the rack or cabinet
rail support hole.
Note: If the rack or cabinet has square holes, a plug-in hole adapter may be
required. The plug-in hole adapters are NOT part of the N series rail
mounting kit included with every N series machine.
The rails provided with the N series storage system have been designed and
tested to safely support the weight of your drawer or device. The rails also
provide rear tie-down brackets.
The front and rear mounting flanges in the rack or cabinet must be 719 mm (28.3
in.) apart and the internal width bounded by the mounting flanges at least 494
mm (19.45 in.) for the IBM rails to fit in your rack or cabinet (see Figure 7
on page 138).
v The rack or cabinet must have stabilization feet or brackets installed both in the
front and rear of the rack, or have another means of preventing the rack or
cabinet from tipping while the drawer or device is installed or removed.
Examples of some acceptable alternatives: The rack or cabinet may be securely
bolted to the floor, ceiling or walls, or to adjacent racks or cabinets in a long and
heavy row of racks or cabinets. Refer to the Rack Installation Guide for the 7014
or 2101 and the individual drawer installation guides for additional information.
v There must be adequate front and rear service clearances (in and around the
rack or cabinet).
The rack or cabinet must have sufficient horizontal width clearance in the front
and rear to allow the drawer to be fully slid into the front and, if applicable, the
rear service access positions (typically this requires 914.4 mm (36 in.) clearance
in both the front and rear).
If present, front and rear doors must be able to open far enough to provide
unrestrained access for service or be easily removable. If doors must be
removed for service, it is the customer’s responsibility to remove them prior to
service.
v The rack or cabinet must provide adequate clearance around the rack drawer.
Note: IBM requires that mounting rails must be able to support four times the
maximum rated product weight in its worst-case position (fully extended
front and rear positions) for 1 full minute without catastrophic failure.
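The 4x requirement above translates to a simple sizing check. A minimal sketch (names are illustrative):

```python
RAIL_SAFETY_FACTOR = 4  # IBM requirement: 4x maximum rated product weight

def required_rail_rating_kg(max_product_weight_kg: float) -> float:
    """Minimum static load a rail pair must hold in its worst-case
    (fully extended) position for one full minute without failure."""
    return RAIL_SAFETY_FACTOR * max_product_weight_kg

# A 35 kg (77 lb) fully loaded expansion unit:
print(required_rail_rating_kg(35))  # rails must support 140 kg
```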
For the single-node models, the total number of PCIe adapters cannot exceed one.
For the dual-node models, the total number of PCIe adapters cannot exceed two.
For information about monitoring the LEDs for your optional adapter cards, refer to
the IBM System Storage N series Platform Monitoring Guide.
For a single-node model (2862-A10), the maximum number of this adapter is one.
For a dual-node model (2862-A20), the maximum number of this adapter is two.
For a single-node model (2862-A10), the maximum number of this adapter is one.
For a dual-node model (2862-A20), the maximum number of this adapter is two.
Dual-port 4-Gbps Fibre Channel HBA for disk attachment (FC 1014)
Feature code 1014 is a dual-port 4-Gbps Fibre Channel HBA. This adapter
auto-negotiates to 4, 2 and 1 Gbps. This adapter may only be used for attaching
"back-end" storage expansion units (EXN1000, EXN2000, and EXN4000). The
Fibre Channel ports on this adapter may not be used as FCP target ports.
This adapter has two small form factor (SFF) multi-mode optics with LC-style
connectors. This adapter supports the following maximum cable lengths.
For a single-node model (2862-A10), the maximum number of this adapter is one.
For a dual-node model (2862-A20), the maximum number of this adapter is two.
Dual-port 4-Gbps Fibre Channel HBA for tape attachment (FC 1015)
Feature code 1015 is a dual-port 4-Gbps Fibre Channel HBA for tape attachment.
This adapter auto-negotiates to 4, 2 and 1 Gbps.
This adapter has two SFF multi-mode optics with LC-style connectors. This adapter
supports the following maximum cable lengths.
Table 91. Dual-port 4-Gbps Fibre Channel HBA for tape (FC 1015) - maximum cable lengths

Link operating speed   50 micron multi-mode fibre   62.5 micron multi-mode fibre
1 Gbps                 500 meters                   300 meters
2 Gbps                 300 meters                   150 meters
4 Gbps                 150 meters                   70 meters
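The same speed/medium limits recur in the cable-length tables throughout these appendixes, so a planned cable run can be validated with a simple lookup. A sketch (the dictionary transcribes Table 91; function and names are illustrative):

```python
# (link speed in Gbps, fibre core in microns) -> maximum cable length in meters
MAX_FC_CABLE_M = {
    (1, 50.0): 500, (1, 62.5): 300,
    (2, 50.0): 300, (2, 62.5): 150,
    (4, 50.0): 150, (4, 62.5): 70,
}

def link_ok(gbps: int, core_micron: float, run_m: float) -> bool:
    """True if the planned run is within the table's maximum length."""
    return run_m <= MAX_FC_CABLE_M[(gbps, core_micron)]

print(link_ok(4, 50.0, 120))   # True: within the 150 m limit at 4 Gbps
print(link_ok(4, 62.5, 120))   # False: 62.5 micron tops out at 70 m at 4 Gbps
```

As the tables show, the limit shrinks as the link speed rises, so measure against the highest speed the link will auto-negotiate, not the slowest.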
For a single-node model (2862-A10), the maximum number of this adapter is one.
For a dual-node model (2862-A20), the maximum number of this adapter is two.
This feature code includes a 50-micron optical loopback cable with LC connectors.
For a single-node model (2862-A10), the maximum number of this adapter is one.
For a dual-node model (2862-A20), the maximum number of this adapter is two.
For a single-node model (2862-A10), the maximum number of this adapter is one.
For a dual-node model (2862-A20), the maximum number of this adapter is two.
For a single-node model (2862-A10), the maximum number of this adapter is one.
For a dual-node model (2862-A20), the maximum number of this adapter is two.
Quad-port 4-Gbps Fibre Channel HBA for disk attachment (FC 1029)
Feature code 1029 is a PCIe quad-port 4-Gbps HBA for attaching disk expansion
units (EXN1000, EXN2000, and EXN4000) to N series storage controllers. This
adapter auto-negotiates connections of 1-Gbps, 2-Gbps, or 4-Gbps. Four small form
factor (SFF) multimode optical ports with LC connectors support the following cable
lengths:
Table 92. Quad-port 4-Gbps Fibre Channel HBA for disk attachment (FC 1029) - maximum cable lengths

Link operating speed   50 micron multi-mode fibre   62.5 micron multi-mode fibre
1 Gbps                 500 meters                   300 meters
2 Gbps                 300 meters                   150 meters
4 Gbps                 150 meters                   70 meters
The ports of this adapter may not be used as FCP target ports.
For a single-node model (2862-A10), the maximum number of this adapter is one.
For a dual-node model (2862-A20), the maximum number of this adapter is two.
The following is the priority order for installing optional adapter cards into the N5200
and N5500:
1. Fibre Channel host bus adapter (HBA) cards (FC 1004, 1005, 1006, 1018,
1019, 1027, and 1034)
2. Gigabit Ethernet iSCSI target adapters (FC 1010 and 1011)
3. Ethernet Network Interface Cards (FC 1003, 1007, 1008, 1009, and 1020)
4. SCSI Dual-channel Ultra320 LVD adapter for tape attachment (FC 1016)
For information about monitoring the LEDs for your optional adapter cards, refer to
the IBM System Storage N Series Platform Monitoring Guide.
For single-node models (A10 and G10), the maximum number of this adapter is
three. For dual-node models (A20 and G20), the maximum number of this adapter
is six.
For single-node models (A10), the maximum number of this adapter is three. For
dual-node models (A20), the maximum number of this adapter is six.
For single-node models (A10), the slot priority order for installing this adapter is
slots 2, 3 and 4. For dual-node models (A20), the slot priority order for installing this
adapter is slots 2, 1 and 4.
For single-node models (A10 and G10), the maximum number of this adapter is
three. For dual-node models (A20 and G20), the maximum number of this adapter
is six.
For single-node models (A10 and G10) the slot priority order for this adapter is slots
4, 3 and 2. For dual-node models (A20 and G20) the slot priority order for this
adapter is slots 2, 1 and 4.
For single-node models (G10) the maximum number of this adapter is three. For
dual-node models (A20 and G20) the maximum number of this adapter is six.
For single-node models (G10), the slot priority order for installing this adapter is 2, 3
and 4. For dual-node models (A20 and G20), the slot priority order for installing this
adapter is 2, 1, and 4.
For single-node models (A10 and G10), the maximum number of this adapter is
two. For dual-node models (A20 and G20), the maximum number of this adapter is
four.
For single-node models (A10 and G10), the slot priority order for this adapter is
slots 3, 4 and 2. For dual-node models (A20 and G20), the slot priority order for this
adapter is slots 1, 2, and 4.
Appendix F. Optional adapter cards supported by N5200 and N5500 systems 151
Single-port 10-Gigabit Ethernet (10-GbE) adapter (optical) (FC 1008)
Feature code 1008 is a single-port 10 GbE (10GBASE-SR) fibre short-range (SR)
PCI-X adapter with a single LC duplex connector. It supports a maximum distance
of 300 meters using 850-nanometer (nm) multi-mode fibre (MMF) media.
For single-node models (A10 and G10), the maximum number of this adapter is
two. For dual-node models (A20 and G20), the maximum number of this adapter is
four.
For single-node models (A10 and G10), the slot priority order for this adapter is
slots 3, 4 and 2. For dual-node models (A20 and G20), the slot priority order for this
adapter is slots 1, 2 and 4.
This gigabit Ethernet (1000Base-T) PCI-X TOE feature provides four RJ-45
connectors. The maximum supported distance is 100 meters using Category 5, or
better, unshielded twisted pair (UTP) four-pair media.
For single-node models (A10 and G10), the maximum number of this adapter is
two. For dual-node models (A20 and G20), the maximum number of this adapter is
four.
For single-node models (A10 and G10), the slot priority order for this adapter is
slots 3, 4 and 2. For dual-node models (A20 and G20), the slot priority order for this
adapter is slots 1, 2 and 4.
For single-node models (A10 and G10), the maximum number of this adapter is
three. For dual-node models (A20 and G20), the maximum number of this adapter
is six.
For single-node models (A10 and G10), the slot priority order for installing this
adapter is slots 3, 4 and 2. For dual-node models (A20 and G20), the slot priority
order for installing this adapter is slots 1, 2 and 4.
For single-node models (A10 and G10), the maximum number of this adapter is
three. For dual-node models (A20 and G20), the maximum number of this adapter
is six.
This feature code includes two SCSI LVD two-meter cables for attaching tape
devices to this SCSI HBA. Each cable has two 68-pin VHDCI connectors, one at
each end.
For single-node models (A10 and G10) the maximum number of this adapter is
three. For dual-node models (A20 and G20) the maximum number of this adapter is
six.
For single-node models (A10 and G10), the slot priority order for installing this
adapter is slots 4, 3, and 2. For dual-node models (A20 and G20), the slot priority
order for installing this adapter is slots 2, 1 and 4.
Two SFF multi-mode optical ports with LC connectors support the following cable
lengths.
Appendix F. Optional adapter cards supported by N5200 and N5500 systems 153
Table 97. Dual-port 4-Gbps Fibre Channel target HBA (FC 1019) - maximum cable lengths

Link operating speed   50 micron multi-mode fibre   62.5 micron multi-mode fibre
1 Gbps                 500 meters                   300 meters
2 Gbps                 300 meters                   150 meters
4 Gbps                 150 meters                   70 meters
For a single-node model (A10 and G10), the maximum number of this adapter is
two. For a dual-node model (A20 and G20), the maximum number of this adapter is
four.
For single-node models, the slot priority order for installing this adapter is slots 3, 4
and 2. For dual-node models, the slot priority order for installing this adapter is slots
1, 2 and 4.
For single-node models (A10 and G10), the maximum number of this adapter is
three. For dual-node models (A20 and G20), the maximum number of this adapter
is six.
For single-node models (A10 and G10), the slot priority order for installing this
adapter is slots 3, 4 and 2. For dual-node models (A20 and G20), the slot priority
order for installing this adapter is slots 1, 2 and 4.
This adapter has four small form-factor pluggable (SFP) multi-mode optics with
LC-style connectors. It supports the following maximum cable lengths:
Table 98. Quad-port 4-Gbps Fibre Channel HBA for disk attachment (FC 1027) - maximum cable lengths

Link operating speed   50 micron multi-mode fibre   62.5 micron multi-mode fibre
1 Gbps                 500 meters                   300 meters
2 Gbps                 300 meters                   150 meters
4 Gbps                 150 meters                   70 meters
For single-node models (A10), the maximum number of this adapter is three. For
dual-node models (A20), the maximum number of this adapter is six.
For single-node models, the maximum number of this adapter is two. For dual-node
models, the maximum number of this adapter is four.
Since this adapter is a PCI-X adapter, it may be installed in any slot not occupied
by the NVRAM card. For a single node configuration, the slot priority order for this
adapter is 3 and 4. For a dual-node configuration, the slot priority order is 1 and 4.
Appendix F. Optional adapter cards supported by N5200 and N5500 systems 155
156 IBM System Storage N series: Introduction and Planning Guide
Appendix G. Optional adapter cards supported by N5300 and
N5600 systems
IBM supports the following optional PCIe adapter cards in the N5300 and N5600.
Table 99. Optional adapter cards supported by N5300 and N5600
Feature Code Feature Code Description
1012 Dual-port Gigabit Ethernet adapter (optical)
1013 Dual-port 10/100/1000 Ethernet adapter (copper)
1014 Dual-port 4-Gbps Fibre Channel HBA for disk attachment
1015 Dual-port 4-Gbps Fibre Channel HBA for tape attachment
1017 Dual-port 4-Gbps Fibre Channel target HBA
1021 Dual-port GbE iSCSI adapter (optical)
1022 Quad-port GbE Ethernet TOE adapter (copper)
1023 Quad-port GbE Ethernet adapter (copper)
1024 Dual-port Ultra320 SCSI HBA for tape attachment
1026 Dual-port GbE iSCSI target adapter (copper)
1029 Quad-port 4-Gbps Fibre Channel HBA for disk attachment
1031 Dual-port 10 GbE Ethernet adapter
1032 Dual-port MetroCluster VI HBA (A20/G20 models only)
1033 SnapMirror over Fibre Channel HBA
1035 Quad-port 4-Gbps Fibre Channel HBA for tape and disk attachment
The following is the priority order for installing optional adapter cards into the N5300
and N5600:
1. Fibre Channel host bus adapter (HBA) cards (FC 1014, 1015, 1017, 1029,
1032, 1033, and 1035)
2. Gigabit Ethernet iSCSI adapter cards (FC 1021 and 1026)
3. Ethernet Network Interface cards (FC 1012, 1013, 1022, 1023, and 1031)
4. Dual-port Ultra320 SCSI HBA for tape attachment (FC 1024)
For the single-node models, the total number of PCIe adapters cannot exceed
three. For the dual-node models the total number of PCIe adapters cannot exceed
six.
For information about monitoring the LEDs for your optional adapter cards, refer to
the IBM System Storage N Series Platform Monitoring Guide.
Since this adapter is a PCIe adapter, it may be installed in any slot not occupied by
the NVRAM card. For a single node configuration, the slot priority order for this
adapter is 3, 4, 2. For a dual-node configuration, the slot priority order is 1, 2, 4.
Dual-port 4-Gbps Fibre Channel HBA for disk attachment (FC 1014)
Feature code 1014 is a dual-port 4-Gbps Fibre Channel HBA. This adapter
auto-negotiates to 4, 2 and 1 Gbps. This adapter may only be used for attaching
"back-end" storage expansion units (EXN1000, EXN2000, and EXN4000). The
Fibre Channel ports on this adapter may not be used as FCP target ports.
This adapter has two small form factor (SFF) multi-mode optics with LC-style
connectors. This adapter supports the following maximum cable lengths.
Table 100. Dual-port 4-Gbps Fibre Channel HBA for disk (FC 1014) - maximum cable lengths

Link operating speed   50 micron multi-mode fibre   62.5 micron multi-mode fibre
1 Gbps                 500 meters                   300 meters
2 Gbps                 300 meters                   150 meters
4 Gbps                 150 meters                   70 meters
Since this adapter is a PCIe adapter, it may be installed in any slot not occupied by
the NVRAM card. For a single node configuration, the slot priority order for this
adapter is 4, 3, 2. For a dual-node configuration, the slot priority order is 2, 1, 4.
Dual-port 4-Gbps Fibre Channel HBA for tape attachment (FC 1015)
Feature code 1015 is a dual-port 4-Gbps Fibre Channel HBA for tape attachment.
This adapter auto-negotiates to 4, 2 and 1 Gbps.
This adapter has two SFF multi-mode optics with LC-style connectors. This adapter
supports the following maximum cable lengths.
Since this adapter is a PCIe adapter, it may be installed in any slot not occupied by
the NVRAM card. For a single node configuration, the slot priority order for this
adapter is 4, 3, 2. For a dual-node configuration, the slot priority order is 2, 1, 4.
This feature code includes a 50-micron optical loopback cable with LC connectors.
This adapter has two SFF multi-mode optics with LC-style connectors. This adapter
supports the following maximum cable lengths.
Table 102. Dual-port 4-Gbps Fibre Channel target HBA (FC 1017) - maximum cable lengths

Link operating speed   50 micron multi-mode fibre   62.5 micron multi-mode fibre
1 Gbps                 500 meters                   300 meters
2 Gbps                 300 meters                   150 meters
4 Gbps                 150 meters                   70 meters
Since this adapter is a PCIe adapter, it may be installed in any slot not occupied by
the NVRAM card. For a single node configuration, the slot priority order for this
adapter is 4, 3, 2. For a dual-node configuration, the slot priority order is 2, 1, 4.
Appendix G. Optional adapter cards supported by N5300 and N5600 systems 159
Since this adapter is a PCIe adapter, it may be installed in any slot not occupied by
the NVRAM card. For a single-node configuration, the slot priority order for this
adapter is 3, 4, 2. For a dual-node configuration, the slot priority order is 1, 2, 4.
Since this adapter is a PCIe adapter, it may be installed in any slot not occupied by
the NVRAM card. For a single-node configuration, the slot priority order for this
adapter is 3, 4, 2. For a dual-node configuration, the slot priority order is 1, 2, 4.
Since this adapter is a PCIe adapter, it may be installed in any slot not occupied by
the NVRAM card. For a single-node configuration, the slot priority order for this
adapter is 3, 4, 2. For a dual-node configuration, the slot priority order is 1, 2, 4.
Since this adapter is a PCIe adapter, it may be installed in any slot not occupied by
the NVRAM card. For a single-node configuration, the slot priority order for this
adapter is 3, 4, 2. For a dual-node configuration, the slot priority order is 2, 1, 4.
Since this adapter is a PCIe adapter, it may be installed in any slot not occupied by
the NVRAM card. For a single-node configuration, the slot priority order for this
adapter is 3, 4, 2. For a dual-node configuration, the slot priority order is 1, 2, 4.
The ports of this adapter may not be used as FCP target ports.
For the single-node model (2868/2869-A10) the maximum number of this adapter is
three. For the dual-node model (2868/2869-A20) the maximum number of this
adapter is six.
This adapter is a PCIe adapter. For a single-node configuration, the slot priority
order is 3, 4, 2. For a dual-node configuration, the slot priority order is 1, 2, 4.
Since this adapter is a PCIe adapter, it may be installed in any slot not occupied by
the NVRAM card. For a single-node configuration, the slot priority order for this
adapter is 3, 4, 2. For a dual-node configuration, the slot priority order is 1, 2, 4.
Note: This feature requires Data ONTAP 7.2.3 or later.
For single-node models, the maximum number of this adapter is two. For dual-node
models, the maximum number of this adapter is four.
Since this adapter is a PCIe adapter, it may be installed in any slot not occupied by
the NVRAM card. For a single-node configuration, the slot priority order for this
adapter is 3 and 4. For a dual-node configuration, the slot priority order is 1 and 4.
Quad-port 4-Gbps Fibre Channel HBA for tape and disk attachment
(FC 1035)
Feature code 1035 is a PCIe quad-port 4-Gbps HBA for attaching tape and disk
expansion units (EXN1000, EXN2000, and EXN4000) to N series storage
controllers. This adapter auto-negotiates connections of 1, 2, or 4 Gbps.
Four small form factor (SFF) multimode optical ports with LC connectors support
the following cable lengths:
Table 104. Quad-port 4-Gbps Fibre Channel HBA for tape and disk attachment (FC 1035) - maximum cable lengths

Link operating speed    50 micron multi-mode fibre    62.5 micron multi-mode fibre
1 Gbps                  500 meters                    300 meters
2 Gbps                  300 meters                    150 meters
4 Gbps                  150 meters                    70 meters
The ports of this adapter may not be used as FCP target ports.
For the single-node model (2868/2869-A10) the maximum number of this adapter is
three. For the dual-node model (2868/2869-A20) the maximum number of this
adapter is six.
This adapter is a PCIe adapter. For a single-node configuration, the slot priority
order is 3, 4, 2. For a dual-node configuration, the slot priority order is 1, 2, 4.
The following is the priority order for installing optional adapter cards into the N7700
and N7900:
1. Fibre Channel host bus adapter (HBA) cards (FC 1014, 1015, 1017, 1029,
1032, 1033, 1034, 1035)
2. iSCSI target adapters (FC 1010 and 1011) and Gigabit Ethernet iSCSI target
adapters (FC 1021 and 1026)
3. Ethernet network interface cards (FC 1008, 1009, 1012, 1013, 1022, 1023, and
1031)
4. SCSI Ultra320 dual-channel tape adapters (FC 1016 and 1024)
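The four-category priority order above can be expressed as a ranking of feature codes. This Python sketch is an illustration only (the PRIORITY mapping and install_order helper are invented names that reproduce the categories listed); it sorts a set of cards into installation order:

```python
# Installation priority categories for N7700/N7900 adapter cards,
# reproduced from the list above (1 = install first).
PRIORITY = {
    **dict.fromkeys([1014, 1015, 1017, 1029, 1032, 1033, 1034, 1035], 1),
    **dict.fromkeys([1010, 1011, 1021, 1026], 2),
    **dict.fromkeys([1008, 1009, 1012, 1013, 1022, 1023, 1031], 3),
    **dict.fromkeys([1016, 1024], 4),
}

def install_order(feature_codes):
    """Sort a list of feature codes into installation priority order."""
    return sorted(feature_codes, key=lambda fc: PRIORITY[fc])
```

For example, an order containing feature codes 1016, 1008, 1029, and 1026 would be installed as 1029 (FC HBA), 1026 (iSCSI target), 1008 (Ethernet NIC), then 1016 (SCSI tape adapter).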
PCI-X Adapters
All N7000 models support the following PCI-X adapters.
Table 105. Optional PCI-X Adapters
Feature Code    Description
1008            Single-port 10 Gigabit Ethernet (10 GbE) TOE (optical)
1009            Quad-port Gigabit Ethernet (GbE) TOE (copper)
1010            Dual-port Gigabit Ethernet (GbE) iSCSI target adapter (copper)
1011            Dual-port Gigabit Ethernet (GbE) iSCSI target adapter (optical)
1016            SCSI Ultra320 Dual-Channel tape HBA
1034            SnapMirror over Fibre Channel HBA (PCI-X)
For single-node models, the total number of PCI-X adapters (Feature codes 1008,
1009, 1010, 1011 and 1016) cannot exceed three. For dual-node models, the total
number of PCI-X adapters (Feature codes 1008, 1009, 1010, 1011 and 1016)
cannot exceed six. For dual-node models, adapters must be ordered and added in
pairs, one per node, so that both nodes are populated with the same number of
each type of adapters.
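Under these rules, an adapter order can be checked mechanically. The sketch below is a hypothetical validator (validate_pcix_order is not IBM tooling); it applies the per-model count limit and the dual-node pairing requirement to the PCI-X feature codes named above:

```python
from collections import Counter

# PCI-X feature codes subject to the count limit stated above.
PCI_X_CODES = {1008, 1009, 1010, 1011, 1016}

def validate_pcix_order(feature_codes, dual_node):
    """Check the stated PCI-X rules: a total of at most three adapters
    (six for dual-node models), and for dual-node models an even count
    of each adapter type so both nodes are populated identically."""
    counts = Counter(fc for fc in feature_codes if fc in PCI_X_CODES)
    total = sum(counts.values())
    if total > (6 if dual_node else 3):
        return False
    if dual_node and any(n % 2 for n in counts.values()):
        return False
    return True
```

A dual-node order of two FC 1008 and two FC 1016 adapters passes, while a single unpaired adapter on a dual-node model fails.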
For a single-node model, the maximum number of this adapter is two. For a
dual-node model, the maximum number of this adapter is four.
Since this adapter is a PCI-X adapter, it may only be installed in slots 3 or 9. The
slot priority order for this adapter is 9, 3. For technical reasons, this TOE adapter is
not permitted in slot 4.
For a single-node model, the maximum number of this adapter is two. For a
dual-node model, the maximum number of this adapter is four.
Since this adapter is a PCI-X adapter, it may only be installed in slots 3 or 9. The
slot priority order for this adapter is 9, 3. For technical reasons, this TOE adapter is
not permitted in slot 4.
For a single-node model, the maximum number of this adapter is three. For a
dual-node model, the maximum number of this adapter is six.
Since this adapter is a PCI-X adapter, it may only be installed in slots 3, 4 or 9. The
slot priority order for this adapter is 3, 4, 9.
For a single-node model, the maximum number of this adapter is three. For a
dual-node model, the maximum number of this adapter is six.
Since this adapter is a PCI-X adapter, it may only be installed in slots 3, 4 or 9. The
slot priority order for this adapter is 3, 4, 9.
This feature code includes two SCSI LVD two-meter cables for attaching tape
devices to this SCSI HBA. Each cable has two 68-pin VHDCI connectors, one at
each end.
For the single-node models the maximum number of this adapter is three. For the
dual-node models the maximum number of this adapter is six.
Since this adapter is a PCI-X adapter, it may only be installed in slots 3, 4 or 9. The
slot priority order for this adapter is 4, 9, 3.
Since this adapter is a PCI-X adapter, it may only be installed in slots 3, 4 or 9. The
slot priority order for this adapter is 3, 4, 9.
For the single-node models, the total number of PCIe adapters cannot exceed five.
For the dual-node models, the total number of PCIe adapters cannot exceed ten.
For a single-node model, the maximum number of this adapter is five. For a
dual-node model, the maximum number of this adapter is ten.
For a single-node model, the maximum number of this adapter is five. For a
dual-node model, the maximum number of this adapter is ten.
Dual-port 4-Gbps Fibre Channel HBA for disk attachment (FC 1014)
Feature code 1014 is a dual-port 4-Gbps Fibre Channel HBA. This adapter
auto-negotiates to 4, 2 and 1 Gbps. This adapter may only be used for attaching
"back-end" storage expansion units (EXN1000, EXN2000, and EXN4000). The
Fibre Channel ports on this adapter may NOT be used as FCP target ports.
This adapter has two small form factor (SFF) multi-mode optics with LC-style
connectors. This adapter supports the following maximum cable lengths.
Table 107. Dual-port 4-Gbps Fibre Channel HBA for disk (FC 1014) - maximum cable lengths

Link operating speed    50 micron multi-mode fibre    62.5 micron multi-mode fibre
1 Gbps                  500 meters                    300 meters
2 Gbps                  300 meters                    150 meters
4 Gbps                  150 meters                    70 meters
For a single-node model, the maximum number of this adapter is four. For a
dual-node model, the maximum number of this adapter is eight.
Dual-port 4-Gbps Fibre Channel HBA for tape attachment (FC 1015)
Feature code 1015 is a dual-port 4-Gbps Fibre Channel HBA for tape attachment.
This adapter auto-negotiates to 4, 2 and 1 Gbps.
This adapter has two SFF multi-mode optics with LC-style connectors. This adapter
supports the following maximum cable lengths.
Table 108. Dual-port 4-Gbps Fibre Channel HBA for tape (FC 1015) - maximum cable lengths

Link operating speed    50 micron multi-mode fibre    62.5 micron multi-mode fibre
1 Gbps                  500 meters                    300 meters
2 Gbps                  300 meters                    150 meters
4 Gbps                  150 meters                    70 meters
For a single-node model, the maximum number of this adapter is three. For a
dual-node model, the maximum number of this adapter is six.
This feature code includes a 50-micron optical loopback cable with LC connectors.
This adapter has two SFF multi-mode optics with LC-style connectors. This adapter
supports the following maximum cable lengths.
Table 109. Dual-port 4-Gbps Fibre Channel target HBA (FC 1017) - maximum cable lengths

Link operating speed    50 micron multi-mode fibre    62.5 micron multi-mode fibre
1 Gbps                  500 meters                    300 meters
2 Gbps                  300 meters                    150 meters
4 Gbps                  150 meters                    70 meters
For the single-node model the maximum number of this adapter is four. For the
dual-node model the maximum number of this adapter is eight.
For single-node models, the maximum number of this adapter is three. For
dual-node models, the maximum number of this adapter is six.
For the single-node model the maximum number of this adapter is five. For the
dual-node model the maximum number of this adapter is ten.
For a single-node model, the maximum number of this adapter is three. For a
dual-node model the maximum number of this adapter is six.
For the single-node model the maximum number of this adapter is five. For the
dual-node model the maximum number of this adapter is ten.
Quad-port 4-Gbps Fibre Channel HBA for disk attachment (FC 1029)
Feature code 1029 is a PCIe quad-port 4-Gbps HBA for attaching disk expansion
units (EXN1000, EXN2000, and EXN4000) to N series storage controllers. This
adapter auto-negotiates connections of 1, 2, or 4 Gbps. Four small form factor
(SFF) multimode optical ports with LC connectors support the following cable
lengths:
Table 110. Quad-port 4-Gbps Fibre Channel HBA for disk attachment (FC 1029) - maximum cable lengths

Link operating speed    50 micron multi-mode fibre    62.5 micron multi-mode fibre
1 Gbps                  500 meters                    300 meters
2 Gbps                  300 meters                    150 meters
4 Gbps                  150 meters                    70 meters
For the single-node model (A11) the maximum number of this adapter is five. For
the dual-node model (A21) the maximum number of this adapter is ten.
For the single-node model the maximum number of this adapter is five. For the
dual-node model the maximum number of this adapter is ten.
For single-node models, the maximum number of this adapter is two. For dual-node
models, the maximum number of this adapter is four.
This adapter is a PCIe adapter and may only be installed in slots 5, 6, 7, and 8.
For both single-node and dual-node configurations, the slot priority order is
5, 6, 7, 8.
The ports of this adapter may not be used as FCP target ports.
For the single-node model (2866/2867-A11) the maximum number of this adapter is
five. For the dual-node model (2866/2867-A21) the maximum number of this
adapter is ten.
You can access the documents listed in these tables at the following Web site:
www.ibm.com/storage/support/nas/
IBM may not offer the products, services, or features discussed in this document in
other countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may be
used instead. However, it is the user’s responsibility to evaluate and verify the
operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not give you any
license to these patents. You can send license inquiries, in writing, to:
The following paragraph does not apply to the United Kingdom or any other
country where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR
A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or
implied warranties in certain transactions; therefore, this statement may not apply to
you.
Any references in this information to non-IBM Web sites are provided for
convenience only and do not in any manner serve as an endorsement of those
Web sites. The materials at those Web sites are not part of the materials for this
IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes
appropriate without incurring any obligation to you.
Attention: In compliance with the GNU General Public License (GPL), Version 2,
June 1991, a complete machine-readable copy of the source code for the relevant
source code portions of the Remote LAN Module (RLM) Firmware that are covered
by the GPL, is available from http://now.netapp.com.
Copyrights
Copyright © 2005–2008 IBM Corporation. All rights reserved. Printed in the U.S.A.
References in this documentation to IBM products, programs, or services do not
imply that IBM intends to make these available in all countries in which IBM
operates. Any reference to an IBM product, program or service is not intended to
state or imply that only IBM’s product, program or service may be used. Any
functionally equivalent product, program or service that does not infringe any of
IBM’s intellectual property rights may be used instead of the IBM product, program
or service. Evaluation and verification of operation in conjunction with other
products, except those expressly designated by IBM, are the user’s responsibility.
Trademarks
The following terms are trademarks of International Business Machines Corporation
in the United States, other countries, or both:
IBM           System i
IBM logo      System p
AIX           System Storage
eServer       System x
RS/6000       System z
Tivoli
NetApp, the Network Appliance logo, the bolt design, NetApp–the Network
Appliance Company, Data ONTAP, DataFabric, FAServer, FilerView, gFiler,
MultiStore, NearStore, NetCache, SecureShare, SnapManager, SnapMirror,
SnapMover, SnapRestore, SnapVault, SyncMirror, and WAFL are registered
trademarks of Network Appliance, Inc. in the United States, and/or other countries.
gFiler, Network Appliance, SnapCopy, SnapLock, Snapshot, and The Evolution of
Storage are trademarks of Network Appliance, Inc. in the United States and/or other
countries and registered trademarks in some other countries. ApplianceWatch,
BareMetal, Camera-to-Viewer, ComplianceClock, ComplianceJournal,
ContentDirector, ContentFabric, EdgeFiler, FlexClone, FlexVol, FPolicy, HyperSAN,
InfoFabric, LockVault, Manage ONTAP, NOW, NOW NetApp on the Web, ONTAPI,
RAID-DP, RoboCache, RoboFiler, SecureAdmin, Serving Data by Design,
SharedStorage, Simulate ONTAP, Smart SAN, SnapCache, SnapDirector,
SnapDrive, SnapFilter, SnapMigrator, SnapSuite, SnapValidator, SohoFiler, vFiler,
VFM, Virtual File Manager, VPolicy, and Web Filer are trademarks of Network
Appliance, Inc. in the United States and other countries. NetApp Availability
Assurance and NetApp ProTech Expert are service marks of Network Appliance,
Inc. in the United States.
Intel and Xeon are trademarks or registered trademarks of Intel Corporation or its
subsidiaries in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other
countries.
Properly shielded and grounded cables and connectors must be used in order to
meet FCC emission limits. IBM is not responsible for any radio or television
interference caused by using other than recommended cables and connectors or by
unauthorized changes or modifications to this equipment. Unauthorized changes or
modifications could void the user’s authority to operate the equipment.
This device complies with Part 15 of the FCC Rules. Operation is subject to the
following two conditions: (1) this device may not cause harmful interference, and (2)
this device must accept any interference received, including interference that may
cause undesired operation.
This product has been tested and found to comply with the limits for Class A
Information Technology Equipment according to European Standard EN 55022. The
limits for Class A equipment were derived for commercial and industrial
environments to provide reasonable protection against interference with licensed
communication equipment.
Properly shielded and grounded cables and connectors must be used in order to
reduce the potential for causing interference to radio and TV communications and
to other electrical or electronic equipment. Such cables and connectors are
available from IBM authorized dealers. IBM cannot accept responsibility for any
interference caused by using other than recommended cables and connectors.
"Warnung: Dieses ist eine Einrichtung der Klasse A. Diese Einrichtung kann im
Wohnbereich Funk-Störungen verursachen; in diesem Fall kann vom Betreiber
verlangt werden, angemessene Maßnahmen zu ergreifen und dafür aufzukommen."
Dieses Gerät ist berechtigt, in Übereinstimmung mit dem Deutschen EMVG das
EG-Konformitätszeichen - CE - zu führen.
Verantwortlich für die Konformitätserklärung des EMVG ist die IBM Deutschland
GmbH, 70548 Stuttgart.
180 IBM System Storage N series: Introduction and Planning Guide
Index
Numerics
2101 Model N00 133
7014 model T00 rack 134
7014 model T42 rack 135

A
about this document
  how to send your comments xxvii
AC power line sizes 121
active/active configurations 4
adapter support 3
adapters
  optional cards
    N3600 145
    N5200 and N5500 149
    N5300 and N5600 157
    N7700 and N7900 163
address, IBM xxvii
Advanced Single Instance Storage 76
attention notice xiii
audience xxiii
AutoSupport 119

C
cables
  labeling 114
  measuring 114
  planning 113
caution notices xiii
  definition xiii
  examples xiii
CFO 76
CIFS protocol 75
Class A electronic emission notice 177
clearance dimensions 105
  EXN1000 expansion unit 103
  N3300 storage system 87
  N3600 storage system 87
  N3700 storage system 96
  N5200 storage system 98
  N5300 storage system 98
  N5500 storage system 98
  N5600 storage system 98
  N7700 storage system 101
  N7900 storage system 101
clustering 4
  Infiniband (IB) cluster interconnect cables 113
  N3700 system setup worksheet 130
comments, how to send xxvii
configuration limits
  Fibre Channel 63
  host operating system configuration limits for Fibre Channel 65
  host operating system configuration limits for iSCSI 65
  iSCSI 63
  N3300 storage system
    active/active 68
    single-controller 67
  N3600 storage system
    active/active 68
    single-controller 67
  N3700 storage system
    active/active 69
    single-controller 69
  N5200 storage system
    active/active 66
    single-controller 65
  N5300 storage system
    active/active 66
    single-controller 65
  N5500 storage system
    active/active 66
    single-controller 65
  N5600 storage system
    active/active 66
    single-controller 65
  N7700 storage system
    active/active 66
    single-controller 65
  N7900 storage system
    active/active 66
    single-controller 65
connections
  expansion unit to N series filers 116
  expansion unit to other expansion units 116
  expansion units to an N series storage system 115
  gateway to external storage 23, 31, 38, 46, 53, 61, 116
conventions
  command xxvi
  formatting xxvi
  keyboard xxvii

D
danger notices xi
  definition xi
  example xi
Data ONTAP 3, 75
Data ONTAP 7.1 filer
  documentation 172
Data ONTAP 7.1 gateway systems library 173
Data ONTAP 7.2 filer
  documentation 171
Data ONTAP 7.2 gateway systems library 173
device carrier xxvi
disk sanitization 76
documentation
  Data ONTAP 7.1 filer 172
  Data ONTAP 7.1 gateway 173
  Data ONTAP 7.2 filer 171
N3600 storage system (continued)
  electrical requirements 88
  environmental requirements 88
  features
    hardware 10
  hardware specifications 87
  mixing EXN units 2
  noise emission notes 88
  physical characteristics 87
  power cords 123
  setup worksheet 127
  single-controller
    configuration limits 67
N3700 storage system
  active/active
    configuration limits 69
  adapter support 3
  cabling to expansion units 14
  clearance dimensions 96
  cluster setup worksheet 130
  differences between early and current CPU module designs 14
  electrical requirements 97
  environmental requirements 96
  features
    hardware 13
  hardware specifications 95
  load board 14
  noise emission notes 96
  physical characteristics 95
  power cords 123
  raw storage capacity 16
  setup worksheet 129
  single-controller
    configuration limits 69
N3700 storage system library 171
N5000 series systems library 171
N5200 filer
  features
    hardware 17
  mixing EXN units 2
  raw storage capacity 19
N5200 gateway
  features
    hardware 22
N5200 storage system
  active/active
    configuration limits 66
  adapter support 4
  clearance dimensions 98
  electrical requirements 99
  environmental requirements 98
  hardware specifications 97
  noise emission notes 99
  physical characteristics 97
  power cords 123
  single-controller
    configuration limits 65
  system setup worksheet 131
N5300 filer
  features
    hardware 25
  mixing EXN units 2
N5300 gateway
  features
    hardware 29
N5300 storage system
  active/active
    configuration limits 66
  adapter support 4
  clearance dimensions 98
  electrical requirements 99
  environmental requirements 98
  hardware specifications 97
  noise emission notes 99
  physical characteristics 97
  power cords 123
  raw storage capacity 27
  single-controller
    configuration limits 65
  system setup worksheet 131
N5500 filer
  features
    hardware 32
  mixing EXN units 2
N5500 gateway
  features
    hardware 36
N5500 storage system
  active/active
    configuration limits 66
  adapter support 4
  clearance dimensions 98
  electrical requirements 100
  environmental requirements 98
  hardware specifications 97
  noise emission notes 99
  physical characteristics 97
  power cords 123
  raw storage capacity 34
  single-controller
    configuration limits 65
  system setup worksheet 131
N5600 filer
  features
    hardware 39
  mixing EXN units 2
N5600 gateway
  features
    hardware 44
N5600 storage system
  active/active
    configuration limits 66
  adapter support 4
  clearance dimensions 98
  electrical requirements 100
  environmental requirements 98
  hardware specifications 97
  noise emission notes 99
  physical characteristics 97
rack safety xv
rack specifications
  for IBM products installed in a non-IBM rack 141
  general 138
  IBM 2101 Model N00 133
  IBM 7014 134
reader comment form processing xxvii
restrictions, usage xiv

S
safety
  environmental notices xi
  inspection procedure xv
  labels xi
  laser xiv
  notices xi, xvi
  rack xv
  rack installation xv
  rack relocation xviii
safety labels xii
safety requirements for non-IBM rack 141
security 111
Single Mailbox Recovery 83
site planning 1, 87
SMBR Content Analysis Wizard 83
SnapDrive 83
SnapLock Compliance 77
SnapLock Enterprise 77
SnapManager for Exchange 78, 83
SnapManager for Oracle 84
SnapManager for SAP 78
SnapManager for SharePoint 78
SnapManager for SQL 78, 83
SnapMirror 78
SnapMirror/SnapVault Bundle 78
SnapMover 78
SnapRestore 78
SnapShot 75
SnapValidator 79
SnapVault Primary 79
SnapVault Secondary 79
software features 79
software, N series storage system 75
specifications for the N series storage system 87
support for Ethernet adapters 3
supported features xxiii
SyncMirror 79

T
T00 and T42 rack caster location 135
T00 and T42 rack service clearances 135
T00 and T42 racks multiple attachments 136
T00 rack weight distribution and floor loading 137
T42 rack weight distribution and floor loading 137
tasks by document title 170
terminators
  optical ports xiv
terminology xxvi
thermal considerations 109
third-party devices
  connection differences between early and current N3700 CPU module designs 15

U
United States electronic emission Class A notice 177
United States FCC Class A notice 177
usage restrictions xiv

W
Web sites, related xxv
weight distribution, T00 rack 137
weight distribution, T42 rack 137
worksheets
  N3300 system setup 127
  N3600 system setup 127
  N3700 cluster system setup 130
  N3700 system setup 129
  N5000 series system setup 131
  N7000 series system setup 132
Write Anywhere File Layout (WAFL) 3
We appreciate your comments about this publication. Please comment on specific errors or omissions, accuracy,
organization, subject matter, or completeness of this book. The comments you send should pertain only to the
information in this manual or product and the way in which the information is presented.
For technical questions and information about products and prices, please contact your IBM branch office, your IBM
business partner, or your authorized remarketer.
When you send comments to IBM, you grant IBM a nonexclusive right to use or distribute your comments in any
way it believes appropriate without incurring any obligation to you. IBM will use the personal information that you
supply only to contact you about the issues that you state on this form.