
System Installation Workbook

Version 2.0 Date: September 2011


WELCOME

Dear Customer,

Thank you for choosing a NetApp storage system and Professional Services installation.

To ensure a seamless deployment and integration into your environment, please complete the information requested in this document before our engineer arrives on site. This will ensure that as many questions as possible are answered before the day of the installation, so you can start using your system.

The first part of the document includes environmental information about our products, which may help you with your computer room planning.

The second part of the workbook covers the information that the professional services engineer will need on the day of installation. Please obtain the required information and return a completed copy of this document to the engineer before they arrive.

We look forward to working with you.

Yours faithfully

(NetApp Services Engineering)

Table of contents

1 Site requirements
1.1 System power requirements
1.2 Network cabling requirements
2 7-Mode configuration details
2.1 Basic configuration
2.1.1 IFGRPs
2.1.2 Network interface configuration
2.1.3 Default gateway
2.1.4 Administration Host
2.1.5 Timezone
2.1.6 Language encoding for multiprotocol files
2.1.7 Domain Name Services (DNS) resolution
2.1.8 Network Information Services (NIS) resolution
2.1.9 Remote Management Settings (RLM/SP/BMC)
2.1.10 Alternate Control Path (ACP) management for SAS shelves
2.1.11 CIFS configuration
2.1.12 Configure Virtual LANs (VLANs)
2.1.13 AutoSupport settings
2.1.14 Customer/RMA details
2.1.15 Time synchronization
2.1.16 SNMP management settings
3 7-Mode installation and verification checklists
4 Cluster-Mode configuration details
4.1 Cluster information
4.1.1 Cluster
4.1.2 Licensing
4.1.3 Admin Vserver
4.1.4 Time synchronization
4.1.5 Time zone
4.2 Node information
4.2.1 Physical port identification
4.2.2 Node management LIF
4.3 Cluster network information
4.3.1 Interface groups (IFGRP)
4.3.2 Configure Virtual LANs (VLANs)
4.3.3 Logical Interfaces (LIFs)
4.4 Intercluster network information
4.5 Vserver information
4.5.1 Creating Vserver
4.5.2 Creating Volumes on the Vserver
4.5.3 IP Network Interface on the Vserver
4.5.4 FCP Network Interface on the Vserver
4.5.5 LDAP services
4.5.6 NIS services
4.5.7 DNS services
4.5.8 CIFS protocol
4.5.9 iSCSI protocol
4.5.10 FCP protocol
4.6 Support information
4.6.1 Remote Management Settings (RLM/BMC/SP)
4.6.2 AutoSupport settings
4.6.3 Customer/RMA details
5 Cluster-Mode installation and verification checklists

1 Site requirements

Please download and read the latest version of the Site Requirements Guide:

Dimensions and weight of NetApp hardware:

       

Hardware | Rack units* | Height | Width | Depth | Weight
FAS6200/V6200 series | 6U | 10.32 in. (26.21 cm) | 17.61 in. (44.73 cm) | 24.3 in. (61.72 cm) | 125.7 lbs. (57 kg)
FAS6000/V6000 series | 6U | 10.32 in. (26.21 cm) | 17.53 in. (44.52 cm) | 29 in. (73.66 cm) | 121 lbs. (54.885 kg)
FAS3200/V3200 series | 3U | 5.12 in. (13.0 cm) | 17.61 in. (44.73 cm) | 24 in. (61 cm) | 74 lbs. (33.6 kg)
FAS3100/V3100 series | 6U | 10.75 in. (27.3 cm) | 17.73 in. (45.0 cm) | 24 in. (60.7 cm) | 121 lbs. (54.89 kg)
FAS2050 | 4U | 6.9 in. (17.5 cm) | 17.6 in. (44.7 cm) | 22.5 in. (57.2 cm) | 110 lbs. (49.895 kg)
FAS2020/FAS2040 | 2U | 3.5 in. (8.9 cm) | 17.6 in. (44.7 cm) | 22.5 in. (57.2 cm) | 60/66 lbs. (27.2/29.9 kg)
DS14mk2/DS14mk4 | 3U | 5.25 in. (13.3 cm) | 17.6 in. (44.7 cm) | 20 in. (50.85 cm) | 77 lbs. (35 kg) loaded
DS14mk2 AT | 3U | 5.25 in. (13.3 cm) | 17.6 in. (44.7 cm) | 22 in. (55.2 cm) | 68 lbs. (30.8 kg) loaded
DS2246 | 2U | 3.4 in. (8.5 cm) | 19 in. (48 cm) | 19.1 in. (48.4 cm) | 49 lbs. (22.2 kg) loaded
DS4243 | 4U | 7 in. (17.8 cm) | 19 in. (48.3 cm) | 24 in. (61 cm) | 110 lbs. (49.9 kg) loaded
Cisco 5010 | 1U | 1.72 in. (4.4 cm) | 17.3 in. (43.9 cm) | 30 in. (76.2 cm) | 35 lbs. (15.88 kg)
Cisco 5020 | 2U | 3.47 in. (8.8 cm) | 17.3 in. (43.9 cm) | 30 in. (76.2 cm) | 50 lbs. (22.68 kg)
Cisco 2960 | 1U | 1.73 in. (4.39 cm) | 17.5 in. (44.45 cm) | 9.3 in. (23.62 cm) | 8 lbs. (3.63 kg)
NetApp System Cabinet | 42U | 78.7 in. (200 cm) | 23.6 in. (60 cm) | 37.4 in. (95 cm) | 320 lbs. (145.2 kg) empty

* 1U = 1.75 inches

NOTE: Please plan for at least 36 inches (91.4 centimeters) of clearance on both front and back of the system. This amount of space allows you to reach the back panel for cabling the system. It also allows you to slide the motherboard tray out from the back of the system when removing or installing hardware.

1.1 System power requirements

 

Hardware | Amps @ 100-120V, actual (worst case) | Amps @ 200-240V, actual (worst case) | P/S voltage range
FAS6200/V6200 series | 5 | 2.4 | 100-240 VAC
FAS6000/V6000 series | 9.72 | 4.84 | 100-240 VAC
FAS3200/V3200 series | 4.22 | 1.97 | 100-240 VAC
FAS3100/V3100 series | 9.74 | 4.69 | 100-240 VAC
FAS2050 | 7.51 | 3.73 | 100-240 VAC
FAS2040 | 4.85 | 2.39 | 100-240 VAC
FAS2020 | 4.94 | 2.45 | 100-240 VAC
DS14mk2/DS14mk4 FC w/ESH | 4.52 | 2.23 | 100-240 VAC
DS14mk2 AT | 3.15 | 1.55 | 100-240 VAC
DS2246 | 5.15 | 2.62 | 100-240 VAC
DS4243 | 4.30 | 1.90 | 100-240 VAC
Cisco 5010 | 4.09 | 2.04 | 100-240 VAC
Cisco 5020 | 6.81 | 3.40 | 100-240 VAC
Cisco 2960 | 1.3 | 0.8 | 100-240 VAC

NOTE: NetApp recommends using a minimum of two circuits for power supply redundancy.

Note: 42U NetApp System Cabinet Specifications

Consult your co-location facility manager or vendor documentation if installing into third party cabinets.

Cabinet specifications | 20-Amp Single-Phase PDU | 30-Amp Single-Phase 2x PDU | 30-Amp Single-Phase 4x PDU
Input voltage and frequency | 200-240 VAC, 50/60 Hz | 200-240 VAC, 50/60 Hz | 200-240 VAC, 50/60 Hz
Number of power connectors | 4 | 2 | 2
Power connector type (U.S.) | NEMA L6-20 | NEMA L6-30 | NEMA L6-30
Power connector type (Intl.) | IEC309-16A | IEC309-30A | IEC309-30A
Power connector type (Aus/NZ) | AS/NZS3123-20 | n/a | n/a
Power cord length from PDU | 15 ft (4.5 m) | 10 ft (3 m) | 10 ft (3 m)
NetApp base cabinet part number (U.S.) | X871A-R6 | X8730A-R6 | X8730B-R6
NetApp base cabinet part number (Intl.) | (same as U.S.) | X8731A-R6 | X8731B-R6

1.2 Network cabling requirements

Network device | Cabling requirements
10Base-TX/100Base-TX | Cat 5/5e/6 UTP cable with RJ-45 connectors
Gigabit Ethernet (optical) | 50 or 62.5 micron multimode fiber optic cable with LC connector
Gigabit Ethernet (copper) | Cat 5e unshielded 4-pair cable with RJ-45 connector
Fibre Channel | 50 or 62.5 micron multimode fiber optic cable with LC connector

Fast Ethernet switch ports should be configured manually for speed and duplex (100/Full) when possible; auto-negotiation is discouraged on Fast Ethernet switch ports that connect permanent equipment.

Copper (Cu) Gigabit Ethernet switch ports, on the other hand, should be left to auto-negotiate speed and duplex (1000/Full). For all gigabit ports, it is highly recommended to configure Send On and Receive On (or Full) flow control on both the switch port and the client, and to enable 'portfast'. Because the storage system is not a bridge or routing device, enabling portfast does not complicate the boot process.
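The following is an illustrative sketch only of the matching switch-side settings on a Cisco IOS access switch; the interface name, and whether your platform supports these exact flow-control and portfast keywords, are assumptions to verify against your switch documentation:

switch(config)# interface GigabitEthernet0/1
switch(config-if)# description NetApp controller e0a (copper GbE, auto-negotiate)
switch(config-if)# switchport mode access
switch(config-if)# flowcontrol receive on
switch(config-if)# spanning-tree portfast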


2 7-Mode configuration details

Please work with your Professional Services representative to complete this worksheet prior to the installation date. The requested information enables us to configure your equipment quickly and efficiently.

Depending on the desired configuration, some fields may not be applicable.

NOTE: This worksheet does NOT replace the requirement for reading and understanding the appropriate ONTAP manuals that describe the operations of ONTAP in 7-Mode. ONTAP manuals can be found at the NetApp support site under documentation.

Customer checklist of site preparation requirements (check all that apply):

Adequate rack space for the NetApp system and disk shelves has been provided.

The power requirements for the NetApp system and disk shelves have been satisfied.

The network patch cabling and switch port configuration is complete.

Company Name:

Storage Controller Model:

2.1 Basic configuration

NetApp Sales Order #:

Data ONTAP ® Version:

System information

Controller 1

Controller 2

Serial Number

   

Hostname

   

Aggregate Type (32-bit or 64-bit)

   

2.1.1 IFGRPs

Interface Groups (IFGRPs) bond multiple network ports together for increased bandwidth and/or fault tolerance.

Note: For systems without an e0P port, leave one network port available for ACP connections to SAS disk shelves.
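As a hedged illustration only (the IFGRP and port names below are hypothetical, and these commands are normally also added to /etc/rc so they persist across reboots), a multi-mode LACP interface group on a 7-Mode controller might be created like this:

fas1> ifgrp create lacp ifgrp1 -b ip e0a e0b
fas1> ifconfig ifgrp1 192.168.10.10 netmask 255.255.255.0 partner ifgrp1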

Interface details | Controller 1 | Controller 2
Number of interface groups to configure | |
Names of the interface groups (for example, ifgrp1, iscsi_ifgrp2) | |
IFGRP type: multi (all ports active), single (one port active, others on standby for failover), or LACP (network switch manages traffic) | ifgrp1: / ifgrp2: / ifgrp3: | ifgrp1: / ifgrp2: / ifgrp3:
Multi-mode IFGRP load balancing style (IP, MAC, round-robin, or port based) | ifgrp1: / ifgrp2: / ifgrp3: | ifgrp1: / ifgrp2: / ifgrp3:
Number of links (network ports) in each IFGRP | ifgrp1: / ifgrp2: / ifgrp3: | ifgrp1: / ifgrp2: / ifgrp3:
Names of the network ports in each IFGRP (for example, ifgrp1 = e0a, e1d; ifgrp3 = ifgrp1, ifgrp2) | ifgrp1: / ifgrp2: / ifgrp3: | ifgrp1: / ifgrp2: / ifgrp3:

2.1.2 Network interface configuration

If you created IFGRPs, use their names; otherwise use port names (for example, e0a).

Some controllers have an e0M interface for environments with a subnet dedicated to managing servers. Include the e0M settings if you have a management subnet.

Note: For systems without an e0P port, leave one network port available for ACP connections to SAS disk shelves.
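For reference, a minimal sketch of what the resulting /etc/rc entries might look like; the addresses, netmasks, port names, and partner interfaces are placeholders, not recommendations:

ifconfig e0a 192.168.20.11 netmask 255.255.255.0 mediatype auto flowcontrol full partner e0a
ifconfig e0M 10.10.5.11 netmask 255.255.255.0 partner e0M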

Controller name | Interface name | IP address | Network mask | Media type | Enable Jumbo frames? | Partner interface name or IP address
2.1.3 Default gateway

Gateway details | Controller 1 | Controller 2
Default Gateway IP address | |

2.1.4 Administration Host

(Optional) You can limit the systems or subnets authorized to mount the root volume

Host details

Controller 1

Controller 2

Admin host/subnet IP

   

2.1.5 Timezone

What time zone should the systems set their clocks to (for example, US/Pacific)?

Timezone Details

Controller 1

Controller 2

Time zone

   

Physical Location (for example, Bldg 4, Dallas)

   

2.1.6 Language encoding for multiprotocol files

The default is POSIX and only needs to be changed for systems storing files using international alphabets.

Encoding details

Controller 1

Controller 2

Language for multiprotocol files

   

2.1.7 Domain Name Services (DNS) resolution

DNS resolution

Values

DNS Domain Name

 

DNS Server IP addresses (up to 3)

 

2.1.8 Network Information Services (NIS) resolution

NIS resolution

Values

NIS Domain Name

 

NIS Server IP addresses

 

2.1.9 Remote Management Settings (RLM/SP/BMC)

All systems include Remote LAN Module (RLM), Baseboard Management Controller (BMC), or a Service Processor (SP) to provide out-of-band control of the storage system. NetApp recommends configuring these interfaces for easier, secure management and troubleshooting.
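These interfaces are normally configured through their interactive setup wizards, which prompt for the values recorded below; as a sketch:

fas1> rlm setup     (systems with an RLM)
fas1> sp setup      (systems with a Service Processor)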

RLM/SP/BMC | Controller 1 | Controller 2
IP Address | |
Network Mask | |
Gateway | |
Mail server hostname | |
Mail server IP | |

2.1.10 Alternate Control Path (ACP) management for SAS shelves

For system models prior to the FAS3200 series, you must use an onboard NIC port to use ACP. New systems with dedicated e0P ports automatically assign IP addresses.
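A minimal sketch of enabling ACP manually with the 7-Mode options (the port name is an example for a system without e0P; option names should be verified against your Data ONTAP release):

fas1> options acp.enabled on
fas1> options acp.port e0b
fas1> options acp.domain 198.15.1.0
fas1> options acp.netmask 255.255.252.0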

 

Controller 1

Controller 2

Interface name (if not using e0P)

   

Private subnet (default: 198.15.1.0)

   

Network Mask

   

2.1.11 CIFS configuration

Systems with a CIFS license will run the CIFS setup wizard immediately after the Setup wizard completes. NT4 domains require a server account to be created before running CIFS setup. You can abort the wizard using Ctrl+C and run it later if necessary.

Note: The installation engineer will require someone with Domain Administrator privileges to help perform this section. When CIFS is configured, a domain administrator should move the controllers out of OU=Computers into an OU for servers. This will ensure Group Policy Objects can be applied to the controllers.

CIFS configuration | Controller 1 | Controller 2
Authentication mode (choose one of: Active Directory domain, NT 4 domain, Workgroup, /etc/passwd or NIS/LDAP) | |
Domain name | |
NetBIOS name | |
Do you want the system visible via WINS (Y/N)? | |
WINS IP addresses (up to 3) | |
Multiprotocol or NTFS only? | |

2.1.12 Configure Virtual LANs (VLANs)

(Optional) VLANs are used to segment network domains using 802.1Q protocol standards.

Controller name | Interface name | VLAN IDs to activate | Enable GVRP?

Note: To trunk VLANs across an interface or IFGRP, you need to set "switchport mode trunk" on that interface or logical interface. This will allow 802.1q trunking, so that traffic across it is VLAN tagged. You must then create the relevant VLAN interfaces on the storage controller.

If you want a port or EtherChannel interface to be the only access port for a particular VLAN you must set "switchport mode access" on that interface. Then give the storage controller interface an IP address on that VLAN. No other information is required to VLAN tag the frames.

Reboot the controllers at this point for the settings to go into effect.
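As an illustrative sketch (the interface name and VLAN IDs are placeholders), tagged VLAN interfaces are created and addressed on a 7-Mode controller as follows, typically from /etc/rc:

fas1> vlan create ifgrp1 10 20
fas1> ifconfig ifgrp1-10 192.168.10.11 netmask 255.255.255.0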

2.1.13 AutoSupport settings

AutoSupport is a 'phone home' function that notifies you and NetApp of any hardware problems, so that replacement hardware can be automatically delivered to solve the issue. (The system must remain on a support contract; the level of responsiveness depends on the service contract level, from 2 hours to Next Business Day.)
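A minimal sketch of the corresponding 7-Mode options (the mail host and addresses are placeholders):

fas1> options autosupport.enable on
fas1> options autosupport.support.transport https
fas1> options autosupport.mailhost mailhost.example.com
fas1> options autosupport.from fas1@example.com
fas1> options autosupport.to storage-admins@example.com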

AutoSupport settings | Controller 1 | Controller 2
Enable AutoSupport? If not, provide justification. | |
SMTP Server Name or IP | |
AutoSupport Transport (one of: HTTPS (default), HTTP, SMTP) | |
AutoSupport From E-Mail address | <hostname@yourdomain> | <hostname@yourdomain>
AutoSupport To E-Mail address(es) | |

2.1.14 Customer/RMA details

Verify this information by logging into the now.netapp.com website. This information is required to ensure that the Technical Support personnel can reach you and the replacement parts are sent to the correct address.

Customer/RMA details | Primary contact | Secondary contact

Contact Name

   

Contact Address

   

Contact Phone

   

Contact E-mail Address

   

RMA Address

 

RMA Attention To Name

 

2.1.15 Time synchronization

Time synchronization details

Values

Time services protocol (ntp)

 

Time Servers (up to 3 internal or external hostnames or IP addresses)

 

Max time skew (<5 minutes for CIFS)

 

2.1.16 SNMP management settings

(Optional) Fill out if you have SNMP monitoring applications (for example, Operations Manager). Set by using the 'snmp' command and the 'snmp.*' options.
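For example (the community and trap host names are placeholders), a sketch of the 7-Mode commands this worksheet feeds:

fas1> snmp community add ro public
fas1> snmp traphost add dfm.example.com
fas1> options snmp.enable on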

SNMP settings | Controller 1 | Controller 2
SNMP Trap Host | |
SNMP Community | |
Data Fabric Manager Server Name or IP | |
Data Fabric Manager Protocol (choose one of: HTTP, HTTPS) | |
Data Fabric Manager Port | |

3 7-Mode installation and verification checklists

The installer will perform the following checks to ensure that your new systems are configured correctly and are ready to turn over to you.

Physical installation

Status

Check and verify all ordered components were delivered to the customer site.

 

Confirm the NetApp controllers are properly installed in the cabinets.

 

Confirm there is sufficient airflow and cooling in and around the NetApp system.

 

Confirm all power connections are secured adequately.

 

Confirm the racks are grounded (if not in NetApp cabinets).

 

Confirm there is sufficient power distribution to NetApp controllers & disk shelves.

 

Confirm power cables are properly arranged in the cabinet.

 

Confirm that LEDs and LCDs are displaying the correct information.

 

Confirm that cables from NetApp controllers to disk shelves and among disk shelves are not crimped or stretched (fiber cable service loops should be bigger than your fist).

 

Confirm that fiber cables laid between cabinets are properly connected and are not prone to physical damage.

 

Confirm disk shelves IDs are set correctly.

 

Confirm that fiber channel 2Gb/4Gb loop speeds are set correctly on DS14 shelves and proper LC-LC cables are used.

 

Confirm that Ethernet cables are arranged and labeled properly.

 

Confirm all Fiber cables are arranged and labeled properly.

 

Confirm the Cluster Interconnect Cables are connected (for HA pairs).

 

Confirm there is sufficient space behind the cabinets to perform hardware maintenance.

 

Power On and Diagnostics

Status

Power up the disk shelves to ensure that the disks spin up and are initialized properly.

 

Connect the console to the serial port cable and establish a console connection using a terminal emulator such as Tera Term, PuTTY, or HyperTerminal.

 

Note: Log all console output to a text file.

Power on the controllers.

 

Boot the controller and press Ctrl+C at the second prompt for the 'Special Boot Menu' options.

 

Go to Maintenance Mode by selecting option 5.

 

Check the onboard fibre ports status:

 

*> fcadmin config

Change the port mode if necessary from targets to initiators (for SAN requirements).

Verify the cable connections to all shelves:

 

*> fcadmin device_map

*> sasadmin shelf

for SAS shelves

Verify disk ownership assignments:

 

*> disk show -a

Assign disks to each node using the disk assign command if necessary.

Verify the Multipath High Availability (MPHA) cabling. Each disk must have an A and B path:

 

*> storage show disk -p

Verify the system has one root aggregate assigned:

 

*> aggr status

Follow these steps for both cluster nodes, halt and then reboot each system into Data ONTAP:

 

*> halt

LOADER> boot_ontap


Verify power and cooling are at acceptable levels:

fas1> environment status

Verify expansion cards are installed in the correct slots:

fas1> sysconfig -c

Verify all local and partner shelves are visible to the system:

fas1> fcadmin device_map

Verify that all disks are owned:

fas1> disk show -n

Use the WireGauge tool to verify that all the shelves are cabled correctly.

Installation and Configuration

Status

Confirm the correct version of Data ONTAP software and disk, shelf, motherboard and RLM/BMC firmware is installed on each controller

 

fas1> version -b
fas1> sysconfig -a

Confirm ALL controllers are named as per the customer naming standards

 

Confirm the root volume is sufficiently sized (250GB minimum):
fas1> vol size <root volume name>

 

Confirm all the licenses are installed

 

fas1> license

Check the /etc/rc and /etc/hosts files:

 

fas1> rdfile /etc/rc

fas1> rdfile /etc/hosts

- Verify all configured Ethernet network interfaces (individual and ifgrp) are configured correctly as per the customer requirements: IP address, media type, flow control and speed.

- Confirm any interfaces not required to perform host name resolution are configured with the "-wins" option

- For clustered systems, verify they have partner interfaces for failover

Where necessary, confirm the network switches are configured to support dynamic or static multi-mode ifgrps (LACP or Etherchannel) as per customer requirement.

 

Has the customer accessed the system console using the RLM / SP / BMC?

 

Verify network connectivity and DNS resolution is configured properly:

 

fas1> ping <hostname of mail server>

Verify configured IFGRPs function properly by disconnecting one or more cables:
fas1> ifgrp status
Pull cables
fas1> ping <hostname of mail server>
fas1> ifgrp status
Reinsert cables

 

Confirm each controller is configured to synchronise time with a centralised source:
fas1> options timed
fas1> timezone

 

fas1> date

Confirm that AutoSupport is configured and functioning correctly.
fas1> options autosupport.doit "Test"

 

Confirm the default 'home' share is stopped on each controller (and vFiler).

 

If necessary, confirm that telnet and RSH are disabled and SSH is enabled.

 

If required, confirm SNMP is configured on all controllers to the appropriate traphost

 

Download documentation pack and upload to controller(s)

 

CIFS configuration

Status

If necessary, run through CIFS setup and join the controllers to the customer's Active Directory (requires an AD account with suitable permissions).

 

Confirm the NetApp controller's local administrator account was created while configuring the CIFS service (and the password is set appropriately).

 

Confirm the permissions to the root volume (c$) and /etc folder (etc$) are configured appropriately (that is, NOT Everyone Full Control).

 

Confirm that the appropriate Windows Domain Administrators group(s) is/are member(s) of the NetApp controller's local administrators group.

 

Create a share.

 

Have the customer map the share to a host, write data to it.

 

Create a Snapshot and confirm that Snapshot visibility is configured appropriately (for example, hidden to regular CIFS clients)

 

Confirm that qtrees storing CIFS data have the appropriate security style specified:

 

fas1> qtree status

Confirm that qtrees storing CIFS data have the appropriate 'oplocks' setting.

 

NFS configuration

Status

Create a qtree and confirm the appropriate security style is specified:
fas1> qtree create <path>
fas1> qtree status

 

Export the qtree.

 

Check the /etc/exports file and update the same with new mount entries with appropriate permissions.

 

Have the customer mount the qtree from a host and write data to it.

 

Take a Snapshot and confirm that Snapshot visibility is configured appropriately (for example, hidden to regular clients)

 

iSCSI configuration

Status

Make sure the iSCSI service is started.

 

Verify that an iSCSI host attach or support kit has been installed on the host.

 

If appropriate, verify SnapDrive has been installed on the host.

 

Create a qtree, igroup, and LUN on the system (using SnapDrive if necessary).

 

Have the customer establish an iSCSI session from the host.

 

Create a file system on the LUN, write some data to it and confirm the data is on the LUN.

 

Reboot the host and confirm that the LUN is still attached.

 

FCP configuration

Status

Make sure the FCP service is started:
fas1> fcp status

 

Verify an FCP host attach or support kit has been installed on the host.

 

If appropriate, verify that SnapDrive has been installed on the host.

 

Create a qtree, igroup, and LUN on the system (using SnapDrive if necessary).

 

Have the customer establish an FCP session from the host.

 

Have the customer create a file system on the LUN and, write some data to it.

 

Have the customer reboot the host and confirm the LUN is still attached.

 

Verification checklist

Status

Where necessary, make sure the CLUSTER license is enabled.

 

Verify the storage failover options on both systems in the HA pair are identical.

 

Temporarily disable AutoSupport:

 

fas1> options autosupport.enable off

Test manual Cluster Failover (in both directions) and ensure success; rectify any errors and prove network connectivity continues to function correctly during failover.

fas1> cf enable
fas1> cf takeover
fas1> partner
fas2/fas1*> ifconfig -a
fas2/fas1*> ifgrp status
fas2/fas1*> partner
fas1> cf giveback

Test Uncontrolled storage Failover (in both directions) by disconnecting one controller from power. Rectify any errors.

 

Test component failure of a PSU (Check status of LEDs and console).

 

Test component failure of a LAN cable (Interface Group Test), include ifgrp favor.

 

Test component failure of a fibre cable to a disk shelf (Path Test). For Multipath HA cabling, ensure all disks have an A and B channel:

 

fas1> storage show disk -p

Run the WireGauge tool to ensure the shelf cabling is correct.

 

When installing a new system into a new NetApp cabinet, switch off one cabinet PDU, and make sure all controllers and shelves remain powered on. Check the status of LEDs and console.

 

Insert an entry into the system log indicating installation is complete:

 

fas1> logger * * * System Install complete <installer name> <date> * * *

Backup the system configuration:

 

fas1> config dump <date>.cfg

Re-enable AutoSupport:

 

fas1> options autosupport.enable on

Post installation checklist

Status

Give new customers a brief tour of FilerView or Systems Manager to explain the basic functions of managing their new system.

 

Log onto the NOW website and give the customer a brief tour of the site. Show them how to access documentation, download software and firmware, search the Knowledge Base, and verify their RMA information.

 

Discuss training available through NetApp University with new customers.

 

Since they are the basis for most Data ONTAP functionality, have the customer explain how Snapshots work. Correct any misconceptions.

 

Create and send a Trip Report within 24 hours to the customer, partner sales team and NetApp sales team.

 

When all tasks are completed, have customer sign a Certificate of Completion.

 

4 Cluster-Mode configuration details

Please work with your Professional Services representative to complete this worksheet prior to the installation date. The requested information enables us to configure your equipment quickly and efficiently.

Depending on the desired configuration, some fields may not be applicable.

This worksheet does not replace the requirement for reading and understanding the appropriate Data ONTAP manuals that describe the operations of Data ONTAP in Cluster-Mode. Data ONTAP manuals can be found at the NetApp Support site under documentation.

Customer checklist of site preparation requirements (check all that apply):

Adequate rack space for the NetApp system and disk shelves has been provided.

The power requirements for the NetApp system and disk shelves have been satisfied.

The network patch cabling and switch port configuration is complete.

Company Name:

Data ONTAP ® Version:

NetApp Sales Order #:

4.1 Cluster information

It is assumed that the cluster will contain four nodes. If there are more than four nodes, replicate the appropriate section to add additional node information.

Starting from Data ONTAP 8.1, the 'cluster create' and 'cluster join' commands have built-in wizards. The wizard generates hostnames, IP addresses, and subnet masks for the cluster LIFs. NetApp recommends using the cluster setup wizard when creating a new cluster or joining an existing cluster.

The wizard has the following rules:

The names of the nodes in the cluster are derived from the name of the cluster. If the cluster is named clust1, the nodes will be named clust1-01, clust1-02, and so on. The node name can be changed later with the 'system node modify' command.

The cluster LIFs will be assigned IP addresses in the 169.254.0.0 range with a Class B subnet mask (255.255.0.0) if the default is taken.

The initial cluster creation and configuration is performed on the first node that is booted. The initial setup script asks whether the operator wants to create a cluster or join a cluster. The first node will be "create" and subsequent nodes will be "join".
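For orientation only (the exact prompt wording varies slightly between releases), the start of the console dialog on the first node looks roughly like this:

Welcome to the cluster setup wizard.
Do you want to create a new cluster or join an existing cluster? {create, join}: create
Enter the cluster name: clust1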

4.1.1 Cluster

The cluster base aggregate will contain the root volume for the cluster Vserver.

Cluster name |
Cluster Base Aggregate |

4.1.2 Licensing

A base license is required, but additional features also need licensing.

License | Values

4.1.3 Admin Vserver

The Cluster Administration Vserver is used to manage the cluster activities. It is different from the node Vservers and is used by System Manager to access the cluster.

Type of information

Value

Cluster administrator password The password for the 'admin' account that the cluster requires before granting cluster administrator access at the console or through a secure protocol.

 

The default rules for passwords are as follows:

A password must be at least eight characters long.

A password must contain at least one letter and one number.

Cluster management LIF IP address A unique IP address for the cluster management LIF. The cluster administrator uses this address to access the cluster admin Vserver and manage the cluster. Typically, this address should be on the data network.

 

Cluster management LIF netmask The subnet mask that defines the range of valid IP addresses on the cluster management network.

 

Cluster management LIF default gateway The IP address for the router on the cluster management network.

 

DNS domain name The name of your network's DNS domain. The domain name cannot contain an underscore (_) and must consist of alphanumeric characters. To enter multiple DNS domain names, separate each name with either a comma or a space.

 

Name server IP addresses The IP addresses of the DNS name servers. Separate each address with either a comma or a space.

 

4.1.4 Time synchronization

Time synchronization details

Values

Time services protocol (NTP)

 

Time Servers (up to 3 internal or external hostnames or IP addresses)

 

Max time skew (<5 minutes for CIFS)

 

4.1.5 Time zone

What time zone should the systems set their clocks to (for example, US/Pacific)?

Time Zone |
Location |

4.2 Node information

Individual controllers are called nodes. Each node has a unique name. Unlike the cluster name, the node name can be changed after it is initially defined.

System information

Node 1

Node 2

Node 3

Node 4

Serial number

       

Node name

       

4.2.1 Physical port identification

Each port services a specific type of function or role. These roles are:

Node Management

Data

Intercluster

Cluster

Node Management ports are required to maintain connections between the node and site services such as NTP and AutoSupport. Data ports are used to transfer data or communicate between the cluster and the applications. Intercluster LIFs are used to set up peer relationships between clusters for replicating data between them. Cluster ports are used exclusively to transfer data between nodes within a cluster.

Due to BURT 322675, NetApp recommends setting up an interface group for the node management LIF on each node of the cluster. The instructions below cover scenarios that do or do not have a fix for this BURT; follow the section that is relevant to your case. Some of these instructions might diverge from the guidelines on the NetApp Support site. Check for updated versions of this document for the latest information.

For versions of Data ONTAP that do not have a fix for BURT 322675, create a single-mode interface group of the following ports and use this interface group as the port for the node management LIF. The interface group should be created before running the 'cluster setup' wizard on the node.

For versions of Data ONTAP that have a fix for BURT 322675:

System Model | Port Grouping
FAS3040 & FAS3070 | e0a and e0c
V3040 & V3070 | e0a and e0c
FAS3140, FAS3160 & FAS3170 | e0a and e0b
V3140, V3160 & V3170 | e0a and e0b
FAS3210, FAS3240 & FAS3270 | e0a and e0b
V3210, V3240 & V3270 | e0a and e0b
FAS6030, FAS6040, FAS6070 & FAS6080 | e0a and e0c
V6030, V6040, V6070 & V6080 | e0a and e0c
FAS6210, FAS6240 & FAS6280 | e0a and e0b
V6210, V6240 & V6280 | e0a and e0b

Some controllers have an e0M interface for environments with a subnet dedicated to managing servers. Include the e0M settings if you have a management subnet.

Note: For systems without an e0P port, leave one network port available for ACP connections to SAS disk shelves.

Node Name | IFGRP | Ports | MTU | Port Role

This table is used to define port roles. If the fix for BURT 322675 is not installed, use the IFGRP column and note the associated ports. If the fix is installed, omit the IFGRP column.

4.2.2 Node management LIF

Each node has a management port that is used to communicate with it.
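A hedged sketch of creating a node management LIF from the clustershell (the node, port, and addresses are placeholders; on releases affected by BURT 322675, the home port would instead be the single-mode interface group described in section 4.2.1):

cluster::> network interface create -vserver node1 -lif mgmt1 -role node-mgmt -home-node node1 -home-port e0M -address 10.10.5.21 -netmask 255.255.255.0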

Node Name | Port or IFGRP | LIF Name | IP Address | Netmask | Gateway

4.3 Cluster network information

Starting from Data ONTAP 8.1, the 'cluster create' and 'cluster join' commands have built-in wizards to generate hostnames, IP addresses, and subnet masks for the cluster LIFs. NetApp recommends using the cluster setup wizard whenever you create a new cluster or join an existing cluster.

The wizard has the following rules:

The names of the nodes in the cluster are derived from the name of the cluster. If the cluster is named cmode, the nodes will be named cmode-01, cmode-02, and so on.

The cluster LIFs are assigned IP addresses in the 169.254.0.0 range with a Class B subnet mask (255.255.0.0).

Once the cluster has been defined and the nodes are joined to the cluster, other elements can be created. These elements can be created using System Manager, Element Manager, or CLI.

4.3.1 Interface groups (IFGRP)

Interface groups bond multiple network ports together for increased bandwidth and/or fault tolerance.
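For example (the node, IFGRP, and port names are placeholders), a Cluster-Mode interface group might be built like this:

cluster::> network port ifgrp create -node node1 -ifgrp a0a -distr-func ip -mode multimode_lacp
cluster::> network port ifgrp add-port -node node1 -ifgrp a0a -port e0c
cluster::> network port ifgrp add-port -node node1 -ifgrp a0a -port e0d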

IFGRP name | Node | Mode | Ports | Distribution function

4.3.2 Configure Virtual LANs (VLANs)

(Optional) VLANs are used to segment network domains. The VLAN has a specific name that is a combination of the associated network port and the switch VLAN ID.
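A minimal sketch (the node, parent port, and VLAN ID are placeholders); the resulting VLAN port name combines the parent port and the VLAN ID:

cluster::> network port vlan create -node node1 -vlan-name a0a-100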

VLAN name | Node | Associated Network Port | Switch VLAN ID

4.3.3 Logical Interfaces (LIFs)

Logical Interfaces are the point at which the customer interfaces with the cluster.
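For illustration (all names and addresses are placeholders), a data LIF might be created like this:

cluster::> network interface create -vserver vs1 -lif vs1_data1 -role data -home-node node1 -home-port a0a-100 -address 192.168.30.50 -netmask 255.255.255.0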

LIF name | Home node | Home port | Netmask | Routing group | Failover group

4.4 Intercluster network information

The intercluster ports are used for cross-cluster communication. An intercluster port should be routable to another intercluster port or to a data port of another cluster.
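A hedged example (the names and addresses are placeholders) of creating a dedicated intercluster LIF on a node:

cluster::> network interface create -vserver node1 -lif ic1 -role intercluster -home-node node1 -home-port e0e -address 192.168.40.61 -netmask 255.255.255.0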

Node name | Port | LIF name | IP address | Netmask | Gateway

4.5 Vserver information

Application access to data residing in the cluster must go through a Vserver. Vservers can be used to support single or multiple protocols, user groups, or whatever delineation the customer chooses. Additionally, Vservers can restrict allocation of data to specific aggregates.

To create a Vserver, you can use any of the available administrative interfaces: System Manager, Element Manager, or CLI. The Vserver Setup wizard has the following sub-wizards, which you can run after you create a Vserver:

Network setup

Storage setup

Services setup

Data access protocol setup

Use the following section as a guide to create Vservers. Replicate this section as many times as required.
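As a sketch only (the names are placeholders, and parameters such as -ns-switch have changed in later Data ONTAP releases), a Vserver might be created from the CLI like this:

cluster::> vserver create -vserver vs1 -rootvolume vs1_root -aggregate aggr1 -ns-switch file -rootvolume-security-style unix -language C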

4.5.1 Creating Vserver

Type of information

Value

Vserver name The name of a Vserver can contain alphanumeric characters and the following special characters: ".", "-", and "_". However, the name of a Vserver must not start with a number or a special character.

 

Protocols Protocols that you want to configure or allow on that Vserver.

 

Name Services Services that you want to configure on the Vserver

 

Aggregate name Aggregate name on which you want to create the Vserver's root volume. The default aggregate name is used if you do not specify one.

 

Language Setting Language you want the volumes to use.

 

4.5.2 Creating Volumes on the Vserver

Volume name | Aggregate name | Volume size | Junction path (NAS only)
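For example (all values are placeholders), a NAS volume with a junction path might be created as follows:

cluster::> volume create -vserver vs1 -volume vol1 -aggregate aggr1 -size 100g -junction-path /vol1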

4.5.3 IP Network Interface on the Vserver

End user applications connect to the data in the cluster only through interfaces defined to Vservers. The following table models the first 4 LIFs. Replicate the Interface columns, or the entire table if more interfaces are required.

Type of Information | Interface 1 | Interface 2 | Interface 3 | Interface 4

LIF name The default LIF name is used if you do not specify one.

       

IP address

       

Subnet mask

       

Home node Home node is the node on which you want to create a logical interface. The default home node is used if you do not specify one.

       

Home port Home port is the port on which you want to create a logical interface. The default home port is used if you do not specify one.

       

Routing Group

       

Protocols Protocols that can use the LIF.

       

Failover Group

       

DNS Zone

       

4.5.4 FCP Network Interface on the Vserver

Type of information

Value

LIF name The default LIF name is used if you do not specify one.

 

Home node Home node is the node on which you want to create a logical interface. The default home node is used if you do not specify one.

 

Home port Home port is the port on which you want to create a logical interface. The default home port is used if you do not specify one

 

4.5.5

LDAP services

Type of information

Value

LDAP server IP address

 

LDAP server port number The default LDAP server port number is used if you do not specify one.

 

LDAP server minimum bind authentication level

 

Bind DN and password

 

Base DN

 

4.5.6

NIS services

Type of information

Value

NIS domain name

 

IP addresses of the NIS servers

 

4.5.7

DNS services

Type of information

Value

DNS domain name

 

IP addresses of the DNS servers

 

4.5.8 CIFS protocol

 

Type of information

Value

 

Domain name

 
 

CIFS share name The default CIFS share name is used if you do not specify one.

 

Note: CIFS share names must not contain Unicode characters. You can use alphanumeric characters and the following special characters: ".", "!", "@", "#", "$", "%", "&", "(", ")", ",", "_", "'", "{", "}", "~", and "-".

 

CIFS share path The default CIFS share path is used if you do not specify one.

 
 

CIFS access control list The default CIFS access control list is used if you do not specify one.

 

4.5.9

iSCSI protocol

 

Type of information

Value

 

igroup name The default igroup name is used if you do not specify one.

 
 

Names of the initiators

 
 

Operating system of the initiators

 
 

LUN names The default LUN name is used if you do not specify one.

 
 

Volume name The volume that the LUN will reside on.

 
 

LUN sizes

 

4.5.10

FCP protocol

 

Type of Information

Value

 

igroup name The default igroup name is used if you do not specify one.

 
 

WWPN Worldwide port names (WWPNs) of the initiators.

 
 

Operating system of the initiators.

 
 

LUN names The default LUN name is used if you do not specify one.

 
 

Volume name The volume that the LUN will reside on.

 
 

LUN sizes

 

4.6 Support information

4.6.1 Remote Management Settings (RLM/BMC/SP)

All systems include a Remote LAN Module (RLM), a Baseboard Management Controller (BMC), or a Service Processor (SP) to provide out-of-band control of the storage system. NetApp recommends configuring these interfaces for easier, secure management and troubleshooting.

     

Node name | IP address | Netmask | Default gateway | Mail server hostname | Mail server IP address

4.6.2 AutoSupport settings

AutoSupport is a 'phone home' function that notifies you and NetApp of any hardware problems, so that replacement hardware can be automatically delivered to solve the issue. (The system must remain on a support contract; the level of responsiveness depends on the service contract level, from 2 hours to Next Business Day.)

Enable AutoSupport? If not, provide justification. | SMTP Server Name or IP | AutoSupport transport | AutoSupport from e-mail address | AutoSupport to e-mail address(es)

   

AutoSupport transport (per node): one of HTTPS (default), HTTP, or SMTP.


4.6.3 Customer/RMA details

Verify this information by logging into the NetApp Support site (http://now.netapp.com). This information is required to ensure that Technical Support personnel can reach you and that replacement parts are sent to the correct address.

Customer/RMA details | Primary contact | Secondary contact

Contact name

   

Contact address

   

Contact phone

   

Contact e-mail address

   

RMA address

 

RMA attention to name

 

5 Cluster-Mode installation and verification checklists

The installer will perform the following checks to ensure that your new systems are configured correctly and are ready to turn over to you.

Physical installation

Status

Check and verify all ordered components were delivered to the customer site.

 

Confirm the NetApp controllers are properly installed in the cabinets.

 

Confirm there is sufficient airflow and cooling in and around the NetApp system.

 

Confirm all power connections are secured adequately.

 

Confirm the racks are grounded (if not in NetApp cabinets).

 

Confirm there is sufficient power distribution to NetApp controllers & disk shelves.

 

Confirm power cables are properly arranged in the cabinet.

 

Confirm that LEDs and LCDs are displaying the correct information.

 

Confirm that cables from NetApp controllers to disk shelves and among disk shelves are not crimped or stretched (fiber cable service loops should be bigger than your fist).

 

Confirm that fiber cables laid between cabinets are properly connected and are not prone to physical damage.

 

Confirm disk shelves IDs are set correctly.

 

Confirm that fiber channel 2Gb/4Gb loop speeds are set correctly on DS14 shelves and proper LC-LC cables are used.

 

Confirm that Ethernet cables are arranged and labeled properly.

 

Confirm all Fiber cables are arranged and labeled properly.

 

Confirm the Cluster Interconnect Cables are connected (for HA pairs).

 

Confirm there is sufficient space behind the cabinets to perform hardware maintenance.

 

Confirm that the Cisco Nexus Cluster Interconnect switches are properly placed in the cabinet.

 

Confirm that the Cisco IP switches are properly placed in the cabinet.

 

Confirm that the Cisco FCP switches are properly placed in the cabinet.

 

Confirm that the latest “Reference Configuration File” for the Cisco Nexus switches has been installed.

 

Confirm that any VLANs required have been defined to the appropriate switches.

 

Confirm that the Ethernet cables are properly connected to the Cisco IP switches.

 

Confirm that the FCP cables are properly connected to the Cisco Fabric switches.

 

Power On and Perform Cluster Creation, Node and Vserver configuration

Status

Power up the disk shelves to ensure that the disks spin up and are initialized properly.

 

Connect the console to the serial port cable and establish a console connection using a terminal emulator such as Tera Term, PuTTY, or HyperTerminal.

 

Note: Log all console output to a text file.

Power on the controllers.

 

On the first controller console, reply to the initial Cluster Setup response request with “create” to initialize the cluster and the first node.

 

On the next controller console, reply to the initial Cluster Setup response request with “join” to initialize the second node and join the cluster.

 

On each subsequent controller, perform the same task as the second controller to join them as nodes in the cluster.

 

Install System Manager 2.0 on a Windows or Linux system.

 

Use System Manager 2.0 to create the first Vservers.

 

Use the WireGauge tool to verify that all the shelves are cabled correctly and switches are properly connected.

 

Miscellaneous configuration

Status

Where necessary, confirm the network switches are configured to support dynamic or static multi-mode IFGRPs (LACP or Etherchannel) as per customer requirement.

 

Has the customer accessed the system console using the RLM / BMC / SP?

 

Verify network connectivity and DNS resolution is configured properly:

 

cluster::network> ping -node <node name> -destination <hostname of DNS server>

Verify configured IFGRPs with more than one port function properly by disconnecting one or more cables

 

Confirm each node date and timezone is set correctly

 

cluster::> system node date show
cluster::> timezone

Display whether NTP is used in the cluster

 

cluster::> system services ntp config show
cluster::> system services ntp server show

Confirm that AutoSupport is configured and functioning correctly.

 

cluster::> system node autosupport show

Confirm that telnet and RSH is disabled and SSH is enabled

 

If required, confirm SNMP is configured on all controllers to the appropriate traphost

 

Download documentation pack and provide to customer

 

CIFS configuration (per Vserver servicing CIFS)

Status

Check the export policy rules to ensure that the CIFS access protocol will allow access

 

cluster::vserver export-policy rule> show

If necessary, run through CIFS setup and join the controllers to the customer's Active Directory (requires an AD account with suitable permissions).

 

Confirm the NetApp controller's local administrator account was created while configuring the CIFS service (and the password is set appropriately).

 

Confirm the permissions to the root volume (c$) and /etc folder (etc$) are configured appropriately (that is, NOT Everyone Full Control).

 

Confirm that the appropriate Windows Domain Administrators group(s) are member(s) of the cluster's local administrators group.

 

Create a share.

 

Have the customer map the share to a host, write data to it.

 

Create a Snapshot and confirm that Snapshot visibility is configured appropriately (for example, hidden to regular CIFS clients)

 

Confirm that qtrees storing CIFS data have the appropriate security style specified:

 

cluster::volume> qtree show -vserver <vserver> -volume <volume name> -qtree <qtree name>

Confirm that qtrees storing CIFS data have the appropriate 'oplocks' setting.

 

Take a Snapshot and confirm that Snapshot visibility is configured appropriately (for example, hidden to regular clients)

 

NFS configuration (per Vserver servicing NFS)

Status

Create a qtree and confirm the appropriate security style is specified

 

cluster::volume> qtree create -vserver <vserver> -volume <volume name> -qtree <qtree name> -security-style {unix|ntfs|mixed}
cluster::volume> qtree show -vserver <vserver> -volume <volume name> -qtree <qtree name>

Check the export policy rules to ensure that the NFS access protocol will allow access

 

cluster::vserver export-policy rule>show

Have the customer mount the qtree from a host and write data to it.

 

Take a Snapshot and confirm that Snapshot visibility is configured appropriately (for example, hidden to regular clients)

 

iSCSI configuration (per Vserver servicing iSCSI)

Status

Make sure the iSCSI service is started.

 

Verify that an iSCSI host attach or support kit has been installed on the host.

 

If appropriate, verify SnapDrive has been installed on the host.

 

Create a qtree, igroup, and LUN on the system (using SnapDrive if necessary).

 

Have the customer establish an iSCSI session from the host.

 

Create a file system on the LUN, write some data to it and confirm the data is on the LUN.

 

Reboot the host and confirm that the LUN is still attached.

 

FCP configuration (per Vserver servicing FCP)

Status

Make sure the FCP service is started

 

Verify an FCP host attach or support kit has been installed on the host.

 

If appropriate, verify that SnapDrive has been installed on the host.

 

Create a qtree, igroup, and LUN on the system (using SnapDrive if necessary).

 

Have the customer establish an FCP session from the host.

 

Have the customer create a file system on the LUN and, write some data to it.

 

Have the customer reboot the host and confirm the LUN is still attached.

 

Verification checklist

Status

Where necessary, make sure the CLUSTER license is enabled.

 

Verify the cluster options on all nodes in the cluster are identical.

 

Temporarily disable AutoSupport on nodes of the cluster.

 

cluster::system node autosupport> modify -node <node name> -state disable

Test manual node Takeover (in both directions) and ensure success, rectify any errors and prove network connectivity continues to function correctly during failover.

 

cluster::> storage failover takeover -ofnode <node> -bynode <node>
cluster::> storage failover show-giveback
cluster::> storage failover giveback -ofnode <node> -fromnode <node> -require-partner-waiting true

Test Uncontrolled Cluster Failover (in both directions) by disconnecting one controller from power. Rectify any errors.

 

Repeat above test for all HA pairs in the cluster

 

Test component failure of a PSU (Check status of LEDs and console).

 

Test component failure of a LAN cable

 

Run the WireGauge tool to ensure the shelf cabling is correct.

 

When installing a new system into a new NetApp cabinet, switch off one cabinet PDU, and make sure all controllers and shelves remain powered on. Check the status of LEDs and console.

 

Re-enable AutoSupport:

 

cluster::> system node autosupport modify -node <node name> -state enable

Post installation checklist

Status

Give new customers a brief tour of Systems Manager and Element Manager to explain the basic functions of managing their new cluster.

 

Log onto the NOW website and give the customer a brief tour of the site. Show them how to access documentation, download software and firmware, search the Knowledge Base, and verify their RMA information.

 

Discuss training available through NetApp University with new customers.

 

Since they are the basis for most Data ONTAP functionality, have the customer explain how Snapshots work. Correct any misconceptions.

 

Create and send a Trip Report within 24 hours to the customer, partner sales team and NetApp sales team.

 

When all tasks are completed, have customer sign a Certificate of Completion.