Student Guide
Course ID: STRSW-ILT-D8CADM-REV03
Catalog Number: STRSW-ILT-D8CADM-REV03-SG
Content Version: 1.0
ATTENTION
The information contained in this course is intended only for training. This course contains information and activities that,
while beneficial for the purposes of training in a closed, non-production environment, can result in downtime or other
severe consequences in a production environment. This course material is not a technical reference and should not,
under any circumstances, be used in production environments. To obtain reference materials, refer to the NetApp product
documentation that is located at http://now.netapp.com/.
COPYRIGHT
© 2013 NetApp, Inc. All rights reserved. Printed in the U.S.A. Specifications subject to change without notice.
No part of this document covered by copyright may be reproduced in any form or by any means (graphic, electronic, or
mechanical, including photocopying, recording, taping, or storage in an electronic retrieval system) without prior written
permission of NetApp, Inc.
TRADEMARK INFORMATION
NetApp, the NetApp logo, Go further, faster, AdminNODE, Akorri, ApplianceWatch, ASUP, AutoSupport, BalancePoint,
BalancePoint Predictor, Bycast, Campaign Express, ChronoSpan, ComplianceClock, ControlNODE, Cryptainer, Data
ONTAP, DataFabric, DataFort, Decru, Decru DataFort, DenseStak, Engenio, E-Stack, FAServer, FastStak, FilerView,
FlexCache, FlexClone, FlexPod, FlexScale, FlexShare, FlexVol, FPolicy, GatewayNODE, gFiler, Imagine Virtually
Anything, Infinivol, Lifetime Key Management, LockVault, Manage ONTAP, MetroCluster, MultiStore, NearStore, NetApp
Select, NetCache, NOW (NetApp on the Web), OnCommand, ONTAPI, PerformanceStak, RAID-DP,
SANscreen, SANshare, SANtricity, SecureAdmin, SecureShare, Securitis, Service Builder, Simplicity, Simulate ONTAP,
SnapCopy, SnapDirector, SnapDrive, SnapLock, SnapManager, SnapMirror, SnapMover, SnapProtect, SnapRestore,
Snapshot, SnapValidator, SnapVault, StorageGRID, StorageNODE, StoreVault, SyncMirror, Tech OnTap, VelocityStak,
vFiler, VFM, Virtual File Manager, WAFL, and XBB are trademarks or registered trademarks of NetApp, Inc. in the United
States and/or other countries.
All other brands or products are either trademarks or registered trademarks of their respective holders and should be
treated as such.
© 2013 NetApp, Inc. This material is intended only for training. Reproduction is not authorized.
TABLE OF CONTENTS
WELCOME
MODULE 1: OVERVIEW
MODULE 2: INSTALLATION AND CONFIGURATION
MODULE 3: CLUSTER ADMINISTRATION BASICS
MODULE 4: ARCHITECTURE
MODULE 5: PHYSICAL DATA STORAGE
MODULE 6: LOGICAL DATA STORAGE
MODULE 7: PHYSICAL NETWORKING
MODULE 8: LOGICAL NETWORKING
MODULE 9: NAS PROTOCOLS
MODULE 10: SAN PROTOCOLS
MODULE 11: STORAGE EFFICIENCY
MODULE 12: DATA PROTECTION: SNAPSHOT AND SNAPMIRROR COPIES
MODULE 13: DATA PROTECTION: BACKUPS AND DISASTER RECOVERY
MODULE 14: CLUSTER MANAGEMENT
MODULE 15: RECOMMENDED PRACTICES
APPENDIX: TECHNICAL REPORTS AND KNOWLEDGE BASE ARTICLES
Clustered Data ONTAP Administration
Course ID: STRSW-ILT-D8CADM-REV03
NetApp Confidential
Classroom Logistics
Schedule: start time, stop time, break times
Safety: alarm signal, evacuation procedure, electrical safety guidelines
Facilities: food and drinks, restrooms, phones
CLASSROOM LOGISTICS
Course Objectives
1 of 2
By the end of this course, you should be able to:
Explain the primary benefits of a Data ONTAP cluster
Create a cluster
Implement role-based administration
Manage the physical and logical resources within a
cluster
Manage features to guarantee nondisruptive
operations
Discuss storage and RAID concepts
Create aggregates
List the steps that are required to enable storage
failover (SFO)
COURSE OBJECTIVES: 1 OF 2
Course Objectives
2 of 2
Create a Flash Pool
Build a namespace using multiple volumes
Configure FlexCache
Create an infinite volume
Identify supported cluster interconnect switches
Set up and configure SAN and NAS protocols
Configure the storage-efficiency features
Administer mirroring technology and data protection
Explain the notification capabilities of a cluster
Scale a cluster horizontally
Configure the storage QoS feature
COURSE OBJECTIVES: 2 OF 2
Course Agenda: Day 1
Morning
Module 1: Overview
Afternoon
Module 2: Installation and Configuration
Module 3: Cluster Administration Basics
Course Agenda: Day 2
Morning
Module 4: Architecture
Module 5: Physical Data Storage
Afternoon
Module 6: Logical Data Storage
Module 7: Physical Networking
Course Agenda: Day 3
Morning
Module 8: Logical Networking
Module 9: NAS Protocols
Afternoon
Module 10: SAN Protocols
Course Agenda: Day 4
Morning
Module 11: Storage Efficiency
Module 12: Data Protection: Snapshot and
SnapMirror Copies
Afternoon
Module 13: Data Protection: Backups and
Disaster Recovery
Course Agenda: Day 5
Morning
Module 14: Cluster Management
Afternoon
Module 14: Cluster Management (Continued)
Module 15: Recommended Practices
NetApp University Information Sources
NetApp Support
http://support.netapp.com
NetApp University
http://www.netapp.com/us/services-support/university/
Module 1
Overview
MODULE 1: OVERVIEW
Module Objectives
MODULE OBJECTIVES
Clustered Data ONTAP
Clustered Data ONTAP Highlights
Primary Reasons to Use Clustered Data ONTAP
Scalability: performance and capacity
Flexibility: data management and movement
Transparency: namespaces, storage failover,
NAS LIF failover and migration, resource use
and balancing, nondisruptive operation
Scalability
SCALABILITY
Clustered Data ONTAP solutions can scale from 1 to 24 nodes, and are mostly managed as one large system.
More importantly, to client systems, a cluster looks like a single file system. The performance of the cluster
scales linearly to multiple gigabytes per second of throughput, and capacity scales to petabytes.
Clusters are built for continuous operation; no single failure on a port, disk, card, or motherboard will cause
data to become inaccessible in a system. Clustered scaling and load balancing are both transparent.
Clusters provide a robust feature set, including data protection features such as Snapshot copies, intracluster
asynchronous mirroring, SnapVault backups, and NDMP backups.
Clusters are a fully integrated solution. This example shows a 20-node cluster that includes 10 FAS systems
with 6 disk shelves each, and 10 FAS systems with 5 disk shelves each. Each rack contains a high-availability
(HA) pair with storage failover (SFO) capabilities.
Scalability: Performance (NAS)
Scalability: Capacity
The ability to rapidly and seamlessly deploy new storage or applications or both
No required downtime
SCALABILITY: CAPACITY
In the example on this slide, more capacity is needed for project B. Follow these steps to scale the capacity:
1. Add two nodes to make a 10-node cluster with additional disks.
2. Transparently move some volumes to the new storage.
3. Expand volume B in place.
This movement and expansion are transparent to client machines.
Flexibility: The Virtual Storage Tier
The Virtual Storage Tier: data-driven, real-time, and self-managing
Flash Cache
Storage-level RAID-protected cache
PCI-e modules
Capacities of up to 2 TB
Flash Pool
A RAID-protected aggregate
A solid-state drive (SSD) tier that is used as cache
A hard disk tier that is used as storage
Hard Disk Storage
Transparency: Load Optimization
Optimized performance
Maximized disk use
Transparency to applications
Nondisruptive Operation
NONDISRUPTIVE OPERATION
Nondisruptive operation is a key feature of Data ONTAP clustering. Three critical components of
nondisruptive operation include DataMotion for Volumes (volume move), logical interface (LIF) migration,
and SFO.
SFO is covered in Module 6: Logical Data Storage.
NAS LIF migration is covered in Module 8: Logical Networking.
Volume move is covered in Module 14: Cluster Management.
Virtual Storage Servers
Virtual Storage Servers (Vservers):
Represent groupings of physical and logical resources
Are conceptually similar to vFilers
Node Vservers:
Represent each physical node
Are associated with cluster LIFs, node management LIFs, and intercluster LIFs
Administrative Vserver:
Represents the physical cluster
Is associated with the cluster management LIF
Data Vservers:
Are a virtual representation of a physical data server
Are associated with data LIFs
Cluster Resources
(Diagram: a cluster of HA pairs joined by the cluster interconnect, showing the cluster management LIF (cmg), node management LIFs (mg1), cluster LIFs, data LIFs, aggregates, node root volumes, and data Vservers vserverA, vserverB, and vserverC.)
CLUSTER RESOURCES
The example on this slide shows many of the key resources in a cluster: three types of Vservers (node, data,
and administrative), plus nodes, aggregates, volumes, and data LIFs.
Physical and Logical Elements
Physical: nodes, disks, aggregates, network ports, FC ports, tape devices
Logical: clusters, volumes, Snapshot copies, mirror relationships, Vservers, LIFs
vserver show (Summary View)
cluster1::> vserver show
Admin Root Name Name
Vserver Type State Volume Aggregate Service Mapping
----------- ------- --------- ---------- ---------- ------- -------
cluster1 admin - - - - -
cluster1-01 node - - - - -
cluster1-02 node - - - - -
vs1 data running vs1 aggr1a file file
Administrative Vserver
(Diagram: the administrative Vserver with its cluster management LIF, cmg.)
ADMINISTRATIVE VSERVER
Node Vservers
(Diagram: node Vservers with their data network ports, node management LIFs (mg1), cluster LIFs, aggregates, and root volumes, joined by the cluster interconnect.)
NODE VSERVERS
Data Vservers
(Diagram: data Vservers vserverA, vserverB, and vserverC and their volumes.)
DATA VSERVERS
Data Vserver Details
Putting It All Together
(Diagram: the complete cluster, combining the node Vservers, data Vservers, management and data LIFs, aggregates, and the HA and cluster interconnects.)
Module Summary
MODULE SUMMARY
Exercise
Module 1: Overview
Time Estimate: 10 Minutes
EXERCISE
Please refer to your exercise guide.
Module 2
Installation and Configuration
Module Objectives
MODULE OBJECTIVES
Basic Steps for Setting Up a Cluster
Hardware Setup
Connect:
Controllers to disk shelves
High-availability (HA) interconnect
Controllers to the networks
Any tape devices
Controllers and disk shelves to power
HARDWARE SETUP
Connect controllers to disk shelves. Verify that shelf IDs are set properly.
If required for your controller type, connect nonvolatile RAM (NVRAM) high-availability (HA) cable
between partners. The connections can be 10-GbE or InfiniBand, depending on your storage controllers.
Connect controllers to the networks.
If present, connect any tape devices. This task can be performed later.
Connect controllers and disk shelves to power.
Communication Connections
COMMUNICATION CONNECTIONS
Each controller should have a console connection, which is required to get to the firmware and to get to the
Boot menu (for the setup, installation, and initialization options, for example). A remote management device
connection, although not required, is helpful in the event that you cannot get to the UI or console. Remote
management enables remote booting, the forcing of core dumps, and other actions.
Each node must have two connections to the dedicated cluster network. Each node should have at least one
data connection, although these data connections are necessary only for client access. Because the nodes are
clustered together, it's possible to have a node that participates in the cluster with its storage and other
resources but doesn't field client requests. Typically, however, each node has data connections.
The cluster connections must be on a network that is dedicated to cluster traffic. The data and management
connections must be on a network that is distinct from the cluster network.
Disk Cabling
(Diagram: four nodes with NVRAM interconnects between HA partners and simplified FC or SAS connections to the disk shelves.)
DISK CABLING
A large amount of cabling must be done with a Data ONTAP cluster. Each node has NVRAM
interconnections to its HA partner. Each node has FC or SAS connections to its disk shelves and to those of
its HA partner.
In a multipath high-availability (MPHA) cabling strategy, each storage controller has multiple ways to
connect to a disk. An I/O module failure does not require a controller failover. This method is the most
resilient and preferred method of shelf cabling.
Ethernet cabling for alternate control path (ACP) requires one connection to each controller, connected in a
series through all shelves. First you connect stack to stack. Then you connect between I/O modules from top
to bottom in each stack.
Network Cabling
1 of 2
(Diagram: redundant cluster interconnect switches plus separate management and data networks.)
NETWORK CABLING: 1 OF 2
For customers with strict security requirements, management ports can be connected to a network that is
separate from the data network. In that case, management ports must have a role of management, and network
failover cannot occur between data and management interfaces.
Network Cabling
2 of 2
(Diagram: redundant cluster interconnect switches plus a combined data and management network.)
NETWORK CABLING: 2 OF 2
When you cable the network connections, consider the following:
Each node is connected to at least two distinct networks: one for management (the UI) and data access
(clients) and one for intracluster communication. NetApp supports two 10-GbE cluster connections to
each node to create redundancy and improve cluster traffic flow.
The cluster can be created without data network connections but not without cluster network connections.
Having more than one data network connection to each node creates redundancy and improves client
traffic flow.
Powering On a Node and Cluster
Firmware
FIRMWARE
1. Use LOADER firmware.
2. From the console, early in the booting process, press any key to enter the firmware.
3. Use version to show the firmware version.
4. Two boot device images exist (depending on platform): flash0a and flash0b.
CompactFlash
USB flash
5. Use printenv to show the firmware environment variables.
6. Use setenv to set the firmware environment variables; for example, setenv AUTOBOOT true.
To copy flash0a to flash0b, run flash flash0a flash0b. To flash (put) a new image onto the
primary flash, you must first configure the management interface. The auto option of ifconfig can be
used if the management network has a Dynamic Host Configuration Protocol (DHCP) or BOOTP server. If it
doesn't, you must run ifconfig <interface> addr=<ip> mask=<netmask> gw=<gateway>.
After the network is configured, ensure that you can ping the IP address of the TFTP server that contains the
new flash image. To then flash the new image, run
flash tftp://<tftp_server>/<path_to_image> flash0a.
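Taken together, the steps above look something like the following LOADER console session. This is only a sketch: the interface name, addresses, and image path shown here are hypothetical, and the exact prompts vary by platform.

```
LOADER> version
LOADER> printenv
LOADER> setenv AUTOBOOT true
LOADER> ifconfig e0M addr=192.168.0.50 mask=255.255.255.0 gw=192.168.0.1
LOADER> ping 192.168.0.10
LOADER> flash tftp://192.168.0.10/images/flash_image.tgz flash0a
```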
The Setup Procedure
The Boot Menu
Please choose one of the following:
1. Normal Boot.
2. Boot without /etc/rc (no effect in Clustered ONTAP).
3. Change password.
4. Clean configuration and initialize all disks.
5. Maintenance mode boot.
6. Update flash from backup config.
7. Install new software first.
8. Reboot node.
Selection (1-8)?
Installing the Data ONTAP
Operating System on a Node
You need:
Access to an FTP, TFTP, or HTTP server
The software image file on that server
From the boot menu, complete the following:
1. Select option 7.
2. When prompted, enter a URL to a Data ONTAP tgz image.
3. When complete, allow the system to boot.
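As an illustration, a software installation session might begin roughly as follows; the exact prompt text varies by release, and the server address and image path are hypothetical.

```
Selection (1-8)? 7
What is the URL for the package? http://192.168.0.10/ontap/image.tgz
```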
Initializing a Node
From the Boot menu, select option 4:
Initialization clears the three disks that the system
uses for the first aggregate that it creates.
NOTE: This action requires time, depending on disk size.
Initialization creates one aggregate (for this node) and
a vol0 root volume on the aggregate.
Initialization must be run on both nodes of each HA
pair.
INITIALIZING A NODE
Because all disks are initialized in parallel, the time that is required to initialize the disks is based
on the size of the largest disk that is attached to the node, not on the sum capacity of the disks. After the disks
are initialized, the node's first aggregate and its vol0 volume are automatically created.
The Cluster Setup Wizard
1 of 3
From the Boot menu of an initialized
controller:
1. Boot normally.
2. Log in as admin with no password.
3. Follow the prompts.
You can also run cluster setup from the
CLI.
The Cluster Setup Wizard
2 of 3
The first node creates the cluster.
You need the:
Cluster name
Cluster network ports and MTU size (usually best to use
default MTU)
Cluster base license key
Cluster management interface port, IP address, network
mask, and default gateway
Node management interface port, IP address, network
mask, and default gateway
DNS domain name
IP addresses of the DNS server
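As a sketch, a create session that supplies this information might begin as follows. All values are hypothetical, and the wizard's exact prompts vary by Data ONTAP release.

```
::> cluster setup
Do you want to create a new cluster or join an existing cluster? {create, join}: create
Enter the cluster name: cluster1
Enter the cluster base license key: XXXXXXXXXXXXXXXXXXXXXXXXXXXX
Enter the cluster management interface port: e0c
Enter the cluster management interface IP address: 192.168.0.20
Enter the cluster management interface netmask: 255.255.255.0
Enter the cluster management interface default gateway: 192.168.0.1
Enter the DNS domain names: example.com
Enter the name server IP addresses: 192.168.0.5
```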
The Cluster Setup Wizard
3 of 3
Subsequent nodes join the cluster.
You need the:
Cluster network ports and MTU size
Node management interface port, IP address,
network mask, and default gateway
The Normal Boot Sequence
1. The firmware loads the kernel from the boot device.
2. The kernel mounts the / root image from rootfs.img on
the boot device.
3. Init is loaded, and startup scripts run.
4. NVRAM kernel modules are loaded.
5. The /var partition on NVRAM is created and mounted
(restored from boot device if a backup copy exists).
6. The management gateway daemon (mgwd) is started.
7. The data module, the network module, and other
components are loaded.
8. The vol0 root volume is mounted from the local data
module.
9. The CLI is ready for use.
System Setup Tool
The System Setup tool and simple instructions are included with every FAS2200 shipment.
System Setup Benefits
Set up your FAS2200 three times faster.
You dont need to be a storage expert.
Defaults take the guesswork out of the setup
process.
You get NetApp best practices for optimal
performance.
Deduplication, flexible volumes, auto grow,
and storage provisioning
System Setup Installation Requirements
System Setup:
Runs on the following systems:
Windows XP
Windows 7
Windows Server 2008 R2 x64
Requires .NET Framework 3.5 SP1
Can configure FAS2200 systems running:
Clustered Data ONTAP 8.2
Data ONTAP 8.1 7-Mode
Data ONTAP 8.1.1 7-Mode
Data ONTAP 8.1.2 7-Mode
More Information About System Setup
support.netapp.com
Download System Setup
Access documentation
fieldportal.netapp.com
Slides
Sales FAQs
Network Time Protocol
Manually set the date, time, and time zone with
system date modify.
Kerberos is time-sensitive and typically requires the
Network Time Protocol (NTP).
NTP is disabled by default.
NTP enablement and disablement are cluster-wide.
The commands for verifying and monitoring NTP are:
system services ntp config show
system services ntp server show
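As a sketch, enabling NTP and adding a time server for one node might look like the following; the server address and node name are hypothetical, and parameter names vary across Data ONTAP releases.

```
cluster1::> system services ntp server create -node cluster1-01 -server 192.168.0.5
cluster1::> system services ntp config modify -enabled true
cluster1::> system services ntp config show
cluster1::> system services ntp server show
```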
7-Mode Transition Tool
Module Summary
MODULE SUMMARY
Exercise
Module 2:
Installation and Configuration
Time Estimate: 30 minutes
EXERCISE
Please refer to your exercise guide.
Module 3
Cluster Administration Basics
Module Objectives
After this module, you should be able to:
Describe and utilize the various tools to manage a
cluster
Determine which commands are available for a
command directory
Determine whether parameters are required or
optional for a command
Switch among privilege levels
Describe the Vserver administrative roles
Explore policies and job schedules
Discuss the enhanced node-locked licensing model
MODULE OBJECTIVES
Lesson 1
LESSON 1
Overview
You can manage resources within a cluster by
using the CLI or the GUI.
The CLI accesses the hierarchical command
structure.
You can access an entire cluster from a cluster
management or node management logical
interface (LIF).
A cluster management LIF can fail over to a
surviving node if its host node fails.
The three administrative privilege levels are
admin, advanced, and diagnostic.
OVERVIEW
The CLI and the GUI provide access to the same information, and you can use both to manage the same
resources within a cluster.
The hierarchical command structure consists of command directories and commands. A command directory
might contain commands, more command directories, or both. In this way, command directories resemble file
system directories and file structures.
Command directories provide groupings of similar commands. For example, all commands for storage-related
actions fall somewhere within the storage command directory. Within that directory are directories for disk
commands and aggregate commands. The command directories provide the context that enables you to
use similar commands for different objects. For example, you use create commands to create all objects
and resources and delete commands to remove objects and resources, but the commands are unique
because of the context (command directory) in which the commands are used. Therefore, storage
aggregate create is different from network interface create.
The cluster login is accessible from a cluster management logical interface (LIF). You can also log in to each
node by using the node management LIF for the node.
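For example, the same create verb behaves differently depending on its command directory, and a question mark lists the contents of a directory. The parameter values below are illustrative only.

```
cluster1::> storage ?
cluster1::> storage aggregate create -aggregate aggr1 -nodes cluster1-01 -diskcount 5
cluster1::> network interface create -vserver vs1 -lif vs1_lif3 -role data -home-node cluster1-01 -home-port e0d -address 192.168.239.76 -netmask 255.255.255.0
```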
Shells
SHELLS
The Cluster Shell
Management LIFs
The cluster management LIF:
Is a persistent LIF to use for SSH access
Is unique within the cluster
Is assigned to a data port
Can fail over and migrate among nodes
The node management LIF:
Is unique for a node
Is assigned to a data or node-mgmt port
Can only fail over or migrate to a port on the same
node
Can access the entire cluster
MANAGEMENT LIFS
Clustered Data ONTAP has one management virtual interface on each node that is called a node
management LIF. Node management LIFs do not fail over to other nodes.
Clustered Data ONTAP also includes a management LIF, the cluster management LIF, that has failover and
migration capabilities. Therefore, regardless of the state of each individual node (for example, if a node is
rebooting after an upgrade or is halted for hardware maintenance), a LIF address can always be used to
manage the cluster, and the current node location of that LIF is transparent.
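For example, the cluster management LIF can be migrated to a port on another node and later reverted to its home port. The node and port names here are illustrative, and parameter names vary by release.

```
cluster1::> network interface migrate -vserver cluster1 -lif cluster_mgmt -destination-node cluster1-02 -destination-port e0c
cluster1::> network interface revert -vserver cluster1 -lif cluster_mgmt
```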
Management LIFs
The Output of net int show
cluster1::> net int show
(network interface show)
Logical Status Network Current Current Is
Vserver Interface Admin/Oper Address/Mask Node Port Home
----------- ---------- ---------- ------------------ ------------- ------- ----
cluster1
cluster_mgmt up/up 192.168.239.20/24 cluster1-01 e0c true
cluster1-01
clus1 up/up 169.254.165.103/16 cluster1-01 e0a true
clus2 up/up 169.254.185.207/16 cluster1-01 e0b true
mgmt up/up 192.168.239.21/24 cluster1-01 e0c true
cluster1-02
clus1 up/up 169.254.49.175/16 cluster1-02 e0a true
clus2 up/up 169.254.126.156/16 cluster1-02 e0b true
mgmt up/up 192.168.239.22/24 cluster1-02 e0c true
vs1
vs1_lif1 up/up 192.168.239.74/24 cluster1-01 e0d true
vs1_lif2 up/up 192.168.239.75/24 cluster1-01 e0d false
9 entries were displayed.
The Node Shell
system node run
A single command directly from the cluster shell:
cluster1::> system node run -node cluster1-02 hostname
cluster1-02
An interactive session:
cluster1::> system node run -node cluster1-02
Type 'exit' or 'Ctrl-D' to return to the CLI
cluster1-02> hostname
cluster1-02
The System Shell
The diag user can access the system shell from
within the cluster shell.
From any node, the diag user can access the
system shell on any other node.
To access the system shell, do the following:
1. Unlock the diag user and set the password:
cluster1::> security login unlock -username diag
cluster1::> security login password -username diag
2. From the cluster shell, use the advanced-privilege command:
cluster1::*> system node systemshell
Note: Only the diag user can access the system shell.
OnCommand System Manager
OnCommand System Manager Login Page
1 of 3
OnCommand System Manager Login Page
2 of 3
OnCommand System Manager Login Page
3 of 3
OnCommand System Manager 3.0
OnCommand Unified Manager
Lesson 2
LESSON 2
Cluster Shell Features
1 of 2
Has a history buffer
Enables you to easily reissue commands
Enables you to retrieve commands and then
easily modify and reissue the commands
Provides context-sensitive help when you
press the question mark (?) key
Enables you to reduce the required amount of
typing and get context-sensitive assistance
when you press the Tab key
Cluster Shell Features
2 of 2
The cluster shell uses named parameters.
You can abbreviate a command directory,
command, or parameter to its shortest
unambiguous sequence of characters.
The search path enables you to run
commands out of context.
You can run queries with patterns and
wildcards.
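As a sketch of such a query (the Vserver and volume names are illustrative), a pattern with a wildcard restricts the output of a show command to matching objects:

```
cluster1::> volume show -vserver vs1 -volume vs1_* -state online
```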
Pressing the ? Key at the Top Level
1 of 2
cluster1::> ?
up Go up one directory
cluster> Manage clusters
dashboard> Display dashboards
event> Manage system events
exit Quit the CLI session
history Show the history of commands for this CLI session
job> Manage jobs and job schedules
lun> List LUN (logical unit of block storage) commands
man Display the on-line manual pages
network> Manage physical and virtual network connections
qos> QoS settings
redo Execute a previous command
rows Show/Set the rows for this CLI session
run Run interactive or non-interactive commands in
the node shell
security> The security directory
set Display/Set CLI session settings
sis Manage volume efficiency
snapmirror> Manage SnapMirror
statistics> Display operational statistics
Pressing the ? Key at the Top Level
2 of 2
storage> Manage physical storage, including disks,
aggregates, and failover
system> The system directory
top Go to the top-level directory
volume> Manage virtual storage, including volumes,
snapshots, and mirrors
vserver> Manage Vservers
Press ? for Commands and Directories
1 of 2
cluster1::> cluster
cluster1::cluster> ?
contact-info Manage contact information for the
cluster.
create Create a cluster
ha Manage high-availability configuration
identity Manage the cluster's attributes,
including name and serial number
join Join an existing cluster using the
specified member's IP address
modify Modify cluster node membership attributes
peer Manage cluster peer relationships
setup Setup wizard
show Display cluster node members
statistics Display cluster statistics
Press ? for Commands and Directories
2 of 2
cluster1::cluster> statistics
cluster1::cluster statistics> ?
show Display cluster-wide statistics
Press ? and the Tab Key for Parameters
1 of 2
cluster1::> storage aggregate
Press ? and the Tab Key for Parameters
2 of 2
cluster1::storage aggregate> modify -aggr aggr1a -state ?
offline
online
restricted
Changing Privilege Levels in the CLI
1 of 2
cluster1::> storage disk ?
assign Assign ownership of a disk to a system
fail Fail the file system disk
modify Modify disk attributes
option> Manage disk options
remove Remove a spare disk
removeowner Remove disk ownership
replace Initiate or stop replacing a file-system disk
set-led Turn on a disk's red LED for a number of minutes
show Display a list of disk drives and array LUNs
updatefirmware Update disk firmware
zerospares Zero non-zeroed spare disks
Changing Privilege Levels in the CLI
2 of 2
cluster1::> set advanced
OnCommand System Manager
Dashboard Page
Storage
STORAGE
Notice the expanded Storage directory in the left pane.
Storage Aggregate
STORAGE AGGREGATE
Notice the Aggregates pane on the right.
Editing an Aggregate
EDITING AN AGGREGATE
If you right-click an aggregate and select Edit, the Edit Aggregate dialog box appears.
The Edit Aggregate Dialog Box
Lesson 3
LESSON 3
Data Vserver-Scoped Roles
vsadmin
vsadmin-protocol
vsadmin-readonly
vsadmin-volume
Data Vserver-Scoped Roles
vsadmin
This role is the superuser role for a Vserver. A Vserver
administrator with this role has the following capabilities:
Manages its own user account, local password, and public
key
Manages volumes, quotas, qtrees, Snapshot copies,
FlexCache devices, and files
Manages LUNs
Configures protocols
Configures services
Monitors jobs
Monitors network connections and network interfaces
Monitors the health of a Vserver
Data Vserver-Scoped Roles
vsadmin-protocol
A Vserver administrator with this role has the
following capabilities:
Configures protocols
Configures services
Manages LUNs
Monitors network interfaces
Monitors the health of a Vserver
Data Vserver-Scoped Roles
vsadmin-readonly
A Vserver administrator with this role has the
following capabilities:
Monitors the health of a Vserver
Monitors network interfaces
Views volumes and LUNs
Views services and protocols
Data Vserver-Scoped Roles
vsadmin-volume
A Vserver administrator with this role has the
following capabilities:
Manages volumes, quotas, qtrees, Snapshot
copies, FlexCache devices, and files
Manages LUNs
Configures protocols
Configures services
Monitors network interfaces
Monitors the health of a Vserver
Cluster-Scoped Roles
admin
readonly
none
CLUSTER-SCOPED ROLES
Cluster-Scoped Roles
admin
Grants all possible capabilities
Is a cluster superuser
Cluster-Scoped Roles
readonly and none
The readonly role grants read-only capabilities to a cluster administrator.
The none role grants no capabilities.
Policy-Based Storage Services
Policy Example
[Diagram: two firewall policies, each containing rules that pair match criteria with properties.
fwall_policy1 (policyA): Rule1 matches 192.168.1.0/24 for ssh; Rule2 matches 192.168.1.0/24 for http.
fwall_policy2 (policyB): Rule1 matches 192.168.21.0/24 for ssh; Rule2 matches 192.168.22.0/24 for ssh; Rule3 matches 192.169.23.0/24 for ssh with the property allow.]
POLICY EXAMPLE
Job Schedules
Job schedules can be used:
Globally (by all Vservers)
For functions that can be automated
SnapShot, SnapMirror, and SnapVault, for example
Note the following job schedule syntax:
@:00,:05,:10...:55 means every five minutes on the
five-minute marks
@2 means daily at 2:00 a.m.
@0:10 means daily at 12:10 a.m.
@:05 means hourly at five minutes after the hour
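A schedule that uses this syntax can be created from the cluster shell. This is a sketch; the schedule name is illustrative:

```
cluster1::> job schedule cron create -name daily_2am -hour 2 -minute 0
cluster1::> job schedule show -name daily_2am
```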
JOB SCHEDULES
The job schedule show Command
cluster1::> job schedule show
Name Type Description
----------- --------- ------------------------------------------------
5min cron @:00,:05,:10,:15,:20,:25,:30,:35,:40,:45,:50,:55
8hour cron @2:15,10:15,18:15
daily cron @0:10
hourly cron @:05
weekly cron Sun@0:15
5 entries were displayed.
Revised Licensing Model
Proof-of-sale is recorded as a
license entitlement record
License keys are now also linked
to the controller serial number
License keys are locked to nodes
License keys have been
lengthened to 28 characters
Nondisruptive upgrades from
Data ONTAP 8.1 to 8.2 do not
require new license keys
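A new 28-character key is installed with the license commands. This is a sketch; the key shown is a placeholder, not a valid code:

```
cluster1::> system license add -license-code <28-character-key>
cluster1::> system license show
```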
Implications of License Key Format Changes
License Commands
cluster1::> license ?
(system license)
add Add one or more licenses
clean-up Remove unnecessary licenses
delete Delete a license
show Display licenses
status> Display license status
LICENSE COMMANDS
System Manager License Page
Module Summary
Now that you have completed this module, you should
be able to:
Describe and utilize the various tools to manage a
cluster
Determine which commands are available for a
command directory
Determine whether parameters are required or
optional for a command
Switch among privilege levels
Describe the Vserver administrative roles
Explore policies and job schedules
Discuss the enhanced node-locked licensing model
MODULE SUMMARY
Exercise
Module 3: Cluster Administration
Basics
Time Estimate: 45 minutes
EXERCISE
Please refer to your exercise guide.
Module 4
Architecture
MODULE 4: ARCHITECTURE
Module Objectives
MODULE OBJECTIVES
Lesson 1
LESSON 1
Components
COMPONENTS
The modules refer to separate software state machines that are accessed only by well-defined APIs. Every
node contains a network module, a SCSI module, and a data module. Any network or SCSI module in the
cluster can talk to any data module in the cluster.
The network module and the SCSI module translate client requests into Spin Network Protocol (SpinNP)
requests and vice versa. The data module, which contains the WAFL (Write Anywhere File Layout) file
system, manages SpinNP requests. The cluster session manager (CSM) is the SpinNP layer between the
network, SCSI, and data modules. The SpinNP protocol is another form of RPC interface. It is used as the
primary intracluster traffic mechanism for file operations among network, SCSI, and data modules.
The members of each replicated database (RDB) unit on every node in the cluster are in constant
communication with each other to remain synchronized. The RDB communication is like the heartbeat of
each node. If the heartbeat cannot be detected by the other members of the unit, the unit corrects itself in a
manner that is discussed later in this course. The four RDB units on each node are the blocks configuration
and operations management (BCOM), the volume location database (VLDB), VifMgr, and management.
Single Node Components (Illustrated)
[Diagram: a single node. Client access arrives at the network and SCSI (data) modules, which connect to the data module. The M-host (management) component runs the RDB units: mgwd (management), VLDB, VifMgr, and BCOM. The node's vol0 volume holds the root; a data Vserver has its own root volume and data volumes Vol1 and Vol2.]
The Network Module
The SCSI Module
The Data Module
The CSM
THE CSM
The Path of a Local Write Request
[Diagram: the path of a local write request. Requests from NAS and SAN clients arrive at the network and SCSI modules on Node1 and pass through the CSM to the data module on the same node, which writes to a local volume; responses follow the reverse path.]
The Path of a Remote Write Request
[Diagram: the path of a remote write request. Requests from NAS and SAN clients arrive at the network and SCSI modules on Node1, but the target volume is owned by Node2, so the CSM carries the request across the cluster interconnect to the data module on Node2; responses follow the reverse path.]
Clustered Data ONTAP Modules
The network module:
Is called the N-blade
Provides NAS protocols
The SCSI module:
Is called the SCSI-blade
Provides SAN protocols
The data module:
Is called the D-blade
Provides storage access to the shelves (the WAFL file system, RAID, and the storage subsystems)
Data ONTAP Architecture
[Diagram: Data ONTAP architecture within one node. Clients connect through the network side; the CSM carries cluster traffic; NVRAM connects to the HA partner; physical memory and the management components complete the node.]
The Vol0 Volume
Data Vservers
1 of 2
Formerly known as cluster Vservers
Are virtual entities within a cluster
Can coexist with other cluster data Vservers
in the same cluster
Are independent of nodes
Are independent of aggregates
Contain all the volumes of their namespaces
DATA VSERVERS: 1 OF 2
Think of a cluster as a group of hardware elements (nodes, disk shelves, and more). A data Vserver is a
logical piece of that cluster, but a Vserver is not a subset or partitioning of the nodes. A Vserver is more
flexible and dynamic. Every Vserver can use all the hardware in the cluster, and all at the same time.
Example: A storage provider has one cluster and two customers: ABC Company and XYZ Company. A
Vserver can be created for each company. The attributes that are related to specific Vservers (volumes, LIFs,
mirror relationships, and others) can be managed separately, while the same hardware resources can be used
for both. One company can have its own NFS server, while the other can have its own NFS, CIFS, and iSCSI
servers.
Data Vservers
2 of 2
Represent unique namespaces
Can and should have multiple data logical
interfaces (LIFs), each of which is associated
with one Vserver
Can and do have multiple volumes, each of
which is associated with one Vserver
DATA VSERVERS: 2 OF 2
A one-to-many relationship exists between a Vserver and its volumes. The same is true for a Vserver and its
data LIFs. Data Vservers can have many volumes and many data LIFs, but those volumes and LIFs are
associated only with this one data Vserver.
Building a Namespace with Volumes and
Junctions
[Diagram: a Data ONTAP cluster hosting volumes A through H. The volumes are joined through junctions under the root volume R to form a single namespace tree.]
Vservers, Namespaces, and Volumes
[Diagram: a Vserver and its namespace, which is built from multiple volumes.]
Namespaces
NAMESPACES
A namespace is a file system. A namespace is the external, client-facing representation of a Vserver. A
namespace consists of volumes that are joined together through junctions. Each Vserver has one namespace,
and the volumes in one Vserver cannot be seen by clients that are accessing the namespace of another
Vserver.
The Data Vserver Root Volume
Lesson 2
LESSON 2
The RDB
The RDB is the key to maintaining high-
performance consistency in a distributed
environment.
The RDB maintains data that supports the cluster,
not the user data in the namespace.
Operations are transactional (atomic): entire
transactions are either committed or rolled back.
Four RDB units exist: the volume location
database (VLDB), management, VifMgr, and
blocks configuration and operations manager
(BCOM).
THE RDB
The RDB units do not contain user data. The RDB units contain data that helps to manage the cluster. These
databases are replicated; that is, each node has its own copy of the database, and that database is always
synchronized with the databases on the other nodes in the cluster. RDB database reads are performed locally
on each node, but an RDB write is performed to one master RDB database, and then those changes are
replicated to the other databases throughout the cluster. When reads of an RDB database are performed, those
reads can be fulfilled locally without the need to send requests over the cluster interconnects.
The RDB is transactional in that the RDB guarantees that when data is written to a database, either it all gets
written successfully or it all gets rolled back. No partial or inconsistent database writes are committed.
Four RDB units (the VLDB, management, VifMgr, and BCOM) exist in every cluster, which means that four
RDB unit databases exist on every node in the cluster.
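The all-or-nothing write behavior described above can be sketched in a few lines of Python. This is an illustration only, not NetApp code; the class, node names, and keys are invented for the example:

```python
# Illustrative model of a replicated, transactional database (not NetApp code).
# A write goes to every replica; if any replica fails, the whole write rolls back.

class ReplicatedDB:
    def __init__(self, node_names):
        # One local copy of the database per node.
        self.replicas = {name: {} for name in node_names}

    def write(self, key, value, failing_nodes=()):
        """Commit key=value on every replica, or roll back entirely."""
        done = []
        try:
            for name, db in self.replicas.items():
                if name in failing_nodes:
                    raise IOError(f"replication to {name} failed")
                db[key] = value
                done.append(name)
        except IOError:
            for name in done:  # undo partial writes: no inconsistent state survives
                del self.replicas[name][key]
            return False
        return True

    def read(self, node, key):
        # Reads are served from the node's local copy, as described above.
        return self.replicas[node].get(key)

rdb = ReplicatedDB(["node1", "node2", "node3", "node4"])
assert rdb.write("vol1.owner", "node2")
assert rdb.read("node4", "vol1.owner") == "node2"
# A failed replication leaves no partial state behind.
assert not rdb.write("vol2.owner", "node3", failing_nodes={"node4"})
assert rdb.read("node1", "vol2.owner") is None
```

The rollback loop is the point of the sketch: after a failed write, no replica holds a partial update, which mirrors the RDB's transactional guarantee.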
Management Gateway
MANAGEMENT GATEWAY
The management RDB unit contains information that is needed by the management gateway daemon (mgwd)
process on each node. The kind of management data that is stored in the RDB is written infrequently and read
frequently. The management process on a given node can query the other nodes at run time to retrieve a great
deal of information, but some information is stored locally on each node, in the management RDB database.
Volume Location Database
VIF Manager
Runs as vifmgr
Stores and monitors LIF configuration
Stores and administers LIF failover policies
VIF MANAGER
The VifMgr is responsible for creating and monitoring NFS, CIFS, and iSCSI LIFs. It also handles automatic
NAS LIF failover and manual migration of NAS LIFs to other network ports and nodes.
Blocks Configuration and Operations
Management
Runs as bcomd
Stores LUN map definitions
Stores initiator groups (igroups)
The RDB: Details
1 of 2
Each RDB unit has its own replication ring.
For each of the units, one node is the master
and the other nodes are secondaries.
The master node for each unit might be
different than the master nodes for the other
units.
Writes for an RDB unit go to its master and
are then propagated to the secondaries
through the cluster interconnect.
The RDB: Details
2 of 2
An RDB unit is considered to be healthy only
when it is in quorum (when a master can be
elected).
In quorum means that a simple majority of
nodes are communicating with each other.
When the quorum is lost or regained, the master
might change.
If a master has communication issues, a new
master is elected by the members of the unit.
One node has a tie-breaking ability (epsilon) for
all RDB units.
RDB Databases
RDB DATABASES
This slide shows a four-node cluster. The four databases that are shown for each node are the four RDB units
(management, VLDB, VifMgr, and BCOM). Each unit consists of four distributed databases. Each node has
one local database for each RDB unit.
The databases that are shown on this slide with dark borders are the masters. Note that the master of any
particular RDB unit is independent of the master of the other RDB units.
The node that is shown on this slide with a dark border has epsilon (the tie-breaking ability).
On each node, all the RDB databases are stored in the vol0 volume.
Quorum
1 of 2
A quorum is a simple majority of connected, healthy, and
eligible nodes.
Two RDB quorum concepts exist: a cluster-wide quorum
and an individual RDB unit that is in or out of quorum.
RDB units never go out of quorum as a whole; only local
units (processes) do.
When an RDB unit goes out of quorum, reads from the
RDB unit can still occur, but changes to the RDB unit
cannot.
Example: If the VLDB goes out of quorum, during the brief
time that the database is out, no volumes can be created,
deleted, or moved; however, access to the volumes from
clients is not affected.
QUORUM: 1 OF 2
A master can be elected only when a majority of local RDB units are connected and healthy for a particular
RDB unit on an eligible node. A master is elected when each local unit agrees on the first reachable healthy
node in the RDB site list. A healthy node is one that is connected, can communicate with the other nodes,
has CPU cycles, and has reasonable I/O.
The master of a given unit can change. For example, when the node that is the master for the management
unit is booted, a new management master must be elected by the remaining members of the management unit.
A local unit goes out of quorum when cluster communication is interrupted for a few seconds, for example,
because of a booting or a cluster interconnect hiccup that lasts for a few seconds. Because the RDB units
always work to monitor and maintain a good state, the local unit comes back in quorum automatically. When
a local unit goes out of quorum and then comes back into quorum, the RDB unit is synchronized again. Note
that the VLDB process on a node might go out of quorum although the VifMgr process on that same node has
no problem.
When a unit goes out of quorum, reads from that unit can be performed, but writes to that unit cannot. That
restriction is enforced so that no changes to that unit happen during the time that a master is not agreed upon.
In addition to the example above, if the VifMgr goes out of quorum, access to LIFs is not affected, but no LIF
failover can occur.
Quorum
2 of 2
The members of each RDB unit vote to
determine which node will be their master;
each unit elects its own master.
Each master might change when a local unit
goes out of and into quorum.
Before you take a node down for an extended
period of time, you should mark it as ineligible
(so that the node doesn't factor into quorum):
cluster1::> system node modify -node
<node> -eligibility false
QUORUM: 2 OF 2
Marking a node as ineligible (by using the cluster modify command) means that the node no longer
affects RDB quorum or voting. If you mark the epsilon node as ineligible, epsilon is automatically given to
another node.
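The majority-plus-epsilon rule can be sketched as a small function. This is an illustration only, not NetApp code; modeling epsilon as an extra half vote is an assumption made to reproduce the tie-breaking behavior described in this lesson:

```python
# Illustrative model of RDB quorum voting (not NetApp internals).
# Epsilon is modeled as an extra half vote, enough to break an even split.

def rdb_in_quorum(healthy_eligible, total_eligible, epsilon_healthy):
    """Return True if this partition of the cluster can elect a master."""
    votes = healthy_eligible + (0.5 if epsilon_healthy else 0.0)
    return votes > total_eligible / 2.0

# In a 4-node cluster split 2/2, only the half holding epsilon stays in quorum.
assert rdb_in_quorum(2, 4, epsilon_healthy=True)
assert not rdb_in_quorum(2, 4, epsilon_healthy=False)
# 3 of 4 healthy nodes is a simple majority, with or without epsilon.
assert rdb_in_quorum(3, 4, epsilon_healthy=False)
```

In a two-node cluster this model also shows why failover is one-directional with a static epsilon: only the node that holds epsilon can form a majority alone.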
The Epsilon Node
Which Cluster Is In Quorum?
[Diagram: cluster partitions of four, three, two, and two nodes; a plus sign marks a partition that holds epsilon. A partition is in quorum when it contains a simple majority of the nodes, or exactly half of the nodes plus epsilon.]
Two-Node Clusters
TWO-NODE CLUSTERS
From Ron Kownacki, author of the RDB:
Basically, quorum majority doesn't work well when down to two nodes and there's a failure, so RDB is
essentially locking the fact that quorum is no longer being used and enabling a single replica to be artificially
writable during that outage.
The reason we require a quorum (a majority) is so that all committed data is durable: if you successfully
write to a majority, you know that any future majority will contain at least one instance that has seen the
change, so the update is durable. If we didn't always require a majority, we could silently lose committed
data. So in two nodes, the node with epsilon is a majority and the other is a minority, so you would only
have one-directional failover (need the majority). So epsilon gives you a way to get majorities where you
normally wouldn't have them, but it only gives unidirectional failover because it's static.
In two-node (high-availability) mode, we try to get bidirectional failover. To do this, we remove the
configuration epsilon and make both nodes equal, and form majorities artificially in the failover cases. So
quorum is two nodes available out of the total of two nodes in the cluster (no epsilon involved), but if there's
a failover, you artificially designate the survivor as the majority (and lock that fact). However, that means you
can't fail over the other way until both nodes are available, they sync up, and drop the lock; otherwise you
would be discarding data.
Putting It All Together
[Diagram: the complete single-node picture. Client access flows through the network and SCSI (data) modules to the data module; the M-host runs the RDB units (mgwd/management, VLDB, VifMgr, and BCOM); the vol0 volume holds the node root, and the data Vserver has its own root volume plus volumes Vol1 and Vol2.]
Module Summary
MODULE SUMMARY
Exercise
Module 4: Architecture
Time Estimate: 15 Minutes
EXERCISE
Please refer to your exercise guide.
Module 5
Physical Data Storage
Module Objectives
After this module, you should be able to:
Draw the connections from a high-availability (HA) pair of
controllers to the disk shelves
Discuss storage and RAID concepts
Create aggregates
List the steps that are required to enable storage failover
(SFO)
Explain and enable two-node HA mode for two-node
clusters
Create a flash pool
NetApp Confidential 2
MODULE OBJECTIVES
Lesson 1
NetApp Confidential 3
LESSON 1
The FAS3270 System
Hardware Diagram
[Diagram: FAS3270 controller rear view with callouts — HA interconnect ports c0a and c0b; FC ports 0c and 0d;
management port e0M; data ports e0a, e0b, e2b, e3a, and e3b; cluster ports e1a, e1b, and e2a (X1107A
dual-port 10-GbE cards); an X1139A dual-port UTA (10-GbE CNA) card with SAN/LAN ports 1 and 2; SAS ports 0a
and 0b; the ACP (alternate control path) port; and the console port.]
NetApp Confidential 4
The FAS62x0 System
Hardware Diagram
[Diagram: FAS62x0 controller rear view. Management: e0M; 1-Gb data: e0a and e0b; 10-Gb cluster: e0c and e0e;
10-Gb data: e0d and e0f; FC: 0a, 0b, 0c, and 0d; console.]
NetApp Confidential 5
A Typical Disk Shelf with SAS Connection
[Diagram: rear view of a SAS-attached disk shelf with two IOM6 modules (A and B). Each IOM6 provides SAS
circle and square ports and ACP circle and square ports with link LEDs; the shelf has redundant (DC/AC)
power supplies.]
NetApp Confidential 6
An HA Pair
SAS Storage Configuration
[Diagram: an HA pair with multipath SAS cabling. Controller 1 and controller 2 each use SAS ports 0a and 0b
to connect to two stacks of IOM6 shelves (two shelves per stack); stack 1 begins with shelf ID 10 and
stack 2 with shelf ID 20. Legend: SAS, ACP, and VTIC connections.]
NetApp Confidential 7
HA Interconnect Links
InfiniBand links connect the two nodes of each
HA pair.
InfiniBand for FAS6000 and V6000 series
Dedicated 10-Gb links for FAS3200 and V3200 series
The HA links are used to mirror nonvolatile RAM
(NVRAM).
The HA links provide a channel for certain types of
communication traffic between the nodes in a pair:
Failover
Disk firmware
Heartbeats
Version information
NetApp Confidential 8
HA INTERCONNECT LINKS
InfiniBand links connect the two nodes of each HA pair for all models except the FAS and V-Series 32x0
series model controllers. FAS and V-Series 32x0 model controllers use a dedicated 10-GbE link, internal or
external, depending on the model and enclosure. Visit the NetApp Support site to see the appropriate
hardware configuration guide for your model storage controller.
The types of traffic that flow over the HA interconnect links are:
Failover: The directives are related to performing storage failover (SFO) between the two nodes,
regardless of whether the failover is:
Negotiated (planned and in response to an administrator request)
Not negotiated (unplanned and in response to an improper system shutdown or booting)
Disk firmware: Nodes in an HA pair coordinate the update of disk firmware. While one node is updating
the firmware, the other node must not perform any I/O to that disk.
Heartbeats: Regular messages demonstrate availability.
Version information: The two nodes in an HA pair must be kept at the same major and minor revision
levels for all software components.
Disks
Every disk, volume, and aggregate in an HA pair is
assigned a home and is owned by that node.
Designated disks in the HA pair serve as mailbox disks:
A mailbox disk provides persistent storage for information
about the SFO state, including some specific replicated
database (RDB) data when in two-node HA mode.
Each node of an HA pair designates two disks in the first
RAID group in the root aggregate as mailbox disks.
Attempts are made to write SFO state information to all
mailbox disks for configuration and status changes.
Quorum techniques are used to guarantee that at least three
of the four mailbox disks must be available for SFO.
NetApp Confidential 9
DISKS
Each node of an HA pair designates two disks in the first RAID group in the root aggregate as the mailbox
disks. The first mailbox disk is always the first data disk in RAID group RG0. The second mailbox disk is
always the first parity disk in RG0. The mroot disks are generally the mailbox disks.
Each disk, and therefore each aggregate and volume that is built upon the disk, can be owned by one of the
two nodes in the HA pair at any given time. This form of software ownership is made persistent by writing
the information onto the disk itself. The ability to write disk ownership information is protected by the use of
persistent reservations. Persistent reservations can be removed from disks by power-cycling the shelves or by
selecting maintenance mode while in boot mode and then issuing manual commands. If the node that owns
the disks is running in normal mode, the node reasserts its persistent reservations every 30 seconds. Changes
in disk ownership are managed automatically by normal SFO operations, although commands exist to
manipulate disk ownership manually if necessary.
Each node in an HA pair can perform reads from any disk to which the node is connected, even if the node
isn't that disk's owner; however, only the node that is marked as the disk's current owner is allowed to
write to that disk.
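The ownership rule in the last paragraph — any connected node may read, only the owner may write — can be sketched as a toy model (illustrative Python, not ONTAP code; the disk and node names are hypothetical):

```python
# Toy model of software disk ownership (illustration only).
class Disk:
    def __init__(self, name, owner):
        self.name = name
        self.owner = owner        # ownership information lives on the disk itself
        self.blocks = {}

    def read(self, node, addr):
        # Any connected node may read, regardless of ownership.
        return self.blocks.get(addr)

    def write(self, node, addr, data):
        # Only the node marked as the current owner may write.
        if node != self.owner:
            raise PermissionError(f"{node} does not own {self.name}")
        self.blocks[addr] = data

d = Disk("0c.18", owner="cluster1-01")
d.write("cluster1-01", 0, b"data")   # the owner writes: allowed
print(d.read("cluster1-02", 0))      # the partner reads: allowed
```

In the real system the write protection is enforced with persistent reservations, which the owner reasserts every 30 seconds, rather than by a simple ownership check.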
Disk Names
The system assigns the disk ID automatically based on
the node name, slot and port number, and either the
loop ID (FC-AL) or the shelf ID and bay number (SAS).
cluster1::> disk show -instance
Disk: cluster1-01:0c.18
Container Type: aggregate
Owner/Home: cluster1-01 / cluster1-01
...
NetApp Confidential 10
DISK NAMES
Disks are numbered in all storage systems. Disk numbering enables you to:
Interpret messages displayed on your screen, such as command output or error messages
Quickly locate a disk that is associated with a displayed message
Disks are numbered based on a combination of their node name, slot number, and port number, and either the
loop ID for FC-AL-attached shelves or the shelf ID and bay number for SAS-attached shelves.
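For the FC-AL form shown on the slide (node:slot-and-port.loop-ID, as in cluster1-01:0c.18), the parts can be pulled apart like this (a small illustrative parser, not a NetApp tool; SAS-attached names encode shelf ID and bay instead of a loop ID and are not handled here):

```python
# Parse an FC-AL-style clustered Data ONTAP disk name such as "cluster1-01:0c.18"
# (simplified sketch; SAS names use shelf ID and bay rather than a loop ID).
def parse_disk_name(name):
    node, rest = name.split(":")          # node name before the colon
    slot_port, loop_id = rest.split(".")  # adapter slot/port, then loop ID
    return {"node": node, "slot_port": slot_port, "id": int(loop_id)}

print(parse_disk_name("cluster1-01:0c.18"))
# {'node': 'cluster1-01', 'slot_port': '0c', 'id': 18}
```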
Disk Names: Slot and Port
The slot and port designate where an adapter is located
on the host storage controller.
[Diagram: FAS62x0 rear view illustrating that the slot-and-port portion of a disk name, such as 0c,
identifies the adapter slot and port on the controller.]
NetApp Confidential 11
Disk Names: Shelf ID and Bay (SAS)
The shelf ID and bay designate the specific shelf and
bay number where the disk is located.
[Diagram: two DS4486 shelves. Each disk is identified by the shelf ID, which is set on the shelf itself, and
the bay number; the example shows 3.0-TB disks in bays 0 through 23.]
NetApp Confidential 12
Disk Ownership
Software disk ownership is made persistent by writing
the ownership information onto the disk.
The ability to write disk ownership information is
protected by the use of persistent reservations.
Changes in disk ownership are managed
automatically by normal SFO operations, although
commands exist to manipulate disk ownership
manually if necessary.
It is possible for disks to be unowned.
NetApp Confidential 13
DISK OWNERSHIP
A disk's data contents are not destroyed when the disk is marked as unowned; only the disk's ownership
information is erased. Unowned disks that reside on an FC-AL loop where owned disks exist have ownership
information applied automatically to guarantee that all disks on the same loop have the same owner.
Lesson 2
NetApp Confidential 14
LESSON 2
Write Requests
The Data ONTAP operating system receives write
requests through multiple protocols:
CIFS
NFS
Fibre Channel (FC)
iSCSI
HTTP
Write requests are buffered into:
System memory
Nonvolatile RAM (NVRAM)
NetApp Confidential 15
WRITE REQUESTS
Write Request Data Flow: Write Buffer
[Diagram: write request data flow. SAN hosts (via HBA) and NAS clients (via NIC) enter through the network
stack and the SAN, NFS, and CIFS protocol services; each request is journaled as an NVLOG entry in NVRAM and
buffered in the memory buffer/cache; when NVRAM fills, WAFL flushes the buffered writes through RAID to
storage.]
NetApp Confidential 16
Consistency Point
A CP is a completely self-consistent image of a
file system.
Creating a CP is equivalent to capturing the
structure of a file system at a moment in time.
When a CP is created, designated data is written
to a disk, and a new root inode is chosen.
A CP can be created for many reasons, including:
Half of the NVRAM card is full.
Ten seconds have elapsed.
A Snapshot copy has been created.
The system has been halted.
NetApp Confidential 17
CONSISTENCY POINT
A consistency point (CP) is a completely self-consistent image of the entire file system. A CP is not created
until data has been written to disk and a new root inode has been chosen.
Although CPs are created for many reasons, a few of the major reasons are:
Half of the nonvolatile RAM (NVRAM) card is full.
Ten seconds have elapsed.
A Snapshot copy has been created.
The system has been halted.
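The trigger conditions listed above can be collapsed into one predicate (a trivial sketch for illustration; the real system has additional triggers, and the parameter names are invented):

```python
# Sketch of the CP trigger conditions listed above (illustrative only).
def should_start_cp(nvram_half_full, seconds_since_last_cp,
                    snapshot_requested, system_halting):
    return (nvram_half_full
            or seconds_since_last_cp >= 10
            or snapshot_requested
            or system_halting)

print(should_start_cp(False, 3, False, False))   # False: no trigger has fired yet
print(should_start_cp(False, 10, False, False))  # True: the ten-second timer expired
```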
CPs in the Data ONTAP Operating System
Write Request Data Flow: WAFL to RAID
[Diagram: the same write data-flow figure, highlighting the stage at which WAFL hands buffered writes to the
RAID layer once NVRAM fills.]
NetApp Confidential 19
CPs from the WAFL File System to RAID
The RAID layer calculates the parity of the data:
To protect it from one or more disk failures
To protect stripes of data
The RAID layer calculates checksums, which are
stored using the block or zone method.
If a data disk fails, the missing information can be
calculated from parity.
The storage system can be configured in one of two
ways:
RAID 4: The system can recover from one disk failure in
the RAID group.
RAID-DP: The system can recover from up to two disk
failures in the RAID group.
NetApp Confidential 20
Write Request Data Flow:
RAID to Storage
[Diagram: the same write data-flow figure, highlighting the RAID stage, where a checksum is computed for
each 4-KB block.]
NetApp Confidential 21
CPs from RAID to Storage
NetApp Confidential 22
Write Request Data Flow: Storage Writes
[Diagram: the same write data-flow figure, highlighting the final writes from the RAID layer to storage.]
NetApp Confidential 23
NVRAM
NetApp Confidential 24
NVRAM
NVRAM is best viewed as a log. This log stores a subset of incoming file actions.
When a request comes in, two things happen:
The request is logged to NVRAM. NVRAM is not read during normal processing. It is simply a log of
requests for action (including any data necessary, such as the contents of a write request).
The request is acted upon. The storage system's main memory is used for processing requests. Buffers are
read from the network and from the disk and processed according to the directions that came in as CIFS
or NFS requests. NVRAM holds the instructions that are necessary if the same actions need to be
repeated.
If the storage system does not crash, the NVRAM is eventually flushed without ever being read back. If the
storage system crashes, the data from NVRAM is processed as if the storage system were receiving those
same CIFS or NFS requests again. The same response is made by the storage system for each request in
NVRAM, just as if it had come in through the network.
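The log-then-act behavior described above can be sketched as a toy write-ahead log (illustrative Python only, not ONTAP internals; real NVLOG replay applies only the entries that were not yet committed at a consistency point):

```python
# Toy write-ahead log in the spirit of NVLOG: requests are logged, then
# applied to memory; after a crash, the logged entries are applied again.
class ToyFiler:
    def __init__(self):
        self.nvlog = []       # stands in for NVRAM
        self.memory = {}      # stands in for the in-memory file system state

    def write(self, path, data):
        self.nvlog.append((path, data))   # 1. log the request first
        self.memory[path] = data          # 2. then act on it in main memory

    def crash_and_recover(self):
        self.memory = {}                  # main memory is lost in the crash
        for path, data in self.nvlog:     # replay the requests from the log,
            self.memory[path] = data      #    as if they arrived again

    def consistency_point(self):
        # Once the data is safely on disk, the log is discarded unread.
        self.nvlog.clear()

f = ToyFiler()
f.write("/vol/vol1/a", "hello")
f.crash_and_recover()
print(f.memory["/vol/vol1/a"])  # "hello" — recovered by replaying the log
```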
Read Requests
NetApp Confidential 25
READ REQUESTS
The Data ONTAP operating system includes several built-in, read-ahead algorithms. These algorithms are
based on patterns of usage. The algorithms help ensure that the read-ahead cache is used efficiently.
The response to a read request is composed of four steps:
1. The network layer receives an incoming read request. (Read requests are not logged to NVRAM.)
2. The WAFL file system looks for the requested data in the read cache:
If it locates the data, it returns the data immediately to the requesting client.
If it does not locate the data, it initiates a read request from the disk.
3. Requested blocks and intelligently chosen read-ahead data are sent to cache.
4. The requested data is sent to the requesting client.
NOTE: In the read process, cache is system memory.
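The four steps above can be sketched as a cache-or-disk lookup with a naive read-ahead (illustration only; the real read-ahead algorithms are pattern-based and far more sophisticated):

```python
# Sketch of the read path described above (cache hit, or disk read plus read-ahead).
def serve_read(block, cache, disk):
    if block in cache:                      # step 2: found in the read cache
        return cache[block]
    data = disk[block]                      # step 2b: cache miss, read from disk
    cache[block] = data                     # step 3: requested block goes to cache
    cache[block + 1] = disk.get(block + 1)  #         naive one-block read-ahead
    return data                             # step 4: return data to the client

disk = {10: "A", 11: "B"}
cache = {}
print(serve_read(10, cache, disk))  # "A" (miss: read from disk)
print(11 in cache)                  # True: read-ahead cached the next block
```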
Read Request Data Flow: Read from Disk
[Diagram: read request data flow — on a cache miss, the data is read from storage through the RAID layer and
WAFL into the memory buffer/cache, then returned to the client.]
NetApp Confidential 26
Read Request Data Flow: Cache
[Diagram: read request data flow — on a cache hit, the data is served directly from the memory buffer/cache.]
NetApp Confidential 27
Lesson 3
NetApp Confidential 28
LESSON 3
RAID Groups
NetApp Confidential 29
RAID GROUPS
A RAID group includes several disks that are linked together in a storage system. Although there are different
implementations of RAID, Data ONTAP supports only RAID 4 and RAID-DP. To understand how to manage
disks and volumes, it is important to first understand the concept of RAID.
Data ONTAP classifies disks as one of four types for RAID: data, hot spare, parity, or double-parity. The
RAID disk type is determined by how RAID is using a disk.
Data disk: A data disk is part of a RAID group and stores data on behalf of the client.
Hot spare disk: A hot spare disk does not hold usable data but is available to be added to a RAID group in an
aggregate. Any functioning disk that is not assigned to an aggregate, but is assigned to a system, functions as
a hot spare disk.
Parity disk: A parity disk stores row-parity information that is used for data reconstruction within a
RAID group.
Double-parity disk: A double-parity disk stores diagonal-parity information within RAID groups when
RAID-DP (NetApp double-parity RAID software) is enabled.
RAID 4 Technology
RAID 4 protects against data loss that results from a
single-disk failure in a RAID group.
A RAID 4 group requires a minimum of three disks:
Two data disks
One parity disk
NetApp Confidential 30
RAID 4 TECHNOLOGY
RAID 4 protects against data loss due to a single-disk failure within a RAID group.
Each RAID 4 group contains the following:
Two or more data disks
One parity disk (assigned to the largest disk in the RAID group)
Using RAID 4, if one disk block goes bad, the parity disk in that disk's RAID group is used to recalculate the
data in the failed block, and then the block is mapped to a new location on the disk. If an entire disk fails, the
parity disk prevents any data from being lost. When the failed disk is replaced, the parity disk is used to
automatically recalculate its contents. This is sometimes referred to as row parity.
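Row parity is just the XOR of the data blocks in a stripe, which is what makes single-block reconstruction possible. A minimal sketch (illustration only, operating on small integers rather than 4-KB blocks):

```python
# Row parity as used by RAID 4: the parity block is the XOR of the data
# blocks in the stripe, so any single lost block can be recomputed from
# the surviving blocks plus parity.
from functools import reduce

def parity(blocks):
    return reduce(lambda a, b: a ^ b, blocks)

stripe = [0b1010, 0b0110, 0b1100]      # data blocks on three data disks
p = parity(stripe)                     # the block on the parity disk

# Disk 1 fails; rebuild its block from the survivors plus parity:
rebuilt = parity([stripe[0], stripe[2], p])
print(rebuilt == stripe[1])  # True
```

The same identity explains recalculation after a disk replacement: XOR-ing the surviving data blocks with the parity block yields exactly the lost block.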
RAID-DP Technology
RAID-DP protects against data loss that results from
double-disk failures in a RAID group.
A RAID-DP group requires a minimum of five disks for
clustered Data ONTAP 8.2 and later:
Three data disks
One parity disk
One double-parity disk
NetApp Confidential 31
RAID-DP TECHNOLOGY
RAID-DP technology protects against data loss due to a double-disk failure within a RAID group.
Each RAID-DP group contains the following:
Three data disks
One parity disk
One double-parity disk
RAID-DP employs the traditional RAID 4 horizontal row parity. However, in RAID-DP, a diagonal parity
stripe is calculated and committed to the disks when the row parity is written.
RAID Group Size
RAID-DP
NetApp Platform                            Minimum Group Size  Maximum Group Size  Default Group Size
All storage systems (with SATA disks)      5                   16                  14
All storage systems (with FC or SAS disks) 5                   28                  16

RAID 4
NetApp Platform                            Minimum Group Size  Maximum Group Size  Default Group Size
All storage systems (with SATA disks)      3                   7                   7
All storage systems (with FC or SAS disks) 3                   14                  8
NetApp Confidential 32
Aggregates
Aggregates:
Are the same as with the Data ONTAP 7G operating
system
Have storage containers that consist of disks
Can use RAID 4 or RAID-DP technology
Contain volumes
Can be taken over by their node's HA partner
Can be grown by adding disks
32-bit and 64-bit aggregates are supported.
Nondisruptive, in-place aggregate expansions are
available from 32-bit aggregates to 64-bit aggregates.
NetApp Confidential 33
AGGREGATES
In the Data ONTAP 8.1 operating system and later releases, nondisruptive, in-place aggregate expansions are
available from 32-bit aggregates to 64-bit aggregates. During the conversion, the volumes on the aggregate
remain online and continue to serve data.
For clustered Data ONTAP, storage administrators can initiate expansion through the cluster shell by enabling
the diagnostic mode and then running the storage aggregate 64bit-upgrade start command.
The expansion runs in the background but can affect overall cluster performance.
After an aggregate is converted to 64-bit, you can grow the aggregate beyond 16 TB by adding disks through
the storage aggregate add-disks command.
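Assuming the commands named above, the workflow might look roughly like the following in the cluster shell (a hedged sketch: the aggregate name, disk count, and status subcommand are illustrative and may vary by release):

```
cluster1::> set -privilege diagnostic
cluster1::*> storage aggregate 64bit-upgrade start -aggregate aggr1
cluster1::*> storage aggregate 64bit-upgrade status -aggregate aggr1
cluster1::*> set -privilege admin
cluster1::> storage aggregate add-disks -aggregate aggr1 -diskcount 4
```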
The storage aggregate show Command
cluster1::> storage aggregate show
Aggregate Size Available Used% State #Vols Nodes RAID Status
--------- -------- --------- ----- ------- ----- ---------- -----------
cluster1-01_aggr0
56.76GB 2.59GB 95% online 1 cluster1-01 raid_dp
cluster1-01_aggr2
113.5GB 113.2GB 0% online 1 cluster1-01 raid4
cluster1-01_aggr3
56.76GB 56.70GB 0% online 3 cluster1-01 raid_dp
cluster1-02_aggr0
56.76GB 2.59GB 95% online 1 cluster1-02 raid_dp
cluster1-02_aggr1
113.5GB 113.4GB 0% online 4 cluster1-02 raid_dp
cluster1-02_aggr2
113.5GB 113.5GB 0% online 0 cluster1-02 raid4
6 entries were displayed.
NetApp Confidential 34
In-Place 32-Bit-to-64-Bit Aggregate Expansion
Features:
You can expand 32-bit aggregates to 64-bit
aggregates.
You can expand while an aggregate is online and
serving data.
Considerations:
64-bit aggregates consume more space than 32-bit
aggregates do.
The process works in the background but affects
performance.
NetApp Confidential 35
Lesson 4
NetApp Confidential 36
LESSON 4
SFO
1 of 2
Two nodes are connected as an HA pair.
Each node is a fully functioning node in the larger
cluster.
Clusters can consist of heterogeneous hardware,
but both nodes of an HA pair must be the same
controller model.
SFO can be enabled from either node in the pair.
SFO takeover can be initiated from any node in
the cluster.
A manual storage takeover forces a booting of
the node that is taken over.
NetApp Confidential 37
SFO: 1 OF 2
Enabling SFO is done within pairs, regardless of how many nodes are in the cluster. For SFO, the HA pairs
must be of the same model; for example, two FAS32x0 systems or two FAS62x0 systems. The cluster itself
can contain a mixture of models, but each HA pair must be homogeneous. The version of the Data ONTAP
operating system must be the same on both nodes of the HA pair, except for the short period of time during
which the pair is upgraded. During that time, one of the nodes is booted with a later version than its partner's
version, with the partner to follow shortly. The nonvolatile RAM (NVRAM) cards must be installed in the
nodes. Two interconnect cables are required to connect the NVRAM cards (except for FAS and V-Series 32x0
models with single-enclosure HA).
Remember that this cluster is not simply the pairing of machines for failover; this cluster is the Data ONTAP
cluster.
SFO
2 of 2
Automatic giveback is enabled by default for
two-node clusters.
Both nodes of an HA pair must be booted before
SFO can be enabled for the pair.
NetApp Confidential 38
SFO: 2 OF 2
According to the Clustered Data ONTAP 8.2 High-Availability Configuration Guide, cluster high availability
(HA) is activated automatically when you enable storage failover on a cluster that consists of a single HA
pair (two nodes), and automatic giveback is enabled by default. On clusters that consist of more than two
nodes, automatic giveback is disabled by default, and cluster HA is disabled automatically.
HA Pairs
A high availability (HA) pair contains two nodes
whose controllers are directly connected through
an HA interconnect.
A node can take over its partner's storage to
provide continued data service if the partner goes
down.
HA pairs are components of the cluster, but only
the nodes in the HA pair can take over each
other's storage.
Single-node clusters are supported in Data
ONTAP 8.2, but non-HA nodes are not supported
in clusters that have two or more nodes.
NetApp Confidential 39
HA PAIRS
HA pair controllers are connected to each other through an HA interconnect. This allows one node to serve
data that resides on the disks of its failed partner node. Each node continually monitors its partner, mirroring
the data for each other's nonvolatile memory (NVRAM or NVMEM). The interconnect is internal and
requires no external cabling if both controllers are in the same chassis.
HA pairs are components of the cluster, and both nodes in the HA pair are connected to other nodes in the
cluster through the data and cluster networks. But only the nodes in the HA pair can take over each other's
storage. Non-HA nodes are not supported in a cluster that contains two or more nodes. Although single-node
clusters are supported, joining two single-node clusters to create one cluster is not supported, unless you wipe
clean one of the single-node clusters and join it to the other to create a two-node cluster that consists of an HA
pair.
HA Policy: CFO and SFO
Aggregates are automatically assigned an HA policy.
Root aggregates (aggr0) are always assigned CFO
(controller failover) policy.
Aggr0 is given back at the start of the giveback process to
allow the taken-over system to boot.
Data aggregates are assigned SFO (storage failover)
policy.
Data aggregates are given back one at a time during the
giveback process, after the taken-over system boots.
The HA policy of an aggregate cannot be changed
from SFO to CFO in normal operation.
Hardware-assisted takeover can be used to speed up
the takeover process.
Do not store data volumes on aggr0.
NetApp Confidential 40
Ownership of aggr0 During Failover
cluster1::> aggr show -aggregate aggr0
Aggregate: aggr0
Checksum Style: block
Number Of Disks: 3
Nodes: cluster1-02
Disks: cluster1-02:1b.16,
cluster1-02:1b.17,
cluster1-02:1b.18
Free Space Reallocation: off
HA Policy: cfo
Space Reserved for Snapshot Copies: -
Hybrid Enabled: false
Available Size: 5.57GB
Checksum Enabled: true
Checksum Status: active
Has Mroot Volume: false
Has Partner Node Mroot Volume: true
Home ID: 1579305252
Home Name: cluster1-01
NetApp Confidential 41
Unplanned Event
Node 1 and node 2 each own their root (aggr0) and data aggregates.
Node 1 fails.
Node 2 takes over node 1's root and data aggregates.
[Diagram: node 1 (aggr0, aggr1, aggr2) and node 2 (aggr0, aggr3) connected by the HA and cluster
interconnects.]
NetApp Confidential 42
UNPLANNED EVENT
Clustered Data ONTAP 8.2 performs takeovers a little differently than past versions did. Prior to 8.2, an
unplanned event (for example, a node failure) and a planned event (a manual takeover initiated by an
administrator) followed the same process. In clustered Data ONTAP 8.2, planned events use a different process.
When a node fails, an unplanned event, or automatic takeover, is initiated (in 8.2 and earlier). Ownership of
the data aggregates is changed to the HA partner. After the ownership is changed, the partner can read from
and write to the volumes on the failed node's data aggregates. Ownership of the aggr0 disks remains with the
failed node, but the partner takes control of the aggregate, which can be mounted from the partner for
diagnostic purposes.
Giveback
Automatic or manual giveback is initiated with the storage failover giveback command.
Aggr0 is given back to node 1 so that the node can boot.
Data aggregate giveback occurs one aggregate at a time.
[Diagram: node 1 (aggr0, aggr1, aggr2) and node 2 (aggr0, aggr3) connected by the HA and cluster
interconnects.]
NetApp Confidential 43
GIVEBACK
Giveback is initiated by the storage failover giveback command or by automatic giveback if the
system is configured for it. The node must have access to its root volume on aggr0 to fully boot. The CFO HA
policy ensures that aggr0 is given back immediately to allow the node to boot.
After the node has fully booted, the partner node returns ownership of the data aggregates one at a time until
giveback is complete. You can monitor the progress of the giveback with the storage failover
show-giveback command. I/O resumes for each aggregate when giveback is complete for that aggregate,
thereby reducing the overall outage window of each aggregate.
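The ordering that the CFO and SFO policies produce during giveback can be sketched as follows (a simplified Python illustration; the aggregate names are hypothetical):

```python
# Sketch of giveback ordering: CFO (root) aggregates are returned first so
# the taken-over node can boot; SFO (data) aggregates follow one at a time.
def giveback_order(aggregates):
    """aggregates: list of (name, ha_policy) tuples; returns the giveback order."""
    cfo = [name for name, policy in aggregates if policy == "cfo"]
    sfo = [name for name, policy in aggregates if policy == "sfo"]
    return cfo + sfo          # root first, then data aggregates serially

aggrs = [("aggr1", "sfo"), ("aggr0", "cfo"), ("aggr2", "sfo")]
print(giveback_order(aggrs))  # ['aggr0', 'aggr1', 'aggr2']
```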
Aggregate Relocation
Aggregate relocation (ARL) moves the
ownership of storage aggregates within the
HA pair.
This occurs automatically during manually
initiated takeover and giveback operations to
reduce downtime during maintenance.
ARL cannot move ownership of the root
aggregate.
To avoid ARL, use the -bypass-
optimization parameter with the storage
failover takeover command.
NetApp Confidential 44
AGGREGATE RELOCATION
Aggregate relocation operations take advantage of the HA configuration to move the ownership of storage
aggregates within the HA pair. Aggregate relocation occurs automatically during manually initiated takeover
and giveback operations to reduce downtime during maintenance. Aggregate relocation can be initiated
manually for load balancing. Aggregate relocation cannot move ownership of the root aggregate.
During a manually initiated takeover, before the target controller is taken over, ownership of each aggregate
that belongs to the target controller is moved to the partner controller one aggregate at a time. When giveback
is initiated, the ownership is automatically moved back to the original node. To suppress aggregate relocation
during the takeover, use the -bypass-optimization parameter with the storage failover takeover command.
The aggregate relocation requires additional steps if the aggregate is currently used by an infinite volume with
SnapDiff enabled.
Planned Event in 8.2 with ARL
Node 1 and node 2 own their root and data aggregates.
Manual takeover is initiated by using the storage failover takeover command.
Data aggregates change ownership to node 2 one at a time.
[Diagram: node 1 (aggr0, aggr1, aggr2) and node 2 (aggr0, aggr3) connected by the HA and cluster
interconnects.]
NetApp Confidential 45
5. SMB 3.0 sessions established to shares with the Continuous Availability property set can
reconnect to the disconnected shares after a takeover event. If your site uses SMB 3.0 connections to
Microsoft Hyper-V, and the Continuous Availability property is set on the associated shares,
takeover will be nondisruptive for those sessions.
If the node that is performing the takeover panics within 60 seconds of initiating takeover, the following
events occur:
The node that panicked reboots.
After it reboots, the node performs self-recovery operations and is no longer in takeover mode.
Failover is disabled.
If the node still owns some of the partner's aggregates, after enabling storage failover, return these
aggregates to the partner by using the storage failover giveback command.
Giveback in 8.2 with ARL
Manual giveback is initiated with the storage failover giveback command.
Aggr0 is given back first so that the node can boot.
[Diagram: node 1 (aggr0, aggr1, aggr2) and node 2 (aggr0, aggr3) connected by the HA and cluster
interconnects.]
NetApp Confidential 46
HA Best Practices
Do not use the root aggregate for storing data.
Follow recommended limits for volumes,
Snapshot copies, and LUNs to reduce the
takeover or giveback time.
Use LIFs with defined failover policies to provide
redundancy and improve availability of network
communication.
Avoid using the -only-cfo-aggregates
parameter with the storage failover
giveback command.
Use the Config Advisor tool to help ensure that
failovers are successful, and test failover
routinely.
NetApp Confidential 47
HA BEST PRACTICES
See the Clustered Data ONTAP Logical Storage Management Guide for current information on storage
limits.
Find Config Advisor here: http://support.netapp.com/NOW/download/tools/config_advisor/
Storage Failover Event Summary
HA Event                   Event Description
Unplanned event            All aggregates fail over to the partner node in parallel.
Planned event (cDOT 8.1)   All aggregates fail over to the partner node in parallel.
Planned event (cDOT 8.2)   Each aggregate fails over serially; the root aggregate fails over after all
                           user-data aggregates have failed over to the partner node.
Giveback                   The root aggregate is given back first; once the node is assimilated back into
                           the cluster, each data aggregate is given back serially by the partner node.
The storage failover show Command
cluster1::> storage failover show
Takeover InterConn
Node Partner Enabled Possible Up State
-------------- -------------- ------- -------- --------- --------------
cluster1-01 cluster1-02 true true true connected
cluster1-02 cluster1-01 true true true connected
2 entries were displayed.
Two-Node HA
Is an additional configuration step for two-node clusters only
Must be configured to enable the cluster to operate properly when one of the two nodes is down
Is needed because of the way that the RDB units maintain quorum (RDB units operate differently when only two nodes exist compared to when more than two nodes exist)
Must be enabled for SFO to work properly in a two-node cluster
TWO-NODE HA
For clusters of only two nodes, the replicated database (RDB) units rely on the disks to maintain quorum
within the cluster in the case that a node is booted or goes down. This process is enabled by configuring the
two-node HA mechanism. Because of the reliance on the disks, SFO enablement and automatic giveback are
also required by two-node HA and are configured automatically when two-node HA is enabled. For clusters
that are larger than two nodes, quorum can be maintained without using the disks. Do not enable two-node
HA for clusters that are larger than two nodes. When expanding a cluster beyond two nodes, the HA state
must be changed manually. Nodes cannot be added while HA is enabled.
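As a sketch, two-node HA is enabled and then verified with the cluster ha commands (output abbreviated; your cluster name will differ):

```
cluster1::> cluster ha modify -configured true
cluster1::> cluster ha show
```

Because SFO and automatic giveback are required by two-node HA, they are configured automatically when this command is run.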
For More Information
Clustered Data ONTAP 8.2 High-Availability Configuration Guide
TR-3450: High-Availability Overview and Best Practices
Lesson 5
LESSON 5
NetApp Virtual Storage Tier
Flash Cache: What is it? A controller-based PCIe card; a plug-and-play device.
Flash Pool: What is it? Storage-level, RAID-protected cache (specific to aggregates).
Flash Cache
Is a 256-GB, 512-GB, or 1-TB PCIe module
Is a plug-and-play device (no required configuration)
Is for Data ONTAP 8.0.2 clusters or later
Supports all protocols
Acts as an extension to the WAFL buffer cache and saves evicted buffers
Is shared by all volumes on a node
http://www.netapp.com/us/products/storage-systems/flash-cache/.
FLASH CACHE
Flash Cache intelligent caching is a solution that combines software and hardware within
NetApp storage controllers to increase system performance without increasing the disk count. The Flash
Cache plug-and-play PCIe module requires no configuration to use the default settings, which are
recommended for most workloads. The original Flash Cache module is available in 256-GB, 512-GB, or 1-TB
capacities and accelerates performance on all supported Data ONTAP client protocols. The Flash Cache
controller-based solution is available to all volumes that are hosted on the controller.
Flash Cache 2
FLASH CACHE 2
Flash Cache 2 is the second generation of Flash Cache performance accelerators. The new architecture of
Flash Cache 2 accelerators enables them to provide even higher throughput. Flash Cache 2 accelerators
provide 512-GB, 1-TB, and 2-TB densities.
Flash Cache Benefits
Benefits:
Increase I/O throughput by up to 75%
Use up to 75% fewer disks without compromising performance
Increase email users by up to 67% without adding disks
Key Points:
Use for random read-intensive workloads (databases, email, file
services)
Reduce latency by a factor of 10 or greater compared to hard disks
Increase I/O throughput and eliminate performance bottlenecks
Lower costs: use SATA disks with Flash Cache for important workloads
Save power, cooling, and rack space by using fewer, larger disks
Flash Pool
Flash Pool is an aggregate-level read and write cache.
Like Flash Cache, Flash Pool uses 4-KB block granularity and real-time caching.
Flash Pool is not a replacement for Flash Cache.
The cache remains populated and available during SFO events.
Random overwrite data is cached.
[Diagram: Performance (SSD) + Capacity (HDD) combined in one aggregate.]
FLASH POOL
WAFL (Write Anywhere File Layout) aggregates are built with disks of the same type: SATA, FC, or SAS
hard disks, or solid-state drives (SSDs). Flash pools allow the mixing of SSDs and hard disks
within the same aggregate. The SSD tier is used as a cache and doesn't contribute to usable space.
When an aggregate is converted to hybrid, the usable space in the aggregate does not change. The disks that a
hybrid aggregate consists of are treated like any disks in a NetApp storage array, and any class of disk can be
added on demand, subject to best practices around data, such as parity ratios and RAID types.
Flash pools provide:
Improved cost performance with fewer spindles, less rack space, and lower power and cooling
requirements
Highly available storage with a simple administrative model
Improved cost-to-performance and cost-to-capacity ratios for an SSD and SATA combination, compared
to those of pure FC or SAS
Predictable and better degraded mode operation across controller failures and with takeover and giveback
Automatic, dynamic, policy-based placement of data on appropriate tiers of storage (hard disks or SSDs)
at WAFL-block granularity for either data or system metadata
Flash Pool Components
A flash pool is an aggregate with:
One or more hard disk RAID groups
An SSD RAID group
Only one type of hard disk can be used per flash pool:
High capacity (SATA)
Performance (SAS)
SSDs cache random data:
Previously written data (overwrites)
Read data expired from main memory
Existing aggregates can be nondisruptively converted to flash pools.
Blocks in the SSD Tier
Enabling Flash Pools
1. Turn on the hybrid-enabled option.
2. Add a new RAID group with SSDs.
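In the cluster shell, these two steps might look like the following sketch; the aggregate name and SSD count are placeholders:

```
cluster1::> storage aggregate modify -aggregate aggr1 -hybrid-enabled true
cluster1::> storage aggregate add-disks -aggregate aggr1 -disktype SSD -diskcount 4
```

Once the SSD RAID group is added, the aggregate is a flash pool and begins caching random read and overwrite data.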
Flash Pools: Additional Considerations
Available space
Flash Cache
HA takeover and giveback
Volume move
Volume SnapMirror relationships
Aggregate Snapshot copies
Data compression
V-Series
RAID 4 for SSD tier
When using Data ONTAP 8.2 or a subsequent release, the RAID policies for the SSD RAID group and HDD
RAID groups in a Flash Pool aggregate are independent. That means an SSD RAID group could be RAID 4
protected, while the HDD RAID groups in the same Flash Pool aggregate use RAID-DP protection.
Nevertheless, the added protection of RAID-DP makes it a best practice to use RAID-DP for SSD RAID
groups as well. An uncorrectable error in an SSD RAID group that is configured with RAID 4 and has
experienced the failure of one SSD will result in the entire Flash Pool aggregate being taken offline. And it
could also cause a loss of data that is cached in write cache. Therefore, NetApp recommends using RAID-DP
protection for SSD RAID groups and HDD RAID groups.
Module Summary
Now that you have completed this module, you should be
able to:
Draw the connections from an HA pair of controllers to the
disk shelves
Discuss storage and RAID concepts
Create aggregates
List the steps that are required to enable SFO
Explain and enable two-node HA mode for two-node
clusters
Create a flash pool
MODULE SUMMARY
Exercise
Module 5: Physical Data Storage
Time Estimate: 45 minutes
EXERCISE
Please refer to your exercise guide.
Module 6
Logical Data Storage
Module Objectives
MODULE OBJECTIVES
Lesson 1
LESSON 1
Virtual Servers
VIRTUAL SERVERS
A data virtual storage server (Vserver) connects volumes, logical interfaces (LIFs), and other elements for a
namespace. No volumes can be created until a data Vserver exists with which to associate the volumes.
The vserver show Command
Summary View
cluster1::> vserver show
Admin Root Name Name
Vserver Type State Volume Aggregate Service Mapping
----------- ------- --------- ---------- ---------- ------- -------
cluster1 admin - - - - -
cluster1-01 node - - - - -
cluster1-02 node - - - - -
vs7 data running vs7 aggr1b file file
4 entries were displayed.
Volumes
Flexible volumes in clustered Data ONTAP are the same as in Data ONTAP 7G or 7-Mode.
Any single volume can exist within only a single data Vserver.
Volumes are joined together through junctions to create the namespace of a Vserver.
Volumes are the unit of data management: volumes can be moved, copied, mirrored, backed up, or copied by using Snapshot copies.
Data ONTAP 7-Mode volumes cannot be used in clustered Data ONTAP systems, and vice versa.
VOLUMES
Clustered Data ONTAP flexible volumes are functionally equivalent to flexible volumes in the Data ONTAP
7-Mode and the Data ONTAP 7G operating system. However, clustered Data ONTAP systems use flexible
volumes differently than Data ONTAP 7-Mode and Data ONTAP 7G systems do. Because Data ONTAP
clusters are inherently flexible (particularly because of the volume move capability), volumes are deployed as
freely as UNIX directories and Windows folders are deployed to separate logical groups of data.
Volumes can be created and deleted, mounted and unmounted, moved around, and backed up as needed. To
take advantage of this flexibility, cluster deployments typically use many more volumes than traditional Data
ONTAP 7G deployments use. In a high-availability (HA) pair, aggregate and volume limits apply to each
node individually, so the overall limit for the pair is effectively doubled.
Volumes
Sizes and Limits
[Table: for each platform, the memory (GB), aggregate size limit (TB), and volume limit (per node).]
The volume show Command
Summary View
cluster1::> volume show
Vserver Volume Aggregate State Type Size Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
cluster1-01
vol0 aggr0 online RW 851.5MB 514.8MB 39%
cluster1-02
vol0 aggr0_cluster1_02_0
online RW 851.5MB 587.1MB 31%
vs7 vs7 aggr1b online RW 20MB 18.88MB 5%
vs7 vs7_vol1 aggr1b online RW 400MB 379.8MB 5%
4 entries were displayed.
Junctions
JUNCTIONS
Junctions are conceptually similar to UNIX mountpoints. In UNIX, a disk can be divided into partitions, and
then those partitions can be mounted at multiple places relative to the root of the local file system, including
in a hierarchical manner. Likewise, the flexible volumes in a Data ONTAP cluster can be mounted at junction
points within other volumes to form a single namespace that is distributed throughout the cluster. Although
junctions appear as directories, junctions have the basic functionality of symbolic links.
A volume is not visible in the namespace of its Vserver until the volume is mounted within the namespace.
The volume show Command
Instance View 1 of 3
cluster1::> volume show -vserver vs7 -volume vs7_vol1
The volume show Command
Instance View 2 of 3
Junction Path: /vol1
Junction Path Source: RW_volume
Junction Active: true
Parent Volume: vs7root
Comment:
Available Size: 18.88GB
Total User-Visible Size: 19GB
Used Size: 120MB
Used Percentage: 5%
Autosize Enabled (for flexvols only): false
Maximum Autosize (for flexvols only): 23.91GB
Autosize Increment (for flexvols only): 1020KB
Total Files (for user-visible data): 566
Files Used (for user-visible data): 96
Space Guarantee Style: volume
The volume show Command
Instance View 3 of 3
Space Guarantee in Effect: true
Space Reserved for Snapshots: 5%
Snapshot Reserve Used: 63%
Snapshot Policy: default
Creation Time: Tue Oct 11 14:34:35 2011
Clone Volume: false
NVFAIL Option: off
Is File System Size Fixed: false
Extent Option: off
Reserved Space for Overwrites: 0B
Fractional Reserve: 100%
Snapshot Cloning Dependency: off
Primary Space Management Strategy: volume_grow
Read Reallocation Option: off
Block Type: 64-bit
Mounting a Volume
MOUNTING A VOLUME
When volumes are created by using the volume create command, a junction path is usually specified.
The junction path is optional; a volume can be created and not mounted into the namespace. To put a volume
without a junction path into use, you must use the volume mount command to assign a junction path to the
volume.
When you unmount a volume, you take the volume out of the namespace. An unmounted volume is
inaccessible to NFS and CIFS clients but is still online and can be mirrored, backed up, moved, and so on.
You can then mount the volume again to the same location or a different location in the namespace and in
relation to other volumes. For example, you can unmount a volume from one parent volume and then mount
the volume to another parent volume.
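For example, using the vs7_vol1 volume shown earlier, a remount might look like this sketch (the junction path is illustrative):

```
cluster1::> volume unmount -vserver vs7 -volume vs7_vol1
cluster1::> volume mount -vserver vs7 -volume vs7_vol1 -junction-path /vol1
```

While the volume is unmounted, it stays online and can still be mirrored or backed up; it is simply invisible to NAS clients.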
Volumes, Junctions, and Namespaces
1 of 3
Volume: root:
Junction path (relative to the root): /
NFS mount command:
mount <hostname>:/ /mnt/vserver1
NFS path: /mnt/vserver1
Volume: smith:
Junction path: /user/smith
User: a directory in the root volume in this example, not a
junction
NFS path: /mnt/vserver1/user/smith
Volumes, Junctions, and Namespaces
2 of 3
cluster1::> volume mount -vserver vs1 -volume smith -junction-path /user/smith
client% mkdir /user/smith/media
cluster1::> volume mount -vserver vs1 -volume smith_mp3 -junction-path /user/smith/media/music
cluster1::> volume mount -vserver vs1 -volume smith_jpg -junction-path /user/smith/media/photos
cluster1::> volume mount -vserver vs1 -volume acct -junction-path /acct
[Diagram: the namespace tree: root, then user and acct; smith under user; smith_mp3 mounted at /user/smith/media/music and smith_jpg at /user/smith/media/photos.]
Volumes, Junctions, and Namespaces
3 of 3
Volume: smith_mp3:
Junction path: /user/smith/media/music
NFS path:
/mnt/vserver1/user/smith/media/music
CIFS path (with a share that is called root_share):
\\<data_ip>\root_share\user\smith\media\music
User and Group Quotas
[Diagram: a volume that contains qtree1, qtree2, and qtree3.]
Quota Policies
[Diagram: vserver1 with several quota policies; one policy is assigned and the others are unassigned. Each quota policy contains quota rules for vol1, vol2, and vol3, and the rules in the assigned policy produce the enforced quotas on each volume.]
QUOTA POLICIES
Quotas are defined by quota rules. Quota rules are collected in the quota policy of a Vserver and are specific
to a volume. A quota rule has no effect on the volume until the quota rule is activated.
A quota policy is a collection of quota rules for all of the volumes of a Vserver. Quota policies are not shared
among Vservers. A Vserver can have up to five quota policies, which enables you to have backup copies of
quota policies. One quota policy is assigned to a Vserver at any given time.
A quota is the actual restriction that the Data ONTAP operating system enforces, the actual tracking that the
system performs, or the actual threshold that triggers the system to send a warning message. A quota rule
always results in at least one quota and might result in many additional derived quotas.
Activation is the process of triggering the Data ONTAP operating system to create enforced quotas from the
current set of quota rules in the assigned quota policy. Activation occurs on a volume-by-volume basis. The
first time that quotas are activated on a volume is called initialization. Subsequent activation of quotas on the
same volume is called either re-initialization or resizing, depending on the scope of the changes.
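As a sketch, a quota rule can be created in the assigned policy and then activated on a volume; the policy name, target, and disk limit shown here are placeholders:

```
cluster1::> volume quota policy rule create -vserver vs7 -policy-name default -volume vs7_vol1 -type user -target "" -disk-limit 10GB
cluster1::> volume quota on -vserver vs7 -volume vs7_vol1
```

Running volume quota on for the first time initializes quotas on that volume; later activations resize or reinitialize them, depending on the scope of the rule changes.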
Lesson 2
LESSON 2
FlexCache Volumes
[Diagram: an origin volume A containing files F1, F2, and F3, and a sparsely populated FlexCache volume of A that holds only the blocks that clients have read.]
FLEXCACHE VOLUMES
A FlexCache volume is a sparsely populated volume on a cluster node that is backed by a FlexVol volume. It
is usually created on a different node within the cluster. A FlexCache volume provides access to data in the
origin volume without requiring that all the data be in the sparse volume. You can use only FlexVol
volumes to create FlexCache volumes. However, many of the regular FlexVol volume features are not
supported on FlexCache volumes, such as Snapshot copy creation, deduplication, compression, FlexClone
volume creation, volume move, and volume copy. You can use FlexCache volumes to speed up access to
data, or to offload traffic from heavily accessed volumes. FlexCache volumes help improve performance,
especially when clients need to access the same data repeatedly, because the data can be served directly
without having to access the source. Therefore, you can use FlexCache volumes to handle system workloads
that are read-intensive. Cache consistency techniques help in ensuring that the data that is served by the
FlexCache volumes remains consistent with the data in the origin volumes.
Benefits of FlexCache
BENEFITS OF FLEXCACHE
Use FlexCache to accelerate performance:
Scale application performance easily
Decrease latency at remote sites
Simplify data management
Single vendor:
No rip and replace
Common and simple storage management
Use FlexCache to reduce TCO:
Eliminate overhead of full replication
Reduce hardware costs, power, and cooling
Adjust automatically to changing workloads
For considerations and limitations when using FlexCache, consult the Clustered Data ONTAP 8.2 Logical
Storage Management Guide.
Supported Protocols
NFS
NFSv3
NFSv4
CIFS
SMB 1.0
SMB 2.x
SMB 3.0
SUPPORTED PROTOCOLS
FlexCache volumes support client access using the following protocols: NFSv3, NFSv4.0, and CIFS (SMB
1.0, 2.x, and 3.0).
Reasons to Deploy FlexCache
A. Decrease latency
B. Increase IOPS
C. Balance resources
[Diagram: FlexCache copies of volume A distributed across the nodes of the cluster.]
Types of Volumes
[Diagram legend: one origin volume A and FlexCache volumes of A on the other nodes in the cluster.]
TYPES OF VOLUMES
Two types of volume relevant to FlexCache are the origin volume and the FlexCache volume. The origin
volume is a FlexVol volume that is the primary copy of the volume. A FlexCache volume maps to a single
origin volume, so files can partially exist on the FlexCache volume, based on use patterns, but are seen by the
client as an entire file.
FlexCache Configuration
FLEXCACHE CONFIGURATION
You use the volume flexcache commands to create, delete, and display information about FlexCache
volumes on all nodes in the cluster, or to create, modify, and delete cache policies. You can use the volume
family of commands to perform many of the same operations on individual volumes.
Create a FlexCache volume on all the nodes spanned by a Vserver in a cluster:
volume flexcache create
Display information about all FlexCache volumes in the cluster: flexcache show
Create a FlexCache volume on a single node: volume create
Create a cache policy: volume flexcache cache-policy create
Display the cache policies for all Vservers: volume flexcache cache-policy show
Apply a cache policy to a single volume: volume modify
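As a sketch, a cluster-wide FlexCache volume and a cache policy might be created as follows; the Vserver, volume, and policy names are placeholders, and the exact parameters can vary by release, so check the command man pages:

```
cluster1::> volume flexcache create -vserver vs1 -origin-volume vol1
cluster1::> volume flexcache cache-policy create -vserver vs1 -policy vs1uselocal -prefer-local-cache true
```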
Cache Policies
Define properties of FlexCache volumes:
Staleness of data
Timeout for unused cache delegation
Enable the FlexCache volume to serve read requests that are local to the origin volume
::> vol flexcache cache-policy create
-vserver vs1 -policy vs1uselocal
-prefer-local-cache true
CACHE POLICIES
A cache policy is a set of parameters that help you define properties of FlexCache volumes, such as the extent
of staleness of data in FlexCache volumes, the time after which an unused delegation is returned to the origin,
and the parameter that enables the FlexCache volume to serve read requests from a node that also has the
origin volume. Cache policies are defined for the Vserver that contains the volumes. You can use the default
cache policy or configure your own cache policies and apply them to FlexCache volumes in a Vserver.
Every Vserver has a default cache policy. The default cache policy is a special cache policy that is created and
deleted along with the Vserver. FlexCache volumes use the default cache policy when no other cache policies
are present. The default cache policy can be modified but not deleted.
Lesson 3
LESSON 3
Infinite Volumes
Definition
[Diagram: applications access the infinite volume; administration is through OnCommand System Manager or the cluster shell.]
Infinite Volumes
Constituent View
NAS clients access a single external mountpoint.
There is no client access directly to the data constituents.
[Diagram: an infinite volume spanning Node-1 and Node-2.]
Redirector and Data Files
[Diagram: the namespace constituent contains the directory tree (/NS/, /NS/bak/, /NS/tmp/) with redirector files for /NS/bak/img and /NS/tmp/file; each redirector points to the actual data file, which is stored in a data constituent.]
Resiliency
The Loss of a Data Constituent
[Diagram: the namespace constituent with redirectors for /NS/bak/img and /NS/tmp/file; one data constituent has been lost, so the data files that it contains are unavailable while the rest of the namespace continues to function.]
Resiliency
The Loss of a Namespace Constituent
If the host of the namespace constituent and its SFO partner are both down:
Namespace and directory operations do not work.
File operations on recently accessed files continue to work.
If only the host fails and the SFO partner takes over, access to the namespace,
and the infinite volume, functions normally.
Infinite Volumes
Feature Summary
Manageability:
A single namespace
A simplified setup through OnCommand System Manager 2.1
Management through the cluster shell, similar to that of a FlexVol volume
Constituent management with diagnostic privilege
Automatic capacity balancing at file-creation time
Data protection:
Snapshot copies
SnapMirror (intercluster) software
NFS-mounted tape backups
Protocols: NFSv3, NFSv4.1, CIFS
Configuration:
A single container of more than 20 PB (raw)
Support for up to 2 billion files
Up to 10 nodes
Reliability: SFO
Flash Cache
Efficiency: deduplication and compression
Infinite Volumes
New 8.2 Features
Sharing of a cluster
Multiple Vservers
Sharing of aggregates with FlexVol volumes
Unified security style
Data protection
Namespace mirror constituents
Fan-out and bidirectional mirror relationships
Multiple hardware platforms (not supported for the FAS2000 series)
Infinite Volumes
Unsupported Features
SMB 2.x and SMB 3
FAS2000 series platforms
File movement across data constituents
SMB 1.0 or NFSv4.1 on active file systems of read-only volumes
Single-node clusters
Infinite Volumes
Limitations
Clusters of up to 10 nodes in 5 high-availability (HA) pairs
One infinite volume per Vserver
One infinite volume constituent per aggregate
Total raw capacity of approximately 21 PB in a 10-node cluster:
Each aggregate has 2 RAID-DP groups; each group has 18 data and 2 parity disks; each disk is 3-TB SATA.
One constituent exists per aggregate, for a total of 175 constituents.
The maximum usable capacity is approximately 13.15 PB.
Support for up to 2 billion data files:
The namespace constituent can have up to 2 billion redirector files.
Each data constituent can have up to 100 million data files.
A maximum file size of 16 TB
Infinite Volumes
Example
cluster1::> aggr create -aggregate aggr1 -diskcount 70
cluster1::> aggr create -aggregate aggr2 -diskcount 70
cluster1::> vserver create -vserver vs0 -rootvolume vs0_root -is-repository true ...
cluster1::> set advanced
cluster1::*> volume create -vserver vs0 -volume repo_vol -junction-path /NS -size 768GB
Module Summary
MODULE SUMMARY
Exercise
Module 6: Logical Data Storage
Time Estimate: 90 minutes
EXERCISE
Please refer to your exercise guide.
Module 7
Physical Networking
Module Objectives
MODULE OBJECTIVES
Network Ports
Physical network ports exist on a controller, with corresponding network port definitions in the Data ONTAP operating system:
Node-management ports (by default, one for each node)
Cluster ports (by default, two for each node)
Data ports (by default, two for each node)
Intercluster ports (by default, none)
The defaults might not be the optimal configuration for your particular installation.
FC SAN environments use host bus adapter (HBA) ports as data ports.
NETWORK PORTS
Clustered Data ONTAP distinguishes between physical network ports and logical interfaces (LIFs). Each port
has a role that is associated with the port by default, although that situation can be changed through the UI.
The role of each network port should align with the network to which the port is connected.
Node-management ports are for administrators to connect to the node or cluster; for example, through Secure
Shell (SSH) or a web browser.
Cluster ports are strictly for intracluster traffic.
Data ports are for NAS and SAN client access and for the cluster management LIF.
Intercluster ports are used to communicate with another cluster.
The Network Ports of a Node
[Diagram: controller rear panel showing two X1107A cards (Chelsio Communications) with ports c0a and c0b, onboard ports 0a, 0b, 0c, and 0d, Ethernet ports e0a and e0b, and an X1139A 10-GbE CNA whose two ports can each connect to SAN or LAN networks.]
The Roles of Network Ports
The network port show Command
cluster1::> net port show
(network port show)
Auto-Negot Duplex Speed (Mbps)
Node Port Role Link MTU Admin/Oper Admin/Oper Admin/Oper
------ ------ ------------ ---- ----- ----------- ---------- ------------
cluster1-01
e0a cluster up 9000 true/true full/full auto/1000
e0b cluster up 9000 true/true full/full auto/1000
e0c data up 1500 true/true full/full auto/1000
e0d data up 1500 true/true full/full auto/1000
e1a node-mgmt up 1500 true/true full/full auto/1000
e1b data down 1500 true/true full/half auto/10
cluster1-02
e0a cluster up 9000 true/true full/full auto/1000
e0b cluster up 9000 true/true full/full auto/1000
e0c data up 1500 true/true full/full auto/1000
e0d data up 1500 true/true full/full auto/1000
e1a node-mgmt up 1500 true/true full/full auto/1000
e1b data down 1500 true/true full/half auto/10
12 entries were displayed.
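Fixed-width output like the example above lends itself to simple post-processing. A minimal Python sketch (the parsing logic is an illustration of the layout shown, not an official ONTAP tool):

```python
def parse_port_show(text):
    """Parse 'network port show'-style output into per-port records.

    Assumes the whitespace-separated layout shown above: node-name
    lines carry a single field, port lines carry several.
    """
    ports, node = [], None
    for line in text.splitlines():
        fields = line.split()
        if not fields or fields[0] in ("Node", "Auto-Negot") \
                or fields[0].startswith("-") or "displayed" in line:
            continue                      # skip headers, rules, and the footer
        if len(fields) == 1:              # a node-name line, e.g. "cluster1-01"
            node = fields[0]
            continue
        ports.append({"node": node, "port": fields[0], "role": fields[1],
                      "link": fields[2], "mtu": int(fields[3])})
    return ports

sample = """cluster1-01
 e0a cluster up 9000 true/true full/full auto/1000
 e0c data up 1500 true/true full/full auto/1000"""
print([p["port"] for p in parse_port_show(sample) if p["role"] == "cluster"])
# prints ['e0a']
```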
The network fcp adapter show
Command
cluster1::> network fcp adapter show
Connection Host
Node Adapter Established Port Address
------------ ------- ----------- ------------
cluster1-01 0c ptp 4b0038
cluster1-01 3a ptp 4b0036
cluster1-01 3b loop 0
cluster1-01 4a ptp 4b0037
cluster1-01 4b loop 0
cluster1-02 0c ptp 4b0061
cluster1-02 3a ptp 4b0060
cluster1-02 3b loop 0
cluster1-02 4a ptp 4b005f
cluster1-02 4b loop 0
12 entries were displayed.
Modifying Network Port Attributes
cluster1::> net port modify ?
(network port modify)
[-node] <nodename> Node
[-port] {<netport>|<ifgrp>} Port
[[-role] {cluster|data|node-mgmt|intercluster|cluster-mgmt}]
Role
[ -mtu <integer> ] MTU
[ -autonegotiate-admin {true|false} ]
Auto-Negotiation Administrative
[ -duplex-admin {auto|half|full} ] Duplex Mode Administrative
[ -speed-admin {auto|10|100|1000|10000} ]
Speed Administrative
[ -flowcontrol-admin {none|receive|send|full} ]
Flow Control Administrative
The Interface Group
VLANs
VLANS
A port can be subdivided into multiple VLANs. Each VLAN has a unique tag that is communicated in the
header of every packet. The switch must be configured to support VLANs and the tags that are in use. In
clustered Data ONTAP, a VLAN's ID is part of the VLAN's name, so VLAN "e0a-25" is a VLAN with tag 25
configured on physical port e0a. VLANs that share a base port can belong to the same or different IPspaces,
and the base port can be in a different IPspace than its VLANs.
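The naming convention can be illustrated in a few lines of Python (a sketch of the convention only; parse_vlan_name is a hypothetical helper, not an ONTAP API):

```python
def parse_vlan_name(name):
    """Split a VLAN name such as 'e0a-25' into its base port and tag."""
    base, tag = name.rsplit("-", 1)   # the tag follows the last hyphen
    return base, int(tag)

print(parse_vlan_name("e0a-25"))   # prints ('e0a', 25)
```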
VLANs and Interface Groups
vlan vlan
ifgrp
port port
Cluster Network Standardization
Approach
This configuration is standard for cluster interconnect
switches in clustered Data ONTAP configurations.
New clusters require the standard switch configurations for
the cluster and management network.
Benefits
This solution is engineered by NetApp.
Using this solution guarantees that best practices for
networking design are followed:
Dual-cluster interconnect switches for redundancy
Sufficient Inter-Switch Link (ISL) bandwidth
Standard hardware, software, and configurations
Faster problem resolution (using known configurations)
The NetApp Cluster Interconnect and
Optional Cluster Management Switch
What's New
A lower-cost solution for eight-node or smaller cluster sizes
Support that starts in the Data ONTAP 8.1.1 operating system
Sixteen ports of 10-GbE (cluster) or GbE (management) connectivity
Cluster Interconnect Requirements
Cluster interconnect
NetApp CN1610
Cisco Nexus 5596 (New with Data ONTAP 8.1.2)
Wire-rate 10-GbE connectivity between storage controllers
A 1 x 10-GbE connection from each node to each switch
(2 ports per node total)
Interswitch bandwidth: four ports per switch with CN1610; eight
ports per switch on Cisco Nexus 5010 and 5020
Cluster management switch for:
Management connections for storage controllers and shelves
NetApp CN1601
Cisco Catalyst 2960
Cluster Configuration Overview
                        2 to 8 Nodes                 2 to 24 Nodes
Cluster interconnect    2 NetApp CN1610:             2 Cisco Nexus 5596:
                        16 x 10-GbE enhanced         48 x 10-GbE ports;
                        (SFP+) ports; four ports     eight ports are used
                        are used for Inter-Switch    for ISLs;
                        Links (ISLs);                2 rack units each
                        1 rack unit each
Management network      NetApp CN1601 or Cisco Catalyst 2960
Configuration Overview
Function               Switch            Maximum  Configurable in a  Supported Network
                                         Nodes    NetApp Cabinet     Interface Cards (NICs)
---------------------  ----------------  -------  -----------------  ----------------------
Cluster interconnect   NetApp CN1610     8        Yes                X1117A-R6, X1107A-R6,
                                                                     X1008A-R6
Cluster interconnect   Cisco Nexus 5596  24       Yes                X1117A-R6, X1107A-R6,
                                                                     X1008A-R6
Management             NetApp CN1601     16       Yes                On-board ports only
CONFIGURATION OVERVIEW
Network Cabling
[Figure: network cabling for a four-node cluster: two Cisco Nexus 5010 cluster interconnect switches, with
each node's X1107A 10-GbE ports (e0a, e0b) cabled to both switches, and a Cisco Catalyst 2960-S management
switch connected to the node-management ports]
NETWORK CABLING
This slide shows a four-node cluster. Typically, two distinct networks exist for a cluster. The cluster traffic
must always be on its own network, but the management and data traffic can coexist on a network.
Two cluster connections to each node are required for redundancy and improved cluster traffic flow.
For proper configuration of the NetApp CN1601 and CN1610 switches, refer to the CN1601 and CN1610
Switch Setup and Configuration Guide.
Single-Node Clusters
Not supported on single-node clusters:
Cluster interconnect
Cluster ports
Cluster logical interfaces (LIFs)
High availability
Nondisruptive operations
Nondisruptive upgrades
Storage failover
Single-node clusters differ from one-node clusters
SINGLE-NODE CLUSTERS
Before Data ONTAP 8.2, each cluster required two cluster ports and two cluster LIFs. The Single-Node
Cluster feature eliminates the requirement for cluster LIFs in one-node configurations. The cluster ports are
free to be configured as additional data ports.
You can create a single-node cluster with the cluster setup wizard. Creating a single-node cluster from the
cluster setup wizard results in a node without cluster LIFs. The ports that would otherwise be created as
cluster ports are instead created as data ports. The node is configured as non-high availability (non-HA). A
single-node cluster is the only supported cluster configuration without an HA partner.
Note that with a single-node cluster, some operations are disruptive. For example, because there is no HA
partner, there is no storage failover. When the single node reboots on a panic or during an upgrade, there is a
temporary interruption of service.
A single-node cluster is different from a one-node cluster. A single-node cluster is a one-node cluster that
does not have cluster LIFs configured, and therefore has no connection to the cluster interconnect used in
multinode clusters. A one-node cluster is attached to the cluster interconnect, with the expectation of growing
the cluster beyond one node. A one-node cluster (with cluster LIFs) is not a supported configuration.
Switchless Two-Node Clusters
Cluster
Interconnect
HA Interconnect
Switchless Two-Node Clusters: Cluster
Setup Wizard
IPv6 Support
IPV6 SUPPORT
The IPv6 standard replaces IPv4. IPv6 has a 128-bit address space, which relieves the exhaustion of IPv4
addresses. IPv6 also has other features that make it a rich and complex protocol to deploy and manage.
Clustered Data ONTAP 8.2 extends IPv6 support to cluster data network protocols, including NFS, CIFS,
and iSCSI, and to cluster management network protocols, including SSH, Telnet, RSH, and SNMP. Clustered
Data ONTAP 8.2 also supports the NDMP, DNS, and NIS protocols. Clustered Data ONTAP 8.2 does not
support IPv6 on the cluster interconnect or for intercluster mirroring traffic.
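The size difference between the two address spaces can be checked with Python's standard ipaddress module (the addresses below are illustrative; 2001:db8::/32 is a documentation prefix):

```python
import ipaddress

v4 = ipaddress.ip_address("192.168.239.21")   # a 32-bit IPv4 address
v6 = ipaddress.ip_address("2001:db8::a")      # a 128-bit IPv6 address

print(v4.max_prefixlen, v6.max_prefixlen)     # prints 32 128
# A single /64 IPv6 subnet alone holds 2**64 addresses.
print(ipaddress.ip_network("2001:db8::/64").num_addresses == 2**64)  # prints True
```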
Module Summary
Now that you have completed this module, you
should be able to:
Draw the connections of the network cables from
the three networks to a controller
Explain port roles
Create an interface group
Configure VLAN tagged ports
Identify supported cluster interconnect switches
Discuss switchless two-node clusters and single-
node clusters
MODULE SUMMARY
Exercise
Module 7: Physical Networking
Time Estimate: 20 minutes
EXERCISE
Please refer to your exercise guide.
Module 8
Logical Networking
Module Objectives
MODULE OBJECTIVES
LIF Characteristics
An IP address or World Wide Port Name (WWPN) is
associated with a LIF.
One node-management LIF exists per node. It can fail over
to other data or node-management ports on the same
node.
One cluster-management LIF exists per cluster. It can fail
over or migrate throughout the cluster.
Two cluster LIFs exist per node. They can fail over or
migrate only within their node.
Multiple data LIFs are allowed per data port.
They are client-facing (NFS, CIFS, iSCSI, and Fibre Channel
access).
NAS data LIFs can migrate or fail over throughout the cluster.
LIF CHARACTERISTICS
Each logical interface (LIF) has an associated role and must be assigned to the correct type of network port.
Data LIFs can have a many-to-one relationship with network ports: Many data IP addresses can be assigned
to a single network port. If the port becomes overburdened, NAS data LIFs can be transparently migrated to
different ports or different nodes. Clients know the data LIF IP address but do not know which node or port is
hosting the LIF. If a NAS data LIF is migrated, the client might unknowingly be contacting a different node.
The NFS mountpoint or CIFS share is unchanged.
A node can have a maximum of 128 LIFs, regardless of the type of LIF.
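The many-to-one LIF-to-port relationship and the per-node limit can be modeled in a few lines (a hypothetical sketch; the class and names are illustrative, not ONTAP internals):

```python
MAX_LIFS_PER_NODE = 128   # per-node limit, regardless of LIF type

class Node:
    def __init__(self, name):
        self.name = name
        self.lifs = {}                       # LIF name -> current port

    def host_lif(self, lif, port):
        """Host a LIF on a port; many LIFs may share one port."""
        if len(self.lifs) >= MAX_LIFS_PER_NODE:
            raise ValueError(f"{self.name}: 128-LIF limit reached")
        self.lifs[lif] = port

node1 = Node("cluster1-01")
node1.host_lif("vs7_lif1", "e3a")
node1.host_lif("vs7_lif2", "e3a")            # same port, second LIF
print(sorted(node1.lifs))                    # prints ['vs7_lif1', 'vs7_lif2']
```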
The network interface show Command
1 of 2
cluster1::> net int show
(network interface show)
Logical Status Network Current Current Is
Vserver Interface Admin/Oper Address/Mask Node Port Home
----------- ---------- ---------- ------------------ ------------- ------- ----
cluster1
cluster_mgmt up/up 192.168.239.20/24 cluster1-01 e0M true
cluster1-01
clus1 up/up 169.254.165.103/16 cluster1-01 e1a true
clus2 up/up 169.254.185.207/16 cluster1-01 e2a true
mgmt up/up 192.168.239.21/24 cluster1-01 e0a true
cluster1-02
clus1 up/up 169.254.49.175/16 cluster1-02 e1a true
clus2 up/up 169.254.126.156/16 cluster1-02 e2a true
mgmt up/up 192.168.239.22/24 cluster1-02 e0a true
vs7
vs7_lif1 up/up 192.168.239.74/24 cluster1-01 e3a true
vs7_lif2 up/up 192.168.239.75/24 cluster1-01 e3b false
The network interface show Command
2 of 2
Logical Status Network Current Current Is
Vserver Interface Admin/Oper Address/Mask Node Port Home
----------- ---------- ---------- ------------------ ------------- ------- ----
vs7
vs7_lif1 up/up 192.168.239.74/24 cluster1-01 e3a true
vs7_lif2 up/up 192.168.239.75/24 cluster1-01 e3b false
vs7_fclif1 up/up 20:0f:00:a0:98:13:d5:d4
cluster1-01 0c true
vs7_fclif2 up/up 20:10:00:a0:98:13:d5:d4
cluster1-01 0d true
vs7_fclif3 up/up 20:14:00:a0:98:13:d5:d4
cluster1-02 0c true
vs7_fclif4 up/up 20:12:00:a0:98:13:d5:d4
cluster1-02 0d true
13 entries were displayed.
Network Ports and Data LIFs
192.168.1.55 (vs1_d2)
192.168.1.51 (vs1_d1)
192.168.1.56 (vs2_d2)
192.168.1.52 (vs2_d1)
192.168.1.57 (vs2_d3)
21:00:00:2b:34:26:a6:54 (vs1_d4)
192.168.1.53 (vs1_d3)
192.168.1.54 (vs3_d1)
node1 node2
LIF Roles and Compatible Ports
LIF Static Routes
Are defined paths between LIFs and specific
destination IP addresses through gateways
Can improve the efficiency of network traffic that
travels through complicated networks
Have preferences that are associated with them:
When multiple routes are available, the metric
specifies the preference order of the route to use.
Are defined within routing groups
Are created or chosen automatically when a LIF
is created
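Metric-based preference can be sketched as follows (assuming the conventional lower-metric-wins ordering; the data structure and gateway addresses are illustrative):

```python
# Two hypothetical default routes to the same destination.
routes = [
    {"destination": "0.0.0.0/0", "gateway": "192.168.81.1", "metric": 20},
    {"destination": "0.0.0.0/0", "gateway": "192.168.81.254", "metric": 40},
]

def preferred(routes):
    """Pick the route with the lowest (most preferred) metric."""
    return min(routes, key=lambda r: r["metric"])

print(preferred(routes)["gateway"])   # prints 192.168.81.1
```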
The network routing-groups show
Command 1 of 2
cluster1::> network routing-groups show
Routing
Vserver Group Subnet Role Metric
--------- --------- --------------- ------------ -------
cluster1
c192.168.81.0/24
192.168.81.0/24 cluster-mgmt 20
cluster1-01
c169.254.0.0/16
169.254.0.0/16 cluster 30
i192.168.81.0/24
192.168.81.0/24 intercluster 40
n192.168.81.0/24
192.168.81.0/24 node-mgmt 10
The network routing-groups show
Command 2 of 2
Routing
Vserver Group Subnet Role Metric
--------- --------- --------------- ------------ -------
cluster1-02
c169.254.0.0/16
169.254.0.0/16 cluster 30
i192.168.81.0/24
192.168.81.0/24 intercluster 40
n192.168.81.0/24
192.168.81.0/24 node-mgmt 10
vs1
d192.168.81.0/24
192.168.81.0/24 data 20
vs2
d192.168.81.0/24
192.168.81.0/24 data 20
9 entries were displayed.
The network routing-groups
route show Command
cluster1::> network routing-groups route show
Routing
Vserver Group Destination Gateway Metric
--------- --------- --------------- --------------- ------
cluster1
c192.168.81.0/24
0.0.0.0/0 192.168.81.1 20
cluster1-01
n192.168.81.0/24
0.0.0.0/0 192.168.81.1 10
cluster1-02
n192.168.81.0/24
0.0.0.0/0 192.168.81.1 10
vs1
d192.168.81.0/24
0.0.0.0/0 192.168.81.1 20
vs2
...
5 entries were displayed.
NAS Data LIF Failover and Migration
NAS Data LIF Failover and Migration
Limits
Node-management LIFs cannot fail over or
migrate to a port on a different node.
Cluster-management LIFs and NAS data LIFs
can fail over and migrate across ports and
nodes.
Cluster LIFs can fail over and migrate only
across ports on the same node.
Data LIFs are bound to a Vserver and do not
fail over or migrate between Vservers.
SAN data LIFs never fail over or migrate.
LIF Failover Groups
[Figure: failover group "data1" spanning ports e0c and e0d on each of four nodes]
Types of Failover Groups
System-defined
User-defined
Cluster-wide
LIF Roles and Failover Groups
LIF Role  Failover Group              Failover Target Role  Failover Target Nodes
--------  --------------------------  --------------------  ---------------------
Data      System-defined (default)    Data                  Home node or any node
          or user-defined
Failover Policies
nextavail
priority
disabled
FAILOVER POLICIES
nextavail (default): Enables a LIF to fail over to the next available port, preferring a port on the current node.
In some instances, a LIF configured with the nextavail failover policy selects a failover port on a remote node,
even though a failover port is available on the local node. No outages will be seen in the cluster, because the
LIFs continue to be hosted on valid failover ports.
priority: Given the list of failover targets, if the home port goes down, the LIF selects the next available
port from the list in order, always starting with the first port in the list.
disabled: Disables (prevents) a LIF from failing over.
Creating and Deleting Failover Groups
Creating or adding a port to a failover group:
cluster1::> net int failover-groups create -failover-group
customfailover1 -node cluster1-02 -port e0d
Enabling and Disabling Failover of a LIF
The network interface show
Command
cluster1::> net int show -vserver vs2 -lif vs2_lif1
Vserver Name: vs2
Logical Interface Name: vs2_lif1
Role: data
Data Protocol: nfs, cifs
Home Node: cluster1-02
Home Port: e0d
Current Node: cluster1-02
Current Port: e0d
Operational Status: up
Extended Status: -
Is Home: true
Network Address: 192.168.81.32
Netmask: 255.255.255.0
IPv4 Link Local: -
Bits in the Netmask: 24
Routing Group Name: d192.168.81.0/24
Administrative Status: up
Failover Policy: nextavail
Firewall Policy: data
Auto Revert: false
Fully Qualified DNS Zone Name: none
DNS Query Listen Enable: false
Failover Group Name: customfailover1
The network interface failover-groups show Command
cluster1::> net int failover-groups show
(network interface failover-groups show)
Failover
Group Node Port
------------------- ----------------- ----------
clusterwide
cluster1-02 e0c
cluster1-02 e0d
cluster1-02 e0e
cluster1-01 a0a
cluster1-01 e0c
customfailover1
cluster1-02 e0c
cluster1-01 e0c
NAS Load Balancing
DNS Load-Balancing Characteristics
Uses internal DNS zones that contain multiple data
IP addresses (data LIFs):
The actual data LIF that is used for an NFS mount is
chosen at NFS mount time.
NAS data LIFs can be automatically migrated among
nodes to maintain a balanced load.
Is based on LIF weights:
Weight can be manually or automatically set (based on the
current load in the cluster).
Provides balanced cluster-wide data LIFs
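Weight-based selection can be sketched with Python's standard library (a toy model, assuming a LIF's weight is proportional to how often its address is returned; the zone contents are hypothetical):

```python
import random

zone = {"192.168.81.32": 7, "192.168.81.33": 3}   # data LIF address -> lb-weight

def answer_query(zone, rng=random):
    """Return one LIF address, weighted by the configured lb-weights."""
    addrs = list(zone)
    return rng.choices(addrs, weights=[zone[a] for a in addrs])[0]

rng = random.Random(0)                            # seeded for repeatability
picks = [answer_query(zone, rng) for _ in range(1000)]
# Roughly 70% of the answers should name the weight-7 LIF.
print(0.6 < picks.count("192.168.81.32") / 1000 < 0.8)   # prints True
```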
DNS Load-Balancing Commands
Assigning a weight to a LIF by using the network
interface modify command:
cluster1::> net int modify -vserver vs2 -lif data1 -lb-weight 7
Automatic LIF Rebalancing
LIFs are automatically migrated to a less-utilized port.
Migration allows even distribution of the current load.
LIFs are migrated based on the weights.
Automatic LIF rebalancing is available only under the
advanced privilege level of operation.
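The rebalancing decision can be modeled as picking the least-utilized eligible port (a hypothetical model of the behavior described above; the utilization figures are invented):

```python
# (node, port) -> current utilization of the port
utilization = {("node1", "e0c"): 0.82, ("node1", "e0d"): 0.35,
               ("node2", "e0c"): 0.10}

def rebalance_target(utilization):
    """Choose the port with the lowest current utilization."""
    return min(utilization, key=utilization.get)

print(rebalance_target(utilization))   # prints ('node2', 'e0c')
```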
Automatic LIF Rebalancing Commands
Verifying the LIF Rebalancing Setting: The
network interface show Command
cluster1::*> network interface show -lif data1 -instance
Module Summary
MODULE SUMMARY
Exercise
Module 8: Logical Networking
Time Estimate: 45 minutes
EXERCISE
Please refer to your exercise guide.
Module 10
SAN Protocols
Module Objectives
MODULE OBJECTIVES
Lesson 1
LESSON 1
Unified Storage
NFS iSCSI
Corporate CIFS
LAN FCoE
FC
NAS SAN
UNIFIED STORAGE
A SAN is a block-based storage system that uses FC, Fibre Channel over Ethernet (FCoE), and iSCSI
protocols to make data available over the network. Starting with the Data ONTAP 8.1 operating system,
clustered Data ONTAP systems began supporting SANs on clusters of up to four nodes. In the Data ONTAP
8.2 operating system, SAN is supported in clusters of up to eight nodes.
SAN Protocol Support
Either FC or IP can be used to implement a SAN:
FC:
Uses the FC protocol to communicate
Uses FCoE to communicate
NOTE: FC SAN is covered in SAN Scaling and Architecting.
[Figure: FC protocol stack: Physical, Data, FC Frame, SCSI-3]
Scalable SAN Enhancements
[Figure: supported host operating systems, including Windows, Red Hat, VMware ESX, HP-UX, Solaris, and AIX]
Ports
Application
Initiator File System
TCP/IP Driver iSCSI Driver SCSI Driver FC Driver
SAN Services
Target TCP/IP Driver iSCSI Driver WAFL File System FC Driver
IP LUN FC
SAN SAN
PORTS
Data is communicated over ports. In an Ethernet SAN, the data is communicated over Ethernet ports. In an
FC SAN, the data is communicated over FC ports. For FCoE, the initiator has a converged network adapter
(CNA), and the target has a unified target adapter (UTA).
Nodes and Portals in iSCSI
Application
Initiator File System
SCSI Driver
Data Vserver
SAN Services
Target
WAFL File System
IP
LUN
SAN
Connectivity Between the Initiator and
the Target
Application
Initiator File System
SCSI Driver
Switch
SAN Services
Target WAFL File System
IP LUN
SAN
Direct and Indirect Paths
1 of 3
MPIO
ALUA
Direct
LUN
Direct and Indirect Paths
2 of 3
MPIO
ALUA
Indirect
LUN
Direct and Indirect Paths
3 of 3
MPIO
ALUA
LUN
Path Priority Selection
Lesson 2
LESSON 2
Clustered Data ONTAP Support
Clustered Data ONTAP 8.1 and later versions support iSCSI.
To configure iSCSI by using NetApp System Manager or
the CLI:
1. Add the iSCSI licenses for the cluster.
2. Create or designate an aggregate for the root volume of a Vserver.
3. Create or designate a Vserver for iSCSI.
4. Enable iSCSI traffic for the Vserver.
5. Create iSCSI logical interfaces (LIFs).
6. Create an initiator group (igroup).
7. Create and bind port sets.
8. Create or designate an aggregate and volume for a LUN.
9. Create a LUN.
10. Map the LUN to the appropriate igroup.
Licensing iSCSI
The cluster must have the iSCSI license installed.
Install the license by using:
The Cluster Setup Wizard
NetApp System Manager
The CLI
LICENSING ISCSI
Identifying an Aggregate for Vserver Use
If needed, create an aggregate:
cluster1::> storage aggregate create -aggregate
aggr_iscsi_2 -node cluster1-02 -diskcount 7
Verify the aggregate:
cluster1::> aggr show
Aggregate Size Available Used% State #Vols Nodes RAID Status
--------- -------- --------- ----- ------- ------ ------ ----- ------
aggr0 900MB 43.54MB 95% online 1 cluster1-01 raid_dp, normal
aggr0_scaling_02_0
900MB 43.55MB 95% online 1 cluster1-02 raid_dp, normal
aggr_iscsi_1 4.39GB 4.25GB 3% online 2 cluster1-01 raid_dp, normal
aggr_iscsi_2 4.39GB 4.39GB 0% online 0 cluster1-02 raid_dp, normal
4 entries were displayed.
Creating an iSCSI-Enabled Vserver
Create a Vserver:
cluster1::> vserver create -vserver vsISCSI2 -rootvolume
vsISCSI2_root -aggregate aggr_iscsi_2 -ns-switch file
-nm-switch file -rootvolume-security-style ntfs
Creating iSCSI LIFs
Create an iSCSI LIF:
cluster1::> network interface create -vserver vsISCSI2 -lif i2LIF1 -
role data -data-protocol iscsi -home-node cluster1-01 -home-port e0c -
address 192.168.239.40 -netmask 255.255.255.0 -status-admin up
iSCSI LIFs Considerations
Creating Port Sets
Create a port set:
cluster1::> lun portset create -vserver vsISCSI2
-portset portset_iscsi2 -protocol iscsi -port-name i2LIF1
i2LIF2 i2LIF3 i2LIF4
Lesson 3
LESSON 3
Windows Native Multipath I/O
Windows Server can be configured to support multipath I/O (MPIO): right-click Features, and then select
Add Feature to add the Multipath I/O feature.
Device-Specific Modules
A device-specific module (DSM) is a driver that plugs
into an MPIO framework.
Windows MPIO supports:
Windows DSM 3.5 or later
NetApp Host Utilities Kit for Windows
Use the Interoperability Matrix Tool to verify the
recommended version.
DEVICE-SPECIFIC MODULES
Host Utilities
HOST UTILITIES
The iSCSI Software Initiator: Discovery
The iSCSI Software Initiator: Connection
Screen-shot callouts: the Vserver target is discovered; click Connect, accept the connection method, and
enable multipath.
The iSCSI Software Initiator:
Favorite Targets
Creating an igroup
Create an igroup:
cluster1::> lun igroup create -vserver vsISCSI2 -igroup
ig_myWin2 -protocol iscsi -ostype windows -initiator iqn.1991-
05.com.microsoft:win-frtp2qb78mr -portset portset_iscsi2
Verify an igroup:
cluster1::> igroup show
Vserver Igroup Protocol OS Type Initiators
--------- -------- -------- -------- -------------------------
vsISCSI2 ig_myWin2 iscsi windows iqn.1991-
05.com.microsoft:win-frtp2qb78mr
CREATING AN IGROUP
Verifying Connectivity
1 of 2
Verify the target portal groups:
cluster1::> vserver iscsi tpgroup show -vserver vsISCSI2
TPGroup TPGroup Logical
Vserver Name Tag Interface
--------- ---------------- ------- ----------
vsISCSI2 i2LIF1 1032 i2LIF1
vsISCSI2 i2LIF2 1033 i2LIF2
vsISCSI2 i2LIF3 1034 i2LIF3
vsISCSI2 i2LIF4 1035 i2LIF4
4 entries were displayed.
VERIFYING CONNECTIVITY: 1 OF 2
Verifying Connectivity
2 of 2
Verify sessions:
cluster1::> vserver iscsi session show -vserver vsISCSI2
Verify connections:
cluster1::> vserver iscsi connection show -vserver vsISCSI2
Tpgroup Conn Local Remote TCP Recv
Vserver Name TSIH ID Address Address Size
------------ --------- ----- ----- --------------- ----------- --------
vsISCSI2 i2LIF1 5 1 192.168.239.40 192.168.239.145 13140
VERIFYING CONNECTIVITY: 2 OF 2
Creating a Volume
Create a volume:
cluster1::> vol create -vserver vsISCSI2 -volume vol1
-aggregate aggr_iscsi_2 -size 150MB -state online -type RW
-policy default -security-style ntfs
Verify a volume:
cluster1::> vol show
Vserver Volume Aggregate State Type Size Available Used%
--------- ---------- --------- -------- ---- ------- --------- -----
vsISCSI2 vol1 aggr_iscsi_2 online RW 150MB 142.4MB 5%
Creating a LUN
Create a fully provisioned LUN:
cluster1::> lun create -vserver vsISCSI2 -volume vol1
-lun lun_vsISCSI2_1 -size 50MB
-ostype windows_2008 -space-reserve enabled
Verify a LUN:
cluster1::> lun show -vserver vsISCSI2
Vserver Volume Qtree LUN State Mapped Type Size
--------- ------ ----- ------------ ------ -------- -------- -------
vsISCSI2 vol1 "" lun_vsISCSI2_1 online unmapped windows_2008 54.91MB
Mapping a LUN
Map a LUN to an igroup:
cluster1::> lun map -vserver vsISCSI2 -volume vol1
-lun lun_vsISCSI2_1 -igroup ig_myWin2
Verify mapping:
cluster1::> lun show -instance /vol/vol1/lun_vsISCSI2_1
Vserver Name: vsISCSI2
LUN Path: /vol/vol1/lun_vsISCSI2_1
OS Type: windows_2008
Space Reservation: enabled
Serial Number: BGMc1]-hUDrf
Comment:
Space Reservations Honored: true
Space Allocation: disabled
State: online
LUN UUID: 9d426342-cf8d-11e0-90b1-123478563412
Mapped: mapped
Block Size: 512B
Lesson 4
Scanning for a New LUN
Select Disk
Management.
Initializing a New LUN
The LUN
appears.
The LUN
is offline.
Right-click,
and then select
Initialize.
Provisioning a New LUN
The wizard
launches.
Right-click, and
then select New
Simple Volume.
The Volume Size and Mount Options
The Format and Summary Pages
Additional SAN Resources
The SAN Implementation instructor-led course:
Implementation details for when you use Windows, vSphere,
and Linux as initiators
Information about SnapDrive for Windows and SnapDrive for
UNIX
The SAN Scaling and Architecting instructor-led course:
Details about FC and FCoE implementation
Steps for troubleshooting:
LIF failure
Storage failover
Volume move
Take both courses and prepare for the NCIE-SAN
certification exams.
Module Summary
Now that you have completed this module, you should
be able to:
Explain the differences between the supported SAN
protocols
Identify the components that implement scalable SAN
on a cluster in a clustered Data ONTAP environment
Configure iSCSI on a cluster and create
a LUN
Configure a Windows iSCSI initiator
Create a portset and an igroup
Exercise
Module 10: SAN Protocols
Estimated Time: 45 minutes
Please refer to your exercise guide.
Module 11
Storage Efficiency
Module Objectives
Storage Efficiency Features
Deduplication Compression
Thin Provisioning
Typical: 40% Use vs. NetApp: More Than 70% Use
[Diagram: three applications on dedicated spindles (12, 6, and 8 spindles), each with wasted capacity, compared with the same applications drawing on shared capacity. Result: buy 50% less storage, and save 50% in power, cooling, and space.]
THIN PROVISIONING
If you compare the NetApp storage use approach to the competition's approach, one feature stands out.
Flexible, dynamic provisioning with FlexVol technology provides high storage use rates and enables
customers to increase capacity without physically repositioning or repurposing storage devices.
NetApp thin provisioning enables users to oversubscribe data volumes, which results in high use models. You
can think of this approach as just-in-time storage.
To manage thin provisioning on a cluster, use the volume command.
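As a minimal sketch (the Vserver, aggregate, and volume names are illustrative), a thin-provisioned volume is created by disabling the space guarantee:

```
cluster1::> volume create -vserver vs1 -volume vol_thin -aggregate aggr1
            -size 1TB -space-guarantee none
```

With -space-guarantee none, the 1-TB volume consumes physical space in aggr1 only as data is written, which is what makes oversubscription possible.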
Deduplication
DEDUPLICATION
Deduplication improves physical storage-space efficiency by eliminating redundant data blocks within a
FlexVol volume. Deduplication works at the block level on an active file system and uses the Write
Anywhere File Layout (WAFL) block-sharing mechanism. Each block of data has a digital signature that is
compared with all of the other blocks within the data volume. If an exact match is identified, the duplicate
block is discarded, and a data pointer is modified so that the storage system references the copy of the data
object that is stored on disk. The deduplication feature works well with datasets that have large quantities of
duplicated data or white space. You can configure deduplication operations to run automatically or according
to a schedule. You can run deduplication on new data or existing data on any FlexVol volume.
The deduplication feature enables you to reduce storage costs by reducing the actual amount of data that is
stored over time. For example, if you create a 100-GB full backup one night and 5 GB of data changes the
next day, the second nightly backup needs to store only the 5 GB of changed data. This approach amounts to a
95% spatial reduction on the second backup. In operational environments, deduplication of full backups can
save more than 90% of the required space, and deduplication of incremental backups saves an average of 30%
of the space. In nonbackup scenarios, such as the creation of virtual machine images, you can save 40% of the
space. To estimate your own savings, visit the NetApp deduplication calculator at http://www.secalc.com.
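To manage deduplication on a cluster, use the volume efficiency commands. A sketch, with illustrative Vserver and volume names:

```
cluster1::> volume efficiency on -vserver vs1 -volume vol1
cluster1::> volume efficiency start -vserver vs1 -volume vol1 -scan-old-data true
cluster1::> volume efficiency show -vserver vs1 -volume vol1
```

The -scan-old-data true parameter also processes blocks that were written before deduplication was enabled on the volume.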
The Benefits of Deduplication
Can reduce space consumption by 20 times or
greater for backups (TR3966)
Is integrated with the Data ONTAP operating system:
General-purpose volume deduplication
Identification and removal of redundant data blocks
Is application-agnostic:
Primary storage
Backup data
Archival data
Runs as a background process and is transparent
to clients
Data Compression
DATA COMPRESSION
Data compression enables you to reduce the physical capacity that is required to store data on a cluster by
compressing data blocks within a FlexVol volume. Data compression is available only on FlexVol volumes
that are created on 64-bit aggregates. Data compression optimizes the storage space and bandwidth that is
required to replicate data during volume operations, such as moving volumes and performing SnapMirror
transfers. You can compress standard data files, virtual disks, and LUNs, but not file system internal files, NT
streams, or metadata.
To manage compression on a cluster, use the volume efficiency command.
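As a sketch (the Vserver and volume names are illustrative, and storage efficiency must already be enabled on the volume), postprocess and inline compression are turned on as follows:

```
cluster1::> volume efficiency modify -vserver vs1 -volume vol1
            -compression true -inline-compression true
```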
Characteristics of Data Compression
Inline compression
Parallelism is increased.
Path length is decreased.
Latency is increased.
Postprocess compression
Uncompressed data is compressed during idle time.
Only previously uncompressed blocks are compressed.
Compression is done before deduplication.
Data ONTAP 8.2 and later can detect incompressible data
before wasting cycles.
Cloning
Aggregate
Aggregate
Aggregate
vol1
vol1
vol1 vol1clone
vol1clone
vol1 clone
Data Blocks
Data Blocks
cluster1::> volume clone create -vserver vs1 -flexclone vol1clone -parent-volume vol1
CLONING
A FlexClone volume is a point-in-time, space-efficient, writable copy of the parent volume. The FlexClone
volume is a fully functional stand-alone volume. Changes that are made to the parent volume after the
FlexClone volume is created are not reflected in the FlexClone volume, and changes to the FlexClone volume
are not reflected in the parent volume.
FlexClone volumes are created in the same virtual server (Vserver) and aggregate as the parent volume, and
FlexClone volumes share common blocks with the parent volume. While a FlexClone copy of a volume
exists, the parent volume cannot be deleted or moved to another aggregate. You can sever the connection
between the parent and the FlexClone volume by executing a split operation.
A FlexClone split causes the FlexClone volume to use its own disk space, but the FlexClone split enables you
to delete the parent volume and to move the parent or the FlexClone volume to another aggregate.
To manage cloning on a cluster, use the volume clone command.
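Following the example above, a sketch of splitting the clone from its parent so that the parent volume can then be deleted or moved:

```
cluster1::> volume clone split start -vserver vs1 -flexclone vol1clone
cluster1::> volume clone split show
```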
Module Summary
Exercise
Module 11: Storage Efficiency
Time Estimate: 60 minutes
Please refer to your exercise guide.
Module 12
Data Protection: Snapshot and
SnapMirror Copies
12-1 Clustered Data ONTAP Administration: Data Protection: Snapshot and SnapMirror Copies
Module Objectives
After this module, you should be able to:
Create a Snapshot copy of a volume and create
Snapshot policies
Create load-sharing (LS) and data-protection (DP)
mirror copies
Manually and automatically replicate mirror copies
Promote an LS mirror copy to replace its read/write
volume
Restore a Snapshot copy to be a read/write volume
Configure Vserver and cluster peering for data
protection
Data-Protection Methods
Snapshot copies
Mirror copies for data protection and load sharing
SnapVault backup copies
Tape backups through third-party software
Restores:
volume snapshot restore
LS mirrors: snapmirror promote
DP mirrors: snapmirror resync
Vault backups: snapmirror restore
NDMP restore
DATA-PROTECTION METHODS
A customer's data-protection plan is likely to use all of the methods of protecting data that are shown here.
Disaster Recovery
DISASTER RECOVERY
No native tape backup or restore commands are currently available in clustered Data ONTAP. All tape
backups and restores are performed through third-party NDMP applications.
Lesson 1
Snapshot Technology
A Snapshot copy is a read-only image of the active
file system at a point in time.
The benefits of Snapshot technology are:
Nearly instantaneous application data backups
Fast recovery of data that is lost due to:
Accidental data deletion
Accidental data corruption
Snapshot technology is the foundation for these
NetApp products:
SnapRestore SnapManager
SnapDrive SnapMirror
FlexClone SnapVault
SNAPSHOT TECHNOLOGY
Snapshot technology is a key element in the implementation of the WAFL (Write Anywhere File Layout) file
system:
A Snapshot copy is a read-only, space-efficient, point-in-time image of data in a volume or aggregate.
A Snapshot copy is only a picture of the file system, and it does not contain any data file content.
Snapshot copies are used for backup and error recovery.
The Data ONTAP operating system automatically creates and deletes Snapshot copies of data in volumes to
support commands that are related to Snapshot technology.
Volume Snapshot Functionality
Snapshot copies can be created:
Manually
Automatically based on a schedule defined by Snapshot
policies
A user can restore files and directories through a client:
UNIX: .snapshot directory (visibility set at the volume)
Windows: ~snapshot directory (visibility set at the share)
A cluster administrator can restore an entire volume with
SnapRestore:
Restores an entire volume (or an individual file)
Command: volume snapshot restore
Requires the SnapRestore license
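A sketch of a volume-level SnapRestore (the Snapshot copy name is illustrative); this reverts the entire volume and discards any changes made after the Snapshot copy was created:

```
cluster1::> volume snapshot restore -vserver vs0 -volume vol3
            -snapshot daily.2013-10-03_0010
```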
Data ONTAP Snapshot Copy
[Diagram: the production active file system (blocks A-F) with two Snapshot copies, SNAP 1 and SNAP 2. As new writes overwrite blocks E and F, each Snapshot copy retains pointers to the original blocks, so only changed blocks consume additional space.]
Restore from a Snapshot
[Diagram: restoring the active file system from a Snapshot copy. The production volume reverts to the blocks that the selected Snapshot copy references, discarding changes made after that copy was created.]
CLI: Snapshot Copy Creation
To manually create Snapshot copies:
cluster1::> volume snapshot create -vserver vs0
-volume vol3 -snapshot vol3_snapshot
Snapshot Disk Consumption
Snapshot Reserve Aggregate Space
The volume snapshot show Command
netappu::> volume snap show -vserver vs7 -volume vs7_vol1
---Blocks---
Vserver Volume Snapshot Size Total% Used%
-------- ------- ---------------------------------- ------------ ------ -----
vs7 vs7_vol1
weekly.2011-09-22_0015 88KB 0% 37%
5min.2011-09-23_1120 76KB 0% 34%
5min.2011-09-23_1125 72KB 0% 33%
5min.2011-09-23_1130 92KB 0% 38%
weekly.2011-09-29_0015 56KB 0% 27%
daily.2011-10-02_0010 56KB 0% 27%
daily.2011-10-03_0010 52KB 0% 26%
hourly.2011-10-03_0605 52KB 0% 26%
hourly.2011-10-03_0705 52KB 0% 26%
hourly.2011-10-03_0805 52KB 0% 26%
hourly.2011-10-03_0905 52KB 0% 26%
hourly.2011-10-03_1005 52KB 0% 26%
hourly.2011-10-03_1105 52KB 0% 26%
13 entries were displayed.
Snapshot Policies
Created at the cluster level
Assigned at the volume level
Can be created with the CLI or OnCommand System Manager
netappu::> volume snapshot policy show
                  Number Of  Is
Name              Schedules  Enabled  Comment
----------------- ---------- -------- ----------------------------------------
default           3          true     Default policy with hourly, daily &
                                      weekly schedules.
  Schedule: hourly   Count: 6   Prefix: hourly
            daily           2           daily
            weekly          2           weekly
SNAPSHOT POLICIES
Two Snapshot policies are automatically created: default and none. If a volume uses none as its Snapshot
policy, no Snapshot copies of it will be created. If a volume uses the default policy, after two weeks, there
will be a total of ten Snapshot copies of it (six hourly copies, two daily copies, and two weekly copies).
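A sketch of creating a custom Snapshot policy at the cluster level and assigning it at the volume level (the policy name, schedule, and count are illustrative):

```
cluster1::> volume snapshot policy create -policy p_hourly12 -enabled true
            -schedule1 hourly -count1 12
cluster1::> volume modify -vserver vs0 -volume vol3 -snapshot-policy p_hourly12
```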
Snapshot Directory View
from a Windows Client
Lesson 2
The SnapMirror Engine
Is used for the volume move, volume
copy, and snapmirror commands
Uses SpinNP as the transport protocol
between the source and destination volumes
(intracluster only)
Uses a Snapshot copy of the source,
determines the incremental differences, and
transfers only the differences
Executes SnapVault backups
SnapMirror Considerations
LS and DP Mirror Copies
1 of 2
Mirror copies are read-only copies of a volume.
Mirror copies are volumes that have SnapMirror
relationships with source volumes.
Mirror copies are updated from source volumes
manually, or automatically based on a schedule.
LS mirror relationships stay within the Vserver of the
source volume.
DP mirror relationships can be within a Vserver,
between Vservers within the cluster, and between
Vservers of two different clusters.
Mirrors cannot be cascaded.
LS and DP Mirror Copies
2 of 2
A volume must be created before the volume can be
used as a mirror destination.
A SnapMirror volume must be created as type DP; an
RW volume cannot be changed to a DP mirror.
Creating a mirror relationship does not cause an initial
update to be performed.
An LS mirror copy can be promoted to become the
source volume using the snapmirror promote
command.
A DP mirror copy can be converted to a writable
volume using the snapmirror break command.
A mirror copy can be restored to its source.
The snapmirror promote Command
For LS Mirrors Only
The snapmirror promote command:
Performs a failover to a destination volume
Changes the destination volume to the new source
volume
Read-only volume becomes read-write
New source volume assumes the identity and SnapMirror
relationships of the original source volume
Destroys the original source volume
The destination volume must be an LS volume.
Client accesses are redirected from the original
source volume to the promoted destination volume.
Mirror Creation Steps
1. Create a (mirror) volume: volume create
2. Create a mirror relationship: snapmirror create
3. Perform baseline replication:
Data protection: snapmirror initialize
Load sharing: snapmirror initialize-ls-set
4. Perform incremental replication:
Data protection: snapmirror update
Load sharing: snapmirror update-ls-set
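The four steps can be sketched as follows for a DP mirror (the paths are illustrative):

```
cluster1::> volume create -vserver vs2 -volume vs2root_dp1 -aggregate aggr2
            -size 1GB -type DP
cluster1::> snapmirror create -source-path cluster1://vs2/vs2root
            -destination-path cluster1://vs2/vs2root_dp1 -type DP
cluster1::> snapmirror initialize -destination-path cluster1://vs2/vs2root_dp1
cluster1::> snapmirror update -destination-path cluster1://vs2/vs2root_dp1
```

For an LS mirror set, the last two steps use snapmirror initialize-ls-set and snapmirror update-ls-set with the source path instead.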
The snapmirror show Command
cluster1::> snapmirror show
Source Destination Mirror Relationship Total
Path Type Path State Status Progress Healthy
------------- ---- ------------ ------------- -------------- ---------- -------
cluster1://vs2/vs2root
DP cluster1://vs2/vs2root_dp1
Snapmirrored Idle - true
cluster1://vs2/vs2root_dp2
Snapmirrored Idle - true
LS cluster1://vs2/vs2root_ls2
Snapmirrored Idle - true
cluster1://vs2/vol227
XDP cluster2://vs7/xdp_vol227
Snapmirrored Idle - true
4 entries were displayed.
The snapmirror show -instance Command
cluster1::> snapmirror show -source-volume vs2root -type ls -instance
Relationship Type: LS
Tries Limit: 8
Transfer Snapshot: -
Snapshot Progress: -
Total Progress: -
Snapshot Checkpoint: -
Healthy: true
LS Mirror Copies
LS mirror copies are primarily used for load sharing
(balancing) when client read access is used.
Read access requests for a volume are distributed to the
volume's LS mirror copies, unless the special .admin path is
used.
LS mirror copies are automatically available in the
namespace.
LS mirror copies are implicitly accessed by clients (for read
access).
Junctions are accessible in LS mirror copies.
LS mirror copies are always replicated as a group.
A source volume can have a maximum of one LS mirror copy
per node.
LS MIRROR COPIES
The purpose of LS mirror copies is to offload read activity from volumes (and from a single data module). Therefore,
all mirror copies must be synchronized at the same data-version level. When a volume is replicated to its LS
mirror copies, all LS mirror copies of the volume are synchronized directly from the volume (without
cascading).
The way that NFS is mounted on a client, or which CIFS share is mapped to the client, changes which data is
accessed: either the read/write volume or one of its LS mirror copies. NFS is usually mounted at the root of a
Vserver by using a command such as mount <host>:/ /myvserver. This command causes the LS
selection algorithm to be invoked. If, however, the NFS mount command is executed by using the .admin
path, such as mount <host>:/.admin /myvserver, this mount from the client always accesses the
read/write volumes when traversing the namespace, even if there are LS mirror copies for volumes.
For CIFS, the difference is not in how a share is accessed but in which share is accessed. If you create a share
for the .admin path and use that share, the client always has read/write access. If you create a share without
using .admin, the LS selection algorithm is used.
Unless the special .admin path is used, clients are transparently directed to an LS mirror copy for read
operations rather than to the read/write volume.
LS Mirror Selection
The Data ONTAP operating system:
If an LS mirror copy is on the same node as the network module
that fields the request, the network module uses that LS mirror
copy.
If no LS mirror copy is on the same node as the network module
that fields the request, the network module uses an up-to-date
LS mirror copy on another node.
NFS and CIFS:
NFS: A new LS mirror can be selected even if a file remains
open.
CIFS: A new LS mirror is not selected while a file remains open.
LS MIRROR SELECTION
When the / path is used (that is, the .admin path is not used) and a read or write request comes through that
path into the network module of a node, the network module first determines whether there are any LS mirror copies
of the volume that it needs to access. If there aren't any LS mirror copies of that volume, the read request is
routed to the read/write volume. If there are LS mirror copies of the volume, preference is given to an LS
mirror copy on the same node as the network module that fielded the request. If there isn't an LS mirror copy
on that node, an up-to-date LS mirror copy from another node is chosen.
If a write request goes to an LS mirror copy, it returns an error to the client, which indicates that the file
system is read-only. To write to a volume that has LS mirror copies, you must use the .admin path.
For NFS clients, an LS mirror copy is used for a set period of time (minutes), after which a new LS mirror
copy is chosen. After a file is opened, different LS mirror copies can be used across different NFS operations.
The NFS protocol can manage the switch from one LS mirror copy to another.
For CIFS clients, the same LS mirror copy continues to be used for as long as a file is open. After the file is
closed, and the period of time expires, a new LS mirror copy is selected before the next time that a file is
opened. CIFS clients use this process because the CIFS protocol cannot manage the switch from one LS
mirror copy to another.
Typical LS Mirror Issues
1 of 2
Client machines cannot see volumes that have
been created.
The volume must be mounted (given a
junction path) to the namespace.
Replicate the parent volume.
Typical LS Mirror Issues
2 of 2
Client requests always go to the source volume rather than
to the LS mirror copy. This issue occurs when the client is
mounted by using the .admin path or share.
Because the mount is read-only, client write requests fail.
This issue occurs when the client is not mounted by using the
.admin path or share.
For read/write NFS access to a volume that has LS mirror
copies, clients must be mounted by using the .admin path.
For read/write CIFS access to a volume that has LS mirror
copies, a specific volume .admin CIFS share must be created,
and the clients must connect to that share.
FlexCache Volumes and LS Mirror Volumes
1 of 2
[Diagram: an origin volume (blocks A, B), FlexCache volumes that cache individual blocks on demand, and LS mirror volumes that hold complete read-only copies of the origin volume.]
FlexCache Volumes and LS Mirror Volumes
2 of 2
DP Mirror Copies
1 of 2
DP mirror copies are not implicitly accessed by
clients.
DP mirror copies can be mounted (through a junction)
into the namespace by the administrator.
In DP mirror copies, junctions are not accessible.
Each DP mirror copy replication is independent of the
LS mirror copies and of other DP mirror copies of the
same source volume.
DP MIRROR COPIES: 1 OF 2
Data-protection mirror copies are not meant for client access, although they can be mounted into the
namespace by an administrator. Junctions cannot be followed in a data-protection mirror copy, so access is
given to only the data that is contained in that data-protection mirror copy, not to any other volumes that are
mounted to the source read/write volume.
Data-protection mirror copies are primarily meant for disk-based online backups. Data-protection mirror
copies are simpler, faster, more reliable, and easier to restore than tape backups are, although data-protection
mirror copies are not portable for storing offsite. A typical use of data-protection mirror copies is to put them
on aggregates of SATA disks that use RAID-DP technology and then mirror data to them daily during the
least active time in the cluster. One data-protection mirror copy per volume is generally sufficient.
DP Mirror Copies
2 of 2
Consider using inexpensive, high-capacity (and slower)
SATA disks for DP mirror copies.
DP mirror copies can be restored or resynchronized:
To restore a mirror copy is to re-create a broken SnapMirror
relationship such that destination changes overwrite the
source data.
To resynchronize a mirror copy is to re-create a broken
SnapMirror relationship such that source changes overwrite
the destination data.
You can restore and resynchronize to a new volume.
DP MIRROR COPIES: 2 OF 2
A feature that is available only for data-protection mirror copies is the ability to perform a SnapMirror restore.
This action can restore a broken mirror relationship between a source and destination and perform an
incremental overwrite of the source volume with the current contents of the mirror destination. If the restore is
performed between a source and destination that didn't formerly have a SnapMirror relationship, a baseline
copy of the destination contents is performed to the source volume.
Resynchronizing a source and destination is similar to restoring a source and destination, except that the
source content overwrites the destination content.
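As a sketch (the destination path is illustrative), a broken DP relationship can be resynchronized so that source changes overwrite the destination:

```
cluster1::> snapmirror break -destination-path cluster1://vs2/vs2root_dp1
cluster1::> snapmirror resync -destination-path cluster1://vs2/vs2root_dp1
```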
Intercluster Logical Interfaces and Ports
Intercluster LIFs (new in Data ONTAP 8.1):
Share data ports with data LIFs, or use dedicated intercluster ports
Are node scoped: they fail over only to other intercluster-capable ports on the same node
[Diagram: cluster LIFs, data LIFs, and intercluster LIFs (IP addresses) on ports e0a-e0d and interface group ifgrp1.]
NetApp Confidential 32
12-32 Clustered Data ONTAP Administration: Data Protection: Snapshot and SnapMirror Copies
Intercluster SnapMirror Replication
Replication between clusters for DR
Data transfers on intercluster network
RW Source volume
DP Destination volume
NetApp Confidential 33
12-33 Clustered Data ONTAP Administration: Data Protection: Snapshot and SnapMirror Copies
Intercluster Networking for SnapMirror
NetApp Confidential 34
12-34 Clustered Data ONTAP Administration: Data Protection: Snapshot and SnapMirror Copies
Cluster and Vserver Peering
Supported relationships include:
Intercluster (Cluster and Vserver peers required)
Intracluster (Vserver peer required)
12-35 Clustered Data ONTAP Administration: Data Protection: Snapshot and SnapMirror Copies
Starting in clustered Data ONTAP 8.2, SnapMirror security is more granular. Replication permission
must be defined by peering Storage Virtual Machines together. Before you create any SnapMirror
relationships between a pair of Storage Virtual Machines, you must create a peer relationship between
them. The Storage Virtual Machines can be local (intracluster) or remote (intercluster). Storage
Virtual Machine peering is a permission-based mechanism and a one-time operation that must be performed
by the cluster administrators.
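A minimal sketch of the one-time setup, assuming clusters named cluster1 and cluster2 and Storage Virtual Machines vs1 and vs2 (all names and addresses are hypothetical):

```
# Cluster peering (required for intercluster relationships only)
cluster1::> cluster peer create -peer-addrs cluster2_icl1,cluster2_icl2

# Storage Virtual Machine peering; the peer cluster must accept
cluster1::> vserver peer create -vserver vs1 -peer-vserver vs2
            -applications snapmirror -peer-cluster cluster2
cluster2::> vserver peer accept -vserver vs2 -peer-vserver vs1
```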
12-36 Clustered Data ONTAP Administration: Data Protection: Snapshot and SnapMirror Copies
SnapMirror Snapshot Copies
1 of 2
bcluster1::> vol snap show -vserver vs2 -volume vs2root
(volume snapshot show)
---Blocks---
Vserver Volume Snapshot Size Total% Used%
-------- ------- ---------------------------------- ------------ ------ -----
vs2 vs2root
weekly.2011-10-02_0015 84KB 0% 1%
daily.2011-10-04_0010 80KB 0% 1%
snapmirror.79deda29-e8a6-11e0-b411-
123478563412_4_2147484684.2011-10-04_052359
                                             92KB    0%     1%
hourly.2011-10-04_2105 72KB 0% 1%
hourly.2011-10-04_2205 72KB 0% 1%
hourly.2011-10-04_2305 72KB 0% 1%
hourly.2011-10-05_0005 72KB 0% 1%
daily.2011-10-05_0010 72KB 0% 1%
hourly.2011-10-05_0105 72KB 0% 1%
NetApp Confidential 36
12-37 Clustered Data ONTAP Administration: Data Protection: Snapshot and SnapMirror Copies
SnapMirror Snapshot Copies
2 of 2
---Blocks---
Vserver Volume Snapshot Size Total% Used%
-------- ------- ---------------------------------- ------------ ------ -----
snapmirror.79deda29-e8a6-11e0-b411-
123478563412_4_2147484683.2011-10-05_020500
                                             60KB    0%     1%
hourly.2011-10-05_0205 72KB 0% 1%
snapmirror.79deda29-e8a6-11e0-b411-
123478563412_4_2147484676.2011-10-05_023500
                                             72KB    0%     1%
12 entries were displayed.
NetApp Confidential 37
12-38 Clustered Data ONTAP Administration: Data Protection: Snapshot and SnapMirror Copies
Module Summary
Now that you have completed this module, you should
be able to:
Create a Snapshot copy of a volume and create
Snapshot policies
Create load-sharing (LS) and data-protection (DP)
mirror copies
Manually and automatically replicate mirror copies
Promote an LS mirror copy to replace its read/write
volume
Restore a Snapshot copy to be a read/write volume
Configure Vserver and cluster peering for data
protection
NetApp Confidential 38
MODULE SUMMARY
12-39 Clustered Data ONTAP Administration: Data Protection: Snapshot and SnapMirror Copies
Exercise
Module 12: Data Protection:
Snapshot and SnapMirror Copies
Time Estimate: 60 minutes
NetApp Confidential 39
EXERCISE
Please refer to your exercise guide.
12-40 Clustered Data ONTAP Administration: Data Protection: Snapshot and SnapMirror Copies
Module 13
Data Protection: Backups and
Disaster Recovery
NetApp Confidential 1
13-1 Clustered Data ONTAP Administration: Data Protection: Backups and Disaster Recovery
Module Objectives
NetApp Confidential 2
MODULE OBJECTIVES
13-2 Clustered Data ONTAP Administration: Data Protection: Backups and Disaster Recovery
Lesson 1
NetApp Confidential 3
LESSON 1
13-3 Clustered Data ONTAP Administration: Data Protection: Backups and Disaster Recovery
SnapVault Software for Clusters
NetApp Confidential 4
13-4 Clustered Data ONTAP Administration: Data Protection: Backups and Disaster Recovery
SnapVault Benefits
NetApp Confidential 5
SNAPVAULT BENEFITS
13-5 Clustered Data ONTAP Administration: Data Protection: Backups and Disaster Recovery
Use SnapMirror Commands
NetApp Confidential 6
13-6 Clustered Data ONTAP Administration: Data Protection: Backups and Disaster Recovery
SnapMirror Types
NetApp Confidential 7
SNAPMIRROR TYPES
In clustered Data ONTAP, SnapMirror technology is organized to include several types of replication
relationships.
DP is for asynchronous data protection mirror relationships.
LS is for load-sharing mirror relationships.
XDP is for backup vault relationships.
TDP is for transition relationships from Data ONTAP running in 7-Mode to clustered Data ONTAP.
RST is a transient relationship for restore operations.
13-7 Clustered Data ONTAP Administration: Data Protection: Backups and Disaster Recovery
SnapMirror Commands for SnapVault
snapmirror create
snapmirror initialize
snapmirror modify
snapmirror policy -type XDP
snapmirror show
snapmirror update
snapmirror restore
NetApp Confidential 8
13-8 Clustered Data ONTAP Administration: Data Protection: Backups and Disaster Recovery
Snapshot Policy on the Source Volume
cluster1::> snapshot policy show
Vserver: cluster1
Number of Is
Policy Name Schedules Enabled Comment
------------------------ --------- ------- ---------------------------------
default 3 true Default policy with hourly, daily
& weekly schedules.
Schedule Count Prefix SnapMirror Label
---------------------- ----- ---------------------- ----------------
hourly 6 hourly -
daily 2 daily daily
weekly 2 weekly weekly
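The SnapMirror labels shown above are what a vault policy on the destination matches on. As a hedged sketch (policy, Vserver, and retention values are hypothetical), rules select which labeled Snapshot copies to transfer and how many to retain:

```
cluster2::> snapmirror policy create -vserver vs_backup -policy vault_policy
cluster2::> snapmirror policy add-rule -vserver vs_backup -policy vault_policy
            -snapmirror-label daily -keep 30
cluster2::> snapmirror policy add-rule -vserver vs_backup -policy vault_policy
            -snapmirror-label weekly -keep 13
```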
NetApp Confidential 9
13-9 Clustered Data ONTAP Administration: Data Protection: Backups and Disaster Recovery
Steps for Creating a SnapVault
Relationship on the Destination Cluster
1. Create a destination data aggregate.
2. Create a SnapVault Vserver.
3. Create a destination volume.
4. Create a SnapMirror policy (for a -type XDP relationship).
5. Create the SnapVault relationship.
6. Initialize the SnapVault relationship.
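Steps 3 through 6 can be sketched as follows on the destination cluster (all names and sizes are hypothetical):

```
cluster2::> volume create -vserver vs_backup -volume vol1_vault
            -aggregate aggr_vault -size 20g -type DP
cluster2::> snapmirror create -source-path vs1:vol1
            -destination-path vs_backup:vol1_vault -type XDP
            -policy vault_policy
cluster2::> snapmirror initialize -destination-path vs_backup:vol1_vault
```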
NetApp Confidential 10
13-10 Clustered Data ONTAP Administration: Data Protection: Backups and Disaster Recovery
Fan-In Deployments
Multiple clusters/Vservers backing up to single cluster or
Vserver
Affected by cluster peer limit (7:1 cluster fan-in limit in 8.2)
NetApp Confidential 11
FAN-IN DEPLOYMENTS
Clustered Data ONTAP supports system-level fan-in. Because replication is done at the volume level, you
cannot have multiple source volumes backing up to the same destination volume. However, similar to the way
that multiple source qtrees could back up to one volume with 7-Mode SnapVault, volumes from different
Vservers and different clusters can back up to volumes on the same Vserver. To configure fan-in, you must
set up cluster peers. Because the current limit in Data ONTAP 8.2 is 8 cluster peers, volumes from a
maximum of 7 different source clusters can back up to a single destination cluster (the 7:1 fan-in limit).
13-11 Clustered Data ONTAP Administration: Data Protection: Backups and Disaster Recovery
Fan-Out Deployments
SnapMirror and SnapVault of single primary volume
1:4 fan-out possible (Can be any combination of
SnapMirror and SnapVault)
NetApp Confidential 12
FAN-OUT DEPLOYMENTS
Up to four SnapVault destination volumes can be replicated from the same source volume. The limit of four
destination volumes is shared between SnapMirror and SnapVault; therefore, the 1:4 ratio applies to the total
number of SnapMirror and SnapVault relationships in any combination.
13-12 Clustered Data ONTAP Administration: Data Protection: Backups and Disaster Recovery
Cascades
SnapMirror SnapVault
SnapVault SnapMirror
Only one SnapVault replication supported in a cascade
Cascade configuration transfers the SnapMirror base
Snapshot copy to SnapVault destination
NetApp Confidential 13
CASCADES
Supported cascade relationships include SnapMirror to SnapVault and SnapVault to SnapMirror.
Cascade relationships can contain only one instance of a SnapVault relationship; however, you can include as
many mirror copies as you require.
The cascade function is designed to guarantee that all volumes in a cascade chain have a common Snapshot
copy. The common Snapshot copy makes it possible for any pair of end points in a cascade to establish a
direct relationship.
13-13 Clustered Data ONTAP Administration: Data Protection: Backups and Disaster Recovery
The snapmirror restore Command
NetApp Confidential 14
13-14 Clustered Data ONTAP Administration: Data Protection: Backups and Disaster Recovery
Comparing SnapVault to SnapMirror
SnapMirror: If the source FlexVol volume is lost or destroyed, clients can connect to the mirror
image of the source data.
SnapVault: The read-only SnapVault copy can be rendered writable only by creating a FlexClone
volume copy.
NetApp Confidential 15
13-15 Clustered Data ONTAP Administration: Data Protection: Backups and Disaster Recovery
SMTape to Seed Baselines
Diagram: SMTape seeds the baseline copy to the destination; a SnapMirror resync then continues
replication incrementally over the network.
NetApp Confidential 16
13-16 Clustered Data ONTAP Administration: Data Protection: Backups and Disaster Recovery
Lesson 2
NetApp Confidential 17
LESSON 2
13-17 Clustered Data ONTAP Administration: Data Protection: Backups and Disaster Recovery
Backup and Restoration with NDMP
You can perform local NDMP, remote NDMP, and
three-way NDMP backups.
NDMPv4
Direct Access Recovery (DAR)
A clustered Data ONTAP system does not
provide native NDMP backup and restoration,
only NDMP through third-party software.
Backups do not traverse junctions; you must list
every volume to be backed up.
You should not back up directly through NFS
or CIFS.
NetApp Confidential 18
13-18 Clustered Data ONTAP Administration: Data Protection: Backups and Disaster Recovery
Vserver-aware NDMP
Diagram: a CAB-enabled data management application (DMA) server performs remote NDMP, establishing
an NDMP control connection to a data or intercluster LIF on the Vserver that owns the target volume.
NetApp Confidential 19
VSERVER-AWARE NDMP
Clustered Data ONTAP now enables NDMP to function at the Vserver level. Resources, including FlexVol
volumes, can be backed up, restored, and scoped. Vserver-aware backups are critical for implementing multi-
tenancy.
For NDMP to be aware of a Vserver, the NDMP data management application software must be enabled with
cluster-aware backup (CAB) extensions, and the NDMP service must be enabled on the Vserver. After the
feature is enabled, you can back up and restore all volumes that are hosted across all nodes in the Vserver. An
NDMP control connection can be established on different LIF types. An NDMP control connection can be
established on any data or intercluster LIF that is owned by a Vserver that is enabled for NDMP and owns the
target volume. If a volume and tape device share the same affinity, and if the data-management application
supports the cluster-aware backup extensions, then the backup application can perform a local backup or
restore operation and, therefore, you do not need to perform a three-way backup or restore operation.
Vserver-aware NDMP user authentication is integrated with the role-based access control mechanism. For
more information about Vserver-aware NDMP and cluster-aware backup extensions, see the Clustered Data
ONTAP Data Protection Tape Backup and Recovery Guide.
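Enabling the NDMP service at the Vserver level might look like the following (the Vserver name is hypothetical):

```
cluster1::> vserver services ndmp on -vserver vs1
cluster1::> vserver services ndmp show
```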
13-19 Clustered Data ONTAP Administration: Data Protection: Backups and Disaster Recovery
Configuring for NDMP
1. Enable and configure NDMP on the node or nodes:
cluster1::> system services ndmp modify
2. Identify tape and library attachments:
cluster1::> system node hardware tape drive
show
cluster1::> system node hardware tape library
show
3. Configure the data management application (such as
Symantec NetBackup) for NDMP.
NetApp Confidential 20
13-20 Clustered Data ONTAP Administration: Data Protection: Backups and Disaster Recovery
Clustered Data ONTAP and NDMP
Clustered Data ONTAP supports the Symantec
NetBackup and IBM Tivoli Storage Manager (TSM)
data management applications, and more are being
added.
Clustered Data ONTAP supports local NDMP, remote
NDMP, and three-way NDMP backup.
A data management application with DAR can restore
selected files without sequentially reading entire
tapes.
NetApp Confidential 21
13-21 Clustered Data ONTAP Administration: Data Protection: Backups and Disaster Recovery
Local, Remote, and Three-Way NDMP
Diagram: NDMP backup hosts on the LAN perform remote NDMP through the VERITAS NetBackup server;
a local backup writes to a directly attached tape drive, and a three-way backup sends data over the
network to an automated tape library attached to another system.
NetApp Confidential 22
13-22 Clustered Data ONTAP Administration: Data Protection: Backups and Disaster Recovery
A Six-Node Cluster with Data-Protection
Mirror Backups
Diagram: a six-node cluster with a cluster network and a data network; a compute farm accesses the
cluster over the data network; SATA storage holds data-protection mirror copies, which are then
backed up to a tape library.
NetApp Confidential 23
13-23 Clustered Data ONTAP Administration: Data Protection: Backups and Disaster Recovery
Lesson 3
NetApp Confidential 24
LESSON 3
13-24 Clustered Data ONTAP Administration: Data Protection: Backups and Disaster Recovery
SnapMirror Data Protection
Create
Intracluster or intercluster replication (TR-4015)
Diagram: read/write source volume A is replicated to data-protection (DP) destination volume B,
either within a cluster or across a WAN to another cluster.
NetApp Confidential 25
13-25 Clustered Data ONTAP Administration: Data Protection: Backups and Disaster Recovery
SnapMirror Data Protection
Initialize and Update
Diagram: volume A in the primary data center is initialized and then updated to DP destination
volume B in the DR data center across the WAN.
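The create-initialize-update sequence can be sketched as follows (all paths and the schedule are hypothetical):

```
# On the destination cluster: create the relationship, transfer the
# baseline, then perform an incremental update
cluster2::> snapmirror create -source-path vs1:volA
            -destination-path vs2:volA_dp -type DP -schedule hourly
cluster2::> snapmirror initialize -destination-path vs2:volA_dp
cluster2::> snapmirror update -destination-path vs2:volA_dp
```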
NetApp Confidential 26
13-26 Clustered Data ONTAP Administration: Data Protection: Backups and Disaster Recovery
Failover Considerations
SnapMirror does not replicate the Vserver namespace
junction path information.
Because the NAS volumes have no junction path,
they will not be accessible after a SnapMirror break
occurs unless they are premounted before failover, or
until they are mounted after failover.
The security style and permissions on the destination
Vserver root volume must be set correctly or the
namespace might be inaccessible after failover.
Use the Cluster Config Dump Tool to collect and
replicate system configuration settings to a disaster
recovery site.
NetApp Confidential 27
FAILOVER CONSIDERATIONS
Currently, failover is a manual task. If there are multiple volumes in the namespace, failover must be
repeated for each volume.
The Cluster Config Dump Tool (http://communities.netapp.com/thread/17921) is a Java-based
Windows/Linux/Mac utility that collects configuration information.
The tool stores information that is needed in a disaster recovery scenario:
Volume junction paths
NFS export policies, CIFS shares information
Snapshot and storage efficiency policies
LUN mapping information
Run the tool locally and replicate, or run it remotely.
The tool does not restore a configuration.
13-27 Clustered Data ONTAP Administration: Data Protection: Backups and Disaster Recovery
SnapMirror Data Protection
Disaster
Diagram: after a disaster at the primary data center, the DP destination volume B at the DR data
center is broken off and becomes the read/write source.
NOTE: Admin must redirect the clients (or host) of the source
volume on the primary site to the new source volume at the DR
site in a disaster situation.
NetApp Confidential 28
13-28 Clustered Data ONTAP Administration: Data Protection: Backups and Disaster Recovery
Disaster Considerations
NetApp Confidential 29
DISASTER CONSIDERATIONS
Currently, breaking the mirror relationship and redirecting clients is a manual task. If there are multiple
volumes in the namespace, the steps are repeated for each volume.
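A hedged sketch of activating the DR copy for one volume (paths and junction are hypothetical); the volume must be mounted into the namespace because SnapMirror does not replicate junction paths:

```
cluster2::> snapmirror break -destination-path vs2:volA_dp
cluster2::> volume mount -vserver vs2 -volume volA_dp -junction-path /volA
```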
13-29 Clustered Data ONTAP Administration: Data Protection: Backups and Disaster Recovery
SnapMirror Data Protection
Recovery Scenario ADev/Test Recovery
NOTE: All new data written to the destination after the break will
be deleted.
NetApp Confidential 30
13-30 Clustered Data ONTAP Administration: Data Protection: Backups and Disaster Recovery
SnapMirror Data Protection
Recovery Scenario BSource Is Recoverable
NetApp Confidential 31
13-31 Clustered Data ONTAP Administration: Data Protection: Backups and Disaster Recovery
SnapMirror Data Protection
Recovery Scenario BChange Relationship
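Changing the direction of the relationship after the original source recovers might be sketched as follows (hypothetical paths; the old relationship is removed first, then a reversed relationship is created and resynchronized):

```
cluster2::> snapmirror delete -destination-path vs2:volA_dp
cluster1::> snapmirror create -source-path vs2:volA_dp
            -destination-path vs1:volA -type DP
cluster1::> snapmirror resync -destination-path vs1:volA
```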
NetApp Confidential 32
13-32 Clustered Data ONTAP Administration: Data Protection: Backups and Disaster Recovery
SnapMirror Data Protection
Recovery Scenario CSource Is Unrecoverable
NetApp Confidential 33
13-33 Clustered Data ONTAP Administration: Data Protection: Backups and Disaster Recovery
RDB Disaster Recovery
NetApp Confidential 34
13-34 Clustered Data ONTAP Administration: Data Protection: Backups and Disaster Recovery
Module Summary
NetApp Confidential 35
MODULE SUMMARY
13-35 Clustered Data ONTAP Administration: Data Protection: Backups and Disaster Recovery
Exercise
Module 13: Data Protection:
Backups and Disaster Recovery
Time Estimate: 30 minutes
NetApp Confidential 36
EXERCISE
Please refer to your exercise guide.
13-36 Clustered Data ONTAP Administration: Data Protection: Backups and Disaster Recovery
Module 15
Recommended Practices
NetApp Confidential 1
Module Objectives
NetApp Confidential 2
MODULE OBJECTIVES
Recommended Practices for Nondisruptive
Operations (NDO) RAID-DP Technology
Create a dedicated, three-disk RAID-DP root
aggregate on each node.
Best practices for ONTAP 7G and Data ONTAP 7-
Mode RAID and storage still apply.
Use RAID-DP technology for all user-data
aggregates.
Use RAID-DP technology to enable online disk
firmware upgrades.
Maintain two spare disks per disk type to allow for
disk maintenance center and NDU of disk firmware.
NetApp Confidential 3
Recommended Practices for NDO
SFO
Enable storage failover (SFO).
Reboot the high-availability (HA) pair after
enabling SFO for the first time.
Enable two-node high availability for clusters
that contain only two nodes.
Consider the advantages and disadvantages
of automatic giveback.
NetApp Confidential 4
Recommended Practices for NDO
Nondisruptive Upgrade (NDU)
You can perform upgrades in stages.
Rolling upgrades are becoming the norm.
You can reboot multiple nodes in parallel, depending
on number of nodes in the cluster.
You should use an HTTP or FTP server as your
primary means of performing package downloads.
Remember to revert the logical interfaces (LIFs) back
to their home ports after each node reboots (or set the
automatic reversion option).
NetApp Confidential 5
Recommended Practices for NDO
Mixed-Version Clusters and LIF Failover
Mixed-version clusters are supported with
caveats that are specific to each version.
You should use the default configuration of
LIF failover and manually assign policies for
any exceptions:
First-level failover: same node, different
network interface card (NIC)
Second-level failover: different node (not the
HA partner)
NetApp Confidential 6
Recommended Practices for NDO
Load-Sharing Mirror Copies
Place load-sharing mirror copies of the virtual
server (Vserver) root volume onto all nodes or
at least onto one node of each HA pair:
This configuration enables continuous access,
even if the node with the Vserver root volume
is down.
Because default access is to a load-sharing
mirror copy (a read-only volume), this
configuration prevents the root volume from
filling up accidentally.
NetApp Confidential 7
Recommended Practices for NDO
Servers and Locality
Where possible, configure multiple Domain
Name System (DNS), Network Information
Service (NIS), Lightweight Directory Access
Protocol (LDAP), and Network Time Protocol
(NTP) servers.
Time zone settings should be the same
across all nodes.
Language settings should be consistent
among Vservers and volumes.
NetApp Confidential 8
Recommended Practices for Resource
Balancing 1 of 2
Balance resources across the cluster:
Data and cluster interfaces
Flexible volumes of a namespace
Load-sharing mirror copies
Maintain a junction-only Vserver root volume
with a low change rate, and create multiple
load-sharing mirror copies of the volume.
NetApp Confidential 9
Recommended Practices for Resource
Balancing 2 of 2
Use built-in DNS load balancing to balance
NAS client connections across network
interfaces:
Create many data LIFs for the cluster.
Consider creating dedicated LIFs for NFS and
SMB protocols respectively.
Assign LIFs evenly to available network ports.
Monitor network use levels and migrate LIFs to
different ports as needed to rebalance the load.
When many clients are attached, the clients are
evenly spread across the system.
NetApp Confidential 10
Recommended Practices for
Load-Sharing Mirror Copies
Use load-sharing mirror copies:
For read-only or mostly read-only data
When data is updated only by a few authorized
individuals or applications
When the data set is relatively small, or the cost of
the mirror copies in disk space is justified
To netboot many clients at the same time (which is
a read-only operation and a popular use of load-
sharing mirror copies)
Schedule load-sharing mirror copies to be
automatically replicated every hour.
NetApp Confidential 11
Recommended Practices for
Intercluster Mirror Copies
A full mesh intercluster network supports node
failover and volume moves of the source or
destination volumes.
Intercluster LIFs can be created on ports that
have an intercluster role or a data role
(through the CLI).
NetApp Confidential 12
Recommended Practices for
Manageability Granularity
When you are deciding whether to create a volume, a
directory, or a qtree, ask these questions:
Will this element benefit from being managed or
protected separately?
How large will this element get?
Greater volume granularity is beneficial for many
workflows and enables movement of volumes and
resource distribution.
Larger volumes tend to yield better compression and
deduplication ratios.
NetApp Confidential 13
Recommended Practices for
Manageability Volume Naming Conventions
Volume names and junction names are
distinct.
Each volume name must be unique within the
Vserver.
Volume names should be wildcard-friendly.
Volumes can be grouped by name (in
alphanumeric order).
Volume names should use consistent case (all
lowercase or all uppercase).
NetApp Confidential 14
Recommended Practices for Networking
NetApp Confidential 15
Recommended Practices for Disaster
Recovery 1 of 2
Enable Snapshot copies and data-protection
mirror copies for critical volumes.
Consider putting data-protection mirror copies
on SATA disks:
The use of data-protection mirror copies on
SATA disks is a disk-based backup solution.
Intercluster data-protection mirror copies can
be used for off-site backups.
NetApp Confidential 16
Recommended Practices for Disaster
Recovery 2 of 2
Plan disaster-recovery implementations
carefully by considering quorum and majority
rules. (You can recover an out-of-quorum site,
but doing so is not customer-friendly.)
Use NDMP to back up important volumes to
tape.
Have a policy for rotating backups off-site for
disaster recovery.
NetApp Confidential 17
Module Summary
NetApp Confidential 18
MODULE SUMMARY
Course Summary
1 of 2
Now that you have completed this course, you should be
able to:
Explain the primary benefits of a Data ONTAP cluster
Create a cluster
Implement role-based administration
Manage the physical and logical resources within a cluster
Manage features to guarantee nondisruptive operations
Discuss storage and RAID concepts
Create aggregates
List the steps that are required to enable storage failover
(SFO)
NetApp Confidential 19
COURSE SUMMARY: 1 OF 2
Course Summary
2 of 2
Create a Flash Pool
Build a namespace using multiple volumes
Configure FlexCache
Create an Infinite Volume
Identify supported cluster interconnect switches
Set up and configure SAN and NAS protocols
Configure the storage-efficiency features
Administer mirroring technology and data protection
Explain the notification capabilities of a cluster
Scale a cluster horizontally
Configure the storage QoS feature
NetApp Confidential 20
COURSE SUMMARY: 2 OF 2
NetApp Confidential 21
THANK YOU
Appendix A
Technical Reports and
Knowledge Base Articles
NetApp Confidential 1
A-1 Clustered Data ONTAP Administration: Appendix: Technical Reports and Knowledge Base Articles
Technical Reports
TR-3450: High-Availability Overview and Best Practices
TR-3802: Ethernet Storage Best Practices
TR-3832: Flash Cache Best Practices Guide
TR-3967: Deployment and Best Practices Guide for Clustered Data ONTAP Windows File
Services
TR-3982: Clustered Data ONTAP 8.2: An Introduction
TR-3966: Compression and Deduplication for Clustered Data ONTAP
TR-4015: SnapMirror Configuration and Best Practices for Clustered Data ONTAP
TR-4037: Introduction to NetApp Infinite Volume
TR-4067: Clustered Data ONTAP NFS Implementation Guide
TR-4070: NetApp Flash Pool Design and Implementation Guide
TR-4078: Infinite Volume Technical FAQ
TR-4080: Best Practices for Scalable SAN in Clustered Data ONTAP
TR-4129: Namespaces in Clustered Data ONTAP
TR-4183: SnapVault Best Practices Guide for Clustered Data ONTAP
TR-4182: Best Practices for Clustered Data ONTAP Network Configurations
TR-4186: Nondisruptive Operations (NDO) Overview
NetApp Confidential 2
TECHNICAL REPORTS
A-2 Clustered Data ONTAP Administration: Appendix: Technical Reports and Knowledge Base Articles
Knowledge Base Articles
KB-1013801: How to set up DNS load balancing in
Clustered Data ONTAP
KB-1013831: How to create and understand Vserver
name-mapping rules in Clustered Data ONTAP
NetApp Confidential 3
A-3 Clustered Data ONTAP Administration: Appendix: Technical Reports and Knowledge Base Articles