HCNA-Storage

Huawei Certified

HCNA-Storage BSSN

Huawei Certified Network Associate - Storage

Huawei Technologies Co., Ltd.

Copyright © Huawei Technologies Co., Ltd. 2015. All rights reserved.

No part of this document may be reproduced or transmitted in any form or by any means without prior written consent of Huawei Technologies Co., Ltd.

Trademarks and Permissions

HUAWEI and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd. All other trademarks and trade names mentioned in this document are the property of their respective holders.

Notice

The information in this document is subject to change without notice. Every effort has been made in the preparation of this document to ensure accuracy of the contents, but all statements, information, and recommendations in this document do not constitute a warranty of any kind, express or implied.

Huawei Certified

Huawei Certified Network Storage Associate

BSSN Building the Structure of Storage Network

Version 3.0

Huawei Certification System

Relying on its strong technical and professional training system, and taking into account customers' different levels of ICT expertise, Huawei certification is committed to providing customers with authentic, professional certification. Based on the characteristics of ICT technologies and customers' needs at different levels, Huawei certification provides customers with a four-level certification system.

HCNA-Storage BSSN (Huawei Certified Network Associate - Storage, Building the Structure of Storage Network) training aims to guide participants in learning the contents related to the HCNA-Storage exam. The training covers the knowledge, technologies and applications of SAN and NAS, as well as the structure, networking, connection, deployment and troubleshooting of Huawei SAN storage systems.

HCNP-Storage certification is positioned as ability construction for IT information storage professional engineers and storage solution experts. The curriculum includes, but is not limited to, the following: SAN, NAS, backup and DS technology, unified storage system principles and application, and Huawei storage solution planning, deployment, troubleshooting and maintenance.

HCIE-Storage (Huawei Certified Internetwork Expert - Storage) is designed to equip engineers with a broad range of storage network and system technologies and with proficiency in the maintenance, diagnostics and troubleshooting of Huawei solutions, giving them the competence to plan, design and optimize large-scale ICT solutions.

Foreword

Outline

HCNA-Storage covers storage technologies (RAID, SCSI, iSCSI, FC) and their applications, focusing on IP SAN and FC SAN architecture, networking, connectivity, deployment and maintenance.

Content

The course contains a total of 11 chapters:

Chapter 1 describes what information is, the life cycle of data, the concept of business continuity, and the components of an ICT infrastructure.

Chapter 2 describes what DAS is, SCSI technology, hard disk technology, and solid state technology.

Chapter 3 describes what NAS is, Ethernet basics, and Ethernet hardware components.

Chapter 4 describes the ideal ICT infrastructure, the Fibre Channel storage protocol, and the iSCSI storage protocol for IP SAN.

Chapter 5 describes traditional RAID technology.

Chapter 6 describes basic concepts of Big Data, object-based storage technologies, and the key technologies of the OceanStor 9000 Big Data storage system.

Chapter 7 describes backup concepts and topologies, backup technologies, and an introduction to disaster recovery.

Chapter 8 describes the concepts and background of cloud computing, the modules of cloud computing, and Huawei FusionCloud products.

Chapter 9 describes Huawei storage products, Huawei RAID 2.0+, Huawei platform improvements, Huawei NAS products, Huawei backup products, and the Huawei licensing policy.

Chapter 10 describes how to initialize and configure a Huawei OceanStor system, file systems and storage, and maintenance jobs.

Chapter 11 describes Data Coffer, pre-emptive replacements, firmware and updates, and the principles of HyperSnap, SmartThin, HyperClone, SmartTier, and HyperReplication.

After completing this course, you should be able to plan and deploy SAN networks and storage systems; install, deploy, and maintain Huawei SAN storage products; and work as a qualified SAN storage engineer or system administrator.

Readers' Knowledge Background

Have basic network knowledge

Have basic computer knowledge

Have basic knowledge of Windows/Linux


Icons Used in This Book

(Icons shown: FC Switch, GE Switch, Storage Array, Host.)

HCNA

Introduction to storage

www.huawei.com
Table of Contents

Chapter 1 Data Management Introduction
Data Management  11
Information Life Cycle Management  17
The Value of Data  23
Components of an ICT Infrastructure  27
Questions  30
Exam Preparation  31

Chapter 2 DAS Technology
Building an ICT infrastructure  37
Direct Attached Storage  38
SCSI Protocol and Storage System  40
ATA and SATA  57
Disk Technology  63
SSD Introduction  80
Questions  84
Exam Preparation  85

Chapter 3 NAS Technology
Network Attached Storage  93
NAS Network Topology  95
What is CIFS?  99
What is NFS?  100
Ethernet Standard  104
Questions  115
Exam Preparation  116

Chapter 4 SAN Technology
The Ideal ICT Infrastructure  123
Storage Area Networks  125
Differences between DAS and SAN  128
Network Topology: Fibre Channel  137
IP SAN  157
iSCSI connection modes  160
Convergence of Fibre Channel and TCP/IP  166
Questions  168
Exam Preparation  169

Chapter 5 RAID Technology
Traditional RAID  177
Basic concepts and implementation modes of RAID  177
Data Organization modes of RAID  178
RAID technology and application  181
Working principle of RAID 0  182
Working principle of RAID 1  186
Working principle of RAID 4  190
Working principle of RAID 5  194
Overview of RAID 6  198
Working principle of RAID 6 P+Q  199
Working principle of RAID 6 DP  200
Hybrid RAID - RAID 10  202
Hybrid RAID - RAID 50  203
Comparison of common RAID levels  204
Application scenarios of RAID  205
RAID Data Protection  206
Questions  211
Exam Preparation  212

Chapter 6 Basics of Big Data
What is Big Data?  219
Advantages of Object Based Storage  226
Hadoop: Internet Big Data solution  229
Huawei OceanStor 9000  231
Erasure Code  234
OceanStor 9000 hardware structure  236
Recommended networking: Front and Back End 10Gb  237
Questions  238
Exam Preparation  239

Chapter 7 Backup and Recovery
What is a backup?  247
LAN-free backup topology  250
Components of a backup system  251
Deduplication  257
Contents of a backup strategy  261
Huawei Backup Products: VTL6900 family  268
Introduction to HDP3500E  271
Backup Software Architecture  273
Introduction to Disaster Recovery  274
Questions  280
Exam Preparation  281

Chapter 8 Basics of Cloud Computing
Concept of Cloud Computing  289
Cloud computing models  296
Categories of cloud computing  298
Value of cloud computing  305
Huawei FusionCloud solutions  306
Questions  311
Exam Preparation  312

Chapter 9 Huawei Storage Product Information and Licenses
RAID 2.0+ Evolution  319
RAID 2.0+ Logical objects  324
Huawei Storage Products  329
OceanStor 5300 V3  334
OceanStor 5500 V3 Specifications  338
OceanStor 5600 V3  339
OceanStor 5800 V3 Specifications  341
OceanStor 6800 V3  342
OceanStor 18000  346
OceanStor 18500 Specifications  347
OceanStor 18800 Specifications  348
OceanStor 18800F Specifications  349
I/O Modules for the OceanStor V3 series  350
OceanStor Dorado 2100 G2  353
OceanStor Dorado 5100  355
OceanStor VIS6600T  356
OceanStor 9000 Big Data Storage System  358
Cabling Diagrams  360
Huawei Licensed Software Features  365
Questions  368
Exam Preparation  369

Chapter 10 Huawei Storage Initial Setup and Configuration
Initial Setup  377
Launching the DeviceManager User Interface  381
Create Storage Pool  387
Create LUN  391
Create LUN Group  397
Create Host  400
Create Host Group  405
Create Port Group  408
Create Mapping View  410
OS Specific Steps  413
Disk Management  416
Questions  423
Exam Preparation  424

Chapter 11 Huawei Storage Firmware and Features
HyperSnap  431
Create Snapshot  433
SmartThin  448
SmartTier  450
HyperClone  463
HyperReplication: Synchronous mode  468
HyperReplication: Asynchronous mode  469
Firmware Updates  471
Questions  481
Exam Preparation  482


OHC1109101

Data Management Introduction

www.huawei.com
Introduction

In this, the first module of the course, the focus will be on data management. The scope of the entire course is the technology that Huawei provides to build an ICT infrastructure, but in this module we will look at the reasons why a company needs an ICT infrastructure in the first place. A company's primary goal is to provide a service to its customers, and for almost every company an ICT infrastructure is required to be able to do that. The module will discuss the data that is generated in the company to do its business and the way this data is kept.

Objectives

After completing this module, you will be able to:

 Describe the importance of data for an organization
 Understand the difference between structured and unstructured data
 Explain what Information Lifecycle Management is
 List a number of file formats in which digital data can be stored
 Understand the reasons for data retention
 Describe how data can be protected

Module Contents

1. Data Management
2. What is information?
3. What is Information Lifecycle Management?
4. File formats
5. Retention policies
6. Protecting the data




Data Management

SNIA definition: Data is the digital representation of anything in any form.

• A company uses/creates a large amount of data to run its business.
• Each employee needs the data to be present in a specific form or shape.
• Data should be available as long as the business needs it.
• When data is no longer needed, it must be or can be destroyed.

Today a company uses a wide range of resources to run its business well. Examples of resources are:

- telecommunication equipment: e.g. smartphones, faxes
- computer hardware: e.g. PCs, laptops, network switches, storage devices
- computer software: e.g. email programs, databases, graphical design software, web design software
- facilities: e.g. production plant, warehouse, showroom, offices, production tools
- personnel: e.g. production staff, logistics staff, accounting/finance, marketing, management, IT staff

Each of these resources has to be bought and implemented, and processes have to be defined to make sure every person in the business process has all the information he or she needs to do their work well. In a later module the physical solutions that can be used to achieve the goals of a business will be explained in more detail. In this module the actual data has the focus. So the question to be answered first is: What is data?


What is data?

A definition by the SNIA (Storage Networking Industry Association) defines data as: "The digital representation of anything in any form".

Although this definition seems very vague, it is accurate: if you look at an average company, it generates an enormous amount of data every day. All this data is there to keep the business running and keep it making a profit.

Imagine a company that does not use any electronic messaging system like email, that has no website to promote its products and no web shop where customers can order the products the company offers. Also imagine a company where everybody still creates handwritten documents when ordering parts and raw materials, and where all employees use traditional A0 size drawings for production purposes.

The reason that we do not use those traditional skills and tools anymore is the obvious advantages of having the information in a digital format. Digital information is easier to keep, modify and/or duplicate. It is also relatively easy to have multiple persons work with/on the same information.

What is Information?

Data equals Information?

Information will be extracted from the data that was gathered.

Information can:
• provide a company with marketing information and insight into customer behavior.
• help to run the business more effectively.
• help to determine risk factors.


The biggest challenge companies face today is how to interpret the tremendous amount of data that is collected on a daily or yearly basis. This matters because it is not the raw data itself that will benefit the company; what is important is the information extracted from all that data. If we just look at the numbers (the collected data), they do not show whether a company is making a profit or not. Only when we compare numbers (this week's and last week's sales) can we understand that sales have gone up or down. The information extracted from two weeks of sales data could then be: "we had a good week!"
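As a minimal, hypothetical illustration of that difference between data and information, the small Python sketch below compares two weeks of raw sales figures (the data) and turns them into a simple statement (the information); the numbers are invented for the example.

# Raw data: weekly sales totals (hypothetical figures).
last_week_sales = 118_400   # units sold last week
this_week_sales = 131_900   # units sold this week

# Information is extracted by comparing the raw numbers.
change = this_week_sales - last_week_sales
percent = change / last_week_sales * 100

if change > 0:
    print(f"We had a good week: sales are up {percent:.1f}%.")
else:
    print(f"Sales are down {abs(percent):.1f}% compared to last week.")
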

Depending on the information it can extract, a company can gain insight into the way the organization works and the way it collects data. Looking at the data might even lead to the conclusion that more data is required!

Information about sales that are going down can lead to a lot of changes for a company: in the way it works, what the products should be like, who the target customers are and how expensive the products are compared to those of other manufacturers.

So in most situations more data means a better chance of finding useful information in it. And there the problem occurs: we now generate so much data that we cannot handle it anymore.

Problems:

- The first problem is the capacity available to store the data on a digital medium.
- The second problem is to filter out the relevant data that provides the correct information.
- The third problem is how to make sure that the relevant data is available to all the employees that might need the data for their job.
- The fourth problem is how to make sure we do not lose vital data. Most data loss occurs because of human failure, so how do we prevent a single person from deleting information that is vital for the company?
- The fifth problem is to determine how long the data must be kept.

Some of these problems will be discussed in this module. In other modules of this course we will look at solutions for the other problems. In modules 9 and 10 we will discuss Huawei storage arrays. In modules 5 and 9 we will explain RAID, which is a way of protecting data against loss. Module 6 discusses Big Data.

In the rest of this module we will focus on the data itself, the format in which we want to keep it and the length of time we want to keep it.


Where is the data?

Most companies are situated at multiple sites, sometimes in multiple cities and even in multiple countries. But even in a smaller company the data is generated by all employees working from various offices. Each of these employees uses tools to generate the data. Some of these tools are very common, like e-mail programs and word processor software. Others will use highly specialized software designed for the company itself.

Companies that produce goods usually have some sort of graphical design software (Computer Aided Design). They often use logistics software to keep track of ordered goods (parts, materials, tools) and delivered products. Customer information must be kept, as well as financial information. All this data must be stored and kept safe.

A less ideal situation would be if that data were stored on the laptops and PCs of individual employees in their respective offices. There would be no easy way of protecting the data against human errors and/or hardware failures.

That is why in most organizations data is stored centrally in Main Equipment Rooms (MERs). Another term that is often used next to MER is data center. A MER should always have enough cooling capacity to keep the systems running at optimal temperatures and enough power capacity to support the power consumption of all equipment. In a well-equipped data center there are also facilities like fire-extinguishing installations and, for instance, a diesel generator that can power the entire data center when the external power to the data center fails.


However well-equipped the data center may be, there is always a need to protect the data itself. Hardware will fail, and sometimes disasters occur that ruin entire buildings. Examples of disasters are earthquakes, floods and fires. If something as dramatic as that happens, it is nice to know that the data is still intact and available.

So for the most business-critical data we want to have a copy stored outside of the original MER, either in another MER or in a (fireproof and waterproof) safe.

Who creates or uses the data?

(Slide: the departments that create or use data: Logistics, Human Resources, Marketing & Sales, Finance, and customers, e.g. through e-mail and purchase orders.)

A great part of the data created in an organization is structured data. That means that the data is directly applicable to specific employees and the format is directly usable to them. Examples of structured data are e-mails, databases and electronic forms.

It is the unstructured data that is confusing for many organizations, as it is not directly clear what the data represents and what the contribution of the data to the information is. Text documents, images and web pages are examples of unstructured data. Although the contents of a document can be relevant to an organization, this is not visible at first glance. Someone has to read the text and decide whether the contents are usable for the organization.


Statistics have shown that the data generated and stored within an average company mainly consists of static data. By that we mean that data is generated and stored, but hardly ever read again. About 70% of the stored data is static data, which might lead to the question: "Why do we store data and then never look at it again at a later stage?"

The answer to this is not very scientific. Most organizations cannot determine the value of data quickly and therefore take the decision to keep the data; they think that maybe later the data may prove to be useful.

The 30% of the data that is used or re-read must definitely be accessible to all employees. This is called file sharing or data sharing. It is an important task for a company to arrange this well.

Information and data

a. Every company needs information to be able to do business.
b. Information is extracted from both structured as well as unstructured data.
c. Almost all data is now generated in a digital form.
d. Data should be accessible for multiple employees.

Each employee should be able to get to the right data quickly to fulfill their tasks for the company. With data in a digital form we can use networking and file-sharing technologies to make that work. The process of determining who needs what information is a science in itself, called Information Analysis. It is not a topic for this course, but it is a vital step for a business to understand how data should flow within the organization.

If the analysis is incorrect, employees might be missing information for their part of the business process. That might lead to other people also missing information, and so on.


Information Life Cycle Management

a. What data is needed by every person in the organization?
b. What is the format in which data should be presented / kept?
c. How long should the data be kept?
d. If the data is no longer required, what needs to be done with the data?

What data each employee needs to do his or her job of course depends on the job. There are hundreds of categories of information: marketing data, sales information, production costs, cost of staff (wages), logistics costs, Research and Development.

In any case a company must make sure that everybody has the right information at the right time. Almost as important as having the information/data is the format in which you provide that information. If someone sends an email with vital information to a colleague, that other person needs a computer, an email program and an account to be able to receive and read it. If I receive a document in a format that my software application cannot read, this information is inaccessible to me!

The next important question to ask is how long the information is needed. Again this varies from one business to the next, but in many cases government regulations require companies to store and keep information for years. Sometimes information is needed for decades: for instance, a bridge-building company would have to keep diagrams and structural design information for as long as the bridge exists!

Assuming we know what data each employee needs, the next step would be to look at the format in which the data should be accessible.


1.1 Physical Parameters

Information Life Cycle Management

What is the format in which data should be available for the organization?

1. Physical parameters.
• Online information or paper based.
• Read only / eyes only / not reproducible.
• Version control.
 Environmental requirements when keeping hard copies.

The format in which data is stored needs to be thought of in the broadest sense of the word. Although much of the information nowadays is kept in digital form, there is still a lot of analog information. Examples of this analog information are pictures, paper documents that have a legal basis, faxes, and entire archives that were never digitized.

1.2 Digital Information Parameters


Information Life Cycle Management

What is the format in which data should be available for the organization?

2. Digital information parameters
• Which application is required to read/modify the data.
• Which file format for text (PDF; ODF; DOC).
• Which file format for images (JPG; TIFF; DWG; PNG, ...).
• Use lossless formats where possible.
 Are there standards to be met (ODF; CALS; BASEL)?

Digital documents also have their restrictions. We need the correct applications to open, read and/or modify the files. It is therefore important to choose a format for the file that allows all of the appropriate users to access the information in the files.

We typically identify three types of files:

1. Text documents.
These documents contain mainly characters (letters and/or numbers) and sometimes small images. Examples are word processor documents, spreadsheets and databases.

2. Bitmap image documents.
In a bitmap all relevant picture elements (or pixels) of the image are individually stored. Photos and scanned images are examples of bitmap files. Because thousands of individual pixels (dots in many colors that make up the image) have to be kept for each image, bitmap images take up a lot of storage capacity.

3. Vector based image documents.
The image is described as mathematical objects and the formulas are stored. Most Computer Aided Design software (e.g. AutoCAD, SolidWorks) stores its drawings as vector based files.

When selecting a format, consider using a file format that is not vendor specific and is therefore readable with any program. Several of these file formats exist and they are typically supported over many years. Examples: the TIFF format for bitmap images, IGES for vector based images and SGML for text files. For text documents there is also ODF (Open Document Format), which is becoming more popular.

Important when storing bitmap information is the effect of compression. Compression is mostly used to minimize the space required to store the information digitally, but one must realize that lossy compression methods discard information! Storing information in so-called lossless formats prevents this loss of information. TIFF is a lossless format, and so is PNG; the popular JPG format uses lossy compression and is therefore not lossless. The small sketch below illustrates the lossless case.
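As a small, hypothetical illustration of the difference, the Python sketch below uses the standard zlib module to compress a block of text losslessly: the decompressed bytes are identical to the original, which is exactly what a lossless image or text format guarantees (lossy formats such as JPG give up that guarantee in exchange for smaller files). The sample text is invented for the example.

import zlib

# Some repetitive sample data (invented for the example).
original = b"Quarterly report: sales figures, invoices and purchase orders. " * 100

compressed = zlib.compress(original, 9)   # lossless compression, highest level
restored = zlib.decompress(compressed)    # exact reconstruction of the original bytes

print(len(original), "bytes before,", len(compressed), "bytes after compression")
print("Identical after decompression:", restored == original)   # True: no information lost
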
Note:
CALS and BASEL are other examples of standards that are very specific to a branch of industry.
CALS (short for Computer Aided Logistics Support) is used by the United States army to make sure that every part of the army can get to all relevant information. The impact of CALS is huge for every company that wants to do business with any part of the army. Even a bakery that wants to deliver bread to the army canteen needs to comply with the CALS standard.


That means the bakery's purchase orders, price lists, etc. must be created in a format dictated by the army's CALS standard. This would also apply to a manufacturer of rotors for an army helicopter: all drawings, test reports, etc. have to be CALS approved.

BASEL is a standard for organizations in the financial sector. BASEL has strict rules for reports, accounting information and all other financial matters.
1.3 Hardware

Information Life Cycle Management

What is the format in which data should be available for the organization?

3. Hardware
• Is access to the information granted/allowed?
 Should the data be kept intact and therefore unable to be changed?
 Can multiple employees access the same data simultaneously?

All documentation has its relevancy, and with it comes the need to keep it for a certain period. Some documents contain sensitive information and should be stored safely. Some information is eyes-only and, for example, should not be duplicated. In those situations special paper can be used that prevents the paper from being copied, as it makes the text on the copy unreadable. With paper there is also the problem of version control. Version control means that you want to keep different versions of a document because changes have been made to the original.

For paper, the concept of version control means that multiple versions of the document are stored in the archives. Paper nowadays is pretty reliable, but older types of paper have the tendency to become brittle. Also, the ink used can fade or damage the paper it is on. It is a tremendously expensive job to restore and preserve old documents. Nowadays we digitize many of those documents and store the originals in conditioned rooms. We can then inspect the scanned documents and have the added options of zooming in on details, modifying the image file and sharing the documents with other users by simply copying the files.


If access to information has to be controlled, methods can be implemented where documents are stored in vaults. Archives must then be in enclosed spaces (which is mostly the case anyway if conditioned rooms are required) and guarded.

For digital information we can use physical blockades and software blockades. By creating multiple separate physical networks we can regulate access to data: only devices inside the physical network are able to interconnect. The same kind of separation can also be done via software. Then we use the technical possibilities of the ICT infrastructure to block access to specific sections of the network. This can be done with techniques like firewalls, security gateways, access control lists and, in switches, so-called VLANs (virtual LANs).

In some situations there is a requirement for data integrity. This implies that information is stored as it is now and there is no way to change the information later. For legal documents and medical reports this is sometimes required to prevent illegal changes being made. For paper documents this is done by storing the document in a container that is tamper proof. Digital information can be stored on so-called WORM media, where WORM is short for Write Once Read Many. This technology allows data to be written once and not changed afterwards. Reading the data can be done as often as needed.

To give multiple persons access to information we can create multiple copies. Having multiple persons modify the same paper document requires them to access and modify the document sequentially. Digitally allowing multiple applications to open and modify the same document files requires technologies such as cluster technologies. With cluster technologies multiple hosts and their applications access a single file simultaneously. Each of the users is allowed to change the file contents, and all changes are stored correctly in the document file afterwards.


1.4 The retention periods

Information Life Cycle Management

What are the retention periods?
• Based on the business requirements of the organization itself.
• Based on the general rules for your type of business.
• Based on the rules that governments in specific countries dictate.
• How to arrange for digital information to be stored for many years.

It is not only important to have the information; in most cases you must also keep it for a certain period. All businesses keep orders, invoices, paycheck information, bills, etc. for many years in case they need to reproduce the information for their own business processes. Warranty information or service agreements for production tools are kept as long as the tool will be used.

Sometimes the type of business you are in also imposes external rules. For example, many Western European countries require that medical information on patients be stored for more than fifteen years. This allows doctors in hospitals to "look back" at a patient's history and can help them plan a better treatment for that patient. If your company provides any medical services, this is a requirement for your organization. On top of that, government rules might force you to keep the information for even longer than your organization itself needs. Business information like invoices, employee contracts, etc. should typically be kept for seven or more years.

The fact is that much information is stored digitally, and the question now is: "How long will the digitally stored data survive?"

If we store data on magnetic media (we may remember video recorders and cassette players), the tape gets demagnetized after a few years. Even data stored on CD or DVD is not stored indefinitely; we have heard of situations where CDs became unreadable after some time. We have to find a way to store the data more reliably, or we have to make sure we regularly move the data to a new medium (for example, make a copy of a tape every two years).


1.5 How to remove obsolete information?

Information Life Cycle Management

How to remove obsolete information?
 Who is responsible for data?
□ SOX; JSOX; EuroSOX.
• Physically destroying information.
□ Shredding.
□ Burning.
• Digitally destroying information.
□ Wipe the disk in the Operating System.
□ Secure wipe.
□ Disk shredding.

After the retention period, information is sometimes no longer useful, and sometimes an organization is not supposed to keep the information within the organization. The question is how to get rid of information we do not need anymore.

First of all there are rules about keeping data safe against misuse. The SOX (Sarbanes-Oxley Act) regulations state that a company is responsible for storing and removing the information it generates or uses. This also includes the responsibility for an organization to make sure that nobody can make copies of important documents (or files) and take them outside of the organization.

If the information is stored as paper archives, shredding might be a definitive solution, and burning the information might also be applicable.

Digital information is not so easily discarded. Traditional methods like formatting a disk are not secure enough, as they might leave traces of data recoverable. For those situations there is specialized software that erases data from a medium by writing random data over the old information (multiple times if needed). A minimal sketch of this overwrite principle is shown below.
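The following is a minimal, illustrative Python sketch of that overwrite idea; the file name and the number of passes are assumptions for the example, and real secure-erase tools do considerably more (verification, file system metadata, handling of spare sectors on SSDs, and so on).

import os

def overwrite_file(path: str, passes: int = 3) -> None:
    """Overwrite a file's contents with random data several times, then delete it.

    Illustrative only: real secure-wipe tools also deal with file system
    journaling and SSD wear levelling, which this sketch ignores.
    """
    size = os.path.getsize(path)
    for _ in range(passes):
        with open(path, "r+b") as f:          # open for in-place binary writing
            f.write(os.urandom(size))         # replace old contents with random bytes
            f.flush()
            os.fsync(f.fileno())              # push the random data to the medium
    os.remove(path)                           # finally remove the (now random) file

# Example usage with a hypothetical file name:
# overwrite_file("obsolete_customer_list.db", passes=3)
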


Mo

For many government based organizations wiping data from a disk requires them to physically
shred the disks so nobody can reuse the media ever again.

HCNA-storage V3 | OHC1109101 Data Management Introduction Page | 23


The Value of Data

Business Continuity

Definition according to the SNIA organization:
Processes and/or procedures for ensuring continued business operations.

Applies to physical and operational procedures.
  Physical:
    Buildings: machinery, tools, products.
    Personnel: production staff, management staff, financial staff, etc.
  Operational procedures:
    Workflows.
    Planning and delivery of production.
    Human Resource Management, etc.

The term Business Continuity is almost ten years old now. It was around that time that companies started thinking about situations that could impact their business processes. As so many times before, it took some serious accidents and disasters for companies to become aware of the risks they run.

Recent examples of the impact of a disaster are:

1. The tsunami that hit Thailand. Apart from the human lives that were lost and the houses that were destroyed, there were other consequences. One of the buildings that was hit was a manufacturing plant for specific parts for hard disks. In that plant millions of these parts were produced per year. Suddenly this plant produced no more parts, so the companies that assemble hard disks could not produce any hard disks anymore. For the manufacturers of computers, laptops and storage devices it meant they could not get the hard disks from that plant. Hard disks became scarce and production slowed down at the plants of the computer and laptop manufacturers.

2. In 2011 a volcano on Iceland erupted. Unfortunately the wind was blowing towards the European continent at that time. The dust particles that were pushed into the air were a problem for airplanes: if the dust got into the jet engines, they might be damaged or even destroyed. So thousands of planes had to be kept on the ground. This situation went on for days, and in that time almost all air traffic in Northern Europe was cancelled. For companies that depend on airplanes for travel or transport this is a very bad situation.

3. In 2007 an Apache helicopter of the Dutch army crashed into a high-voltage network system in the Netherlands. The cables that are used to transport 150,000 V were disrupted, leaving 50,000 households without electrical power for three days. Businesses were impacted as well: supermarkets had to close because the lights would not work, nor would the refrigerators or the cash registers. Security alarms were not working either.

These are just a few examples of a problem at one location that leads to other companies having problems with their business. Business continuity makes companies think about these types of problems. But the question is: "Can you prevent these accidents from happening, and what could you do if one actually happens?"

The Value of Data

The general manager decides what the data is worth.

Recovery Point Objective (RPO):
  The amount of data that may be lost without consequences for the organization.
Restore Time Objective (RTO):
  The time allowed to restore the data to the last saved situation.
Cost Of Downtime (COD):
  The total costs involved for every hour the data is not available.

Of course it is impossible to prevent disasters like earthquakes or volcanic eruptions from happening, but smaller-scale problems like the power outage can be addressed.

The question then is how much the solution costs. For instance: is it cost effective for everyone to have their own diesel generator so they can still watch TV if the main power grid fails? The answer is probably no, but for a supermarket or a small company that might be a solution.


To determine whether a solution can be implemented cost-effectively, we have to ask ourselves the following questions:

1. What is the value of your data?

Not all data is equally important. Assign a quality grade to all data and try to protect the most relevant or costly data. For many companies e-mails are costly data, as their business is driven by e-mails. Purchase orders, online transactions and websites are all vital information that should be available 100% of the time. So we must find a way to keep that data safe.

2. How old can the data be?

In case of a problem we have safe copies of the vital data. But this data is not the latest data; it is the data as it was at the time the safety copy was made. It is in fact "old" data.

For that we have to explain the concept of RPO, or Recovery Point Objective. It describes how old the recovered data can be before it becomes useless. In a huge online web shop like ALIBABA hundreds of thousands of products are sold every day. That translates into thousands of items per hour. If the ICT administrator makes safe copies every four hours, his RPO is four hours. In case of a problem with the current data, the only thing he has is the saved data from up to four hours ago.

If losing four hours of incoming purchase orders represents $100,000, it means that each problem will cost the owner of ALIBABA at least $100,000. It is the owner who decides whether that $100,000 is a big problem (maybe bankruptcy) or a minor setback in the turnover of the company. So the RPO basically means: how much data can my company lose and still not go bankrupt?
A second aspect of making safe copies is the time needed before we can use the saved data again. If an ICT administrator makes safe copies every four hours, his general manager might be happy. However, if a problem occurs and it takes the ICT administrator twelve hours to restore the four-hour-old data, that might still lead to a big problem.

The RTO, or Restore Time Objective, is therefore also a very important factor in the business continuity plan. However, setting up a plan with an excellent RPO and RTO only makes sense if the cost of not having the data outweighs the cost of that plan!

That is why the last, and maybe most important, business continuity factor is the COD, or Cost Of Downtime: how much money per hour is lost if I cannot access my business-critical data?

It is typically the general manager of an organization who can determine that. He knows the turnover per day. He knows the cost of all employees. He can calculate, using last week's information, how much money could have been made in the time the ICT administrator is restoring old data!
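As a minimal, hypothetical illustration of how RPO, RTO and COD combine, the sketch below estimates the cost of one incident from an assumed RPO, RTO, hourly revenue and hourly staff cost; all the figures are invented for the example (the RPO value mirrors the four-hour, $100,000 case from the text).

# Hypothetical business-continuity figures (all numbers are invented).
rpo_hours = 4                # data created in the last 4 hours is lost (RPO)
rto_hours = 12               # it takes 12 hours before restored data is usable again (RTO)
revenue_per_hour = 25_000    # turnover normally generated per hour
staff_cost_per_hour = 3_000  # wages paid while employees cannot work

# Cost of Downtime (COD): money lost for every hour the data is not available.
cod_per_hour = revenue_per_hour + staff_cost_per_hour

# Rough cost of a single incident: lost data (RPO window) plus downtime (RTO window).
lost_data_cost = rpo_hours * revenue_per_hour
downtime_cost = rto_hours * cod_per_hour
total_incident_cost = lost_data_cost + downtime_cost

print(f"COD: ${cod_per_hour:,} per hour")
print(f"Estimated cost of one incident: ${total_incident_cost:,}")

A general manager can compare such an estimate with the price of preventive measures (more frequent backups, a standby generator, a second MER) to decide whether the investment in business continuity is justified.
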

n
With the Cost of Downtime as a calculated factor a company may decide to spend money to

e
prevent downtime happening or in other words have the business continuity guaranteed

m/
sufficiently.

co
i.
In the remaining modules of this course we will look at the various technical solutions (software

we
and hardware) that can be used to build an ICT infrastructure that is providing business continuity.

ua
.h
The next images are an introduction of some general components one might see in the technical

ng
solution for the various ICT infrastructures.

Components of an ICT Infrastructure
Used terminology in ICT infrastructures:

 Host: Any computer system to which disks, disk subsystems or file servers are attached and accessible for data storage and I/O.
 Switch: A network infrastructure component to which multiple ports attach.
 Storage Array: A collection of disks or tapes from one or more commonly accessible storage subsystems, combined with a body of control software.
 Network: An interconnect that enables communication among a collection of attached nodes.

The term host (or server) is used to indicate a higher-specification computer that runs software programs that are vital to the company. A computer (desktop or laptop) is a simpler version of the host. Hosts are built to run twenty-four hours a day and for many years.

A switch is an electronic component that is used to interconnect devices. Switches have many ports into which cables can be plugged to connect multiple devices to the same switch.

Storage array is a term generally used for a device that provides capacity to store digital data. Storage arrays can be the size of a server or much bigger, as some storage arrays can hold thousands of hard disks.

The cables and switches that are used to interconnect hosts and storage arrays together form the network.

Components Front View

(Slide: front view of the components: two hosts, two switches forming the switch network, and two storage arrays.)

Components Rear View

(Slide: rear view of the same components: hosts, switches forming the switch network, and storage arrays.)


Questions

1. Name four important steps in Information Lifecycle Management.
2. What is the main reason for data loss in most companies?
3. What methods can be used to protect data?
4. What is the difference between structured and unstructured data?
5. Name three file formats in which we can store images. Describe the differences between them.

Answers

1. Determine what data is needed. Decide who can access the data. Determine how long to keep the data. Determine what must be done with the data that is not needed anymore.
2. Human errors (80%).
3. Make copies of the data; restrict access to the data to qualified employees; for very delicate information use a storage medium with WORM specifications.
4. Unstructured data has no known structure to it. Structured data has a well-defined database structure.
5. TIFF (bitmap image, lossless, in use for years); JPG (bitmap image, popular because of its compression, not lossless); DWG (vector based, AutoCAD format, lossless).


Exam Preparation

1. E-mails are examples of unstructured data.
   This statement is:  True or  False.

2. Statement 1: Files should be stored in formats that are supported by many independent software builders.
   Statement 2: The retention period for data is determined by government-based rules.
   a. Statement 1 is true; Statement 2 is true.
   b. Statement 1 is true; Statement 2 is false.
   c. Statement 1 is false; Statement 2 is true.
   d. Statement 1 is false; Statement 2 is false.

Answers:

1. False. E-mails are structured data.
2. B (Statement 2 is false). The combination of government rules and the requirements of your own organization determines how long data should be kept.


Thank you

www.huawei.com

OHC1109102

DAS Technology

www.huawei.com
Introduction
In the first module you learned that a great amount of digitally generated data is used to keep the average company running its business. All equipment (hardware and software) that is needed to have people do their job well is referred to as the ICT infrastructure. In this module you will learn about the first of three possible technical solutions a company can use to build its ICT infrastructure.

In a Direct Attached Storage (or DAS) solution we see a compact setup in which the server technology, the interconnect devices and the storage device are all connected together, and the distance between the components is short, typically less than 25 meters.

DAS was the usual way to build ICT infrastructures some 15 to 20 years ago. This module is therefore also a perfect place to explain the SCSI technology that was used then (and often still today) to transport user data from the host (and the application it runs) to the actual disk systems that store the information.

Objectives

After this module you will be able to:

 Describe the characteristics of a DAS solution and mention the advantages of DAS.
 Explain what the major disadvantages of DAS are.
 Describe the SCSI technology and identify the characteristics of a bus structure; explain the way electrical signals are transported over a SCSI bus.
 Describe the difference between parallel and serial SCSI technology.
 Explain how traditional hard disk technology works.
 Understand the workings of Solid State Disks.

Module Contents

 Building an ICT Infrastructure using DAS.
 DAS characteristics.
 SCSI technology.
   o Parallel SCSI.
   o Serial SCSI.
 Hard disk technology.
   o Mechanics.
   o Disk drive characteristics.
   o Disk drive performance.
 Solid State Technology.




Building an ICT infrastructure

An ICT infrastructure is the physical solution that allows users to access the digital information they need.

Components of an ICT infrastructure include:
• Personal computers; laptops.
• Smartphones / VoIP telephones.
• Software like operating systems and business applications.
• Devices to make secure backups of data that has to be kept.
• Network devices to interconnect the various components with each other.
• Storage devices that actually store the information and also allow a user to quickly access the data when necessary.

Three major infrastructural designs can be used: DAS, NAS and SAN.

In this module we will look at the possible solutions a company can use to build its ICT infrastructure. With an ICT infrastructure we mean all equipment (hardware, networks and software) that can be used to create, store and distribute all relevant information for a company.

In the last decades the role of digital information has grown, and nowadays a company cannot do business without emails, websites and other applications. This results in the need for a company to generate the digital information, store it safely and have the information available for every employee that needs it to do their work well.

Examples of components of an ICT infrastructure include personal computers, laptops and mobile phones, but also network switches, backup devices, digital scanners and, of course, the storage systems on which the digital information is stored.

Three methods can be used to physically build the ICT infrastructure. In this module we will have a closer look at the first (and oldest) method: Direct Attached Storage. In the next modules two alternative methods will be discussed: Network Attached Storage (NAS) and Storage Area Network (SAN).


Direct Attached Storage

An ICT infrastructure is the physical solution that allows users to access the required information they need.

The first ICT infrastructures were based on a very simple concept we now refer to as Direct Attached Storage.

DAS definition: One or more dedicated storage devices connected to one or more servers.

Disk technologies used: SCSI / SATA / SAS.

(Slide: a host directly connected to its disk storage.)

In a DAS environment, every host is responsible for the data it generates. So the information generated by the user with his application is stored locally on the same host. For that purpose the host needs physical storage capacity in the server to store the data, but storage capacity is also needed to store the operating system and application software. The actual storage devices used in each server can be internal and/or external. Internal storage mostly means that the server has built-in hard disks that hold both the operating software and the user data. External storage means that, in most cases, the capacity of the internal disks was not enough; when more capacity is needed, an extra chassis holding hard disks can be connected to the server via a SCSI cable.

Because all data is stored locally, the host administrator was also responsible for keeping the data secure. In case of a technical problem, or when a user deletes data, the host administrator should be able to recover the lost data. So in practice every host was fitted with a local backup device, and backup software was installed on the host.

Because no centralization was possible, we also describe DAS infrastructures as "Islands of Storage". Sharing information between DAS infrastructures was, and is, virtually impossible.

The method used to connect a host to its physical disks (both the internal disks in the host itself and a connection to an external disk storage unit) in the first generation of DAS was based on SCSI technology. In SCSI (Small Computer Systems Interface) there are strict regulations on the cables, connectors and electrical signals used to transmit the user data between host and physical disk.


Direct Attached Storage

• Initially based on parallel SCSI technology.
 Small Computer System Interface is an intelligent system for exchanging data between SCSI devices.
• Limited in:
  - Number of devices (max = 16).
  - Cable length (up to 25 meters).
  - Performance (320 MB/s).
• SCSI bus architecture  congestion problems.
• A SCSI block represents 512 bytes of data.

The technology used to connect the host to the storage device (which could be a hard disk, a CD-ROM player or a backup unit) was parallel SCSI. The technology was developed in the 1970s and was in use until the beginning of this millennium.

In SCSI we use the term block to indicate the smallest amount of data that can be transported. The block size for SCSI is 512 bytes. If a file of 2 MB is stored on a SCSI based device, many individual blocks are used to represent the file, as the short calculation below illustrates.
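A minimal sketch of that block arithmetic in Python; the 2 MB file size is the example from the text, and reading "MB" as 2 * 1024 * 1024 bytes is an assumption made explicit in the code.

BLOCK_SIZE = 512  # bytes per SCSI block

def blocks_needed(file_size_bytes: int, block_size: int = BLOCK_SIZE) -> int:
    """Number of 512-byte blocks needed to store a file (rounded up)."""
    return -(-file_size_bytes // block_size)   # ceiling division

# The 2 MB file from the text, interpreted as 2 * 1024 * 1024 bytes:
file_size = 2 * 1024 * 1024
print(blocks_needed(file_size))   # 4096 blocks of 512 bytes each
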


Let us look at the traditional parallel SCSI technology first.

Based on a so-called bus system we can connect up to 16 devices to a SCSI bus and have them communicate amongst themselves. As the technology was improved over the years from the original SCSI standard into Ultra 320 SCSI, the throughput increased from 5 MB/s to 320 MB/s. At the same time, however, the maximum allowed cable length decreased because of technical limitations. At best the cable length in Ultra 320 SCSI is 25 meters, but in practice a cable is hardly ever longer than 12 meters.


SCSI Protocol and Storage System

 Small Computer System Interface (SCSI) is an interface technology specifically developed for midrange computers and used for connecting hosts and peripheral devices.
 The SCSI protocol is the basic protocol for communication between hosts and storage arrays.
 DAS uses the SCSI protocol to interconnect hosts and storage arrays.

(Diagram: an HBA with SCSI ID 7 on a shared data/address and control bus, connected to SCSI arrays with device IDs 0 and 5, each presenting one or more LUNs.)
The controller sends a signal requesting to use the bus to the bus processor. After this request is accepted, the source device sends data. In this way, the bus is occupied by the source device and the other devices connected to this bus cannot use the bus. SCSI is an interface used to connect hosts and peripheral devices including disk drives, tape drives, CD-ROM drives, and scanners. Communication is handled according to a protocol and consists of user data, commands and status information. Communication is started by the initiator and is directed to the target.

SCSI protocol

Host-to-disk communication goes from the Initiator to a Target.

(Diagram: the initiator in the host and the target in the disk communicate through three matching layers: the SCSI Application Layer (commands/status), the SCSI Transport Protocol Layer (command/data) and the SCSI Interconnect Layer (the bus connection that carries the I/O requests).)
Parallel SCSI Technology

• ANSI standard (describes electrical bus interface and command set).
• Bus for attachment of computer devices.

(Diagram: a SCSI host adapter and devices A, B, C and D attached to a multidrop bus, with a termination at the end of the bus.)
In the SCSI bus architecture any of the connected devices can communicate with any other device. To achieve that, a signal is transmitted from the device and eventually ends up on the multidrop bus. From there it should be forwarded to the required second device. There are a few physical and logistical problems in this way of communicating.

Two of these problems are:

- How to make sure that multiple simultaneous users of the bus do not interfere with one another?
- How to arrange things so that data actually arrives at the right device on the bus?

These problems have of course been solved and the solutions will be explained in the upcoming sections.
SCSI Principles

• SCSI is an intelligent protocol that allows devices to communicate without the “help” of the CPU or SCSI adapter card.
• Transfer protocols:
  - asynchronous.
  - synchronous.
• Multiplexed bus for transfer of commands, data and status information.

(Diagram: over time the bus alternates between SCSI commands and status information, sent asynchronously, and DATA, sent synchronously.)
A great advantage of SCSI is the intelligence of the system. If multiple devices are connected to the same bus (parallel communication path) they can communicate with each other independently. That means that two devices that want to communicate do not need the approval of, for instance, a CPU in the host or a special card.

The communication itself comes in two different types: synchronous and asynchronous.

In asynchronous transmission there is no predefined timeframe between two sent data packets. The protocol uses extra information, sent before the actual data, so the receiving side becomes aware that packets will be arriving soon. Examples of information that is sent asynchronously: status information (i.e. bus free checks) or commands that initiate a new connection. Commands and status information are not generated in a fixed pattern, so the time between the transmissions is variable. This is very specific to asynchronous communication.

Synchronous communication requires a clock circuit to transmit the data packets at specific intervals. In practice two devices will first communicate asynchronously to find out if the other device is ready to receive new information. After this initial connection is set up, the actual data is sent using the fastest method possible, which is synchronous communication. In synchronous mode data packets are sent in quick succession with a fixed time between two packets. The receiving devices know this fixed time interval and can accept and process the packets quickly.
Time Multiplexing is the term used to describe a system where a physical cable is shared by sequentially allocating the use of the cable to different devices. In this case the data sent across the cable at a certain moment is a user data packet, and a moment later it can be an address or status information. It means that inside a SCSI cable there are no separate wires for addressing the devices and separate wires for sending user data across.
Parallel SCSI Technology

(Diagram: a SCSI host bus adapter and devices A, B, C and D on the bus.)

(1) Device B transmits a signal headed for device D.
Multiplexing helps to limit the number of cables/wires used to transmit SCSI based information. In a typical SCSI cable you will find only around 20 wires. Without the multiplexing technique the number of wires needed would be at least twice as many.

To communicate across a so-called bus, signals are transmitted from a device (here device B) and enter the bus at the point where the cable from device B connects to the SCSI bus cable. Next let us look at what happens when device B wants to send a packet of information to device D.

Electrical signals move across a copper wire in all directions, and at each intersection the signal splits up and continues (as a slightly weaker signal) across all wires.

So: as the signal arrives at the intersection of the cable from device B and the bus, the signal is split up into two identical signals that move on in two different directions. The signal will split at the intersection to device A as well as at the intersection to device C. But it will also continue towards the intersection with device D.
Parallel SCSI Technology

(Diagram: the SCSI host adapter and devices A, B, C and D on the bus.)

(1) Device B transmits a signal headed for device D.
(2) The signal will be split up at the intersection and moves in two directions!
The signal travels onwards towards the intersection of the bus and the cable from Device D. There again it will split up into two identical copies.

Parallel SCSI Technology

(Diagram: the SCSI host adapter and devices A, B, C and D on the bus.)

(1) Device B transmits a signal headed for device D.
(2) The signal will be split up at the intersection and moves in two directions!
(3) The signal will split again: a signal goes towards Device D but another signal will go on!

One copy moves towards device D, just as we wanted. The second copy continues until it reaches the physical end of the cable.
Device D (just as devices A and C) receives the signal. Inside the message the devices receive, there is information that makes it clear that a packet is meant for one specific device only. So devices A and C will see that the packet is not for them and ignore the information. Of course device D recognizes that the information is meant for it and accepts the new packet.

Parallel SCSI Technology

(Diagram: the SCSI host adapter and devices A, B, C and D on the bus, with a terminator at the cable end.)

(1) Device B transmits a signal headed for device D.
(2) The signal will be split up at the intersection and moves in two directions!
(3) The signal will split again: a signal goes towards Device D but another signal will go on!
(4) The terminator at the cable end will absorb the signal so it cannot be reflected and cause problems.
So device D gets the information it needs, but we still have a signal that continues to travel across the bus towards the end of the physical cable. At the end of the cable there are a few possibilities: the signal could be reflected, absorbed or distorted. In any case we do not want any signals to be reflected, as the reflection would interfere with other signals that move over the bus.

To avoid the signal being reflected back onto the bus in the opposite direction, a so-called terminator is used to absorb the signal. A terminator looks like a very simple plug that is connected to the end of the cable, but it is a very important part of the success of any SCSI bus communication. A SCSI bus without a terminator will not be able to transmit any packets of information successfully.

Note:
On the first slide of the SCSI bus we saw that a signal also travels towards Device A (and a copy continues to the SCSI adapter). The signal that travels to the SCSI adapter has to be terminated too in order to prevent reflections there.
Parallel SCSI Specifications

 Maximum of 16 devices on the SCSI bus.
 Bandwidth limitation of 320 MB/s.
 Cable length limitation of 25 m in HVD and 12½ m in LVD.
 Terminators are used at the end of the bus.

(Diagram: the icons used to mark Single Ended, LVD, LVD/SE and HVD devices.)
The number of devices connected to the SCSI bus (including the adapter) was 8 in the very first SCSI standards. Later the number was increased to 16.

Physical problems (skewing and interference) have made it almost impossible to keep on improving the bandwidth of SCSI beyond 320 MB/s. Different technologies like Single Ended, Low Voltage Differential and High Voltage Differential have been used, but the cable length could not be more than 25 meters at best.

As the technologies are different it is important not to mix them: Single Ended devices cannot be connected to a SCSI bus that is also connected to High Voltage Differential devices!

Each technology is indicated with an icon. There is one combination allowed: Single Ended and Low Voltage Differential can work together because they use the same signal voltage level, so the components will not be damaged. However, the Single Ended technology has much lower specifications, and whenever SE and LVD devices are mixed the lowest specifications will be used. This of course means that the LVD device will not perform optimally.
Electrical Specifications

 Single Ended.
  Uses a reference (ground) to determine whether a received signal is a logical “1” or a “0”.
  Operates at a level of 3.3 Volt.
 Low Voltage Differential and High Voltage Differential.
  Use a clever trick to eliminate the effect of external distortions.
  Operate at 3.3 Volts (LVD) or 5 Volts (HVD).
Inside the definition of the SCSI standard there are rules and regulations about how the SCSI protocol works with the sending of data and the way to make sure the right device receives the data it needs. But there are more things defined in the SCSI standard, and one of them is the electrical properties of the devices. Because all devices are connected to the bus, the requirements are such that signals should not influence other signals or other devices. The first thing was to agree on a specific voltage level for a signal. In SCSI the data is transmitted as digital information. In digital information the only information is 0 or 1.

The way to make clear that a logical 1 was sent is by defining a voltage level to represent it. The sending device creates a pulse with a given voltage level. The receiving device can detect the signal as the electronics detect a signal with a certain voltage level. When the voltage level is equal to what was defined as a logical 1, the message will be interpreted as a valid signal 1. Anything less than that voltage level is not “accepted” as a valid signal.

Electronics in the 1970’s and 1980’s commonly used the 5 Volt voltage level. Later the levels were lowered to 3.3 Volts and nowadays 1.5 Volts is common. Although the difference between 5 V and 3.3 V seems very small, for the production of the electronic components it is a big advantage when the voltage level is lower.

There are two ways to transmit signals over a copper wire: asymmetrical (or Single Ended) and symmetrical (Differential Signaling). In the next section the difference will be explained.
Single Ended SCSI

 Cable lengths are from 6 m (Fast SCSI) down to 1.5 m for the last standard that supported SE (Wide Ultra SCSI).

(Diagram: the original signal is a 3.3 Volt pulse train measured against the 0 Volt ground/reference; an external signal picked up along the cable can make a transmitted “0” be read as a “1” at the receiving end.)
With Single Ended, the signal is transported to the other device using a single cable and for reference purposes a ground signal (equal to 0 Volts) is used. At the receiving end the signal is measured again with reference to the ground signal. If somewhere in the cable an external signal is picked up (crosstalk, external noise), the receiving end might interpret this distortion as a legitimate signal and read a logical “1” where the original signal sent a logical “0”.

As performance got higher and higher it became more difficult to distinguish between real data and distortions. Single Ended technology was basically used until the Wide Ultra SCSI standard was defined.

In the end Single Ended cables could not be longer than 1.5 meters. The reason is that physics creates problems for Single Ended systems with high speed communication. The biggest problem is that with higher speeds the signals that need to be transported cannot easily be distinguished from externally created distorting signals. The next problem was that it became more and more difficult to protect the physical cable against the influence of external signals. It is obvious that when the cable is very long the chance that it picks up distortion signals is higher than with short cables. That is basically the reason that Single Ended cables had to be so short that the distance was less than a couple of meters. And that of course is not useful when building an ICT infrastructure.
Differential Signaling

 Two signals are transmitted:
  a) the original signal (between 0 Volt and 3.3 Volt).
  b) the inverted original signal (between 0 Volt and -3.3 Volt).
 At the receiving end the inverted signal is subtracted from the original signal.

(Diagram: the receiver computes a - b, so a 3.3 Volt pulse on a and a -3.3 Volt pulse on b add up to a 6.6 Volt output pulse.)
With differential signaling the effect of external distortion can be eliminated, because the original signal will be detected in an amplified state (3.3 V minus -3.3 V equals a 6.6 V output signal). However the distortion, here the red pulse in the cable that transports the original signal, will not be amplified, so it is possible to state that all signals of less than 5 Volts are logical “0”s or distortions which we can now ignore.

With differential signalling it becomes easier to determine whether a received signal is a valid 1 or a distortion. That is why with differential signalling cable lengths of 25 meters were possible.
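This cancellation can be sketched as follows (an illustrative model only, not part of the original slide set): noise that couples equally into both wires disappears when the receiver subtracts the two signals, while the wanted signal is doubled.

# Differential receiver: output = a - b, where b is the inverted copy of the signal.
# Noise picked up along the cable couples (almost) equally into both wires.
signal = [0, 1, 0, 1, 1, 0]                 # logical bits to transmit
noise  = [0.0, 0.0, 0.9, 0.0, 0.0, 0.9]     # distortion pulses picked up on the cable (Volts)

V = 3.3                                     # signalling voltage level
a = [bit * V + n for bit, n in zip(signal, noise)]    # original signal plus noise
b = [-bit * V + n for bit, n in zip(signal, noise)]   # inverted signal plus the same noise

received = [x - y for x, y in zip(a, b)]    # differential receiver output
# A transmitted "1" arrives as 6.6 V, a "0" as 0 V; the common-mode noise cancels out.
bits = [1 if v > 5.0 else 0 for v in received]
print(bits == signal)                       # True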

However: there are two versions of differential signalling, called HVD and LVD. In HVD or High Voltage Differential the voltage level used is the traditional 5 Volts. With LVD or Low Voltage Differential the voltage level is 3.3 Volts. Just as with other electronic components, the cost of producing 5 Volt components is higher than that of 3.3 Volt components. So over the years the HVD devices became less popular and LVD devices became more or less the standard.

It is obvious that on a bus only signals of the same voltage levels can be transmitted. It is therefore impossible to connect HVD devices to a bus that is also connected to LVD (or Single Ended) devices. The difference in voltage levels will probably damage the electronics in the LVD and SE devices!
SCSI Bus Communication

 While one device uses the bus, other devices may be active performing internal activities.
 Devices only connect to the bus for data transfer or status reports.
 Devices may disconnect from the bus and reconnect if needed.
 Connections take place between Initiator and Target.

(Diagram: cars waiting on a single lane road as an analogy for devices with SCSI IDs such as 3, 4, 5, 7, 9 and 13 that want to communicate between initiator and target device.)
One of the problems with bus communication is to make sure that multiple devices do not send data at the same time. When multiple devices send out signals at the same time, congestion will occur. Congestion means that the signals will clash together and the result is that both signals will be distorted. At that point no transmission is successful and the devices must try again. When many devices create a lot of congestion, the bus will appear to be very slow at sending data.

A system had to be found to make sure just one device at a time is sending signals. In the image above it is represented using cars that want to travel across a single lane road.

To achieve this a waiting system with priorities has been designed. Each device gets a priority indicated with the so-called SCSI ID. The SCSI ID determines how long a device should wait after it has detected that the line was busy.

So before a device can transmit a signal it must find out whether the bus is in use by another device. When a device detects a busy bus (for instance because another device is transmitting data) it waits a specific time which is defined by its SCSI ID. More important devices have a higher priority, which means they wait for a shorter time and have a better chance of finding the bus free after their waiting period has expired.

The SCSI ID is also the ID used in the message to indicate who the specific receiver/addressee of the message is. So using the SCSI ID it can be determined who will receive a packet but also how high the priority of that receiver is. Typically the fastest devices (i.e. hard disks) on the bus get higher priorities than slower devices (tape backup units).
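A toy model of this waiting/priority mechanism is sketched below (illustrative only; real SCSI arbitration happens electrically on the bus, but the commonly documented ordering, with ID 7 as the highest priority, is used here):

# Pick the winner of a SCSI bus arbitration round.
# On a wide (16-ID) bus, ID 7 (usually the host adapter) has the highest priority.
PRIORITY_ORDER = [7, 6, 5, 4, 3, 2, 1, 0, 15, 14, 13, 12, 11, 10, 9, 8]

def arbitrate(requesting_ids):
    """Return the SCSI ID that wins the bus among all devices requesting it."""
    for scsi_id in PRIORITY_ORDER:
        if scsi_id in requesting_ids:
            return scsi_id
    return None   # nobody requested the bus

print(arbitrate({3, 5, 12}))   # 5 wins: it has the highest priority of the three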

SCSI Priorities: SCSI ID’s

SCSI communication is divided into phases:

• Bus Free: before starting a communication the bus must be idle. A test signal will detect if this is the case.
• Addressing: using the sender address and the receiver address. Here it is decided who are about to communicate.
• Negotiation: both sides decide on which data path width and speed to use in the transmission afterwards.
• Connection: the actual data transmission part.
• Disconnect: transmission successfully completed => bus released.

Tagged Command Queuing and Disconnect-Reconnect increase performance.
Every time a connection is established (or in other words, a connection between two SCSI devices is created) all steps of the communication have to be made. Once the device notices that the bus is free, the device has exclusive rights to transmit data over the bus.

The first thing to do next is to tell with which device it wants to communicate. This is called the addressing phase, and SCSI uses the SCSI IDs to indicate the target device.

Because various SCSI versions exist (in speed and number of devices used) both devices have to negotiate on which settings to use:

- What will be the transmission rate?
- How many addresses are available (8 or 16)?

This negotiation phase takes a relatively long time to complete. Only then will the actual user data be transmitted between the devices.

As the steps have to be completed for every data transmission, sending data across SCSI busses can take a long time. Techniques are used to make this time shorter. One important technique is disconnect-reconnect. Here a device makes the initial connection following all the steps.
When the device wants to transmit data to the same device again, it can now skip the negotiation phase, as it already knows who the receiving device is and what its specifications are.

Another time-saving feature is Tagged Command Queuing or Native Command Queuing. It is used in most modern hard disks and it uses the concept of sending multiple data packets in one batch. The device (here the hard disk) will then internally handle the multiple packets and write the individual SCSI blocks to the physical disk.

While the device internally stores the SCSI blocks, the bus is released so other devices can use it in the meantime. The connection needs to be created less often and the usage of the bus is improved.
SCSI Development

                      8-bit datapath                   16-bit datapath
SCSI                  1.5 MB/s (async) / 5 MB/s (sync)
FAST SCSI             10 MB/s                          FAST WIDE       20 MB/s
ULTRA                 20 MB/s                          ULTRA WIDE      40 MB/s
ULTRA2                40 MB/s                          WIDE ULTRA2     80 MB/s
ULTRA3 (DTC)          160 MB/s (16-bit only)
ULTRA320 (DTC)        320 MB/s (16-bit only)

Each step doubles the clock frequency; Ultra3 and Ultra320 additionally use Double Transition Clocking (DTC).
It was decided in the first SCSI standard to transmit all status information and all SCSI commands (i.e. addresses) in asynchronous mode at 1.5 MB/s. Once the selection phase was completed, the actual user data was sent in synchronous mode, which leads to higher transmission speeds.

To stay backward compatible, in Fast SCSI the asynchronous status/command transmission was kept constant at 1.5 MB/s whereas the data speed was doubled to 10 MB/s. This is still the situation at this moment!
Mostly the performance gain was achieved by increasing the clock frequency so signals could be transmitted faster. From Ultra3 onwards a second technology was used to improve the transmission rate: Double Transition Clocking.

In SCSI a clock is used to determine when a sample of the incoming signal has to be taken, and at that point the signal is measured. The clock signal is a block shaped signal that varies between 0 Volts and 3.3 Volts. The stage in which the signal changes from 0 Volts to 3.3 Volts is called the rising flank of the clock signal. When the sampled signal is at the 3.3 Volt level it is considered to be a logical “1”; anything less than that level is considered a logical “0”.
Single/Double Transition Clocking

(Diagram: the same data line sampled against the clock. With single transition clocking the samples read “1” “0” “1” “1” “0” “1” “1” “0”; with double transition clocking twice as many samples are taken in the same time.)
In the above diagram, with single transition clocking the rising flank of the clock signal is used. In the example the data received will be interpreted as 8 bits of data:

“1” “0” “1” “1” “0” “1” “1” “0”

With double transition clocking the falling flank of the clock signal (indicated in red) is also used as a sampling moment. Now not 8 but 16 data bits can be represented, which means the amount of transmitted information is doubled without actually changing the clock frequency!

“1” “1” “0” “0” “1” “1” “1” “0” “0” “1” “1” “1” “1” “1” “0” “0”
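As a simplified illustration of the idea (not taken from the course material), sampling the data line on both clock edges recovers twice as many bits in the same number of clock periods; the bit patterns below are the ones shown on the slide:

# The data line is sampled at each clock edge. With single transition clocking only
# the rising edges are used; with double transition clocking the falling edges are
# used as well, giving twice the samples at the same clock frequency.
rising_edges  = [0, 2, 4, 6, 8, 10, 12, 14]     # sampling moments (arbitrary time units)
falling_edges = [1, 3, 5, 7, 9, 11, 13, 15]

def sample(signal, moments):
    """Read the value of the data line at the given sampling moments."""
    return [signal[t] for t in moments]

data_line = [1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0]   # what the sender put on the wire

single = sample(data_line, rising_edges)                        # 8 bits per 8 clock periods
double = sample(data_line, sorted(rising_edges + falling_edges))  # 16 bits in the same time
print(single)                 # [1, 0, 1, 1, 0, 1, 1, 0] - the 8-bit pattern from the slide
print(len(single), len(double))   # 8 16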


SCSI Definitions

SCSI Protocol        Bus Speed,    Bus Speed,   Bus Width,   Max. Bus Length (m)             Max. Devices
                     MB/s (max)    MHz          Bits         Single Ended   LVD    HVD       Supported
SCSI-1               5             5            8            6              -      25        8
Fast SCSI            10            10           8            3              -      25        8
Wide Fast SCSI       20            10           16           3              -      25        16
Ultra SCSI           20            20           8            1.5            -      25        8
Wide Ultra SCSI      40            20           16           1.5            -      25        16
Ultra 2 SCSI         40            40           8            -              12     25        8
Wide Ultra 2 SCSI    80            40           16           -              12     25        16
Ultra 3 SCSI         160           40           16           -              12     25        16
Ultra 320 SCSI       320           80           16           -              12     25        16
The table clearly shows that the maximum cable lengths have decreased over the years. For Single Ended devices the cable could not be more than 1.5 meters at the time of Wide Ultra SCSI.

Also visible is the fact that there is no Wide Ultra 3 SCSI defined. At that time it was decided that the 8-bit wide addressing was no longer required and therefore only the 16-bit version was standardized.

Although both HVD and LVD are still supported as a SCSI standard, in practical life the LVD standard is mostly used. The reason is mainly the cost difference between the hardware components for LVD and HVD. It was already stated before that HVD devices cannot be mixed with LVD devices on the same SCSI bus. To prevent this from happening it is important to check this before powering on the devices. At that point it is useful to look at the specifications of all connected devices and the icons used for SE, LVD and HVD.
SCSI Protocol Addressing

Bus number: differentiates SCSI buses.
Device ID: differentiates devices connected to SCSI buses.
LUN: differentiates sub-devices in SCSI devices.
The SCSI protocol introduces SCSI device IDs and logical unit numbers (LUNs) to address devices connected to the SCSI bus. Each device connected to the SCSI bus has a unique ID. The host bus adapters (HBAs) on servers also have device IDs. Each bus has 8 or 16 device IDs. It is the device ID that can be used for prioritization. SCSI IDs were set inside the devices and with that the priority of a device could be determined. It was therefore important not to give the same SCSI ID to two different devices, as that would interfere with the addressing and priorities!

Storage devices may have a number of sub-devices, such as virtual disks, tape drives, and medium changers. LUNs are used to address those sub-devices.

A traditional SCSI adapter is connected to a single bus and therefore has only one bus number. One server may be configured with multiple SCSI controllers. Accordingly, the server has multiple SCSI buses. In a storage network, each Fibre Channel HBA or iSCSI network adapter is connected to a bus. Therefore, each bus must have a unique bus number. We can identify a SCSI target with three variables: bus number, device ID, and LUN.
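A minimal sketch of this B-T-L addressing in Python (the numbers used here are hypothetical examples, not values from the course):

from collections import namedtuple

# A SCSI target is identified by three coordinates: Bus - Target (device ID) - LUN.
ScsiAddress = namedtuple("ScsiAddress", ["bus", "target_id", "lun"])

# Hypothetical example: LUN 2 behind device ID 5 on bus 0, similar to the
# "Bus Number / Target Id / LUN" string shown in the Windows dialog described next.
disk = ScsiAddress(bus=0, target_id=5, lun=2)
print(f"Bus Number {disk.bus}, Target Id {disk.target_id}, LUN {disk.lun}")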


Method for querying the SCSI device ID in Windows

(Screenshot: the Windows Computer Management console showing a disk's Properties dialog.)

Right-click on My Computer and choose Manage from the shortcut menu. In the Computer Management window, click Disk Management in the navigation tree. Right-click the mapped disk and choose Properties from the shortcut menu. On the General tab page, you can view the SCSI device ID information in Location.

The picture shows the identifier as Bus Number, Target ID and LUN ID (or B-T-L). The Target ID is the actual SCSI ID. The term target is generally used for the location where data is physically stored. That could be a physical hard disk but also a more complex storage system.
ATA and SATA

 Advanced Technology Attachment was the standard in desktops in the 1990’s.
 ATA devices use the Programmed IO method and are therefore not very fast or intelligent.
 Serial ATA is the improved version. It first replaced ATA in desktops but...
 Because they were relatively cheap and had big capacities they are also used in enterprise servers and storage devices.
 NL-SAS offers the advantage of big capacity with SAS intelligence.
Parallel SCSI has reached its limits of use. It is too difficult to improve the performance, as the physical problems at that point become hard to solve. Serial communication is indifferent to many of the physical problems that parallel communication has. It is therefore the direction in which the technology evolves.

SATA is the improved serial version of the ATA (Advanced Technology Attachment) technology that was used in laptops and desktops. With ATA (or better Parallel ATA or PATA) there is a bus architecture just like with parallel SCSI. However, the PATA interface works differently from SCSI. Unlike with SCSI, where the devices can independently decide to communicate with other devices, a PATA interface uses a so-called PIO mode concept.

In PIO mode, or Programmed Input Output, communication is always controlled by the Central Processing Unit (CPU) in the host. In the CPU a special software program is used to transfer the data that needs to be stored from the RAM memory towards a special register in the CPU. The design of the CPU and software then enables the data to be moved from within the CPU chip, via a copper based bus system, to the interface of the hard disk.

PATA interfaces were not used in high end solutions because the speed was not optimal. That was partly because of the PIO mode but also because with parallel communication in general the performance is limited.

When SATA was introduced it initially replaced the PATA interfaces that were used in desktops and laptops. Later SATA drives were also used more and more in high end systems. That was primarily because the capacity of SATA drives was larger than that of SCSI drives and at the same time the price was relatively low. Many vendors used SATA drives in their storage solutions for some 5 years because of the price and capacities of the disks.

SATA itself is not completely outdated, but most vendors have switched over to the superior SAS technology. With SAS the benefits of SCSI are kept and the limitations it had have been removed. Capacities of SAS disks are however smaller than the capacities of SATA disks. So a number of vendors offer storage solutions that use so-called NL-SAS disks or Near Line SAS. The NL-SAS disk is basically a SATA disk drive that is fitted with a SAS type interface and that can therefore be connected to a SAS device.

So let us have a look at Serial Attached SCSI.
Serial Attached SCSI (1)

• In storage, SAS has taken over from parallel attached SCSI and from SATA.
• SAS uses a point-to-point architecture: performance ≥ 300 MB/s.
A point-to-point connection is designed to be a dedicated link for communication, whereas a multidrop bus has to be shared. Accessing a point-to-point link is much quicker because no negotiations have to be held to find out who is allowed to use the link.
Serial Attached SCSI (2)

(Diagram: SAS - SATA connector compatibility.)
In the design of the SAS interfaces it was decided to use the same form factor as SATA for all connectors. This even allows some mixing of device types within a group of disks.
Serial Attached SCSI (3)

 Architecture allows multiple datapaths with each link running at full speed. Supports bundling of channels for wide links.
 SAS uses full duplex communication.
The most important improvements that SAS offers compared to parallel SCSI are:

 Much more throughput because of the serial communication, and the promise for the future that even more performance will be possible. Four channels can be bundled: Wide Link.

 A greater number of devices can be connected together. Where SCSI had a maximum of 16 per domain, the maximum for SAS per domain is now 16,384.

 Full Duplex or bidirectional communication with SAS instead of simplex (unidirectional). With traditional parallel SCSI only one connection could be used in one direction. When a device received a packet in parallel SCSI, the response to the packet would be arranged as a new SCSI communication (with all the necessary steps) after the first connection was released. In SAS, two-way communication is possible.
Serial Attached SCSI (4)

• Up to 16,384 SAS devices can be joined together in a SAS domain.

(Diagram: a SAS RAID controller connected to fan-out and edge expanders; each expander can connect devices numbered 1 through 128.)
Per expander a maximum of 128 devices (expanders and/or drives) can be connected. The total maximum of drives attached is 128 x 128 = 16,384 drives.

A SAS domain therefore consists of expanders and SAS drives. Two types of expanders were defined:

1. Edge expander, with only disks attached.
2. Fan-out expander, which holds up to 128 expanders.

Fan-out expanders were originally equipped with an address routing table that keeps track of where all SAS drives are located (each SAS drive gets a unique “home address” within the domain). Nowadays edge expanders are also equipped with the routing functionality, so the need for separate fan-out expanders is no longer there.

Note: in practical life the number of connectors on expander cards (like the one shown in the picture above) is less than 128.
Principles of SAS cabling

• A SAS cable typically has four channels. Each channel is now 12 Gb/s.
• SAS devices are linked together in a loop (also called a chain).
• The bandwidth of 4 x 12 Gb/s limits the number of disks in the loop.
• Currently the maximum number is 168 as a best practice.
• With 24-disk drive enclosures this makes 7 enclosures.
• However: with the faster SSD drives the maximum number is 96 disks or 4 disk enclosures.
• SAS connectors are:
  Mini SAS
  Mini SAS High Density
Most vendors of storage devices now offer SAS as the technology to connect disk enclosures to the controller(s). SAS cables usually contain 4 separate channels that can be bundled to provide more bandwidth. At this point a channel can perform at a speed of 12 Gb/s, and as a wide link the four channels can provide 48 Gb/s of bandwidth. To make sure that the bandwidth is not exceeded, best practices are defined that limit the number of disks that are connected in one single loop.

For Huawei this maximum is at this moment 168 disks. There can be 24 disks in an enclosure, which means that a maximum of 7 enclosures is supported per loop. However, this assumes that the disks are traditional SAS disks. Now that the SSD is getting more popular, we must realize that an SSD can deliver more output than a SAS disk. This has resulted in a best practice maximum for SSDs in a loop of 96 (or 4 enclosures).

The physical connector changed when the 6 Gb/s standard was improved. The 6 Gb/s connector is referred to as mini SAS and the newer 12 Gb/s standard uses the mini SAS HD (High Density) connector.
SAS and bandwidth limitations

In principle a loop or chain can contain an unlimited number of devices. It is however very important to realize that practical problems can occur when the number is too big. With the picture below we will explain these problems.

(Diagram: a SAS interface daisy-chained to Disk Enclosures #1, #2 and #3. The link from enclosure 3 to enclosure 2 carries 2 blocks, the link from enclosure 2 to enclosure 1 carries 3 blocks, and the last link from enclosure 1 back to the SAS interface carries all 5 blocks.)
In the above image a typical situation is given where three disk enclosures are connected to a controller. It is a simplified drawing, as in real life all cabling is a bit more complex. This will be explained in the chapter about SAN, and there are a few real cabling schemes in chapter 9 too.

The three enclosures are daisy-chained (or put in a one-after-one loop) and all data from an enclosure will pass through the enclosure “in front” of it. In other words, the data sent from a disk in enclosure 3 will pass through enclosure 2 and enclosure 1 on its way to the SAS interface in the device. Similarly, all data from enclosure 2 passes through enclosure 1.

The diagram now shows that adding enclosure after enclosure means that the last cable, from enclosure 1 back to the SAS interface, transports all data sent from enclosures 1, 2 and 3. If too many disks are sending data at the same time, the total sum of data on the last cable may be higher than what the cable can handle. So in the example 2 or 3 blocks (on the links from 3 to 2 and from 2 to 1) is not a problem, but the last cable has to handle 5 blocks (all of them). If 4 blocks is the maximum for the cable, then sending 5 blocks would be done more slowly than expected. That is why a maximum number is suggested.
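The cumulative load on each link of such a daisy chain can be sketched as follows (illustrative only; the block counts are the ones from the example above):

# Traffic offered by each enclosure, counted in "blocks" as in the example.
# Enclosure 1 is closest to the SAS interface, enclosure 3 is at the end of the chain.
def link_loads(offered):
    """offered: dict {enclosure_number: blocks it sends}, 1 = closest to the controller.
    Returns {enclosure_number: blocks carried on the link from that enclosure upstream}."""
    loads = {}
    running = 0
    for enclosure in sorted(offered, reverse=True):   # start at the far end of the chain
        running += offered[enclosure]                 # add this enclosure's own traffic
        loads[enclosure] = running                    # everything downstream passes through here
    return loads

print(link_loads({1: 2, 2: 1, 3: 2}))   # {3: 2, 2: 3, 1: 5} -> the last cable carries all 5 blocks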


Disk Technology

(Image: the internals of a mechanical hard disk drive.)
Regardless of the technology of the disk (SCSI, SATA, SAS), the mechanics of disk drives have not changed much over the years. Having said this, we must already mention a new technology that is making a big entry in the disk storage world. This new technology is called SSD or Solid State Disk. In an SSD there are no more mechanical moving parts and data is stored on a medium which is best compared with a huge USB flash disk. Solid State Disks are therefore also referred to as Flash Disks. At this moment SSD drives are relatively expensive and their capacity is smaller than that of traditional mechanical spinning disks. Later in this module Solid State technology will be explained in more detail.

So let us look at the interior of a disk now.
HDD Components

(Diagram: the main parts of a hard disk drive: platter, spindle, actuator, head, control circuit and interface.)
The following are mechanical and electrical components of an HDD:

 Head: reads and writes data.
 Actuator: moves the head or head arm to a desired position.
 Platter: holds the recorded data.
 Spindle: spins the flat circular disks (the platters).
 Control circuit: implements system control and speed/spinning adjustments.

All hard disks are based on the same principle: magnetic materials are used to cover the platter and magnetic particles are then polarized to encode a binary information unit (or bit).

Using magnetic properties to store data is very old, relatively cheap and therefore very popular for storing large amounts of data. Other storage technologies that also use magnetic properties are/were floppy disks and tape.
Recording Methods

 Longitudinal recording (used in the past).
 Perpendicular recording. Now used and offers disk capacities of many terabytes.
Although hard disks have now gotten smaller (the format was 3.5 inch initially but is now 2.5 inch), the capacity of disk drives has increased over the years.

An important reason is the quality of the magnetic materials, the actuator motors and the construction of the read/write head. But even more important was the introduction of perpendicular recording. Now the magnetic field of the read/write head changes the magnetic particles in a vertical plane, where in the past it changed the particles in a horizontal plane.

With perpendicular recording a higher density can be achieved and therefore a higher capacity. In the near future capacities of more than 8 TB per disk will be available.
Hard Disk Properties

(Diagram: platters with tracks and sectors, the read/write heads mounted on the actuator, the cylinder formed across the platters, and the spindle and actuator motors.)
Data on a hard drive is stored in tracks and sectors. This is because the platter on which the magnetic material is fixed rotates while a magnetic read/write head moves to a specific location over the disk platter. The pattern the read/write head “sees” is a circular pattern called a track. A cylinder is made up of all similarly positioned tracks on all of the platters; in the picture above the positions marked A, B, C and D together form one cylinder.

The number of tracks a hard disk uses depends on the size of every individual step made by the actuator on which the read/write head is mounted. In modern hard disks the number of steps the actuator arm can make runs into the hundreds, which creates hundreds of tracks on the platter. Each of these tracks is divided into sectors. In a sector a fixed amount of binary information can be stored: for most drives this is 512 bytes (or 512 x 8 bits), although a new sector size of 4k (4096 bytes) is now also available.

The motor spinning the platters is a high speed motor with rotational speeds ranging from 7,200 rpm up to 15,000 rpm for modern disk drives.

The motor moving the actuator is a so-called stepper motor, which can make specific steps of 1 or 2 degrees if necessary with great accuracy. This is also required for the read/write head to be positioned correctly for each movement it makes. A small difference in the movement will lead to the head not being positioned over the correct track!
Hard Disk Performance

Hard disk performance (1)

≈ 250 sectors per track
Data: 512 bytes per sector (0.5 kB)
Per track: 512 bytes x 250 sectors = 125 kB
In this and the following slides a few simplifications have been made. The most important one is the assumption that every track contains 250 sectors. That was the case with early magnetic storage devices, but nowadays drives are more intelligent and one can definitely state that the outer tracks have more sectors in them than the inner tracks. However, the average of 250 is still valid in most cases.

The number of 512 physical bytes of data per sector is also valid, but how much actual data can be stored on a disk depends on the operating system accessing the drive. Within operating systems like MS Windows the term cluster size is used. This is the smallest amount of hard disk space a file can occupy. Floppies have a cluster size of 512 bytes and hard disks can have a cluster size ranging from 1 kilobyte to 16 kilobytes (sometimes even more).
Hard disk performance (2)

10k RPM: one revolution takes 60 / 10,000 s = 6 ms.
One revolution equals 125 kB of data.
Transfer = 125 kB / 6 ms = 20.83 MB/s
The rotational speed of a disk drive is the number of rotations the platter makes every minute. In storage devices nowadays three rotational speeds (or RPMs) are used:

- 7,200 Rotations Per Minute
- 10,000 Rotations Per Minute
- 15,000 Rotations Per Minute

For a 10,000 RPM drive it takes the platter 6 ms to make one full turn. If the read/write head reads all the data in that track, it has read 125 kB of data. Transfer speeds or throughput are measured in MB/s, so in this case: 125 kB in 6 ms makes a throughput of 20.83 MB/s for a 10,000 RPM disk drive.

Note:
This is the ideal situation, as normally the read/write head is not over the right track and has to be moved there first. Also: once the read/write head is over the track, it does not mean that the right sector is beneath the read/write head. Statistically you will have to wait half a turn to get to the correct sector to begin the read. This half turn is called the rotational latency. Sometimes the sector is directly under the read head and sometimes it has just moved past the read/write head and you will have to wait a full turn. The average wait is therefore half a turn.
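The arithmetic above can be reproduced with a small sketch (assuming, as on the slides, 250 sectors of 512 bytes per track, and treating 1 kB per millisecond as 1 MB/s exactly as the slides do):

SECTORS_PER_TRACK = 250
SECTOR_SIZE = 512                                       # bytes
TRACK_KB = SECTORS_PER_TRACK * SECTOR_SIZE / 1024       # 125 kB per track

def revolution_ms(rpm):
    """Time of one full platter revolution in milliseconds."""
    return 60_000 / rpm

def ideal_track_transfer(rpm):
    """Ideal transfer rate when a whole track is read in one revolution."""
    return TRACK_KB / revolution_ms(rpm)                # kB per ms, read as MB/s

print(revolution_ms(10_000))                            # 6.0 ms per revolution
print(round(ideal_track_transfer(10_000), 2))           # 20.83 MB/s
print(revolution_ms(10_000) / 2)                        # 3.0 ms average rotational latency (half a turn)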


Hard disk performance (3)

15k RPM: one revolution takes 60 / 15,000 s = 4 ms.
One revolution equals 125 kB of data.
Transfer = 125 kB / 4 ms = 31.25 MB/s
With a disk with a higher number of RPMs, the full turn takes less time. Now it would take just 4 ms to read the same 125 kB and the throughput would then be 31.25 MB/s.

As mentioned before, this is the ideal situation. The next picture shows the effect of rotational latency, and of having to move the read/write head to the proper track, on the throughput.
Hard disk performance (4a)

10k drive:
Seek time           ≈ 6 ms
Rotational latency  = ½ turn = 3 ms
Read time track     = 6 ms
Total time needed   = 15 ms

Full access transfer = 125 kB / 15 ms = 8.33 MB/s
Hard disk performance (4b)

15k drive:
Seek time           ≈ 6 ms
Rotational latency  = ½ turn = 2 ms
Read time track     = 4 ms
Total time needed   = 12 ms

Full access transfer = 125 kB / 12 ms = 10.4 MB/s
Modern day hard disks take approximately 6 ms to move the read/write head actuator from one track to another track. This is referred to as the seek time. So it takes 6 ms to get to the right track, another half a turn to find the right starting point on the track, and then another full turn to read all data in the track. The above picture shows that this has a big impact on the throughput of a disk. Things get even worse when we do not want to read the entire track but are interested in a single sector!

The term sequential read is used when data is read from a disk drive from many consecutive sectors on the same track. Sequential reads (or writes) are relatively quick, as the read/write head does not have to move between tracks to get to many sectors of data.

In real life the data is stored randomly across the magnetic surface of the platters. This is partly because of the working of the operating system but also because of the technology inside the storage device. For random reads the data needs to be picked up as individual sectors that are located on different tracks. The next picture shows what that means for the performance of the disk drive.
Hard disk performance (5a)

10k drive:
Seek time            ≈ 6 ms
Rotational latency   = ½ turn = 3 ms
Read time one sector = 0.02 ms
Total time needed    = 9.02 ms

Single sector transfer = 512 bytes / 9.02 ms = 55.4 kB/s
Hard disk performance (5b)

15k drive:
Seek time            ≈ 6 ms
Rotational latency   = ½ turn = 2 ms
Read time one sector = 0.016 ms
Total time needed    = 8.016 ms

Single sector transfer = 512 bytes / 8.016 ms = 63.9 kB/s
Per disk the throughput is not very high if only individual sectors are picked up from a disk platter. Fortunately, in a hard disk multiple platters and multiple read/write heads are used that can pick up more data for us. Add to that the fact that many disk drives can be used simultaneously, which implies that the amount of data that can be read per second is enormous.
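The full-access and single-sector figures from slides 4a/4b and 5a/5b can be reproduced with the following sketch (same simplifying assumptions as before: a 6 ms average seek, 125 kB per track, and the slides' rounding):

SEEK_MS = 6.0            # average time to move the head to the right track
TRACK_KB = 125.0         # data in one full track (250 sectors x 512 bytes)

def access_time_ms(rpm, read_ms):
    """Seek + average rotational latency (half a turn) + time spent reading."""
    return SEEK_MS + (60_000 / rpm) / 2 + read_ms

# Full-track read, as on slides 4a/4b:
for rpm in (10_000, 15_000):
    turn = 60_000 / rpm
    t = access_time_ms(rpm, turn)                 # reading a whole track takes one full turn
    print(rpm, round(TRACK_KB / t, 2), "MB/s")    # ~8.33 MB/s at 10k rpm, ~10.42 MB/s at 15k rpm

# Single 512-byte sector, as on slides 5a/5b:
for rpm, sector_ms in ((10_000, 0.02), (15_000, 0.016)):
    t = access_time_ms(rpm, sector_ms)
    print(rpm, round(512 / t, 1), "bytes/ms")     # ~56.8 and ~63.9 bytes per ms, i.e. tens of kB/s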


Apart from the amount of data a disk drive can read from the magnetic platters, there is also a parameter to be mentioned: IOPS.

IOPS is short for Input/Output Operations Per Second. This IOPS value states how many times per second a disk drive can “push out” data blocks (different block sizes are possible when testing) through the interface of the disk drive onto the network/path to the host.

For performance purposes the number of IOPS a disk drive can deliver is very important. If an application wants data to be moved from the disk to the host quickly, it needs many IOPS. The number of IOPS per disk is mechanically fixed. The following (average) values for IOPS can be used:

 Drives based on SATA technology: 80 - 100 IOPS
 Drives based on SAS technology: 150 - 200 IOPS

When multiple hard disks send data simultaneously, the total number of IOPS can be calculated by simply adding the IOPS values of all individual hard disks used.
Average Access Time

Average access time contains two parts:
1. Average seek time.
2. Average latency time.

(Diagram: the head seeks to the right track and then waits for the data block to rotate underneath it: seek time followed by latency time.)
Average Seek Time

The average seek time of an HDD is the time it takes for the head to move from its initial position to the specified position. It is an important parameter that affects the internal data transfer rate. The lower the average seek time, the better. The average seek time of IDE HDDs ranges from 8 ms to 11 ms.

Average Latency Time

The latency time, also known as hibernation time, refers to the time it takes for the desired data to be beneath the read head, assuming the head is already over the desired track. On average it is half of the time it takes for a complete turn of the platter. Therefore, the faster an HDD rotates, the lower the average latency time. The average latency time is usually less than four milliseconds.

Average Access Time

The average access time is the sum of the average seek time and the average latency time.
Transfer Rates

• Data transfer rate.
• Internal transfer rate.
• External transfer rate.

(Diagram: the external transfer rate covers the path between the system and the HDD interface/cache; the internal transfer rate covers the path between the cache and the platters.)
Data Transfer Rate

Data transfer rate refers to the speed at which an HDD writes or reads data and is expressed in MB/s. Data transfer rate is divided into internal transfer rate and external transfer rate.

Internal Transfer Rate

Internal transfer rate, also called sustained transfer rate, refers to the speed at which data are transferred from an HDD to its high-speed cache. It reflects the performance when the disk cache is not in use. It is a bottleneck for the overall HDD speed. Internal transfer rate mainly depends on the HDD rotational speed and is expressed in Mbit/s rather than MB/s.

External Transfer Rate

External transfer rate, also known as burst data transfer rate or interface transfer rate, refers to the speed at which data are transferred from the system bus to the disk cache. It is affected by the HDD interface type and the size of the HDD cache.
IOPS and Throughput

• IOPS
  Input/Output Operations Per Second (IOPS) is a common disk performance indicator that refers to the number of reads and writes per second in an HDD.

• Throughput
  Throughput indicates the amount of data that can be successfully transferred within a given time. For applications involving large-quantity sequential reads and writes, such as video editing and video on demand (VoD), throughput is more important than IOPS.
I/O calculation algorithm

The time it takes for a disk to complete an I/O request consists of the seek time, latency time, and data transfer time.

The seek time (Tseek) refers to the time taken by the head to move to a specified position. A shorter seek time indicates faster I/O operations. Mainstream disk seek time ranges from 3 ms to 15 ms.

The rotation latency (Trotation) refers to the time it takes for the desired data to be beneath the read head. The rotation latency depends on the rotational speed and is usually half of the time it takes for a complete turn of the platter. For example: the average latency of a 7200 rpm disk is 60 x 1000/7200/2 = 4.17 ms, and that of a 15,000 rpm disk is 2 ms.

The data transfer time is the time that an HDD takes to transfer the requested data. It depends on the data transfer rate and is equal to the data size divided by the data transfer rate. Mainstream IDE and ATA disks can reach an interface data transfer rate of 133 MB/s, and SATA II disks can reach up to 300 MB/s.
Given such a high transfer rate, the data transfer time is usually much shorter than the seek and
latency time. Therefore, the theoretical maximum IOPS is 1000 ms / (Tseek + Trotation), neglecting the
data transfer time.

Suppose that the average seek time is 3 ms and the rotational speeds are 7200, 10,000, and 15,000 rpm.
The theoretical maximum IOPS values are then:

 IOPS = 1000 / (3 + 60,000/7200/2) = 140
 IOPS = 1000 / (3 + 60,000/10,000/2) = 167
 IOPS = 1000 / (3 + 60,000/15,000/2) = 200
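The same calculation can be expressed as a short Python sketch, assuming the 3 ms average seek time used above and neglecting the data transfer time:

```python
def max_theoretical_iops(avg_seek_ms, rpm):
    # Service time per random I/O = seek time + rotational latency (half a turn).
    rotation_ms = 60_000 / rpm / 2
    return 1000 / (avg_seek_ms + rotation_ms)

for rpm in (7200, 10_000, 15_000):
    print(rpm, round(max_theoretical_iops(3.0, rpm)))   # ~140, ~167, ~200
```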
Earlier it was mentioned that SATA-based disk drives on average deliver 80-100 IOPS and SAS-based disk
drives deliver 150-200 IOPS. The number of IOPS varies a little with different rotational speeds, but
the maximum numbers are still valid.

Depending on the size of each block that is transferred, we can calculate theoretical throughputs for
the hard disks. The table below shows typical values for the three most used device types: SATA, SAS
and SSD.
Device type     Realized transfer speed     Number of devices/bus
SATA            200-300 MB/s                2
SAS             300-500 MB/s                16,384
SSD             500-1800 MB/s               16,384
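Throughput and IOPS are linked through the I/O block size (throughput is roughly IOPS times block size). A minimal sketch with assumed values, showing why large sequential I/Os matter for throughput:

```python
def throughput_mb_per_s(iops, block_size_kb):
    # Throughput = number of I/Os per second * size of each I/O.
    return iops * block_size_kb / 1024

print(throughput_mb_per_s(200, 8))      # ~1.6 MB/s for small 8 KB random I/Os
print(throughput_mb_per_s(200, 1024))   # ~200 MB/s for large 1 MB sequential I/Os
```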


Solid State Disk

Solid State Disk

A Solid State Disk (or SSD) is becoming more popular because its price is dropping and its capacity is
getting bigger and bigger.

Three basic types of SSD exist:
• Single Level Cell or SLC.
• Multi Level Cell or MLC.
• Triple Level Cell or TLC.

SSDs:
• use flash technology to store digital information.
• have no mechanical moving parts internally and therefore use less power and generate less heat and
  noise.

However: SSDs have a life span based on the usage of the SSD.

Although the traditional mechanical hard disk will not disappear very soon, its successor is already
widely available and becomes more popular every day. Solid State Disks or SSDs do not store
information using magnetic properties but store it within so-called cells. This technology is referred
to as flash and it makes it possible to store digital information very quickly and very compactly.
Another big advantage of SSDs is that they do not generate noise and generate far less heat than
traditional hard disks.

SSDs have no moving parts internally, but that does not mean they will last forever. Because of the
internal technology used in flash drives there is a so-called wear process. Every cell has a limited
number of times its content can be changed. Once this number has been reached, the disk can no longer
guarantee error-free reads and writes. This drive wear is, however, easy to monitor and predict, so a
replacement disk can be ordered in time. Traditional hard disks often fail without any warning, which
means that replacement disks have to be available at that moment.


SLC - MLC - TLC

Every cell in an SSD can store digital information using NAND.

In an SLC:
• every cell can represent one single bit of information: 0 or 1.

In an MLC:
• a cell represents two bits of information: 00, 01, 10 or 11.

In a TLC:
• a cell represents three bits of information: 000, 001, 010, 011, 100, 101, 110 or 111.
A cell consists of a small transistor-like component called a NAND circuit. Each NAND circuit
traditionally could store a single bit of information, so a "1" or a "0". The newer generations of SSD
drives use a special technique to store more information in a cell.

An MLC or multi-level cell can store 2 bits in a cell, and a TLC or triple-level cell can store 3 bits
per cell. Two bits of information means that 4 different data patterns can be stored: 00, 01, 10 and
11. With three bits the number of data patterns is 8, so more information can be stored in a TLC while
the physical size of a cell in an SLC is the same as in an MLC or TLC.

That is the reason that the capacity of SSDs has gone up a lot over the last couple of generations.
The first SSDs had capacities starting from 64 GB. Now the biggest TLC models can store up to 2.4 TB
of data.

However: the different types of SSD drives have different wear patterns. This means that it is
important to understand the wear characteristics when an SSD is selected.
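The relationship between bits per cell and the number of distinguishable data patterns is simply 2 raised to the number of bits, as this small sketch illustrates:

```python
for name, bits in (("SLC", 1), ("MLC", 2), ("TLC", 3)):
    # Each cell distinguishes 2**bits charge levels, i.e. 2**bits data patterns.
    print(f"{name}: {bits} bit(s) per cell, {2 ** bits} data patterns")
```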


Solid State Disk wear

The most important limitation is the number of changes a cell can handle.
Enterprise versions of SLC, MLC and TLC have different values:

Type     Capacity     Number of P/E cycles *     Price per unit
SLC      Small        About 100,000              High
eMLC     Moderate     About 30,000               Medium
cMLC     Moderate     5,000 to 10,000            Low
TLC      Large        500 to 1,000               Very low

* P/E (program/erase) cycles are the number of changes a cell can sustain.
The table shows that the number of P/E cycles varies between the SLC, MLC and TLC types. That means
that a basic understanding of the application that writes (or reads) data on the SSD is required, so
the impact on the wear of the SSD can be determined. For an application that primarily writes new data
it is best to select an SLC type SSD. Those are much more expensive, but the wear behaviour of the SLC
is much better, as it allows about 100,000 P/E cycles compared to the roughly 1,000 a TLC allows.

TLCs on the other hand are very good choices when an SSD should store a lot of data that gets read
often, for example video files, audio files or even website information. This data does not get
changed a lot and will therefore not wear out the SSD so quickly.

Note: eMLC and cMLC are terms that describe different quality versions of Solid State Disks. The
letter e stands for Enterprise (high quality, expensive), whereas the letter c stands for Consumer
(lower quality, less expensive).
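A rough endurance estimate can be made from the capacity, the rated P/E cycles and the expected daily write volume. The sketch below is a simplification that assumes perfect wear levelling and ignores write amplification; all figures are illustrative assumptions, not vendor specifications:

```python
def estimated_lifetime_years(capacity_gb, pe_cycles, writes_gb_per_day):
    # Total data that can be written before the cells are worn out, assuming
    # writes are spread evenly over all cells (perfect wear levelling).
    total_write_capacity_gb = capacity_gb * pe_cycles
    return total_write_capacity_gb / writes_gb_per_day / 365

# 480 GB drives with an assumed workload of 100 GB of host writes per day:
print(round(estimated_lifetime_years(480, 100_000, 100)))   # SLC-class P/E rating
print(round(estimated_lifetime_years(480, 1_000, 100)))     # TLC-class P/E rating
```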


SSD Introduction

SSD introduction

SSD hardware components (slide 49 diagram labels): backup power supply, SSD controller, DDR memory,
multi-channel flash concurrency, 6 Gbit/s SAS interface.

• Elimination of the high-speed rotational component, high performance, and lower energy consumption.
• Multi-channel concurrency.
• TCQ/NCQ, simultaneous response to multiple I/O requests.
• Average response time less than 0.1 ms.

Native Command Queuing (NCQ) and Tagged Command Queuing (TCQ) technologies reorder the commands sent
from a computer to disks, improving disk performance. NCQ technology was introduced in 300 MB/s SATA
II disks and is tailored for mainstream disks. TCQ technology was introduced in SCSI-2 (and also in
ATA-4) by Compaq and is tailored for servers and enterprise-class disks.

The same technology was later adopted by most hard disk manufacturers, but the name was changed to
NCQ.

For a system to support NCQ and TCQ, both the disk interface of the chipset and the disks themselves
must support these technologies. If a motherboard supports NCQ while a disk does not, the technologies
are unavailable.


Advantages of SSD Performance

Advantages of SSD performance

• Short response time. HDDs waste plenty of time in data seeking and latency, greatly affecting data
  transfer efficiency.
• High read/write efficiency. When data is randomly read and written on an HDD, its head has to keep
  moving, leading to inefficient reading and writing. An SSD uses its internal controller to locate
  and directly read data, improving reading and writing efficiency.

(Slide 50 diagram: a traditional HDD storage system vs an SSD storage system, showing the seek time
and latency time on the HDD I/O path.)

Short response time

HDDs waste plenty of time in data seeking and latency, greatly affecting data transfer efficiency.
SSDs eliminate the seek time and latency time as they have no mechanical moving components, so they
respond fast to read and write requests.

High read/write efficiency

When data is randomly read and written on an HDD, its head has to keep moving, leading to inefficient
reading and writing. An SSD uses its internal controller to locate and directly read data, improving
read and write efficiency. In a 4 KB random read/write scenario, a Fibre Channel disk delivers 400/400
IOPS, while an SSD delivers 26,000/5,600 IOPS.


SSD Energy Efficiency and SSD Environment Adaptability Advantage

SSD energy efficiency advantage

(Slide 51 charts: heat distribution of an SSD vs an HDD, and the energy consumption (W) of delivering
100,000 read IOPS with 2 SSDs vs 250 FC HDDs, a difference of nearly 400x.)

The energy efficiency advantage of SSDs over HDDs cannot be seen when only a few disks are used.
However, if a large number of disks are used, SSDs consume far less energy than HDDs. This is also a
key factor for enterprises to consider when selecting storage solutions.

SSD environment adaptability advantage

SSDs have no rotational components and can withstand severe environmental conditions.

Huawei SSDs are shock resistant and can:
• withstand a vibration acceleration of 16.4 G, while HDDs can withstand only 0.5 G;
• withstand a 1500 G impact, while HDDs usually withstand only 70 G.

Huawei SSDs (HSSDs) have gone through the following tests using professional testing equipment:
static pressure test, drop test, random vibration test, impact test, and collision test.

SSDs are resistant to harsh environments such as high temperature or humidity and strong vibration.
Some industry-class applications require that SSDs withstand a temperature range from -20°C to +70°C
or -40°C to +85°C.


SSD Application in Storage

SSD application in storage

• Level-A application: features highly concurrent random reads and writes, such as database
  applications.
• Level-B application: sequential reading and writing of large files, pictures, and streaming media.
• Level-C application: features backup data or rarely used data.

(Slide 53 diagram: access frequency vs data distribution, mapping level A to SSD media, level B to
Fibre Channel or SAS disks, and level C to SATA disks or tape.)

80/20 principle:
Data that is frequently read, written, and changed by users usually accounts for 20% of the total data
amount. This type of data is called hot data and corresponds to level-A applications.

Tiered storage:
Hot data is stored on SSDs. Data of level-B and level-C applications is usually stored on high-speed
HDDs or general HDDs to improve performance and reduce costs.
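The tiering idea can be expressed as a simple classification rule based on access frequency. The thresholds below are arbitrary assumptions for illustration; real storage systems use far more sophisticated heat statistics:

```python
def choose_tier(accesses_per_day):
    # Hot data (level A) goes to SSD, warm data (level B) to fast HDD,
    # cold data (level C) to SATA disk or tape.
    if accesses_per_day >= 100:     # assumed "hot" threshold
        return "Level A: SSD"
    if accesses_per_day >= 1:       # assumed "warm" threshold
        return "Level B: FC/SAS HDD"
    return "Level C: SATA or tape"

for freq in (500, 10, 0):
    print(freq, "->", choose_tier(freq))
```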


Questions

Questions

1. Name three characteristics of a DAS ICT infrastructure.
2. What is the difference between parallel and serial communication?
3. How many devices can be connected together in a SAS domain?
4. Name the three types of Solid State Disks.
5. Describe what is meant by the term SSD wear.
Answers

1. Block based (SCSI); islands of storage; short distances between components.
2. With parallel communication, multiple paths are used simultaneously to transmit data, which brings
   physical and electrical problems. Serial communication uses a single path to transmit the data
   sequentially.
3. The maximum is 16,384 devices in a SAS domain.
4. SLC, MLC and TLC.
5. The maximum number of physical changes to the SSD medium before the SSD reports that it has to be
   replaced. It is therefore not really mechanical wear indicated in days, months or years, but a
   number of program/erase cycles.


Exam Preparation

Exam preparation (1)

1. Statements

Statement 1: A DAS solution is also referred to as an island of storage.
Statement 2: SLC type SSDs are ideal when large amounts of data need to be stored and read many times.

a. Statement 1 is true; Statement 2 is true.
b. Statement 1 is true; Statement 2 is false.
c. Statement 1 is false; Statement 2 is true.
d. Statement 1 is false; Statement 2 is false.

Exam preparation (2)

2. Which of the following disk drive technologies are used in high-end storage solutions?
   Select all that apply.

a. Parallel SCSI
b. ATA
c. SAS
d. SSD
e. PIO

Answers

1. The correct answer is: b.
2. The correct answers are: a, c and d.


Summary

Summary

Direct Attached Storage is hardly used anymore, as the idea of having islands of storage is no longer
popular.
• SCSI technology is still used to connect hosts with their physical disks.
• Serial Attached SCSI has replaced the old parallel SCSI technology almost completely.
• SAS is highly scalable, has high performance and is relatively cheap to implement.

In the previous chapter we have seen the development of disk technology (parallel SCSI, SATA, SAS) as
the interface between hosts, and their applications, and the physical hard disks that hold the user
data.

DAS systems have the limitation that all data is private to the host. Sharing between islands of
storage was, and is, not easy. So the evolution of ICT infrastructures led to the next step: Network
Attached Storage.

Goals for Network Attached Storage solutions were:
- to eliminate the islands of storage;
- to allow people to share disk space;
- to allow people to share data with other hosts and their applications.


Thank you

www.huawei.com




OHC1109103

NAS Technology

www.huawei.com
Introduction

In this module we will look at the second of the possible ICT infrastructures: NAS, or in full,
Network Attached Storage.

Objectives

After completing this module you will be able to:
• Know the NAS structure and implementation.
• Master the NAS file sharing protocols, NFS and CIFS.
• Understand the I/Os and performance of a NAS system.
• Understand the differences and relationship between SAN and NAS.
• Understand Huawei NAS products.

Module Contents

1. Characteristics of a NAS ICT infrastructure.
2. NAS network topology.
3. Network protocols CIFS and NFS.
4. Ethernet standard.
5. Ethernet cables.
   • 10-BASE5.
   • 10-BASE2.
   • 10-BASE-T.
   • Crossover and straight cables.
6. Ethernet frame.




Network Attached Storage

Network Attached Storage

(Slide 4 diagram: hosts running Windows, Linux and Mac OS connected over the network to a NAS device.)

1. The network is based on Ethernet.
2. With Gigabit Ethernet and CAT 6 cables: max = 100 m per cable.
3. Shared folders are created on the NAS server for individual users.
4. Files are moved across the network.
5. Hosts can run different operating systems.
6. Different protocols are used, such as CIFS and NFS.

With Direct Attached Storage (or DAS) there are a few problems and limitations.
The lack of scalability and the fact that you cannot share data between the DAS islands of storage are
the biggest problems. With the introduction of Network Attached Storage these problems have been
solved. Now it is possible to build an infrastructure that uses Ethernet networking technology to
connect multiple workstations (which is where the applications run that need or create the data) to
the place where the data is now centrally stored.

An important difference with DAS technology is the form in which data is moved between the application
running on a workstation and the physical disk.

With DAS the data is transmitted as SCSI blocks with a size of 512 bytes. The transmission requires
all the actions of the SCSI protocol discussed in the previous module.

Network Attached Storage (or NAS) solutions work differently. If you were able to look inside the
network cables you would see entire files being moved across the network. In the beginning, when the
speed of Ethernet technology was rather limited, it took a lot of time to move for instance a 2 GB
file across the network. NAS solutions were not very popular then, but now the speed of the Ethernet
network is 1 or even 10 Gb/s and NAS infrastructures have proven to be very fast as well.


What does remain an issue is the scalability of NAS infrastructures. As the medium across which we
transport the files is mostly copper-based Ethernet cable (we use an indication like CAT 5E or CAT 6
to indicate the quality of an Ethernet cable), there are limits to the length of an individual
Ethernet cable.

Ethernet itself is a standard which is officially called IEEE 802.3 and it describes hardware as well
as software specifications.

Note:
Throughout this course the speed of a transmission will be indicated in Gb/s or Gbit/s. In both cases
it refers to a transmission speed of 1 gigabit per second, or 1,000,000,000 bits per second.

In the upcoming slides a few of the most important specifications of Ethernet will be discussed.

Note:
IEEE is the name of the committee that has set up the specifications for many technologies, among
which the Ethernet standard. The full title of the committee is the Institute of Electrical and
Electronics Engineers.


NAS Network Topology

NAS network topology

(Slide 5 diagram: a NAS device connected through a network switch to multiple servers and client
workstations.)

The picture shows a modern NAS solution where the device identified as NAS is the most important
component. The NAS device is connected via a network switch with multiple servers and/or client
systems (sometimes also referred to as workstations).

The workstations and servers can run different operating systems and they all run their specific
applications. The data that these applications generate is stored on the hard disk(s) inside the NAS
device.

In the past the network technology to connect the workstations with the NAS device could be something
like Token Ring, FDDI or ArcNet. As Ethernet has become the most popular network connection mode, we
will only discuss NAS environments that are based on Ethernet.

As discussed before, the NAS device transports entire files across the network to and from the
workstations/servers. As workstations may run different operating systems, the NAS device needs to
understand how each operating system handles the transport of a file. The reason, of course, is that a
Windows based host uses a different method to find and access a file that is stored externally than a
Linux/Unix based host does. The way an operating system accesses a file that is stored on a network
connected device is called a protocol.


Protocols are used within operating systems to access a file that is not physically located inside a
host but is only accessible via the network interfaces over an Ethernet based network.

Operating system     Protocol
Windows              SMB (Server Message Block), CIFS (Common Internet File System)
Linux/Unix           NFS (Network File System)
Apple                AppleTalk (older Apple Mac OSes), NFS
Novell               NCP (NetWare Core Protocol)

Another example of a network file system protocol is HTTP, the HyperText Transfer Protocol. This is
used to access webpages on the internet. Essentially, when a webpage is viewed, it means that a file
(i.e. INDEX.HTML) is fetched from the remote webserver the website is hosted on!

From the NAS perspective it means that remote users access the webpage INDEX.HTML, but these remote
users might use different operating systems and browser software. It is therefore vital that the NAS
device "speaks" and "understands" all of the protocols used. A NAS device is basically a server with a
lot of local storage capacity, and it of course runs an operating system. On top of the standard
network file system protocol of the operating system itself, additional intelligence is added to
support the other protocols.

There are two possible implementations of NAS: Integrated NAS and NAS gateway.
The next slides will discuss the differences between them.


NAS implementation: Integrated NAS

NAS implementation: Integrated NAS

(Slide 6 diagram: clients connected over an IP network directly to an integrated NAS device.)

Example: Huawei OceanStor V3
         NetApp FAS series

The integrated NAS is the latest stage in the evolution of NAS. In the "older" variant, called a NAS
gateway, an extra device is used for the NAS functionality.

In the integrated NAS everything needed is combined in one single device. It can store data on hard
disks and handle the requests of all the client computers that want to write (or read) files on the
NAS.

Some examples of integrated NAS solutions are Huawei's OceanStor V3 series storage and NetApp's FAS
series.


NAS implementation: NAS gateway

NAS implementation: NAS gateway

(Slide 7 diagram: clients connected over an IP network to a NAS gateway, which connects over FC to a
block-based storage array.)

Example: Huawei N8500
         NetApp FAS 8000

The picture clearly shows that a NAS gateway is a device that links the client computers with the
actual storage array where the data is stored. The storage arrays are block based and the NAS gateway
converts the data from blocks of bits and bytes into files (and vice versa). The NAS gateway is a
dedicated solution that has connections with both the IP network and the FC network.

NAS Architecture

(Slide 8 diagram: the software stack of a NAS device, from top to bottom: network file system (NFS and
CIFS), file system, operating system, NAS hardware.)

This image shows the NAS architecture, or in other words, the software structure of a NAS device.


What is CIFS?

What is CIFS?

Common Internet File System (CIFS) is a protocol that enables application programs to access files and
services on a remote computer over the network.

The transmission protocol used is TCP/IP.

TCP (Transmission Control Protocol) is the part of the TCP/IP protocol suite that takes care that
packets are sent and reassembled in the right order. It is also responsible for error checking.

IP (Internet Protocol) is responsible for the actual delivery of the packets to the receiving system.
To find that receiving system it uses the IP address of the receiver.

The name CIFS itself is not really accurate. CIFS is in essence a dialect of SMB, the Server Message
Block protocol, which has since evolved into SMB v2 and v3. SMB was used for a couple of reasons, and
one of them was to access files that were stored on another Windows based host connected to the same
network.

CIFS uses the client/server model and is dedicated to file sharing in the Windows environment. A
client sends a request to a remote server asking for services, and the server responds to the request.
A NAS system uses the CIFS protocol to share storage resources with Windows hosts. In a NAS system it
is very important that we not only store our data centrally, but also that multiple hosts can access
the same data simultaneously. In common language the name file server is also used to describe the
functions of NAS devices. In many organizations the concept of sharing data is then described as: our
data is stored in a public folder on the file server.

Public folders, or better, shared folders, are used to store data that has to be accessible to several
users. In practice a company creates multiple shared folders and uses methods within the operating
system to allow only certain users to access certain folders.

It is even possible to organize things in such a way that some users can only see and read the files
(Read-Only permission) while others have the possibility to change the contents of a file (Read-Write
permission). These permission levels (Read-Only / Read-Write) can be set on individual files or on
folders or subdirectories that hold many files.

Also: both Linux and Windows have options to assign these permissions to individual users (or even
groups of users).
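From a Windows client, a CIFS share is simply addressed by its UNC path. As a minimal illustration, the Python sketch below reads a file from a share; the server, share and file names (\\NAS01\public\...) are hypothetical examples, not names used elsewhere in this course:

```python
from pathlib import Path

# UNC path to a file on a hypothetical CIFS share; Windows resolves the access
# through the SMB/CIFS client that is built into the operating system.
report = Path(r"\\NAS01\public\reports\monthly.txt")

if report.exists():                   # connectivity and permissions permitting
    text = report.read_text(encoding="utf-8")
    print(text[:200])                 # show the first part of the file
```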


What is NFS?

What is NFS?

Network File System (NFS) is a technology for sharing files among UNIX systems. It allows data to be
stored on central servers and easily accessed from clients over a network.

Originally developed by Sun Microsystems in 1984.

Based on the Open Network Computing Remote Procedure Call (ONC RPC) system. This is an open standard,
allowing anyone to implement it.

Although most of the servers in professional data centers run a Microsoft operating system, there are
also quite a few companies that use the open source Linux operating system. Nobody can really claim to
be the actual owner of the Linux operating system, because the open source concept means that
everybody can get the software for free and use (or adapt) it freely. That has led to a number of
different versions of the Linux operating system.

Examples of Linux versions or Linux distributions are Red Hat, SuSE, Ubuntu and CentOS; NetBSD is a
related Unix-like system. A good thing about all these Linux versions, and also about similar
operating systems like Unix and Mac OS X, is that they share a common Unix-like foundation. In that
foundation, also referred to as the kernel, the protocol to access remote files is present: NFS. With
NFS or Network File System a Linux/Unix based host can access a remote file via the network.

The NFS protocol was originally developed by Sun Microsystems in 1984, allowing directories and files
to be shared among systems, even if they are running different distributions. Through NFS, users and
programs can access files on a remote system just like they would when accessing local files. NFS
enables each computer to utilize network resources as conveniently as local resources; that is to say,
NFS allows file access and sharing among heterogeneous computers, operating systems, network
architectures, and transmission protocols.

NFS also uses the client/server model and involves a client program and a server program. The server
program allows other computers to access the shared file system; the result of this process is called
"output". The client program accesses the shared file system; the result of this process is called
"input". Files are transmitted in blocks (a block = 8 KB) and operations may be divided into fragments
of a smaller size. NFS enables file access and sharing among servers and clients, and allows clients
to access data saved on remote storage devices.

In the past it was very common to have only Windows based hosts interconnected on a network, or only
Linux/Unix based hosts. A combination of the two was virtually impossible, as the protocols CIFS and
NFS are not compatible and "run" on different operating systems.

Long before the first real NAS solutions were made there was a project called SAMBA, intended to allow
a Windows based host to exchange files with a Linux/Unix based host. The Linux/Unix host installs the
SAMBA software, which makes it understand the CIFS/SMB protocol that Windows uses natively. Today a
NAS device has both protocols "on board".


Comparison between CIFS and NFS

Comparison between CIFS and NFS

If a file system is already set up as:
• a CIFS share, the file system can additionally only be set up as a read-only NFS share;
• an NFS share, the file system can additionally only be set up as a read-only CIFS share.

Protocol: CIFS
  Transmission protocol: TCP/IP
  Client: integrated into the operating system, no additional software needed
  Fault impact: large
  Efficiency: high
  Supported operating systems: Windows

Protocol: NFS
  Transmission protocol: TCP or UDP
  Client: requires additional software
  Fault impact: small, the interaction process can be automatically resumed
  Efficiency: low
  Supported operating systems: Unix
• CIFS is a network-based sharing protocol. It has high demands on network transmission reliability,
  so it usually uses TCP/IP. NFS transmissions are more independent of each other, so NFS can use TCP
  or UDP.

• One disadvantage of NFS is that clients must be equipped with dedicated software. CIFS is integrated
  into the operating system and requires no extra software.

• NFS is a stateless protocol while CIFS is a stateful protocol. NFS can automatically recover from a
  fault while CIFS cannot. CIFS transmits only a little redundant information, so it has a higher
  transmission efficiency than NFS.

Both protocols require file format conversion.

From the table above it becomes clear that a folder or volume can be accessed by users from different
systems, as both a CIFS and an NFS share can be created to access the files.

However, looking at the restrictions that can be applied, there is a limitation: once a CIFS share is
assigned read-write permission, the additional NFS share can only be read-only.

Similarly, when NFS read-write permission is assigned, the additional CIFS share will be read-only.
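The cross-protocol rule described in the commentary above can be captured in a few lines of code. This is only a sketch of that rule as stated in the text, not of any particular NAS product's API:

```python
def allowed_secondary_share(existing_protocol, existing_mode):
    # If a file system is already shared read-write over one protocol,
    # the share over the other protocol may only be read-only.
    other = "NFS" if existing_protocol == "CIFS" else "CIFS"
    if existing_mode == "read-write":
        return other, "read-only"
    return other, "read-write or read-only"

print(allowed_secondary_share("CIFS", "read-write"))   # ('NFS', 'read-only')
print(allowed_secondary_share("NFS", "read-write"))    # ('CIFS', 'read-only')
```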


Accessing files on a NAS

We already discussed the fact that the files on a NAS device can be written to, or read from, client
computers that run different operating systems. The next image shows how a Windows based client can
create a so-called network mapping in Windows 2008/7/8/2012. Once this mapping is created, the user in
Windows can "see" all the files on the share that was created on the NAS device.

Accessing files on a NAS

Steps to host a file system:
• Create a LUN.
• Map the LUN to the NAS device.
• Create a file system on the LUN.
• Mount the file system.
• Access the file system.

Use NFS in a UNIX environment:
execute the mount/nfsmount command.

Use CIFS in a Windows environment:
map the network drive, for example as \\ACCOUNT1\ACT_REP.

Also in the image is the reference to the procedure for Linux based clients that use NFS. There the
procedure consists of a few actions; the most important one is shown: the mount/nfsmount command.
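To make the two client-side procedures concrete, the sketch below invokes the corresponding operating system commands from Python. The NFS server name and export path, the mount point and the drive letter are hypothetical examples, and the exact options differ per operating system and NFS version; both commands normally require administrator/root privileges:

```python
import platform
import subprocess

def attach_nas_share():
    if platform.system() == "Windows":
        # CIFS: map the share to a drive letter via the built-in "net use" command.
        subprocess.run(["net", "use", "Z:", r"\\ACCOUNT1\ACT_REP"], check=True)
    else:
        # NFS: mount the exported file system on a local mount point.
        subprocess.run(["mount", "-t", "nfs",
                        "nas01:/export/act_rep", "/mnt/act_rep"], check=True)

attach_nas_share()
```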


Ethernet Standard

Ethernet Standard

The IEEE 802.3 standard from the Institute of Electrical and Electronics Engineers describes the
concepts and hardware (cabling, connectors) of Ethernet.

Ethernet was defined in 1983 and over the years has replaced alternatives like Token Ring, FDDI and
ArcNet.

Ethernet (and all its variations) has been standardised in many IEEE 802.3 sub-definitions.

Examples: 802.3u (100 Mb/s Fast Ethernet).
          802.3ab (Gigabit Ethernet).
          802.3at (Power-over-Ethernet).

The core concept of Ethernet is a technology called CSMA/CD, or Carrier Sense Multiple Access with
Collision Detection.

The IEEE 802.3 standard is a working group standard, which means that there are changes, updates and
improvements constantly; IEEE 802.3 is therefore never finished. It was primarily created to document
and standardize methods that can be used in local area networks. Because of all the improvements and
additions over the last 30 years we now have a large number (more than 30) of IEEE 802.3 standards.

Here are just a few of them:

IEEE number   Year    Description
802.3         1983    10BASE5 with thick coax
802.3i        1990    10BASE-T with twisted pair
802.3u        1995    100BASE-T, also known as Fast Ethernet
802.3ab       1999    1000BASE-T, Gigabit Ethernet with twisted pair
802.3bq       ~2016   40GBASE-T, planned 40 Gigabit Ethernet with twisted pair

On top of all these versions of the 802.3 standard, different physical versions of each speed class
exist. As an example, some physical variants of Gigabit Ethernet are shown in the next table.


Name          Medium                                      Specified distance
1000BASE-CX   Shielded balanced copper cable              25 meters
1000BASE-KX   Copper backplane                            1 meter
1000BASE-LX   Multi-mode fiber                            550 meters
1000BASE-EX   Single-mode fiber at 1,310 nm wavelength    ~40 km
1000BASE-TX   Twisted-pair cabling (Cat 6, Cat 7)         100 meters

All Ethernet based networks have a bus structure where multiple devices (hosts, switches, storage
arrays) can access the bus to transport information. Just like with the SCSI protocol, something has
to be arranged to prevent a device from interfering with other devices on the network. The solution
for Ethernet is CSMA/CD.


CSMA / CD

CSMA / CD

(Slide 14 flowchart: START -> is the channel free? If not, wait and check again. If free, transmit
data. If a collision is detected, wait and start over; otherwise the transmission is complete.)

The above picture shows that a wait period is started as soon as a collision is detected. This waiting
period is generated randomly, so with CSMA/CD a device does not know the waiting period it will get
when a collision happens. No priority system can therefore be used to make one device wait longer (or
shorter). It is just a matter of waiting and trying again before a device can communicate in a very
busy Ethernet network. Especially in situations where the Ethernet speed was still low (10 or 100
Mb/s) it could take a few minutes before, let's say, 30 booting devices all managed to connect to the
network successfully.

The actual CSMA/CD process is a two step approach:

1. Main procedure
   - Is my frame ready for transmission? If yes, go on to the next point.
   - Is the medium idle? If not, wait until it becomes ready.
   - Start transmitting.
   - Did a collision occur? If so, go to the collision detected procedure.
   - Reset the retransmission counters and end the frame transmission.

2. Collision detected procedure
   - Continue transmission (with a jam signal instead of frame header/data/CRC) until the minimum
     packet time is reached, to ensure that all receivers detect the collision.
   - Increment the retransmission counter.
   - Was the maximum number of transmission attempts reached? If so, abort the transmission.
   - If not, calculate and wait a random back off period based on the number of collisions, and
     re-enter the main procedure at stage 1.

What this means is that there will be collisions when two devices send packets at the same time. In
this respect it looks like the problems the SCSI protocol had when multiple devices started
transmitting over the SCSI bus. With SCSI we used the SCSI ID for priority. Here, with CSMA/CD, each
device that detects a collision uses a randomly calculated number as the waiting period (or back off
time) before trying again. So eventually an Ethernet based system will allow, with possibly a few
collisions included, more than one device to send and receive Ethernet packets over a shared medium.
CSMA / CD: Principle

CSMA/CD: Principle

(Slide 15 diagram: sending device A on a shared network segment with devices B, C, E and F; the
transmitted signal propagates to all devices on the segment.)
When a device sends a packet, it is dropped onto the network. At every intersection the signal
propagates in all possible directions, so a packet sent from device A will be "delivered" to the
network interface of all other devices. Inside the packet is the information about who sent the packet
and to which device the packet should go. This addressing information is present in each packet and is
part of the overhead needed to transmit packets.


At the start of a transmission two situations can exist:

1. The network is already moving packets from another device. At that point the device that wants to
   send must wait. It uses a mechanism called carrier sense to find out that the network is already
   busy.
2. The network is free. Now the first packets can be sent.

However, it is impossible to have two devices sending packets at the same time, as the signals would
collide on the network, which means that the signal would be distorted. So if two devices have checked
the status of the network and both found that nobody is using the network, they both think they can go
on to the transmission stage.

Therefore we must first investigate how to detect these collisions. The next question is then how to
still allow multiple devices to communicate across the network.
CSMA/CD: Collision Detection

CSMA/CD: Collision Detection

(Slide 16 diagram: two devices transmitting at the same time; their signals collide on the shared
segment and both transmissions are distorted.)
As soon as a device notices a collision it stops transmitting any further. A collision is detected
because the original signal that was sent is "damaged" by the collision, and this can be detected by
each device. Each device involved in the collision then uses a random number generator to calculate a
waiting period, so typically each device gets a different waiting time. After that waiting period a
device starts sending packets again to find out if the network is free. The device with the shortest
waiting time therefore wins access to the network.
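The random back off behaviour can be illustrated with a small simulation. The sketch below follows the truncated binary exponential backoff idea used by Ethernet (waiting a random number of slot times that grows with the number of collisions); the slot time value and the two-station scenario are simplifying assumptions for illustration only:

```python
import random

SLOT_TIME_US = 51.2   # assumed slot time of classic 10 Mb/s Ethernet

def backoff_us(collision_count):
    # After the n-th collision a station waits a random number of slot times
    # chosen from 0 .. 2**n - 1 (the exponent is capped at 10 in the standard).
    k = min(collision_count, 10)
    return random.randint(0, 2 ** k - 1) * SLOT_TIME_US

# Two stations keep colliding until their chosen backoff times differ:
collisions = 0
while True:
    collisions += 1
    a, b = backoff_us(collisions), backoff_us(collisions)
    print(f"collision {collisions}: A waits {a:.1f} us, B waits {b:.1f} us")
    if a != b:
        print("station", "A" if a < b else "B", "transmits first")
        break
```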


Ethernet Cable 10-BASE5

Ethernet Cable 10-BASE5

The original Ethernet was called 10-BASE5.
The cable was a type of coax cable with a length of up to 500 m.

A transceiver module was clamped onto the cable so that a host could be connected to the transceiver
module.
10-BASE5 was also known as Thick Ethernet.

The first implementation of Ethernet was called 10-BASE5, which was also known as thick Ethernet. The
cable was of the coax type, which means there is a central copper wire within a plastic core. Around
that a meshed shield is placed, which protects the inner copper wire from being influenced by external
distorting signals. Another plastic cover is placed over the mesh. A coax cable is in effect built as
a Faraday cage. The 10-BASE5 cable was around 1 cm thick, and a 500 meter long cable is therefore very
heavy. This created the nickname thick Ethernet.

To connect a device to the thick Ethernet cable, a transceiver module was clamped onto the cable.
Inside the module a screw was driven right through the outer mantle, the mesh and the plastic core so
that it touched the core wire.

Thick Ethernet was rather bulky and the cables were difficult to maneuver.


Ethernet Cable 10-BASE2

Ethernet Cable 10-BASE2

10-BASE2 is the successor of 10-BASE5.
The cable length was up to 100 m and the cable itself was much thinner!

T-shaped BNC connectors were used to make connections to hosts.

The number in front of -BASE indicates the transmission speed in Mb/s.

With the design of 10-BASE2 the cables became much thinner and easier to handle. The cable length was
decreased to 100 meters, as the changed physical dimensions meant that the shielding was less optimal.

The system of clamping connection modules onto the cable was also abandoned, as doing that precisely
was a difficult task with 10-BASE5. Every device was now connected to the cable using T-shaped joints,
so the cable ends themselves were also fitted with connectors. The connectors used were BNC connectors
(Bayonet Neill-Concelman).

Just as with SCSI buses, an Ethernet network has to be terminated. For that purpose a plug with a BNC
connection and a built-in resistor was connected at the end of the cable.


Ethernet Cables UTP & STP

Ethernet Cables UTP & STP

Nowadays the cables used to connect Ethernet based devices are based on 10BASE-T. The T means Twisted
Pair.

Two versions exist: Unshielded Twisted Pair and Shielded Twisted Pair.

The wire pairs in a twisted pair cable are intertwined, which results in distortions being
"compensated".

10BASE-T cables use an 8P8C connector, but we usually call it an RJ-45 connector.

A big improvement was the invention of 10BASE-T Ethernet cables. This is the type we still use today.
It is no longer a coax cable; instead another technique is used to eliminate the effect of external
signals. The method used is called twisted pair cabling, and the T in 10BASE-T indicates the twisted
pair technology.

Because of the twisting of the two wires that carry the signal, the effect of external signals is
compensated to a high degree. There is an even better version of this twisted pair cable: in a
Shielded Twisted Pair (STP) cable there is a very thin metal foil around every pair of twisted wires.
The original twisted pair cables that do not have this extra shielding are therefore referred to as
Unshielded Twisted Pair (UTP).

The connectors used are the familiar connectors we see in switches, servers and laptops, and we use
the name RJ-45 for them. However, the name RJ-45 (Registered Jack) is not the official name. That is
8P8C, which is short for 8 Position 8 Contact.


Ethernet Cable Wiring

Ethernet Cable Wiring

Ethernet cables are available as straight cables and crossover cables.

(Slide 20 diagram: the pin-to-pin wiring of a straight cable versus a crossover cable.)

To work properly, Ethernet networks need four separate wire pairs to send data across. For a twisted
pair cable that means eight copper wires per cable. Each of the four pairs has a color to identify it:
green, orange, blue and brown.

The second wire of each pair, the one it is twisted with, also has a specific color: green-white,
orange-white, blue-white and brown-white.

Depending on the usage of the cable we can identify a straight or a crossover cable. The above diagram
shows the pin number within an RJ-45 connector for each colored wire.

A crossover cable is typically used when two PCs or servers are directly interconnected with a cable
plugged straight into their RJ-45 network ports.

Straight cables are used to connect hosts or servers to switches. Today, using the wrong cable does
not really cause problems, as most switch ports are designed in such a way that both straight and
crossover cables can be used. The switch port auto-detects the cable type and adjusts internally to
make the correct connection.
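The difference between the two cable types comes down to how the transmit and receive pairs are mapped between the two connectors. The sketch below shows the commonly used mapping for 10/100 Mb/s Ethernet, where the crossover swaps pins 1/2 with pins 3/6; treat it as an illustration of the idea rather than a full wiring specification (Gigabit Ethernet uses all four pairs):

```python
STRAIGHT = {pin: pin for pin in range(1, 9)}   # every pin maps to the same pin

# Crossover for 10/100 Mb/s: transmit pair (1, 2) swaps with receive pair (3, 6).
CROSSOVER = dict(STRAIGHT)
CROSSOVER.update({1: 3, 2: 6, 3: 1, 6: 2})

def far_end_pin(cable, near_end_pin):
    return cable[near_end_pin]

print(far_end_pin(STRAIGHT, 1))    # 1
print(far_end_pin(CROSSOVER, 1))   # 3
```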


Ethernet Basics Frame size

Ethernet Basics Frame size

Ethernet sends so-called frames over the network.

Preamble   SFD      DMA       SMA       Ethertype   PAYLOAD           FCS
7 bytes    1 byte   6 bytes   6 bytes   2 bytes     46 - 1500 bytes   4 bytes

SFD = Start of Frame Delimiter.
DMA = Destination MAC Address.
SMA = Source MAC Address.
FCS = Frame Check Sequence.

Ethernet frames vary in size from 1500 bytes up to 9000 bytes (jumbo frames).
//

Ethernet was developed in the 1970’s at Xerox in the United States and made into the IEEE 802.3
:

standard in 1983. Ethernet became popular and was commercially used in the 1980’s. With Ethernet
tp

networks the actual information sent is a predefined set of bits and bytes. This is officially referred to
ht

as a datagram but when we talk about Ethernet we often use the term PACKET or FRAME to identify
the individual packets of information that get sent across the network.
s:
ce

Ethernet frames were designed to be around 1500 bytes in size. Inside of a frame we have a portion
ur

of user defined data (the data the user wants to send to another device) also called the payload.
so

However we need more information to be able to bring the frame to the correct destination. This extra
Re

information is the overhead involved with Ethernet (and any other networking protocol). Information
needed is: who is sending the frame, where is it going to, error correcting information, etc. This
ng

overhead is also called heading and trailing information as, seen in the above image, some of the
ni

extra information is send before the payload data (heading information) and some is send after the
ar

payload is sent (trailing information).


Le

To be more efficient (ration between payload and overhead) a new frame size was developed. In a so-
called JUMBO frame they have increased the frame size to be around 9000 bytes. The overhead is
re

still the same but now the payload is roughly 6 times bigger!
Mo

HCNA-storage V3 | OHC1109103 NAS Technology Page | 113
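The header layout and the payload/overhead ratio can be made concrete with a short sketch. It packs the on-wire header fields named above (destination MAC, source MAC, EtherType) and compares the efficiency of a standard frame with a jumbo frame; the MAC addresses are made-up example values:

```python
import struct

def build_header(dst_mac, src_mac, ethertype):
    # 6-byte destination MAC + 6-byte source MAC + 2-byte EtherType (big endian).
    return struct.pack("!6s6sH", dst_mac, src_mac, ethertype)

header = build_header(bytes.fromhex("aabbccddeeff"),   # example destination MAC
                      bytes.fromhex("112233445566"),   # example source MAC
                      0x0800)                          # EtherType for IPv4
print(len(header))                                     # 14 bytes of header

def payload_efficiency(payload_bytes, overhead_bytes=7 + 1 + 14 + 4):
    # Preamble + SFD + header + FCS surround the payload.
    return payload_bytes / (payload_bytes + overhead_bytes)

print(round(payload_efficiency(1500), 3))   # ~0.983 for a standard frame
print(round(payload_efficiency(9000), 3))   # ~0.997 for a jumbo frame
```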


Ethernet Networking Components

Ethernet Networking Components

Ethernet cards in hosts.

Switches to interconnect the hosts with the NAS device, using certified cables.

NAS server.
//

To make a NAS based ICT infrastructure we need three main components:


:
tp

1. Hosts\servers\workstations with network interfaces.


ht

2. Ethernet switches.
3. NAS capable devices or NAS servers.
s:
ce

Important in the setup of a NAS solution is the physical distance between the various components and
ur

the cable types used to connect them.


so

The cables can be both copper-based as well as fiber optic-based although in practice the copper-
Re

based version is used predominantly. Then the quality aspect of the cable is the next thing to watch.
Copper-based cables used for Ethernet networks are classified with the letters CAT followed by a
ng

number. Generally a cable with CAT 5 is meant to be used with 100 Mb/s transmissions only. The
ni

improved CAT5e is also supported for 1000 Mb/s (also referred to as Gigabit) transmissions. However
ar

it would be better in the last situation to use CAT 6 qualified cables as they were specifically designed
Le

for 1000 Mb/s transmissions.


re
Mo

Page | 114 HCNA-storage V3 | OHC1109103 NAS Technology


Questions

Questions

1. What is NAS?
2. What is a share?
3. What is a collision?
4. What are scenarios where NFS and CIFS can be applied?
5. What does STP mean?

Answers

1. Network Attached Storage, where all devices (servers, storage devices, backup devices) are
   interconnected with Ethernet based switches and cables.
2. A share is storage capacity allocated on a NAS server. Shares are accessible to one or more hosts
   via the network.
3. A collision occurs when multiple servers try to access the network at the same time. At that point
   the signals broadcast by the servers collide and become distorted, leading to failed communication.
4. NFS shares are set up so that Linux based servers can use shares on the NAS server. CIFS is the
   method used by Windows based servers to access shares on NAS servers.
5. STP is short for Shielded Twisted Pair. This is the most common cable type used in Ethernet
   networks. It provides good specifications and can be used in high speed configurations.


Exam Preparation

Exam Preparation (1)

1. Which of the following are NAS components? (Select all that apply)
a. Storage.
b. Network.
c. Engine.
d. Server.

2. What best describes the characteristics of a NAS solution?
a. Centralized storage; operating system dependent; campus.
b. Shared folders; multiple operating systems; campus.
c. Centralized storage; multiple protocols; global.
d. Shared folders; single protocol; global.

Exam Preparation (2)

3. Statement 1: IEEE 802.3 is a collection of standards that describe many generations of Ethernet
   versions.
   Statement 2: CSMA/CD gives the IEEE standard no option to give a higher priority to a specific
   device on the network.

a. Statement 1 is true; Statement 2 is true.
b. Statement 1 is true; Statement 2 is false.
c. Statement 1 is false; Statement 2 is true.
d. Statement 1 is false; Statement 2 is false.

Answers:

1. A, B and D.
2. B.
3. A.


Summary

Summary

• NAS structure and implementation.
• NAS file sharing protocols, NFS and CIFS.
• Cabling and connectors.
• NAS limitations.
• Ethernet standards.

Network Attached Storage infrastructures are very useful where the distance between workstations,
switches and NAS servers is not too big.

When the distance increases to many kilometers, the limited length of each individual cable becomes a
performance bottleneck: the signal has to be retransmitted, and that takes time. Although it is
possible to use optical cable links between two components in a NAS infrastructure, we see that copper
is mostly used. That is why the scale of a NAS solution is often limited to campus style environments
where the distances are a couple of hundred meters.


Thank you

www.huawei.com


OHC1109104

SAN Technology

www.huawei.com
Introduction

This is the chapter that discusses the third of the ICT infrastructure types that can be used. It is
this Storage Area Network solution, or SAN for short, that today is used in almost all companies. It
has many advantages over the previous two, DAS and NAS. We will also use this chapter to introduce the
Fibre Channel protocol as well as the fiber optic technology that is used a lot in SAN solutions.

Objectives

After this module you will be able to:
• Identify the main components of a SAN.
• Describe the concepts of a SAN.
• Explain how a SAN is designed.
• Explain what the multipathing problem is.
• Describe what a Fibre Channel frame looks like.
• Understand how optical fibers work.
• Describe the role of zones in a Fibre Channel network.
• Identify the topologies used in a Fibre Channel network.
• Describe the differences between FC SAN and IP SAN.
• Identify the networking components in a host.

Module Contents

1. The ideal ICT infrastructure.
2. Concepts of SAN design.
3. The multipathing problem.
4. The Fibre Channel protocol and FC frames.
5. Components of a SAN.
   □ Server.
   □ Switch.
   □ Storage device.
   □ Host Bus Adapter.
   □ Transceiver.


6. Principles of fiber optics.
7. FC switches.
   □ Concept of World Wide Name.
   □ FC port types.
   □ Zoning concepts.
   □ Configuration.
8. Concepts of FC fabrics.
9. Concepts of IP SANs.
10. Network interfaces in hosts in IP SANs.
   □ Network Interface Card.
   □ TOE card.
   □ iSCSI HBA.
11. Converging networks.


The Ideal ICT Infrastructure

The ideal ICT infrastructure

• Is scalable in capacity.
• Can be stretched across the entire world.
• Is very reliable.
• Offers the highest possible transportation speeds.
• Is easy to manage and flexible.
• Is heterogeneous.

In organizations like Huawei, with more than 100,000 employees worldwide, the design of the ICT
infrastructure becomes very complex. People working from the Netherlands office of Huawei
should be able to access relevant data that is stored on a storage device in Huawei's head office
in Shenzhen.

For this infrastructure to work well, a design has to be made that will last for many years to come.
When such a huge design is needed, there is also a list of requirements for the design:

1. The design must be made in such a way that it can be expanded indefinitely. There must
always be the possibility to grow the number of devices.

2. The design must allow the distance between the individual components to be unlimited.
In practice that means 20,000 kilometers, which allows a device to be on the other side of
the globe.

3. The design must be reliable and resilient. This means that the design architect must
realize that sometimes hardware fails or people make mistakes. Still, when that happens it
should not lead to serious problems for the organization.

4. The components connected to each other must be able to communicate at the highest
possible speeds available.


5. Even when the design becomes very complex it should be possible to do maintenance and
monitoring with a limited amount of ICT staff. You can imagine that an ICT department
should not need fifty people to manage fifty or even a hundred devices. Cost
effectiveness of management is also a big design requirement.

6. The design should be flexible. That means that it must be possible to change, replace or
add components to the infrastructure without any limits. So if technology improves over
the years, the new technology can be integrated in the current infrastructure.

7. By design an ICT infrastructure should be heterogeneous. Heterogeneous means that
devices from different vendors should work together just as well as devices that all
come from one vendor. This is at this point not often the case, but that has a reason that is
mostly non-technical. Huawei devices like servers, switches and storage devices will work
well with most other vendors' equipment. However, most customers of Huawei will buy
only Huawei's products. The reason is often that customers want to have a service
contract with one supplier of the hardware. That prevents them from having to contact
multiple support teams of multiple vendors in case of a technical problem. Practice has
shown that sometimes vendors will blame the other vendor when a problem occurs.

A Storage Area Network or SAN can deliver on all the points from the wish list we saw earlier.
In a SAN up to 16.77 million devices can be connected to each other. The distance between
components can indeed be 20,000 kilometers. The speed at which data can be transported has
improved a lot since the first SAN infrastructures. Speeds of 16 Gb/s or even 40 Gb/s are now
possible. With all these functionalities and the great number of components it is still relatively
easy to manage a SAN because of the many tools available for monitoring, managing and
reporting.
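
As a quick illustration of where the 16.77 million figure comes from (a one-line sketch; the link to a
24-bit address space is an assumption made here for illustration):

    # Illustration: 16.77 million matches the size of a 24-bit address space.
    print(2 ** 24)   # 16777216, about 16.77 million possible addresses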


Storage Area Networks

Storage Area Networks Concepts

SAN Components: Hosts; Storage Devices; Switches.

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 6

Components like storage arrays, backup units etc. are referred to as storage devices.

Currently there are no SANs used by companies that reach the physical limit of 16.77 million
components. However, we do see that SANs now span the entire globe, as a company's business
sometimes extends from China to America and from Europe to Africa. A vast number of people
then depend on the possibility to access data within a company wherever the employee might
be. Picking up a file from an office in Shenzhen that is stored on a server in Brazil should then be
possible.

Perhaps the most important factor in a SAN infrastructure is the reliability. A well designed
infrastructure can prevent the infrastructure from collapsing when a single component fails. A good
design is described as a design without a Single Point Of Failure (SPOF). That just means that
any component can fail but all the functionalities of the IT infrastructure are still there.

The first step in creating a SAN design is the choice of the components themselves. The second
step is to make the design reliable. When building a SAN for a big company (also referred to
as an Enterprise infrastructure) the quality of the individual components is very important. The
quality of equipment is often defined as:

1. For personal use at home.
2. For use in SOHO environments (Small Office Home Office).
3. Enterprise class equipment.


It is obvious that in an enterprise SAN the components should be enterprise class components.
An enterprise component is also defined as a device with 5x9 reliability or a 99.999% uptime
classification.

Enterprise components have been tested for usage over many years in a 24 hours a day
production environment. Compare that with the laptops and printers we use at home that are only
designed to be used a couple of hours a day.

A rating of 99.999% means that statistically a component may be down for only about five minutes
per year, so it should be up roughly 365 days, 23 hours and 55 minutes per year. But of course
most components will run for years without problems!
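
To make the availability percentages concrete, here is a small sketch (an illustration only; the
values are rounded):

    # Sketch: allowed downtime per year for a given availability percentage.
    MINUTES_PER_YEAR = 365 * 24 * 60        # 525,600 minutes in a non-leap year

    def downtime_minutes(availability_percent):
        """Maximum downtime in minutes per year for an availability percentage."""
        return MINUTES_PER_YEAR * (1 - availability_percent / 100)

    for level in (99.9, 99.99, 99.999, 99.99999):
        print(level, "% ->", round(downtime_minutes(level), 2), "minutes of downtime per year")

    # 99.999% (five nines) allows about 5.26 minutes of downtime per year;
    # 99.99999% (seven nines, as in the CERN example later in this chapter) allows only ~3 seconds.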

Storage Area Networks Cabling

Cables can be copper or optic, protocols can be FC, iSCSI or FCoE.

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 7


So all the components used are of the best possible quality. Huawei offers enterprise class
equipment for all components in a SAN infrastructure. The cables that are used to connect the
many components with each other in a SAN solution can be both copper-based as well as fiber
optic-based.


Storage Area Networks Components

Huawei’s products offer everything to build this ideal infrastructure.


Components we find in a SAN are:

• Servers/Hosts where applications (Database; Email; Graphical Design) run,
  hosted by operating systems (Windows; Linux; Solaris; AIX).
• Interconnect devices:
  switches; routers.
• Storage devices:
  disk arrays; backup devices (tape or disk based).

Of course we need cables to connect them all together.

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 8

However, in the design we must include the scenario where a component will fail mechanically
after all. The design should also include methods to make sure that human errors do not lead to
problems.

In a later section of this module the design of a SAN will be explained. Now it is important to look
at the details of how a SAN works.


ur
so
Re
ng
ni
ar
Le
re
Mo

HCNA-storage V3 | OHC1109104 SAN Technology Page | 127


Differences between DAS and SAN

Differences between DAS and SAN

Item                    DAS                                        SAN
Protocol.               SCSI protocol.                             Multiple protocols: FC, iSCSI, FCoE.
Application scenarios.  Small and medium-sized LANs that have      Mid-range and high-end storage
                        only a small number of servers and         environments such as key databases,
                        general storage capacity requirements.     centralized storage, mass storage,
                                                                   backup, and disaster recovery.
Advantages.             Easy deployment, small investment.         High availability, high performance,
                                                                   high scalability, powerful compatibility,
                                                                   centralized management.
Disadvantages.          Poor scalability, waste of resources,      Comparatively large investment.
                        management difficulties, performance
                        bottlenecks.

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 9

A SAN works much like a DAS when we look at the form in which the data is transported from one
component to the other. With both DAS as well as SAN the data is sent as SCSI blocks. Of
course there is a difference, because the cable limitations of DAS were in the range of 12 – 25
meters whereas a SAN can stretch over distances of hundreds or thousands of kilometers.

The solution used in SAN infrastructures is not to send the individual SCSI blocks over the
network but to put the SCSI blocks (referred to as the user data or payload data) inside a packet
or frame. It is the network that now is optimized to transport the packets across great distances.

Packets can be compared with envelopes that we use to send letters to someone. A letter (a
sheet of A4 paper) is the user data and the envelope is the packet. It would be virtually impossible
to send a letter to someone by simply throwing the sheet of paper out on the street hoping that the
wind will bring it to the addressee.

A better way is to put the letter inside an envelope and put on a postage stamp. Of course you
will have to write the correct address information and drop the letter in a postbox. Once that is
done the national postal service will take care that the letter is picked up from the postbox and
delivered to the address of the recipient.

Of course there are other ways to bring the letter to the home of the addressee. One of the
alternatives would be a specialized delivery service like UPS or FedEx. They have their own
system where you would put the letter inside a special envelope again. It is now the transport
system of the delivery service that brings the envelope to the recipient.

To send SCSI blocks across a long SAN connection, multiple methods can be used. These
methods are referred to as protocols. Each protocol has a distinct way of describing the way the
SCSI blocks are handled for transport.

Three protocols are used with SAN infrastructures:

1. FC protocol (Fibre Channel)
2. iSCSI protocol (Internet SCSI)
3. FCoE protocol (Fibre Channel over Ethernet)

The first two of these protocols (FC and iSCSI) are mostly used in modern SANs, while FCoE is
an upcoming technology.


SAN Storage Applications



SAN Storage Applications

• Centralized deployment of storage devices enables application servers to access and share
  data in a cost-effective manner.
• Storage resources are divided into blocks that are mapped to application servers to achieve
  storage resource sharing.
• Data backup uses a SAN independent from the service network, making backup possible for
  data across heterogeneous servers and of diversified forms.
• SANs employ multiple mechanisms for automatic data backup, allowing data to be
  immediately recovered after occurrence of a disaster.

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 10

Before we look at the various protocols used with SANs, we will look at application scenarios for
SANs. As the total cost of a SAN solution is rather high (for the hardware as well as for the staff
that needs to be experienced in SAN technology) we see SANs in companies with 100+
employees. In this kind of company the data is typically:

• Generated by mission-critical database applications that have demanding requirements for
  response time, availability, and scalability.

• Backed up centrally and with high performance, data integrity, and data reliability.

• Massive in number. Examples of organizations that create and store huge amounts of data
  are libraries, banks, and social media sites like YouTube and Facebook.

A very special example:

The CERN Research Institute in Geneva, Switzerland uses a 7 x 9 (99.99999%) classified Huawei
storage system to store all relevant data CERN collects from its experiments.


The design of the storage system had a number of demands that should be met:

1. It should be able to store the data very reliably, as the data cannot be generated a second
time.

2. The capacity that could be stored at the beginning had to be at least 50+ PB
(= 50,000,000 GB).

3. The system should be extendable with at least 20 PB per year.

For environments such as at CERN the best possible hardware is required. Still we have to
consider the risk of a hardware failure. Nothing will work forever, so how do we eliminate the
problem of a piece of hardware failing?

The answer is to create a clever design. The most important concept there is redundancy.
Redundancy is defined as:

The inclusion of extra components of a given type in a system (beyond those required by the
system to carry out its function) for the purpose of enabling continued operation in the event of a
component failure.

In easier terms: add extra hardware that can be used in case of a hardware failure. What that
means for a SAN design is shown in the next section.


Redundancy in hardware

Redundancy in hardware

(The slide's figure shows the most simple design: a host with one Network Interface Card
connected through one switch to a storage device with one controller module. It has a lot of
SPOFs:)

1. Network Interface Card.
2. Cable host - switch.
3. Switch.
4. Cable switch - storage.
5. Controller module of the storage device.

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 11

In the above example the goal is to connect a host via a switch to a storage device. In the
simplest solution we need two cables and one switch to make it work. The host itself is an
enterprise class device and it has dual power supplies built into the chassis. If one of them fails,
the other surviving power supply will keep the host powered on.

Although this will work, the design does not include enough reliability, as a single cable breaking
would disrupt the data traffic between host and storage device.

Any component that fails, however small or cheap it is, and that disrupts the working of the total
system is called a Single Point Of Failure or SPOF.

A good design has no single points of failure. So a much improved design would be the next one.


Redundancy in hardware

(The slide's figure shows the improved design: a host with redundant NICs, connected by separate
cables to two switches, which are in turn connected to a storage device with two controllers.)

Component                            SPOF?  Yes / No
NICs                                 No
Cables between host and switch       No
Switches                             No
Cables between switch and storage    No
Controllers                          No

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 12

In this design there is almost complete redundancy in hardware, because almost all hardware can
fail (a single component at a time, however!) and still there would be an alternative route from host
to the storage device.

There are two more SPOFs left that we must identify:

1. What if the Operating System or the Application running on the host crashes?
2. What if we store all our vital information on a physical hard disk and that hard disk fails?

For both problems there are, of course, solutions available.

• There are a few methods to be able to survive a crash of a complete host or an Operating
  System failure. We often refer to an operating system crash as a Blue Screen Of Death.
  This is because most operating systems in those situations show a screen with a blue
  background that sometimes gives troubleshooting information about the system crash.
  The most well-known solution is a so-called cluster. With intelligent cluster software we
  can arrange for an application to be shared between multiple systems or nodes. Nodes
  communicate with each other and check their neighbor's health continuously. As soon as
  a host goes down, the other nodes notice this and automatically take over the role of the
  crashed system.

• The simplest solution to prevent the loss of data on a failed disk is not to store the data on
  a single disk but to spread the data across multiple disks, combined with methods to
  protect the data. Using a clever method it is possible for the remaining disks to recalculate
  all data from a failed disk. Optionally, systems will automatically recalculate the data and
  store it on a spare disk which is already inserted in the system.

  The technology where we intelligently distribute the data across multiple disk drives and
  have the opportunity to recalculate failed disks is called RAID, which is short for
  Redundant Array of Independent Disks.

RAID will be explained in much detail in later modules.

Multipathing problem

Multipathing problem

From the host perspective there are multiple paths (1, 2, 3, 4) that lead from the host to the
storage device where the 100 GB volume "lives". The redundant paths provide reliability, but for
the host operating system it leads to confusion, called the multipathing problem.

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 13


Now that we know what has to be done so the host can use the volume, we encounter a problem.
The host discovers multiple routes through the network towards the storage device that holds the
volume. Of course the redundant cables are there by design, but it is confusing for many operating
systems, because each of these paths appears to the operating system as an independent route,
to a total of up to four volumes!


This confusing situation has been given a name: the multipathing problem.

Multipathing problem

• Operating systems that have/had multipath problems: Windows, AIX,
  Solaris, HP-UX, Unix, Linux.
• Operating systems that handle multipathing well: Tru64, OpenVMS,
  VMware ESX.
• Vendors sometimes build their own specific software module to handle
  multipathing:
  □ Huawei UltraPath.
  □ Dell EqualLogic DSM.
  □ EMC PowerPath.

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 14

In a host running the latest server versions of Windows (2008 and 2012) we do not see these
problems as much as before. Older versions, like Windows 2000 and 2003, would show the newly
discovered disks multiple times. The next image shows four 100 GB volumes where in fact there
was just one volume created in the storage device.

so
Re

Example of a Windows host without multipathing software.


ng
ni
ar
Le
re
Mo

Each of the paths is represented with a separate volume.

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 15

Page | 134 HCNA-storage V3 | OHC1109104 SAN Technology


Without the intelligence of the multipathing software, every volume created within the storage
device will be represented as multiple independent volumes for the operating system. Out of the
detected volumes (in this case 4) the operating system will not detect which one of them is
actively moving data. If the path (or better: the cable) is broken, the operating system cannot use
any of the alternative paths to continue accessing the volume. So although there is redundant
hardware, it is not understood and used by the operating system.

Extra software installed on the host is needed to make clear to the operating system that it is a
single volume, but with multiple physical paths to it.

With the correct multipathing software installed a single volume will be displayed in disk
management.

Multipathing problem

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 16


The picture above now shows the same 100 GB volume, but now only once. At the same time the
multipathing software is intelligent enough to redirect the data over another cable in case the
current active path fails. The multipathing software is so fast in this redirection that the operating
system is not even aware that the data was redirected. The operating system has continuous
access to the data on the volume.
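
To make the idea concrete, here is a minimal conceptual sketch of path failover (an illustration
only, not the behaviour of any real multipath driver such as UltraPath; all names in it are invented):

    # Sketch: one logical volume presented over several physical paths (illustration only).
    class Path:
        def __init__(self, name):
            self.name = name
            self.healthy = True

    class MultipathVolume:
        """Presents one logical volume to the OS; I/O uses whichever path is healthy."""
        def __init__(self, lun_id, paths):
            self.lun_id = lun_id
            self.paths = paths

        def read(self, block):
            for path in self.paths:            # pick the first healthy path
                if path.healthy:
                    return "LUN %s: block %d read via %s" % (self.lun_id, block, path.name)
            raise IOError("all paths failed")

    volume = MultipathVolume("100GB-LUN", [Path("HBA0-SwitchA"), Path("HBA1-SwitchB")])
    print(volume.read(0))             # served via HBA0-SwitchA
    volume.paths[0].healthy = False   # simulate a broken cable
    print(volume.read(0))             # transparently served via HBA1-SwitchB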


New volumes in disk management

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 17

For any operating system the newly discovered storage capacity is what is called raw capacity.
The host will have to initialize the volume and then format it, creating a file system partition. Once
this is done, files can be stored on the volume.

This finishes the design of the SAN; we can now afford to lose a hardware component and still be
able to access our data. In the next section we will look at the protocols used to transport the data.
First we will discuss the protocol that has already been in use for a long time: the Fibre Channel
protocol.


Network Topology: Fibre Channel

In this section we will look at the Fibre Channel protocol which is one of the possible protocols
that can be used with SAN infrastructures.

Network topology: Fibre Channel

Point-to-point             Arbitrated loop            Fibre Channel switched fabric
Two devices only           Up to 127 devices          Up to 16 million devices
(Direct connection).       (Fibre Channel hub).       (Fibre Channel switches).
                                                      Most widely used topology.

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 18


ht
s:

It was already mentioned that the Fibre Channel protocol is used for a long time (starting in the
1990’s). In these days the SAN infrastructures were much smaller and there were a couple of
ce

ways to physically connect the components to form the SAN.


ur
so

1. Point-to-point
Re

Two devices are directly connected to each other. This is the simplest topology, with
limited connectivity.
ng
ni

2. Arbitrated loop
ar

All devices are connected in a loop or a ring. Adding or removing a device to or from the
Le

loop interrupts all activities on the loop. The failure of a device on the loop causes the
loop to break. By adding a device called a hub it was possible to connect multiple devices
re

to a logical loop and bypass faulty nodes so that the communication on the loop is not
Mo

interrupted.

Arbitrated loops were used in the first small scale SAN’s but nowadays it is no longer
used. Reason is the fact that an Arbitrated Loop can only hold a maximum of 127 devices.
Today SAN’s should be able to include many more devices that 127.

HCNA-storage V3 | OHC1109104 SAN Technology Page | 137


3. Switched network

This is the modern way FC SANs are built. It uses switches to connect hosts to
storage devices. Maybe it is better to state that modern SANs use at least two switches,
for redundancy reasons!

A switch in itself is an intelligent device that is not only used to interconnect one device with
another, but it can do much more. Switches, especially if there are many of them, can be
configured in such a way that data going from one device can find the optimal path
through the big network of interconnected switches.

Fibre Channel Protocol

Fibre Channel protocol

High-level protocols: SCSI-3, IP, ATM.

FC-4   Command set mapping (IPI-3, SCSI-3) and link encapsulation (FC-LE, FC-ATM).
FC-3   General equipment.
FC-2   Structure agreements (FC-AL, FC-AL2).
FC-1   Coding and decoding.
FC-0   Physical transformation: 8B/10B; copper and optical fiber.

FC-0 through FC-2 together form FC-PH (FC-PH, FC-PH2, FC-PH3).

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 19



Fibre Channel was developed in 1988. At that time, Fibre Channel was primarily concerned with
simplifying the connections and increasing distances, as opposed to increasing speeds. Later, it
was used to increase the transfer bandwidth of the disk data transfer protocol to provide fast,
efficient, and reliable data transfer. By the end of the 1990s, Fibre Channel SAN had been used
extensively. The most important layer of the Fibre Channel protocol is FC-2. FC-0 to FC-2 are
referred to as FC-PH, or the physical layer. Fibre Channel mainly uses FC-2 for data transfer. As
a result, Fibre Channel is also known as a "Layer 2 Protocol" or "Ethernet-like Protocol".


A frame is the data unit of Fibre Channel. Though Fibre Channel has several other layers, it uses
FC-2 in most cases. A Fibre Channel frame contains a maximum of 2148 bytes. The header of a
Fibre Channel frame is different from that of an Ethernet packet. Fibre Channel uses only one
frame format to accomplish various tasks on multiple layers. The functions of a frame determine
its format.

A Fibre Channel frame starts with the Start Of Frame (SOF) delimiter, which is followed by the
frame header. We will talk about the frame header later. Then comes the data, or Fibre Channel
content. Finally, there is the End Of Frame (EOF) delimiter.

Relationship between Fibre Channel and SCSI:

Fibre Channel is not a substitute for SCSI. Fibre Channel can transfer the instructions, data, and
status messages of SCSI by using frames. SCSI is an upper-layer protocol mapped onto FC-4 and
is a subset of Fibre Channel.

To transmit large amounts of data we still need a lot of frames to be sent. When a group of frames
is sent as a batch, we call this an exchange.


Fibre Channel Frames


tp
ht

Frame Frame Frame Frame Frame Frame


0 1 2 3 4 5
s:
ce

SEQUENCE X SEQUENCE y
ur

EXCHANGE X
so
Re

F0 = Start of exchance, start of sequence.


F1 – F3 = Middle of exchange, middle of sequence.
ng

F4 = Middle of exchange, end of sequence and added to that is a Transfer


Sequence Initiative.
ni

F5 = Middle of exchange, start of new sequence.


ar
Le

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 20


Inside an exchange there are sequences of frames that are sent. Each frame should contain
information about the exchange and the sequence the frame belongs to. Also the number of the
frame itself and its source and destination are listed. This is what a frame looks like.


Fibre Channel Frames

A Fibre Channel frame consists of multiples of Transmission Words (TW)
of 4 bytes each. The maximum number of TWs is 537, which makes
the maximum frame size 2148 bytes.

Idles   SOF xx   Header   Optional headers + PAYLOAD       CRC    EOF xx   Idles
6 TW    1 TW     6 TW     0 - 528 TW or 0 - 2112 bytes     1 TW   1 TW     6 TW

SOF, header, payload, CRC and EOF together add up to at most 537 TW or 2148 bytes.

A full payload of data is 2048 bytes, with 64 bytes reserved for optional
headers.

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 21
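
As a quick check of these numbers, the maximum frame size follows directly from the
transmission-word counts (a small sketch for illustration; idles are not counted as part of the
frame itself):

    # Sketch: Fibre Channel frame size from transmission words (TW), 1 TW = 4 bytes.
    TW_BYTES = 4
    frame_fields_tw = {
        "SOF": 1,            # Start Of Frame delimiter
        "header": 6,         # 24-byte frame header
        "payload_max": 528,  # up to 2112 bytes: 2048 bytes of data + 64 bytes of optional headers
        "CRC": 1,
        "EOF": 1,            # End Of Frame delimiter
    }
    total_tw = sum(frame_fields_tw.values())
    print(total_tw, "TW =", total_tw * TW_BYTES, "bytes")                       # 537 TW = 2148 bytes
    print("maximum payload:", frame_fields_tw["payload_max"] * TW_BYTES, "bytes")  # 2112 bytes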

The next picture shows the layout of the header in a Fibre Channel frame.

Fibre Channel Frames

The frame header is used by both the fabric (for routing) and the
receiving port (for re-assembling the messages).

Bit        32        24        16         8         0
           Byte 0    Byte 1    Byte 2     Byte 3
Word 0     R_CTL     DESTINATION_ID
     1     RSVD      SOURCE_ID
     2     TYPE      F_CTL
     3     SEQ_ID    DF_CTL    SEQ_CNT
     4     OX_ID               RX_ID
     5     PARAMETER

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 22


Note: this information is very detailed and is here for reference only.



Storage device with FC interface

Storage device with FC interface

The Fibre Channel interface modules on a storage device provide
service interfaces for connecting to application servers and receiving
data exchange requests from the application servers.

(The slide's picture shows such an interface module, with its module power indicator, module
handle, Fibre Channel host ports, and the link/speed indicator of an 8 Gbit/s Fibre Channel port.)

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 23

In many cases the Fibre Channel frames are transported via fiber optic cables. That means that a
light pulse is used to indicate a logical one signal. By switching the light on and off we can
indicate one and zero signals. All devices involved must therefore have the appropriate
equipment to send and receive the optical signals.

The Huawei storage devices for that reason have interface modules or I/O cards. Hosts will
typically have a dedicated card installed that allows fiber optic connections. Of course the
switches in the middle must be equipped with optic modules too.

The special cards inserted in hosts are so-called Host Bus Adapters (HBAs). Essentially a Fibre
Channel HBA converts the electrical signals into light pulses that will be emitted by a laser source
in the HBA. The light pulses that are received by the host will then be detected by photoelectric
sensors and converted into electrical signals that the computer can use again internally.


HBA

HBA: Various HBAs

HBA is short for host bus adapter, which is
the I/O adapter that connects the host I/O
bus to the computer memory system.

Categories:
Fibre Channel HBA, SCSI HBA, SAS HBA,
iSCSI HBA, and so on.

Function:
Enables bidirectional or serial data
communication between servers and
storage devices through hubs, switches, or
point-to-point connections.

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 26


The actual component with the light source and the photoelectric sensor is a module referred to
as a transceiver. A transceiver is a module in itself that is inserted in a slot called an SFP port,
where SFP is short for Small Form-factor Pluggable.

Transceiver

Transmitter + Receiver = Transceiver.

• Contains a laser or a LED to create the light pulses.
• Contains an optical sensor that can detect light.
• Transceivers are present in storage devices, switches
  and server HBAs.
• Can individually be removed/replaced.

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 25


Transceivers are available for different transmission speeds, for the different distances the signal
has to travel, and with different versions of physical interfaces. The most common interface type
for HBAs now is the PCI-E slot, which is present in almost all enterprise class servers.

Connecting a host to a FC switch


Host Bus Adapter is put in a PCI slot.
A Fibre Channel transceiver is put in an SFP slot in the switch.

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 26


There are different vendors for HBAs, like Emulex, Brocade and Qlogic. They have HBA models
with different numbers of ports. In the above image a 2-port FC HBA is used as an example.

With the correct HBA installed and the appropriate cable type used, a signal can be transported
via an optical cable over a distance of 50 km.


Common optical connection medium

Media Type                         Transmitter                 Rate       Distance
9 µm single-mode optical fiber.    1550 nm long-wave laser.    1 Gbit/s   2 m to 50 km
                                                               2 Gbit/s   2 m to 50 km
                                   1300 nm long-wave laser.    1 Gbit/s   2 m to 10 km
                                                               2 Gbit/s   2 m to 2 km
                                                               4 Gbit/s   2 m to 2 km
50 µm multi-mode optical fiber.    850 nm short-wave laser.    1 Gbit/s   0.5 m to 500 m
                                                               2 Gbit/s   0.5 m to 300 m
                                                               4 Gbit/s   0.5 m to 170 m
62.5 µm multi-mode optical fiber.  850 nm short-wave laser.    1 Gbit/s   0.5 m to 300 m
                                                               2 Gbit/s   0.5 m to 150 m
                                                               4 Gbit/s   0.5 m to 70 m

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 27


With single-mode cables it is much more difficult to get the light inside of the cable because of the
small diameter of the cable. That is why the light source should be tightly bundled. This means
that the best light source for these situations is a laser source.

Multimode cables are 5 – 7 times the diameter and the demands for the light source are less strict.
That is why in some lower cost solutions the light source is a LED (Light Emitting Diode). Those
are much cheaper to produce but generate light in multiple colors (or better, a range of colors) and
LED light is not bundled the way laser light is.

Note: multi-mode cables are used mostly in datacenters as the distances there are limited to a
maximum of a couple of hundred meters. The multi-mode cables used are the ones with a core
diameter of 62.5 µm. In comparison: a human hair typically has a diameter of 75 µm.


Fiber optics

Fiber optics

Snell's law:  n_air · sin(α) = n1 · sin(β)

(In general, when light passes from a medium with refractive index n_a into a medium with
refractive index n_b:  n_a · sin(θ_a) = n_b · sin(θ_b).)

n = refractive index of the optical medium.
Note: n for vacuum is set to 1; n for air ≈ 1.

(The slide's figure shows the construction of the cable: coating, foam, cladding (n2) and core (n1),
with a light bundle entering the core from the air at an angle α and continuing at an angle β.)

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 28


There is a lot of physics needed to explain how it is that a light signal can be transported over
these distances. The most important physical law with fiber optics is Snell's law. That law states
that light moving from one medium to another will be refracted. In the above picture we see a light
bundle come in at an angle α and then hit the optical material of the cable. At the surface of the
cable there is refraction, and that results in the fact that the signal continues with an angle β.
Snell's law now teaches us what determines the change in the angle: it depends on a property of
the material called the refractive index.


Fiber optics

(The slide's figure shows a light bundle travelling inside the core (n1): where it hits the boundary
with the cladding (n2) at an angle of θ1 or less, it bounces back into the core at the same angle.)

If the light hits the surface at an angle <= θ1 then the light beam will
bounce off at the same angle.



When the angle is incorrect (here shown with θ1) the signal will not be refracted and "enter" the
cable, but it will reflect or bounce back from the surface. The angle of reflection is then again θ1,
which makes the effect look like the light bundle hits a mirror and reflects from it with the same
angle.

The pictures shown before show the physical construction of the fiber optic cable. The core of the
cable is made of a plastic-like (so not glass) material that carries light very well. The better a
material carries light, the lower its refractive index is. The refractive index for vacuum is set to be
one. Air has a refractive index of almost one. The materials used in optical cables have refractive
indexes that are somewhat higher, typically around 1.5.

Directly outside the core there is another layer of optical material with a slightly different refractive
index: the cladding. Then a layer of foam is used to protect the fragile optical parts. The actual
outer layer is a plastic sheath that is often orange or yellow.


FC optical cables

Macro bends

(The slide's figure shows a fiber with a macro bend: where the bend is too sharp, light escapes
through the cladding instead of being reflected back into the core.)

minimum radius 0.05 m (5 cm)

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 31


It is important to keep the light inside the core and have it bounce back against the surface
where the core and the cladding meet. Light that leaves the cladding (indicated with the red arrow
in the slide's figure) has hit the surface at an unfavorable angle. That part of the signal will then be
lost. That would mean that the signal is less bright, which in the end may result in a weak signal
that cannot be detected by the photoelectric sensors. All the theory above is used to make clear
that handling the cable is very important. An engineer should not bend the cable too much and
should also keep the ends of the cable and the transceivers dust free.
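
As an aside, Snell's law also gives the critical angle above which light is kept inside the core by
total internal reflection. The sketch below is an illustration only; the refractive index values are
invented example numbers, not specifications of any real cable:

    import math

    # Sketch: critical angle at the core/cladding boundary (example values only).
    n_core = 1.48      # hypothetical refractive index of the core
    n_cladding = 1.46  # hypothetical refractive index of the cladding (slightly lower)

    # At the critical angle the refracted ray runs along the boundary (90 degrees from the normal):
    # n_core * sin(theta_c) = n_cladding * sin(90 degrees)  ->  sin(theta_c) = n_cladding / n_core
    theta_c = math.degrees(math.asin(n_cladding / n_core))
    print("critical angle: %.1f degrees from the normal" % theta_c)   # about 80.6 degrees

    # Light that hits the boundary at more than this angle from the normal is totally
    # internally reflected and stays inside the core.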


FC optical cables

Possible problems resulting in power loss (attenuation):

• Macro bends: minimal radius 1½ inch. Even though bends are
  according to specs, light paths differ, leading to a
  distorted signal.
• Micro bends: pinching of cables leads to loss of signal.
• Scattering: impurities have a different refractive index.
  Light is scattered when it passes impurities.
• Absorption: light hits the cladding at an unfavorable angle and is
  absorbed in the cladding.

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 30


It is important to handle the cables in such a way that the optimal amount of light stays in the
cable, making the success rate of detecting the light pulses as high as possible. Fiber optic cables
should be laid out without sharp bends. Also, any dirt that collects on the optic material of the
cable or inside the transceivers impacts the amount of light transported.

FC multimode

Multimode fiber exists in:

Step-Index multi-mode.
• supports thousands of nodes.
• high dispersion.
• lowest bandwidth.

Graded-Index multi-mode.
• reduced dispersion.
• increases bandwidth.

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 32


The more light is trapped inside the cable, the higher the intensity of the light pulse will be at the
end of the cable. To improve the quality of the optical cable, manufacturers changed the way the
core itself is built. Using multiple layers with slightly different refractive indexes they arranged that
the light pulses are pushed towards the center of the cable. This type of cable is called a
step-index cable.

Nowadays almost all cables used are graded-index cables. In such cables the density of the
optical material is changed in such a way that the refractive index changes continuously from the
inside of the core towards the cladding. This is the optimal construction to keep the light directed
towards the inside of the core.

Fibre Channel switch

Fibre Channel switch

• Directly connected to a Fibre Channel network.
• Directly connected to an initiator and a target.
• Exclusive use of all optical bandwidths.
• Zoning.

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 33


Fibre Channel (FC) switches are considered to be the core of a SAN. FC switches connect hosts
to storage devices.

In order to disable unwanted traffic between certain fabric nodes in an FC SAN we define zones in
the Fibre Channel switches. A zone is similar to a VLAN with Ethernet switches. Devices in
different zones cannot communicate with each other.


Fibre Channel switch ports

Fibre Channel switch ports

(The slide's figure shows two Fibre Channel switches and a Fibre Channel hub: nodes connect
with their N_Ports to F_Ports on a switch, the two switches are linked to each other through
E_Ports, a G_Port can take either role, and FL_Ports connect to NL_Ports of nodes on a loop via
the hub.)

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 34

Fibre Channel switches house various ports. The ports provide different functions depending on
the types of devices connected to them.

The following types of ports are defined by Fibre Channel:

• F_Ports (also known as Fibre Channel network ports) are ports on the switch that connect to
  a node point-to-point (for example, an F_Port connects to an N_Port). In the case of the
  arbitrated loop topology, the node port is regarded as an NL_Port. Fibre Channel switches
  identify these nodes by the names of their N_Ports or NL_Ports.

• E_Ports (also called expansion ports) are the connections between two Fibre Channel switches.

• An FL_Port is a port on the switch that connects to an FC-AL loop (for example, to NL_Ports). A
  switch port on a Fibre Channel switch can be part of a loop and data can be transferred from
  the switch to the loop. The switch port working correctly in a loop is referred to as an FL_Port.

• G_Ports are generic ports, which can operate as F_Ports or E_Ports depending on the
  implementation mode. Thanks to this adaptability, G_Ports can deliver flexibility to Fibre
  Channel switches and cut down the administrative costs of each port on a multi-switch Fibre
  Channel SAN.

Currently, Fibre Channel switches can support a port rate of 1, 2, 4, 8 or 16 Gbit/s.


World Wide Name

WWNs of Fibre Channel HBAs

• WWNN (World Wide Node Name).
• WWPN (World Wide Port Name).

(The slide's figure shows an HBA with an input port and an output port, each with its own WWPN
but sharing one WWNN, linked to a Fibre Channel network that can be P2P, FC-AL or FC-SW.)

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 35

Because a SAN can have thousands of components, there must be a way to identify each one of
them with a unique code. Compare this with a home address that should be unique so only one
person will receive a letter with that address written on it.

For the Fibre Channel protocol an identifier called the World Wide Name or WWN is used. All
Fibre Channel compatible equipment has a unique WWN, down to the single interfaces of the I/O
modules in storage devices. For that reason different WWNs are defined:

1. World Wide Node Name (WWNN)

The globally unique node name. Each upper-layer node is assigned a unique 64-bit identifier.
All ports on an HBA share the same WWNN. A WWNN is allocated to a node (or terminal, for
example, a device) on a Fibre Channel network. The WWNN can be used by one or multiple
ports that have different WWPNs and belong to the same node.

2. World Wide Port Name (WWPN)

The globally unique port name. Each Fibre Channel port is assigned a unique 64-bit identifier
and has an exclusive WWPN. The application of WWPNs in a SAN is similar to that of an
Ethernet MAC address.

An example of a World Wide Name could be: 2000-C29C-34FA-BC0D

In the WWN each character is a so-called hexadecimal digit that represents 4 bits.
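
As a small illustration of that last remark, the example WWN from the text can be unpacked into
its 64 bits (a sketch for illustration only):

    # Sketch: a WWN consists of 16 hexadecimal digits, i.e. 16 x 4 bits = 64 bits.
    wwn = "2000-C29C-34FA-BC0D"          # the example WWN from the text

    hex_digits = wwn.replace("-", "")
    value = int(hex_digits, 16)          # the WWN interpreted as a 64-bit integer

    print(len(hex_digits), "hex digits =", len(hex_digits) * 4, "bits")   # 16 hex digits = 64 bits
    print("as an integer: 0x%016x" % value)                               # 0x2000c29c34fabc0d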


Fibre Channel zoning

Fibre Channel Zoning

(The slide's figure shows the devices HOST1, HOST2, HOST3, BACKUP1, STOR1, STOR2 and
STOR3 grouped into three overlapping zones: a RED ZONE, a BLUE ZONE and a GREEN ZONE.)

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 36

Mostly because of security reasons, the manager of the SAN wants to restrict access to
specific devices. This is done using the concept of zones. In a zone of a switch, the equipment
can only communicate with the other equipment in the same zone. In the above example the
green zone contains two storage devices (STOR1 and STOR2) and a host (HOST3). That
means that HOST3 can detect the devices STOR1 and STOR2 and can communicate with
them. Although the other devices are connected to the same switch, HOST3 will not be able to
communicate with the other hosts or the backup device (BACKUP1). STOR3 is not in any
zone and therefore cannot be detected by any other device.

It is possible to add a device to multiple zones. In the picture STOR1 is in two zones (RED
and BLUE). STOR2 is also in two zones: the BLUE and the GREEN zone.

The picture above is a symbolic representation of the zones. In practice the devices are all
connected to a Fibre Channel switch. The zones can then be represented as shown in the
next image.
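
As a conceptual sketch of the zoning rule (an illustration only; the zone contents below loosely
follow the example above and are deliberately incomplete), zone membership can be modelled as
sets of device names, and two devices can communicate only if they share at least one zone:

    # Sketch of the zoning concept: communication requires a shared zone (illustration only).
    zones = {
        "GREEN": {"HOST3", "STOR1", "STOR2"},
        "BLUE":  {"STOR1", "STOR2"},       # other members omitted for brevity
        "RED":   {"STOR1"},                # other members omitted for brevity
    }

    def can_communicate(dev_a, dev_b):
        """True if both devices are members of at least one common zone."""
        return any(dev_a in members and dev_b in members for members in zones.values())

    print(can_communicate("HOST3", "STOR1"))    # True: both are in the GREEN zone
    print(can_communicate("HOST3", "BACKUP1"))  # False: no shared zone
    print(can_communicate("HOST3", "STOR3"))    # False: STOR3 is not listed in any zone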


Fibre Channel Zoning

(The slide's figure shows the same zones, RED, BLUE and GREEN, with the devices HOST1,
HOST2, HOST3, BACKUP1, STOR1, STOR2 and STOR3 now all connected to a single Fibre
Channel switch.)

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 37


Zones are defined within the switch using a graphical interface or in a command line mode. With
the command line mode (also called the CLI) special commands have to be typed to make all the
settings for the zones. Multiple zones can exist inside a switch. Zones can be active or inactive.
Which zones are active is defined in so-called configurations. Multiple configurations can exist in
a switch, but only one configuration can be active!

Two major methods can be used to define the zones in a switch:

1. Port zoning. For each of the zones the numbers of the ports the devices are connected to are
listed. This requires the switch administrator to know exactly where each cable connected to
the switch goes.

2. Soft zoning. This is also called World Wide Name zoning. In the switch the zones are
defined by listing all WWNs of the devices that should be in the same zone. As WWNs are
identifiers that are not easy to memorize, usually aliases are defined for each WWN.

The following pictures show sections of the graphical user interface Huawei uses inside its Fibre
Channel switch model SNS2124.


Zone Basic configuration

1. Configure.
2. Zone Admin.
3. Enter Zone Administration.

Note: Screenshots are for the FC switch model SNS2124.

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 38


The Configure menu has an item that will open the special Zone Administration window. There
the user can create aliases for the WWNs of the various devices connected to the switch. Note
that this is typically done when soft zoning is used.

New Alias

(The screenshot shows steps 1 to 3 of creating a new alias in the Zone Administration window.)

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 39



New Zone

(The screenshot shows steps 1 to 3 of creating a new zone.)

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 40


Once the aliases are defined, the next step is to create the individual zones. Step 1 is to give the
zone a symbolic name and then add aliases (or port numbers) to it.

After the creation of all required zones, the configuration(s) must be defined. Again this starts with
a symbolic name for the configuration. Then the zones that should be active when the
configuration is enabled are added to the configuration.

Creating and enabling a Zone Config

(The screenshot shows steps 1 to 4 of creating a zone configuration and enabling it.)

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 41


Inside a switch there can be a number of zones and a number of configurations. Only one
configuration can be active at any time. This is the running configuration. Every time a change is
made to the zones or the configurations, the changes will be applied in the running configuration.

However: it is important to save the configuration! When a switch reboots or gets powered off and
powered on, it will not use the running configuration. A switch always starts with the startup
configuration, and that is the last saved version of the configuration.

A Fibre Channel SAN typically has at least two Fibre Channel switches. The reason is not only
redundancy but also that the design of Fibre Channel SANs demands it. An FC SAN must consist
of two separate networks called fabrics.

Fibre Channel fabrics

Fibre Channel fabrics

Fabric:
• Separate network within an FC SAN.
• Can consist of multiple switches.

(The slide's figure shows two separate fabrics, Fabric A and Fabric B, each containing its own FC
switches.)

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 42


Depending on the size of the required SAN infrastructure, the choice of FC switch models ranges
between entry level FC switches and high end switches called core switches.

The difference between them is mostly based on the number of physical ports that are present in
the switch. For an entry level switch this could be 24 ports, whereas core switches can have
hundreds of ports.


When a switch does not have enough ports, one option could be to replace it with a bigger one.
But there is an alternative. Two switches that are connected together using a Fibre Channel link
between them will from that moment function as one switch! So one could keep the old switch and
buy a second switch. In both switches one port is used for the interconnecting cable. With this
method two 24-port switches combined with the interconnect cable act like a 46-port switch
(2 x 24 – 2).

The next picture shows a few possibilities for connecting switches together.

Fibre Channel fabrics


Ring network                Meshed network                Core – Edge Design

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 43




IP SAN

What is an IP SAN?

An IP SAN is an approach to using the Internet Protocol in a
storage area network, usually over Gigabit Ethernet.

The typical protocol that implements an IP SAN is Internet SCSI
(iSCSI), which defines the encapsulation mode of SCSI instruction
sets in IP transmission.

(The slide's figure shows users A, B and C on a LAN, two servers with HBAs, and two storage
devices, all connected through an Ethernet switch to a TCP/IP network.)

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 45

The title of this section is IP SAN, and that may be confusing as the next topic will be the iSCSI
protocol. However, this is correct, because the iSCSI protocol is one of the options we have to
move the SCSI blocks across an IP-based (maybe we should say Ethernet-based) network. The
other options are FCIP and iFCP, but they are not used nearly as much as iSCSI. So iSCSI will be
the protocol we focus on next.

An iSCSI SAN puts the SCSI blocks in Ethernet packets and sends them over the network.

iSCSI was initiated by Cisco and IBM and then advocated by Adaptec, Cisco, HP, IBM,
Quantum, and other companies. iSCSI offers a method of transferring data through TCP and
saving it on SCSI devices. The iSCSI standard was drafted in 2001 and submitted to the IETF in
2002 after numerous discussions and modifications. In February 2003, the iSCSI standard was
officially released. The iSCSI technology is developed based on traditional technologies and
inherits their advantages. On the one hand, we have SCSI technology, which is a storage standard
widely applied by storage devices including disks and tapes. It has been developing at a rapid
pace since 1986. On the other, we have TCP/IP, which is the most universal network protocol with
an advanced IP network infrastructure. These two provide a solid foundation for iSCSI
development.


Advantages of IP SANs

Advantages of IP SANs

Standard access           IP SANs do not need dedicated HBAs or FC switches but
                          common NICs and switches for connecting storage devices
                          to servers can be used.

Long transmission         IP SANs are available wherever IP networks exist. In fact,
distance                  IP networks are now the most widely used networks in the
                          world.

Enhanced                  Networking experience is generally already present in
maintainability           many ICT departments. FC switch knowledge is not.

Scalable bandwidth        With the development of 40 Gbit/s Ethernet, IP SANs
                          will soon be faster than the 16 Gb/s of Fibre Channel.

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 46

1. The minimal hardware configuration needed is widely available, which makes IP SANs
cheaper to implement than FC SANs. Most hosts already have suitable network interfaces
and switches are often also suitable (but not ideal) for iSCSI traffic. High performance IP
SANs, however, are usually equipped with special iSCSI HBAs and high end switches.

2. Setting up an IP SAN is easy because the IP infrastructure already spans the entire
globe. The Ethernet cables that are used to "run" the internet are considered to form the
biggest network in the world.

3. To manage an IP SAN the knowledge required is not much more than what most IT
employees already have. Basic Ethernet networking skills are required plus some iSCSI
specific knowledge.

Fibre Channel technology is new to most organizations and that requires a lot of training to
bring every SAN administrator to the right knowledge level.

4. The development of Ethernet is a continuous process and at this point 10 Gbit/s is widely
available. Also the development of 40 Gbit/s and even 1 Tbit/s is well on the way. Fibre
Channel was upgraded from 8 to 16 Gbit/s just a few years ago.


Fibre Channel SAN vs. IP SAN

Indicator                  Fibre Channel SAN                               IP SAN
Transmission speed.        4 Gbit/s, 8 Gbit/s, 16 Gbit/s.                  1 Gbit/s, 10 Gbit/s, 40 Gbit/s.
Network architecture.      Dedicated Fibre Channel networks and HBAs.      Existing IP networks.
Transmission distance.     Limited by the maximum transmission             Unlimited theoretically.
                           distance of optical fibers.
Management and             Complicated technologies and management.        As simple as operating IP devices.
maintenance.
Compatibility.             Poor.                                           Compatible with all IP network
                                                                           devices.
Performance.               Very high transmission and read/write           1 Gbit/s (mainstream) and 10 Gbit/s.
                           performance.
Cost.                      High purchase cost (of Fibre Channel            Lower purchase and maintenance
                           switches, HBAs, Fibre Channel disk arrays,      costs and higher return on
                           and so on) and maintenance cost (of staff       investment (ROI) than Fibre
                           training, system configuration and              Channel SANs.
                           supervision, and so on).
Disaster recovery.         High hardware and software costs for            Local and remote DR available on
                           disaster recovery (DR).                         existing networks at a low cost.
Security.                  High.                                           Medium/Low.

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 48


Networking in IP SANs

Networking in IP SANs

Single switch                                  Dual switch

(The slide's figure shows, on the left, an application server and a storage device connected
through a single Ethernet switch and, on the right, an application server and a storage device
connected through two Ethernet switches that are joined by a Stack/ISL/Trunk link.)

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 49

The network of an IP SAN usually consists of multiple switches. That is because of the
redundancy in hardware or because of the number of switch ports required. But even with two or
more switches used there will be just one fabric. For IP SANs there is no official need for two
separate fabrics like with FC.

The picture above shows the single switch solution next to the dual switch solution. Both solutions
however consist of one fabric:

The dual switch networking mode features high scalability and allows multiple hosts to share
the storage resources offered by the same storage device. And even when a switch fails, the
storage resources are still available.

The way the individual switches are connected together to form that one fabric varies. Three
options are available in modern switches:

1. Use a cable to connect two ports on different switches together.

2. Many switches have dedicated ports, called uplink ports, just for connecting them to other
switches.

3. With midrange and high end switches there is the option to install a so-called stacking
module. Together with a special stacking cable two switches can be stacked together using
the stacking modules in them. Stacking allows for high performance interconnection of two or
more switches.

iSCSI connection modes


iSCSI connection modes

Three adapter types can be used with iSCSI communication:
NIC + initiator software, TOE NIC + initiator software, and iSCSI HBA.

SCSI USER DATA - 1 | iSCSI INFO - 2 | TCP INFO - 3 | IP INFO - 4

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 50
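
The last line of the slide shows the encapsulation order. As a conceptual sketch (an illustration
only, not a real protocol implementation), the nesting can be modelled like this:

    # Sketch of the iSCSI encapsulation order shown on the slide (conceptual only):
    # SCSI user data is wrapped in an iSCSI PDU, which is carried by TCP, which is carried by IP.
    def encapsulate(scsi_payload):
        return {
            "IP": {                              # 4: IP header information
                "TCP": {                         # 3: TCP header information
                    "iSCSI": {                   # 2: iSCSI header information
                        "SCSI": scsi_payload,    # 1: the original SCSI user data
                    }
                }
            }
        }

    print(encapsulate("SCSI READ of one block"))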

Page | 160 HCNA-storage V3 | OHC1109104 SAN Technology


iSCSI devices use IP ports as their host ports, through which iSCSI devices are connected to
Ethernet switches to form a TCP/IP-based SAN. Depending on the connection mode adopted by
hosts, there are three iSCSI connection modes:
 NIC + initiator software: The host uses standard NICs to connect to the network. The functions of the iSCSI and TCP/IP protocols are processed by the host CPU. This mode requires the lowest cost because it uses the universally integrated NICs of hosts. However, this mode consumes CPU resources for iSCSI and TCP/IP processing, deteriorating host performance.

 TOE NIC + initiator software: The host incorporates a TOE NIC. The functions of the iSCSI protocol are processed by the host CPU, but those of the TCP protocol are processed by the TOE NIC, reducing the workload of the host CPU.

 iSCSI HBA: The functions of the iSCSI and TCP/IP protocols are processed by the iSCSI HBA installed on the host. The host CPU has the least overhead.
NIC + initiator software

[Figure: the initiator software converts iSCSI packets into TCP/IP packets, which consumes host resources. The host NIC connects over an internal bus and a TCP/IP-based Ethernet connection to the IP SAN and the storage device.]
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 51
Host devices such as servers and workstations use standard NICs to connect to Ethernet switches. iSCSI storage devices also connect to the Ethernet switches or directly to the NICs of the hosts. The initiator software installed on hosts virtualizes NICs into iSCSI cards. The iSCSI cards are used to receive and transmit iSCSI data packets, implementing iSCSI and TCP/IP transmission between the hosts and iSCSI devices. This mode uses standard NICs and switches, eliminating the need for adding other adapters. Therefore, this mode is the most economical. However, this mode consumes host resources during iSCSI to TCP/IP packet conversion, increasing operating overhead and decreasing system performance. The NIC + initiator software mode is applicable to scenarios that require moderate I/O and bandwidth performance for data access.

TOE NIC + initiator software

[Figure: the initiator software implements the functions of the iSCSI layer, which consumes host resources. The TOE NIC implements TCP/IP encapsulation, which does not consume host resources. The host connects over an internal bus and a TCP/IP-based Ethernet connection to the IP SAN and the storage device.]
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 52
TOE NICs process the functions of the TCP/IP protocol while hosts process the functions of the iSCSI protocol. As a result, the data transfer rate is remarkably improved. Compared with the software mode, this mode greatly reduces host operating overhead and requires only a little additional network construction cost. It is a trade-off solution.
iSCSI HBA

[Figure: the iSCSI HBA converts iSCSI packets into TCP/IP packets, which does not consume host resources. The host connects over an internal bus and a TCP/IP-based Ethernet connection to the IP SAN and the storage device.]
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 53
An iSCSI HBA is installed on the host to implement efficient data exchange between the host and switch or between the host and storage device. The iSCSI and TCP/IP protocol functions are handled by the host HBA, consuming the least CPU resources. This mode delivers the best data transfer performance but requires the highest cost.

The iSCSI communication system inherits part of SCSI's features. iSCSI communication involves an initiator that sends I/O requests and a target that responds to the I/O requests and executes I/O operations. Acting as the primary device, the target controls the entire process after a connection is set up between an initiator and a target. Targets include iSCSI disk arrays and iSCSI tape libraries.

The iSCSI protocol defines a set of naming and addressing methods for the iSCSI initiator and target. All iSCSI nodes are identified by their iSCSI names. The naming method distinguishes iSCSI names from host names.

iSCSI uses iSCSI qualified names (IQNs) to identify initiators and targets. Addresses change with the relocation of initiator or target devices, but their names remain unchanged. An initiator delivers a request. After the target receives the request, it checks whether the iSCSI name contained in the request is consistent with the name bound to the target. If the iSCSI names are consistent, the connection is set up. Each iSCSI node has a unique IQN. One IQN is used when connecting one initiator to multiple targets; multiple IQNs are involved when connecting one target to multiple initiators.
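As an illustration of the naming format only (the date and naming authority below are invented for the example and do not refer to any specific device): an IQN is typically built as iqn.yyyy-mm.reversed-domain:identifier, for instance iqn.2015-01.com.example:server1 for an initiator and iqn.2015-01.com.example:array1.lun0 for a target.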
iSCSI encapsulation model

All SCSI commands are encapsulated into iSCSI PDUs. iSCSI uses the TCP protocol at the transport layer of the TCP/IP protocol stack to provide reliable transmission mechanisms for connections.

Ethernet frame layout: Ethernet header | IP header | TCP header | Data (iSCSI) | FCS

[Figure: the TCP header carries the source port, destination port, sequence number, acknowledgment number, header length (4 bits), reserved bits, flags (8 bits), window size, checksum, urgent pointer, and options and padding. The iSCSI PDU consists of a basic header segment (BHS), optional additional header segments (AHS), a header checksum, data segments, and a data checksum.]
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 54
All SCSI instructions are encapsulated into iSCSI Protocol Data Units, or PDUs. A PDU is the basic unit of information that is sent. The iSCSI protocol uses the TCP protocol at the transport layer, providing a reliable transmission mechanism for connections. After TCP segment headers and IP packet headers are added, the encapsulated SCSI instructions and data are transparent to network devices. As a result, network devices forward them as common IP packets.
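The nesting order of this encapsulation can be illustrated with a minimal sketch. This is not a real protocol implementation: the header contents are placeholders, and only the layering (SCSI data inside an iSCSI PDU, inside a TCP segment, inside an IP packet, inside an Ethernet frame) is shown.

    # Illustrative layering sketch only; real headers contain many fields.
    def encapsulate(scsi_payload: bytes) -> bytes:
        iscsi_pdu = b"BHS" + scsi_payload                 # basic header segment + data
        tcp_segment = b"TCPHDR" + iscsi_pdu               # TCP header (iSCSI commonly uses port 3260)
        ip_packet = b"IPHDR" + tcp_segment                # IP header
        ethernet_frame = b"ETHHDR" + ip_packet + b"FCS"   # Ethernet header + frame check sequence
        return ethernet_frame

    print(encapsulate(b"SCSI WRITE + user data"))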
One of the things that gave a lot of SAN administrators an excuse not to use iSCSI is the fact that Ethernet is not a lossless system (whereas Fibre Channel is lossless). With a lossless system we mean that each packet that is transmitted is guaranteed to arrive at the destination or target. For Ethernet that was not the case, and as iSCSI relies on Ethernet technology, it meant that data sent from an iSCSI initiator did not always reach the destination.

Why is that?

In the concept of Ethernet there are no limitations on the number of packets that may be transmitted, and there is no way of regulating the number of packets transmitted. When that number is so high that it reaches the maximum throughput of the physical network components, problems arise.

An unsuccessful transmission may lead to a new attempt to send the same packets again (and again). If the capacity of the network remains a bottleneck, then the delivery of packets cannot be guaranteed.

Over the last couple of years, improvements to the 10 Gbit/s Ethernet standard have made it possible for Ethernet to be a lossless protocol. The improvements are described in a number of IEEE amendments, but the general name for the group of additions that make Ethernet lossless is Data Center Bridging (DCB). DCB is only available at 10 Gbit/s speeds (and higher), so many traditional 1 Gbit/s iSCSI solutions are still not lossless.

The hardware for 10 Gbit/s has become cheaper over the last years, so iSCSI is now a true competitor for the traditional Fibre Channel protocol.


Huawei IP SAN storage applications
[Figure: Huawei Ethernet interface modules. Left: a 1 Gb ETH module with module power indicator, module handle, 1 Gb/s iSCSI ports, and the speed and link/active indicators of each 1 Gb/s iSCSI port. Right: a 10 Gb ETH module with module power indicator, module handle, 10 Gb/s TOE ports, and the link/speed indicator of each 10 Gb/s TOE port.]
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 55
To demonstrate that Huawei fully supports iSCSI in most of its storage devices, the above picture shows an OceanStor S5500 storage array with iSCSI modules.

A 1 Gb/s iSCSI interface module provides service ports to the storage system for receiving data read/write requests from application servers. Each 1 Gb/s iSCSI interface module houses four 1 Gb/s iSCSI ports to receive data exchange commands sent by application servers.

A 10 Gb/s TOE interface module provides service ports to the storage system for receiving data read/write requests from application servers. Each 10 Gb/s TOE interface module houses four 10 Gb/s TOE ports to receive data exchange commands sent by application servers.

In the above picture we see a Huawei storage array with two controllers, where each controller has two 10 Gb/s Ethernet I/O modules. Optionally, the configuration can be changed in such a way that the same S5500 storage array has both FC and 10 Gb/s I/O modules. This offers the possibility to mix the technologies.
Two examples

1. An infrastructure where the local datacenter needs to have high performance specifications, but there should also be a copy of all data in a datacenter on a second site 10 kilometers away.
   For optimal performance the local datacenter might be equipped with FC components. The data could then be copied to the remote site using cost-effective Ethernet-based networks.

2. The infrastructure demands that the data generated on one site (main datacenter) gets copied for security reasons to a second site thousands of kilometers away.
   Locally the iSCSI solution might be applied, and for the connection to the remote site a high-speed (but very expensive) Fibre Channel based link might be used.
Convergence of Fibre Channel and TCP/IP

Convergence of Fibre Channel and TCP/IP

Fibre Channel and TCP/IP can be converged in two ways:

1. Fibre Channel channels carried over a TCP/IP network.
   • FCIP.
   • iFCP.
   • FCoE.

2. TCP/IP data carried over Fibre Channel channels.
   • IPFC.

Ethernet technologies and Fibre Channel technologies are both developing fast. IP SANs and Fibre Channel SANs currently coexist and will continue to serve as complements to each other for the foreseeable future.
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 56

The term convergence is used here to indicate a system that uses both the FC and the iSCSI protocol. A couple of combinations are possible: put iSCSI packets inside an FC frame, or put FC packets inside an Ethernet frame.

Of the four methods (FCIP, iFCP, FCoE and IPFC), the one that is used most is FCoE, which stands for Fibre Channel over Ethernet. The FCoE standard is getting more popular because the technology allows both Fibre Channel and IP technology to be used at the same time. The fact that one switch (Ethernet) can now be used to transport both FC and IP information makes it a cost-effective solution.
FCoE protocol

The FCoE protocol is used to transmit Fibre Channel signals over a lossless enhanced Ethernet.

FCoE encapsulates Fibre Channel data frames into Ethernet packets and allows service traffic on a LAN and a SAN to be concurrently transmitted over the same physical interface.

[Figure: different service flows sharing one Ethernet data link layer frame, for example block storage (FCoE), Internet telephony (VoIP) and video streams (VoIP), alongside ordinary IP traffic.]
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 57
Fibre Channel over Ethernet (FCoE) provides services specified by Fibre Channel standards, including discovery, global naming, and zoning. These services run in the same way as the original Fibre Channel services, with low latency and high performance.

Note:
VoIP = Voice over IP. A method to transmit audio and/or video for digital telephony over an Ethernet network.
Questions

Questions

1. What five specifications identify a Storage Area Network?
2. What methods can be used to define zoning in an FC switch?
3. What is a transceiver?
4. What are the differences between an IP SAN and a Fibre Channel SAN?
5. What are the main components of an IP SAN?
6. How many connection modes does an IP SAN have? What are their characteristics?
7. What are the functions of the iSCSI initiator and target?
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 58
1. Scalable in number of components; scalable geographically; reliable; flexible; heterogeneous; easy to manage.

2. Port zoning, World Wide Name zoning and alias zoning.

3. A module in a switch, Host Bus Adapter or storage device that holds a light source and a photoelectric sensor. It is used to create an optical signal from an electrical signal and vice versa.

4. IP SANs use a single fabric; are Ethernet based; require little training to master; speeds of up to 40 Gbit/s; relatively cheap to implement. FC SANs use dual fabrics with dedicated networks; require training to master; speeds up to 16 Gbit/s; FC components are more expensive.

5. Hosts with Ethernet network interfaces; multiple Ethernet switches that are connected with each other; Ethernet-type CAT cable; storage devices with Ethernet interfaces.

6. Three connection modes:
   a. NIC + initiator software. The Network Interface Card is already present in most hosts. Software, running on the host CPU, is used to encapsulate the payload with iSCSI + TCP + IP information.
   b. TOE NIC + initiator software. The TCP/IP Offload Engine is a dedicated I/O card that performs the encapsulation of TCP + IP. The software in the host is still involved in iSCSI encapsulation.
   c. iSCSI Host Bus Adapter. A dedicated I/O card that performs all encapsulation tasks and forwards the relevant SCSI data to the host CPU.

7. The initiator is responsible for the selection of the destination device in an IP connection. The target is the device that controls the connection after it has been established.
Exam Preparation

Exam Preparation

1. Statement 1: In IP SANs two switches are used for redundancy and for creating two fabrics.
   Statement 2: A host can be part of multiple zones in an FC switch.
   a. Statement 1 is true; Statement 2 is true.
   b. Statement 1 is true; Statement 2 is false.
   c. Statement 1 is false; Statement 2 is true.
   d. Statement 1 is false; Statement 2 is false.

2. Which of the following characteristics are applicable to FC SANs? Select all that apply.
   a. Lossless protocol.
   b. Single fabric.
   c. IQN zoning.
   d. Up to 16.77 million devices.
   e. Speeds up to 10 Gb/s.
   f. Design should include SPOFs.
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 59
Exam Preparation

3. iSCSI Host Bus Adapters are used because they offload from the CPU of the host all the work needed to encapsulate iSCSI packets in Ethernet frames. True or false?

4. Statement 1: E_Ports are FC ports in a host that connect to a switch.
   Statement 2: Every interface in an FC switch has a unique World Wide Port Name assigned to it. The switch chassis itself has a unique World Wide Node Name.
   a. Statement 1 is true; Statement 2 is true.
   b. Statement 1 is true; Statement 2 is false.
   c. Statement 1 is false; Statement 2 is true.
   d. Statement 1 is false; Statement 2 is false.

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 60

Answers:
1. C, 2. A + D, 3. True, 4. C
Summary

Summary

• Essential parameters of a SAN:
  scalable in size and distance, reliable, flexible.
• Components and networking of an FC SAN:
  dual fabric, zoning, fiber optical cable, HBA/transceiver.
• Fibre Channel protocol, FC frame, port types (F, N, L, FL, E, G).
• Components and networking of an IP SAN:
  single fabric, NIC / TOE / iSCSI HBA.
• iSCSI frame.
• Convergence of Fibre Channel and TCP/IP:
  FCoE.
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 61
Thank you

www.huawei.com

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 62
OHC1109105

RAID Technology

www.huawei.com
Introduction

In this chapter the focus is on the data protection methods used in storage devices based on hard disks. As the data generated in an organization is important, data protection must be implemented in case the physical disk on which the data is stored fails.

Objectives

After this module you will be able to:

 Explain the most common RAID types.

 Understand what level of data protection is offered with the various RAID types.

 Understand the relation between the RAID levels and properties like performance, security and cost.
Module Contents

1. Traditional RAID.

2. Basic concepts and implementation modes of RAID.

3. RAID technology and application.

4. RAID data protection.

5. Relationship between RAID and LUNs.
Traditional RAID

In this module we will look at the data protection system called Redundant Array of Independent Disks (RAID). RAID has two different versions or generations. This module covers the traditional version of RAID, where RAID is based on protecting data that is disk based. In other words: if a disk fails, how can I make sure that the data on that disk is recovered?

The advanced RAID 2.0+ technology used in Huawei's enterprise class storage arrays is covered in module 9.
Basic concepts and implementation modes of RAID
Basic concepts and implementation modes of RAID

RAID: short for redundant array of independent disks, also referred to as disk array.

Implementation methods:
• Hardware RAID
• Software RAID
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 4
The first idea behind RAID was to combine multiple smaller disks to get a bigger capacity. Today the term RAID is used more in relation to data protection; in other words, RAID can be used to prevent data loss in case a physical device fails.

Over the years there have been a number of RAID types, but just a small number is still in use. In this module we will discuss the most commonly used RAID types. We will also look at factors other than data protection, because choosing a RAID type has consequences for the performance and/or the cost of the RAID solution.
In practice RAID can be implemented in two modes: hardware RAID and software RAID.

 Hardware RAID uses a dedicated RAID adapter, RAID controller or storage processor. The RAID controller has its own processor, I/O processing chip, and memory, improving resource utilization and data transfer speed. The RAID controller manages routes, the buffer, and data flow between hosts and the disk array.

 Software RAID does not have its own processor or I/O processing chip and is fully dependent on the host CPU. Therefore, low-speed CPUs can hardly meet the requirements for RAID implementation. Software RAID is not used much in enterprise solutions, as the performance of hardware RAID is typically better than the performance of software RAID.
.h
Data organization modes of RAID

Stripe unit or chunk: smallest amount of data written on a disk before selecting another disk.

Strip: logical grouping of a number of stripe units or chunks.

Stripe: strips with the same stripe number (i.e. D3, D4, D5) on multiple disks in a disk array.

Stripe depth or stripe width: the number of disks that form the stripe, or the total amount of space stored in a stripe.

[Figure: three disks, each holding data strips. Stripe 0 consists of D0, D1, D2; stripe 1 of D3, D4, D5; stripe 2 of D6, D7, D8. The stripe depth spans the three disks.]
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 5

 Stripe unit or chunk: the amount of data that is written in one instance before the next instance gets written on another disk.

 Strip: a number of stripe units that are logically grouped together.

 Stripe: all strips in a RAID set that are on the same stripe, i.e. have the same stripe number.

 Stripe depth or stripe width: the capacity of a stripe, or the number of disks that form the stripe.
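The mapping from a logical address to a disk and stripe follows directly from these definitions. Below is a minimal sketch of that mapping for a simple round-robin (RAID 0 style) layout; the chunk addressing and disk count are illustrative assumptions, not a vendor formula.

    # Map a logical chunk index to (disk, stripe) in a round-robin striped layout.
    def locate_chunk(chunk_index: int, disk_count: int):
        disk = chunk_index % disk_count      # which member disk holds the chunk
        stripe = chunk_index // disk_count   # which stripe (row) on that disk
        return disk, stripe

    # Example with 3 disks: chunks 0,1,2 form stripe 0; chunks 3,4,5 form stripe 1.
    for i in range(6):
        print(i, locate_chunk(i, 3))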
Parity mode of RAID

Parity mode of RAID

XOR or eXclusive OR is a logical function used in digital electronics and in computer science. The output is true if only one of the inputs is true. If both inputs are the same (true or false), then the output is false.

XOR: true whenever the inputs differ and false whenever the inputs are the same. The symbol for the XOR operation is ⊕.

0 ⊕ 0 = 0, 0 ⊕ 1 = 1, 1 ⊕ 0 = 1, 1 ⊕ 1 = 0

Disk 1   Disk 2   Parity disk
1        1        0
0        1        1
0        0        0

XOR redundancy backup
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 6
: //
There are two different ways RAID can be used to protect data. One way is to keep identical copies of the data on another disk. The second way is to use a concept called parity. The parity is extra information calculated from the actual user data. For the RAID types that use parity it means that extra disks are needed. Parity is calculated using the exclusive or (XOR, symbol ⊕) function. The output of an XOR system is shown in the following table.

Input A   Input B   A ⊕ B
0         0         0
1         0         1
0         1         1
1         1         0
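This XOR behaviour is all that is needed to rebuild a lost member: XORing the surviving data with the parity block reproduces the missing block. A minimal sketch (byte-wise XOR over equally sized chunks; illustrative only, not any vendor's implementation):

    # Compute a parity chunk over data chunks and use it to rebuild one missing chunk.
    def xor_chunks(chunks):
        parity = bytearray(len(chunks[0]))
        for chunk in chunks:
            for i, byte in enumerate(chunk):
                parity[i] ^= byte
        return bytes(parity)

    data = [b"\x01\x02\x03", b"\x10\x20\x30", b"\xaa\xbb\xcc"]  # three data "disks"
    parity = xor_chunks(data)                                   # parity "disk"

    # Simulate losing disk 1: XOR of the survivors and the parity restores it.
    recovered = xor_chunks([data[0], data[2], parity])
    assert recovered == data[1]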

RAID status

RAID status

[Figure: RAID group state machine. When a RAID group is created successfully, it is working correctly. A member disk going offline or failing moves the group to the degraded state. A successful reconstruction returns it to working correctly; more failed disks than hot spare disks moves it to the failed state.]

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 7
Provided there are multiple disks used together to form a RAID-protected group (sometimes called a RAID set), this group has a status:

1. Everything is working as planned. The status is referred to as NORMAL.

2. A hardware failure has occurred, but the system is able to present all the data. No recovery procedures have started (yet). The status is called DEGRADED.

3. After a hardware failure the recovery process has started, but it has not finished yet. The status is referred to as REBUILDING (or reconstructing).

4. After a hardware failure there are no recovery options available and the data cannot be presented in a correct way anymore. The status is called FAILED.

Whether or not a degraded RAID group can be reconstructed depends on the RAID type used, the number of hardware failures and the availability of recovery hardware.
RAID technology and application

Common RAID levels and classification criteria

Common RAID levels and classification criteria

RAID technology combines multiple independent physical disks into a logical disk in different modes. Corresponding to these modes, RAID levels are formed. This mechanism improves the read/write performance of disks while increasing data security.

Common RAID levels: RAID 0, RAID 1, RAID 3, RAID 5, RAID 6, RAID 10, RAID 50.
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 8

Advantages of RAID technology:

 Combines multiple disks into a logical disk to provide storage capacity as one entity.

 Divides data into data blocks and writes/reads data to/from multiple disks in parallel, improving disk access speed.

 Provides fault tolerance by offering mirroring or parity check.
Working principle of RAID 0

Working principle of RAID 0

[Figure: a logical disk holding D0 to D5 is striped over two physical disks. Disk 1 holds D0, D2, D4 and disk 2 holds D1, D3, D5; D0 and D1 form stripe 0, D2 and D3 form stripe 1, D4 and D5 form stripe 2. A striped disk array without error control.]
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 9
: //
RAID 0 (also referred to as striping) has the highest storage performance of all RAID levels. RAID 0 uses striping technology to distribute data among all disks in the RAID group.

A RAID 0 group contains at least two member disks. A RAID 0 group divides data into data blocks of sizes ranging from 512 bytes to megabytes (usually integral multiples of 512 bytes), and writes them onto different disks in parallel. For example: the first data block is written onto disk 1, and the second onto disk 2 of stripe 0. After a data block has been written onto the last disk of stripe 0, the next data block is written onto the next stripe (stripe 1) on disk 1. In this way, I/Os are load balanced across all disks in the RAID group.

The disk appears to offer a single big capacity and still has the benefit of being very fast. Before RAID 0 was used there was a technique similar to RAID 0 called JBOD. A JBOD (short for Just a Bunch Of Disks) is a group of disks combined to form a virtual bigger disk. The big difference with RAID 0 is that with a JBOD the blocks are not written to the disks at the same time. In a JBOD the first disk is used until it is full. Then the second disk is used. So the total available capacity is the sum of the capacities of the individual disks, but the performance is the performance of a single disk!
Data write of RAID 0

Data write of RAID 0

[Figure: data blocks D0 to D5 on the logical disk are written alternately to disk 1 (D0, D2, D4) and disk 2 (D1, D3, D5), filling stripe 0 first and then stripe 1 and stripe 2.]

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 10
RAID 0 uses striping technology to write data onto all disks. It divides the data into data blocks and evenly distributes them among all disks in the RAID group. Data is written onto the next stripe only when the data has been written onto all blocks in the previous stripe. In the figure, data blocks D0, D1, D2, D3, D4, and D5 are waiting to be written onto the disks in RAID 0. D0 will be written onto the block in the first stripe (stripe 0) on disk 1 and D1 onto the block in the first stripe on disk 2. Then, data will be written onto the blocks in the second stripe: D2 will be written onto the block in the second stripe (stripe 1) on disk 1, and D3 onto the block in stripe 1 on disk 2. The same method applies to D4 and D5, but now of course on stripe 2 across the two disks.

The write performance of a RAID 0 set is proportional to the number of disks.
Data read of RAID 0

Data read of RAID 0

[Figure: data blocks D0 to D5 are read in parallel from disk 1 (D0, D2, D4) and disk 2 (D1, D3, D5) and reassembled into the logical disk.]

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 11
When a RAID 0 group receives a data read request, it searches for the target data blocks on all disks and reads data across stripes. In the figure, we can see the entire read process.

A request to read data blocks D0, D1, D2, D3, D4, and D5 is received. D0 is read from disk 1, D1 from disk 2, and the other data blocks are read in the same way. After all data blocks are read from the disk array, they are integrated by the RAID controller and then sent to the host.

The read performance of a RAID 0 set is proportional to the number of disks.
Data loss of RAID 0

Data loss of RAID 0

Data on the disk array is lost if any of the disks in the disk array fails.

[Figure: three disks striped with D0 to D8; the failure of any one disk makes the striped data unusable.]

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 12
RAID 0 only organizes data in a certain way but does not provide data protection. If any of the disks in the RAID group becomes faulty, the entire RAID group fails. This is of course not a physical failure of the RAID group but a logical one. If files are stored on a RAID 0 based volume, the data blocks that form each file are stored on all disks of the RAID 0 set. If a single disk fails, the other disks still have their data blocks. The file itself is no longer complete because some of the blocks it uses are no longer available. So it is perhaps more accurate to say that the data is incomplete. For most files and file systems, however, we would not be able to access the files anymore. These files would most likely be reported as corrupt files.

In enterprise solutions the use of RAID 0 is very limited. The data is often so important that a form of data protection is needed. Of course there is always the necessity for physical backups, but these take time to make and it takes time for the data to be restored.

A use for RAID 0 would be where file access performance should be very high and at the same time the restore time, in case of a problem, is allowed to be long (text documents, public images, audio files that can easily be recreated or recovered).
Working principle of RAID 1

Working principle of RAID 1

[Figure: D0, D1 and D2 from the logical disk pass through a mirror; disk 1 and disk 2 each hold identical copies of D0, D1 and D2. A disk array with mirroring.]

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 13
RAID 1 (also referred to as mirroring) aims to build a RAID level with very high security. RAID 1 uses two identical disk systems and builds a mirror setup. Data is written onto one disk and a copy of the data is stored on the mirror disk. When the source (physical) disk fails, the mirror disk takes over services from the source disk, ensuring service continuity. The mirror disk acts as a backup and, as a result, the highest data reliability is offered.

A limitation is the fact that a RAID 1 set can only store data up to the capacity of a single disk. The other disk simply holds the copy of the data. For every gigabyte stored there are 2 gigabytes of hard disk space used. This so-called overhead is 100%.

The two disks in a RAID 1 set must be identical in size. If they are different in size, the available capacity is the capacity of the smaller of the two disks.
Data write of RAID 1

Data write of RAID 1

[Figure: D0, D1 and D2 from the logical disk are each written simultaneously to disk 1 and disk 2, so both disks hold D0, D1 and D2.]

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 14
Unlike RAID 0, which uses striping technology to write data onto all disks, RAID 1 simultaneously writes the same data onto each disk so that the data is identical on all member disks. In the figure, data blocks D0, D1, and D2 are waiting to be written onto the disks. D0 is simultaneously written onto the two disks (disks 1 and 2). Then the other data blocks are written onto the two disks in the same manner.

The write performance of a RAID 1 system is the performance of a single disk.
Data read of RAID 1

Data read of RAID 1

[Figure: D0, D1 and D2 can be read from either disk 1 or disk 2, since both disks hold identical copies.]

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 15
RAID 1 simultaneously reads data from the data and mirror disks, improving read performance. If one of the disks fails, data can be read from the other disk.

The read performance of a RAID 1 system is equal to the performance of both disks combined. In case the RAID set is degraded, the performance is halved.
Data recovery of RAID 1

Data recovery of RAID 1

[Figure: disk 1 is damaged and is being replaced/recovered, while reads and writes continue on disk 2, which holds the backup copy of D0, D1 and D2.]

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 16
Member disks of RAID 1 are mirrored and have the same content. When one of the disks becomes faulty, data can be recovered using the mirror disk. In the figure, disk 1 fails and the data on it is lost. We can replace disk 1 with a new one and replicate data from disk 2 to the new disk 1 to recover the lost data. In most storage solutions this rebuild process, after the faulty disk has been replaced, is automatic.

An important consideration is that the RAID 1 set is in a degraded state as long as the new disk has not been rebuilt completely. Especially these days, when the capacity of individual disks is very high, this rebuild time can be long. The table below shows some examples of rebuild times.

DISK SIZE    REBUILD TIME (HOURS)
72 GB        < 1 hr
146 GB       < 4 hrs
600 GB       < 8 hrs
1 TB         < 20 hrs
4 TB         < 48 hrs

Note: these rebuild times depend on the RAID controller type and the workload on the system!
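As a rough, purely illustrative sanity check (the rebuild rate is an assumption, not a Huawei figure): rebuild time ≈ disk capacity / effective rebuild rate. At an effective 60 MB/s, a 1 TB disk takes roughly 1,000,000 MB / 60 MB/s ≈ 16,700 s ≈ 4.6 hours on an otherwise idle system; with production I/O competing for the disks the effective rate drops sharply, which is how rebuild times approach the 20-hour figure in the table.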

Working principle of RAID 4

Working principle of RAID 4

[Figure: data blocks D0 to D8 are striped across disks 1 to 3; for each stripe a parity code (P1, P2, P3) is generated and stored on a dedicated parity disk. A striped disk array with parity codes.]

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 17
RAID 4, which is loosely based on RAID 0, is referred to as striping with dedicated parity. RAID 4 differs from RAID 3 in that it uses blocks instead of bits/bytes. In a RAID 4 set, a dedicated disk is used to store the parity of the data in the corresponding stripes on the other disks. If any incorrect data is detected or a disk becomes faulty, we can recover the data on the faulty disk using the parity check information.

RAID 4 is applicable to data-intensive or single-user environments that need to access long and continuous data blocks. RAID 4 distributes data write operations to multiple disks. However, RAID 4 needs to recalculate, and possibly rewrite, the information on the parity disk no matter which disk new data is written to. As a result, for applications that produce a large number of write operations, the parity disk will have a heavy workload. That may affect performance when one has to wait for the parity disk. Also, because it has a much higher workload, it is often the disk that fails first in a RAID 4 set. That is why the parity disk in RAID 4 is often called a hot spot.
Data write of RAID 4

Data write of RAID 4

[Figure: blocks A (A0, A1, A2), B (B0, B1, B2) and C (C0, C1, C2) from the logical disk are striped over disks 1 to 3, with parity blocks P1, P2 and P3 written to the dedicated parity disk.]

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 18
RAID 4 adopts single-disk fault tolerance and parallel data transfer. In other words, RAID 4 employs striping technology to divide data into blocks, implements the XOR algorithm on these blocks, and writes the parity data onto the last disk. One of the disks in the RAID group functions as the parity disk. When a disk becomes faulty, data is written onto the other disks that are not faulty and the parity check continues.

The performance of a RAID 4 set is not a fixed number. In principle RAID 4 is an N+1 data protection method. That means that when there are N disks with user data you want to protect, one extra disk is needed to store the parity information. In that situation new data blocks are written to the N disks simultaneously. After the parity information is calculated, it is written to the parity disk.

However, there is a situation that happens quite often: there is so little new data that it fits on one or two disks. Normally all N disks would cooperate in the striping process, but now just a few disks are involved. The problem is that we still have to read all disks (or rather, the data in the stripe on those disks) to be able to recalculate the new parity value. This means that writing small amounts of data does not benefit from having many disks in the RAID 4 set. This is known as the write penalty of RAID 4.

The write performance of a RAID 4 set depends on the amount of changed data, the number of disks, and the time needed to calculate and store the parity information.
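The read-modify-write behind this penalty can be made concrete. A common way (not specific to any vendor) to update parity for a small write uses the identity new parity = old parity ⊕ old data ⊕ new data, which still costs two reads and two writes per changed block. A minimal sketch:

    # Small-write parity update (read-modify-write). The XOR identity avoids reading
    # the whole stripe, but still costs 2 reads + 2 writes: the classic write penalty.
    def update_parity(old_parity: int, old_data: int, new_data: int) -> int:
        return old_parity ^ old_data ^ new_data

    # Example with single-byte "blocks": stripe D0=0x0F, D1=0x33, parity = D0 ^ D1.
    d0, d1 = 0x0F, 0x33
    parity = d0 ^ d1
    new_d1 = 0x55
    parity = update_parity(parity, d1, new_d1)
    assert parity == d0 ^ new_d1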
Data read of RAID 4

Data read of RAID 4

[Figure: blocks A0 to C2 are read in stripes from disks 1 to 3 and reassembled into the logical disk; the parity disk holds P1, P2 and P3.]

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 19
In RAID 4, data is read in stripes across the disks. The disk motor of each disk in a RAID group is controlled such that data blocks in the same stripe on all disks can be read at the same time. By doing so, each disk is fully utilized and read performance is boosted. RAID 4 uses the parallel data read (and write) mode.

The read performance of a RAID 4 set depends on the amount of data read and the number of disks in the set.
Data recovery in RAID 4

Data recovery of RAID 4

[Figure: one data disk fails; its blocks are recovered by XORing the corresponding blocks on the surviving disks with the parity blocks. Labels: disk failure, data recovery.]

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 20
For data recovery, RAID 4 implements XOR operations across all the disks, including the parity disk, to recover the lost data on the faulty disk.

As shown in the figure, when disk 2 fails, data blocks A1, B1, and C1 on disk 2 are lost. To recover these data blocks, we first recover A1, which can be obtained by applying XOR operations to A0, A2, and P1 on disk 1, disk 3 and the parity disk. B1 and C1 are recovered using the same method. In the end, all the lost data on disk 2 is recovered.

However, all parity check operations run on a single disk, causing heavy write pressure on the parity disk during data recovery and decreasing RAID group performance.
Working principle of RAID 5

Working principle of RAID 5

[Figure: D0 to D5 striped over three disks with distributed parity. Stripe 0: D0, D1, P0; stripe 1: D2, P1, D3; stripe 2: P2, D4, D5. An independent disk structure with distributed parity check codes.]

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 21
RAID 5 is the improved version of RAID 4. It also uses striping and it also calculates parity information. In RAID 4 the parity had to be written to (or read from) a dedicated disk. That led to the hot spot situation mentioned before and to an impact on performance. RAID 5 uses so-called distributed parity, which means that each disk is used to store user data and parity information. Writing new data then involves all disks for the user data and also involves all disks for storing the parity information. So there are no bottlenecks or hot spots.

In RAID 5, out of N disks in a RAID 5 group, the capacity of N-1 disks is available. As with other RAID systems, the disks in a RAID 5 set should be identical.

In both RAID 4 and RAID 5, if a disk fails, the RAID group transforms from its online state to the degraded state until the failed disk is rebuilt. However, if another disk in a degraded RAID group fails, all data in the RAID group will be lost.
Data write of RAID 5

Data write of RAID 5

[Figure: D0 to D5 from the logical disk are written in stripes over three disks, with the parity block of each stripe (P0, P1, P2) written to a different disk.]

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 22
In RAID 5, data is also written in stripes across the disks. Each disk in the RAID group stores both data blocks and parity information. After the data blocks are written onto a stripe, the parity information is written onto the corresponding parity disk. For each consecutive write to other stripes, the disk used to store the parity is a different one.

Just as with RAID 4, there is a write penalty with RAID 5 when a small amount of data is written.

The write performance of a RAID 5 set depends on the amount of data written and the number of disks in the RAID 5 set.
Data read of RAID 5

Data read of RAID 5

[Figure: D0 to D5 are read in stripes from the three disks (which also hold the parity blocks P0, P1, P2) and reassembled into the logical disk.]

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 23
Data is stored as well as read in stripes across the disks. For each read, N-1 disks can be used to retrieve the data.

The read performance of a RAID 5 set depends on the amount of data read and the number of disks in the RAID 5 set.
Data recovery of RAID 5

Data recovery of RAID 5

[Figure: one of the three disks fails; the lost blocks are recovered by XORing the remaining data and parity blocks in each stripe. Labels: disk failure, data recovery.]

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 24
When a disk in RAID 5 fails, XOR operations are implemented across the other member disks to recover the data on the failed disk.

However, with RAID 5 it is not the case that all parity check operations run on a single disk, as with RAID 4. So rebuilding a new disk to replace the faulty disk does not cause the heavy write pressure that RAID 4 suffers from.
Overview of RAID 6

Overview of RAID 6

RAID 6:
• is an independent disk structure with two parity modes. It requires at least N+2 (N > 2) disks to form an array.
• is applicable to scenarios that have high requirements for data reliability and availability.

Frequently used RAID 6 technologies are:
• RAID 6 P+Q
• RAID 6 DP

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 25
The RAID types discussed until now provide data protection when a single disk is lost, with the exception of course of RAID 0. Over the years the capacities of disks have increased a lot and, with that, the rebuild times. If many big disks are combined to form a RAID 5 set, then the rebuild of a failed disk may take days instead of hours. In this period the system is in a degraded state and any additional disk failure will result in a failed RAID set and loss of data.

That is why some organizations require a system that is dual redundant. In other words: two disks should be allowed to fail and all data should still be accessible. There are a few implementations of such dual redundant data protection types:

N-way mirroring is the method where each block written to the main disks leads to multiple copies of the block on multiple disks. This of course means a lot of overhead.

RAID 6 offers protection against two disks failing in a RAID 6 set. These disks can even fail at exactly the same time.

The official name for RAID 6 is striping with distributed dual parity. In essence it is an improved version of RAID 5, which also did striping and distributed parity. In RAID 6 there is dual parity. That means two things:

1. In addition to writing the user data, two parity calculations have to be made. RAID 6 is in that respect the "slowest" of all RAID types.

2. The additional parity information costs space. That is why we refer to RAID 6 as an N+2 type.
Currently, RAID 6 does not have a uniform standard. Companies implement RAID 6 in different ways. The following two are the major implementation modes:

 RAID P+Q: Huawei, HDS

 RAID DP: NetApp

These two modes differ in the methods of obtaining parity data. Nevertheless, they can both ensure data integrity and support data access in case of double-disk failure in the RAID group.
Working principle of RAID 6 P+Q

Working principle of RAID 6 P+Q

For RAID 6 P+Q, two parity blocks, P and Q, are calculated. When two data blocks are lost, they can be recovered by using the parity data. P and Q are calculated using the following formulas (where · denotes Galois field multiplication by the coefficients α, β, γ):
• P = D0 ⊕ D1 ⊕ D2 …
• Q = (α · D0) ⊕ (β · D1) ⊕ (γ · D2) …

           Disk 1   Disk 2   Disk 3   Disk 4   Disk 5
Stripe 0   P1       Q1       D0       D1       D2
Stripe 1   D3       P2       Q2       D4       D5
Stripe 2   D6       D7       P3       Q3       D8
Stripe 3   D9       D10      D11      P4       Q4
Stripe 4   Q5       D12      D13      D14      P5
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 26

In RAID 6 P+Q, P and Q are two parity values independent of each other. They are obtained by applying different algorithms to the data in the same stripe on all the disks.

P is obtained from the simple XOR operation implemented for the user data blocks in a single stripe. Q is calculated using a process called GF conversion (GF = Galois Field). In the picture above the Galois field coefficients are represented by α, β and γ. The resulting value is a so-called Reed-Solomon code. The algorithm converts all data in the same stripe on all data disks and implements XOR for the converted data.

As shown in the figure, P1 is obtained from the XOR operation implemented for D0, D1, and D2 in stripe 0, P2 from the XOR operation implemented for D3, D4, and D5 in stripe 1, and P3 from the XOR operation implemented for D6, D7, and D8 in stripe 2.

Q1 is obtained from the XOR operation implemented for GF-converted D0, D1, and D2 in stripe 0, Q2 from the XOR operation implemented for GF-converted D3, D4, and D5 in stripe 1, and Q3 from the XOR operation implemented for GF-converted D6, D7, and D8 in stripe 2.

If one disk in a stripe fails, only the value P is required to recover the data on the failed disk: XOR operations are performed between P and the data on the other disks. If two disks in a stripe fail, the handling method varies according to two scenarios. If Q is on either of the failed disks, the data on the data disk is recovered first and then the parity information on the parity disk. If Q is on neither of the failed disks, the two formulas are used to recover the data on both failed disks.
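To make the role of the Galois field concrete, here is a small illustrative sketch of P and Q over single-byte blocks. It uses GF(2^8) with the polynomial commonly used for this purpose (x^8+x^4+x^3+x^2+1, 0x11D) and coefficients 2^i; this shows the general Reed-Solomon idea, not Huawei's exact implementation, and double-failure recovery (which solves the P and Q equations jointly) is omitted for brevity.

    # Illustrative RAID 6 P+Q sketch over GF(2^8). P is plain XOR; Q multiplies each
    # data byte by a distinct field coefficient before XORing. Not a vendor algorithm.
    def gf_mul(a: int, b: int) -> int:
        # Carry-less multiplication modulo x^8 + x^4 + x^3 + x^2 + 1 (0x11D).
        result = 0
        while b:
            if b & 1:
                result ^= a
            a <<= 1
            if a & 0x100:
                a ^= 0x11D
            b >>= 1
        return result

    def coefficient(i: int) -> int:
        # Coefficient g^i with generator g = 2 in GF(2^8).
        c = 1
        for _ in range(i):
            c = gf_mul(c, 2)
        return c

    def pq_parity(data_bytes):
        p = q = 0
        for i, d in enumerate(data_bytes):
            p ^= d
            q ^= gf_mul(coefficient(i), d)
        return p, q

    stripe = [0x0F, 0x33, 0x55]        # D0, D1, D2
    p, q = pq_parity(stripe)
    print(hex(p), hex(q))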

i.
we
ua
Working principle of RAID 6 DP

.h
ng
Working principle of RAID 6 DP

ni
DP means double parity. RAID 6 DP adds a diagonal XOR parity disk based ar
le
on the row XOR parity disk used by RAID 4.
//

P0 to P3 on the row parity disk are the parity information of row data blocks
on all data disks. For example, P0 = D0 XOR D1 XOR D2 XOR D3
:
tp

DP0 to DP3 on the diagonal parity disk are the parity information of diagonal
data on all data disks and the row parity disk. For example, DP0 = D0 XOR
ht

D5 XOR D10 XOR D15


s:

Row parity Diagonal


Disk 1 Disk 2 Disk 3 Disk 4 disk parity disk
ce

D0 D1 D2 D3 P0 DP0 Stripe 0
D4 D5 D6 D7 P1 DP1 Stripe 1
ur

D8 D9 D10 D11 P2 DP2 Stripe 2


so

D12 D13 D14 D15 P3 DP3 Stripe 3


Re

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 27


ng
ni

RAID 6 DP also has two types of independent parity data blocks. The first parity information is
identical with RAID 6 P+Q. The second one is different from RAID 6 P+Q: the parity is calculated
ar

diagonally. Both the row and diagonal parity data blocks are obtained using XOR operations. For row
Le

parity, P0 is obtained from the XOR implemented for D0, D1, D2, and D3 in stripe 0, P1 from the XOR
implemented for D4, D5, D6, and D7 in stripe 1, and so on. That is, P0 = D0 ⊕ D1 ⊕ D2 ⊕ D3, P1 =
re

D4 ⊕ D5 ⊕ D6 ⊕ D7 etc.
Mo

Diagonal parity implements XOR operations to diagonal data blocks. The data block selection process
is complicated. DP0 is obtained from the XOR operation implemented for D0 on disk 1 in stripe 0, D5
on disk 2 in stripe 1, D10 on disk 3 in stripe 2, and D15 on disk 4 in stripe 3. DP1 is obtained from the
XOR operation implemented for D1 on the disk 2 in stripe 0, D6 on disk 3 in stripe 1, D11 on disk 4 in



stripe 2, and P3 on the parity disk in stripe 3. DP2 is obtained from the XOR operation implemented
for D2 on the disk 3 in stripe 0, D7 on the disk 4 in stripe 1, P2 on the parity disk in stripe 2, and D12
on the disk 1 in stripe 3, and so on. That is, DP0 = D0 ⊕ D5 ⊕ D10 ⊕ D15, DP1 = D1 ⊕ D6 ⊕ D11
⊕ P3 etc.

RAID 6 DP is tolerant to double-disk failure in an array. For example, If disks 1 and 2 fail in the above

n
figure, D0, D1, D4, D5, D8, D9, D12, and D13 are lost.

e
m/
Data and parity information on other disks are valid. Let's have a look at how data is recovered. First,

co
recover D12 by using DP2 and diagonal parity (D12 = D2 ⊕ D7 ⊕ P2 ⊕ DP2).

i.
Then recover D13 by using P3 and row parity (D13 = D12 ⊕ D14 ⊕ D15 ⊕ P3), D8 by using DP3

we
and diagonal parity (D8 = D3 ⊕ P1 ⊕ DP3 ⊕ D13), D9 by using P2 and row parity (D9 = D8 ⊕ D10

ua
⊕ D11 ⊕ P2), D4 by using DP4 and diagonal parity, D5 by using P1 and row parity, and so on.

.h
These operations are repeated until all data on disks 1 and 2 is recovered.

ng
ni
The performance of a RAID 6 system is relative slow for all types DP or P+Q. It is therefore that RAID
6 is used in two situations:
ar
le
1. The data is very valuable and needs to be online and available as long as possible.
//

2. The disks used are very big (typically over 2 TB). At those capacities the rebuild times
:

become so long that the chance of losing a second disk is a real threat. With RAID 6 there is
tp

the option to lose a second disk while a faulty disk is being reconstructed. Some vendors
ht

force the users of their storage arrays to use a dual protection RAID type as soon as big disks
are discovered.
s:
ce
ur
so
Re
ng
ni
ar
Le
re
Mo



Hybrid RAID - RAID 10

Hybrid RAID - RAID 10

n
RAID 10 combines mirroring and striping. RAID 1 is implemented

e
before RAID 0. RAID 10 is also a widely used RAID level.

m/
co
User data D0, D1, D2, D3, D4, D5

i.
Disk mirror Disk mirror

we
ua
D4 D4 D5 D5
D2 D2

.h
D3 D3
D0 D0 D1 D1

ng
Physical disk 1 Physical disk 2 Physical disk 3 Physical disk 4

ni
RAID 1 RAID 1
RAID 0

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved.


ar
Slide 28
le
//

RAID 0 was not a real option for most enterprise customers and RAID 1 was limited to the capacity of
:
tp

the disk. The combination of RAID 1 and RAID 0 however offers the best of both worlds!
ht

In a RAID 10 set there is always an even number of disks. Half of the disks have the user data written
to it and the other half holds the mirror copy of the user data. Mirroring is performed before striping.
s:
ce

In the figure, physical disks 1 and 2 form one RAID 1 group, and physical disks 3 and 4 form another
RAID 1 group. These two RAID 1 groups form RAID 0.
ur

A write to a RAID 10 system will mean that the data i.e. D0 will be written to physical disk 1 and a
so

copy will be written to physical disk 2.


Re

When two disks in different RAID 1 groups fail (for example disks 2 and 4), data access of the RAID
ng

10 group is not affected. This is because the other two disks (1 and 3) will have a complete copy of
ni

data on disks 2 and 4 respectively. However, if two disks in the same RAID 1 group (for example,
disks 1 and 2) fail at the same time, data access becomes unavailable.
ar
Le

Theoretically there is the chance that half the physical disks may fail and there still would be no data
loss. However, looking at it from a worst case scenario, the RAID 10 guarantee is against a single
re

drive failing.
Mo



Hybrid RAID - RAID 50

Hybrid RAID - RAID 50

n
RAID 50 is a combination of RAID 5 and RAID 0. RAID 5 is

e
implemented before RAID 0.

m/
D0, D1, D2, D3, D4, D5, D6, D7…

co
i.
D0, D1, D4, D5, D8, D9 D2, D3, D6, D7, D10, D11

we
ua
P4 D8 D9 P5 D10 D11 Stripe 2

D4 P2 D5 D6 P3 D7 Stripe 1

.h
D0 D1 P0 D2 D3 P1 Stripe 0

ng
Physical Physical Physical Physical Physical Physical
disk 1 disk 2 disk 3 disk 4 disk 5 disk 6

ni
RAID 5 RAID 5
RAID 0

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved.


ar Slide 29
le
//

RAID 50 is a combination of RAID 5 and RAID 0. RAID 5 is implemented across two RAID 5 arrays
:
tp

which are configured with RAID 0. The two RAID 5 sets are totally independent from each other.
ht

RAID 50 requires at least six disks as the minimum for a RAID 5 is three disks.

Physical disks 1, 2, and 3 form one RAID 5 group, and physical disks 4, 5, and 6 form another RAID 5
s:

group. The two RAID 5 groups form RAID 0.


ce

RAID 50 can sustain simultaneous failure of multiple disks in different RAID 5 groups. However, once
ur

two disks in the same RAID 5 group fail at the same time, data in the RAID 50 group will be lost.
so
Re
ng
ni
ar
Le
re
Mo



Comparison of common RAID levels

Comparison of common RAID levels

n
RAID Level RAID 0 RAID 1 RAID 5 RAID6 RAID 10 RAID 50

e
m/
Fault tolerance No Yes Yes Yes Yes Yes

Parity Parity

co
Redundancy type No Replication Replication Parity check
check check

Hot spare disk No Yes Yes Yes Yes Yes

i.
Read performance High Low High High Medium High

we
Random write
High Low Low Low Medium Low
performance

ua
Sequential write
High Low Low Low Medium Low

.h
performance

Min. number of disks 2 2 3 4 4 6

ng
Available capacity
(Capacity of a single Nx 1/N x (N - 1) x (N - 2) x N/2 x (N - 2) x

ni
disk)

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved.


ar
Slide 30
le
//

Conclusion: the ideal RAID type does not exist. Users must select the RAID depending on the
:
tp

demands they have for speed; security or cost.


ht

RAID sets should not contain too many physical disks as statistically the number of failures will
s:

increase as the group gets bigger. RAID 5 maximum is typically 12 or less. RAID 6 supports up to 42
ce

disks mostly.
ur
so
Re
ng
ni
ar
Le
re
Mo



Application scenarios of RAID

Typical application scenarios of RAID

  RAID 0:  A scenario requiring fast reads and writes but not high security, such as graphic workstations.
  RAID 1:  A scenario featuring random writes and requiring high security, such as servers and databases.
  RAID 5:  A scenario featuring random transfer and requiring medium security, such as video editing and large databases.
  RAID 6:  A scenario featuring random transfer and requiring high security, such as mail servers and file servers.
  RAID 10: A scenario involving large amounts of data and requiring high security, such as the banking and finance field.
  RAID 50: A scenario featuring random data transmission with security and concurrency requirements, such as mail servers and web servers.

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 31
With most vendors the storage administrator has the option to create multiple LUNs (sometimes also referred to as volumes), each with a different protection scheme. The selection of the RAID type is still important, because the previous slides show that each selected RAID type has different properties.

Fortunately, with most vendors it is even possible to change the RAID type assigned to a LUN. This can be done on the fly, which means that the LUN stays accessible to its users while the conversion takes place.
RAID Data Protection

Hot spare disk

Hot spare = When one of the disks in a RAID group fails and an idle or standby disk immediately replaces the failed disk, this disk is known as the hot spare.

Hot spare disks are classified as global hot spare disks or as dedicated hot spare disks.

(Diagram: a RAID 1 / RAID 5 / RAID 6 / ... group consisting of Disk 1 ... Disk n, plus a hot spare disk.)

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 32
In most storage solutions there are many disks present, and often they are of different types. Each disk type has its specific qualities (capacity, rotational speed, access speed, reliability). By creating multiple RAID groups we can assign a RAID level to each of these groups and create storage capacity with exactly the right specifications. Imagine 4 RAID groups are used. The question now is how to address the problem of hot spare disks. How many do you need? The answer is not 100% fixed. Normally each RAID group would have its own hot spare disk, so in case of a failure a spare disk would be available. On the other hand: how often will it happen that a drive fails in all four RAID groups? One spare for all four groups would then be enough.

That one spare should then be configured as a global hot spare disk. It will replace any failed disk in any RAID group. Of course there is a requirement: the hot spare disk used should be the same size as, or bigger than, the failed disk!

In the situation where a hot spare is really meant to be used by one RAID group only, the hot spare disk should be a dedicated hot spare. If a disk fails in another RAID group, this hot spare disk will then not be used.
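The selection logic described above can be summarized in a few lines. The following Python sketch is purely illustrative (the dictionary fields are invented for the example): it prefers a dedicated spare of the affected group and otherwise falls back to a global spare, always requiring the spare to be at least as large as the failed disk:

    def pick_hot_spare(failed_disk, raid_group, spares):
        """Pick a usable spare: a dedicated spare of this group first, else a global spare."""
        usable = [s for s in spares
                  if s["size_gb"] >= failed_disk["size_gb"]
                  and (s["scope"] == "global" or s["group"] == raid_group)]
        # Prefer a dedicated spare so the global spare stays available for other groups.
        usable.sort(key=lambda s: 0 if s["scope"] == "dedicated" else 1)
        return usable[0] if usable else None

    spares = [{"name": "spare1", "size_gb": 600, "scope": "global", "group": None},
              {"name": "spare2", "size_gb": 300, "scope": "dedicated", "group": "RG1"}]
    print(pick_hot_spare({"size_gb": 300}, "RG1", spares)["name"])   # spare2 (dedicated)
    print(pick_hot_spare({"size_gb": 300}, "RG2", spares)["name"])   # spare1 (global)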
Pre-copy

Pre-copy: When the system detects that a member disk in a RAID group is about to fail, data on that disk is copied onto a hot spare disk, reducing risks of data loss.

(Diagram: a RAID 1 / RAID 5 / RAID 6 / ... group with Disk 1 and Disk 2; the data of the failing disk is copied to the hot spare disk.)

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 33
The pre-copy option is a very nice addition that makes life much easier (or more relaxed) for storage administrators. Most enterprise-class disks are fitted with a technology called SMART, which stands for Self-Monitoring, Analysis and Reporting Technology. It basically means that the disk monitors its own health. It does this by checking the rotational speed of the disk and the "quality" of the magnetic surface of the disk platters.

Provided we use the correct tools, we can receive the message from the SMART disk and act quickly. When a SMART disk reports that it is not doing very well, it is not dead yet, but we can assume it may die fairly soon.

As soon as the tool receives the SMART message, it starts copying the data from the disk onto (one of) the hot spare disk(s). When the drive later actually fails, the majority of the data is already present on the hot spare disk and the rebuild will take much less time!
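As a minimal sketch of that pre-copy reaction (pseudologic only; the data structures and the copy callback are invented for illustration and are not a vendor API):

    def handle_smart_warning(disk, hot_spare, copy_block):
        """When a disk raises a SMART warning, copy its blocks to the hot spare
        before it actually fails, so a later rebuild only has to catch up."""
        for block_no in range(disk["blocks"]):
            copy_block(disk, hot_spare, block_no)
        hot_spare["contains"] = disk["name"]

    disk = {"name": "disk2", "blocks": 4}
    spare = {"name": "spare1", "contains": None}
    handle_smart_warning(disk, spare,
                         lambda d, s, n: print(f"copy block {n} of {d['name']} to {s['name']}"))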
Reconstruction

Reconstruction: the process of recovering the user data and parity data of a failed disk in a RAID group onto a hot spare disk of that RAID group.

(Diagram: data D0, D1, D2, D3, D4, D5 is stored on Disk 1, Disk 2 and a parity disk; after Disk 1 fails, its blocks are rebuilt onto the hot spare disk.)

    Disk 1   Disk 2   Parity disk   Hot spare disk
    D4       D5       P3            D4
    D2       D3       P2            D2
    D0       D1       P1            D0

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 34
RAID of course is a great concept that helps protect the data. Hot spare disks can add to that protection level by automatically rebuilding or reconstructing a failed disk. Reconstruction must not impact the behavior of the RAID group. So for optimal reconstruction to work (a small illustration of parity-based reconstruction follows below):

 The hot spare disk should be ready.
 All disks should be configured in RAID 1, 3, 5, 6, 10 or 50.
 Reconstruction must not interrupt system services.
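The sketch below shows the XOR-parity idea behind RAID 5-style reconstruction: a lost block equals the XOR of the surviving data blocks and the parity block. It is a generic illustration, not Huawei-specific code.

    from functools import reduce

    def xor_blocks(blocks):
        """Bytewise XOR of equally sized blocks."""
        return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

    # A tiny RAID 5-style stripe: two data blocks and one parity block.
    d0 = b"\x11\x22\x33\x44"
    d1 = b"\xaa\xbb\xcc\xdd"
    parity = xor_blocks([d0, d1])          # parity block written across the group

    # The disk holding d0 fails: rebuild its content onto the hot spare.
    rebuilt_d0 = xor_blocks([d1, parity])
    print(rebuilt_d0 == d0)                # True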


Relationship between RAID and LUNs

RAID is like a large physical volume composed of multiple disks. We can create one or multiple logical units of a specified capacity on the physical volume. Those logical units are referred to as LUNs. They are the basic block devices that can be mapped to hosts.

(Diagram, left: one logical volume (LUN 1) created on a physical volume. Right: multiple logical volumes (LUN 2 and LUN 3) created on a physical volume.)

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 35
Data is stored as files on a volume that is visible from within an operating system. For the Windows operating system these volumes are represented by drive letters (C:\, F:\, etc.). In Unix/Linux-based operating systems there would be mount points. The relation between a drive letter (or a mount point) and the physical disks is like this (a small sketch of this relation follows below):

1. Physical disks combined form a RAID group.
2. A RAID group has a specific RAID type associated with it.
3. A LUN is made up of (a section of) the storage capacity a RAID group presents.
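The following toy model (illustrative only, not vendor code) captures that relation: a RAID group owns the disks and the RAID type, and LUNs are carved out of the usable capacity it presents:

    class RAIDGroup:
        """Toy model of the RAID group / LUN relation described above."""
        OVERHEAD = {"RAID 5": 1, "RAID 6": 2}      # disks 'lost' to parity

        def __init__(self, raid_type, num_disks, disk_gb):
            self.raid_type = raid_type
            self.usable_gb = (num_disks - self.OVERHEAD[raid_type]) * disk_gb
            self.luns = {}

        def create_lun(self, name, size_gb):
            free = self.usable_gb - sum(self.luns.values())
            if size_gb > free:
                raise ValueError(f"only {free} GB free in this RAID group")
            self.luns[name] = size_gb    # every LUN inherits the group's RAID protection

    rg = RAIDGroup("RAID 5", 4, 300)     # a 4 x 300 GB RAID 5 group: 900 GB usable
    rg.create_lun("LUN1", 500)
    rg.create_lun("LUN2", 400)
    print(rg.usable_gb, rg.luns)         # 900 {'LUN1': 500, 'LUN2': 400}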
Creating RAID groups and logical volumes

(Diagram: physical disks are combined and segmented by RAID; the resulting capacity is presented as the logical volumes LUN 1, LUN 2 and LUN 3.)

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 36
An example:
There are 4 physical disks, each of which is 300 GB in size. When we put them together in a RAID group, this group represents 4 x 300 GB = 1.2 TB of raw disk capacity. Assuming we want to use RAID 5 for data protection, the actual available space would be 3 x 300 GB = 900 GB. We "lose" the capacity of one disk because of the parity information that has to be stored across the 4 disks.
From the perspective of the storage administrator there can now be one big LUN occupying the 900 GB of space, or multiple smaller LUNs that each use part of the 900 GB capacity. For each of the LUNs the data protection scheme would be RAID 5.
Questions

1. Explain the difference between stripe unit and stripe width.
2. Describe the statuses a RAID group can be in.
3. Explain the basic principles of RAID 5.
4. Explain the differences between the application scenarios of RAID 5 and those of RAID 1.
5. If a customer is concerned with reliability and performance, what RAID schemes will you recommend?
6. What is the relationship between RAID and LUNs?

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 37
Answers:

1. A stripe unit (or chunk) is the smallest amount written to a physical disk. The stripe width is the total number of disks in a RAID group, or the total capacity in a stripe.

2. Good: everything is operational and hot spares are available. Reconstructing: there is a disk failure and the hot spare disk is currently being reconstructed. Degraded: there is a disk failure but no hot spare disks are available. Failed: too many disk failures have occurred and the data cannot be represented anymore => data loss is inevitable.

3. RAID 5 uses striping with distributed parity. Data is split up into chunks (selectable in size); then a parity block is calculated. Data blocks and parity blocks are written in parallel to all the disks of the RAID group.

4. RAID 1 is used when the capacity does not exceed the size of one single disk and when the data is very important. RAID 5 has a single-disk protection level and has less performance than RAID 1.

5. RAID 10.

6. LUNs are logical space allocations taken from the total disk capacity available in a RAID group. A RAID group is a number of disks working together to provide storage capacity.

The free space available in a RAID group is calculated by the formula:
(number of disks x disk capacity) – overhead for the RAID type.

Exam Preparation

1. Which of the following RAID levels provide redundancy? (Check all that apply.)
   a. RAID 0
   b. RAID 1
   c. RAID 5
   d. RAID 10

2. Statement 1: Failure of any two disks in a RAID 10 group does not affect data access.
   Statement 2: Rebuilding a global hot spare disk is faster than rebuilding a dedicated hot spare disk.
   a. Statement 1 is true; Statement 2 is true
   b. Statement 1 is true; Statement 2 is false
   c. Statement 1 is false; Statement 2 is true
   d. Statement 1 is false; Statement 2 is false

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 38
Answers:

1. B, C, D.
2. D.
Summary

• RAID levels and principles.
• Characteristics of all mentioned RAID levels.
• Data protection technologies of RAID.
• Application of RAID types.

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 39
RAID nowadays is intended to protect against data loss when physical disks fail in a so-called RAID group. Each RAID level has characteristics such as the performance of the RAID group, the number of disks that can fail before data loss occurs, and the cost involved to implement the RAID type. The cost is expressed in the overhead, i.e. the amount of disk (space) that is used to provide the data protection. Two methods are used: making a copy of the data (RAID 1 and RAID 10) and adding extra parity information that can help reconstruct the data (RAID 4, RAID 5, RAID 6, RAID 50).

RAID 0 is not used in enterprises very often because it offers no data protection. RAID 0 groups, however, are very fast and have no overhead. That means that all available disk capacity can be used to store user data.
Thank you

www.huawei.com

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 40

OHC1109106

Basics of Big Data

www.huawei.com
Introduction
In this module you will be learning about Big Data. Big Data is now a very hot topic. In this module we will show some of the concepts used to handle big data sets. The module is intended to give a brief overview and will not go into much detail. The reason for that is simple: there is a complete course that was created especially around the Big Data phenomenon.

Objectives

After this module you will be able to:

 Describe the concepts of Big Data.
 Mention reasons why there is a Big Data problem.
 Understand the difference between structured and unstructured data.
 Explain how Object Based Storage can help us manage Big Data.
 List the main specifications of Huawei's OceanStor 9000.

Module Contents

1. What is the definition of Big Data?
2. Why do we have Big Data?
3. Characteristics of Big Data.
4. How to handle Big Data: Hadoop solution.
5. Huawei OceanStor 9000 Big Data solution.
What is Big Data?

Big Data is high-volume, high-velocity and high-variety information assets that demand cost-effective, innovative forms of information processing for enhanced insight and decision making.

SNIA definition: Big Data is a characterization of datasets that are too large to be efficiently processed in their entirety by the most powerful standard computational platforms available.

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 3
In this module we will discuss Big Data. Big Data has everything to do with data and, more importantly, with the amount of data that is generated. In the first module of this course we discussed the fact that data is very important for the business processes of a company. So what is Big Data?

Gartner states: "Big Data is high-volume, high-velocity and high-variety information assets that demand cost-effective, innovative forms of information processing for enhanced insight and decision making."

SNIA's definition: Big Data is a characterization of datasets that are too large to be efficiently processed in their entirety by the most powerful standard computational platforms available.

Although Big Data doesn't refer to any specific quantity, the term is often used when speaking about Petabytes and Exabytes of data.


Two things are important to take away from these definitions:

1. It is about an enormous amount of data (Petabytes / Exabytes) and the data is of different types (structured/unstructured).
2. Inside the Big Data there is important information that can help the business work (better).

The practical consequences of this are twofold again:

1. How can we arrange for such amounts of data to be stored and kept?
2. How do we understand what data we have, and how do we extract the right information from it?
Why do we have so much Big Data?

Why do we have Big Data?

What causes the amount of data to explode?

• Increased amount of multimedia devices like smart phones and social media
• The Internet of Things
• High resolution images
• More bandwidth available
• Increased push to work with online (public) services

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 4

The picture above lists a number of causes of Big Data. Of course for a company some of these
causes may not be applicable. Fact still remains that companies store huge amounts of data and
re

at that point the Big Data “problem” can occur.


Mo

The amount of smart phones has risen in the last couple of years. Statistics show that at this point
there are six billion mobile phones used in the world. For a country like Holland there are one
hundred-twenty-five telephone connections for every one hundred persons. That means that
many Dutch own and use two phones!

Smart phones and tablet PCs are more and more used for creating social media data. It is now possible to share images and audio as well as text with other persons. The resolution of the images is increasing, as nowadays a mobile phone has a built-in high resolution camera. The size of a single image taken is now 10 times bigger than 5 years ago!

Quantities of data are traditionally measured in Terabytes (1,000,000,000,000 bytes or 1000 GB). With Big Data new "sizes" are used:

  Petabyte    1,000 Terabyte
  Exabyte     1,000,000 Terabyte or 1,000 Petabyte
  Zettabyte   1,000,000 Petabyte or 1,000 Exabyte
  Yottabyte   1,000 Zettabyte or 1,000,000,000,000,000,000,000,000 bytes

There are many applications used for social media. Examples of popular sites in Asia are Alibaba (like eBay, you can buy and sell almost anything there), Youku (small online videos, just like YouTube) and Sina (a Twitter-like smart messaging system).

It was estimated that 3.5 Zettabytes of information were stored all over the world in 2013. The data that was generated over the last two years now forms 95% of all data ever created. Estimations have been made that say that in 2020 there will be more than 40 Zettabytes of data!

Another thing that adds to the problem is the fact that it is now easy to generate large amounts of data and send them, as the network has been upgraded continuously. Now almost everybody has access to broadband networks and 3G or even 4G wireless networks, so sharing even big images is not time-consuming and expensive anymore.

What in the near future may lead to even more Big Data is described as the Internet of Things. With that we mean that more and more devices will have intelligence on board and will then be connected to the global network. It is no longer just computers that are connected. Think of the huge numbers of webcams and internet printers. In the future more of these devices will be introduced. Think of refrigerators with an internet connection that automatically order groceries, or domotica applications where one can control the heating systems, lights and garage doors in your house from a remote location using a smartphone application. The electricity and gas meters in houses will in the future send their information to the electricity and gas board, where today a person comes and writes down the values. The car of the future will be one more example of where the Internet of Things will be. Cars at this point may have on-board computers for navigation, diagnostics and configuring features like air-conditioning, audio etc.

In the future each car will generate data about fuel consumption, location, speed averages, how many people are in the car, and maybe even information about the driver. Imagine that a car will automatically report when a driver is too tired or has fallen ill. The car could then automatically make an emergency stop and even call for medical assistance.
Value of Big Data

Characteristics of Big Data

(Diagram: data composition is roughly 75% unstructured data and 25% other data. Unstructured data — videos, music, pictures, emails, data files — is typically written once with few modifications, has uncertain value, shows large capacity and rapid growth, and requires long-term storage.)

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 5
According to research, 75% of data is unstructured data, typically from videos, music, pictures, emails, and data files. Most of this massive data has the following characteristics:

 Written once, few modifications
For example, many videos and pictures are typically read but seldom edited.

 Uncertain value
The value of a picture or video may increase due to a certain event. For example, the childhood picture of a person in the spotlight has value. Video surveillance data also has similar characteristics. No one knows when such data becomes useful, but the data cannot be abandoned.

 Large capacity and rapid growth
The number of images taken with digital cameras and smart phones has grown explosively. At the same time the resolution of the cameras has increased too.

 Long-term storage required
Some data may need to be stored for dozens of years or even longer. This requires a storage medium that can hold the data for that many years.

Another example of the need to filter out what is valuable data within the huge amount of data that could be collected is the LHC project in Geneva. In the Large Hadron Collider project, research is done on the behavior of atomic particles. In the experiments, atomic particles are accelerated to speeds close to the speed of light and are then made to collide with other particles. The results of the collision are then studied. In such collisions new particles might appear. In the LHC they are trying to create (and then study) a very special particle called the Higgs boson.

The Large Hadron Collider experiments represent about 150 million sensors delivering data 40 million times per second. There are nearly 600 million collisions per second. After filtering and refraining from recording more than 99.999% of these streams, there are 100 collisions of interest per second.

 As a result, only working with less than 0.001% of the sensor stream data, the data flow from all four LHC experiments represents a 25 petabyte annual rate before replication (as of 2012). This becomes nearly 200 Petabytes after replication.

 If all sensor data were to be recorded in the LHC, the data flow would be extremely hard to work with. The data flow would exceed an annual rate of 150 million petabytes, or nearly 500 Exabytes per day, before replication. To put the number in perspective, this is equivalent to 500 quintillion (5 x 10^20) bytes per day, almost two hundred times more than all the other sources in the world combined.

An even bigger project is about to be started. The Square Kilometre Array is a telescope which consists of millions of antennas and is expected to be operational by 2024. Collectively, these antennas are expected to gather 14 Exabytes and store one petabyte per day. It is considered to be one of the most ambitious scientific projects ever undertaken.
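To keep these unit prefixes straight, here is a quick sketch (purely illustrative) that converts between the units listed earlier in this module and checks the order of magnitude of the LHC figure quoted above:

    # Decimal storage prefixes as powers of 10 (1 TB = 10**12 bytes).
    UNITS = {"TB": 10**12, "PB": 10**15, "EB": 10**18, "ZB": 10**21, "YB": 10**24}

    def to_bytes(value, unit):
        """Convert a value in TB/PB/EB/ZB/YB to bytes."""
        return value * UNITS[unit]

    # 500 Exabytes per day, the unfiltered LHC sensor stream mentioned above:
    print(to_bytes(500, "EB"))                  # 5e+20 bytes, i.e. 500 quintillion bytes per day
    # 3.5 Zettabytes, the estimate for all data stored worldwide in 2013:
    print(to_bytes(3.5, "ZB") / UNITS["PB"])    # 3,500,000 PB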


How can we handle Big Data?

Because it is very difficult to limit the growth of data, the solution is to organize the data we have as well as possible. Before this is possible, it is important to identify the way in which data (files) is stored today.
OBS: Object Based Storage

(Diagram: block storage is accessed through the iSCSI/FC protocol layer directly on the storage layer; file storage adds a file system accessed through NFS/CIFS; object storage adds an object system accessed through HTTP/REST/S3. An object consists of a key, data, metadata and user-defined metadata.)

Block storage:
 Direct access, minimum overhead and maximum efficiency.
 Highest cost and poor scalability.
 Scenarios: enterprise databases (e.g. Oracle).

File storage:
 Easy to manage and easy to interwork with applications.
 Moderate scalability but many restrictions.
 Scenarios: application integration and file sharing in an enterprise.

OBS:
 Flat structure with almost unlimited scalability.
 Intelligent self-management.
 Use of standard Internet protocols and cross-region data transfer capability.
 Scenarios: Internet service-oriented storage and enterprises' internal archiving and backup.

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 6
Block storage directly accesses the storage layer, featuring fast speed, minimum overhead, and maximum efficiency. However, block storage has the highest cost and poor scalability. Block storage uses protocols like iSCSI and Fibre Channel.

File storage creates a file system on the basis of block storage. Data is organized in the directory-directory-file mode, facilitating data management. The objects operated on by most application programs are files. Therefore, file storage enables easier interworking with application systems. File systems are restricted by directory trees; therefore, a file system can typically be expanded to dozens of PB at most. The scalability is limited. File systems are applicable to application integration and file sharing in an enterprise.

OBS (Object-Based Storage) is a newly emerging storage technology. OBS creates an object management layer on the basis of block storage. Compared with a file system, the object system layer is flat with almost unlimited scalability. An object consists of a unique key, data (the file), metadata, and user-defined metadata. Objects contain self-management information and therefore are more intelligent. OBS employs interfaces that are compatible with standard Internet protocols. OBS does not use traditional directory structures and there is no need to be involved in the creation of volumes on the underlying hardware; this is all shielded by the OBS system.

In an OBS system, the MDS (Meta Data Server) stores the mapping between files and OSDs (Object Storage Devices) and the organization relationship between files and directories. The MDS provides operations such as file search, file creation, and file/directory property processing. From the perspective of a client, an MDS is similar to the logical window of a file, and an OSD is similar to the physical window of a file. When a user operates on a file, the file system obtains the actual storage address of the file from the MDS. Then, the file system operates on the file on the corresponding OSD. In subsequent I/O operations, the MDS will not be accessed, greatly reducing the burden on the MDS. In this way, system scalability becomes possible.
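To make the object concept concrete, here is a minimal sketch (illustrative only, not Huawei's or any vendor's implementation) of an object and a flat, key-addressed object store:

    from dataclasses import dataclass, field

    @dataclass
    class StorageObject:
        key: str                     # unique identifier of the object
        data: bytes                  # the file content itself
        metadata: dict = field(default_factory=dict)       # system metadata (size, timestamps, ...)
        user_metadata: dict = field(default_factory=dict)  # user-defined metadata (tags, owner, ...)

    class FlatObjectStore:
        """A flat namespace: objects are addressed by key only, with no directory tree."""
        def __init__(self):
            self._objects = {}

        def put(self, obj: StorageObject):
            self._objects[obj.key] = obj

        def get(self, key: str) -> StorageObject:
            return self._objects[key]

    store = FlatObjectStore()
    store.put(StorageObject("photos/2015/holiday-001", b"...jpeg bytes...",
                            metadata={"size": 2_400_000},
                            user_metadata={"camera": "phone", "keep-for-years": 10}))
    print(store.get("photos/2015/holiday-001").user_metadata)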

Data access model

(Diagram: traditional storage addresses data through file names / inodes that map onto a fixed grid of blocks; OBS addresses data as a flat collection of objects identified by object IDs (OIDs). Left panel: traditional storage. Right panel: OBS.)

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 7
The file system of traditional storage employs tree directories. If there are many files and many directory levels, the root node is under great pressure and file search is time-consuming. As a result, performance will become poor. OBS employs a flat structure based on decentralization. Even with massive numbers of files, data access performance is not affected, and it is still easy to add more capacity.
Advantages of Object Based Storage

Advantages of OBS

• Object interfaces, dividing data flexibly
• Flat objects, allowing easy access and expansion
• Automated management
• Multiple tenants
• Data integrity and security

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 8
Object interfaces, dividing data flexibly
OBS systems do not need to know about the physical way data is stored. Traditional storage devices store SCSI blocks, and with that comes the chunk size of the storage device. Chunks are typically 512 bytes to 4 kB. OBS can use any object size to store objects, with support for object sizes ranging from several bytes to several terabytes.

Flat objects, allowing easy access and expansion
Flat data structures allow the OBS capacity to be expanded from a TB level to an EB level. An OBS system typically builds a global namespace based on a scale-out (or grid hardware) architecture. This makes OBS applicable to cloud computing environments. Some OBS systems even support seamless upgrade and capacity expansion.

Automated management
OBS allows users to configure attribute (metadata) policies for objects based on service needs, from the application perspective.

Multiple tenants
The multi-tenant feature can use the same architecture and the same system to provide storage services for different users and applications. Besides, specific data protection and storage policies can be configured for each user and application. The data of the different tenants is isolated from each other.

Data integrity and security
OBS can have systems to protect objects, and the underlying hardware can have data protection in place.

Procedure of Big Data processing

(Diagram: Data collection -> Data storage -> Data management -> Analysis)

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 9
Big data is processed similarly to common data, which includes data capturing, storage, management and analysis.

Data capturing leverages multiple means, methods and tools to capture data for later analysis. What data should we capture? How do we capture it? What tools are required? Which tools are more efficient? These are questions we must pay close attention to.

Data storage is to transfer and store the captured data. As the data increases exponentially, traditional data storage methods fail to meet Big Data requirements. New technologies are needed.

Data management is an extension of data storage. With regard to data storage, data management refers to deep data processing and categorization so that useful metadata is provided for subsequent data analysis.

Data analysis involves the use of data analysis methods, models, and tools in order to make correlations. More in-depth data mining based on the preceding analysis and acquired data can meet higher-level requirements.

This chapter focuses on data storage and the extension of data storage (namely data management) to introduce key technologies of Big Data.

Content types of big data

(Composition of every PB of data for an Internet-based company; mostly structured, with structured, semi-structured and unstructured parts.)

 Content data (items, photos, videos, texts): ~23%
 Individual user behaviors: ~10%
 User profile: ~5%
 Collective social network data: ~35%
 Web page & log: ~27%

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 10
Content types of big data

(Composition of every PB of data for a telecom operator; mostly structured, with structured, semi-structured and unstructured parts.)

 Network XDP (captured by probe, including historical data): ~25%
 Bill CDR (including historical data): ~15%
 Internet web pages and logs (including historical data): ~13%
 SND (Social Network Data): ~3%
 Content data (photos, videos, texts): ~7%
 Primary data (subscription + contact): ~12%
 Analysis and summary data (including historical data): ~18%
 CUBE and unified view: ~7%

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 11
By structure, data can be categorized as structured data, semi-structured data, and unstructured data.

Structured data is expressed as a two-dimensional table structure. Simply put, structured data is information in a database. For example, an ERP system, a financial system, and a Customer Relationship Management database all store structured data.

Unstructured data cannot conveniently be expressed in a two-dimensional database logic. Such data includes office documents, text, graphics, XML pages, HTML pages, various reports, photos, audio files and video files. For example, medical imaging systems, campus video-on-demand systems, video surveillance, GIS of national land bureaus, design institutes, file servers (PDM/FTP), and media resource management all store unstructured data.

Semi-structured data is data that has not been organized into a specialized repository, such as a database, but that nevertheless has associated information, such as metadata.

Whatever solution is selected, it is important to realize that it takes a lot of computing power to have software investigate, organize and filter great amounts of data. That is why Big Data management software is run not on a single host, but on multiple hosts that work in parallel.
Hadoop: Internet Big Data solution

How to handle Big Data: Hadoop solution

(Diagram: an analysis platform built on MapReduce, the distributed parallel processing architecture, and HBase, the non-relational database, both running on HDFS, the distributed file system.)

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 12
One of the options is Hadoop. Hadoop is an open-source technical framework for distributed Big Data processing. The Hadoop project was started in 2005 and was later adopted into the Apache community. Hadoop was designed to run complex data management tasks on relatively simple hardware. It can use virtually all storage devices for storing data and it can use multiple hosts (referred to as nodes) for computing tasks. Therefore Hadoop has distinct performance and cost advantages in unstructured data processing compared with the traditional mainframe computers needed before.

Hadoop contains three components (a tiny illustration of the MapReduce idea follows below):

1. The Hadoop Distributed File System (HDFS),
2. The non-relational Hadoop Database (HBase),
3. The MapReduce distributed parallel processing architecture.
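The MapReduce idea can be shown with a tiny, single-machine word-count sketch. This is purely illustrative; real Hadoop jobs are written against the Hadoop APIs and run the map and reduce phases on many nodes in parallel:

    from collections import defaultdict

    def map_phase(document):
        """Map: emit a (key, value) pair for every word in a document."""
        return [(word.lower(), 1) for word in document.split()]

    def reduce_phase(pairs):
        """Reduce: combine the values of every key (here: sum the counts)."""
        totals = defaultdict(int)
        for word, count in pairs:
            totals[word] += count
        return dict(totals)

    documents = ["big data needs big storage", "storage for big data"]
    # In Hadoop the map tasks would run in parallel on the nodes holding the data blocks.
    mapped = [pair for doc in documents for pair in map_phase(doc)]
    print(reduce_phase(mapped))   # {'big': 3, 'data': 2, 'needs': 1, 'storage': 2, 'for': 1}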

n
The architecture difference between structured Big Data solutions and unstructured Big Data solutions lies in database management.

Traditional relational databases have been used for a long time, there are multiple auxiliary tools, and database applications are very stable and reliable. However, (relational) databases have a complex hierarchy. As a result, data processing takes a long time. It is difficult for a traditional relational database to process over 1 TB of data and support high-level data analysis.

The Parallel Database System is a new-generation high-performance database system. It breaks down the complex hierarchy into independent units. The units are isolated from each other, and their relationship hierarchy is simple, which is the core of parallel database systems. By dividing a large database into small ones and storing them on different nodes, Parallel Database Systems process data in a parallel manner. The failure of one unit does not affect other units. In addition, Parallel Database Systems inherit all the advantages of a relational database.

With parallel databases, we can create more data categories when data is carried and stored. During data analysis, the Business Intelligence analysis tool does not require data categorization. Instead, the tool directly analyzes the data and provides the results, greatly improving data analysis efficiency.

Apache Hadoop is currently used by many companies that have to store large amounts of data. The data can be stored in local datacenters or in the Cloud. Facebook, Yahoo and Google all store their data using a Hadoop-based system. Other companies have also adopted Hadoop but have created their own applications to work together with Hadoop:

 Amazon: it uses Amazon's S3 (Simple Storage Service).
 Microsoft: especially created for use in Cloud storage solutions, there is Microsoft's Azure.
Huawei OceanStor 9000

OceanStor 9000 Big Data storage architecture

(Diagram:
Application layer: video surveillance, HPC, Web disk, bill query, net-surfing behavior analysis, precision marketing and business promotion, accessing data as files, objects, query/retrieval and data analysis through NFS, CIFS, HDFS, Object, SQL and MR/HBase interfaces.
Data processing layer: the distributed database WushanSQL, enterprise-class FusionInsight Hadoop, the distributed file system WushanFS, and archiving.
Hardware node layer: multiple nodes working together.)

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 13
The Huawei OceanStor 9000 is the solution that Huawei provides for storing Big Data. The OceanStor 9000 Big Data Solution contains everything you need to manage Big Data.

It combines data storage, backup and analysis (unified management, hardware platform and networking) in one product that is easy to manage. The file system directly manages the underlying disks, eliminating complex RAID configuration and LUN division steps. The OceanStor 9000 is highly scalable, with up to 288 nodes that can be configured to work together.

All nodes are integrated into the OceanStor 9000 hardware platform. The internal network can be 10GE or high-speed InfiniBand. Therefore, the OceanStor 9000 delivers excellent performance while ensuring low latency, high bandwidth, and high concurrency. To suit various application scenarios, the OceanStor 9000 provides high-OPS nodes, large-bandwidth nodes, and large-capacity nodes. Users can configure a flexible number of the various node types based on performance and capacity requirements.

The OceanStor 9000 supports multiple interface and data types, including NAS interfaces (NFS, CIFS, and POSIX), target interfaces (REST and SOAP), database interfaces (JDBC and ODBC), and backup and archiving interfaces (VTL and OST). The OceanStor 9000 solution is perfectly qualified for storage of core production data, and for business data storage and analysis.
File system key technologies — Unified namespace

(Diagram: a unified namespace covering a whole domain versus independent namespaces per file system; each namespace contains directories and files.)

Description
• A unified file system namespace is provided externally. The namespace can use and manage all the available capacity of a system. File system space is presented externally as directories.
• A namespace is automatically created along with system startup and is named after the system name.

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 14
Although the data can be stored on various storage devices and access to the data is arranged through multiple nodes, the total amount of data appears to be in one location. The intelligent file system within the OceanStor 9000, called Wushan, presents all data (or files, because that is what you really access) as stored in one single namespace. A namespace is the symbolic reference to the physical location of an object. Normally files are stored in directories. Directories are parts of a file system. Multiple directories are grouped into a namespace. Multiple hardware devices (the physical locations of the files) would normally lead to multiple namespaces.

In an OceanStor 9000 there can be up to 288 nodes, where each node can have its own storage capacity, which allows an OceanStor 9000 to store up to 50 PB of data. However, when accessing the information, it will appear to be in one single namespace and the data appears to be stored on one storage device. Metadata and data are stored on each node, which acts as both a metadata server and a data server. When accessing file data, the Wushan distributed file system locates the metadata server to which the target file belongs, obtains the data distribution of the target file from the metadata server, and then accesses the nodes to complete data access.

Managing the metadata is one of the strong points of the OceanStor 9000. It does this very efficiently, so even in a Big Data system with many petabytes of data the performance of the system is outstanding. Metadata is organized based on a dynamic subtree structure. All metadata in a namespace is grouped into name subtrees. Each name subtree is allocated to one Meta Data Service (MDS). One MDS can manage multiple subtrees. Multiple nodes running multiple MDSs provide high performance.
Overview of OceanStor 9000 key technologies

(Diagram: the file system is surrounded by the key technologies — load balancing, dynamic storage tiering, global cache, quota management, and Erasure Code.)

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 15
The above picture shows the key technologies of the OceanStor 9000. In this module we only want to introduce the OceanStor 9000; there is another course that explains the workings of the OceanStor 9000 in more detail. A short explanation of the key technologies follows next.

Load Balancing.
A service called InfoEqualizer divides the workload across multiple nodes.

Dynamic Storage Tiering.
With this function, data that is accessed often is automatically placed on high-performance storage devices. Less frequently accessed data is moved to slower (and cheaper) storage devices.

Quota Management.
The administrator can monitor and control the usage of storage capacity and the number of files for individual users of the OceanStor 9000.
Erasure Code

Erasure code is the technical term for the storage virtualization technique Huawei uses for storing files on the physical disks of its NAS devices and protecting the data. In module 5 the RAID technology was discussed; that is the traditional protection against failing disks. In module 9 the technology Huawei uses, RAID 2.0+, will be explained. The next pictures show that erasure code offers better protection of files and also better performance in case data has to be recovered.
Overview of OceanStor 9000 key technologies

Erasure Code

• Main technology designed to prevent file loss
• Big files are chopped into 4 GB parts
• Parts can be spread over multiple disks, over multiple OceanStor 9000 systems, across multiple racks
• Offers a very high, selectable, protection level for files
• At first glance erasure code resembles RAID technology

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 16
Internally the OceanStor 9000 of course stores the data in SCSI blocks on physical disk drives. However, from the outside it looks like the OceanStor 9000 chops files into smaller parts and uses a RAID-like technology to store them internally.

All the advantages of RAID can now be applied to files that are stored on the OceanStor 9000. The big difference is that with RAID we think about protecting data when disks fail, while with erasure code it can be even better: files can be protected against loss even if a complete OceanStor 9000 node fails, or even a full rack with several OceanStor 9000 nodes!

Added to these obvious advantages, the RAID-like approach also helps the rebuilding of the system when a disk, a node (a single OceanStor 9000), or a rack of nodes fails.
Inter-Node distributed RAID

(Diagram comparing failure tolerance:
 Traditional RAID 5 tolerates a concurrent failure of one disk or node.
 Traditional RAID 6 tolerates a concurrent failure of two disks or nodes.
 N+1, N+2, N+3 or N+4 redundancy tolerates a concurrent failure of up to four disks or nodes.)

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 17
The above picture is a representation of an animated slide that is shown during the course. Here the concept of erasure code is shown clearly.

Inter-Node distributed RAID storage 1 — N+M

File data is divided into N (for example 3) data fragments, and M (for example 2) redundant fragments are calculated. N ranges from 2 to 16; M ranges from 1 to 4.

 Writing data fragments to different nodes improves data read/write performance, ensures high data reliability and service availability, maintains optimal disk utilization, and maximizes return on investment (ROI).
 As long as the number of failed disks in the cluster does not exceed M (the number of redundant data fragments), the OceanStor 9000 implements data reconstruction across nodes to quickly restore lost data, thereby ensuring data reliability of the system.
 Any available space can serve as hot spare space, eliminating the hot spare disk problem in traditional RAID and improving storage utilization.

(Diagram: three source data fragments and two redundant data fragments of a file are stored on the disks of five different nodes. Storing three data fragments and two redundant fragments on five nodes is used as an example.)

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 18
The above image is a more technical explanation of the erasure code technology. It can be used to determine how much hardware (disks and/or nodes) is needed to get a specific level of redundancy.
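As a rough illustration (not an official sizing tool), the N+M scheme can be summarized in a few lines of Python: the usable fraction of the raw capacity is N/(N+M), and up to M simultaneous disk or node failures can be tolerated:

    def nm_scheme(n, m):
        """Summarize an N+M erasure-coding layout (N data + M redundant fragments)."""
        if not (2 <= n <= 16 and 1 <= m <= 4):
            raise ValueError("OceanStor 9000 supports N from 2 to 16 and M from 1 to 4")
        return {
            "fragments_per_stripe": n + m,          # minimum number of disks/nodes involved
            "tolerated_failures": m,                # concurrent disk or node failures survived
            "capacity_efficiency": n / (n + m),     # fraction of raw capacity usable for data
        }

    print(nm_scheme(3, 2))   # the 3+2 example from the slide: 5 fragments, 2 failures, 60% efficiency
    print(nm_scheme(16, 4))  # 20 fragments, 4 failures, 80% efficiency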

OceanStor 9000 hardware structure

P Series — High Performance Storage Node:
 P12: 2U, 3.5", 12 drives
 P25: 2U, 2.5", 25 drives
 P36: 4U, 3.5", 36 drives (HD editing, news production, high-end HPC)

C Series — Large Capacity Archiving Node:
 C36: 4U, 3.5", 36 drives (on-line media assets, HPC, video surveillance)
 C72: 4U, 3.5", 72 drives (near-line media assets, video surveillance)

I Series — Analysis Node:
 I25: 2U, 2.5", 25 drives (big data analysis, video analysis)

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 19
The OceanStor 9000 hardware contains storage nodes, network devices, keyboard, video, and mouse (KVM) devices, and modems. Storage nodes come in 3 series: P Series is short for High Performance Storage Node, C Series is short for Large Capacity Archiving Node, and I Series is short for Analysis Node. The application scenarios for these different nodes have been explained in the slide.

  Optional model   Description
  P12              2 U, 12 data disks (typical configuration: 12 SATA disks, or 1 SSD + 11 SATA disks)
  P25              2 U, 25 data disks (typical configuration: 1 SSD + 24 SAS disks)
  P36              4 U, 36 data disks (typical configuration: 1 SSD + 35 SATA disks)
  C36              4 U, 36 data disks (typical configuration: 36 SATA disks)
  C72              4 U, 72 data disks (typical configuration: 72 SATA disks)
The OceanStor 9000 network architecture contains the front-end service network and rear-end
storage network.

Recommended networking: Front and Back End 10Gb

(Diagram: application servers and a management server connect through two 10GE switches to the OceanStor 9000 nodes; the nodes are interconnected by a back-end 10GE network built with two stacked 10GE switches; a GE switch connects the management network ports of each node and the management server.)

Networking features:
• Front-end 10GE + back-end 10GE (default typical networking)
• Separation between front-end and back-end networks
• Fully redundant networking
• The GE switch is connected to the OceanStor 9000 management network ports on each node and also connected to the management server

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 18
OceanStor 9000 networking structure:

 The front-end service network is used for connecting the OceanStor 9000 to a user network.
 The back-end storage network is used to internally interconnect all nodes of the OceanStor 9000.
 The IPMI network is used for connecting the OceanStor 9000 to the customer's maintenance network.

The OceanStor 9000 supports multiple types of networks, including the 10GE network, InfiniBand network, and GE network, to meet different network requirements.

Note: 10GE = 10 Gbit/s and GE = 1 Gbit/s.

Questions

1. What are the main differences between traditional data and big data?
2. Name five reasons why we now have so much data that needs to be collected.
3. Describe the concepts of Hadoop and OBS.

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 19
Answers

1. Big Data is largely unstructured data. Traditional data is stored as blocks or as files; Big Data is stored as objects. Big Data solutions work independently from the underlying hardware.

2. Five answers are:
 Social media is used more and more.
 More bandwidth is available.
 Images are generated in much higher resolutions.
 Many tasks have been converted into digital tasks (taxes, webshops, travel arrangements, bookings).
 The Internet of Things.

3. Hadoop uses a structure built on top of physical storage hardware and organizes the data as objects. Using a distributed file system, data is no longer dependent on its physical location. Also, with the use of a MapReduce function, the tasks (searching for metadata that tells the system where the physical data is) can be split up into smaller subtasks. The subtasks are then forwarded to multiple nodes that together process them in parallel.

Exam Preparation

1. Big Data solutions are primarily used to store what type of data?
   a. Mostly structured data
   b. Mostly unstructured data
   c. Both structured data and unstructured data
   d. None of the above

2. What are characteristics of HUAWEI's OceanStor 9000 big data solution?
   a. Integration of data storage, backup, and analysis
   b. Support for multiple namespaces only
   c. Can support up to 128 nodes
   d. Support for dynamic storage tiering
   e. Quota management for capacity and/or number of files
   f. Support for CIFS and NFS

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 20
Answers

1. C.
2. A, D, E, F.
Summary

• Definition and characteristics of big data.
• Key big data technologies:
  □ Object Based Storage.
  □ Parallel computing.
  □ Hadoop.
• Architecture and features of the HUAWEI OceanStor 9000 big data product.

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 20
Thank you

www.huawei.com

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 22

OHC1109107

Backup and Recovery

www.huawei.com
Introduction

In this module we will look at the options a customer has to implement a disaster recovery method. The method will allow the ICT staff to recover data when it has been lost. For most companies this disaster recovery is not as dramatic as the title suggests. It is not about real disasters, as in most cases the "disaster" is caused by one of the company's employees accidentally deleting files (and with that losing some data). A backup strategy is typically implemented to recover from these scenarios. As roughly 80% of all data loss is caused by human "intervention", it is important for a company to have a backup strategy in place.

Objectives

After this module you will be able to:

 Describe the backup concepts and topologies.
 Understand backup technologies.
 Explain the steps required to set up a backup strategy.
 Know about Huawei backup solutions and applications.
 Know the concepts of Disaster Recovery.

Module Contents

1. Backup concepts and topologies: LAN-based and LAN-free.
2. Backup structures: D2T, D2D and D2D2T.
3. Backup strategy.
4. Deduplication.
5. Huawei Backup Solutions and application.
6. Disaster Recovery introduction.
re
Mo

HCNA-storage V3 | OHC1109107 Backup and Recovery Page | 245


e n
m/
co
i.
we
ua
.h
ng
ni
ar
le
: //
tp
ht
s:
ce
ur
so
Re
ng
ni
ar
Le
re
Mo

Page | 246 HCNA-storage V3 | OHC1109107 Backup and Recovery


What is a backup?

In previous modules we have discussed the importance of data for an organization. It is therefore very important to understand the risks of not having the data anymore. If we understand the risks, it is logical that we have to try to prevent losing the data. For that we have to implement a backup strategy. Any backup strategy has to be made with the assumption that the amount of data that can be lost is known. This is the so-called Recovery Point Objective or RPO. For each company there can be different RPO requirements, ranging from minutes (banks, airline companies, government) up to hours or even days. This module focuses on the traditional backup strategy using backup servers and backup software.

At the end of the module there will be a short introduction of disaster recovery methods.
.h
ng
What is a backup?

ni
In information technology, a backup, or the process of backing up,
ar
le
refers to the copying and archiving of computer data so it may be
used to restore the original after a data loss event.
: //
tp

Workstation
LAN
ht

Agent Application
s:

Backup
server server
ce

Tape library
ur

Storage device
so
Re

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 3


ng

A backup system usually consists of the following components:


ni

1. Backup server
ar

The backup server is the PC or a UNIX server where the backup software resides.
Le
re

2. Backup software
Backup software is the core of a backup system. It is used to make and manage copies of the
Mo

production data on the storage media. Typical backup software includes Symantec Backup
Exec, NetBackup and CommVault.

HCNA-storage V3 | OHC1109107 Backup and Recovery Page | 247


3. Storage device
A storage device is used to store backup data. It can be a disk array, a physical tape library,
or a virtual tape library (VTL).

en
There are two methods used to build a backup environment. The first one is the LAN-based

m/
backup topology. In a LAN-based backup topology the network is used for moving the data from

co
the application server to the backup server, but also for the command flow. With the command

i.
flow we mean the communication between the components of the backup system. For instance

we
the command send from the backup server to tell an agent (running on an application server) to

ua
send data. Another example of a command could be the request send from the backup server to

.h
the backup device to select a specific tape from the tape library.

ng
ni
LAN-based backup topology
ar
le
: //

LAN
tp

Data flow
ht

Data flow

Backup server
Agent Agent
s:

Media server
ce

File Application
server server
ur

Backup storage
so

device

Data flow
Re

Command flow
ng

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 4


ni
ar

In the above picture it is clear that the same network is used for data and commands. In many
Le

cases the infrastructure uses just a single network. In that case making backups during office
hours is an extra load on the network traffic. The users of applications that access their data via
re

the network could then find that the network is becoming overloaded (=slow). That is also the
Mo

reason that many backup jobs are run outside working hours. That of course can be a problem
when the RPO is set in such a way that multiple backups have to be made in working hours!

Page | 248 HCNA-storage V3 | OHC1109107 Backup and Recovery


The data backup process involves these steps:
1. The backup server sends a control command to the application server that runs the agent
program.
2. The agent on the application server receives these commands and sends the backup data to
the backup server.

n
3. The backup server then moves the data to the backup device and has it backup up on the

e
correct media (i.e. tape)

m/
4. Optionally the data is not stored locally on the application server but on a file server. An agent

co
on the file server will then send the data.

i.
5. The backup server receives data and stores it on the storage device.

we
ua
The whole process will be executed over a LAN connection.

.h
ng
Advantages:

ni
- The backup system is separate from the application system. The backup process does

ar
not occupy hardware resources on the application server.
le
Disadvantages
//

- A backup server is needed, increasing the investment.


:
tp

- The backup agent program affects the performance of the application server.
ht

- Data backup is based on a LAN, affecting the network performance.


- Backup services must be independently managed and maintained.
s:

- A demanding requirement is posed on the processing capability of the users' applications.


ce
ur
so

The next method of building a backup system is the LAN-free backup topology. There, as the
name suggests, backup data flows and command flows use different physical networks.
Re
ng

This of course eliminates the impact of one flow on the other.


ni
ar
Le
re
Mo

HCNA-storage V3 | OHC1109107 Backup and Recovery Page | 249


LAN-free backup topology

LAN-free backup topology

LAN

en
m/
Application Application Backup server
server server Media server

co
i.
we
SAN
Backup

ua
Storage
device

.h
ng
Storage device Storage device
Data flow

ni
Command flow

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved.


ar
Slide 5
le
//

In LAN-free backup, control flows are transmitted over a LAN while data flows are not. As a result,
:

this backup mode does not occupy LAN resources.


tp
ht

The data backup process:


s:

1. The backup server sends a control flow to the application server that runs the agent program.
2. The application server receives the command and reads production data.
ce

3. The media server reads data directly from the application server and sends the data to the
ur

backup media.
so

4. Optionally the data will be transported from the storage device to the backup server, again
Re

directly via the SAN network.


ng

Advantages
ni

- Backup data flows do not occupy LAN resources, improving the backup performance without
ar

impacting the network performance.


Le

- Using LAN-free backups allows backups to be run even in working hours as the data
movement will not impact the LAN performance.
re
Mo

Disadvantages
- The backup agent program affects the performance of the application server.
- The method demands a SAN infrastructure to work. This makes the solution more expensive
than a LAN-based solution which can be applied in smaller NAS or DAS infrastructures.

Page | 250 HCNA-storage V3 | OHC1109107 Backup and Recovery


Components of a backup system

Components of a backup system

e n
Backup software Backup media Backup server

m/
 Creates the backup  Tape library  Houses the backup

co
policy.  Disk array software.

Backs up data to the

i.
Manages the 
  Virtual tape library
backup media. (VTL) storage media

we
according to a
 Performs other  CD-ROM
preset backup
extended functions.

ua
tower/library
policy.

.h
ng
ni
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved.
ar
Slide 6
le
//

A complete backup system usually consists of backup software, backup media, and backup
:

server(s).
tp
ht

The backup software is used to implement a backup strategy, manage the backup media and
s:

perform the data backup. Using backup software offers the possibility to protect application data,
application programs, and if desired complete application systems.
ce
ur

Some advanced backup software can realize more functions. Complete backup and recovery
so

solutions are designed to protect, back up, archive, and recover data in various computing
Re

environments which include large enterprise data centers, remote groups, desktops, and laptops.
Backup software can provide management solutions spanning the entire lifecycle of the data.
ng

Data stored on heterogeneous media, including disks, tapes, and optical storage media, can also
ni

be managed on site or remotely. With the help of backup software, data can be easily recovered
ar

from device faults, virus attacks, or unexpected data loss. Examples of advanced backup
Le

applications are NetBackup, CommVault and Backup Exec.


re

Tape libraries have been the traditional backup medium for many years, however nowadays, we
Mo

can also use disks and Virtual Tape Libraries for data backup.

HCNA-storage V3 | OHC1109107 Backup and Recovery Page | 251


1. Disk-to-Disk backup structure

Disk-to-Disk backup structure

ne
SAN (Fiber Channel/iSCSI)

m/
co
i.
we
ua
Primary disk array Primary disk array Backup disk array

.h
ng
Backup data flow

ni
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved.
ar Slide 7
le
//

The design of a backup system has many factors to consider:


:
tp

1. Amount of data to be backed up.


ht

2. Frequency of the backups (Recovery Point Objective).


3. Time allowed to make the backups in.
s:

4. Retention period, or how long the backup data should be kept for recovery purposes.
ce

5. Granularity and integrity. With that we mean how detailed the backups should be. Should data
ur

be restorable on volume level, on folder level or file level? Also: Is the restore requirement
so

such that recovered data should be application ready?


Re

6. How much time is allowed to restore data? (Restore Time Objective).


ng

All these factors combined there is of course the final question: How much money do I have to
ni

invest in a backup strategy in order to prevent losing money (or really data that represents
ar

money)?
Le

This last question can be answered when we have established the Cost Of Downtime or COD.
re

The COD is a value in Dollars, Euro’s or Yuan that shows how much money is lost if the data is
Mo

not available.

Page | 252 HCNA-storage V3 | OHC1109107 Backup and Recovery


Depending on all factors and also taking in the consideration of the total cost of ownership (TCO)
there are a few backup system methods we can choose from:

 Disk-to-tape library backup (D2T).


 Disk-to-disk backup (D2D).
 Disk-to-VTL backup (D2V).

n
 Disk-to-disk-to-tape data backup (D2D2T).

e
m/
co
D2D backup is a solution that uses disk arrays as both the primary and backup storage media.

i.
The disk-to-disk backup can be implemented by the following two methods:

we
 Users deploy a disk array on the backup system as backup media. With the help of the

ua
backup software, the application data is backed up to the disk array connected to the backup

.h
server.

ng
 Users deploy a new disk array for the backup system as backup media. The new disk array

ni
and the existing online disk array should be of the same brand and model. The data

ar
protection functions provided by the disk arrays, such as LUN copy, snapshot, and remote
le
replication, copy data from the existing disk array to the new backup disk array.
: //
tp

2. Disk-to-tape backup structure


ht

Disk-to-Tape backup structure


s:
ce
ur

SAN (Fiber Channel/iSCSI)


so
Re
ng
ni

Disk array
Physical tape library
ar
Le
re

Backup data flow


Mo

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 8

D2T backup is the most widely used backup structure. Although D2T is the most commonly used
method with companies to back up their data, there are also those who think that this method has

HCNA-storage V3 | OHC1109107 Backup and Recovery Page | 253


potential risks and challenges on the running and management of the backup system. The
combination of a physical tape library and backup software can facilitate the planning of backup
policies. However, faults caused by the physical tape library usually affect the implementation of
the backup policy and the backup plan of the entire system. According to the IDG, the annual
maintenance cost of a physical tape library is 15% of its deployment cost. The physical tape

n
library is comprised of many high-precision mechanical parts. Damage to any of these parts may

e
result in a system breakdown. Faults caused by physical tape drives and mechanical arms are

m/
primary causes of physical tape library faults. Once the physical tape library is faulty, users have

co
to return it to the manufacturer or replace it with a new one. This may take anything from a few to

i.
several days or even longer. During this period, no backup can be made and the backup policy is

we
affected greatly.

ua
.h
The I/O bottleneck on the physical tape library also considered a problem. Physical tapes are built

ng
for sequential reads and writes and do not allow random reads and writes. Therefore, the I/O
performance of a physical tape drive is fixed. If the existing I/O performance cannot meet the

ni
requirement, users can only add more physical tape drives in an attempt to enhance the

ar
performance. Since the cost of deploy physical tape drives is high, the stability of the whole
le
backup system is decreased with the increase of the physical tape drives. The reliability of the
//

physical tapes in a physical tape library decreases with each used. Many users suffer from the
:

data loss due to the physical tape damage or inaccessibility.


tp
ht

The capacity of each physical tape is fixed. The backup policy usually selects several tapes for
incremental or differential backup and the other tapes for full backup. However, since the usage
s:

limit of tapes used for incremental backup and differential backup is low, the user investment is
ce

often wasted.
ur
so
Re
ng
ni
ar
Le
re
Mo

Page | 254 HCNA-storage V3 | OHC1109107 Backup and Recovery


3. Disk-to-VTL backup structure

Disk-to-VTL backup structure

e n
SAN (Fiber Channel/IP/SAS)

m/
co
i.
we
Disk array

ua
VTL

.h
ng
Backup data flow

ni
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved.
ar
Slide 9
le
//

In the D2V backup structure, the VTL uses physical disks as storage media, virtual engines
:

integrate mechanical arms, disk drives and tape slots. Without mechanical parts, the reliability
tp

and maintainability of the VTL are equal to those of disk arrays and much better than those of the
ht

physical tape library. A VTL uses physical disks as its storage medium. When compared with the
s:

sequential read/write performance of physical tapes, physical disks deliver higher performance in
random reads/writes as well as high-speed addressing. The I/O performance of a VTL is
ce

determined by its external bandwidth, instead of the types and quantity of the physical tape drives
ur

inside it.
so
Re

A VTL uses virtual engines and the connected servers also regard the VTL as a physical tape
library. However, a physical tape library must run specific backup software before being accessed.
ng

A VTL uses physical disks to store data but does not use them as the storage medium, protecting
ni

data from accidental deletion and viruses.


ar
Le

The VTL improves the backup efficiency and ensures the reliability of the backup system, but
does not increase the system investment. However, some issues must be taken into
re

consideration. First, the VTL stores all the data on physical disks, and these disks are scattered in
Mo

RAID groups. The need to archive important backup data imposes challenges on the VTL,
because users cannot locate which physical disk the data is stored on unlike on a physical tape
library, where one can easily locate the correct tape.

HCNA-storage V3 | OHC1109107 Backup and Recovery Page | 255


Secondly, the VTL cannot compress data as in the same way that the physical tape library does.
Thirdly, the VTL does not provide the on-demand storage function, that is, the VTL can only
provide fixed space for incremental or differential backup, but cannot provide only the space that
is actually required.

ne
m/
4. Two Stage backup structure – D2D2T

co
i.
Two Stage backup structure - D2D2T

we
ua
.h
D2D2T: Disk-to-disk-to-tape backup

ng
ni
SAN (Fibre Channel/IP/SAS)

ar
Offline archiving
le
//

VTL
:

Online disk array Online disk array Tape library


tp
ht

Backup data flow


s:

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 10


ce
ur

The D2D2T is the most suitable backup method, Meeting the requirements on reliability,
so

manageability and performance.


Re

A VTL is safe, reliable and of high performance while a physical tape library can support the
ng

media movement. The best solution must combine their advantages as follows:
ni
ar

 Use physical disks as a level-1 backup medium and protect them with RAID.
Le

 Use the VTL technology on host clients to ensure the manageability and security of the
backup system.
re

 Employ the on-demand storage function to fully utilize storage resources.


Mo

Allow data to be exported from virtual tapes to physical tapes, facilitating the archiving and remote
storage of backup data.

Page | 256 HCNA-storage V3 | OHC1109107 Backup and Recovery


Deduplication

Deduplication

e n
m/
C A B C D

co
BIndexAand metadata
B A A De-dupe

i.
we
D B B C A

ua
Original data Duplicates removed

.h
ng
ni
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved.
ar Slide 11
le
//

The deduplication technology eliminates duplicate data by using hardware or software to reduce
:

the occupied storage space.


tp
ht

The deduplication process is as follows:


s:

 Stores original data on the storage media.


ce

 Compares data of a fixed size of data block.


ur

 Stores the unique data in the deduplicated space. Compares new data with the unique data in
so

the space, deletes the duplicate data, and stores the index and metadata in the specified
space.
Re
ng

Benefits to backup:
ni

 Saves great amounts of storage space, leverages storage resources, and lowers users' TCO.
ar

 Reduces the required backup window.


Le
re
Mo

HCNA-storage V3 | OHC1109107 Backup and Recovery Page | 257


Comparison between deduplication and compression

Comparison between deduplication and compression

en
m/
Item Function Implementation Data Content Condition

co
Compares blocks and Retains only Has blocks
Deduplication retains only unique unique data available for

i.
data sources. sources. comparison.
Saves storage

we
space.
Has the

ua
Implements a
Does not modify compression
Compression compression
original data. software
algorithm.

.h
available.

ng
ni
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. ar Slide 12
le
: //

Deduplication can be regarded as a special type of compression. A deduplication algorithm


tp

divides data into blocks (each of 4 KB, 16 KB, or 32 KB) and compares the blocks to find
ht

duplicates. Unique data blocks are then saved on to the physical disk space.
s:

Deduplication is primarily used to delete duplicate data before backup, so it requires basic data
ce

blocks for comparison.


ur
so

Compression is implemented by a compression algorithm to reduce the file size. Deleting


Re

duplicate data is only one of the file compression methods.


ng
ni
ar
Le
re
Mo

Page | 258 HCNA-storage V3 | OHC1109107 Backup and Recovery


Deduplication categories

Deduplication categories

Deduplication can be divided into multiple categories by location,

e n
time, granularity, and scope.

m/
co
i.
we
ua
.h
ng
ni
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved.
ar
Slide 13
le
//

 Deduplication at source end.


:

Deletes duplicate data and then copies data to a backup device.


tp
ht

 Deduplication at the target end.


Transfers data to a backup device and deletes duplicate data during data storage.
s:

Inline deduplication.
ce

Deletes duplicate data before writing data to disks.


ur
so

 Post-processing deduplication.
Re

Deletes duplicate data after writing data to disks.


ng

 Adaptive deduplication.
ni

Uses inline deduplication in environments with low performance requirements and post-
processing in environments with high performance requirements.
ar
Le

 File-level deduplication.
Checks the properties of files to be stored according to the file system index and compares
re

them with files that have already been stored. It is also called single instance storage (SIS). If
Mo

no identical file exists, the technology stores the new files and updates the index. If an identical
file already exists, it stores only the pointer that points to the existing file.

HCNA-storage V3 | OHC1109107 Backup and Recovery Page | 259


 Block-level deduplication.
Divides files and objects into data blocks with fixed or with variable lengths, computes the
Hash values of these new data blocks and compares them with values with those of the
existing data blocks, and deletes duplicate data blocks if their values are the same.

n
 Byte-level deduplication.

e
Searches for and deletes duplicate data by byte, and usually uses a compression algorithm to

m/
compress data for storage.

co
i.
 Local deduplication.

we
Compares only new data with data stored on the local storage device.

ua
.h
 Global deduplication.

ng
Compares new data with data stored in all storage devices in the deduplication domain.

ni
ar
le
Key indexes of Deduplication
: //
tp

Key indexes of Deduplication


ht

Customers' concerns Key indexes


s:

Deduplication
ce

How much space and TCO can be saved?


ratio
ur

How long does a deduplication process take? Deduplication


Will it affect the backup window? performance
so

Is data after deduplication reliable and


Re

Data reliability
recoverable?
ng

How long can data recovery (DR) be ready in DR Replication


scenarios? performance
ni

How long does DR take after production data is Recovery


lost? performance
ar
Le

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 14


re
Mo

Page | 260 HCNA-storage V3 | OHC1109107 Backup and Recovery


Contents of a backup strategy

Contents of a backup strategy

n
Files, operating systems, databases, raw device backup, backup
Data type
software logs, etc.

e
m/
Backup media Disks, tapes, backup servers, etc.

co
i.
Backup type Full, incremental, and differential backup.

we
Data retention period 1 week, 1 month, 1 year, etc.

ua
.h
Backup period Every day, every week, etc.

ng
Backup window Time elapsed for a backup operation.

ni
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved.
ar Slide 15
le
//

 Data type: the type of data that we need to protect.


:
tp

 Backup media: the device to which protected data is backed up. It is also the backup
ht

destination.

 Backup type: the backup method, including full backup, incremental backup, and differential
s:

backup.
ce

 Data retention period: the period of time when data is saved on storage media. It is also the
ur

validity period of backup data.


so

 Backup period: the frequency of backup jobs. It can be daily, weekly, monthly, etc.
Re

 Backup window: the period of time from the start to the end of a backup job.
ng

 Selecting a backup policy:


ni

- Perform a full backup job for an operating system or application software every time the
ar

operating system is updated or new application software is installed.


Le

- Perform a full backup job for critical application data during off-peak hours every day,
re

because the data is updated every day but the total amount of data is not large.
Mo

- Perform a full backup job for critical applications every week or month, and perform
incremental backup jobs for them with a higher frequency, because the data is only
updated slightly every day.

HCNA-storage V3 | OHC1109107 Backup and Recovery Page | 261


Backup strategy – Data type

Backup strategy — data type

n
Files, databases, operating systems, application software, etc.

e
m/
Files/folders Word / Excel / PPT / photo...

co
i.
Database Oracle / DB2 / Informix / Sybase

we
Logical volumes Oracle / MySQL

ua
.h
Operating systems Windows / Red Hat / SUSE...

ng
Backup software Backup Exec / NetBackup / CommVault...

ni
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved.
ar Slide 16
le
: //
tp
ht
s:
ce
ur
so
Re
ng
ni
ar
Le
re
Mo

Page | 262 HCNA-storage V3 | OHC1109107 Backup and Recovery


Backup media

Backup media

Common backup media include disk arrays, tape libraries, VTLs and

e n
CD-ROM towers/libraries.

m/
co
i.
we
ua
Disk array Tape library VTL CD-ROM

.h
tower/library

ng
ni
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved.
ar Slide 17
le
//

Disk array.
:
tp

 Advantages:
ht

high performance, fast read/write speed, easy maintenance, redundant components


(including power supplies, fans, and controllers), easily impacted by environmental
s:

factors (including temperature, humidity, and dust), and RAID protection for disk arrays.
ce
ur

 Disadvantages:
so

high initial investment, unsatisfactory storage efficiency, prone to man-made mistakes.


Re

Physical tape library.


ng

 Advantages:
ni

tape-based storage system (a combination of drives, slots, mechanical arms, and tapes),
ar

low cost per storage unit, separation of data and read/write devices, theoretically
Le

unlimited storage space.


re

 Disadvantages:
Mo

high hardware failure rate, fragile tape media easily impacted by environmental factors
(including temperature, humidity, and dust), high management and maintenance costs,
poor device redundancy (even large-scale tape libraries only have redundant power
supplies), long backup and restoration periods, and applicable to sequential reads/writes
only.

HCNA-storage V3 | OHC1109107 Backup and Recovery Page | 263


Virtual Tape Library (VTL).

 Advantages:
easy management, high performance, adaptive to existing tape storage systems, high storage
performance, and advanced technologies (including compression and deduplication).

 Disadvantages:

ne
high cost per storage unit (as disks are used as the storage medium), high deployment cost,

m/
and lower capacity expansion capability than tape libraries.

co
i.
CD-ROM tower/library.

we
 Advantages:

ua
low prices of drives and disks, long data retention periods, and low requirements on storage

.h
environments.

ng
 Disadvantages:

ni
low read/write speed, limited numbers of drives, data sources, and supported users, and

ar
inability to repeatedly write data to and erase data from the storage media. le
: //

Backup strategy – Backup Window


tp
ht

Backup strategy — Backup Window


s:
ce

A backup window is the interval of time during which it is possible to


back up data from a system without degrading performance on the
ur

system.
so

80
Re

70
60
50
ng

Network
40
utilization
30
ni

20
10
ar

0
Le

9
00

00

00
:0

:0

:0

:5
0:

4:

8:
12

16

20

23
re

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved.


Mo

Slide 18

Business continuity and backup windows are in conflict. A good backup system must balance
these two factors.

Page | 264 HCNA-storage V3 | OHC1109107 Backup and Recovery


As shown in the figure, the network utilization between 8:00 to 12:00 is the highest. So this period
of time is not a suitable backup window as it will affect the system services. Perform data backup
during periods when the network utilization is low.

For most companies backup windows have become smaller over the last couple of years. We live

n
in a 24 hour economy and people need access to their data almost around the clock.

e
m/
The solution is to improve the speed with which we can do the physical backups. One way is to

co
get the best (fastest) possible hardware. The second way is using differential and incremental

i.
backups. This allows the time to backup the relevant data to be much shorter. However: these

we
two methods have one downside: restoring data takes longer than with the traditional full backup.

ua
.h
ng
ni
Backup strategy – backup type ar
le
//

Backup strategy— backup type


:
tp
ht

Full backup Differential backup Incremental backup

Sun. Sun. Sun.


s:

Mon. Mon. Mon.


ce

Tue. Tue. Tue.


ur

Wed. Wed. Wed.

Thu. Thu. Thu.


so

Fri. Fri. Fri.


Re

Sat. Sat. Sat.

Sun. Sun. Sun.


ng

 Full backup every day  Full backup once a week  Full backup once a week
ni

 Easy to manage  Differential backup on  Incremental backup on


other days other days
ar
Le

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 19


re

Full backup: Copies all data from a volume to one (ore more tapes)
Mo

 Advantages:
fast data recovery based on the previous full backup data and short recovery windows.
 Disadvantages:
large storage space occupation and long backup windows.

HCNA-storage V3 | OHC1109107 Backup and Recovery Page | 265


Differential backup: copies all changes since the last full backup to tape. Full backups are taken in
the weekend and daily differential backups are made. As the week progresses it more data has to
be backup up!

 Advantages:
reduced storage space occupation compared with full backup, and short backup and

n
recovery windows.

e
m/
 Disadvantage:

co
Data recovery must depend on the previous full backup data and differential backup data.

i.
Incremental backup: copies all changes since the last incremental backup. Full backups are taken

we
in the weekend and daily incremental backups are made. Per day only the daily changes are

ua
backed up.

.h
 Advantages:

ng
small storage space occupation and short backup windows.

ni
 Disadvantages:

ar
Data recovery must depend on the previous full backup data and incremental backup
le
data of each time, resulting in slow data reconstruction and large recovery windows.
: //
tp
ht
s:
ce
ur
so
Re
ng
ni
ar
Le
re
Mo

Page | 266 HCNA-storage V3 | OHC1109107 Backup and Recovery


Backup strategy – retention period

Backup strategy — retention period

n
A retention period defines how long backup data can be saved. Only

e
after this period expires can the backup data be overwritten.

m/
co
l Dispose Create

i.
we
ua
l Archive Data life cycle l Protect

.h
ng
l Migrate l Access

ni
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved.
ar Slide 20
le
//

The system administrator defines a retention period for each backup. When the retention period
:

expires, the backup software automatically deletes the backup information from the backup
tp

software database (but not from tapes and disks). This way, users can no longer find related
ht

backup data.
s:

When data is created, the important data is protected normally because it is frequently accessed.
ce

The importance of the data decreases over time and will eventually be migrated to a storage
ur

media with a larger capacity but lower performance. As time goes by, and the importance of the
so

data continues to drop, it will be archived on the least important storage media. After the data
Re

retention period expires, the data will be disposed, and this backup set will become invalid.
ng

Note:
ni

A backup set is a group of data that is backed up in a batch. A backup set can be used for either
ar

full backup or incremental backup.


Le
re
Mo

HCNA-storage V3 | OHC1109107 Backup and Recovery Page | 267


Huawei Backup Products: VTL6900 family

Huawei Backup Products: VTL6900 family

n
Dedicated disk backup system — VTL6900

e
m/
Cluster

co
All-in-one device Single-node

i.
we
 Architecture:  Architecture:  Architecture:

ua
all-in-one device. single-engine + array. clustered engines + array.

 Max. performance: 2.34 TB/hr.  Max. performance: 9 TB/hr.  Max. performance: 31 TB/hr.

.h
 Max. capacity: 48 TB.  Max. capacity: 864 TB.  Max. capacity: 1728 TB.

ng
 Flexible and easy deployment.  Easy to expand, high efficiency,  Stable and reliable.
and low energy consumption.

ni
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved.
ar Slide 21
le
: //
tp
ht
s:
ce
ur
so
Re
ng
ni
ar
Le
re
Mo

Page | 268 HCNA-storage V3 | OHC1109107 Backup and Recovery


VTL centralized backup solution

VTL centralized backup solution

VTL centralized backup

n

Small-scale centralized data backup
solution

e
All-in-one (20 TB to 50 TB at 2.34 TB/hour).
LAN

m/
device ‾ All-in-one device: low cost and easy
deployment.
IP IP IP IP IP IP

co
‾ Medium-scale centralized data backup

i.
Backup (50 TB to 500 TB at 9 TB/hour).

Fibre Channel SAN server ‾


Single-node + array: high cost-
Single-node

we
effectiveness, easy management and
maintenance.

ua
‾ Large-scale centralized data backup

.h
(500+ TB at 31 TB/hour).

Inline/Post- ‾ Highly reliable cluster: high


VTL6900

ng
processing performance/concurrent flow backup,
deduplication Cluster and central management.

ni
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved.
ar Slide 22
le
//

Small- and medium-scale sites:


:
tp

 Capacity: 20 TB to 160 TB.


ht
s:

 Retention period: 1 to 6 months.


ce


ur

Performance: 400 MB/s to 1250 MB/s.


so

 Budget: limited.
Re
ng
ni
ar
Le
re
Mo

HCNA-storage V3 | OHC1109107 Backup and Recovery Page | 269


VTL Backup and archiving solution

VTL backup and archiving solution

Application scenarios:
Data center

n
• Large amounts of historical data requires long-term

e
retention (6+ months).
LAN

m/
• The existing physical tape library delivers low
backup performance. IP IP IP IP IP IP

co
• Backup management and maintenance are
complicated.

i.
• The existing devices must be reused to reduce cost.
Backup
. server
Fibre Channel SAN

we
Customer benefits:
• The VTL6900 functions as a high-performance

ua
archiving cache, greatly reducing the backup
window. FC
FC

.h
• Existing physical tape libraries are used to provide
large-capacity archiving storage resources.

ng
• The VTL6900 automatically archives backup data
VTL6900 Physical tape library
to the tape library, simplifying data management.

ni
ar
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. le Slide 23
//

Tiered backup:
:
tp

 The existing physical tape library must be reused.


ht

 The original backup performance is lower than 200 MB/s.


s:
ce

 The backup retention period is longer than 12 months.


ur
so
Re
ng
ni
ar
Le
re
Mo

Page | 270 HCNA-storage V3 | OHC1109107 Backup and Recovery


Introduction to HDP3500E

Introduction to HDP3500E

n
The HDP3500E is a high-performance backup device that

e
combines backup software, backup server, and backup media.

m/
• The HDP3500E runs NetBackup to deliver all-round data

co
protection for mission-critical services.

i.
we
ua
.h
ng
ni
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved.
ar
Slide 24
le
//

Twelve slots in 2 U space, 18 TB available backup space, and four GE service network ports
:
tp

HDP3500E systems can scale out to form a backup domain so as to achieve a linear growth of
ht

backup capacity and performance.


s:
ce
ur
so
Re
ng
ni
ar
Le
re
Mo

HCNA-storage V3 | OHC1109107 Backup and Recovery Page | 271


HDP3500E + tape library solution

HDP3500E + tape library solution

en
m/
co
HDP3500E
master server

i.
Backup domain
HDP3500E
media server

we
...
Fiber Channel switch

ua
HDP3500E
media server

.h
ng
Disk array Physical tape library
Backup data flow
LAN

ni
SAN

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved.


ar Slide 25
le
//

In this solution, multiple HDP3500E systems form a backup domain. One HDP3500 system
:

functions as a master server while the rest function as media servers.


tp
ht

Backup data is transmitted over a LAN. The backup data is first saved on local disks in
s:

HDP3500E systems, and is then periodically migrated to the physical tape library. This tiered
storage of backup data improves storage utilization and the overall total cost.
ce
ur

If the storage space becomes insufficient, more HDP3500E systems can be added to the backup
so

domain to improve backup performance and increase the overall storage space. External physical
Re

tape libraries can also be added to the domain to achieve tiered data storage and improve the
storage utilization. The external tape libraries must support the Vault function for offline disk
ng

management.
ni
ar
Le
re
Mo

Page | 272 HCNA-storage V3 | OHC1109107 Backup and Recovery


Backup Software Architecture

Backup software architecture

NetBackup Global data

n
NetBackup architecture master server manager

e
m/
co
i.
we
NetBackup
media server

ua
.h
ng
NetBackup
client/agent

ni
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved.
ar Slide 26
le
//

The NetBackup software consists of the following components:


:
tp

 Master server:
ht

Manages all modules in a backup system as well as monitors the progress of backup
policies, backup tasks, and data recovery tasks.
s:
ce

 Media server:
ur

Manages media devices as well as communication and I/O operations among media
so

devices. It is the middleware between backup servers and backup media.


Re

 Client:
ng

Functions as the target backup device and is used to communicate with the master server.
ni

 Agent:
ar

Required for database backup.


Le


re

Management console:
Provides an intuitive GUI used to manage backup software.
Mo

HCNA-storage V3 | OHC1109107 Backup and Recovery Page | 273


Introduction to Disaster Recovery

Introduction to Disaster Recovery

• Some companies must have an ICT

n
infra-structure that must provide

e
m/
Business Continuity even when a
disaster takes place. When creating

co
the ICT infrastructure they must
assume a worst case scenario.

i.
• Examples of disasters are fires, floods,

we
earthquakes or large scale failures in
the power grid of a state or country.

ua
• For disaster recovery solutions

.h
the RTO is typically less than
minutes and sometimes it

ng
should be (near) to zero.

ni
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved.
ar Slide 27
le
//

There are many examples where companies did have a good backup strategy but when those
:

companies were faced with a disaster their backup strategy proved to be too limited.
tp
ht

Fortunately disasters like the 2012 tsunami and the eruption of volcanos do not happen on a
s:

weekly basis. However, if it happens to your company then the company may not survive. To
think of a good disaster recovery plan means you have to think of the worst case scenario. What
ce

is the greatest disaster, the building in which my data is stored, can experience. If your company
ur

is based in earthquake zones or is next to a river that floods every so many years you know it is a
so

matter of time until things go wrong.


Re

If you are in the neighbourhood of a nuclear power plant or if you are near to an oil refinery it is
ng

not predictable when a disaster takes place. However, when it happens you are impacted. Even
ni

when the building itself is not damaged in any way, the police or fire brigade will have you leave
ar

the building for security reasons. From that point your local data is inaccessible.
Le

A disaster recovery plan will then tell what the next steps are to keep the business up and running.
re

Most disaster recovery plans are based on using two sets of data that are kept as far away from
Mo

each other as possible. This should prevent both the local and the remote site to be struck by the
same disaster.

Page | 274 HCNA-storage V3 | OHC1109107 Backup and Recovery


Introduction to Disaster Recovery

In a good disaster recovery plan:

n
Loss of user data is prevented.

e
m/
• Access to recovered data is immediate.

co
• Applications to work with the recovered data is in place.

i.
• Staff to use the applications and recovered data is in place.

we
• There are still traditional backup strategies in place.

ua
.h
ng
ni
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 28

ar
le
It is one thing to have the user data available after the disaster struck, but more must be done.
//

There must be servers that run the applications that can use the recovered data. The data itself
:

must not be too old as for disaster recovery the RTO and RPO are typically very low. The data
tp

itself is not the only thing: There must be people that work with the data. Many disaster recovery
ht

plans went wrong because, although they managed to recover the correct data, there were not
enough people to use the data.
s:
ce

For organizations that have very short RTO requirements having tapes in remote locations is not
ur

working. Restoring large amounts of data from a tape is usually very time-consuming.
so
Re

Having a good disaster recovery plan does not mean you can choose not to implement a backup
strategy. Disaster recovery is no substitute for backups because in most cases manually deleting
ng

data (mostly by mistake=> user error) means that the data will also be removed on the remote
ni

site automatically. In those situations backup tapes are needed.


ar
Le

There are many disaster recovery methods than can be used. Two popular ones are replication
and host-based mirroring. We will briefly discuss these methods and add a little bit of information
re

on alternatives too.
Mo

HCNA-storage V3 | OHC1109107 Backup and Recovery Page | 275


Introduction to Disaster Recovery

Disaster recovery Solutions:

n
Replication.

e
m/
• Host Base mirroring & Clustering technologies.

co
• Intelligent backup software.

i.
we
ua
.h
ng
ni
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 29

ar
le
: //

1. Replication
tp
ht

Introduction to Disaster Recovery: Replication


s:
ce

With replication the goal is to have a (near)identical data set available


ur

on a remote site that is as far away as possible.


so

• Synchronous replication
2 4
Re

1 3

6 5
ng

• Asynchronous replication
ni

2 5
ar

1 4
Le

3 6
re
Mo

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 30

Replication comes in two different versions: Synchronous and Asynchronous.

Page | 276 HCNA-storage V3 | OHC1109107 Backup and Recovery


With synchronous replication we can be certain that the data kept on the local site is identical to
the recovery data on the remote site. The first step in synchronous replication is to establish a link
between the two sites by connecting the two storage systems together.

Now, when the host application writes data to the volume (in the local storage device) the data

n
gets stored there but the host will not get a confirmation that it has been stored. First the next

e
steps should be taken: send a copy of the written data to the remote storage device. Once the

m/
remote storage device has stored the copy of the data it sends an acknowledgement back to the

co
local site. Only when the acknowledgement is received by the local storage device it will send a

i.
confirmation of the write to the host. The entire process, steps one through six, takes time. This

we
time is very much dependable on the time needed to move the copy of the data to the remote site

ua
and the acknowledge signal back to the local site. This time is referred to as the round trip time.

.h
Applications will have to be patient for the confirmation of their writes, but when they receive the

ng
confirmation they have the guarantee that the data is now physically present on two different
locations.

ni
ar
In the situation where the round trip time is too long for the application to wait for, asynchronous
le
replication should be used.
: //

With asynchronous replication the host gets the confirmation directly after the write. At that
tp

point it is not certain that the data has a copy on the remote site. That takes another waiting
ht

period that again is mostly depending on the round trip time.


s:

With asynchronous replication one must understand therefore that there can be a difference
ce

between the data on the local site and the remote site.
ur
so

Most vendors of storage devices provide replication in both methods. On top of that they have
Re

tools that make the process of failover (automatic or manual) very easy. Of course Huawei
supports all replication options a customer could ever wish for!
ng
ni

Because replication is, as they call it, storage-based there is a requirement to have two (near)
ar

identical storage devices spread over the two sites. It is the intelligence built in the storage
Le

devices that perform the replication tasks. Often the replication feature is an extra option that has
to be activated through a purchased license.
re
Mo

The investment costs for all of that is not always achievable / affordable. The alternative could
then be host-based mirroring.

HCNA-storage V3 | OHC1109107 Backup and Recovery Page | 277


2. Host-Based Mirror

Introduction to Disaster Recovery: Host-Based Mirror

Host-based replication is the processes of using servers to copy data

ne
from one site to another.

m/
• Copies file data on application level.

co
• Uses LAN / WAN.

i.
we
ua
.h
ng
• Hosts can be configured as nodes of a stretched cluster for seamless
failover.

ni
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved.
ar Slide 31
le
//

With host-based mirroring the actual copying is done on the servers that house the application
:

who’s data should be copied. It is typically a much cheaper solution and it still has the same end
tp

result. The data is safely stored on a remote site. However: the performance of host-based mirror
ht

is lower than traditional replication. And also: the distances that can be reached for copying are
s:

often limited to less than 100 km.


ce

If the distance is relative small host-based mirroring can be done between two servers that are
ur

part of a cluster. In that case the two servers (in cluster terms we call them nodes) actually run the
so

application together. That means that in case of a node crash the other node will take over
Re

immediate. Of course when one of the data volumes is lost the copy is accessible on the remote
site.
ng
ni

Next to replication and host-based mirroring there are other possibilities for disaster recovery. In
ar

the next section we will highlight a few of the alternatives.


Le
re
Mo

Page | 278 HCNA-storage V3 | OHC1109107 Backup and Recovery


3. Backup software

Introduction to Disaster Recovery: Backup software

Some advanced backup software offer disaster recovery options:

e n

m/
Automatic replication of data that was already backed up.

co
• Virtual instant restore of even TB sized volumes.

i.
• Log shipping in combination with backup data.

we
ua
.h
ng
ni
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved.
ar
Slide 32
le
: //
tp
ht
s:
ce
ur
so
Re
ng
ni
ar
Le
re
Mo

HCNA-storage V3 | OHC1109107 Backup and Recovery Page | 279


Questions

Questions

1. How many backup topologies are available? What are their advantages

ne
and disadvantages?

m/
2. What are the categories of deduplication technology?

co
i.
we
ua
.h
ng
ni
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved.
ar Slide 33
le
: //
tp

Answers
ht

1. LAN-based backup and LAN-free backups. With LAN-based backup the data that is being
s:

backed up goes across the same network as the regular user data. This may lead to
ce

congestion. With LAN-free backups a dedicated network must be built to be used for backup
ur

purposes only. More costly but with less congestion problems.


so

2. Ten Deduplication categories:


Re

a. Deduplication at the source end.


ng

b. Deduplication at the source end.


c. Inline deduplication.
ni

d. Post-processing deduplication.
ar

e. Adaptive deduplication.
Le

f. File-level deduplication.
g. Block-level deduplication.
re

h. Byte-level deduplication.
Mo

i. Local deduplication.
j. Global deduplication.

Page | 280 HCNA-storage V3 | OHC1109107 Backup and Recovery


Exam Preparation

Exam Preparation

n
Multiple response questions:

e
m/
1. Common backup media include:
a. Tape library.

co
b. Disk array.
c. VTL.

i.
d. CD-ROM tower/library.

we
2. By granularity, deduplication can be divided into:

ua
a. File-level deduplication.
b. Block-level deduplication.

.h
c. Byte-level deduplication.
d. Deduplication at source end.

ng
ni
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved.
ar
Slide 34
le
//

Answer (Multiple response questions):


:
tp

1. A B C D.
ht

2. A B C.
s:
ce
ur
so
Re
ng
ni
ar
Le
re
Mo

HCNA-storage V3 | OHC1109107 Backup and Recovery Page | 281


Summary

Summary

n
Backup concepts and topologies.

e
• Backup technologies.

m/
• Backup policies.
• Huawei backup solutions and application.

co
• Disaster Recovery Introduction.

i.
we
ua
.h
ng
ni
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved.
ar Slide 35
le
: //
tp
ht
s:
ce
ur
so
Re
ng
ni
ar
Le
re
Mo

Page | 282 HCNA-storage V3 | OHC1109107 Backup and Recovery


e n
Thank you

m/
co
www.huawei.com

i.
we
ua
.h
ng
ni
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 37

ar
le
: //
tp
ht
s:
ce
ur
so
Re
ng
ni
ar
Le
re
Mo

HCNA-storage V3 | OHC1109107 Backup and Recovery Page | 283


Mo
re
Le
ar

Module 8
ni
ng
Re
so
ur
ce
s:
ht
tp
://
Basics of Cloud Computing
le
ar
ni
ng
.h
ua
we
i.
co
m/
en
www.huawei.com
Mo
re
Le
ar
ni
ng
Re
so
ur
ce
s:
ht
tp
://
le
ar
ni
ng
.h
ua
we
i.
co
m/
en
Introduction

In this module we will give a glimpse on the future. It is the conviction of most ICT gurus that the
future of ICT is in “The Cloud”. Many of us will already have some of our data stored in the cloud
because many vendors like Microsoft, Google and Apple offer storage capacity to their users. The

n
real cloud solution of the future will go one step further than just to offer storage capacity. The

e
m/
cloud of the future will offer both storage capacity as well as computing power. Essentially we, as
users, only need a very simple device and connect to all resources we need in “our” cloud.

co
i.
we
ua
Objectives

.h
ng
After this module you will be able to

ni
 Know the concepts and backgrounds of cloud computing.

ar
Master the deployment and business models of cloud computing.

le
Know the core technologies and value of cloud computing.
 Master Huawei cloud computing solutions.
: //
tp
ht

Module Contents
s:
ce

1. Concept and background of cloud computing.


ur

2. Models of cloud computing.


so

3. Core technologies and value of cloud computing.


4. Huawei cloud computing solutions.
Re
ng
ni
ar
Le
re
Mo

HCNA-storage V3 | OHC1109108 Basics of Cloud Computing Page | 287


e n
m/
co
i.
we
ua
.h
ng
ni
ar
le
: //
tp
ht
s:
ce
ur
so
Re
ng
ni
ar
Le
re
Mo

HCNA-storage V3 | OHC1109108 Basics of Cloud Computing Page | 288


Concept of Cloud Computing

Concept of cloud computing

e n
Cloud computing is a style of computing in which dynamically

m/
scalable and often virtualized resources are provided as a service
over the Internet.

co
— From Wikipedia

i.
we
ua
.h
ng
ni
ar
le
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 3
: //

Cloud computing is a style of computing in which dynamically scalable and often virtualized
tp

resources are provided as a service over the Internet. The term "cloud" is a metaphor for the
ht

network and Internet.


s:

In earlier modules, we initially used the picture of a cloud to indicate a network infrastructure or
ce

the Internet.
ur
so

In this module we will use the cloud symbol, and the general term cloud, to describe an ICT
Re

infrastructure as a whole. In that infrastructure the users can obtain desired resources through
networks in an on-demand and scalable manner. In other words, in the cloud there are computing
ng

resources available and storage capabilities. For the users it is not visible where the resources
ni

come from. The only thing is that the cloud guarantees that the computer and storage resources
ar

you need are available when you need them. Cloud computing resources are therefore
Le

dynamically scalable and virtualized, provided using Internet. End users do not need to know the
details about the cloud infrastructure, acquire professional knowledge, or even directly operate
re

the cloud. They only need to know what resources they want and how they can obtain these
Mo

resources over the Internet.

HCNA-storage V3 | OHC1109108 Basics of Cloud Computing Page | 289


Cloud computing from a business perspective

For many companies the term Cloud Computing appears to offer the ideal solution for their ICT
problems. The problems are partly technical (hardware, software, knowledge IT staff, disaster
recovery) as well as economical (costs of hardware, software licenses, training, costs for cooling

n
and power). Especially for an external cloud, that is when somebody else is responsible for the

e
cloud, it is just a matter of ordering resources for the business to use.

m/
co
Business perspective:

i.
cloud computing = information power plant

we
ua
Changes in consumption models Changes in business models
Cloud computing provides software, hardware, Users do not need to buy all the required

.h
and services over the Internet. Users obtain hardware or software, but only need to buy
services using browsers or lightweight information services.
terminals. buy information services.

ng
Age of PC Age of Internet

ni
Enterprise data center Internet data center

ar
App2 App1 App3 Computing and storage:
1 migrated from LANs to Internet App1 App2 Appn
le
LAN
//

• Decoupled hardware
Internet and software
:

App1 App1 App1 • Hardware sharing


App2 App2 App2
tp

Appn Appn Appn


ht

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 4


s:
ce

A number of services can be distinguished from the perspective of the business owner:
ur
so

IaaS : Infrastructure as a Service. Here the user only worries about resources and
Re

not about hardware. The IaaS has to provide everything and keep it running.
ng

PaaS : Platform as a Service. With PaaS the provider will offer a platform to the user.
ni

The user is often a software developer. In traditional environments a software


ar

developer had to consider hardware and operating systems when creating


Le

applications. With PaaS he only has to worry about writing the best application
as the underlying platform is taken care of by the PaaS provider.
re
Mo

SaaS : Software as a Service. This has been the first implementation of the cloud
computing technology. The user had minimal hardware to think about and the
SaaS provider arranged a working environment with an operating system and

HCNA-storage V3 | OHC1109108 Basics of Cloud Computing Page | 290


the necessary applications. All the annoying jobs like licensing and software
updates are now handled by the provider of the SaaS environment.

e n
m/
co
i.
we
ua
.h
ng
ni
ar
le
: //
tp
ht
s:
ce
ur
so
Re
ng
ni
ar
Le
re
Mo

HCNA-storage V3 | OHC1109108 Basics of Cloud Computing Page | 291


Cloud computing from a technical perspective

Technical perspective:
cloud computing = computing / storage network

e n
m/
Service and
Community Search Commerce … Computing File Storage
application software

co
i.
Application service API Cloud capability service API
Cloud platform
Cluster management Parallel processing Automation software: the soul of

we
cloud computing
Operating system + virtual machine Distributed storage

ua
Servers and storage

.h
supporting mass
information processing
0.3inch

ng
Ethernet switches
connecting to

ni
thousands of servers

ar
le
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 5
: //

There are a lot of changes in Cloud Computing compared with traditional ICT infrastructures like
tp

NAS and SAN. In traditional ICT infrastructures an application would run on a physical server and
ht

the application would be stored on a local disk or an external disk. The user data also would be
stored locally (DAS) or on an external disk (SAN / NAS volume). The ICT administrator was given
s:

the task to keep all the hardware components running. All the data generated must be protected
ce

against data loss. It meant that within every organization there must be knowledge about server
ur

technology, application, operating systems, networking, storage technology and backup / disaster
so

recovery technologies. Imagine the problems a traditional ICT infrastructure could face today with
Re

ever increasing amounts of user data being generated. Also look at the demands applications
have today that might exceed the potential of any single server.
ng
ni

A very important concept within ICT nowadays, and also the fundamental technology with Cloud
ar

Computing, is Virtualization.
Le
re
Mo

HCNA-storage V3 | OHC1109108 Basics of Cloud Computing Page | 292


Virtualization

Concept of cloud computing: Virtualization

e n
Storage virtualization.

m/
The act of abstracting, hiding, or isolating the internal function of a storage

co
(sub) system or service from applications, compute servers or general

i.
network resources for the purpose of enabling application and network
independent management of storage or data.

we
ua
Compute virtualization.

.h
Software that enables a single server hardware platform to support multiple

ng
concurrent instances of an operating system and applications.

ni
ar
le
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 6
: //

There are two kinds of virtualization: storage and compute.


tp
ht

Storage virtualization is the goal to have storage become a resource (or commodity) that is
available to the user. The user itself has no idea about the technical aspect of managing the
s:

hardware. The only thing the user specifies is the number of gigabytes he would need and the
ce

performance requirements of the storage.


ur
so

Compute virtualization (or sometimes called server virtualization) separates the operating
Re

system and the applications from the physical hardware needed to run them. The traditional
approach when setting up an ICT infrastructure is to take hosts, install operating systems and
ng

install applications on the hosts. There was almost always a one-application-per-server policy so
ni

many physical servers were used to run the many applications a company needed. In most
ar

situations the application would only use a limited fraction of the resources (CPU, RAM, storage
Le

capacity) available.
re

With compute virtualization the goal is to emulate multiple virtual servers running on the same
Mo

physical hardware. Well known compute virtualization vendors are VMWare; XEN; KVM and
Virtuozzo.

HCNA-storage V3 | OHC1109108 Basics of Cloud Computing Page | 293


A good Cloud environment can support/offer both virtualization methods. Then the environment
will offer these specifications:

 On the virtualized platforms, applications can be expanded, migrated, and backed up.

n
 Dynamic expansion: Applications can be dynamically expanded. More servers can be added

e
m/
into existing server clusters in real time to increase the computing capability.

co
 On-demand deployment: The cloud computing platform allocates resources and computing

i.
capabilities to applications on demand.

we
ua
 High reliability: Virtualization scatters applications and computing resources to different

.h
physical servers. If one server breaks down, a new server can be added using the dynamic

ng
expansion function, ensuring the proper operation of applications and computing.

ni

ar
High cost efficiency: Cloud computing employs a virtual resource pool to manage all
resources, posing low requirements on physical resources. The cloud formed using low-cost
le
PC’s can deliver higher performance than a mainframe computer.
: //
tp

Cloud computing:
ht

a combination of business models and technologies


s:
ce

Cloud platform Cloud service


ur

Distributed &
parallel software SaaS PaaS
so

Internet
Re

IaaS
Servers & storage
ng
ni

Cloud service Cloud platform


On-demand business models + Distributed and parallel software systems
ar

Huge Capability
Le
re

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 7


Mo

In on-demand business models, user application software and data is stored in the cloud, and can
be accessed using clients. Cloud service providers offer services to customers based on their
needs and charge fees correspondingly.

HCNA-storage V3 | OHC1109108 Basics of Cloud Computing Page | 294


Background of cloud computing

• Software engineering has • The interaction mode is becoming more


changed from machine or and more fine tuned to the users habits.

Interaction mode
language oriented to

n
requirement, network, and

e
Keyboard Mouse Touch Voice
service-oriented.

m/
co
Computing device

i.
1970S Process-oriented

we
1980S Object-oriented
1990S Component-oriented

ua
2000S Field-oriented

.h
2010S Service-oriented 1970s 1980s 2010s
1990s 2000s
Mainframe Midrange PC Desktop Mobile

ng
computer computer Internet Internet

ni
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 8

ar
le
Over the past fifty years there was a big change in computing devices: mainframe computers in
//

the 1960’s; midrange computers in the 1970’s; PCs and LANs in the 1980’s and desktop Internet
:

and mobile Internet in the 1990’s.


tp

Computing devices are changing from standalone computers to network connected devices.
ht

Communications technologies and networks are developing at a greater speed than predicted by
s:

Moore's law.
ce

Secondly, in the last forty years, there was a change in the way software was engineered: In the
ur

1970’s, flowcharts were used in top-down programming styles. Later the focus was on object-
so

oriented programming. Then in the 1990’s the focus moved to service-oriented programming that
Re

we still see today. Software engineering is no longer oriented towards hosts, such as their
machines, languages, and middleware, but is oriented towards requirements and services over
ng

networks. This is what we call Software as a Service (SaaS). The development of cloud
ni

computing software aims to provide services to customers to suit their needs.


ar
Le

Thirdly, over the last half-century, the way humans interact with computers has changed. In the
beginning all programs required input via a keyboard. A big change was the move to the graphical
re

user interfaces that used a mouse to give inputs to the program. Today there are computers and
Mo

applications that can be operated based on touch, voice, and gestures. The interaction method is
no longer computer-centered but user-centered. On the cloud computing infrastructure, users are
not required to be computer engineers of or IT specialists, but only need to focus on their core
applications.

HCNA-storage V3 | OHC1109108 Basics of Cloud Computing Page | 295


Driving forces of cloud computing

Driving forces of cloud computing

e n
Low investment, high performance and good user experience

m/
co
Customer
requirements

i.
we
ua
Development Changes in

.h
Diagram
of
Diagram Diagram
business
Diagram
22
technologies 33
models

ng
ni
Virtualization, distributed and parallel
computing, Internet and web technologies Cloud computing as a service

ar
le
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 9
: //

The popularity of cloud computing solutions of course has its reasons. Here are a few examples
tp

of why cloud computing is used:


ht

 Government and enterprise users need high-performance information systems at low


s:

investment costs.
ce
ur

 Individual users want to be able to access their data wherever they are. So often the
so

requirements include that they should be able to use smart phones or tables. This is referred
Re

to as BYOD which is short for Bring Your Own Device.


ng

 The advanced technology used in cloud computing offers low cost storage. But there is more:
ni

in the cloud all data protection options can be offered too (BaaS or Backup as a Service
ar
Le

 The maturity of the broadband technology and the increased population of subscribers have
made Internet-based services mainstream. That not only applies to the performance but also
re

to the scalability in distance. There is high speed internet almost everywhere now.
Mo

 In the age of Big Data it is almost a necessity to adopt Cloud computing. The success of
many cloud implementations gave shown that it works! Examples are Google’s Google Docs;
Microsoft’s Office 365 and Apple’s iCloud.

HCNA-storage V3 | OHC1109108 Basics of Cloud Computing Page | 296


Cloud computing models

Deployment models of cloud computing

e n
m/
Private

co
cloud

i.
we
Public

ua
Hybrid cloud
cloud

.h
ng
ni
ar
le
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 10
: //

Cloud computing has three deployment models: private cloud computing, public cloud computing,
tp

and hybrid cloud computing.


ht

 Private cloud computing: It is usually operated and used by the same organization.
s:

Huawei's data centers use this deployment model. Huawei is both the operator and end user
ce

of the data centers.


ur
so

 Public cloud computing: It is like a public switch. It is operated by a telecommunications


Re

carrier but used by the public.


ng

 Hybrid cloud computing: Its infrastructure is a combination of the previous two types of
ni

clouds. Looking from the outside it appears to be one entity, one cloud. But it remains two
ar

different environments. An enterprise using a hybrid cloud would store its important data
Le

(such as financial data) in its private cloud and unimportant data in the public cloud. Another
example is e-commerce websites. The service volume of an e-commerce website during
re

ordinary days is stable, so the website is able to operate these services in its private cloud.
Mo

However, during events such as sales promotion activities, the service volume surges and the
website has to rent servers from the public cloud to process its services. Resources in both
the public and private cloud can be scheduled in a unified manner, so this is a typical
application of hybrid cloud.

HCNA-storage V3 | OHC1109108 Basics of Cloud Computing Page | 297


Business models of Cloud computing

Business models of cloud computing

e n
m/
User SaaS
CRM, email, games, instant message…

co
PaaS

i.
Developer Cloud service
Database, web server, IDE…

we
IaaS
Storage, network, server…

ua
User

.h
Virtualization

ng
Server Storage Network

ni
ar
le
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 11
: //

Infrastructure as a Service (IaaS)


tp

IaaS providers offer all kinds of infrastructure resources to users, including processors, storage
ht

devices, networks, and other basic computing resources. With IaaS, users can deploy and run
any software from operating systems to applications. Without the need to manage or control cloud
s:

computing facilities, users can select the operating system, storage space, and applications, as
ce

well as control network components (for example, the firewall and load balancer). Amazon Elastic
ur

Compute cloud (EC2) is a typical representative of IaaS.


so
Re

Platform as a Service (PaaS)


PaaS providers offer application development platforms (such as Java and .net) running on the
ng

cloud computing infrastructure to users. Without the need to manage or control cloud computing
ni

facilities, users can control their deployed application development platforms. Microsoft Azure is a
ar

typical application of PaaS.


Le

Software as a Service (SaaS)


re

SaaS providers offer applications (such as CRM, ERP, and OA) running on the cloud computing
Mo

infrastructure to users. Salesforce online CRM is a typical application of SaaS.

Other than the previous three business models, there are some other business models: Backup
as a service (BaaS), Desktop as a Service (DaaS); Remote Management as a Service (RmaaS)

HCNA-storage V3 | OHC1109108 Basics of Cloud Computing Page | 298


Categories of cloud computing

Categories of cloud computing

e n
• Dividing a big physical machine to small APP APP APP APP APP APP

m/
virtual machines
VM1 VM2 VMn VM1 VM2 VMn

co
VMM VMM

i.
Physical machine Physical machine

we
ua
• Aggregating smaller physical machines APP1 APP1 APP1
into a big physical machine MapReduce MapReduce MapReduce

.h
Physical machine Physical machine Physical machine

ng
APP1

ni
Physical machine

ar
le
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 12
: //

The deployment of cloud computing can be divided into two categories: The division of a big
tp

physical machine to small virtual machines and the aggregation of smaller physical machines into
ht

a big physical machine.


s:

The division of a big physical machine into small virtual machines:


ce

Virtualizes the resources of a high-performance physical machine, and uses these resources to
ur

create a resource pool that combines the functions of computing, storage, and networking. Key
so

technologies used in this method include virtualization, surveillance, scheduling, and migration of
Re

virtual machines. It is applicable in scenarios supporting time-division multiplexing. Amazon EC2


is a typical application of this category.
ng
ni

The aggregation of smaller physical machines into a big physical machine:


ar

Group a number of multiple low-performance physical resources into a single logical high-
Le

performance physical resource. With this method, a task that requires a lot of resources can be
allocated to multiple small physical machines for processing. Key technologies used in this
re

method include task breakdown and scheduling, distributed communications bus, and global
Mo

consistency. Services like the ones provided by Google are a typical application of this category.

HCNA-storage V3 | OHC1109108 Basics of Cloud Computing Page | 299


Compute Virtualization

Core technologies of Cloud Computing-Virtualization

e n
Application

m/
co
Operating system

i.
we
Virtualization layer

ua
.h
ng
ni
Computing and storage

ar
le
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 13
: //

Compute or Server Virtualization refers to the creation of a virtual machine with physical IT
tp

resources. It plays an important role in large-scale data center management and solution delivery.
ht

It is the solid foundation for cloud computing. Using this technology, computing, storage, and
network resources can be virtualized as services required by users. A major player in the server
s:

virtualization market is VMware. It allows a physical server with its resources (CPU cycles, RAM,
ce

network interfaces etc.) to be “split up” into multiple virtual servers. Each of the virtual servers (or
ur

vm’s) has its own RAM, amount of CPU’s, network cards and they can all run different operating
so

systems. Each of the vm’s lives isolated within the so-called hypervisor software of the
Re

virtualization server. That means that if a vm runs into trouble and crashes the other vm’s living on
the same physical virtualization server will not be impacted.
ng
ni

VMware offers many tools that allow an ICT infrastructure to be made with all characteristics of
ar

the cloud: scalable, flexible, secure and manageable.


Le

A VMware administrator has control over all virtualization servers; over networking components
and storage resources. From one user interface the administrator can create new vm’s; make
re

backups of them; relocate them to other storage devices or even can even migrate them.
Mo

Migration is the feature where a running vm “moves” from one virtualization server to another.
This is done because the current server does not have enough resources, because the server is
down or when the server has to go down because of maintenance.

HCNA-storage V3 | OHC1109108 Basics of Cloud Computing Page | 300


All applications on the vm’s will then continue to run while to move takes place!

Storage Virtualization – Thin provisioning

n
Core technologies of cloud computing-Thin provisioning

e
m/
co
Client Client Client Client

i.
we
ua
FusionCompute

.h
ng
Thick Thin Thin
20 GB 40 GB 80 GB

ni
ar
Equipment room le
20 GB 20 GB 40 GB
: //

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 14


tp
ht

An administrator can use various storage devices to be used as storage for the virtualization
environment. Storage is a very important factor within the VMware philosophy. That is because a
s:

virtual machine (vm) is in fact represented by a file. That file has to be accessible to the
ce

virtualization server. The storage assigned to VMware to keep vm’s is referred to as a datastore.
ur

So for many vm’s we need a lot of storage or in other words: we need a (lot of) big datastore(s).
so

Datastores are created and later on datastore capacity will be used to store vm’s on. Datastores
Re

that do not have vm’s yet still consume physical storage space as the creating of a datastore
implies that the storage is allocated to the datastore.
ng
ni

For cost effectiveness there is a feature called thin provisioning which is supported in both the
ar

hardware of the storage device as well as within VMware.


Le

Thin provisioning enables flexible, on-demand allocation of storage space, which improves
re

storage utilization. This is done by not assigning physical storage to a datastore yet. VMware will
Mo

only claim storage capacity from the storage device at the time a vm is created and the space is
actually needed.

HCNA-storage V3 | OHC1109108 Basics of Cloud Computing Page | 301


With thin provisioning, a system can be initially allocated with storage space that is actually
required by the services in the system, but it gives the appearance of having more storage space.
As time goes by and more vm’s are created more virtual disks can be added to expand the
storage space. After all configured storage space is allocated a thin disk is using the same
amount of storage capacity as a thick disk.

ne
VMware in general can use storage capacity from different storage devices to be used to form

m/
datastores. So inside the storage architecture different vendors and different types of storage

co
devices can be used.

i.
we
Space monitoring: This function provides alarms on storage space usage. If the space usage

ua
exceeds the preset threshold, an alarm will be generated. That could be the signal for the

.h
administrator to ask more budget for the expansion of physical storage capacity.

ng
Space reclamation: This is a very useful feature of modern virtualization servers. Imagine that a

ni
thin provisioned volume has been filled up to 80% of the capacity with vm’s. Now the

ar
administrator has decided he wants to remove a number of vm’s that he created for testing
le
purposes. The storage capacity allocated to the thin provisioned volume is now more than he
//

actually needs. Space reclamation will now arrange for all excess space used to be released to
:

the storage devices. Space reclamation is now supported with the latest versions of VMware and
tp

the latest versions of all major operating systems.


ht
s:
ce
ur
so
Re
ng
ni
ar
Le
re
Mo

HCNA-storage V3 | OHC1109108 Basics of Cloud Computing Page | 302


Cloud computing - Resource Planning

Core technologies of cloud computing-Quality of


service (QoS) control

e n
BT downloading Web Oracle

m/
co
i.
we
ua
Fusion Compute

.h
ng
ni
Computing and storage

ar
le
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 15
: //

Resource planning is an important job within the virtualized environment. As the total of all vm’s
tp

uses resources on one or more virtualization servers it is important that no vm is able to claim all
ht

available resources of a virtualization server. On the other hand a vm needs a specific amount of
resources so that the application on that vm performs well. The resources that have to be planned
s:

for are:
ce
ur

 CPU resource
so

Every CPU in the virtualization server has a number of cores and each core has computing
Re

abilities. The normal expression is: a CPU has so many cycles. The performance of a CPU is
the product of the number of core times the individual number of cycles of a single core.
ng

Cycles are expressed with GHz. It shows how many calculations per second a core can do.
ni
ar
Le
re
Mo

HCNA-storage V3 | OHC1109108 Basics of Cloud Computing Page | 303


For resource planning on CPU two parameters are used: Limit and Reservation

The limit of a vm for CPU is the maximum number of cycles the vm can claim. Setting a value
for the limit prevents a virtual machine to use up all resources of a virtualization server.

n
The reservation sets the minimum computing capability needed by a virtual machine. In case

e
a virtualization server has too many vm’s it must run resources might become scarce. At that

m/
point starting a vm would succeed but the vm will have few resources. That would basically

co
mean that the vm will perform poorly. Setting the reservation the vm has a specific amount of

i.
cycles to run on. Unless, of course, there are not enough resources. At that point a

we
reservation will not allow a vm to start.

ua
.h
 Memory resource

ng
Again we have two parameters called Limit and Reservation. Most applications have specific
requirements for RAM to have the application perform well. This would be the reservation.

ni
When the required RAM resources (expressed in GB) are not available; applications will

ar
suffer. There are clever solutions built in VMware but it still is an important parameter. Limits
le
again prevent an application that goes crazy to claim all RAM resources.
: //

 Network resource
tp

This is one of the most complex “problems” in virtualized environments. Reason is the fact
ht

that there are always two separate networks:


One network is physical and it connects virtualization servers and storage devices. The other
s:

network is physical and it allows vm’s to connect to other vm’s. Now in the last case the vm’s
ce

might not be in the same physical virtualization server. So the traffic will then be across both
ur

networks!
so
Re
ng
ni
ar
Le
re
Mo

HCNA-storage V3 | OHC1109108 Basics of Cloud Computing Page | 304


Cloud computing – Load balancing

Core technologies of cloud computing-Load balancing

e n
App App App App

m/
co
20 GB

i.
we
FusionCompute FusionCompute

ua
.h
ng
Computing and storage

ni
ar
le
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 16
: //

One of the most amazing features that Vmware offers is dynamic resource scheduling or DRS.
tp

In a well-designed VMware environment there could be many virtualization servers that together
ht

run hundreds of virtual machines. In the above picture there are just two servers and 4 virtual
machines used, but there is something illogical going on. Three of the vm’s are on one server and
s:

the fourth vm is on another server. DRS could now be setup in such a way that all vm’s are
ce

arranged across the servers so every vm has the resources it needs. If on a server new vm’s
ur

have to be created or started then the vm will look for the most suitable server to “live” on. If a vm
so

finds that it has not enough resources on a specific server it can automatically move to another
Re

server that has more resources available. While the vm is moving from one server to the other the
application is still working.
ng
ni

If we would translate DRS into the cloud computing environment it means that the application (i.e.
ar

your email program) could be running on a virtual machine on any of the physical virtualization
Le

servers.
re
Mo

HCNA-storage V3 | OHC1109108 Basics of Cloud Computing Page | 305


Value of cloud computing

Value of cloud computing

e n
APP1

m/
Consolidation
of APP2 APP2 APP1
servers APP3 APP4 APP3 APP4

co
i.
we
Consolidation of resources, improving utilization Automated scheduling, reducing power consumption

Data can

ua
Central data
be freely
management
accessed
by users.

.h
+

ng
ni
Traditional IT platform Cloud platform

Central data management, enhancing information Efficient maintenance, reducing investment

ar
security le
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 17
: //

 A physical server can be virtualized into multiple virtual machines to process different
tp

applications.
ht


s:

The specifications (such as the CPU and memory) of a virtual machine can be flexibly
adjusted, and the number of virtual machines in a system can be added or reduced, to suit
ce

the changing requirements in computing resources.


ur
so

 Automated scheduling, reducing power consumption


Re

To safe costs for power and cooling dynamic power management (DPM) is added. That could
ng

mean that DRS might decide to consolidate the vm’s onto a smaller amount of servers. That
ni

is of course if these servers have enough resources to run the vm’s. Once this is the case the
ar

servers that are not required anymore will be switched of. This reduces power consumption
Le

and emissions. Of course when there are more vm’s powered on or more resources are
needed it would mean that the servers will be powered on again.
re
Mo

 Central data management, enhancing information security

HCNA-storage V3 | OHC1109108 Basics of Cloud Computing Page | 306


On a traditional IT platform, data is scattered on different application servers, and this has the
risks of single points of failure. In a cloud system, data is centrally stored and maintained.
Resources (like vm’s and datastores) are managed from one user interface.

Huawei FusionCloud solutions

e n
m/
In its entire portfolio Huawei of course has some solutions for building cloud computing
environments. In this section we will briefly discuss them.

co
i.
we
HUAWEI FusionCloud solutions

ua
.h
ng
FusionAccess FusionCloud

ni
Installing VDI on a virtual Installing VDI on

ar
platform makes a standard FusionCube make
desktop cloud solution. the VDI FusionCube.
le
Installing FusionShpere on
FusionCube specific hardware makes the
FusionCube solution.
: //
tp

FusionSphere virtualizes
ht

FusionSphere physical infrastructures, laying


a foundation for the other two
solutions.
s:
ce

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 19


ur
so

Huawei provides three cloud computing solutions:


Re

 FusionSphere (infrastructure virtualization)


ng

 FusionCube (all in one)


ni

 FusionAccess (desktop cloud).


ar
Le

FusionSphere is the basis of the other two solutions, and it is used to virtualize the physical
infrastructure. FusionSphere can be preinstalled on specific hardware to form the FusionCube
re

solution for fast service deployment. A Virtual desktop infrastructure (VDI) can be deployed on
Mo

FusionCube or FusionSphere to form the FusionAccess solution.

HCNA-storage V3 | OHC1109108 Basics of Cloud Computing Page | 307


e n
m/
co
i.
we
ua
.h
ng
ni
ar
le
: //
tp
ht
s:
ce
ur
so
Re
ng
ni
ar
Le
re
Mo

HCNA-storage V3 | OHC1109108 Basics of Cloud Computing Page | 308


Infrastructure virtualization - FusionSphere

Infrastructure virtualization — FusionSphere

e n
Enterprise IT O&M personnel
Third-party
FusionAccess SQL Server personnel

m/
application

co
i.
Enterprise IT system

we
FusionSphere

ua
FusionManager

.h
FusionCompute

ng
ni
Server Storage

ar
le
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 20
: //

FusionSphere virtualizes hardware resources using the virtualization software deployed on


tp

physical servers, so that one physical server can function as multiple virtual servers. The server
ht

workloads are consolidated and new applications and solutions are deployed on idle servers to
keep the consolidation rate high. FusionSphere has two main software components:
s:

FusionCompute and FusionManager.


ce
ur

FusionCompute consists mainly of virtual resource management (VRM) and host components. It
so

virtualizes physical resources and provides virtualized services to data centers.


Re

FusionManager consists of integrated resource management (IRM), self-service provisioning


ng

(SSP), automatic management engine (AME), identity and access management (IAM), unified
ni

portal (Uportal), intelligent data base (IDB), common service and bus (CSB), and unified hardware
ar

management (UHM) systems. It is the management software of data center virtualization that
Le

manages virtual resources, hardware resources, and services.


re

FusionManager reports alarms to the upper-layer network management system (NMS) through
Mo

SNMP interfaces. Computing, storage, and network devices can access FusionManager through
SNMP, IPMI, or SSH.

HCNA-storage V3 | OHC1109108 Basics of Cloud Computing Page | 309


FusionManager obtains configuration and alarm information about virtual resources using
FusionCompute, which manages virtual machines as instructed by FusionManager.

All in one - FusionCube

e n
All in one — FusionCube

m/
co
i.
Computing Cloud infrastructure Cloud
management

we
Elastic Disaster
Network computing recovery Service
management

ua
Storage Virtual Service
Elastic load

.h
private protection
balancing
cloud
SSD card

ng
Security
management
Virtualized infrastructure

ni
iNIC card
FusionCube Virtualized resource scheduling Automation

ar
+ Compression
card Unified
Computing Storage Network
hardware
le
virtualization virtualization virtualization
management
GPU&SNP
: //
tp

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 21


ht

FusionCube consolidates computing, storage, and switching devices, and is preinstalled with
s:

FusionCompute, FusionManage, and FusionStorage. It virtualizes and centrally manages


ce

hardware resources.
ur
so

FusionCube is an open, scalable, and all-in-one virtual system. Its advanced features such as
Re

unified resource management, automatic application deployment help users deploy and maintain
different cloud applications at ease.
ng
ni

FusionCube also allows users to customize, deploy, update, and manage service applications in
ar

both standalone machines and clusters, including Exchange, SharePoint, Enterprise Resource
Le

Planning (ERP), Customer Relationship Management (CRM), Virtual Desktop Infrastructure (VDI),
and SQL Server.
re
Mo

HCNA-storage V3 | OHC1109108 Basics of Cloud Computing Page | 310


Desktop cloud - FusionAccess

Desktop cloud — FusionAccess

e n
m/
Virtual desktop management layer

O&M management system

co
Existing IT system
Access control layer
Cloud terminal

i.
Cloud computing infrastructure

we
Virtualization infrastructure

ua
Server virtualization / Network virtualization /
Storage virtualization

.h
Hardware resources

ng
Server / Storage / Network

ni
ar
le
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 22
: //

FusionAccess delivers virtual desktop applications based on HUAWEI FusionCube and


tp

FusionSphere. By deploying software and hardware on these cloud platforms, users can access
ht

cross-platform applications and even the entire desktop cloud using thin clients (TCs) or other
devices connected to the Internet.
s:
ce

FusionAccess addresses challenges faced by PC’s such as security issues, investment concerns,
ur

and work efficiency considerations. It is a wise choice for financial institutions, large- and medium-
so

sized enterprises, government departments, call centers, customer service centers, medical
Re

organizations, military agencies, and dispersed, outdoor, or mobile offices. Logical architecture of
HUAWEI FusionAccess:
ng
ni

 Hardware resources
ar

Hardware refers to FusionAccess hardware infrastructure, including servers, storage devices,


Le

switching devices, racks, security devices, firewalls, and power supply equipment.
re

 Virtualization infrastructure platform


Mo

It virtualizes various physical resources in the desktop cloud based on resource requirements
of virtual desktops.

HCNA-storage V3 | OHC1109108 Basics of Cloud Computing Page | 311


 Cloud computing infrastructure platform

The cloud computing infrastructure platform includes the following:

□ Cloud resource management: FusionCloud manages virtual user desktop resources


including computing, storage, and network resources.

n
□ Cloud resource scheduling: FusionCloud migrates virtual machines from high-load

e
physical resources to low-load physical resources based on the current system running

m/
status.

co

i.
Virtual desktop management layer

we
This layer authenticates virtual desktop users. This helps to ensure the security of the virtual

ua
desktop application, and to manage sessions of all virtual desktops in the system.

.h
ng
 Access control layer

ni
This layer effectively controls access from terminals. Access control devices include the

ar
access gateway, firewall, and load balancer. le

//

O&M management system


:

This system incorporates service operation management as well as O&M management.


tp

□ Service operation management is responsible for service processes such as account


ht

creation and deletion.


s:

□ O&M management is used to operate and maintain resources in the desktop cloud
ce

system.
ur

□ Cloud terminal
so
Re

It is used to access the virtual desktop. It can be a PC a Thin Client, software client, or mobile
terminal.
ng
ni
ar
Le
re
Mo

HCNA-storage V3 | OHC1109108 Basics of Cloud Computing Page | 312


Questions

Questions

e n
1. What three terms best describe cloud computing?

m/
2. Name four reasons why a company could consider using a cloud

co
computing solution.

i.
3. What is compute virtualization?

we
ua
.h
ng
ni
ar
le
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 22
: //

Answers:
tp
ht

1. Storage virtualization, compute virtualization, parallel processing, dynamic and expandable.


s:

2. Reasons of considering cloud computing are:


ce

- No need to own and maintain much hardware.


ur

- No need to do software patches and updates.


so

- Total cost of ownership is lower.


Re

- Cloud computing solutions can offer disaster recovery and backup.


- Lower education costs for ICT staff.
ng
ni

3. With compute virtualization the resources of a physical server are subdivided to “build”
ar

smaller virtual servers that borrow parts of the resources of the physical server like CPU
cycles, RAM memory and network interfaces.
Le
re
Mo

HCNA-storage V3 | OHC1109108 Basics of Cloud Computing Page | 313


Exam Preparation

Exercises

e n
Multiple-answer questions

m/
1. Which of the following are the deployment models of cloud computing?

co
Check all that apply.

i.
a. Private cloud. c. Hybrid cloud.
b. Public cloud. d. Desktop cloud.

we
ua
2. Which of the following models of cloud computing can be described as:
The cloud provider arranges the installation, configuration and updating of

.h
all operating systems and applications a user remotely connects to.

ng
a. IaaS. c. SaaS.
b. PaaS. d. DaaS.

ni
ar
le
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 23
: //
tp

Exercises
ht

3. Statement 1: Huawei data centers are hosted in the public cloud.


s:

Statement 2: The Huawei FusionCube solution provides such functions


as computing, storage, and network.
ce

a. Statement 1 is true; statement 2 is true.


ur

b. Statement 1 is true; statement 2 is false.


so

c. Statement 1 is false; statement 2 is true.


Re

d. Statement 1 is false; statement 2 is false.


ng
ni
ar
Le

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 24


re
Mo

Answers:

1. A, B, C.

2. C.

HCNA-storage V3 | OHC1109108 Basics of Cloud Computing Page | 314


3. C.

Summary

e n
Summary

m/
co
• The concept of cloud computing.

i.
□ Separate physical factors and resources for the user.

we
• Deployment and business models of cloud computing.
□ SaaS, PaaS, IaaS.

ua
• Core technologies of cloud computing.

.h
□ Storage and compute virtualization.

ng
□ Public, private and hybrid clouds.

ni
• Huawei cloud computing solutions.
□ FusionSphere, FusionAccess, FusionCube.

ar
le
: //

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 25


tp
ht
s:
ce
ur
so
Re
ng
ni
ar
Le
re
Mo

HCNA-storage V3 | OHC1109108 Basics of Cloud Computing Page | 315


e n
Thank you

m/
co
www.huawei.com

i.
we
ua
.h
ng
ni
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 27

ar
le
: //
tp
ht
s:
ce
ur
so
Re
ng
ni
ar
Le
re
Mo

HCNA-storage V3 | OHC1109108 Basics of Cloud Computing Page | 316


en
m/
co
i.
we
ua
OHC1109109

.h
Huawei Storage Product Information

ng
and Licenses

ni
ar
le
//

www.huawei.com
:
tp
ht
s:
ce
ur
so
Re
ng
ni
ar
Le
re
Mo
Mo
re
Le
ar
ni
ng
Re
so
ur
ce
s:
ht
tp
://
le
ar
ni
ng
.h
ua
we
i.
co
m/
en
Introduction

In this module we will look at the specific products Huawei has in its portfolio for building any type
of ICT infrastructure. The focus of course will be on the various storage products Huawei offers.
The module however will start with the explanation of the RAID 2.0+ technology. RAID 2.0+ is the

n
basis for all enterprise class storage devices Huawei offers.

e
m/
co
i.
Objectives

we
ua
After this module you will be able to:

.h
 Describe the concepts behind Huawei’s advanced RAID virtualization technology.

ng
 Understand how Hot Spare Space is used during data reconstruction.

ni
 List the convergence benefits of the new V3 generation storage devices of the OceanStor

ar
series.
 Identify the most important storage related products Huawei offers.
le
: //
tp

Contents
ht

 RAID 2.0+ concepts.


s:

 Hot Spare Space.


ce

 OceanStor V3 products.
ur

 OceanStor Legacy products.



so

OceanStor Licenses.
Re
ng
ni
ar
Le
re
Mo

CNA-storage V3 | OHC1109109 Huawei Storage Product Information and Licenses Page | 317
e n
m/
co
i.
we
ua
.h
ng
ni
ar
le
: //
tp
ht
s:
ce
ur
so
Re
ng
ni
ar
Le
re
Mo

Page | 318 HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses
RAID 2.0+ Evolution

In module 5 the concepts of RAID were explained. That was the traditional way of working with
RAID that is still applied in some storage solutions and definitely in many server solutions. Huawei
Enterprise Class Storage Solutions use an advanced version of RAID. It is still the intention of

n
RAID to prevent data loss in case of a hardware failure. The RAID 2.0+ technology is based on

e
so-called storage virtualization. This type of virtualization implies that the data is split up in smaller

m/
segments and those segments are stored on physical disks. The goal of RAID2.0+ is now to

co
make sure that we do not lose a single segment of data!

i.
we
ua
RAID 2.0+ Evolution

.h
ng
ni
ar
le
: //
tp

Hot
spare
ht

Traditional RAID LUN virtualization RAID 2.0+


s:
ce
ur

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 3


so
Re

The initial RAID technology combines several cheap and small-capacity physical disks into a
large logical disk for a server to use. As the capacities of disks become increasingly large, RAID
ng

is not merely used to construct a large-capacity disk but to obtain higher data reliability and
ni

security and improve storage performance.


ar
Le

The number of disks combined into a RAID group can be divided into LUNs that are mapped to
servers for data read/write. The capacity of modern disks has gone up to be several terabytes.
re

With traditional RAID the rebuild of a failed disk takes a long time and if another disk fails during
Mo

the reconstruction, data could be lost. To resolve the problem, block virtualization is developed. A
traditional RAID group uses a single disk as a member disk. Block virtualization further divides
disk space into small blocks and uses the blocks as members to form RAID groups. This
technology is known as Huawei’s RAID 2.0+.

HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses Page |


319
Principle of RAID 2.0+

With traditional RAID the first step was to create a RAID group. There are restrictions and
requirements to RAID groups: They should be of disks with the same size and rotational speed.
Secondly the advice is to have no more than twelve disks in a RAID group.

en
m/
Disk Domain

co
i.
we
A Disk Domain has a maximum of three tiers.

ua
Physical Disks Disk domain #1

.h
Tier

ng
High Performance

ni
ar
Performance

Disk domain #2
le
Capacity
: //
tp
ht

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 4


s:

Huawei storage devices that are based on RAID 2.0+ use another approach. The first step is to
ce

create a Disk Domain. A Disk Domain is a group of physical disks that will work together. Disk
ur

Domains look to be the same as RAID groups but there is a big difference. With Disk Domains the
so

number of disks per Disk Domain is much higher than with traditional RAID groups. Also: in a Disk
Re

Domain a maximum of three different drive types (SATA; SAS; SSD) can be combined. The term
TIER is used to indicate the disk drive type within a Disk Domain.
ng
ni

Tier Disk Drive Type


ar

High Performance Solid State Disks (SSD)


Le

Performance SAS disks (10,000 and 15,000 RPM)


re

Capacity NL-SAS disks (7,200 RPM)


Mo

Page | 320 HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses
The capacity of a Disk Domain is divided into space for Storage Pools and so-called Hot Spare
Space. The amount of hot spare space is determined automatically and it is related to the number
of disks in the Disk Domain.

n
Hot Space Space Policy

e
m/
co
Minimum reserved capacity is equal to the size of one disk.

i.
Number of disks in Hot Spare Space Hot Spare Space

we
disk domain in HIGH policy in LOW policy
1 - 12 1 1

ua
13 - 24 2 1

.h
25 - 48 3 2

ng
49 - 72 4 2
73 - 120 5 3

ni
121 - 168 6 3

ar
169 - 264 7 4
265 - 360 8 4
le
//

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 5


:
tp
ht

In each Huawei storage device using RAID 2.0+ there is at least hot spare space to survive a
single disk failure. This hot spare space can grow to a capacity equal to eight disks. This however
s:

does not automatically mean that up to eight disks can fail simultaneously without data loss. It just
ce

means that there is room to rebuild eight disks that have failed with the following limitation: the
ur

disks have not failed at the same time and between two disk failures there was enough time to
so

reconstruct all user data!


Re

So the raw capacity of an Disk Domain is equal to (#disks - hot spare space) * disk capacity
ng
ni

The net capacity is depending on the selected RAID level. It requires us to look deep inside the
ar

concepts of the RAID 2.0+ technology.


Le

In the next slides we will see how user data will be divided into smaller parts and we will see how
re

these parts are stored on physical disks in a very clever way that allows us to:
Mo

 Access the data (READ and WRITE) very quickly.

 Reconstruct the data on a failed disk much quicker than with traditional RAID.

 Have a more flexible and more enhanced data protection method that could sustain
multiple consecutive disk drive failures.

HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses Page |


321
In the next slides new terms will be introduced:

Disk Group : Disks within a Disk Domain of the same type.

Chunk: A 64 MB section of space allocated on a disk.

Chunk Group: A number of Chunks, taken from multiple disks, and protected using RAID. All the

ne
Chunks of a Chunk Group come from the same Disk Group.

m/
Extent: A section of a Chunk Group. The smallest unit with which requested space,

co
released space and relocated data is calculated. Extents are the building blocks

i.
for Thick LUNs. Default size of an extent is 2 MB but they can be configurable

we
between 512 kB and 64 MB.

ua
Grain: A subdivision of an extent used when creating Thin LUNs. A Grain is 64 kB in

.h
size.

ng
ni
Principle of RAID 2.0+

ar
le
Disk domain
//

Chunk (CK) Chunk Group (CKG)


:
tp
Disk Group SAS

ht

RAID is set for CKG


s:

Thick LUN 1
Extent
ce

Extent
Extent Extent
Disk Group

Extent
NL-SAS

ur

Extent
Thick LUN 2
Extent
Extent Extent
so

Extent Extent
Extent
Re

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 6


ng
ni

Inside a Huawei storage device that holds different drive types (SSD, SAS and/or L-SAS) there
ar

are multiple tiers and therefore multiple Disk Groups. A number of chunks taken from multiple
Le

disks in the Disk Group are combined into a Chunk Group. Extents are subdivisions of a Chunk
re

Group and they are used to build thick LUNs. Extents are 4 MB by default.
Mo

From the user perspective the Disk Groups, Chunks and Chunk Groups are invisible and not
configurable entities. The Huawei RAID 2.0+ firmware handles all of these internally. Users can
configure the size of the Extent (512 kB through 64 MB).

Page | 322 HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses
Thick LUNs are built using Extents. This means that any LUN occupies a multiple of 4 MB of
storage capacity. Extents are assigned to a LUN at the time the LUN gets created. Although there
is no actual user data written to the LUN by any external application the storage is already pre-
allocated and could be considered to be used already.

e n
Principle of RAID 2.0+

m/
co
i.
Disk domain

Chunk (CK) Chunk Group (CKG)

we
ua
Disk Group SAS

.h
RAID is set for CKG

ng
ni
Thin LUN 1
Extent Grain
Disk Group

ar
Extent Grain Grain
NL-SAS

Extent Grain
Extent Grain Grain
le
Extent Grain Grain
Extent
//

Extent
:
tp

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 7


ht

Within the Huawei storage devices there is an option to create so-called Thin LUNs. A thin LUN
s:

only allocates physical storage when actual user data is written to a LUN. That is why in the case
ce

of a Thin LUN the extents are divided into smaller 64 kB Grains. Grains will be associated with
ur

written user data and not entire Extents. This means that the storage consumption of a Thin LUN
so

is allocated with 64 kB increments when very small files are written to the Thin LUN.
Re

The RAID 2.0+ technology within the Huawei storage devices can handle multiple Disk Domains,
ng

up to 360 disks per Disk Domain, multiple tiers within a Disk Domain, Extents and/or Grains to
ni

build LUNs and at the same time handle hot spare space!
ar
Le
re
Mo

HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses Page |


323
RAID 2.0+ Logical objects

RAID 2.0+ Logical objects

ne
m/
co
i.
we
ua
.h
ng
ni
ar
le
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 8
: //

There is another object shown in the above image. A storage pool is a subdivision of a Disk
tp

Domain. Storage Pools are created within the user interface with two parameters: capacity and
ht

RAID type. Within each Storage Pool three tiers may exist (if the Disk Domain has three different
disk types) and from each tier space can be allocated. Also for each tier inside the Storage Pool
s:

the administrator can select the required RAID protection level.


ce
ur

What RAID 2.0+ in fact does is to make sure that RAID like techniques are used on the level of
so

Chunks. So RAID 10 will now make a copy of a Chunk on another disk inside the same Disk
Re

Group. That means that the term RAID actually is not very correct anymore. Maybe a better name
would be RAIC or Redundant Array of Independent Chunks.
ng
ni

As it operates on chunk level and not on disk level there are other differences with traditional
ar

RAID. For instance in RAID 5 there was the concept of N+1. For N data disks we needed the
Le

capacity of one extra drive to calculate and store the parity information.
re

In RAID 2.0+ there are options like 2D+1P; 4D+1P and 8D+1P. This implies that 2 (or 4 or 8) data
Mo

chunks with user data in them are used to calculate the parity. This now means a variable
overhead. With 2D+1P the overhead is 33%, with 4D+1P it is 20% and with 8D+1P the overhead
is 11%.

Page | 324 HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses
These variable overheads look like they are less efficient than with traditional N+1 RAID 5.
However in a twelve disk RAID 5 group we can only lose a single drive. When a second drive fails
this leads to data loss. Using 4D+1P with RAID 5 in RAID 2.0+ it means that the chunks of a
RAID 5 family (4D + 1P) are located on five out of the twelve physical disks. Now inside of that
twelve Disk Domain two drives can fail as long as they do not carry two out of the five chunks of a

n
specific RAID 5 family!

e
m/
co
Automatic load balancing, reducing the system failure rate

i.
we
ua
.h
ng
ni
Traditional RAID RAID 2.0+

ar
le
: //
tp
ht

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 9


s:
ce

The intelligence of the Huawei RAID 2.0+ technology will make sure that all chunks of all RAID
ur

groups are distributed across all the disks of the Disk Domain. This means that the workload of
so

storing and reading data is divided across all the disks. On top of adding to the performance of
Re

the system RAID 2.0+ also adds to the fault protection rate.
ng
ni
ar
Le
re
Mo

HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses Page |


325
High System Reliability

High System Reliability

en
m/
Traditional RAID

co
i.
RAID 2.0+

we
ua
Traditional RAID RAID 2.0+

.h
Global or local hot spare disks must be Distributed hot spare space does not need to be
manually configured. separately configured.

ng
Multi-to-one reconstruction is used. Multi-to-multi reconstruction is used.
Reconstruction data blocks are written onto a Reconstruction data blocks are written onto
single hot spare disk in serial. multiple disks in parallel.

ni
Reconstruction is prolonged due to hotspots. Reconstruction is shortened owning to load

ar
balancing. le
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 10
: //

Maybe the greatest advantage of RAID 2.0+ is the rebuilding capability of the system. In
tp

traditional RAID the data of the failed disk could be reconstructed but it took a lot of time. Reason
ht

is that all remaining disks had to be read to find all the data in the stripe. With the parity
information there was then the option to reconstruct the data. That reconstructed data now had to
s:

be written onto the one spare disk.


ce
ur

With RAID 2.0+ the data can be constructed by reading less disks (maximum with RAID 5 8D+1P
so

is eight disks). The second advantage is that RAID 2.0+ does not have hot spare disks but hot
Re

spare space. This space is located across all the disks in the Disk Domain. So the reconstructed
data can be stored on multiple drives. Therefore with reconstructing data there is no bottleneck in
ng

a single spare disk like with traditional RAID.


ni
ar

Reconstructing a failed disk can be up to twenty times faster using RAID 2.0+ technology.
Le
re
Mo

Page | 326 HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses
Fast Reconstruction

Fast Thin Reconstruction to Reduce Dual-Disk Failure Probability

e n
m/
co
i.
we
ua
.h
ng
ni
ar
le
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 11
: //

In the schematic diagram of traditional RAID, HDDs 0 to 4 compose a RAID 5 group, and HDD 5
tp

serves as a hot spare disk. If HDD 1 fails, an algorithm is used to reconstruct data based on
ht

HDDs 0, 2, 3, and 4, and the reconstructed data is written onto HDD 5.


s:

In the schematic diagram of RAID2.0+, if HDD 1 fails, its data is reconstructed based on a CK
ce

granularity, where only the allocated CKs (CK12 and CK13 in the figure) are reconstructed. All
ur

disks in the storage pool participate in the reconstruction. The reconstructed data is distributed on
so

multiple disks (HDDs 4 and 9 in the figure).


Re

RAID2.0+ fined-grained and efficient fault handling also contributes to reconstruction acceleration.
ng

If a traditional RAID group is reconstructed the entire disk will be reconstructed including empty
ni

sections. By efficiently identifying used space, RAID2.0+ implements thin reconstruction upon a
ar

disk failure to further shorten the reconstruction time, mitigating data loss risks.
Le
re
Mo

HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses Page |


327
Dynamic Space Distribution

Dynamic Space Distribution to Flexibly Adapt to Service Changes

ne
m/
co
i.
we
SmartTier SmartThin SmartMotion SmartVirtualization

ua
.h
ng
SmartVirtualization

ni
ar
le
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 12
: //

RAID2.0+ is implemented based on industry-leading block virtualization. Data and service load in
tp

a volume are automatically and evenly distributed onto all physical disks in a storage pool. RAID
ht

2.0+ offers optimal data protection; optimal performances and extreme efficient reconstruction
performances.
s:
ce

On top of that there are even more advantages to RAID 2.0+’s block (or better Chunk)
ur

virtualization.
so
Re

Huawei has created a number of enterprise level features that can be purchased in combination
with its storage devices. Examples are SmartTier and SmartVirtualization.
ng
ni

In the next section of this module we will have an overview of the latest generation of Huawei
ar

storage devices and their specifications. We will also list a number of features that are sold
Le

separate from the hardware.


re
Mo

Page | 328 HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses
Huawei Storage Products

In the previous modules we have explained the fundamentals of storage in a general way. We
saw concepts like DAS, NAS and SAN and we discussed RAID, iSCSI, Fibre Channel etcetera
from a neutral standpoint. This section will now discuss the latest generation of Huawei storage

n
products. The storage models are usually called OceanStor.

e
m/
In 2015 the next generation of Oceanstor is released: Generation V3.

co
i.
we
Huawei Storage Products

ua
• Enterprise Unified Storage Solutions:

.h
□ OceanStor 18000 series.

ng
□ OceanStor 6800 V3 series.

ni
□ OceanStor 5300/5500/5600/5800 V3 series.
□ OceanStor Dorado 2100 G2/5100.
ar
le
□ OceanStor S2200T series.
□ OceanStor S2600T/S5500T/S5600T/S5800T/S6800T.
//

□ OceanStor VIS6600T.
:
tp

• Enterprise Storage Networking Solutions:


ht

□ OceanStor SNS2124/2224/2248.
□ OceanStor SNS3096/5192/5384.
s:
ce

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 13


ur
so

As you can see in the above image not all OceanStor models are available as a release 3 version
Re

yet, but in the upcoming months more and more models will become available in V3.
ng

The image above also lists some legacy models for storage (the SxxxxT series). They will not be
ni

discussed in this section but legacy models are not End-Of-Life and will still be supported by
ar

Huawei.
Le
re
Mo

HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses Page |


329
Huawei Storage Products

• Massive Storage Solutions:


□ OceanStor 9000 Big Data.

n
□ OceanStor UDS Massive Storage.

e
m/
□ OceanStor N8500 Clustered NAS system.

co
• Data Protection Solutions:

i.
□ OceanStor VTL6900.

we
□ OceanStor HDP3500E Backup Appliance.

ua
• Storage Software:

.h
□ OceanStor ReplicationDirector.

ng
ni
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 14

ar
le
With its portfolio of storage devices and storage related devices there is almost always a solution
//

Huawei can offer for the customers ICT infrastructure.


:
tp

Positioning Huawei Storage


ht
s:
ce
ur
so
Re
ng
ni
ar
Le

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 15


re
Mo

The range of products starts with storage devices for Small and Medium Business companies
(SMB’s) with a few servers and switches all the way up to a complete turnkey datacenter. With
the last Huawei can provide for all required equipment and facilities needed to build and configure
a complete working datacenter.

Page | 330 HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses
Enterprise Converged Storage

OceanStor V3 – Enterprise Converged Storage

e n
OceanStor V3 Key Features

m/
co
SSD & HDD

i.
Convergence

we
High-End, Mid-Range, Primary & Backup Storage

ua
Entry-Level Convergence Convergence

.h
SAN & NAS Heterogeneous Storage
Convergence Unified & easy management Convergence

ng
ni
State of art hardware

ar
le
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 16
: //

SAN / NAS Convergence


tp

All V3 models of the OceanStor series are now built as Unified Storage devices. To explain what
ht

unified storage is let us look at the definition of unified storage:


A single, integrated storage infrastructure that functions as unification engines to simultaneously
s:

support Fibre Channel, IP Storage Area Networks (SAN) and Network Attached Storage (NAS)
ce

data formats. That means that all V3 OceanStor devices are shipped with the intelligence to
ur

handle block based and file based storage. Block based will be assigned to hosts in the traditional
so

storage way. For file based data there is the option to access the files via the CIFS and/or the
Re

NFS protocol.
ng

High-End, Mid-range and Entry-Level Convergence


ni

All OceanStor V3 storage devices are now based on the same architecture which allows for easy
ar

upgrades and conversions. Also for DR solutions it is no longer required to have (near) identical
Le

hardware in the remote datacenter.


re

SSD and HDD Convergence


Mo

In Huawei V3 there will be a convergence of data on SSD and HDD. Traditionally data will be on
one of the two platforms. With RAID 2.0+ and V3 data will be at the optimal location which could
mean it is partly on SSD and partly on HDD.

HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses Page |


331
Primary and Backup Convergence
Built in the Huawei OceanStor is snapshot technology as well as replication technology. That can
be used (for both file based data as well as block based storage) to implement a backup strategy.

Heterogeneous Convergence

n
Huawei is involved in the process to migrate data from storage devices (i.e. EMC, IBM) to Huawei

e
OceanStor V3 storage devices. Support for other vendors and more models is planned for the

m/
coming period.

co
i.
OceanStor V3 Software Architecture

we
ua
Across almost all models of the OceanStor in the new V3 platform the functionalities are
applicable. The next image shows the software architecture for the OceanStor V3 models.

.h
ng
OceanStor V3 Software Architecture

ni
ar
Management function control software
OceanStor DeviceManager Syslog Syslog Syslog
le
Basic function control Value-added function control software
//

software
Snapshot Remote replication LUN Copy
:

Cache SPool
Clone Consistency Group SmartQoS
tp

SCSI SRAID SmartMotion SmartPartition SmartThin


ht

SmartTier SmartMigration SmartVirtualization


File Quota
Protocol Manage SmartErase SmartMulti-Tennant HyperMirror
s:

ment
SmartDedupe&SmartCompression SmartCache
ce

Volume Management
Module of File System WORM
ur

Operating system layer of a storage system


so
Re

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 17


ng

Just of few of the licensed features can be used in specific models.


ni
ar
Le
re
Mo

Page | 332 HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses
OceanStor Unified Storage Platform

OceanStor Unified Storage Platform

n
Controller Platform Disk Enclosure
Model

e
(SAN + NAS) Platform

m/
5300 V3

co
2U Platform 2U 25*2.5” disk enclosure

i.
5500 V3

we
5600 V3

ua
3U Platform 4U 24*3.5” disk enclosure
5800 V3

.h
ng
6800 V3 6U Platform
4U 75*3.5” high-density

ni
disk enclosure

ar
le
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 18
: //

The first two models (5300 / 5500) are based in a 2U chassis and in that chassis we find the
tp

controllers as well as a number of physical disks. In both models 5300 and 5500 additional
ht

storage capacity can be added using SAS cables connected to one or more disk enclosures. The
models 5600 V3, 5800 V3 and 6800 V3 are in a 3U or 6U chassis with just controllers. All disk
s:

capacity will be created with SAS attached disk enclosures.


ce
ur

Currently three disk enclosure models are available:


so

 A 2U disk enclosure that can hold up to 25 disks with a size of 2.5”.


Re

 A 4U disk enclosure that can hold up to 24 disks with a size of 3.5”.


 A 4U high-density disk enclosure that can hold up to 75 disks of 3.5”.
ng
ni

Note:
ar

In IT the unit U is used to indicate the dimension of components. Most devices are constructed to
Le

be 19 inch wide. The height of servers is expressed in U units. Servers are usually 1, 2 of 3 U in
size. Storage devices are often 2, 3 or 4 U high. The racks that servers, storage devices etc are
re

mounted in are typically 42U in height.


Mo

( 1 U equals 1.75 inch or 4.45 cm).

HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses Page |


333
OceanStor 5300 V3

5300 V3/5500 V3 Controller Platform (1)

ne
m/
co
i.
we
ua
System architecture

.h
• The latest PANGEA hardware platform.

ng
• Disk and controller integration (2 U controller enclosure: disk and controller
integration).

ni
• Active-active dual controllers.

ar
le
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 19

5300 V3/5500 V3 Controller Platform (2)


Highlights

• High performance: PCIe 3.0 high-speed bus and SAS 3.0 high-speed I/O channel.
• Outstanding reliability: Full redundancy design.
  Built-in BBU + data coffer.
  A wide range of data protection technologies.
• Flexible scalability: Hot-swappable I/O interface modules.
  Four hot-swappable interface modules and two onboard interface modules (2 U controller enclosures).
• Energy saving:
  Intelligent CPU frequency control.
  Fine-grained fan speed control.

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 20


Mo

Page | 334 HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses
OceanStor 5300 V3 specifications

OceanStor 5300 V3 Specifications

en
Model 5300 V3

m/
System Cache (expanded with the number of controllers) 32 GB to 256 GB
Maximum Number of Controllers 8

co
Supported Storage Protocols Fibre Channel, FCoE, iSCSI, InfiniBand, NFS, CIFS, HTTP, and FTP
1 Gbit/s Ethernet, 10 Gbit/s FCoE, 10 Gbit/s TOE, 16 Gbit/s FC, and

i.
Port Types
56 Gbit/s InfiniBand, SAS 3.0 (back-end, 4 x 12 Gbit/s per port)
Maximum Number of Disks Supported by Two Controllers 500

we
Maximum Number of Front-end Ports per Controller 12
Maximum Number of I/O Modules per Controller 2

ua
Maximum Number of Snapshots (LUN) 256
Maximum Number of LUNs 2048

.h
Maximum Number of Snapshots per file system 2048
Maximum Capacity of a single file 256 TB

ng
Disk Types SSD, SAS, and NL-SAS
RAID Levels RAID 0, 1, 3, 5, 6, 10, or 50

ni
Key Software Features UltraPath, Cloud Service, ReplicationDirector, DeviceManager, eSight
2 U controller enclosure: 86.1 mm x 447 mm x 750 mm
Dimensions

ar
(3.39 in. x 17.60 in. x 29.53 in.)
le
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 21

OceanStor 5300 V3 / 5500 V3 Controller Platform


Power-BBU-Fan modules:
 1+1.
 Up to 94% of power conversion efficiency.
 -48 V DC and 240 V DC.

Onboard ports:
 5300 V3: four GE ports per controller.
 5500 V3: four 8 Gbit/s Fibre Channel ports per controller.

SAS expansion ports:
 Two SAS expansion ports per controller.

Interface modules:
 Two slots for hot-swappable interface modules.
 Port types: 8 or 16 Gbit/s Fibre Channel, GE, 10GE TOE, 10GE FCoE, and 12 Gbit/s SAS.

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 22


Mo

HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses Page |


335
Various controller models exist but they all have more or less the same layout. Shown is the
detailed view of a 5300 controller with the modules and indicators.

OceanStor 5300 V3 Detailed Rear View

A B C D E F G H

A = Onboard 1 Gb/s Ethernet ports.
B = Mini SAS HD expansion ports.
C = Alarm and Power status LEDs.
D = USB port.
E = I/O modules (FC depicted).
F = Management network port.
G = Maintenance network port.
H = Serial port.

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 23


The first four disks in the 5300/5500 V3 are used as the data coffer. They are a very important part of the system.

The data coffer offers extra protection for the data in the cache memory, especially for the write-back cache data. Write-back cache data is not yet written to disk but stored in RAM. This improves the write performance of the system a lot, but there is a risk: when the power to the system fails, the content of the RAM may be lost. That would mean losing data that a host had already written to the storage device. To prevent this, a few "tricks" are used. First, there is a copy of all cached data on the other controller (mirrored cache). Secondly, there is a battery pack in the controller that keeps the RAM powered should the power ever fail. Huawei offers another layer of protection: as soon as the power fails (and the battery starts doing its job), the data coffer function starts copying the cached data to the specially installed coffer disks. That makes sure that even when the batteries are depleted the data is still safe, because it is stored on the coffer disks.

Management network port: this port is used to connect to the maintenance terminal. It is provided for the remote maintenance terminal. It is also used by DeviceManager for daily management.

Page | 336 HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses
Icons and Status Indicators

Icons and Status Indicators

Power indicator for controller and disk enclosure (Front).
Alarm indicator for controller module and disk enclosure.
Depicts management interface port.
Fan indicator for controller module and disk enclosure.
Depicts maintenance interface port.
Power indicator for disk enclosure (Back).
BBU indicator for disk enclosure (Back).
Location indicator for disk enclosure.
Enclosure ID display.
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 24
: //

BBU stands for Battery Backup Unit. This is a special module in controllers and disk enclosures that provides backup power to the RAM modules of the cache in the system.

Data that is written to a LUN will initially be stored (buffered) in the RAM memory of the cache module. This improves the response of the storage device when a host writes data to a LUN. The host receives an acknowledgement of the write very quickly, as writing to RAM is much faster than writing to a physical sector on a hard disk. However, if power fails for the enclosure, the content of the RAM will be lost. The host assumes the data is stored (after the acknowledgement) but the data is lost anyway. That is why the cache is "protected" with an additional battery pack inside the enclosure. The indicator shows the status of the BBU. These are the possible colors of the indicator:
ni
ar
Le

BBU LED                 Status

Steady Green            BBU is fully operational
Blinking Green 1 Hz     BBU battery is charging
Blinking Green 4 Hz     BBU battery is being discharged
Red                     BBU is faulty

HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses Page |


337
OceanStor 5500 V3 Specifications

OceanStor 5500 V3 Specifications

e n
Model 5500 V3

m/
System Cache (expanded with number of controllers) 48 GB to 512 GB
Maximum Number of Controllers 8

co
Supported Storage Protocols Fibre Channel, FCoE, iSCSI, InfiniBand, NFS, CIFS, HTTP, and FTP
1 Gbit/s Ethernet, 10 Gbit/s FCoE, 10 Gbit/s TOE, 16 Gbit/s FC, and
Port Types
56 Gbit/s InfiniBand, SAS 3.0 (back-end, 4 x 12 Gbit/s per port)

i.
Max. Number of Disks Supported by Two Controllers 750
Maximum Number of Front-end Ports per Controller 12

we
Max. Number of I/O Modules per Controller 2
Max. Number of Snapshots (LUN) 1024

ua
Max. Number of LUNs 4096
Max. Number of Snapshots per file system 2048

.h
Max. Capacity of a single file 256 TB
Disk Types SSD, SAS, and NL-SAS

ng
RAID Levels RAID 0, 1, 3, 5, 6, 10, or 50
Key Software Features UltraPath, Cloud Service, ReplicationDirector, DeviceManager, eSight

ni
2 U controller enclosure: 86.1 mm x 447 mm x 750 mm
Dimensions
(3.39 in. x 17.60 in. x 29.53 in.)

ar
le
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 25

Page | 338 HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses
OceanStor 5600 V3

OceanStor 5600 V3 / 5800 V3 Controller Platform (1)

BBU modules:
• 5600 V3: 1+1; 5800 V3: 2+1.
• AC power failure protection.

Controller modules:
• Dual controllers.
• Automatic frequency adjustment for reduced power consumption.
• Built-in fan modules (fan modules are integrated in controller modules, but can be maintained independently).
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 26

OceanStor 5600 V3 / 5800 V3 Controller Platform (2)


Management modules:
 1+1.
 Hot-swappable.
 Multi-controller scale-out and interconnection for establishing heartbeats.

Power modules:
 1+1.
 Up to 94% of power conversion efficiency.
 240 V DC.

Interface modules:
 16 slots for hot-swappable interface modules.
 Port types: 8 or 16 Gbit/s Fibre Channel, GE, 10GE TOE, 10GE FCoE, and 12 Gbit/s SAS.
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 27


Mo

HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses Page |


339
OceanStor 5600 V3 Specifications

OceanStor 5600 V3 Specifications

n
Model 5600 V3

e
System Cache (expanded with number of controllers) 64 GB to 512 GB

m/
Maximum Number of Controllers 8
Supported Storage Protocols
Fibre Channel, FCoE, iSCSI, InfiniBand, NFS, CIFS, HTTP, and FTP

co
1 Gbit/s Ethernet, 10 Gbit/s FCoE, 10 Gbit/s TOE, 16 Gbit/s FC, and 56
Port Types
Gbit/s InfiniBand, SAS 3.0 (back-end, 4 x 12 Gbit/s per port)

i.
Max. Number of Disks Supported by Two Controllers 1000
Max. Number of Front-end Ports per Controller 28

we
Max. Number of I/O Modules per Controller 8
Max. Number of Snapshots (LUN) 2048

ua
Max. Number of LUNs 4096
Max. Number of Snapshots per file system 2048

.h
Max. Capacity of a single file 256 TB
Disk Types SSD, SAS and NL-SAS

ng
RAID Levels RAID 0, 1, 3, 5, 6, 10, or 50

Key Software Features UltraPath, Cloud Service, ReplicationDirector, DeviceManager, eSight

ni
3 U controller enclosure: 130.5 mm x 447 mm x 750 mm
Dimensions
(5.14 in. x 17.60 in. x 29.53 in.)

ar
le
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 28
: //
tp

OceanStor 5600 V3 / 5800 V3 Header Platform


ht
s:
ce
ur
so
Re
ng

1. System enclosure
2. BBU module
ni

3. Controller
4. Power module
ar

5. Management module
6. Interface module
Le
re

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 29


Mo

Page | 340 HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses
OceanStor 5800 V3 Specifications

OceanStor 5800 V3 Specifications

n
Model 5800 V3

e
System Cache (expanded with number of controllers) 128 GB to 1024 GB

m/
Maximum Number of Controllers 8
Supported Storage Protocols

co
Fibre Channel, FCoE, iSCSI, InfiniBand, NFS, CIFS, HTTP, and FTP
1 Gbit/s Ethernet, 10 Gbit/s FCoE, 10 Gbit/s TOE, 16 Gbit/s FC, and
Port Types
56 Gbit/s InfiniBand, SAS 3.0 (back-end, 4 x 12 Gbit/s per port)

i.
Max. Number of Disks Supported by Two Controllers 1250
Max. Number of Front-end Ports per Controller 28

we
Max. Number of I/O Modules per Controller 8
Max. Number of Snapshots (LUN) 2048

ua
Max. Number of LUNs 8192
Max. Number of Snapshots per file system 2048

.h
Max. Capacity of a single file 256 TB
Disk Types SSD, SAS, and NL-SAS

ng
RAID Levels RAID 0, 1, 3, 5, 6, 10, or 50
Key Software Features UltraPath, Cloud Service, ReplicationDirector, DeviceManager, eSight

ni
3 U controller enclosure: 130.5 mm x 447 mm x 750 mm
Dimensions
(5.14 in. x 17.60 in. x 29.53 in.)

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. ar Slide 30


le
//

All OceanStor models 5300, 5500, 5600 and 5800 support up to eight controllers. Each model starts with two controllers running in the so-called active-active mode. This means that both controllers within the chassis are active data movers. Expanding the number of controllers means that more processing power as well as more cache memory becomes available.

The expansion itself can physically be done in two different ways. Both methods require additional hardware to be installed: the Smart I/O cards, which must be inserted into specific slots in the controllers.

Direct Connection Mode.

This expansion option is only possible when upgrading to 4 controllers (equals 2 chassis). In this mode fiber optic cables run from one controller in chassis #1 directly to another controller in chassis #2.

Switch Connection Mode.

In this mode the expansion can be from 2 to 4 controllers or from 2 to 8 controllers. This method uses fiber optic cables from the Smart I/O cards in the controllers to two separate fabric switches.
Mo

HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses Page |


341
OceanStor 6800 V3

OceanStor 6800 V3 Controller Platform (1)

e n
BBU modules:

m/
• 3+1.

co
• AC power failure protection.

i.
Controller modules:

we
• 2 - or 4 - controller configuration.

• Automatic frequency adjustment for reduced

ua
power consumption.

.h
Built-in fan modules (fan modules are
integrated in controller modules, but can be
maintained independently).

ng
ni
ar
le
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 31
: //
tp

OceanStor 6800 V3 Controller Platform (2)


ht
s:

Power modules:
ce

• 1+1.
• 240 V DC.
ur

• Up to 94% of power conversion efficiency.


so

Management modules:

• 1+1.
Re

• Hot-swappable.
• Multi-controller scale-out and
interconnection for establishing heartbeats.
ng

Interface modules:
ni

• 2-controller: 12 / 4-controller: 24.


ar

• Hot-swappable.
• Port types: 8 or 16 Gbit/s Fibre Channel, GE, 10GE TOE, 10GE FCoE, and 12 Gbit/s SAS.
Le
re

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 32


Mo

Page | 342 HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses
OceanStor 6800 V3 Specifications

OceanStor 6800 V3 Specifications

e n
Model 6800 V3

m/
System Cache (expanded with number of controllers) 256 GB to 4096 GB
Maximum Number of Controllers 8

co
Supported Storage Protocols Fibre Channel, FCoE, iSCSI, InfiniBand, NFS, CIFS, HTTP, and FTP
1 Gbit/s Ethernet, 10 Gbit/s FCoE, 10 Gbit/s TOE, 16 Gbit/s FC, and 56
Port Types
Gbit/s InfiniBand, SAS 3.0 (back-end, 4 x 12 Gbit/s per port)

i.
Max. Number of Disks Supported by Two Controllers 3200
Max. Number of Front-end Ports per Controller 20

we
Max. Number of I/O Modules per Controller 6
Max. Number of Snapshots (LUN) 32768

ua
Max. Number of LUNs 65536
Max. Number of Snapshots per file system 2048

.h
Max. Capacity of a single file 256 TB
Disk Types SSD, SAS, and NL-SAS

ng
RAID Levels RAID 0, 1, 3, 5, 6, 10, or 50
Key Software Features UltraPath, Cloud Service, ReplicationDirector, DeviceManager, eSight

ni
• Heterogeneous virtualization
• Block Virtualization
Virtualization Features • Supports virtual machines: Vmware, Citrix, Hyper-V

ar
• Value-added features related to virtual environments: VAAI and
integration of vSphere and vCenter
le
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 33
: //

As mentioned before, the 5300 and 5500 models are based on a chassis that contains both controllers and disk drives. The models 5600, 5800 and 6800 always get their storage capacity from external disk enclosures.

All disk enclosures are connected via mini SAS HD connectors and use SAS as the underlying technology. The SAS generations in use today run at 3 Gb/s, 6 Gb/s or 12 Gb/s.

Disk enclosures are available for all common drive types, formats and sizes. Supported are:
ng
ni

Disk Drive Type Physical Size 2,5 “ Physical Size 3,5 “


ar

Solid State Disks 


Le

SAS disks 10,000 rpm  


re

SAS disks 15,000 rpm 


Mo

NL-SAS disks 7,200 rpm 

HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses Page |


343
OceanStor 6800 V3 Header Platform

n
6

e
5

m/
co
i.
1. System enclosure

we
2. BBU module

ua
3. Controller
4. Power module

.h
5. Management module
6. Interface module

ng
ni
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 34

ar
le
: //
tp
ht
s:
ce
ur
so
Re
ng
ni
ar
Le
re
Mo

Page | 344 HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses
OceanStor Disk Enclosure Platform

OceanStor Disk Enclosure Platform

e n
2 U disk enclosure: 25 x 2.5-inch disks.

m/
Disk module.

co
Expansion module.
Power module.

i.
4 U disk enclosure: 24 x 3.5-inch disks.

we
Disk module.

ua
.h
Fan module.

ng
Expansion module.

ni
Power module.

ar
le
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 35
: //
tp

OceanStor High-density Disk Enclosure


ht
s:

4 U high-density disk enclosure: 75 x 3.5-inch disks.


ce
ur
so
Re
ng
ni

1. System enclosure
2. Power module
3. Fan module
4. Expansion module
5. Disk module
Le
re

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 36


Mo

The high-density disk enclosure is only available with 3.5-inch disk drives. It is usually filled with NL-SAS drives with capacities starting from 1 TB, which makes a high-density enclosure hold at least 75 TB of raw disk capacity. With disk capacities increasing constantly, the capacities offered by high-density enclosures will become enormous.

HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses Page |


345
OceanStor 18000

OceanStor 18000 series

ne
m/
co
i.
we
ua
.h
ng
ni
ar
le
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 37
: //

The OceanStor 18000 series is the top-of-the-range model. It is primarily designed for customers who have very high performance demands.

The number of disks inside an OceanStor 18000 series model can be up to 3216 for the OceanStor 18800. The enormous performance capability comes from the very large amount of cache memory (up to 3 TB of RAM) in the 18000 series. A second factor for this high performance is the number of controllers: there can be up to sixteen controllers working together. Benchmark tests have shown that the OceanStor 18000 series can reach more than 1 million IOPS.
ng
ni
ar
Le
re
Mo

Page | 346 HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses
OceanStor 18500 Specifications

OceanStor 18500 Specifications

n
Model 18500

e
Maximum Number of Controllers 8

m/
Max. Cache Size 768 GB
Max. Number of Front-end Host Ports 128 (FC/iSCSI/FCoE)

co
Max. Number of Disks 1584
2.5-inch disks: SSD and SAS
Supported Disk Types

i.
3.5-inch disks: SSD, SAS, and NL-SAS
RAID Levels RAID 5,6, and 10

we
Max. Number of hosts 65536
Max. Number of LUNs 65536

ua
Snapshot (HyperSnap), clone (HyperClone), copy (HyperCopy), and remote
Data Protection Software
replication (HyperReplication)
Thin provisioning (SmartThin), data relocation (SmartMotion), storage tiering

.h
Data Efficiency Software (SmartTier), service quality control (SmartQoS), and heterogeneous
virtualization (SmartVirtualization), and cache partitioning (Smart Partition)
Disaster recovery software (ReplicationDirector) and host multipathing
Host Software Suite

ng
(UltraPath)
Compatible Operating Systems AIX, HP-UX, Solaris, Linux, Windows, etc
Virtualization platforms: VMware, XenServer, and Hyper-V

ni
Supported Virtual Environment Features Value-added virtualization features: VMware VAAI, VASA,SRM,and Hyper-V
Integration: vSphere and vCenter

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. ar Slide 38


le
: //
tp
ht
s:
ce
ur
so
Re
ng
ni
ar
Le
re
Mo

HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses Page |


347
OceanStor 18800 Specifications

OceanStor 18500 Specifications

n
Model 18500

e
Maximum Number of Controllers 8

m/
Max. Cache Size 768 GB
Max. Number of Front-end Host Ports 128 (FC/iSCSI/FCoE)

co
Max. Number of Disks 1584
2.5-inch disks: SSD and SAS
Supported Disk Types

i.
3.5-inch disks: SSD, SAS, and NL-SAS
RAID Levels RAID 5,6, and 10

we
Max. Number of hosts 65536
Max. Number of LUNs 65536

ua
Snapshot (HyperSnap), clone (HyperClone), copy (HyperCopy), and remote
Data Protection Software
replication (HyperReplication)
Thin provisioning (SmartThin), data relocation (SmartMotion), storage tiering

.h
Data Efficiency Software (SmartTier), service quality control (SmartQoS), and heterogeneous
virtualization (SmartVirtualization), and cache partitioning (Smart Partition)
Disaster recovery software (ReplicationDirector) and host multipathing
Host Software Suite

ng
(UltraPath)
Compatible Operating Systems AIX, HP-UX, Solaris, Linux, Windows, etc
Virtualization platforms: VMware, XenServer, and Hyper-V

ni
Supported Virtual Environment Features Value-added virtualization features: VMware VAAI, VASA,SRM,and Hyper-V
Integration: vSphere and vCenter

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. ar Slide 38


le
: //
tp
ht
s:
ce
ur
so
Re
ng
ni
ar
Le
re
Mo

Page | 348 HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses
OceanStor 18800F Specifications

OceanStor 18800F Specifications

e n
Model 18800F

m/
Maximum Number of Controllers 16

Max. Cache Size 3072 GB

co
Max. Number of Front-end Host Ports 256 (FC/iSCSI/FCoE)

Max. Number of Disks 2304

i.
Supported Disk Types 2.5-inch disks: SSD

we
RAID Levels RAID 5,6, and 10

Max. Number of hosts 65536

ua
Max. Number of LUNs 65536
Snapshot (HyperSnap), clone (HyperClone), copy (HyperCopy), and remote
Data Protection Software
replication (HyperReplication)

.h
Data Efficiency Software SmartThin / SmartMotion / SmartQoS / SmartPartition / SmartVirtualization

Host Software Suite Disaster recovery software (ReplicationDirector) and host multipathing (UltraPath)

ng
Compatible Operating Systems AIX, HP-UX, Solaris, Linux, Windows, etc
Virtualization platforms: VMware, XenServer, and Hyper-V

ni
Supported Virtual Environment Features Value-added virtualization features: VMware VAAI, VASA,SRM,and Hyper-V
Integration: vSphere and vCenter

ar
le
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 40
: //

The OceanStor 18800F is a version that comes with some restrictions. The 18800F cannot be used in combination with disk enclosures that hold 3.5" disks. This automatically implies that the high-density enclosures are not supported with the OceanStor 18800F. The OceanStor 18800F also comes with more cache memory: it is always fitted with 192 GB of cache RAM, whereas the OceanStor 18800 can also be fitted with 96 GB.


ce
ur
so
Re
ng
ni
ar
Le
re
Mo

HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses Page |


349
I/O Modules for the OceanStor V3 series

Depending on the model type there are a number of I/O cards that can be used in combination with the OceanStor controllers. The cards are typically used to connect the OceanStor controllers to the front-end side: the switches or hosts in the storage network. Other I/O modules are used to connect disk enclosures to the OceanStor controller.

e
m/
co
I/O Modules for the OceanStor series

i.
we
Various I/O modules exist to connect hosts, enclosures and
controllers.

ua
.h
ng
ni
ar
le
: //
tp
ht

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 41


s:
ce

The cards are often available in different speeds and/or generations, and Huawei supports many of these generations. Examples are the Fibre Channel Host Bus Adapters, which are supported at 4 Gb/s, 8 Gb/s and 16 Gb/s speeds; 2-port and 4-port versions also exist.
so
Re
ng
ni
ar
Le
re
Mo

Page | 350 HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses
In this module a very important I/O card will be discussed: the so-called Smart I/O card.

Smart I/O interface module


1 Power indicator/Hot Swap button

n
2 16 Gbit/s FC/8 Gbit/s FC/FCoE/iWARP (Scale-Out)
1 4

e
3 Port indicator (Link/Active/Mode indicator)

m/
4 Module handle
5 Port working mode silkscreen

co
2
No. Indicator Status and Description

i.
3 Green on: The module is working properly.
5 1 Power indicator
Blinking green: The module needs to be hot-swapped.

we
Red on: The module is faulty.
Off: The module is not powered on.
Blinking blue slowly: The module is working in FC mode with

ua
link down.
Blinking blue quickly: The module is working in FC mode with
link up and data is being transmitted.

.h
Steady blue: The module is working in FC mode with link up but
Port indicator
no data is being transmitted.
3 (Link/Active/
Blinking green slowly: The module is working in FCoE/iWARP
Mode indicator)
mode with link down.

ng
Blinking green quickly: The module is working in FCoE/iWARP
Note: Smart I/O interface modules mode with link up and data is being transmitted.
Steady green: The module is working in FCoE/iWARP mode
are supported by V3R2 only.

ni
with link up but no data is being transmitted.

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved.


ar Slide 42
le
//

The most important task of the Smart I/O card is to connect OceanStor controller chassis together. This allows the OceanStor to scale out in groups of two controllers at a time; two controllers are added at once because one OceanStor chassis of course houses two controllers.

Up to 8 controllers can be present in an OceanStor V3 solution, which means that 4 OceanStor chassis are linked together. This requires the use of the Smart I/O card.

In some of the models the card must be inserted in a special slot (shown in the previous image), and some OceanStor controllers already have a Smart I/O card onboard (next image).
Re
ng
ni
ar
Le
re
Mo

HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses Page |


351
Onboard Smart I/O interface module

1 3

e n
m/
co
2 4 No. Indicator Status and Description

i.
1 16 Gbit/s FC/8 Gbit/s FC/FCoE 1 Port indicator Blinking blue slowly: The module is working in FC
(Link/Active/Mode mode with link down.
indicator) Blinking blue quickly: The module is working in
2 Port indicator (Link/Active/Mode

we
FC mode with link up and data is being
indicator) transmitted.
Steady blue: The module is working in FC mode

ua
3 Module handle with link up but no data is being transmitted.
Blinking green slowly: The module is working in
FCoE mode with link down.
4 Port working mode silkscreen

.h
Blinking green quickly: The module is working in
FCoE mode with link up and data is being
Note: Smart I/O interface modules transmitted.
Steady green: The module is working in FCoE

ng
are supported by V3R2 only. mode with link up but no data is being transmitted.

ni
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 43

ar
le
Notice that in the previous two images a comment was added in red text: Smart I/O cards are only supported in V300R200 (V3R2 for short) firmware.

This is important to remember, as the V3R2 firmware is the only version that supports the scale-out to 8 controllers.
s:
ce
ur
so
Re
ng
ni
ar
Le
re
Mo

Page | 352 HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses
OceanStor Dorado 2100 G2

The Huawei OceanStor Dorado systems are designed as all-flash arrays and therefore can only be equipped with Solid State Disks. This makes the OceanStor Dorado systems very useful in high-performance environments. Solid State Disks offer tremendous IOPS performance, but the capacity per disk is limited. On top of that, Solid State Disks are more expensive than traditional rotating disks of the same capacity.

Huawei offers two OceanStor Dorado models: the 2100 and the 5100.

At this point in time the OceanStor Dorado systems are still generation 2.

ua
.h
ng
OceanStor Dorado 2100 G2

ni
ar
le
: //
tp
ht
s:
ce
ur
so
Re

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 44


ng
ni
ar
Le
re
Mo

HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses Page |


353
OceanStor Dorado 2100 G2 Specifications

OceanStor Dorado 2100 G2 Specifications

n
Model Dorado 2100 G2

e
Number of controllers Dual active-active controllers

m/
Front-end port types 8 Gbit/s FC, 10 Gbit/s iSCSI (TOE), 40 Gbit/s InfiniBand QDR
Back-end port types 6 Gbit/s SAS 2.0 wide port

co
Max. number of I/O modules 2
Max. number of disk enclosures 3

i.
Max. bandwidth 10 GB/s
Max. IOPS 600,000

we
Access latency 500 μs (microseconds)
RAID levels 0, 5, 10

ua
Supported max. number of Hosts 512
Supported max. number of LUNs 2048

.h
Dimensions 2 U controller enclosure: 86.1 mm x 446 mm x 582 mm (3.39 in. x 17.56 in. x 22.91 in.)

ng
Key software features HyperThin (thin provisioning)

ni
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. ar Slide 45
le
: //

OceanStor Dorado 5100


tp
ht

OceanStor Dorado 5100


s:
ce
ur
so
Re
ng
ni
ar
Le
re
Mo

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 46

OceanStor Dorado 5100 Specifications

Page | 354 HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses
OceanStor Dorado 5100 Specifications

Model Dorado 5100


Number of controllers Dual active-active controllers

n
Front-end port types 8 Gbit/s FC, 10 Gbit/s iSCSI (TOE)

e
Back-end port types 6 Gbit/s SAS 2.0 wide port

m/
Max. number of I/O modules 12

Max. number of disk enclosures 4

co
Max. bandwidth 12 GB/s

Max. IOPS 1,000,000

i.
Access latency 500 μs (microseconds)

we
RAID levels 0, 1, 5, 10

Supported max. number of Hosts 1024

ua
Supported max. number of LUNs 2048

Dimensions 4 U Controller enclosure: 175 mm x 446 mm x 502 mm

.h
Key software features HyperImage (snapshot), HyperMirror (synchronous/asynchronous remote replication)

ng
ni
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 47

ar
le
: //
tp
ht
s:
ce
ur
so
Re
ng
ni
ar
Le
re
Mo

HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses Page |


355
OceanStor VIS6600T

The OceanStor VIS, or Virtual Intelligent Storage system, is designed for mid-range and high-end customers. It is built as a solution that can consolidate different storage devices and present the capacity as one big storage pool. It offers all value-added functions like snapshot, mirroring, and replication. The OceanStor VIS6600T series was therefore used in, for instance, government data centers, financial institutions, carriers, and large enterprises and institutions.

An OceanStor VIS6600T is not a storage device itself but acts as an intermediary between multiple storage arrays and the hosts that run applications needing storage capacity.

we
ua
.h
OceanStor VIS6600T Front

ng
ni
ar
le
: //
tp
ht
s:
ce
ur
so

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 48


Re

Its performance and scalability make the OceanStor VIS6600T a flexible solution. The expansion options for connecting to storage devices, to application servers and to remote OceanStor VIS6600T systems were numerous.


ar
Le
re
Mo

Page | 356 HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses
OceanStor VIS6600T Back

OceanStor VIS6600T Back

e n
m/
co
i.
we
ua
.h
ng
ni
ar
le
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 49
: //
tp

OceanStor VIS6600T Specifications


ht
s:

Model VIS6600T
ce

Number of Nodes 2 to 8 active-active load-balanced nodes

Processors per Node Multiple 64-bit cores


ur

Cache per Node 96 GB

Service Ports per Node Up to 20 x 8 GFC ports, 20 x 1 Gbit/s iSCSI ports, and 8 x 10 Gbit/s iSCSI ports
so

Storage virtualization
Basic Features Load-balancing and failover among links
Re

Multi-node clustering
Value-Added Features Heterogeneous volume mirroring / Snapshot / Data replication
• Huawei OceanStor family
• IBM System Storage DS series, TotalStorage DS series, V series, and XIV series
ng

• NetApp FAS series


• HP StorageWorks MSA series, EVA series, and XP series
Compatible Storage Systems
• EMC CLARiiON CX series, Symmetrix DMX series, and VNX series
• Fujitsu ETERNUS series
ni

• Hitachi AMS/WMS series, Lightning series, Thunder series, and USP/NSC series
• Oracle/SUN StorageTek series
ar

UltraPath (Windows/Linux/AIX), STMS (Solaris), PV-Links (HP-UX), and VxDPM (all


Multipathing Software
operating systems)
Compatible Host Operating Systems Windows, Linux, Solaris, HP-UX, AIX, VMware, Hyper-V, and Citrix XenServer
Le
re

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 50


Mo

HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses Page |


357
OceanStor 9000 Big Data Storage System

The OceanStor VIS6600T is now used less often, as the evolution of storage has continued and disk capacity and intelligent storage virtualization are now packed together. Perhaps the best example of this new generation of storage devices is the OceanStor 9000 Big Data system.

It offers everything: centralized management, huge capacity and scalability, NAS (CIFS and NFS) functions and all the enterprise-class data protection options needed.

co
i.
we
OceanStor 9000 Big Data Storage System

ua
.h
Performance node

ng
ni
Mini capacity node

ar
Capacity node
le
: //
tp
ht
s:
ce

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 51


ur
so

The OceanStor 9000 units are shipped with a number of disks installed. This number varies, but the 9000 can hold SSD, SAS and NL-SAS drives. Up to 288 OceanStor 9000 units (each then referred to as a node) can work together. Used as a NAS solution it offers a file system size of up to 40 PB.
ng
ni
ar
Le
re
Mo

Page | 358 HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses
OceanStor 9000 Specifications

OceanStor 9000 Specifications

n
Subsystem File Storage Subsystem

e
m/
System Architecture Fully symmetrical distributed architecture

Number of Nodes 3 to 288

co
Wushan distributed file system, which supports global namespace and can be dynamically expanded
System Features
up to 40 PB

i.
Applications File storage

Network Types 10 GE Ethernet, 40 GE Infiniband, or 1 GE

we
Data Protection Levels N+1, N+2, N+3, and N+4

ua
Data Disk Types SSD, SAS, SATA, and NL-SAS

Dynamic-storage tiering (InfoTier)

.h
Software Automatic client connection load-balancing (InfoEqualizer)
Space quota management (InfoAllocator)

ng
Data Recovery Quick automated parallel data recovery at up to 1 TB per hour

Supported Protocols NFS, CIFS, HDFS, NIS, Microsoft Active Directory, and LDAP

ni
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. ar Slide 52
le
: //
tp

OceanStor 9000 Specifications


ht

Subsystem Analysis Subsystem


s:

System Architecture Fully symmetrical distributed architecture Fully symmetrical distributed architecture
ce

Number of Nodes 3 to 32 3 to 32

WushanSQL distributed database, supporting


FusionInsight Hadoop, supporting Sqoop,
System Features quick retrieval of a large amount of structured and
ur

MapReduce, HBase, and Hive


unstructured data

Unstructured and semi-structured data analysis


so

Applications Enterprise Hadoop


and Hadoop

Network Types 10 GE or 1 GE 10 GE
Re

Data Protection
Mirror The same as file system
Levels
Data Disk Types SAS and SATA --
ng

The compression rate is automatically adjusted.


Software The average compression ratio reaches 3:1. --
Quick retrieval of massive files (InfoExplorer)
ni

Quick automated parallel data recovery at up to 1 Quick automated parallel data recovery at up
Data Recovery
TB per hour to 1 TB per hour
ar

FusionInsight Hadoop, supporting Sqoop,


Supported Protocols Database protocol JDBC and ODBC
MapReduce, HBase, and Hive
Le

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 53


re
Mo

HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses Page |


359
Cabling Diagrams

Most of the OceanStor models are designed to be flexible. The customer can decide to add disk enclosures for more storage capacity, add controllers for more performance, or do both. This means that a Huawei storage solution can consist of multiple controllers working together and many disk enclosures connected to them. In this section a few simple examples of the cabling schemes used with Huawei are discussed.

m/
co
Cabling Diagrams

i.
we
Displays the cabling required for connecting:

1. Controllers to disk enclosures.
2. Disk enclosures with other disk enclosures in a loop or chain.

• A loop or chain has a maximum number of disks.
• High-density enclosures and standard-density enclosures cannot co-exist in the same loop or chain.
• Multiple loops or chains can exist in one OceanStor system.

Cables between enclosures and controllers are of the mini SAS (HD) type.


ht

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 54


s:
ce
ur

The examples shown next are just about adding disk enclosures. For more detailed information on adding controllers please visit the Huawei support site.

For Field Service Engineers and Installation Engineers there is a link to remember:

http://support.huawei.com/onlinetool/datums/nettool/index.en.jsp

Here they will find the so-called Huawei Storage Networking Assistant. It is possible to select the required OceanStor model and the configuration type (number of controllers and enclosures). The Networking Assistant will then show the cabling diagram.

In the next images you will see some of the results of the Networking Assistant. Optionally you can ask your instructor for a live demonstration of the Networking Assistant.
re
Mo

Page | 360 HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses
OceanStor 5300 & 5500 V3

en
m/
co
i.
1 2 3 4 5 6

we
1 Ethernet Ports 4 Management

ua
network port
2 Mini SAS 5 Maintenance

.h
expansion ports network port
3 Fibre Channel host 6 Serial port

ng
ports

ni
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 55

ar
le
//

SAS expansion ports – Controller enclosure


:
tp
ht
s:
ce
ur
so
Re
ng
ni
ar
Le

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 56


re

The controllers in this example have on board expansion ports called EXP 0 and EXP 1.
Mo

HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses Page |


361
SAS expansion ports – Disk enclosure

ne
m/
co
i.
we
ua
.h
ng
ni
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 57

ar
le
Cabling 1
: //

Single OceanStor 5300/5500 V3 and single disk enclosure.


tp
ht
s:
ce
ur
so
Re
ng
ni
ar

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 58


Le

This example shows that the controller is connected via a red and a blue cable. It is not the case that both cables are needed to connect the controller with the disk enclosure; the two cables are there for redundancy reasons. If one of the cables fails, or if the enclosure's expansion module fails, there is still an alternative path available.

Page | 362 HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses
Cabling 2

Single OceanStor

5300/5500 V3
and three disk

n
e
m/
enclosures.

co
i.
we
ua
.h
ng
ni
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 59

ar
The image above shows a more complex solution where several disk enclosures are connected to the controller. As mentioned in the module that discussed SAS, there is a maximum number of disk enclosures that can be linked together in a single loop. If the solution requires more disk enclosures, additional loops must be created.
:
tp
ht

OceanStor 5600 & 5800 V3


s:
ce
ur
so
Re
ng

1 2 3 4
ni
ar

1 SAS/FC/Ethernet 3 Maintenance
ports network port
Le

2 Management 4 Serial port


network port
re

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 60


Mo

HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses Page |


363
The OceanStor 5600/5800 series is an example of an OceanStor that has no onboard SAS interface ports. Here, a SAS interface card must be inserted to be able to create SAS loops.

Cabling 1

Single OceanStor 5600/5800 V3 and single disk enclosure.

ne
m/
co
i.
we
ua
.h
ng
ni
ar
le
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 61
: //
tp

Cabling 2
ht

Single OceanStor
s:

5600/5800 V3
and three disk
ce
ur

Enclosures.
so
Re
ng
ni
ar
Le

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 62


re
Mo

Page | 364 HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses
Huawei Licensed Software Features

Licensed features are additional options a customer can purchase. Some of these features are applied in very specific situations, like SmartQoS and SmartPartition. Other features, like HyperSnap and HyperReplication, can be used to create a better backup strategy and/or disaster recovery strategy. Backup and DR strategies are of course a 24-hours-a-day application of the Huawei licensed features.

In this section we will list the most common licensed features and briefly explain their functions. In module 11 we will take a closer look at the most used licensed features: HyperSnap, HyperClone, SmartTier, HyperReplication and SmartThin. There will be lab exercises on some of the licensed features there as well.

.h
ng
ni
Licensed Software Features ar
le
: //

HyperClone SmartCache SmartPartition


tp

HyperCopy SmartCompression SmartQOS


ht

HyperMirror SmartDedupe
SmartThin
HyperReplication SmartErase
SmartTier
s:

HyperSnap SmartMigration
SmartVirtualization
ce

SmartMotion
ur
so

Note: Not all licenses are applicable to all OceanStor models


Re
ng
ni

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 63


ar
Le
re
Mo

HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses Page |


365
Licensed features descriptions

The previous image showed the various licenses that can be purchased for the OceanStor models. Most licenses are applicable to all models, from the "smaller" OceanStor 5300 all the way up to the big OceanStor 18800 models. Licenses sometimes depend on each other; in that case a licensed feature can only be used if the feature it depends on is licensed as well.

The list is in alphabetical order.

i.
we
HyperClone:

ua
Provides the clone function. Clone generates a full data copy of the source data in the local

.h
storage system.

ng
ni
HyperCopy:
Provides the LUN copy function. A LUN copy copies the source LUN data onto the target LUN,

ar
addressing the requirements of tiered storage, application upgrade, and remote backup.
le
//

HyperMirror:
:

HyperMirror backs up data in real time. If the source data becomes unavailable, applications can
tp

automatically use the data copy, ensuring high data security and application continuity.
ht
s:

HyperReplication:
Provides the remote replication function. Remote replication creates an available data duplicate of
ce

a local storage system almost in real time on a storage system that resides in a different region.
ur

The duplicate is instantly available without data restore operations, protecting service continuity
so

and data availability to the maximum.


Re

HyperSnap:
ng

Provides the snapshot function. A snapshot is not a full physical copy of data. It only provides a
ni

mapping table for locating data to implement quick data access.


ar
Le

SmartCache:
The SmartCache feature uses solid state drives (SSDs) as caching storage resources. It
re

accelerates system read performance in the case that there exists hot data, random small I/O’s
Mo

and more reads than writes.

SmartCompression:
SmartCompression reorganizes data to reduce storage space consumption and improve the
data transfer, processing, and storage efficiency without any data loss.

Page | 366 HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses
SmartDedupe:
SmartDedupe is a data downsizing technology that deletes duplicate data blocks in a storage
system to save physical storage capacity, meeting growing data storage needs.

SmartErase:

n
SmartErase erases unnecessary data on a specified LUN several times so that the data on the

e
LUN cannot be recovered in case of leakage.

m/
co
SmartMigration:

i.
SmartMigration migrates services on a source LUN transparently to a target LUN without

we
interrupting host services. After the migration, the target LUN can replace the source LUN to carry

ua
the services.

.h
ng
SmartMotion:
By analyzing services, SmartMotion evenly distributes data in the same type of medium for

ni
dynamically balanced capacity and performance.

ar
le
SmartPartition:
//

SmartPartition allocates the cache resources from storage system engines on demand to improve
:

QoS for mission-critical applications and high-level users.


tp
ht

SmartQoS:
SmartQoS controls the storage performance of one or more LUNs, and prioritizes the service
s:

quality of critical applications.


ce
ur

SmartThin:
so

SmartThin allocates storage space on demand. Within a specified quota of storage space, the
Re

OceanStor Enterprise Storage System provides storage space based on demands of applications
to save storage resources.
ng
ni

SmartTier:
ar

SmartTier periodically detects hotspot data per unit time, and promotes them from low-speed
Le

storage media to high-speed one, boosting the system performance at an affordable cost.
re

SmartVirtualization:
Mo

SmartVirtualization enables a local storage system to centrally manage storage resources of


third-party storage systems, simplifying storage system management and reducing maintenance
costs.

HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses Page |


367
Questions

Questions

ne
1. What is the difference between traditional RAID and Huawei’s RAID 2.0+ ?

m/
2. What are the three tiers the OceanStor models supports?

co
3. What is hot spare space used for?

i.
4. What is the difference between an Extent and a Grain?

we
5. List the five convergence levels that OceanStor V3 offers.

ua
.h
ng
ni
ar
le
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 64
: //

Answers:
tp

1. Traditional RAID offers protection on the level of physical disks. RAID 2.0+ uses storage virtualization and protects blocks (chunks) of data against data loss.
2. High Performance (SSD), Performance (SAS) and Capacity (NL-SAS).
3. Hot spare space is located across all disks in a disk domain. It holds reconstructed blocks of data in case a physical disk in the disk domain fails.
4. An Extent is the administrative unit used to create a thick LUN (default size is 2 MB). A Grain is a subdivision of an Extent into 64 KB blocks. Grains are used to build thin LUNs.
5. SAN & NAS, High-End & Mid-Range & Entry-Level, SSD & HDD, Primary & Backup Storage, and Heterogeneous convergence.
ng
ni
ar
Le
re
Mo

Page | 368 HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses
Exam preparation

Exam preparation (1)

n
e
Statement 1: RAID 2.0+ offers better protection against data loss than

m/
traditional RAID but it performs a little bit slower.

co
Statement 2: To rebuild a RAID 2.0+ protected failed drive takes a lot of
time as all drives are involved in the rebuild of the spare

i.
disk.

we
a. Statement 1 is true; Statement 2 is true.

ua
b. Statement 1 is true; Statement 2 is false.
c. Statement 1 is false; Statement 2 is true.

.h
d. Statement 1 is false; Statement 2 is false.

ng
ni
ar
le
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 65
: //
tp
ht

Exam preparation (2)


s:

2. Which of the following OceanStor models are available as generation v3?


ce

Select all that apply.


ur

a. OceanStor 2600.
b. OceanStor 5300.
so

c. OceanStor 6600.
Re

d. OceanStor 6800.
e. OceanStor 9000.
ng
ni
ar
Le
re

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 66


Mo

Answers:

1) D
2) B , D

HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses Page |


369
Summary

Summary

ne
• RAID 2.0+ uses storage virtualization.

m/
• Hot Spare Space replaces the use of spare disks.

co
• RAID 2.0+ offers higher protection rates and higher rebuild performance.
• OceanStor V3's main feature is convergence, in particular SAN & NAS convergence. All V3 OceanStor models natively support block-based and file-based storage.

.h
• Many licensed features exist that can be purchased separately.

ng
ni
ar
le
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 67
: //
tp
ht
s:
ce
ur
so
Re
ng
ni
ar
Le
re
Mo

Page | 370 HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses
Thank you

www.huawei.com
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 68

HCNA-storage V3 | OHC1109109 Huawei Storage Product information and Licenses Page |


371
en
m/
co
i.
we
ua
OHC1109110

.h
Huawei Storage Initial Setup and Configuration

www.huawei.com
Introduction

In this module the initial setup and configuration of the OceanStor is discussed. It is assumed that the physical rack mounting procedure has been completed and that all cabling is done.

The steps to set up an OceanStor for first-time use will be discussed here, as well as all necessary steps to create a LUN. Once a LUN is created, the process of mapping will be discussed. With mapping we give one or more servers access to the LUN. The lab exercises that come with this chapter will have you create LUNs and map them to Windows-based and/or Linux-based hosts.

we
ua
.h
Objectives

ng
ni
After this module you will be able to:

ar
Configure Disk Domains, Storage Pools, LUNs, LUN Groups, Hosts, Host Groups, Port
le
Groups and Mapping Views
//

 Connect the created LUN to a Windows server as a new volume


:

 Use Disk Management to prepare the volume for use in Windows


tp
ht

Module Contents
s:
ce
ur

1. Create a Disk Domain


2. Create a Storage Pool
so

3. Create a LUN
Re

4. Create a LUN Group


ng

5. Create a Host
6. Create a Host Group
ni

7. Create a Port Group


ar

8. Create a Mapping View


Le

9. Perform OS specific steps


re
Mo

HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration Page | 375
e n
m/
co
i.
we
ua
.h
ng
ni
ar
le
: //
tp
ht
s:
ce
ur
so
Re
ng
ni
ar
Le
re
Mo

Page | 376 HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration
Initial Setup

Initial Setup

e n
m/
Initial Setup Create Host

co
Create Disk Domain Create Host Group

i.
we
Create Storage Pool Create Port Group

ua
Create LUN Create Mapping View

.h
Create LUN Group OS Specific Steps

ng
ni
ar
le
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 3
: //
tp

After the physical rack mounting procedure has been completed and all cabling is done, the first
step is to set up an IP address that will be used to connect to the OceanStor device for
ht

management. This requires a serial cable connected to the serial interfaces of both of the
s:

controllers. The serial interface port is labeled: I0I0I


ce
ur

Setting the Management IP addresses


so
Re

Serial cable used to connect to controller (115,200 Baud)


ng
ni
ar
Le
re
Mo

Serial cable
Management cable

Default IP addresses : 192.168.128.101 and 192.168.128.102

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 4

HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration Page | 377
Every OceanStor device is shipped with a universal IP address set for the management interfaces.
That address is set to 192.168.128.101 for the first controller and 192.168.128.102 for the second
controller. A terminal program that has the option to run serial communication can now be used to
connect to the individual controllers. Many of those terminal programs exist. In the labs a well-
known program called Putty is used.

ne
The connection in Putty must be set to 115,200 Baud. After the connection is established the

m/
login screen appears.

co
i.
we
ua
Initial Setup Commands

.h
ng
ni
Initial Setup Commands

ar
le
Default login with:
//

• Username = admin
• Password = Admin@storage
:
tp
ht
s:
ce
ur
so
Re

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 5


ng
ni

On the command line prompt, indicated with admin:/>, the next steps should be taken the very
ar

first time the OceanStor is going to be used.


Le

For security reasons it is very important to change the password for the admin user (who has the
re

highest administrator level rights) from Admin@storage into something only the authorized
Mo

system administrators know.

As the default IP address is not always in the range the administrator uses for management we
probably have to change that as well. The new ip address that will be set is from then on used to
launch the web based user interface called DeviceManager.

Page | 378 HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration
Initial Setup Commands

Task 1: Change login password [recommended]


Task 2: Set management IP addresses

e n
m/
CLI command:

co
admin:/> changesystemmanagement_ipeth_port=CTEO.SMM0.MGMT0

i.
ip_type=ipv4_addressipv4_address=172.16.190.2mask=255.255.0.0

we
gateway_ipv4=172.16.0.1

ua
.h
Note:  indicates a space. Command is typed as one line of text!

ng
ni
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 6
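Once the new management IP address has been set, it can be verified from the same CLI session with the show command that is also listed on one of the next slides (the exact output layout differs per firmware version):

admin:/> show system management_ip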

ar
le
In the lab environment it is not a problem to use the default settings for the admin password. So
//

we do not change it here. Again in real life only a limited number of persons should have access
:

to the OceanStor.
tp
ht

In that respect it is best to create multiple user accounts with different levels. In the picture below
there are some commands that show, create and delete users or change their level.
s:
ce
ur

Initial Setup Commands


so
Re

Some useful CLI commands:


ng

□ admin:/> show system management_ip


ni

□ admin:/> showuser
ar

□ admin:/> chgpasswd

□ admin:/> adduser –u <username> -l <level>


Le

□ admin:/> resetpasswd –u <username>


re

□ admin:/> chguserlevel –u <username> -l <new level>


Mo

□ admin:/> deluser –u <username>

Note <....> depicts an input of a name, password or other parameter

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 7
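As a small sketch of how the user management commands listed above fit together in practice (the user name is only an example, and <level> is left as a placeholder because the valid level names depend on the firmware version):

admin:/> adduser -u storage_op01 -l <level>
admin:/> showuser
admin:/> chguserlevel -u storage_op01 -l <new level>
admin:/> deluser -u storage_op01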

HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration Page | 379
Initial Setup Commands

More useful CLI commands:


□ admin:/> showsubrack

ne
□ admin:/> showtemperature

m/
□ admin:/> showmaster

co
□ admin:/> swapsys

i.
□ admin:/> upgradesys -i <host ip address> -u <username> -p <password> -f <file name> [-force]

Note: <....> depicts an input of a name, password or other parameter
[...] indicates an optional parameter

ni
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 8

ar
le
Although most of the day-to-day configuration will be done using the graphical user interface there
//

are some CLI commands that can be used to monitor the OceanStor.
:
tp

Typing these commands offers a quick overview of the status of the controller, the temperature
ht

etc.
s:

Other examples are the commands that relate to the role of the two controllers. Huawei
ce

OceanStor devices work as dual controllers in the active-active mode. Both controllers are
ur

actively moving data to and from hosts, but still there must be a hierarchy for the controllers. That
so

hierarchy is called primary master – secondary master. To find out who is the master we can use
Re

the command showmaster. The result of that command is something like this:
ng

admin:/> showmaster
=====================
Master Status
-------------------------------------
Status | Primary
=====================

It would mean that the controller on which this command was typed is in fact the master controller.

These are just a few examples of the CLI commands that can be used. For each firmware version
of the OceanStor there is an extended Command Line Reference guide available. In there we find

Page | 380 HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration
hundreds of commands. Some of these commands are used to create LUNs and create
mappings to host. As most administrators will perform these tasks in the graphical user interface
we will look at that now.

In the lab guide that comes with this course you will find that the initial configuration has been

n
done and management IP addresses have been determined and set.

e
m/
After the initial setup via the serial connection is completed, the graphical user interface will be

co
used to perform the next steps.

i.
we
ua
.h
Launching the DeviceManager User Interface

ng
ni
Launching the DeviceManager User Interface
ar
le
//

In a supported webbrowser type: http://<management ip>:8088


:
tp
ht
s:

Default password:
ce

Admin@storage
ur
so
Re
ng

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 9


ni
ar

The default login information is:

User name = admin
Password = Admin@storage

After that the main window of DeviceManager will be shown.

HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration Page | 381
OceanStor DeviceManager Main Window

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 10

ar
le
Home: Brings the user back to the main window

System: Rack and controller information, restarting controller(s), configuration of IP addresses and configuration of FC ports

Provisioning: Various volume-related tasks (create, expand, delete) and mapping of the volumes (host group, mapping view). Here also disk domains and storage pools are managed.

Data Protection: Options for snapshots, clones and replications

Monitor: Monitoring information of the entire system (e.g. IOPS, network bandwidth)

Settings: Initial configuration tasks, export data, restart/power off devices, basic settings (time, location), alarm & performance monitoring settings, user settings

Page | 382 HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration
Create Disk Domain

Steps to map a LUN in Windows

Initial Setup          Create Host
Create Disk Domain     Create Host Group
Create Storage Pool    Create Port Group
Create LUN             Create Mapping View
Create LUN Group       OS Specific Steps

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 11
: //
tp

Create Disk Domain

Go to the Create Disk Domain dialog box: Click the Provisioning button
Click Disk Domain

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 12


Mo

The first step in the process of creating a volume is to allocate storage capacity. That storage capacity has to come from physical disks. A disk domain must be created to group physical disks together (optionally with different disk types). That is a Provisioning task.

HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration Page | 383
At the right navigation bar, click the Provisioning button.

In the Provisioning screen, click the Disk Domain button. You will find the Disk Domain button in the Storage Configuration and Optimization area (bottom part of the Provisioning window).

There are several steps that you have to take before you can map a LUN to an Operating System. All options can be found in the Provisioning screen. At the top of the Provisioning screen, there is a diagram with all the steps that you have to take.

A Disk Domain is a set of disks of the same type or different types. Isolated Disk Domains carry different services, preventing mutual service impact.

Disk Domain Wizard

Disk Domain Wizard

• Click the Create button

• Enter a Name and Description for the Disk Domain

• Select one option in the Select Disk area

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 13


Le

1. Enter a Name and Description for the Disk Domain.
2. In the Name text box, enter a name for the Disk Domain.
3. In the Description text box, enter the function and properties of the Disk Domain. The descriptive information helps identify the Disk Domain.
4. Select one option in the Select Disk area. The options are: All available disks, Specify disk type and Manually select.

Page | 384 HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration
All available disks:
When you select this option, the system will use all available disks. You can choose a Hot Spare Policy for each storage tier. From the dropdown menu you can choose High, Low or None.

Specify disk type:
When you select this option, the system gives you the possibility to select one or multiple storage tiers, as well as a specific number of disks per storage tier and a Hot Spare Policy for each storage tier.

Manually select:
When you select this option, you are able to select specific disks per storage tier and the Hot Spare Policy.

NOTE: You need at least four disks per storage tier to create a Disk Domain.

In the following image the screen is shown where the administrator can manually select the disks that should be included in the disk domain.

Create Disk Domain: Manual Select Disks

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 14


Mo

We can see that from the twelve available disks six have been selected for the disk domain. As they are all the same type, the disk domain would represent a single-tier disk domain (here a performance tier with SAS disks).

HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration Page | 385

The number of disks and the type of disks determine the available capacity, the performance characteristics and the possibility for SmartTier.

Disk Domain created

The success box will show that the operation has succeeded.

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 15

The success box will show that the operation has succeeded.

Page | 386 HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration
Create Storage Pool

Storage pools are subdivisions of disk domains, and there can be multiple storage pools in a disk domain. It is important to remember that a storage pool is assigned a RAID protection method. All LUNs that are created inside that storage pool will inherit those RAID protection settings.

Steps to map a LUN in Windows

Initial Setup          Create Host
Create Disk Domain     Create Host Group
Create Storage Pool    Create Port Group
Create LUN             Create Mapping View
Create LUN Group       OS Specific Steps

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 16



HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration Page | 387
Create Storage Pool

Go to the Create Storage Pool dialog box: On the right navigation bar, click Provisioning
Click Storage Pool

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 17

1. At the right navigation bar, click the Provisioning button.
2. In the Provisioning screen, click the Storage Pool button. You will find the Storage Pool button in the Storage Configuration and Optimization area.


Storage Pool Window

• Click Create

An alternative way to open the Create Storage Pool wizard is via the flowchart-like diagram at the top of the screen.

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 18

3. In the Storage Pool window, click the Create button to start the Create Storage Pool wizard.

Page | 388 HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration
At this point the administrator should have a clear idea about the LUNs that need to be created later on in this storage pool. The RAID properties selected for this storage pool determine protection, overhead and rebuild behavior. The RAID types supported are: RAID 1, RAID 10, RAID 3, RAID 5, RAID 50 and RAID 6.

With RAID 5 there are three settings to choose from:

2D+1P: Two chunks hold user data and one parity chunk is calculated across these chunks. Overhead = 33%

4D+1P: Four chunks hold user data and one parity chunk is calculated across these chunks. Overhead = 20%

8D+1P: Eight chunks hold user data and one parity chunk is calculated across these chunks. Overhead = 11%

To use 4D+1P and 8D+1P there must be at least five and nine disks, respectively, in the disk domain.
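As a quick check of these percentages, using round numbers rather than any particular disk model: every nine chunks written in an 8D+1P layout carry eight chunks of user data, so

parity overhead = 1 / (8 + 1) ≈ 11%, usable fraction = 8/9 ≈ 89%

For 4D+1P the parity fraction is 1/5 = 20% and for 2D+1P it is 1/3 ≈ 33%, which matches the figures above. Hot spare capacity and metadata are not included in this simple calculation.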
: //
tp

Create Storage Pool wizard

• Enter a Name and Description for the Storage Pool

• Select Usage type

• Select a Disk Domain

• Select Storage Medium

Optional: Click Set SmartTier Policy to set the Service Monitoring Period and Data Migration Plan.

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 19


re
Mo

1. Enter a Name and Description for the Storage Pool.
2. In the Name text box, enter a name for the Storage Pool.
3. In the Description text box, enter the function and properties of the Storage Pool. The descriptive information helps identify the Storage Pool.

HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration Page | 389
4. In the Usage box, select a usage type for the Storage Pool. The value can be Block Storage Pool or File Storage Service.
NOTE: The usage type cannot be changed once it is configured.
5. In Disk Domain, select a Disk Domain from the dropdown list.
6. In Storage Medium, select the storage tiers and RAID policy needed for the Storage Pool. Choose a capacity per storage tier; the unit can be GB or TB.

Optional: Click the Set SmartTier Policy button to set the Service Monitoring Period and Data Migration Plan.

The Set SmartTier Policy button becomes available as soon as more than one tier is used in the disk domain and storage pool. SmartTier is a method in which data is moved from disks of one tier to disks of another tier, based on how frequently the data is used. Data that is rarely used is best stored on cheaper storage capacity, while frequently used data is best located on higher-performance disks. SmartTier can arrange for this to happen. However, data migration has a certain impact on the performance of the system. That is why Huawei schedules the migration jobs to run at off-peak hours. To determine which periods are off-peak, the system must be monitored for I/O performance. In the Service Monitoring Period we determine when the OceanStor will do performance monitoring. Once the monitoring has provided the system with the off-peak periods, we can use the Data Migration Plan option to have the OceanStor migrate data only during these off-peak periods.

As SmartTier is a licensed option, and not everybody uses this function, these settings are optional.


Storage Pool created

The execution result box will display that the operation has succeeded.

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 20

The execution result box will display that the operation has succeeded.

Page | 390 HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration
Create LUN

Steps to map a LUN in Windows

Initial Setup          Create Host
Create Disk Domain     Create Host Group
Create Storage Pool    Create Port Group
Create LUN             Create Mapping View
Create LUN Group       OS Specific Steps

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 21
: //
tp

A LUN or Logical Unit Number is an amount of space that is allocated inside a storage pool for a host. A LUN has the same RAID protection as the storage pool. A LUN can be created as a thick or a thin LUN. Thick LUNs pre-allocate all required storage capacity even though no user data is stored yet. Thin LUNs only occupy physical storage when user data is written. For thin LUNs the SmartThin license must be acquired.

HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration Page | 391
Create LUN

Go to the Create LUN dialog box: On the right navigation bar, click Provisioning
Click LUN

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 22

To create a LUN, follow these steps:

1. At the right navigation bar, click the Provisioning button.
2. In the Provisioning screen, click the LUN button. You will find the LUN button in the Block Storage Service area.

ce

LUN window

• Click Create

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 23

Page | 392 HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration
Create LUN wizard

• Enter a Name and Description for the LUN

• Fill in the Capacity

• Fill in the Quantity

• Select the Owning Storage Pool

• Click the Advanced button

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 24

ar
le
1. Enter a Name and Description for the LUN.
2. In the Name text box, enter a name for the LUN.
3. In the Description text box, enter the function and properties of the LUN. The descriptive information helps identify the LUN.

Optional: If the SmartThin licensed feature is purchased, it is possible to create thin provisioned LUNs. To enable this feature, check the Enable checkbox. When the SmartThin feature is enabled, the Create LUN wizard will show an option called Initially Allocated Capacity. Example: when the total capacity is 50 GB and you fill in the Initially Allocated Capacity with 10 GB, the LUN will take just 10 GB of the Storage Pool. This LUN can grow until it reaches 50 GB.

4. Fill in the Capacity for the LUN.
5. In the dropdown box, select one of the following units: Blocks, MB, GB or TB.
6. Fill in the Quantity.
It is possible to create a maximum of 500 LUNs at the same time. If the quantity is 5, the system will create five LUNs with the same capacity. The names of the LUNs will be extended with 001, 002 up to 005.
7. Select an Owning Storage Pool from the dropdown list. The LUN will be created in the Storage Pool that is selected.
8. Set the advanced properties for the LUN by clicking the Advanced button.

HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration Page | 393
Advanced settings 1

• Click the Properties tab

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 25

ar
le
In the Advanced menu there are options which could be useful depending on your service needs.

Properties tab

The option Owning Controller ID allows the administrator to force the ownership of a LUN to a specific controller. The default setting is Automatic, which means that LUNs will alternately be owned by the two controllers:

- First LUN to controller 0
- Second LUN to controller 1
- Third LUN to controller 0, etc.
ng
ni

There are four options for the Initial Capacity Allocation Policy:

• Default: Automatic allocation
• Allocate from the high-performance tier first
• Allocate from the performance tier first
• Allocate from the capacity tier first

Page | 394 HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration
In the Cache Policy area the Read and Write Policy can be changed. There are three options available for the Read Policy as well as for the Write Policy:

• Resident:
For random cache access. Data is retained in cache the longest to improve the hit ratio.

• Default:
For regular cache access. Keeps a balance between the cache hit ratio and disk access performance.

• Recycle:
For sequential cache access. The idle cache resources are released for other access requests.

Select a Prefetch Policy from the Prefetch Policy area.

Prefetching is a technique that can be used to improve the read performance for data read from disks. The technique analyses data that was read before and determines whether the data may be used again soon. That data is prefetched (loaded before the user requests it) and stored in the read-ahead RAM cache of the controller. The next time the user requests the same data, it is read from RAM instead of from disk. Next to the performance gain there is an additional benefit: disks have to do fewer seeks to find data, which extends their lifespan slightly.

Advanced settings 2

• Click the Tuning tab

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 26

HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration Page | 395
Tuning tab

In the Tuning tab it is possible to configure some licensed features. The licensed features that are available are as follows:

• SmartTier
• SmartQoS
• SmartCache
• SmartDedupe & SmartCompression
• SmartPartition

LUN created

The execution result box will display that the operation succeeded.

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 27

The execution result box will display that the operation succeeded.

Page | 396 HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration
Create LUN Group

Steps to map a LUN in Windows

Initial Setup          Create Host
Create Disk Domain     Create Host Group
Create Storage Pool    Create Port Group
Create LUN             Create Mapping View
Create LUN Group       OS Specific Steps

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 28
: //
tp

With Huawei storage, a LUN can be used by a host once it has been mapped to that host. This will be discussed in the next section. However, if a number of LUNs must be presented to a host, a LUN Group can be created. A host that has access privileges to a LUN Group has access to all LUNs of that LUN Group.

HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration Page | 397
Create LUN Group

Go to the Create LUN dialog box: On the right navigation bar, click Provisioning
Click LUN

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 29

1. At the right navigation bar, click the Provisioning button.
2. In the Provisioning screen, click the LUN button. You will find the LUN button in the Block Storage Service area.


Create LUN Group

• Select the LUN Group tab. Click Create

• Select LUNs from the Available LUNs to move to the Selected LUNs area

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 30

3. When the LUN window is opened, click the LUN Group tab. Click the Create button to start the Create LUN Group wizard.

Page | 398 HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration

4. In the Create LUN Group wizard, enter a Name and Description for the group.
5. In the Name text box, enter a name for the LUN Group.
6. In the Description text box, enter the function and properties of the LUN Group. The descriptive information helps identify the LUN Group.
7. Select the LUN(s) to add to the LUN Group.
8. In the Available LUNs area, select one or multiple LUNs based on your service needs.
9. Click the Right arrow button ( > ) to add the LUNs to the Selected LUNs area.

LUN Group created

The execution result box will display that the operation succeeded.

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 31

The execution result box will display that the operation succeeded.

In module 11 we will show how a snapshot can be created of a LUN. A snapshot can be mapped to a host by adding the snapshot to the LUN Group that already contains the original LUN.

HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration Page | 399
Create Host

Steps to map a LUN in Windows

Initial Setup          Create Host
Create Disk Domain     Create Host Group
Create Storage Pool    Create Port Group
Create LUN             Create Mapping View
Create LUN Group       OS Specific Steps

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 32
: //
tp

A host is the physical server that runs an application that generates data to be stored on a LUN created in an OceanStor device. From the DeviceManager perspective a host consists of a number of I/O interfaces that the host uses to connect to the storage network.

When a host is created, the I/O interfaces are identified, but also the operating system that the host runs. The IP address of the host must also be entered in this process.

Page | 400 HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration
Create Host

Go to the Create Host dialog box: On the right navigation bar, click Provisioning
Click Host

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 33

1. At the right navigation bar, click the Provisioning button.
2. In the Provisioning screen, click the Host button. You will find the Host button in the Block Storage Service area.
3. Add initiators to hosts and add the hosts to host groups to establish a logical connection between application servers and the storage system.

Create Host wizard 1

• On the Host tab: Click Create → Manually Create

• Enter a Name and Description for the Host

• Select an OS from the dropdown list

• Enter an IP address

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 34

In the Host screen you can create Hosts and Host Groups.

HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration Page | 401
To create a Host:

1. Click the Host tab.
2. Click the Create button and select the Manually Create option. This will open the Create Host wizard.
3. Enter a name and description for the Host.
4. In the Name text box, enter a name for the Host.
5. In the Description text box, enter the function and properties of the Host. The descriptive information helps identify the Host.
6. Select an Operating System from the dropdown list.
7. Enter the IP address for the host.

Optional: Enter a Device Location.

Create Host wizard 2

• Select one or multiple initiators from the Available Initiators area and move the selected initiators to the Selected Initiators area

• Click the Create button if there is no initiator available

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 35


ng
ni

1. Select one or multiple initiators from the Available Initiators area and click the Down arrow to move the selected initiators to the Selected Initiators area.
2. If there is no initiator available, click the Create button.

Page | 402 HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration
Create Initiator

• Select the initiator Type

When you choose iSCSI, you will need to use the IQN.
When you choose FC/IB, you will need the WWPN.

Note:
IQN = iSCSI Qualified Name
WWPN = World Wide Port Name

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 36

If the initiator type is iSCSI you must enter the IQN. If you select Fibre Channel or InfiniBand (IB) as the initiator type, you need the WWPN to create the new initiator. For the iSCSI initiator it is possible to enable CHAP authentication.
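For reference, a typical IQN and WWPN look as follows (both values below are made-up examples, not identifiers from this lab):

IQN:  iqn.1991-05.com.microsoft:apphost01   (the Windows initiator typically builds its default IQN from this reversed-domain pattern plus the host name)
WWPN: 21:00:00:24:ff:4c:8e:11               (a 64-bit address written as 16 hexadecimal digits)

On a Windows host the IQN can be read in the iSCSI Initiator properties; the WWPN of an FC HBA is usually printed on the card and reported by the HBA management tool.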
ht

Create Host wizard 3

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 37

The last part of the Create Host wizard is the summary and the confirmation. IQNs should be unique for a host, but they are not mechanically fixed inside a host; in fact the IQN is a string that can be changed quite easily. It is therefore important that the administrator accepts the consequences in the Danger window. Once the checkbox is checked and the OK button is clicked, the Execution Result will be shown.


Page | 404 HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration
Create Host Group

Steps to map a LUN in Windows

Initial Setup          Create Host
Create Disk Domain     Create Host Group
Create Storage Pool    Create Port Group
Create LUN             Create Mapping View
Create LUN Group       OS Specific Steps

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 38
: //
tp

In a host group there can be one or more hosts. A LUN (or LUN Group) can be mapped to the host group. If multiple hosts are added to a host group, all of those hosts will see the mapped LUN (or LUN Group). The system will warn you when you add multiple hosts to a Host Group. The warning states that, if the hosts do not belong to a cluster, there is a real possibility that data will become corrupt: if more than one host, without the intelligence of a cluster or file-locking mechanism, can access and modify the same files on a LUN, the data may become corrupt.

In environments where a lot of LUNs should be accessible by many hosts, the concept of host groups saves the administrator a lot of work. The administrator maps a LUN to a host group instead of mapping a LUN multiple times to multiple hosts.

Especially with server virtualization such as VMware and Hyper-V, using Host Groups is very common for making shared storage (referred to as datastores). With the advanced options that VMware and Hyper-V offer, it is a necessity that all VMware or Hyper-V hosts can see the datastores at the same time.

HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration Page | 405
Create Host Group

Go to the Create Host dialog box: On the right navigation bar, click Provisioning
Click Host

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 39

1. At the right navigation bar, click the Provisioning button.
2. In the Provisioning screen, click the Host button. You will find the Host button in the Block Storage Service area.

Create Host Group wizard

• Open the Host Group tab and click Create

• Enter a name for the Host Group

• Select a host from the Available Hosts area and move it to the Selected Hosts area

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 40

In the Host screen you can create Hosts and Host Groups.

Page | 406 HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration
To create a Host Group:

1. Click the Host Group tab.
2. Click the Create button. This will open the Create Host Group wizard.
3. In the Create Host Group wizard, enter a name and description for the Host Group.
4. In the Name text box, enter a name for the Host Group.
5. In the Description text box, enter the function and properties of the Host Group. The descriptive information helps identify the Host Group.
6. Select one or multiple hosts from the Available Hosts area and click the Right arrow to move the selected host(s) to the Selected Hosts area.

Host Group created

The execution result box will display that the operation succeeded.

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 41

During the process of creating the Host, the initiators are assigned to the host. Typically the interface card in the physical host has multiple ports. The host definition will then list all individual ports of the cards as part of the new host. All ports will then be used as paths when a LUN is mapped to that host. Sometimes, however, we want to specify which ports should be used as active data paths in the mapping of a LUN. In that case a Port Group can be made. Inside a port group we group interface ports together. When the mapping is done over the host, it uses all physically present interface ports. When the mapping is done using a port group, only the ports listed in the port group will be used.

HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration Page | 407
Create Port Group

Steps to map a LUN in Windows

Initial Setup          Create Host
Create Disk Domain     Create Host Group
Create Storage Pool    Create Port Group
Create LUN             Create Mapping View
Create LUN Group       OS Specific Steps

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 42
: //
tp

Create Port Group

Go to the Create Port dialog box: On the right navigation bar, click Provisioning
Click Port

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 43

1. At the right navigation bar, click the Provisioning button.
2. In the Provisioning screen, click the Port button. You will find the Port button in the Storage Configuration and Optimization area. In the Port screen you can view and manage host ports, port groups, VLANs and logical ports.

Page | 408 HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration
Create Port Group wizard

• Open the Port Group tab. Click Create

• Enter a Port Group name and description

• Select a port from the Available Ports area and move it to the Selected Ports area

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 44

ar
le
3. When the Port screen is opened, click the Port Group tab.
4. Click the Create button. This will open the Create Port Group wizard.
5. In the Create Port Group wizard, enter a name and description for the Port Group.
6. In the Name text box, enter a name for the Port Group.
7. In the Description text box, enter the function and properties of the Port Group. The descriptive information helps identify the Port Group.
8. Select one or multiple ports from the Available Ports area and click the Right arrow to move the selected port(s) to the Selected Ports area.
9. Click OK to finish the Port Group creation.

The Execution Result window is shown next.

HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration Page | 409
Create Mapping View

Steps to map a LUN in Windows

Initial Setup          Create Host
Create Disk Domain     Create Host Group
Create Storage Pool    Create Port Group
Create LUN             Create Mapping View
Create LUN Group       OS Specific Steps

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 45
: //
tp

Create Mapping View

Go to the Create Mapping View dialog box: On the right navigation bar, click Provisioning
Click Mapping View

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 46

1. At the right navigation bar, click the Provisioning button.
2. In the Provisioning screen, click the Mapping View button. You will find the Mapping View button in the Block Storage Service area.

Page | 410 HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration
A Mapping View is a view that reflects the access restrictions and mapping among a LUN Group,
a Port Group and a Host Group.

Create Mapping View wizard

• Click Create

• Enter a Name and Description for the Mapping View

• Click the triple dots button to select a LUN Group, Host Group and Port Group

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 47

In the Mapping View screen:

1. Click the Create button.
2. Name and describe the Mapping View.
3. In the Name text box, enter a name for the Mapping View.
4. In the Description text box, enter the function and properties of the Mapping View. The descriptive information helps identify the Mapping View.
5. Click the triple dots button to select a LUN Group, Host Group and Port Group.

HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration Page | 411
Create Mapping View Wizard

Check the checkbox and click OK

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 48

Read the message, check the checkbox and click OK to create the Mapping View.

Mapping View created

The execution result box will display that the operation succeeded.

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 49

The execution result box will display that the operation succeeded.

Now all steps are completed to create a LUN and map it to an Operating System. In the upcoming
section we will show how the operating system (in this case Windows) can detect the new LUN
and use it as a volume to put data on.

Page | 412 HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration
OS Specific Steps

Steps to map a LUN in Windows

Initial Setup          Create Host
Create Disk Domain     Create Host Group
Create Storage Pool    Create Port Group
Create LUN             Create Mapping View
Create LUN Group       OS Specific Steps

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 50
: //
tp

In most situations the host is physically connected to the storage network via switches, using the FC or the iSCSI protocol. If the protocol used is iSCSI, the detection of new LUNs in Windows works a bit differently than with the FC protocol.

HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration Page | 413
OS Specific Steps

Use the iSCSI Initiator to map LUNs to a host

• Open the iSCSI Initiator

• Click the Discovery tab

• Click the Discover Portal… button

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 51

We need a few more steps before we can use the iSCSI-based LUN as a new partition in our Operating System:

1. First we have to connect the LUN to the Operating System by using the iSCSI initiator. For that we need to open the iSCSI Initiator and click the Discovery tab.
2. Configure the Target Portal by clicking the Discover Portal… button.
3. Enter the IP address or DNS name. In this case we need to enter the IP address that is configured on one of the Huawei OceanStor V3 network ports.

NOTE: When the LUN is connected via Fibre Channel, the LUN will immediately be connected to the Operating System after creating the Mapping View. You only need to use Disk Management to create a new partition.
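The same discovery and login can also be scripted with the built-in Windows iscsicli tool instead of the GUI. A minimal sketch is shown below; the portal address is a placeholder for an iSCSI port IP address of the OceanStor, and the target name must be copied from the output of ListTargets:

C:\> iscsicli QAddTargetPortal <iSCSI port IP address>
C:\> iscsicli ListTargets
C:\> iscsicli QLoginTarget <target IQN reported by ListTargets>

After a successful login the new disk appears in Disk Management, just as with the GUI procedure.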

Page | 414 HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration
iSCSI Initiator Properties

• Click the Targets tab

• Select the Inactive target and click the Connect button

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 52

After the Target Portal is discovered in the Discovery tab, click the Targets tab. Notice that a new target has been discovered. The status of the new target is Inactive. Select the target and click the Connect button below the Discovered targets area.

HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration Page | 415
Disk Management

For both iSCSI and FC connected LUNs, the newly discovered LUN should be presented to the operating system. This is done via the Disk Management module of Windows.

All new volumes (Windows uses the term volume when a LUN is presented to it) will "appear" in Disk Management. This is the case for LUNs created in Huawei OceanStor devices, but also for USB sticks and CD/DVDs, as they also represent storage capacity.

Disk Management

Use Disk Management to create a new disk partition

• Open the Server Manager

• Expand the Storage part

• Click Disk Management

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 53

The LUN is now connected to the Operating System. Open the Server Manager, expand the Storage part and click Disk Management.



Page | 416 HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration
Rescan Disks

• At the top left of the screen, click Action and select the Rescan Disks option

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 54

If the LUN is not yet discovered by Disk Management, click the Action button at the top menu bar and select the Rescan Disks option. This option makes the operating system perform a new hardware scan for disk devices. This may take some time to complete, but after a while the new storage device will be shown.


ht
s:

New Partition Discovered

• A new partition will show. Click the right mouse button and select Online

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 55

The new partition will now show up in Disk Management. The partition is Offline. To put it online,
click the right mouse button and select Online.

HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration Page | 417
Initialize Disk

• Click the right mouse button and select Initialize Disk

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 56

The disk is not yet initialized. To initialize the disk, click the right mouse button and select the Initialize Disk option. This will open the Initialize Disk window.

Initialize Disk

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 57

In the Initialize Disk window, select one of the partition styles and click OK. The disk is now initialized. MBR is the most common style, but GPT should be selected for LUNs that are bigger than 2 TB. Initializing a disk means that Windows creates a unique identifier for the disk and stores that ID on the disk. In older versions of Windows this was referred to as the disk signature.

Page | 418 HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration
Create New Simple Volume

• Click the right mouse button and select New Simple Volume…

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 58

Although the disk now has a signature and is initialized, there is not yet a file system on it. That is the next step.

Notice that the new storage disk is labeled as Basic. Windows has two types of disks, called Basic and Dynamic. Dynamic disks were introduced with Windows NT in the 1990s. In that period Microsoft replaced the file system it used until then (FAT) with NTFS, short for New Technology File System. Dynamic disks were introduced because they could be expanded. Another reason is that Windows supports software RAID: Windows can handle two individual volumes and perform RAID actions on them. Two dynamic disks could be spanned together, which basically means they were put in a RAID 0 configuration. Options such as mirroring (RAID 1) and striping with parity (RAID 5) were also offered. However, in practice the majority of disks used are Basic disks.

The space on the disk is still unallocated. To create a new partition, click the right mouse button and select the New Simple Volume… option. This will open the New Simple Volume Wizard.

HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration Page | 419
New Simple Volume Wizard

Follow the steps of the New Simple Volume Wizard to create a new Simple Volume
Images show Windows 2008 screenshots

Note: Windows 2003 used the term partition instead of Simple Volume

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 59

An important step in the New Simple Volume Wizard is Specify Volume Size. Here the administrator decides how much of the physical capacity will be assigned to the new Simple Volume. Usually this is the total amount, but less than the maximum is also possible. The remaining space can be added to the volume at a later stage if necessary.
s:

New Simple Volume Wizard

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 60

Page | 420 HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration
The next step is to assign a drive letter to the new Simple Volume. Windows supports up to twenty-six drive letters. If we need more than twenty-six volumes attached to the host, mount points can be used. The next window is the Format Partition window.

Here we select the file system to be used and the allocation unit size (or block size). This allocation unit size is a software-defined size and has no relation to the block size (chunk or stripe) of the physical disk drive.

A volume label must be entered to identify the new volume next to the drive letter. The previous image shows the Format Partition settings to be:

- File System: NTFS
- Allocation unit size: Default
- Volume label: LUN001

By default the Perform a quick format checkbox is checked. This means that, especially with large LUNs/volumes, the time needed to format the disk is much shorter. A quick format only writes the minimal required information on the volume; a full format would write empty data blocks across the entire volume.
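The same online/initialize/format sequence can also be performed with the built-in diskpart tool instead of the GUI. The sketch below assumes the new LUN shows up as disk 1 and uses example values for the label and drive letter; convert gpt is only needed for volumes larger than 2 TB:

DISKPART> list disk
DISKPART> select disk 1
DISKPART> online disk
DISKPART> attributes disk clear readonly
DISKPART> convert gpt
DISKPART> create partition primary
DISKPART> format fs=ntfs label="LUN001" quick
DISKPART> assign letter=H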

New Simple Volume Wizard

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 61

This last-but-one window allows you to check all the settings for the new partition and click the Finish button to complete the New Simple Volume Wizard.

HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration Page | 421
Depending on the size of the volume (and the Quick Format checkbox), the process of formatting a disk can take from 5-10 seconds up to a couple of minutes.

When the process has finished, the Disk Management window will show the new volume with its drive letter and label name. It will also indicate the size of the volume.

New Simple Volume Ready

The partition is now ready for use

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 62

The partition is now successfully created and ready for use.

In Windows Explorer the volume is now accessible as volume H:\ and the label name, or better the volume name, is LUN001.

Applications that run on the host can now select the volume to save data on.

This concludes this module. We want to add that the process to map a LUN to a Linux-based host is almost identical. The biggest differences are in the discovery of the new LUN in the operating system. The definition of a host is almost the same for a Windows host and a Linux-based host.

Page | 422 HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration
Questions

Questions

1. What are two reasons to create multiple disk domains within an OceanStor device?
2. When would you put multiple LUNs into a single LUN Group?
3. When would you put multiple hosts into a single Host Group?
4. What is the reason to use a Port Group?
5. Describe the difference between a LUN and an NTFS volume.

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 63
: //
tp
ht

Answers:

1. The first reason is to isolate performance characteristics. LUNs are created inside Storage Pools and Storage Pools are created inside a Disk Domain. Therefore a LUN can only benefit from the performance offered by the physical disks inside its Disk Domain. The second reason is to separate hard disks based on type and size into multiple Disk Domains. This offers Disk Domains that differ in disk cost and performance.

2. LUN Groups can be created to group LUNs with dependencies between them together. Mappings, snapshots and replication of LUN Groups will use all LUNs of the LUN Group.

3. Host Groups can be used if clustered hosts should all have access to the same LUN(s).

4. A Port Group can be used to limit or specify the physical ports of a host that should be used as a data path.

5. A LUN is an entity within the OceanStor device, whereas an NTFS volume is defined within the operating system.

HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration Page | 423
Exam Preparation

Exam preparation (1)

1. Which of the following tasks are MANDATORY tasks in the process of mapping a LUN to a host? (check all that apply)
a. Create Disk Domain
b. Create Storage Pool
c. Create LUN Group
d. Create Host
e. Create Host Group
f. Create Port Group
g. Create Mapping View

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 64
: //
tp
ht

Exam preparation (2)

2. Situation:
There are a hundred disks in one single Disk Domain. Just one LUN is created.
Statement 1: Splitting the disk domain up into two 50-disk disk domains does impact the performance of the LUN.
Statement 2: Initializing the disk as GPT is required for volumes bigger than 2 TB.
a. Statement 1 is true, Statement 2 is true
b. Statement 1 is true, Statement 2 is false
c. Statement 1 is false, Statement 2 is true
d. Statement 1 is false, Statement 2 is false

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 65


Answers:

1. A, B, C, D, E, G

2. A

Page | 424 HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration
Summary

Summary

 Disk Domains and Storage Pools are where LUNs are created
• Disk Domains created with different disk types can offer tiering
• Storage Pools have a RAID protection level associated with them
• LUNs inherit the RAID protection from the Storage Pool they live in
• Hosts are created by assigning initiators to them
• Mapping Views create the link between a LUN and a host for data access

 iSCSI and FC connected hosts have a slightly different way of discovering new storage LUNs

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 66

HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration Page | 425
ne
Thank you

www.huawei.com

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 67


Page | 426 HCNA-storage V3 | OHC1109110 Huawei Storage Initial Setup and Configuration
OHC1109111

Huawei Storage Firmware and Features

www.huawei.com
Introduction

In the previous module the initial setup and basic configuration of the OceanStor series were discussed. It included basic tasks like creating and mapping LUNs. In this module we will discuss some of the licensed features Huawei offers. There is no room to discuss all of them, so a limited number is selected. The features HyperSnap, HyperClone, HyperReplication and SmartTier will be discussed, as they are very popular and often used by Huawei customers. The firmware update procedures will also be covered in this module.

Objectives

After this module you will be able to:

 Use the HyperSnap licensed feature to create snapshots
 Use snapshots to recover files
 Use the SmartThin licensed feature to create thin provisioned LUNs
 Use the SmartTier licensed feature to move data between multiple storage tiers
 Explain how the HyperClone feature works
 Understand the HyperReplication working modes
 Describe the Huawei firmware update procedures

Module Contents

1. HyperSnap
2. Use snapshots to recover files
3. Rollback snapshot
4. SmartThin
5. SmartTier
6. HyperClone
7. HyperReplication
8. Firmware updates

HCNA-storage V3 | OHC1109111 Huawei Storage Firmware and Features Page | 429



Page | 430 HCNA-storage V3 | OHC1109111 Huawei Storage Firmware and Features


HyperSnap

The HyperSnap license allows the user of the OceanStor to create snapshots of a LUN. A snapshot is sometimes referred to as a Point-In-Time copy. What it essentially does is create a copy of an existing LUN. The copy of the LUN can be used in a couple of situations:

We can use the copy for recovery. If something goes wrong with the data on the LUN, we can restore the data from the snapshot.

A second application is the use of snapshots to improve the backup strategy. Traditionally a backup administrator makes one backup per day, mostly in the evening. With snapshots we can make multiple copies of a LUN and make backups of the snapshot LUNs.

The good thing about snapshots is that they can be created very quickly (in seconds) and they do not consume a lot of space.

There are two mainstream techniques for making the snapshot: Copy-On-Write and Allocate-On-Write. Huawei uses the Copy-On-Write method.
ht

HyperSnap

Copy-On-Write Snapshot technique

(Diagram: three stages of a Copy-On-Write snapshot. Before Snapshot: the active file system F:/ uses chunks A, B, C, D. After Snapshot: the snapshot file system S:/ shares the same chunks A, B, C, D with F:/. After Block Updated: step 1 copies the original chunk D aside for the snapshot, step 2 writes the new data as D* for F:/.)

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 3


In the above picture the technique is explained. It is important to understand that a LUN, for the OceanStor, is made up of chunks. The volume on the host holds files in the file system directory (F:/ of for instance 50 GB), but for the OceanStor there are "only" chunks. The chunks are represented by the green blocks A, B, C and D.

HCNA-storage V3 | OHC1109111 Huawei Storage Firmware and Features Page | 431

When the snapshot is created in the OceanStor it can be regarded as a new LUN. That new LUN can be mapped to the same host as the original LUN. From that point the host sees the original LUN as F: and the snapshot of the LUN as S:. At this point S: does not consume space, as the snapshot uses blocks A, B, C and D to represent F: as well as S:.

The challenge arises when data on F: gets changed after the snapshot was created. F: should then change, but S: should still represent the data that was on F: at the time the snapshot was created. With Copy-On-Write the first step taken when data changes is to make a copy of the chunk (or block) to preserve the original version of the chunk.

This is represented in the picture with step 1: Copy (D is the copy of the original chunk). Then the new data written to F: can modify the used chunk on F:. In this example the new data changes the content of chunk D. The changed chunk is labeled D*.

At this point F: points to the blocks A, B, C and D*, where S: points to A, B, C and D.

F: contains the current version of the files and S: shows the files that were on F: at the time the snapshot was created. Combined, the space consumed by F: and S: is not 2 x 50 GB. The size of a snapshot is basically equal to the number of changed chunks times the size of a chunk. In this example it would be 50 GB + 1 chunk to store the original LUN plus the snapshot.
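To attach a rough number to this (the chunk size used here is purely an assumption for illustration; the actual snapshot grain depends on the model and configuration): if the grain were 64 KB and 10,000 chunks of the 50 GB LUN changed after the snapshot is activated, the extra space consumed would be about 10,000 x 64 KB ≈ 640 MB, a little over 1% of the LUN size, instead of a second full 50 GB copy.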
tp
ht

A LUN can have multiple snapshots active, and each mapped snapshot can be backed up. This allows the SAN administrator to make multiple backups during the day. Snapshots have little impact on the performance of the OceanStor and it takes a very short time to create a snapshot.

In the Provisioning section of the DeviceManager user interface we already saw how to create a LUN. It is the same area of the user interface where snapshots can be created. Of course this requires that the HyperSnap feature is licensed.

Page | 432 HCNA-storage V3 | OHC1109111 Huawei Storage Firmware and Features


Create Snapshot

Create Snapshot

• Select a LUN

• Click More and select the Create Snapshot option

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 4

To create a Snapshot for a LUN:

1. Open the Provisioning screen and click the LUN button. This will show all LUNs that are created on the storage system.
2. Select a LUN.
3. Click the More menu button and select the Create Snapshot option. This will open the Create Snapshot wizard.

NOTE: It is also possible to select the Create Snapshot option by clicking the right mouse button on the selected LUN.


HCNA-storage V3 | OHC1109111 Huawei Storage Firmware and Features Page | 433


Create Snapshot wizard

• Enter a Name and


Description

ne
m/
co
• Optionally click the

i.
Activate Now checkbox

• Click OK

we
ua
.h
ng
ni
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 5

ar
le
In the Create Snapshot Wizard that is displayed a default name for the snapshot is given. This is
//

a combination of the name of the originating LUN (here ThinLUN) and the creation time of the
:

snapshot (150303231945 or March 3rd of 2015 at 23:19:45)


tp
ht

In the Create Snapshot wizard, the user can modify the name in the Name text box. In the
Description text box, enter the function and properties of the Snapshot. The descriptive
s:

information helps identify the Snapshot.


ce
ur

Optionally check the Activate Now checkbox and click OK to create the Snapshot. In this example
so

the Activate Now check box is checked.


Re

When the checkbox next to Activate Now is ticked the snapshot is active which means that all
ng

changes to the original LUN will be recorded. From the point that the snapshot is active additional
ni

storage capacity should be available to store the copies of the chunks that are changed in the
ar

original LUN. That is why a warning message will appear. This is to make you aware of the fact
Le

that there should be enough free capacity available in the storage pool.
re
Mo

Page | 434 HCNA-storage V3 | OHC1109111 Huawei Storage Firmware and Features


Warning message

• Read the message

e n
m/
co
• Check the checkbox

i.
• Click OK

we
ua
.h
ng
ni
Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 6

ar
le
After reading the message, tick the checkbox and click OK to confirm that you have read the
//

message.
:
tp
ht

Execution Result
s:
ce

The Execution Result box will display that the operation succeeded
ur
so
Re
ng
ni
ar
Le
re

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 7


Mo

The Execution Result box will display that the operation succeeded.

HCNA-storage V3 | OHC1109111 Huawei Storage Firmware and Features Page | 435


Snapshot tab overview

There can be many snapshots created of a LUN and they can be kept active at the same time. To
find out how many snapshots there are for each LUN we have to go to the Provisioning window
and find the LUN section again.

ne
m/
As soon as snapshots are created the bottom part of the LUN window, provided a LUN is selected,

co
will show all existing snapshots of that LUN under the Snapshot tab.

i.
we
Snapshot tab overview

ua
.h
ng
• Look at the created Snapshot

ni
ar
le
: //
tp
ht
s:
ce

Copyright © 2015 Huawei Technologies., Ltd. All rights reserved. Slide 8


ur
so

When the LUN is selected, click the Snapshot tab at the bottom part of the LUN window. Because
Re

the snapshot is set to active while creating, you will see that the Running Status is Active. The
Mapping is Unmapped.
ng
ni

Before the snapshot can be used for file recovery, you need to map the snapshot to a LUN Group.
ar
Le
re
Mo

Page | 436 HCNA-storage V3 | OHC1109111 Huawei Storage Firmware and Features


Use Snapshot to recover files

Use Snapshot to recover files

• Go to the LUN window
• Click the LUN Group tab
• Select a LUN Group
• Click the Add Object button

There are two ways to recover files using Snapshots. One is to map the snapshot to the Operating System and the other is to roll back the snapshot. First we are going to recover files by mapping the snapshot to our Operating System.

Before we can use a snapshot to recover files, it needs to be added to a LUN Group.

• Go to the Provisioning screen and select the LUN button.
• In the LUN window, click the LUN Group tab. Select the LUN Group where the LUN belongs to.
• At the menu bar, click the Add Object button. The Add Object wizard will show.


Add Object Wizard

Add Object Wizard

• Select a Snapshot from the Available Snapshots area and move it to the Selected Snapshots area

The Add Object wizard will now open. Click the Snapshots tab. The snapshot that was created earlier will be available in the Available Snapshots area.

Select the snapshot and click the right triangle to move it to the Selected Snapshots area and click OK.

The Execution Result box will display that the snapshot is successfully added to the LUN Group.

In order to recover the files on the snapshot LUN, open the Server Manager. In the Server Manager, expand the Storage menu and click Disk Management.


Disk Management

Disk Management

• Click Action and select Rescan Disks
• Click the right mouse button and select Online

If there is no new partition available when opening Disk Management, click the Action button at the top menu bar and select the Rescan Disks option. Now the new partition will show, but the status is Offline. To put it online, click the right mouse button and select Online.

Notice that the partition already has a file system assigned to it. You can recognize this because the partition in Disk Management has a blue bar and is a Basic partition. Normally the system automatically assigns a drive letter to the partition, making it a volume for use in the operating system. Copy the missing files from the new snapshot volume to the original volume.


Rollback Snapshot

When all the data on the original LUN is destroyed (or corrupted) it is still possible to copy all the files from the snapshot volume back to the original volume. Especially when there are thousands of files on the volume this is a very lengthy process. But there is a faster (and easier) way: we can use the Rollback function to restore a volume to a previous state.

Rollback Snapshot 1

• Open Disk Management
• Click the right mouse button
• Select Offline

Before we can roll back the Snapshot, we need to take the original partition offline.

• Open the Server Manager and expand the Storage option.
• Click Disk Management. Select the partition, click the right mouse button and select Offline.


Rollback Snapshot 2

1. If the Snapshot is Inactive, click the right mouse button and select Activate
2. If the Snapshot is Active, click the right mouse button and select the Start Rollback option

• Open the Provisioning screen and select the LUN button to go to the LUN window.
• Select the LUN that you want to roll back the Snapshot to. At the bottom, click the Snapshot tab.
• If the Running Status is Inactive, click the right mouse button and select Activate.
• When the Running Status is Active, click the right mouse button and select the Start Rollback option. This will open the Rollback Snapshot window.

NOTE: The Snapshot Running Status must be Active before we can roll back the Snapshot.


Rollback Snapshot 3

• Select the Rollback Speed
• Click OK
• Read the message
• Check the checkbox
• Click OK

The Rollback Snapshot window will open. Take a look at the Rollback Speed.

• Choose one of the available options. These options are: Low, Medium, High and Highest.
• Select a Rollback Speed and click OK.

Rollback Snapshot 4

• Check the Running Status

The system will now roll back the Snapshot. Take a look at the Running Status. Once this is completed, open Disk Management and set the partition back Online. Take a look at the partition and notice that all files have been recovered.



Unmap Snapshot

Whenever the snapshot is no longer needed (the files have been backed up or the restore was completed), that snapshot can be deleted. These are the steps that need to be taken to do this.

Unmap Snapshot

• Click the right mouse button
• Select Offline

After the missing files are copied to the original partition, the Snapshot needs to be unmapped.

• Open Disk Management and select the Snapshot partition.
• Click the right mouse button and select Offline to take the partition offline.


Remove Object

Remove Object

• Go to the LUN window
• Click the LUN Group tab
• Select a LUN Group
• Click the Remove Object button

To completely unmap the snapshot, it needs to be removed from the LUN Group. To remove it from the LUN Group, go to the Provisioning screen and select LUN. In the LUN window, click the LUN Group tab.

Select the LUN Group that the snapshot was added to. At the menu bar, click the Remove Object button. The Remove Object wizard will show.


Remove Object wizard

• Select a Snapshot from the Available Snapshots area and move it to the Selected Snapshots area

The Remove Object wizard will now open. Click the Snapshots tab. The snapshot that was added earlier will be available in the Available Snapshots area. Select the snapshot and click the right triangle to move it to the Selected Snapshots area and click OK.

Warning message

• Read the message
• Check the checkbox
• Click OK

A warning message will show. Read this message. After reading the message, check the checkbox and click OK to confirm that you have read the message.


A snapshot that is deleted cannot be recovered. If for whatever reason the user temporarily does not want to keep track of all changes in the original LUN anymore, the option is there to deactivate the snapshot. At that point the changes will be deleted and no new changes will be tracked. The snapshot itself will still be visible in the Snapshot tab. It can be activated at a later stage if needed.

Execution Result

The Execution Result box will display that the snapshot is successfully removed from the LUN Group.


SmartThin

In traditional storage solutions the administrator would create a LUN on the request of one of his customers. Those customers are his colleagues from departments like Finance, Logistics, HRM etcetera. The customer requests storage capacity and the administrator would provide that storage. A problem with this traditional way of working is that the requested storage must be physically present at the time the LUN is created. At that point there is no user data yet, and maybe it will take the user weeks or months to actually create the user data. All this time the ICT department has invested in hardware (disks, enclosures) and in additional costs like cooling and electrical power. Huawei offers a space-efficient version of a traditional LUN called a Thin LUN. For that the SmartThin license must be purchased.

A SmartThin LUN or ThinLUN will be created without allocating physical storage resources to it (or just a very small part for administrative reasons). However, to the operating system the mapped ThinLUN will appear to be the full size. So a ThinLUN of 100 GB initially consumes no storage capacity until the user writes 100 GB of user data on it.
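As a rough illustration of that behaviour, the sketch below (plain Python with invented names, not Huawei code) models a thin LUN that reports its full virtual size but only draws space from the storage pool when data is written, including the over-commit situation described later in this section.

```python
# Illustrative sketch only (invented names, not OceanStor code): a thin LUN
# reports its full virtual size, but pool capacity is consumed only on write.

class StoragePool:
    def __init__(self, free_gb):
        self.free_gb = free_gb

class ThinLUN:
    def __init__(self, pool, virtual_gb, initially_allocated_gb=0):
        self.pool = pool
        self.virtual_gb = virtual_gb              # size the operating system sees
        self.allocated_gb = 0
        self._allocate(initially_allocated_gb)    # optional up-front reservation

    def _allocate(self, gb):
        if gb > self.pool.free_gb:
            raise RuntimeError("storage pool exhausted")   # the over-commit risk
        self.pool.free_gb -= gb
        self.allocated_gb += gb

    def write_new_data(self, gb):
        # Writing previously unwritten data allocates pool space on demand.
        if self.allocated_gb + gb > self.virtual_gb:
            raise ValueError("write exceeds the reported LUN size")
        self._allocate(gb)

pool = StoragePool(free_gb=100)
lun = ThinLUN(pool, virtual_gb=500)    # host sees 500 GB although only 100 GB is free
lun.write_new_data(80)
print(lun.allocated_gb, pool.free_gb)  # 80 20
lun.write_new_data(200)                # RuntimeError: storage pool exhausted
```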
SmartThin

• Click Create
• Enter a Name and Description for the LUN
• Check the SmartThin Checkbox
• Fill in the Capacity
• Fill in the Initially Allocated Capacity
• Fill in the Quantity
• Select the Owning Storage Pool

• To create a thin provisioned LUN, open the Provisioning screen and click the LUN button.
• Click the Create button. This will open the Create LUN wizard.
• In the Create LUN wizard, enter a Name in the Name text box.
• In the Description text box, enter the function and properties of the LUN. The descriptive information helps identify the LUN.
• Check the Enable checkbox for the SmartThin feature to create a thin provisioned LUN.


• Fill in the Capacity for the LUN. This is what the size will be for the operating system.
• Fill in the Initially Allocated Capacity.

Example: When the total capacity is 50 GB and you fill in the Initially Allocated Capacity with 10 GB, the LUN will take just 10 GB from the Storage Pool. This LUN can grow to 50 GB. The question is maybe why we would allocate physical space at creation time at all. The answer is simple: a Thin LUN can be created that is bigger than the physically available free space in the OceanStor. Suppose we have 100 GB of free space in the storage pool. We want to create a 500 GB LUN here and immediately store 200 GB of files on it. The creation of the Thin LUN will work, as initially we do not need storage capacity. The operating system would then see a volume that can hold 500 GB of files (or so it thinks!). As we start copying 200 GB of files to the new thin provisioned volume, we run out of physical storage.

If we had pre-allocated 200 GB at creation time in the wizard, we would have gotten a message that 200 GB is not physically there. Probably the SAN administrator would have purchased more disks and enclosures before actually creating the Thin LUN.

The next option is to fill in the Quantity. It is possible to create a maximum of 500 LUNs at the same time. If the quantity is 5, the system will create 5 LUNs with the same capacity. Select the Owning Storage Pool that the thin provisioned LUN belongs to.

The final task is to click the OK button.

Execution Result

The Execution Result box will show that the operation succeeded.


The execution result window will then be shown to indicate that the Thin LUN was created successfully.

In the LUN window we can now track how much of the indicated capacity of a Thin LUN is actually backed by physically allocated storage resources. Below is an example of the properties of a Thin LUN with a reported capacity of 5 GB. Allocated is 64 MB, as this is the smallest amount we must “invest” in for a Thin LUN. As data is written to the ThinLUN in the future, the orange section will expand.


SmartTier

SmartTier can leverage two or three storage tiers in a storage pool for data relocation. Data has a lifecycle. As data progresses through its lifecycle, it experiences different levels of activity. When data is just created, it will usually be used a lot. When data ages, it is accessed less often.

SmartTier divides the disks into three storage tiers based on their performance levels. Each storage tier contains only one type of disk and adopts one RAID policy.

• High performance tier (SSDs): applicable to applications with intensive random access requests. Holds hot data: data that is promoted to the high-performance tier with significantly improved read performance.
• Performance tier (SAS disks): applicable to storage applications with moderate access requests. Holds warm data: data that can either be promoted or demoted depending on the precise workload levels and configuration.
• Capacity tier (NL-SAS disks): applicable to storage applications with light access requests. Holds cold data: data that is demoted to a low-performance tier without any application performance reduction.

SmartTier

• SmartTier requires two or three tiers to be functional
• SmartTier monitors usage level on individual chunks of data
• Depending on usage, data is: hot data, warm data or cold data

Parameters to consider:
□ Initial location
□ Data Migration Speed
□ Data Migration Granularity
□ Service Monitoring Period
□ Data Migration Plan
□ SmartTier Policy


If a storage pool only contains one disk type, SmartTier functionality is not available. SmartTier monitors the usage of chunks and not of complete files.

Data (or better, the chunks of data) can be in three different statuses: hot data, warm data and cold data. It is Huawei's algorithm that decides when chunks are hot, warm or cold. Once that decision is made, the SmartTier function can conclude that chunks are not on the appropriate type of disk and relocate the chunk.

When using SmartTier the following parameters must be considered:

• Initial allocation. This is a setting when creating a LUN. The default allocation is to use all available tiers when new data is written to the LUN. Optionally one can decide to have new data written to a specific tier. For instance: if a lot of static data (images, audio files) must be written, it is maybe an idea to force the data directly to the capacity tier, as this is the location where it would end up eventually anyway. This means no high performance space will be used for these static files.

• Data Migration Speed. Relocating chunks has a little bit of impact on the system. Optionally the data migration speed can be changed to a lower priority to even further minimize the impact.

• Data Migration Granularity. Here the size of the chunks that will be monitored and relocated can be changed.

• Service Monitoring Period. A setting that tells the system at what times of the week or day the monitoring of the usage of the chunks should be done. It can help determine busy or quiet periods in the system.

• Data Migration Plan. The option is here to have a manual relocation/migration or to use the best time that the Service Monitoring Period has found.

• SmartTier Policy. This parameter is set on individual LUNs and must be set to enabled. By default the setting is disabled, which means no data relocation will take place. The settings are: Automatic, Highest, Lowest and No relocation. The settings determine what the preferences for the migration will be.


SmartTier Stages

SmartTier Stages

• I/O monitoring: the I/O monitoring module identifies I/O activities on each data block
• Data placement analysis: the data placement analysis module distinguishes between hot data and cold data
• Data relocation: the data relocation module moves data blocks to the most suitable storage tier

The slide shows the three stages of the SmartTier process. The I/O monitoring can be configured using the Service Monitoring Period. That results in the identification of hot, warm and cold chunks that then can be moved.
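Huawei's actual ranking algorithm is proprietary, so the sketch below is only a conceptual Python illustration of the three stages: counters filled in by I/O monitoring, a placement analysis that classifies chunks as hot, warm or cold (the thresholds here are invented), and a relocation step that moves chunks to a more suitable tier.

```python
# Illustrative sketch only: Huawei's real ranking algorithm is proprietary.
# Tiers, from slow to fast: "capacity", "performance", "high_performance".

from dataclasses import dataclass

@dataclass
class Chunk:
    chunk_id: int
    tier: str
    io_count: int = 0        # filled in by the I/O monitoring stage

def classify(chunk, hot_threshold=1000, cold_threshold=10):
    # Data placement analysis (invented thresholds for the example).
    if chunk.io_count >= hot_threshold:
        return "hot"
    if chunk.io_count <= cold_threshold:
        return "cold"
    return "warm"

def plan_relocation(chunks):
    # Decide per chunk whether it sits on an inappropriate disk type.
    moves = []
    for c in chunks:
        status = classify(c)
        if status == "hot" and c.tier != "high_performance":
            moves.append((c, "high_performance"))
        elif status == "cold" and c.tier != "capacity":
            moves.append((c, "capacity"))
    return moves

def relocate(moves):
    # Data relocation: in the real system this copies chunks between disk types.
    for chunk, target in moves:
        chunk.tier = target

chunks = [Chunk(0, "capacity", io_count=5000), Chunk(1, "high_performance", io_count=2)]
relocate(plan_relocation(chunks))
print([(c.chunk_id, c.tier) for c in chunks])   # [(0, 'high_performance'), (1, 'capacity')]
```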
SmartTier Data Relocation

(Diagram: the initial allocation in the storage pool compared with the distribution after data relocation, across the High Performance, Performance and Capacity tiers.)


Remember that, although three separate tiers are indicated in the above picture, all capacity is within the storage pool and user data typically is stored across all disks of the disk domain.

When the SmartTier license is purchased, the user unlocks the storage tiering feature in the Huawei OceanStor V3 storage system. There is a maximum of three tiers per disk domain. Solid State Disks form the High-Performance Tier, SAS disks (both 10k and 15k RPM) form the Performance Tier, and the third tier, called the Capacity Tier, contains NL-SAS disks (7200 RPM).

LUNs are created in storage pools and that is the place to set up SmartTier. For that we go to the Provisioning window and select to configure the Storage Pool.

SmartTier

• Create new Storage Pool
• Fill in a Name and Description
• Set Usage type
• Select Disk Domain
• Select at least two tiers in the Storage Medium section and enter capacity and RAID policy
• Click Set SmartTier Policy

To use the SmartTier feature, we create a new Storage Pool.

To create a Storage Pool, open the Provisioning screen and click the Storage Pool button. Click the Create button. This will open the Create Storage Pool wizard.

In the Create Storage Pool wizard, enter a Name in the Name text box. In the Description text box, enter the function and properties of the Storage Pool. The Usage type is set to Block Storage Service. It is also possible to select File Storage Service. Select the Disk Domain where the Storage Pool needs to be created. This Storage Pool needs at least two types of Storage Medium for the SmartTier feature. Select the available storage types, set the RAID Policy and fill in the capacity of each storage type. Click the Set SmartTier Policy button.


Set SmartTier Policy

Set SmartTier Policy

• Optionally set the Service Monitoring Period
• Set the Data Migration Plan

In the Set SmartTier Policy menu, it is possible to enable the Service Monitoring Period. This feature monitors hotspot data within the set time period. These results can serve as a reference for migration between storage tiers.

Set the Data Migration Plan to Manual or Periodical and click OK. In the Create Storage Pool wizard, click OK.


Execution Result

The Execution Result box will show that the operation succeeded.

The Storage Pool is successfully created. The next step is to create a new LUN in the Storage Pool.

In the LUN window, click the Create button to start the Create LUN wizard.

Create LUN wizard

• Enter a Name and Description for the LUN
• Fill in the Capacity
• Fill in the Quantity
• Select the Owning Storage Pool
• Click the Advanced button

• Enter a Name and Description for the LUN.
• In the Name text box, enter a name for the LUN.
• In the Description text box, enter the function and properties of the LUN. The descriptive information helps identify the LUN.
• Optional: If the SmartThin licensed feature is purchased, it is possible to create thin provisioned LUNs. To enable this feature, check the Enable checkbox. When the SmartThin feature is enabled, the Create LUN wizard will show an option called Initially Allocated Capacity. Example: When the total capacity is 50 GB and you fill in the Initially Allocated Capacity with 10 GB, the LUN will take just 10 GB from the Storage Pool. This LUN can grow until it reaches 50 GB.
• Fill in the Capacity for the LUN. In the dropdown box, select one of the following options: Blocks, MB, GB and TB.
• Fill in the Quantity. It is possible to create a maximum of 500 LUNs at the same time. If the quantity is 5, the system will create 5 LUNs with the same capacity.
• Select an Owning Storage Pool from the dropdown list. The LUN will be created in the Storage Pool that is selected.
• Set the advanced properties for the LUN by clicking the Advanced button.

Create LUN - Advanced Settings

• Click the Tuning tab
• Set SmartTier Policy

• Click the Tuning tab and choose the SmartTier Policy in the SmartTier area. In this example we choose the option Relocate to low-performance tier.
• Finish the Create LUN wizard, map the LUN to the Operating System and add some data on it.


Example:

In this example we chose Relocate to the low-performance tier. So, if we create a LUN on this Storage Pool and use the LUN in any Operating System, all data is initially written to the fastest storage tier. When the SmartTier Policy runs as scheduled and the data is not used very often, it will automatically relocate that data to the slower storage tier to save space in the high performance tier.


Storage Pool Properties

It is possible to change SmartTier settings for the Storage Pool. Go to the Provisioning screen and select the Storage Pool button. In the Storage Pool window, select the storage pool and click the Properties button.

Storage Pool Properties

• Click the SmartTier Policy tab

The Storage Pool properties window will open.

• Click the SmartTier Policy tab. It is possible to set the Cache Mode, Service Monitoring Period and Data Migration Plan.
• Change the settings based on your service needs and click Apply, followed by clicking OK.
• The Execution Result window will appear showing that the changes were made successfully.

Execution Result

The Execution Result box will display that the operation succeeded.


SmartTier Monitoring

SmartTier Monitoring

• Click the LUN
• Click the Properties button

It is possible to monitor the SmartTier process in two different ways. The first one is to go to the properties window of the LUN. Go to the Provisioning screen and click the LUN button. In the LUN window, select the LUN that SmartTier is configured on and click the Properties button.


LUN Properties

LUN Properties

• Click the SmartTier tab

When the Properties window is opened, click the SmartTier tab. It is possible to select a SmartTier Policy. In this example we choose Relocate to low-performance tier. You can monitor the Capacity Distribution between the storage tiers.

NOTE: This percentage will not update automatically. The next slide shows a live view between two storage tiers.


Storage Pool Properties

Storage Pool Properties

• Click the SmartTier Status tab

It is possible to get a live view of the SmartTier feature. Open the Provisioning screen and click the Storage Pool button. In the Storage Pool window, select the Storage Pool that the LUN belongs to. Click the Properties button. When the Storage Pool properties window is opened, click the SmartTier Status tab. In the Status area, you can monitor the following information:

• Feature Status: should be Active, otherwise the data will not move between the available storage tiers.
• Migration Status: when data is being relocated, it shows Relocating.
• To Be Moved Up: the amount of data that will move from a lower storage tier to a higher storage tier, for example from SAS disks to SSDs.
• To Be Moved Down: the amount of data that will move from a higher storage tier to a lower storage tier, for example from SSDs to SAS disks.
• Estimated Duration: the time before the data migration is completed.

In the Storage Tier Information area, you will see that the amount of data grows in the other storage tier.


HyperClone

HyperClone

(Diagram: the clone lifecycle. Clone creation is followed by synchronization to update the copy, splitting to create an available copy, reverse synchronization to restore data on the primary LUN, and automatic splitting of the pair after the reverse synchronization.)

The clone feature allows you to obtain full copies of LUNs without interrupting host services. These copies apply to scenarios such as data backup and restoration, application testing, and data analysis.

Synchronization: Data is copied from the primary LUN to a secondary LUN. Then dual write is performed to the primary LUN and secondary LUN.


Synchronization

Synchronization

(Diagram: Case 1 shows a full copy performed in the initial synchronization; Case 2 shows an incremental copy performed in a synchronization after a split. After the synchronization, in a data write scenario the same data is written to both the primary and the secondary LUN (dual write).)

Split

Split

(Diagram: splitting a pair and independent use of the secondary LUN.)

• After a pair is split, dual write is no longer implemented, and the secondary LUN stores a copy of all data on the primary LUN at the time when the pair was split.
• The secondary LUN can be accessed independently without affecting the primary LUN.
• Subsequent data changes made to the primary and secondary LUNs are recorded by the DCL (Data Change Log) for the incremental copy performed in a later synchronization or reverse synchronization.
• Multiple pairs can be split in batches as long as each pair belongs to a unique clone.


Splitting: After a synchronization is complete, the pair can be split at a certain point in time. Then the secondary LUN becomes an available copy of the primary LUN and stores all the data on the primary LUN at the time when the pair was split. After a pair is split, the secondary LUN is accessible to hosts, allowing hosts to access data identical to that on the primary LUN at the splitting time point without affecting the performance of the primary LUN. After a pair is split, a synchronization or reverse synchronization can be performed again between the primary LUN and the secondary LUN.
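The following minimal Python sketch (structure and names are illustrative, not Huawei's implementation) ties the pieces together: dual write while the pair is synchronized, a Data Change Log while it is split, and an incremental copy when a synchronization is started again.

```python
# Illustrative sketch of a clone pair (not Huawei's implementation): dual write
# while synchronized, DCL tracking while split, incremental copy on resync.

class ClonePair:
    def __init__(self, primary):
        self.primary = dict(primary)
        self.secondary = dict(primary)   # initial synchronization: full copy
        self.split = False
        self.dcl = set()                 # Data Change Log: changed block ids

    def host_write(self, block_id, data):
        self.primary[block_id] = data
        if self.split:
            self.dcl.add(block_id)            # remember the change for a later resync
        else:
            self.secondary[block_id] = data   # dual write to both LUNs

    def split_pair(self):
        self.split = True                # secondary becomes a point-in-time copy

    def synchronize(self):
        # Incremental copy: only blocks recorded in the DCL are copied again.
        for block_id in self.dcl:
            self.secondary[block_id] = self.primary[block_id]
        self.dcl.clear()
        self.split = False

pair = ClonePair({0: "A", 1: "B", 2: "C"})
pair.split_pair()
pair.host_write(0, "D")
print(pair.secondary[0], pair.dcl)   # A {0}  -> the copy is unchanged, change logged
pair.synchronize()
print(pair.secondary[0])             # D      -> incremental copy applied
```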

Reverse Synchronization

Reverse Synchronization (1)

(Diagram: a reverse synchronization copies data from the secondary LUN back to the primary LUN. Step 1: other pairs of the clone are automatically split, if any.)


Reverse Synchronization (2a)

Host I/O processing during the reverse synchronization.
Case 1: If the data block to be accessed has already been reverse synchronized, the primary LUN is directly accessed.

Reverse Synchronization (2b)

Case 2: If the data block to be accessed is not yet reverse synchronized:
• In terms of a read request, reverse synchronization is completed after the secondary LUN is read.
• In terms of a write request, reverse synchronization is completed before new data is written to the primary LUN.


Reverse Synchronization (3)

(Diagram: the reverse synchronization sequence.)
1. Other pairs are automatically split, if any.
2. Reverse synchronization is executed for the selected pair (incremental copy).
3. After the reverse synchronization is complete, the pair is automatically split.

Reverse synchronization: To restore data on the primary LUN, a reverse synchronization to copy data from the secondary LUN to the primary LUN can be executed. After the reverse synchronization is complete, the pair is automatically split.

During a synchronization or reverse synchronization, hosts are still allowed to access the primary LUN, ensuring service continuity.


HyperReplication: Synchronous mode

Replication is a feature associated with disaster recovery. Making backups is sometimes not enough when the requirements are higher, for instance when identical copies of the data should exist in a remote site. Replication is also an option when restore times are minutes and not hours, like with traditional tape backups.

Replication has the goal of having a standby copy of the data ready to be used in case of a serious disaster. Examples of such a disaster would be fires, floods or earthquakes.

Two types of replication exist: Synchronous and Asynchronous mode.

HyperReplication Synchronous mode

1. I/O from host stored at site A
2. Data across intersite link to site B
3. Data “stored” on site B
4. Acknowledgment across link
5. Host receives message: I/O complete

A synchronous remote replication session replicates data in real time from the primary storage system to the secondary storage system. The characteristics of synchronous remote replication are as follows:

- After receiving a write I/O request from a host, the primary storage system sends the request to the primary and secondary LUNs.
- The data write result is returned to the host only after the data is written to both primary and secondary LUNs. However, if data fails to be written to the secondary LUN, the secondary LUN returns a message indicating data write failure to the primary LUN. The controller changes the dual-write mode to single-write mode at the same time. The remote replication task enters the abnormal state.

After a synchronous remote replication pair relationship is set up between the primary LUN and the secondary LUN, a manually triggered synchronization needs to be performed so that the two LUNs have consistent data. Every time a host writes data to the storage system after the synchronization, the data is copied from the primary LUN to the secondary LUN in real time.
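A minimal sketch of the synchronous write path described above (illustrative Python, not Huawei code): the host only receives the I/O-complete message after both the local and the remote storage system have stored the data.

```python
# Illustrative sketch of a synchronous replication write (not Huawei code).

class Site:
    def __init__(self, name):
        self.name = name
        self.blocks = {}

    def store(self, block_id, data):
        self.blocks[block_id] = data
        return True            # assume the write succeeds in this sketch

def synchronous_write(local, remote, block_id, data):
    if not local.store(block_id, data):          # 1. I/O stored at site A
        return "write failed on primary"
    if not remote.store(block_id, data):         # 2-4. sent to site B, stored, acknowledged
        # A real system would drop to single-write mode and mark the task abnormal.
        return "write failed on secondary"
    return "I/O complete"                        # 5. host receives the acknowledgement

site_a, site_b = Site("A"), Site("B")
print(synchronous_write(site_a, site_b, 0, "data"))   # I/O complete
print(site_b.blocks)                                  # {0: 'data'} -> identical remote copy
```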

HyperReplication: Asynchronous mode

HyperReplication Asynchronous mode

1. I/O from host stored at site A
2. Host receives message: I/O complete
3. Data via link to site B
4. Data stored at remote site
5. Acknowledgment across link

An asynchronous remote replication session periodically replicates data from the primary storage system to the secondary storage system. The characteristics of asynchronous remote replication are as follows:

- Asynchronous remote replication relies on snapshot technology. A snapshot is a point-in-time copy of source data.
- When a host writes data to a primary LUN, the primary storage system returns a response indicating a successful write to the host as soon as the primary LUN returns a response indicating a successful write.
- Data synchronization is triggered by a user manually or by the system periodically to keep data consistent between the primary LUN and the secondary LUN.

After an asynchronous remote replication relationship is set up between a primary LUN and a secondary LUN, an initial synchronization is performed to copy all of the data from the primary LUN to the secondary LUN so that the two LUNs have consistent data. After the initial synchronization is complete, the storage system processes host writes as follows:

When receiving a host write, the primary storage system sends the data to the primary LUN. As soon as the primary LUN returns a response indicating a successful write, the primary storage system returns a response indicating a successful write to the host. At the scheduled synchronization time, new data on the primary LUN is copied to the secondary LUN.
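For contrast, here is the asynchronous variant in the same illustrative style (not Huawei code): the host is acknowledged as soon as the primary LUN has stored the data, and changed blocks are only copied to the remote site at the scheduled synchronization.

```python
# Illustrative sketch of asynchronous replication (not Huawei code): immediate
# local acknowledgement, periodic copy of changed blocks to the remote site.

class AsyncPair:
    def __init__(self):
        self.primary = {}
        self.secondary = {}
        self.pending = set()        # blocks changed since the last synchronization

    def host_write(self, block_id, data):
        self.primary[block_id] = data
        self.pending.add(block_id)
        return "I/O complete"       # acknowledged before the remote copy exists

    def periodic_sync(self):
        # Triggered manually by a user or periodically by the system schedule.
        for block_id in self.pending:
            self.secondary[block_id] = self.primary[block_id]
        self.pending.clear()

pair = AsyncPair()
print(pair.host_write(0, "data"))   # I/O complete (secondary not yet updated)
print(pair.secondary)               # {}
pair.periodic_sync()
print(pair.secondary)               # {0: 'data'}
```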

In the situation where the primary site is destroyed, the administrator should initiate a failover. This essentially means that the replicated LUNs on the remote site will be activated. At that point hosts on the remote site can pick up the data again and business can be continued. Of course the hosts at the remote site must be running the same applications as the local hosts did.


Firmware Updates

With almost all products Huawei will add new features and/or improve the current features. This is done using so-called firmware updates. In most cases this is a process that is guided by a Huawei engineer or by the Huawei support team.

The process itself is almost fully automated and to perform a firmware upgrade two things are needed:

1. The OceanStor Toolkit
2. The actual new firmware.

In the next section we will briefly explain the procedure. Here we assume that the OceanStor Toolkit is available and the firmware is accessible. Firmware is a special file that can be downloaded from the support site. The format of the firmware file is often a file with the extension .TGZ. This means that it is a Linux based compressed file (TGZ = Tar Gzipped File). For some products it is not even necessary to physically download the firmware file, as the upgrade process will download and install it as part of the upgrade.
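As a small aside (not part of the official upgrade procedure): because a .tgz package is simply a gzip-compressed tar archive, its contents can be listed without extracting it. The file name in the snippet below is just a placeholder.

```python
# Aside only: list the contents of a .tgz (gzip-compressed tar) package.
# The file name is a placeholder, not an actual Huawei package name.
import tarfile

with tarfile.open("firmware_package.tgz", "r:gz") as archive:
    for member in archive.getmembers():
        print(member.name, member.size)
```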

The first step is to start the OceanStor Toolkit.

Firmware updates

• Open the OceanStor Toolkit
• Click the ToolCase tab
• Click Upgrade from the left menu
• Click the Upgrade button

Once the OceanStor Toolkit is started, we need to download the Upgrade software features from the ToolStore. Once these are installed, go back to the ToolCase tab. In the left hand menu click Upgrade. At the right hand side, an Upgrade button will show. Click the Upgrade button. The Upgrade page will open.


Upgrade page

• Click the Add Device button

On the Upgrade page, we need to add the device. To do that, click the Add Device button. This will open the Add Device wizard.

Add device wizard (1)

• Click the Add Device text

The Add Device wizard is opened. In this example there is no device available. Read the text and click the highlighted Add Device text.


Add device wizard (2)

• Fill in the IP address
• Click Next

Enter the IP address of the device that you want to add. Note that it is also possible to specify an IP segment and to select a proxy. Once the IP address is added, click Next. In the next screen we need to add some additional login information.

Add device wizard (3)

• Enter the Username
• Enter the Password
• Fill in the Port number

In this window we need to add the login information for the storage device. Fill in the Username, Password and Port number. Click Finish.


Add device wizard (4)

• The device is successfully added
• Select the array

The storage device is now successfully added. Now we can choose an array that we want to upgrade. Select the checkbox that is located in front of the device model and click Next.

Add device wizard (5)

• In Select Upgrade Package, click Browse

We need to configure the Upgrade Settings. First we click the Browse button in the Select Upgrade Package area. The upgrade package has a .tgz file extension and is downloadable from the Huawei support website.


Add device wizard (6)

• Choose a backup path
• Click Browse

Select a backup data path. Click the Browse button. Once the correct backup location is selected, click the Save button.

Add device wizard (7)

• Select the Upgrade Mode

After selecting the upgrade package and data backup path, we need to select an upgrade mode. It is possible to do the upgrade while the system is online, but also to do the upgrade offline.


Online upgrade

Online upgrade features high reliability and availability without service interruption. It is applicable to the scenario where services cannot be interrupted. Before starting an online upgrade, ensure that the upgrade package supports online upgrade from the current version to the target version.

During an online upgrade, the controllers are upgraded in sequence. In the dual-controller scenario, the secondary controller is upgraded first, and then the primary controller is upgraded. In the multi-controller scenario, one controller (the experimental controller) is upgraded first. Then, all the controllers (excluding the experimental controller) on the peer plane of the primary controller are upgraded. After that, all controllers (excluding the experimental controller) on the plane where the primary controller resides are upgraded. Before controllers on one plane are upgraded, the system switches services from these controllers to the controllers on the peer plane, and then the system automatically detects firmware to be upgraded and upgrades it. After these controllers are upgraded, the system restarts them. After they are powered on, services that belong to them are switched back to them. Then, the system upgrades the controllers on the other plane in the same way.

Offline upgrade

Offline upgrade requires users to stop host applications before upgrading controller software. During an offline upgrade, the primary and secondary controllers are upgraded simultaneously. Therefore, the upgrade period is much shorter. Because all host services are stopped before the upgrade, data loss and service interruption risks are reduced during the upgrade.

Add device wizard (8)

• Check the Enable professional mode checkbox


In this example we choose to enable the professional mode. Check the checkbox and click the Finish button.

In the Professional Mode, if a node fails to be upgraded, the cluster upgrade is suspended. The operators then have three options: Roll back, Retry, and Continue. After the upgrade is suspended, Huawei R&D engineers need to locate the causes of the upgrade failure. Then the R&D engineers instruct the operators to roll back the upgrade, perform the upgrade again, or ignore the node upgrade failure.

Upgrade page

• Select the storage array
• Click the Perform Upgrade button
• Check the checkbox and click OK

Select the storage array that needs to be updated. Check the checkbox in front of the device name. Once the storage array is selected, click the Perform Upgrade button. The Upgrade Confirm window will show. Check the settings and check the checkbox to confirm that you have read the previous information and understood the consequences of the operation. Click OK. The upgrade process will now start.


Upgrade process (1)

• Monitor the Upgrade Package Import process

The system will now automatically import the upgrade package. You can monitor the process at the bottom part of the screen. When a step is completed, the system will automatically go to the next tab. The progress bar at the top will finally show five green dots when the upgrade process is completed.

Upgrade process (2)

• Monitor the Pre-Upgrade Check

After the upgrade package is imported, the system automatically starts to perform a pre-upgrade check.

Upgrade process (3)

• Monitor the Data Backup process

The system will now back up the controller data.

Upgrade process (4)

• Monitor the Upgrade process

The system is now executing the upgrade process.

Upgrade process (5)

• Monitor the Post-Upgrade Verification

Upgrade process (6)

• Monitor that the upgrade process has succeeded

Take a look at the progress bar and notice that there are five green dots showing that the status is Succeeded.


Questions

Questions

1. Explain the difference between a snapshot and a clone
2. What two methods can be used to restore data using a snapshot?
3. How much storage capacity is consumed when a Thin LUN of 500 GB is created?
4. Name the three stages of the SmartTier process
5. Describe what is meant by synchronous replication

Answers:

1. Snapshots initially do not consume space. Snapshot size is determined by the number of changed blocks. Clones are identical copies of a LUN and consume just as much space.
2. Method one is side-by-side recovery: a mapping view of the snapshot LUN is created for the host that “sees” the original LUN. Data can then be copied on the operating system level. Method two is the Rollback function. Here the volume is almost instantly restored to the state of the snapshot LUN.
3. At creation time a single block of 64 MB is created for administrative purposes. If we ignore that small amount, then a Thin LUN does not consume space until user data is written.
4. Stage 1: I/O monitoring. Stage 2: Data placement analysis. Stage 3: Data relocation.
5. Synchronous replication: the host first writes to its local LUN. This will be stored in the local OceanStor. No confirmation of the write will be given yet. The second step is to copy the written data to the remote site. There the data will be stored. A confirmation will be sent from the remote OceanStor to the local OceanStor. Then, finally, the host will receive an acknowledgement of the write.


Exam Preparation

Exam preparation (1)

1. Which of the licensed features can be described as: the almost instant creation of a full copy of an active LUN without impacting the access to the active LUN?
a. HyperSnap
b. HyperMirror
c. HyperClone
d. HyperReplication

Exam preparation (2)

2. Which of the following statements about the SmartTier feature is true? (check all that apply)
a. Saves space in the disk domain
b. Lowers the cost of storing aged data
c. Relocating data is a heavy burden on the system and should only be executed during quiet times of the system
d. SmartTier needs all three tiers to be filled with disks
e. SmartTier works on individual LUNs

Answers:

1. C
2. B, E


Summary

Summary

• HyperSnap is the Copy-On-Write snapshot implementation.
• HyperClone is the instant creation of a synchronized full copy of a LUN.
• SmartTier is the feature that relocates chunks from disks in one tier to disks of another tier. The goal is to store chunks on the appropriate disk types.
• SmartThin creates LUNs that only consume space when actual user data is written to them.
• HyperReplication is offered in synchronous and asynchronous modes and is a disaster recovery feature that offers a near-identical copy of a LUN on a remote OceanStor.


Thank you

www.huawei.com