
CCN Info Center

Email: CoreNetworkDoc@huawei.com
Copyright © Huawei Technologies Co., Ltd. 2017. All rights reserved.
Contents
01 Overview
The SoftCOM Strategy
Standard NFV Network
Huawei NFV Solution
CloudCore
CloudEdge

02 Architecture
Cloud-based Architecture Transition
Phase 1: NFV
Phase 2: NFC
03 Deployment
VNF Co-deployment
Automated VNF Deployment
One-Click Deployment Tool

04 Maintenance
Unified O&M
Intuitive NFV Topology Views
Auto Scaling
High VM Availability

05 Troubleshooting
Cross-Layer Alarm Correlation
Failure Detection and Recovery
IP Packet Coloring

This brochure may contain predictive statements, including but not limited to statements regarding future product
combinations and new technologies. There are many unknown factors that may cause actual future conditions to differ
from what the brochure states or implies. Therefore, this brochure is for reference only and does not constitute any
warranty or commitment. The statements in this brochure are subject to change without notice.
NFV Fundamentals
Network Functions Virtualization

The SoftCOM Strategy

In this era of information, user behavior and information services are undergoing great and rapid changes. To keep pace with these changes, Huawei has put forward the SoftCOM strategy to assist carriers with the transformation of their network structures. What is SoftCOM and how does it re-shape the telecom network?

Background

The two core concepts in the information age are "data-based intelligence" and "ICT gives users access to any data, anytime, anywhere", both essential to the Real Time, On Demand, All Online, DIY, and Social (ROADS) experience. To achieve this, carriers need to pay closer attention to end users and ensure that their agile networks help transform their marketing, R&D, operations, and service models.

SoftCOM, short for software-defined telecom, is Huawei's 10-year strategy for transforming the network architecture, helping carriers seize opportunities and embrace the challenges of the information era. SoftCOM also ensures the ROADS experience for end users by using cloud computing, the software-defined network (SDN)[1], network functions virtualization (NFV), and Internet-based operations to build a layered and agile network that achieves optimum network and IT resource usage. A service-centric ICT infrastructure is constructed using the distributed data center model to simplify interconnections and enable elastic resource scheduling.

[Figure: Seizing Opportunities, Embracing Challenges — the six clouds (Cloud Operation, Cloud FBB, Cloud MBB, Cloud Core, Cloud Edge, Cloud Service) interconnected by the Agile Network over a shared hardware pool.]

Technical Solution

With the SoftCOM concept, Huawei will help carriers leverage SDN technology and construct an Agile Network for the following six service-centric infrastructure clouds:

CloudOperation: the next-generation operation system that functions as the BSS and OSS and supports on-demand, automatic resource orchestration. It smartly schedules resources through big data analysis. Additionally, it opens service interfaces to third-party ICT players, accelerating service innovation.

CloudService: an open cloud service facility that enables carriers to swiftly provide digital services to customers, including CRBT, MMS, VAS, videos, games, e-commerce, Internet apps, smart city apps, IoT apps, and digital network apps, with the aim to boost digital service innovation and accelerate their transformation and monetization.

CloudCore: a new core network solution reconstructed using NFV and SDN technologies. CloudCore includes a variety of virtual functional units which can be elastically scaled and flexibly deployed, such as CloudIMS, CloudSPS, CloudDB, CloudPCRF, and CloudSBC.

CloudEdge: a new-generation mobile broadband solution developed using NFV technology over the service-oriented architecture (SOA) and cloud-based architecture. CloudEPC (evolved packet core), CloudMSE (multimedia service engine), and CloudUIC (unified intelligence controller) jointly form CloudEdge, where data of various access types converge.

CloudFBB: an open access cloud facility re-shaped using NFV and SDN technologies. It provides wired access, such as DSL and OLT.

CloudMBB: a radio access network reconstructed using NFV and SDN technologies. It provides wireless access services.

Agile Network: interconnects the ICT networks which users access. The SDN-driven Agile Network schedules network resources on demand and in real time.

[1] SDN: software-defined network. This network architecture separates the control plane from the data plane so that traffic is centrally and flexibly controlled. SDN also programs network behaviors using well-defined interfaces.

NFV versus SDN: NFV was created by a consortium of service providers and is an innovation in telecom equipment, whereas SDN was put forward by researchers and data center architects and is an innovation in the network architecture. NFV decouples network functions from proprietary hardware devices so that they can run as software, accelerating service innovation and provisioning. SDN separates the control plane from the data plane and programs network behaviors for centralized network management and optimal network performance. NFV focuses on network layers 4-7 while SDN focuses on layers 1-3. NFV is highly complementary to SDN, but not dependent on it (or vice versa): SDN software can run in any NFV environment, and NFV can be implemented without SDN, although the two can be combined to potentially provide a greater solution. When NFV and SDN work together, network configurations are automated, services are provided on demand, and QoS can be guaranteed from end to end.

Overview 02

Standard NFV Network

What is NFV? How do international standards organizations define it?

Background

NFV decouples network functions from dedicated hardware and deploys these network functions on general-purpose x86 servers, storage, and network devices. On an NFV network, hardware resources are abstracted into pools and carriers can rapidly roll out services using the resources from these pools. Additionally, an NFV network allows for elastic scaling and automated O&M.

Technical Solution

ETSI-standard NFV architecture:

[Figure: ETSI NFV reference architecture — OSS/BSS and EMs 1-3 on top; VNFs 1-3 running over the NFVI (a virtualization layer abstracting compute, storage, and network hardware into virtual resources); and MANO comprising the NFV Orchestrator, VNF Manager(s), and Virtualized Infrastructure Manager(s), connected through the Os-Ma, Ve-Vnfm, Vn-Nf, Nf-Vi, Vi-Vnfm, Or-Vnfm, Or-Vi, and Vi-Ha reference points (execution, main NFV, and other reference points).]

NFV components:

OSS/BSS: network management systems offered by service providers. OSS and BSS are not defined in the standard NFV architecture, but MANO and VNFs must provide interfaces to connect to them.

VNF: the virtual network functions, which are hosted on VMs and provide services in software.

NFVI: the NFV infrastructure, which provides a running environment for VNFs. It includes hardware devices that provide compute, network, and storage resources and a virtualization layer that uses a hypervisor[1] to abstract hardware resources into virtual compute, storage, and network resource pools.

MANO: management and orchestration, which centrally manages and schedules NFVI resources and VNFs. MANO is composed of the:

VIM: virtualized infrastructure manager, which discovers, manages, and allocates virtual resources, and handles faults.

VNFM: VNF manager, which manages VNF life cycles. It on-boards, instantiates, scales, updates, and terminates VNFs.

NFVO: NFV orchestrator, which orchestrates, schedules, and manages all the software resources and network services on an NFV network.

NFV interfaces:

Vi-Ha: interfaces the virtualization layer with COTS hardware. The virtualization layer obtains hardware resources for VMs over this interface.

Vn-Nf: interfaces VMs with the NFVI to ensure that VMs can be deployed on the NFVI and fulfill the performance, reliability, and scalability requirements. The NFVI needs to meet the VM operating system compatibility requirements.

Nf-Vi: interfaces virtualization layer management software with the NFVI to manage the virtual compute, storage, and network resources on the NFVI. Also used for configuration of, and connections in, the basic virtual infrastructure, monitoring of system resource usage and performance, and fault management.

Os-Ma: used to manage network services and VNF life cycles.

Ve-Vnfm: interfaces the VNFM with VNFs to manage the life cycles, configuration, performance, and faults of VNFs.

Vi-Vnfm: interfaces the service application management system or service orchestration system with virtualization layer management software.

Or-Vnfm: used to issue configurations to the VNFM, interface the orchestrator with the VNFM, allocate NFVI resources to a VNF, and exchange VNF information.

Or-Vi: used to issue requests for reserving and allocating the resources required by the orchestrator and exchange virtual hardware resource configuration and status information.

[1] Hypervisor: the virtualization software layer between physical servers and operating systems. It takes the role of the virtual machine monitor (VMM) and allows multiple operating systems and applications to share the hardware. Mainstream hypervisors include open-source KVM and Xen, Microsoft Hyper-V, and VMware ESXi.
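The split of responsibilities among the MANO entities above can be sketched as a toy model. This is purely illustrative: the class and method names are invented for this sketch, and real MANO implementations communicate over the Or-Vnfm, Vi-Vnfm, and Nf-Vi reference points rather than direct method calls.

```python
from dataclasses import dataclass, field

@dataclass
class VIM:
    """Virtualized infrastructure manager: allocates virtual resources from the NFVI."""
    free_vcpus: int = 64

    def allocate_vm(self, vcpus: int) -> dict:
        if vcpus > self.free_vcpus:
            raise RuntimeError("NFVI resource pool exhausted")
        self.free_vcpus -= vcpus
        return {"vcpus": vcpus}

@dataclass
class VNFM:
    """VNF manager: handles VNF life cycles (instantiate, scale, terminate)."""
    vim: VIM
    vnfs: dict = field(default_factory=dict)

    def instantiate(self, name: str, vcpus: int) -> None:
        # Corresponds loosely to requesting resources over Vi-Vnfm.
        self.vnfs[name] = self.vim.allocate_vm(vcpus)

@dataclass
class NFVO:
    """NFV orchestrator: orchestrates a network service across its VNFs."""
    vnfm: VNFM

    def deploy_service(self, service: dict) -> None:
        # Corresponds loosely to instructing the VNFM over Or-Vnfm.
        for vnf_name, vcpus in service.items():
            self.vnfm.instantiate(vnf_name, vcpus)

vim = VIM()
nfvo = NFVO(VNFM(vim))
nfvo.deploy_service({"CloudIMS": 8, "CloudSBC": 4})
print(sorted(nfvo.vnfm.vnfs), vim.free_vcpus)  # ['CloudIMS', 'CloudSBC'] 52
```

The point of the layering is visible even in the toy: the orchestrator never touches infrastructure directly; it only talks to the VNFM, which in turn draws resources through the VIM.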


Huawei NFV Solution

How does Huawei optimize the standard NFV architecture for commercial use?

Background

As an open-source project in which developers worldwide participate, OpenStack[1] aims to build a versatile cloud platform that is easy to deploy and scale. This platform is oriented to both public[2] and private clouds[3] and provides an open and standard cloud operating system for data centers. Huawei bases its FusionSphere OpenStack solution on the native OpenStack architecture and tailors it to suit NFV needs. With FusionSphere OpenStack, the E9000 or RH2288 servers, and telecom services, Huawei can deliver an NFV solution from end to end.

Technical Solution

In Huawei's NFV solution, FusionSphere OpenStack is the virtualization software and also acts as the VIM. FusionSphere OpenStack can virtualize compute, storage, and network resources and centrally manage, monitor, and optimize these resources.

[Figure: The standard NFV architecture with Huawei products in place — KVM, FusionStorage, and FusionNetwork form the virtualization layer of the NFVI, while FusionSphere OpenStack and FusionSphere OpenStack OM act as the VIM.]

The FusionSphere OpenStack solution consists of KVM[4], FusionStorage, FusionNetwork, FusionSphere OpenStack, and FusionSphere OpenStack OM.

Virtualization Layer

KVM: virtualizes hardware resources. Huawei's FusionSphere OpenStack solution uses KVM and enhances its performance and reliability.

FusionStorage: distributed storage software which is able to virtualize both storage and compute resources. If deployed on x86 servers, FusionStorage can consolidate all local disks on the servers into a virtual storage resource pool to provide the block storage function.

FusionNetwork: a transition solution from traditional virtual switching to forward-thinking software-defined networking. FusionNetwork uses Virtual Extensible LAN (VXLAN) Layer 2 tunnel encapsulation technologies and Huawei SDN controllers to automatically deploy and configure SDN networks, satisfy SLAs, and isolate multiple tenants.

VIM

FusionSphere OpenStack: centrally schedules and manages virtual compute, storage, and network resources over unified RESTful interfaces. It also reduces OPEX and provides high system security and reliability, helping telecom carriers build secure, energy-saving data centers.

FusionSphere OpenStack OM: FusionSphere OpenStack's O&M component. It monitors and manages hardware resources and FusionSphere OpenStack, automatically provisions services, performs O&M, and provides a management portal.

[1] OpenStack: OpenStack began as a joint project of Rackspace Hosting and NASA and is released under the terms of the Apache license. OpenStack is a free and open-source project for managing virtual resources in data centers. OpenStack has sub-projects to develop different services, for example, Nova (compute), Swift (object storage), Glance (image), Neutron (network), Cinder (block storage), Keystone (authentication), and Horizon (web-based user interface).

[2] Public cloud: a cloud based on the standard cloud computing model, in which service providers make resources available to the general public over the Internet. Public cloud services may be free or offered in a pay-per-usage model. On a public cloud, resources are shared among tenants, who can enjoy advanced IT services without installing any devices or arranging device management personnel.

[3] Private cloud: a cloud that delivers similar advantages to public clouds, including scalability and self-service, but through a proprietary architecture. Unlike public clouds, a private cloud is dedicated to a single organization. Private clouds give customers direct control over their data and applications, thereby ensuring security and performance.

[4] KVM: kernel-based virtual machine, a free, open-source virtualization solution for the Linux kernel. It is a mainstream hypervisor in the industry.
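The VXLAN tunnel encapsulation that FusionNetwork relies on can be sketched in a few lines. This shows only the on-the-wire header format defined in RFC 7348 (an 8-byte header carried over UDP, with a 24-bit VXLAN Network Identifier that keeps tenants isolated); it is not FusionNetwork code, which does this work inside the vSwitch under SDN-controller direction.

```python
VXLAN_FLAG_VNI_VALID = 0x08  # the "I" flag: the 24-bit VNI field is valid

def vxlan_encap(vni: int, inner_frame: bytes) -> bytes:
    """Prepend an 8-byte VXLAN header carrying the given 24-bit VNI."""
    if not 0 <= vni < 1 << 24:
        raise ValueError("VNI must fit in 24 bits")
    header = (bytes([VXLAN_FLAG_VNI_VALID, 0, 0, 0])  # flags + 3 reserved bytes
              + vni.to_bytes(3, "big")                # 24-bit VNI (tenant ID)
              + b"\x00")                              # final reserved byte
    return header + inner_frame

def vxlan_decap(packet: bytes) -> tuple[int, bytes]:
    """Return (VNI, inner Ethernet frame); reject packets without the VNI flag."""
    if packet[0] & VXLAN_FLAG_VNI_VALID == 0:
        raise ValueError("VNI-valid flag not set")
    return int.from_bytes(packet[4:7], "big"), packet[8:]

pkt = vxlan_encap(5001, b"\xaa" * 14)       # wrap a 14-byte toy Ethernet frame
print(len(pkt), vxlan_decap(pkt)[0])        # 22 5001
```

Because the VNI is 24 bits wide, a VXLAN fabric can distinguish about 16 million tenants, versus the 4094 usable IDs of traditional VLANs.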


CloudCore

How and which core network elements have been cloudified?

Background

CloudCore is one of the "six clouds" defined in the SoftCOM strategy. Core network cloudification is the third revolutionary change following digitalization and the IP transformation. Virtualization technology is used to reconstruct the silo network: voice service entities are separated from dedicated hardware and rolled out on COTS hardware.

Technical Solution

CloudCore complies with cloud-based software architecture and design requirements, and includes fully cloudified products like CloudIMS, CloudDB, CloudSBC, CloudPCRF, and CloudSPS. CloudCore can be deployed on COTS hardware and a cloud OS[1] from your choice of vendor.

[Figure: CloudCore vertical and horizontal integration — CloudIMS, CloudDB, CloudSBC, CloudPCRF, and CloudSPS at the VNF layer; the cloud OS (Huawei FusionSphere OpenStack, VMware vSphere, Ubuntu OpenStack, or Red Hat OpenStack) and multi-vendor data center hardware (Huawei, HP, Cisco...) at the NFVI layer; NFVO, VNFM, and VIM forming MANO.]

Huawei CloudCore is located at the VNF layer on an NFV network. CloudCore can work with Huawei MANO entities and use the cloud OS and COTS hardware from either Huawei or other vendors.

O&M layer — Huawei products for each MANO function entity:
NFVO: CloudOpera Orchestrator
VNFM: CloudOpera CSM
VIM: FusionSphere OpenStack OM

VNF layer — each CloudCore NE can be used as a VNF:

CloudIMS: a cloudified IP multimedia subsystem which provides real-time HD audio and video as well as multimedia services to individuals, homes, enterprises, and industries.

CloudDB: a cloudified database which converges abundant subscriber service data for centralized management and opens a variety of interfaces to third parties. CloudDB helps carriers efficiently manage subscriber data, reduce OPEX, hasten service innovation, and monetize their data assets.

CloudSBC: a cloudified session border controller which guarantees VoIP service security and continuity and provides multimedia services. CloudSBC is deployed at the edge access layer on the core network.

CloudPCRF: a cloudified policy and charging rules function which controls how dynamic QoS policies are implemented, how services are charged based on flows, and how subscribers are authorized based on subscription information.

CloudSPS: a cloudified signaling service processing system which translates and converts Diameter signaling destination addresses to transfer messages for subscriber authentication, location updates, and charging management on LTE networks.

NFVI layer:

Virtualization layer: CloudCore can be deployed above mainstream cloud OSs, including Huawei FusionSphere OpenStack, VMware vSphere, Ubuntu OpenStack, and Red Hat OpenStack.

COTS hardware layer: CloudCore can be rolled out on hardware from multiple vendors.
Servers: Huawei E9000 and RH2288, HP C7000, Cisco UCS, etc.
Storage devices: Huawei OceanStor 5500 V3, HP 3PAR, etc.
Network devices: Huawei CE6851 and CE12804, HP 5900, etc.

[1] Cloud OS: an OS designed to operate within cloud computing and virtualization environments and abstract physical resources into virtual resource pools. A cloud OS is composed of a management component (OpenStack, for example) and a virtualization component (namely a hypervisor). The mainstream cloud OSs in the industry include Huawei FusionSphere OpenStack, VMware vSphere, Ubuntu OpenStack, and Red Hat OpenStack.

CloudEdge

How and which core network elements have been cloudified?

Background

CloudEdge is one of the "six clouds" defined in the SoftCOM strategy. Core network cloudification is the third revolutionary change following digitalization and the IP transformation. Virtualization technology is used to reconstruct the silo network: packet service entities are separated from dedicated hardware and rolled out on COTS hardware.

Technical Solution

CloudEdge complies with cloud-based software architecture and design requirements, and includes fully cloudified products like CloudUGW, CloudUSN, CloudEPSN, CloudePDG, and CloudSCEF. CloudEdge can be deployed on COTS hardware and a cloud OS from your choice of vendor.

[Figure: CloudEdge vertical and horizontal integration — CloudUGW, CloudUSN, CloudEPSN, CloudePDG, and CloudSCEF at the VNF layer; the cloud OS (Huawei FusionSphere OpenStack, VMware vSphere, Ubuntu OpenStack, or Red Hat OpenStack) and multi-vendor data center network hardware (Huawei, HP, Cisco...) at the NFVI layer; NFVO, VNFM, and VIM forming MANO.]

Huawei CloudEdge is located at the VNF layer on an NFV network. CloudEdge can work with Huawei MANO entities and use the cloud OS and COTS hardware from either Huawei or other vendors.

O&M layer — Huawei products for each MANO function entity:
NFVO: CloudOpera Orchestrator
VNFM: CloudOpera CSM
VIM: FusionSphere OpenStack OM

VNF layer — each CloudEdge NE can be used as a VNF:

CloudUGW: functions as a GGSN, S-GW, P-GW, or any combination of them and supports 3GPP access to GPRS, UMTS, and LTE networks.

CloudUSN: supports logical entity applications of various radio access technologies and provides mobility management and session management for UEs, as well as other service functions such as CSFB, pooling, and network sharing.

CloudEPSN: functions as an external PCEF that complies with 3GPP specifications. It is deployed between a wireless gateway and a PDN. CloudEPSN analyzes and processes service packets and provides intelligent and flexible service control functions, including service awareness, service control, charging, and bandwidth control.

CloudePDG: allows for non-3GPP access to EPC networks and connects non-3GPP access UEs to a P-GW.

CloudSCEF: a network capability openness platform developed based on the Huawei CloudEdge solution. CloudSCEF combines and orchestrates basic network capabilities and provides them for third-party apps to invoke. CloudSCEF improves user experience for third-party apps: it increases the bandwidth available for videos and shortens the delay for real-time online gaming. CloudSCEF enhances service flexibility and simplifies the deployment of third-party apps.

NFVI layer:

Virtualization layer: CloudEdge can be deployed above mainstream cloud OSs, including Huawei FusionSphere OpenStack, VMware vSphere, Ubuntu OpenStack, and Red Hat OpenStack.

COTS hardware layer: CloudEdge can be rolled out on hardware from multiple vendors.
Servers: Huawei E9000 and RH2288, HP C7000, Cisco UCS, etc.
Storage devices: Huawei OceanStor 5500 V3, HP 3PAR, etc.
Network devices: Huawei CE6851 and CE12804, HP 5900, etc.

Cloud-based Architecture Transition

Phase 1: NFV
Software and Hardware Decoupling
Compute Virtualization
Storage Virtualization
Network Virtualization

Phase 2: NFC
Three-Layer Cloud Architecture
Distribution Layer: CSLB
Processing Layer: Stateless
Data Layer: CSDB
Microservice Architecture

Cloud-based Architecture Transition

How has the CT architecture transformed to cater for users who expect the Real Time, On Demand, All Online, DIY, and Social (ROADS) experience?

Background

Data centers provide an ICT infrastructure for all carrier services and lay the foundation for the transition to the cloud. To fully unleash a data center's potential, Huawei uses NFV and NFC (Cloud Native) to reconstruct its core network product architectures and deploy CT services onto the cloud.

[Figure: NFV versus NFC (Cloud Native) — on the NFV side, VNFs such as CloudIMS, CloudEPC, ... VNFn run with MANO over a decoupled infrastructure provided by various vendors; on the NFC side, each VNF is split into distribution, processing, and data layers under a service management architecture.]

Technical Solution

The transition to a cloud-based architecture undergoes two phases:

Phase 1: NFV

This phase decouples telecom services from dedicated hardware and rolls out services on COTS servers, reducing the costs of procuring and maintaining devices, shortening TTM, and accelerating service innovation.

Software and hardware decoupling: virtualizes compute, storage, and network resources on COTS servers so that they can be shared by upper-layer VNFs.

Compute virtualization: allows multiple VMs to run on a single physical host, virtualizes CPU and memory resources, and standardizes interfaces for VMs to access and manage these resources. By doing this, compute resources can be centrally managed, flexibly scheduled, and fully used.

Storage virtualization: abstracts storage resources into resource pools for central management and makes upper-layer applications unaware of the differences in performance and interfaces of storage devices.

Network virtualization: uses a virtual switch (vSwitch) to enable Layer 2 communication between VMs and between VMs and external networks.

Phase 2: NFC (Cloud Native)

This phase separates applications from data and uses stateless machines to process services. In this phase, all compute resources are pooled for higher reliability. The cloud session load balancer (CSLB) and cloud session database (CSDB) work jointly to evenly distribute services and allow the distribution, processing, and data layers to scale separately. Additionally, the software architecture and services are reconstructed so that automatic deployment, O&M, scaling, and gray upgrades can be performed for each individual service.

The distribution, processing, and data layers are divided to separate applications from data, and the microservice architecture is introduced to hasten service delivery and enhance system security.

Distribution layer: the CSLB allows services and interfaces to have their own independent IP addresses so that service flows can be evenly distributed and VMs can be automatically scaled.

Processing layer: processes use load sharing, and are stateless and pooled to ensure high system availability and on-demand service provisioning.

Data layer: the CSDB, a distributed in-memory database for a cloud-based environment on x86 servers, is used to support service scaling while ensuring carrier-grade reliability and service experience.

Microservice architecture: the microservice management architecture allows developers to develop and manage applications as individual microservices. The architecture also provides diverse functions to ensure service security and reliability.

Phase 1: NFV — Software and Hardware Decoupling

Decoupling software from hardware is the first step towards NFV. What hardware and software does Huawei support for an NFV network?

Background

Traditional core network applications that provide carrier-grade voice, video, and multimedia services are deployed on proprietary hardware, which is costly and time-consuming to deploy. To tackle this, NFV technology is used to decouple software from the underlying hardware so that core network applications can be deployed on COTS hardware and various cloud OSs.

[Figure: Cloudify — EMS/OSS/BSS and MANO manage the decoupled VNF, cloud OS, and COTS hardware layers.]

Technical Solution

Compared to traditional ATCA-based networks, a cloud-based network is more open, agile, and efficient. After the decoupling of software and hardware, carriers are free to select the COTS hardware, cloud OS, and VNFs of their choice.

COTS hardware is mainly provided by IT vendors, yet in the last few years, more and more CT vendors have devoted themselves to the development of this standardized hardware, which has laid the foundations for ICT convergence.

Cloud OSs are mainly provided by software vendors focused on IT and virtualization. CT vendors have since developed their own cloud OSs or made enhancements to the native OpenStack cloud OS to meet CT service deployment and operation requirements.

VNFs are mainly provided by CT vendors who decouple network functions from dedicated hardware. To meet hardware and software decoupling requirements, subrack and slot numbers are removed from MML commands, and hardware arbitration is abandoned.

The four mainstream cloud OSs: Huawei FusionSphere OpenStack, VMware vSphere, Ubuntu OpenStack, and Red Hat OpenStack.

The four mainstream COTS hardware types: Huawei E9000, Huawei RH2288, HP C7000, and Cisco UCS.

Although ETSI has defined the standard NFV architecture and interfaces, vendors have their own understanding of these standards and have adopted different mechanisms to achieve NFV. Due to these differences, the cloud OS and COTS hardware from different vendors may not be fully compatible with each other. The following table lists the cloud OS and COTS hardware combinations that are best used together:

Hardware  | FusionSphere OpenStack             | VMware vSphere                      | Ubuntu OpenStack | Red Hat OpenStack
E9000     | Recommended in E2E scenarios       | -                                   | -                | -
RH2288    | Determined by project requirements | -                                   | Tested           | Tested
HP C7000  | Determined by project requirements | Recommended third-party combination | -                | -
Cisco UCS | -                                  | -                                   | -                | -

Recommended in E2E scenarios: Huawei FusionSphere OpenStack and the E9000 are recommended for all E2E NFV projects.
Recommended third-party combination: Huawei is capable of integrating such a combination.
Determined by project requirements: these combinations can be put into commercial use if customers require them.
Tested: these combinations have passed the integration tests at Huawei Open Labs.

Phase 1: NFV — Compute Virtualization

What compute virtualization technologies has Huawei used to ensure CT service performance?

Background

Compute virtualization is a technique of separating the physical hardware from operating systems and running multiple OSs on a single physical machine. Compute virtualization provides standard I/O interfaces for compute resource access and management, in addition to significantly optimizing resource usage. Applications benefit greatly from compute virtualization technologies but also encounter a slump in performance compared to hardware on legacy networks. What compute virtualization technologies has Huawei used to improve application performance?

Technical Solution

At the core of compute virtualization is the CPU. Service VM performance is highly dependent on how CPUs are allocated. The Huawei CloudCore solution uses the following CPU allocation techniques to ensure service VM performance:

Resource isolation: on each blade, the physical CPU (pCPU)[1] cores for the NFVI and for service VMs are isolated from each other, avoiding CPU resource scrambles. Using the figure below as an example, two dedicated physical CPU cores are allocated to the NFVI.

Non-Uniform Memory Access (NUMA)[2] affinity: VM performance deteriorates if a VM spans multiple NUMA nodes. The Huawei NUMA affinity feature enables the system to automatically deploy each VM on a single NUMA node and balance loads over different NUMA nodes, which helps decrease the memory access delay and improve VM performance.

CPU pinning: CPU pinning enables the system to pin, or establish a mapping between, a virtual CPU (vCPU[3]) and a pCPU core so that the vCPU always runs on the same pCPU core, which means VMs can use their own dedicated pCPUs.

[Figure: Two NUMA nodes (CPU1 and CPU2), each with 12 pCPU cores — each vCPU of a service VM is pinned to a dedicated pCPU core on a single NUMA node, while two cores are reserved for NFVI management and isolated from service VMs.]

[1] pCPU: pCPU cores reflect the processing capabilities of a blade. The number of pCPU cores provided by a blade can be calculated using the following formula:
Number of pCPU cores = Number of CPUs x Number of physical CPU cores provided by each CPU x Hyperthreading coefficient
For example, if a blade has two CPUs and each CPU provides 12 physical CPU cores, with a hyperthreading coefficient of 2, the number of pCPUs on this blade is 48. Hyperthreading technology enables one physical core to behave like two logical CPUs, so it can run two independent applications at the same time.

[2] NUMA: Each blade usually has multiple CPUs, each of which has its own memory resources. Each CPU and its associated memory are called a NUMA node; for example, if a blade has two physical CPUs, it has two NUMA nodes. A CPU accesses the memory on its own NUMA node much faster than memory on a remote NUMA node.

[3] vCPU: A vCPU is a proportion of a physical CPU that is assigned to a VM. Generally, one pCPU is assigned to one vCPU. If one pCPU is assigned to multiple vCPUs, VM performance will deteriorate.
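The pCPU formula from footnote [1] and the pinning policy from the figure can be expressed as a short sketch. The helper names (`pcpu_cores`, `pin_vm`) are invented for illustration; real deployments express these policies through the cloud OS scheduler (for example, OpenStack flavor extra specs) rather than code like this.

```python
def pcpu_cores(num_cpus: int, cores_per_cpu: int, ht_coefficient: int = 2) -> int:
    """pCPU cores = CPUs x physical cores per CPU x hyperthreading coefficient."""
    return num_cpus * cores_per_cpu * ht_coefficient

def pin_vm(vcpus: int, numa_free_cores: list[int]) -> dict[int, int]:
    """Pin each vCPU to its own dedicated pCPU core, all taken from one NUMA node.

    Refusing to pin when the node lacks free cores mirrors the NUMA-affinity
    rule above: a VM must not span NUMA nodes.
    """
    if vcpus > len(numa_free_cores):
        raise RuntimeError("VM would span NUMA nodes; place it on another node")
    return {vcpu: numa_free_cores.pop(0) for vcpu in range(vcpus)}

print(pcpu_cores(2, 12))           # 48, matching the example in footnote [1]
node1_free = [2, 3, 4, 5, 6, 7]    # cores 0-1 reserved for the NFVI (isolation)
print(pin_vm(4, node1_free))       # {0: 2, 1: 3, 2: 4, 3: 5}
```

Note how all three techniques appear: cores 0-1 are withheld for the NFVI (resource isolation), each vCPU gets exactly one pCPU core (pinning), and all of a VM's cores come from one node's free list (NUMA affinity).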


Phase 1: NFV / Software and Hardware Decoupling

Storage Virtualization

What storage virtualization technologies has Huawei used to ensure CT service performance?

Background

Storage virtualization is the pooling of physical storage resources from multiple network storage devices into what appears to be a single storage device that is managed from a central console.

If a SAN[1] is used, physical storage devices are connected to blade servers over a dedicated storage network. How does Huawei virtualize storage resources and ensure efficient, reliable data transmission on a SAN?

[Figure: Disks are grouped into disk domains, a storage pool is created on a disk domain, a LUN is created on the storage pool, and the LUN is mounted to a VM. The server connects to the dual-controller (Controller A/Controller B) storage device over multipath HBA[5] links across the SAN.]

Technical Solution

To put it simply, storage virtualization is the process of abstracting physical disks into a storage pool[2] and creating virtual disks using resources in this pool. The general storage virtualization process is as follows:

1. Add disks to a disk domain[3] and create a storage pool for the domain.

2. Create backend storage and associate it with the storage pool. Create a volume type and associate it with the backend storage. Create a LUN[4] on the storage pool.

3. Mount the LUN to the VM.

The Huawei CloudCore solution uses storage multipathing technology to ensure efficient and reliable data transmission on a SAN.

Storage multipathing achieves storage path redundancy by establishing multiple physical routes between a server and a storage device. When one storage path fails, the multipathing software automatically selects another path to transmit storage data. This technology also allows multiple paths to be combined into one logical path for higher storage bandwidth. To implement storage multipathing, multipathing software must be installed on the servers that connect to the storage devices.
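The three-step storage virtualization flow above can be sketched as a toy data model (all names are illustrative; real deployments drive this through the storage array and FusionSphere interfaces):

```python
from dataclasses import dataclass, field

@dataclass
class StoragePool:
    disk_domain: list                             # step 1: disks grouped into a domain
    luns: list = field(default_factory=list)

    def create_lun(self, lun_id: str) -> str:     # step 2: carve a LUN from the pool
        self.luns.append(lun_id)
        return lun_id

@dataclass
class Vm:
    name: str
    mounted: list = field(default_factory=list)

    def mount(self, lun_id: str) -> None:         # step 3: mount the LUN to the VM
        self.mounted.append(lun_id)

pool = StoragePool(disk_domain=["disk0", "disk1", "disk2"])
lun = pool.create_lun("LUN-0")
vm = Vm("vm1")
vm.mount(lun)
print(vm.mounted)  # ['LUN-0']
```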

[1] SAN: storage-area network, which is a high-speed dedicated network that connects servers to the various storage devices
those servers use.
[2] Storage pool: A container of storage resources, in which multiple file systems can be created. Storage resources required by
application servers all come from storage pools. One storage pool corresponds to one disk domain.
[3] Disk domain: consists of disks of the same or different types. Disk domains are isolated from each other, so services
carried by different disk domains do not affect each other in terms of performance or faults (if any).
[4] LUN: A logical unit number (LUN) identifies a logical unit, which is a device addressed by the Small Computer Systems
Interface (SCSI) protocol or by SAN protocols that encapsulate SCSI, such as Fibre Channel (FC) or Internet Small Computer
Systems Interface (iSCSI).
[5] HBA: A host bus adapter (HBA) is a circuit board and/or an integrated circuit adapter that provides I/O processing and physical
connectivity between a server and a storage device.


Phase 1: NFV / Software and Hardware Decoupling

Network Virtualization

What network virtualization technologies has Huawei used to ensure CT service performance?

Background

Network virtualization uses virtual switches to exchange traffic between physical hosts and VMs at Layer 2. VMs are connected to external networks through virtual switches that are bound to physical NICs.

CT services require high forwarding performance and little to no delay, which is assured by purpose-built hardware on the traditional ATCA platform. COTS hardware is used on an NFV network. How does Huawei ensure forwarding performance on such a network?

[Figure: OpenStack-native OVS versus Huawei-enhanced EVS. In both, applications in guest OSs reach the virtual switch through vNICs. OVS forwards packets in the kernel space of the host OS, whereas EVS forwards them in user space, using huge-page memory and an exclusive CPU core, over DPDK-enabled NICs.]

Technical Solution

Overhead at the network layer is mainly caused by packet dispatching, data copying from kernel space to user space, and context switching on blocking I/O. Huawei uses the Elastic Virtual Switch (EVS) and the Intel Data Plane Development Kit (DPDK)[1] to reduce network overhead and ensure forwarding performance.

The Huawei EVS solution was developed based on Open vSwitch (OVS), the OpenStack-native forwarding technology. The EVS runs in user space of the host OS at the virtualization layer and uses DPDK-based high-speed I/O channels for packet transmission. EVS provides higher forwarding performance than OVS and better meets CT service requirements. Key EVS technologies:

NIC: Physical NICs use the Intel DPDK to boost packet processing performance.

EVS: The EVS runs in user space[2] on the host OS and leverages DPDK and huge-page memory[3] to improve network performance. In OVS, packets are received and sent in kernel space. In EVS, however, a dedicated thread bypasses the kernel and sends and receives packets in user space, thereby improving packet forwarding performance.

Exclusive CPU core: A dedicated CPU core is allocated to the EVS to receive and send packets.
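The kernel-bypass, poll-mode idea behind DPDK and the EVS can be illustrated with a toy receive loop: instead of taking one interrupt (and context switch) per packet, a dedicated thread polls the NIC ring and drains packets in bursts. This is a conceptual sketch only, not the DPDK API:

```python
from collections import deque

class NicRing:
    """Toy NIC receive ring polled by a dedicated thread (no interrupts)."""
    def __init__(self):
        self._ring = deque()

    def receive(self, packet):           # hardware side: NIC enqueues a packet
        self._ring.append(packet)

    def poll(self, burst=32):            # software side: drain up to one burst
        batch = []
        while self._ring and len(batch) < burst:
            batch.append(self._ring.popleft())
        return batch

ring = NicRing()
for i in range(5):
    ring.receive(f"pkt{i}")

# One poll drains the whole burst; no per-packet interrupt overhead.
print(ring.poll())  # ['pkt0', 'pkt1', 'pkt2', 'pkt3', 'pkt4']
```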

[1] DPDK: The Intel DPDK is a set of data plane libraries and network interface controller drivers for fast packet processing. It
provides a programming framework for x86 processors and enables faster development of high-speed data packet networking
applications by using technologies such as kernel bypass, polling (no interrupt overhead), and huge-page memory.
[2] Kernel space and user space: The most significant distinction between programs running in kernel space and user space is
their privileges. Programs in kernel space have unlimited access to storage and external devices, while programs in user
space can only access specified ports or storage space.
[3] Huge-page memory: Huge-page memory reduces page lookup time. Programs in user space use huge-page memory to
improve processing performance.


Phase 2: NFC

Three-Layer Cloud Architecture

How can NFV evolve to NFC? How does Huawei build the three-layer architecture?

Background

As NFV technologies have matured and ICT convergence has further deepened, Cloud Native (NFC) has become the future of NFV. Cloud Native aims to build a fully distributed, fully automated, elastic, robust, and agile telecom network on general-purpose hardware.

[Figure: Evolution from NFV to NFC (Cloud Native). On the IaaS layer (COTS hardware and the Cloud OS) and the PaaS layer (CGP), the SaaS-layer VNF (OMU, applications, services) is split into a data layer (CSDB: unified storage, data separation), a processing layer (stateless), and a distribution layer (CSLB: even distribution).]

Technical Solution

Huawei uses the three-layer cloud architecture, also called the data separation architecture, to evolve NFV to Cloud Native (NFC). This architecture is built over the Carrier Grade Platform (CGP) and separates data from applications.

Distribution layer: Service processes in this layer are pooled and use the CSLB to evenly distribute service requests.

Processing layer: Service processes in this layer are pooled and stateless.

Data layer: uses the x86-based, distributed CSDB to store subscriber and session data.

CGP

CT vendors have absolute technical advantages over IT vendors in Platform as a Service (PaaS). Huawei CGP provides the PaaS for CT services on a cloud-based network and is dedicated to system and device management, soft forwarding, and packet protocol processing for upper-layer applications.

Data separation

Data separation is a mainstream architecture in the IT industry. In this architecture, stateful data is separated from service processes and uniformly stored in the Cloud Session Database (CSDB). Service processes working in load-balancing mode are pooled and stateless. When a process is faulty, any process in the same pool can obtain subscriber and session data from the CSDB to take over services from the faulty process. This architecture makes control-plane applications more adaptable to a cloud environment and ensures elastic scaling and high availability of NFV services.

Comparison of the legacy and cloud-based network architectures:

                        Legacy ATCA-based architecture      Cloud-based architecture
Hardware and software   Tightly coupled                     Decoupled
Software architecture   Vertically integrated               Three separated layers with data
                                                            separation (easy to scale)
Deployment              Manual                              Automated
Performance             Hardware acceleration               Software optimization and CSLB
Availability            Both hardware and software have     Software has carrier-grade availability,
                        carrier-grade availability          and the NFVI uses the HA feature to
                                                            ensure high availability
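A minimal sketch of the data separation idea, assuming a plain dictionary standing in for the CSDB: because workers keep no state of their own, any pooled worker can take over a session when another fails. All names are illustrative:

```python
csdb = {}  # shared session store: session_id -> state (stand-in for the CSDB)

class Worker:
    def __init__(self, name):
        self.name = name
        self.alive = True

    def handle(self, session_id, event):
        state = csdb.get(session_id, {"events": []})  # fetch state, never cache it
        state["events"].append(event)
        state["owner"] = self.name
        csdb[session_id] = state                      # write state back
        return state

pool = [Worker("scu1"), Worker("scu2")]

def dispatch(session_id, event):
    worker = next(w for w in pool if w.alive)         # any live worker will do
    return worker.handle(session_id, event)

dispatch("sess-42", "INVITE")
pool[0].alive = False                                  # scu1 fails mid-session
state = dispatch("sess-42", "BYE")                     # scu2 resumes from the CSDB
print(state["owner"], state["events"])                 # scu2 ['INVITE', 'BYE']
```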

Phase 2: NFC / Three-Layer Cloud Architecture

Distribution Layer: CSLB

How do the CSLB's advantages support cloud features?

Background

Huawei CloudCore uses the basic NFV architecture to cloudify service solutions including IMS, SDM, and PCRF, and provides common functions such as balanced service distribution, elasticity, and distributed data storage.

The Cloud Session Load Balancer (CSLB) lays the foundation for elastic networks. This technology gives service solutions automated module elasticity, cross-DC message distribution, and gray upgrades of services, improving service robustness and facilitating the telecom network transition from NFV to Cloud Native.

[Figure: Active/standby versus load-sharing. In the active/standby model, DPU[2] pairs connect to the peer and to BSUs[4] and SCUs[5], and links, DPUs, and service IP addresses must be added manually. In the load-sharing model, CSLBs front the DPU pool: CSLBs and DPUs scale out automatically, interface IP addresses are assigned to new CSLBs and DPUs automatically, the service IP address stays fixed, and the peer is unaware of capacity expansion on the other side.]

Technical Solution

Before the CSLB is introduced, DPUs work as active/standby pairs. A local VNF interconnects with the peer using the service IP addresses[1] of the DPUs. A standby DPU can take over services only from its active DPU, and services may fail if both the active and standby DPUs are faulty. In addition, DPU processing capacity is not fully utilized. If additional DPU pairs are required, service IP addresses and links must be manually added, and configurations on the peer must be adjusted accordingly.

After the CSLB is introduced, both the CSLB and the DPUs work in load-sharing mode, which improves reliability and maximizes DPU performance. The CSLB shields the internal links of the local VNF from the peer and connects to it directly, providing a unified external service IP address. The CSLB connects to the DPUs through an internal interface. When more DPUs and CSLBs are added, the system automatically assigns interface IP addresses[3] to the new DPUs and CSLBs, helping them automatically scale out with fixed service IP addresses and without affecting the peer.
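The load-sharing behavior described above can be sketched as follows: a toy CSLB exposes one fixed service IP and hashes each session onto whichever DPUs are currently in the pool, so DPUs can be added without reconfiguring the peer. Illustrative only; this is not the CSLB's actual dispatch algorithm:

```python
import hashlib

class Cslb:
    SERVICE_IP = "10.0.0.100"            # the unified external service IP

    def __init__(self, dpus):
        self.dpus = list(dpus)

    def add_dpu(self, dpu):              # scale-out: nothing changes for the peer
        self.dpus.append(dpu)

    def dispatch(self, session_id: str) -> str:
        digest = hashlib.md5(session_id.encode()).digest()
        return self.dpus[digest[0] % len(self.dpus)]

lb = Cslb(["dpu1", "dpu2"])
before = {lb.dispatch(f"sess-{i}") for i in range(100)}
lb.add_dpu("dpu3")                       # new DPU joins the dispatch pool
after = {lb.dispatch(f"sess-{i}") for i in range(100)}
print(Cslb.SERVICE_IP)                   # 10.0.0.100 -- fixed across scale-out
```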

[1] Service IP address: refers to the source or destination IP address contained in a service packet. The peer uses the received
service IP address for addressing and IP-based routing.
[2] DPU: Dispatching Unit. It dispatches messages.
[3] Interface IP address: configured for a physical or virtual port and maps to a fixed MAC address. It is primarily used to detect
network connections.
The CSLB interworks with DPUs using internal interface IP addresses.
The CSLB interworks with switches using external interface IP addresses.
[4] BSU: Broadband Signaling Unit. It dispatches IP, SCTP, and TCP messages.
[5] SCU: Session Control Unit. It processes services of each logical VNF.

Phase 2: NFC / Three-Layer Cloud Architecture

Processing Layer: Stateless

How can the technical advantages of the service processing layer be used on cloud-based networks?

Background

The legacy embedded architecture of telecom networks is evolving towards a service-and-data-separated architecture, which requires layer-based elasticity. Stateful data is abstracted from service processes and uniformly stored in the CSDB. Service processes deployed in a pool evenly share loads. Stateless[1] service processing and the redundancy pool allow any process to obtain session data[2] and subscriber data[3] from the CSDB in real time and take over services from faulty processes, ensuring fast elastic scale-in/out and high availability.

[Figure: Three layers. Data layer: the CDB's centralized storage of global data with 1+1 redundancy becomes the CSDB's distributed storage of subscriber and session data with a redundancy pool. Processing layer: SCUs run in a redundancy pool, with service processing separated from data storage and services stateless. Distribution layer: BSUs and DPUs run in a redundancy pool with service-based distribution, not bound to subscribers.]

Technical Solution

In the legacy embedded architecture, service processes and data storage are integrated, so compute and storage resources cannot be scaled separately. Moreover, subscribers on active sessions cannot be quickly migrated when processes are scaled.

The service-and-data-separated architecture addresses these issues by abstracting subscriber and session data from the processing layer. Such data is stored at the data layer so that the processing layer can focus on service processing. This architecture outperforms the legacy embedded architecture in the following aspects:

High utilization rate

Service processes no longer run as 1+1 active/standby pairs but in a pool, which greatly improves the utilization of compute resources.

Layer-based elasticity

During off-peak hours, compute resources automatically scale in, independently of storage resources.

Service continuity

Service processing is now stateless, so services are not interrupted even if a process becomes faulty or a scale-in/out occurs.

DPUs, which are deployed in a pool and not bound to subscribers, evenly distribute service messages to SCUs.

SCUs, which are also deployed in a pool, obtain the required subscriber and session data from the CSDB when processing services. Service processing is stateless. If any SCU becomes faulty, other SCUs automatically take over its services and obtain the required subscriber and session data from the CSDB, ensuring service continuity.

[1] Stateless: Each request is independent and unrelated to any other request. All information required to process a request is
either contained in the request itself or obtained from an external device, for example, a database. Servers do not store any
information required for processing the request. Stateless services are of great importance for elastic scale-in/out.
[2] Session data: identifies a session. It is usually dynamic data that is generated during a session.
[3] Subscriber data: identifies a specific subscriber. It is usually static data stored after the subscriber is defined, for example, the
subscriber's numbers or service information.


Phase 2: NFC / Three-Layer Cloud Architecture

Data Layer: CSDB

How does the CSDB empower cloudification?

Background

The Cloud Session Database (CSDB) is a key element of the CloudCore solution. It enables application and data separation, unified storage, elastic scaling, and cross-DC[1] deployment on NFV networks.

The CSDB, an NFV-oriented technology, provides a distributed database based on x86 servers and cloud environments. It stores the session and subscriber data required to process services, meets the requirements for elastic service scaling, and ensures carrier-grade user experience and high reliability.

[Figure: The data layer evolves from the CDB (centralized storage of global data, 1+1 redundancy) to the CSDB (distributed storage of subscriber and session data, redundancy pool, automatic scaling). SCUs at the processing layer and BSUs/DPUs at the distribution layer run in redundancy pools and obtain data from the CSDB.]

Technical Solution

As the data management layer in the SoftCOM solution, the CSDB outperforms the legacy in-memory database because:

1. It is completely decoupled from service logic and serves as a common database component.

2. It is shared by all VNFs and manages data, including data backup, migration, and load balancing.

3. It supports elastic scaling with a distributed data storage architecture.

4. Its all-active, load-sharing deployment greatly improves the utilization of compute resources compared with active/standby pairs.

5. It better supports cross-DC data backup to ensure service continuity.

SCUs obtain data from the CSDB in real time. After the CSDB automatically scales out, the SCUs
automatically interconnect to the new CSDB nodes to obtain data.
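A toy sketch of the scale-out behavior described above: records are partitioned across database nodes and rebalanced when a node joins, so clients keep finding their data automatically. Illustrative only; this is not the CSDB's real partitioning scheme:

```python
class CsdbCluster:
    """Toy partitioned session store (stand-in for the CSDB)."""
    def __init__(self, nodes):
        self.nodes = {name: {} for name in nodes}

    def _node_for(self, key):
        names = sorted(self.nodes)
        return names[sum(key.encode()) % len(names)]   # toy placement rule

    def put(self, key, value):
        self.nodes[self._node_for(key)][key] = value

    def get(self, key):
        return self.nodes[self._node_for(key)].get(key)

    def scale_out(self, new_node):
        # Add a node, then rebalance: move every record to its new home.
        records = [(k, v) for n in self.nodes.values() for k, v in n.items()]
        self.nodes[new_node] = {}
        for n in self.nodes.values():
            n.clear()
        for k, v in records:
            self.put(k, v)

db = CsdbCluster(["csdb1", "csdb2"])
db.put("sess-7", {"state": "active"})
db.scale_out("csdb3")                 # automatic scale-out
print(db.get("sess-7"))               # {'state': 'active'} -- still reachable
```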

[1] Cross-DC: The distributed CSDB backs up data over multiple DCs to support cross-DC network-level switchovers, ensuring
service continuity and effectively improving system reliability.


Phase 2: NFC / Three-Layer Cloud Architecture

Microservice Architecture

How do networks transition to the MSA? What are the MSA's unique features?

Background

According to Huawei's "All Cloud" strategy, Cloud Native will revolutionize telecom networks. The Microservice Architecture (MSA) is an important capability of Cloud Native. Huawei is now gradually transitioning its NFV architecture to the MSA.

Technical Solution

The MSA outperforms the traditional cloud architecture because:

It is highly cohesive and loosely coupled, with self-governed microservices[1].

O&M is automated, including deployment, upgrades, scale-in/out, alarms, monitoring, fault locating, and self-healing.

Stateless services are automatically scaled on demand. They start up fast and degrade gracefully.

Future MSA:

[Figure: The future MSA. At the SaaS layer, large-granularity services (VNFs on guest OSs, VM deployment) coexist with microservices (for example, session and policy management on a container OS, container deployment), overseen by a service-oriented O&M center. The PaaS layer provides the microservice governance framework, common and field services, and the cloud application deployment & operating environment. The IaaS layer comprises the NFVI, private clouds, and public clouds.]

IaaS layer

Supports Huawei public clouds and E2E NFVI deployment.

PaaS layer

Microservice governance framework: develops and manages E2E microservices, and simplifies the process of transforming applications into microservices. Various governance functions ensure secure and reliable microservice system operations.

Common and field services: integrate services from third-party open-source communities, partners, and public clouds to provide differentiated services.

Cloud application deployment & operating environment: supports the VNFM, lightweight application deployment, containers[2], and VM running environments.

Service-oriented O&M center: provides automated, scenario-based O&M functions, gray release of applications and services, and unified O&M.

SaaS layer

Large-granularity services: allow the CloudCore and CloudEdge architectures to transform to a service architecture in the form of large-granularity services.

Microservices: enable reconstructed service VNFs and 5G networks to be deployed in the form of microservices.

[1] Microservice: Each microservice is an application that can be independently deployed. Communication between
microservices is lightweight. A series of microservices make up the MSA.
[2] Container: A container is created using container virtualization and only includes application processes and their dependent
packages. These processes run over a host OS using container technology and are isolated from each other. Container
virtualization and VMs use different virtualization technologies. A VM is created using traditional hardware virtualization and
includes application processes and a complete guest OS required by these processes.

Deployment

VNF Co-deployment

How can Huawei's CloudCore and CloudEdge products be deployed on the same cloud?

Background

When VNFs from different product families are co-deployed in the same DC, they share the same FusionSphere components (including OpenStack and the OM), the VNFM, and all or some of the NFVI resources (including cabinets, chassis, servers, switches, and disk arrays).

Huawei CloudCore and CloudEdge VNFs can be deployed on the same cloud and share NFV resources. This helps simplify O&M management and reduce costs.

Technical Solution

Hardware resources are divided into host aggregates (HAs)[1] for upper-layer applications. Different products or solutions can be deployed in the same HA to maximize resource utilization.

Management-zone products, such as the U2000, VNFM, and FusionSphere OpenStack OM, are deployed in the same HA, which is shared by CloudCore and CloudEdge.

CloudCore or CloudEdge products are deployed in the same HA, with the exception of those with special requirements on hardware, traffic, or security.

CloudCore and CloudEdge can be deployed in independent HAs, or share one HA on the control plane and another on the media plane.

Flow control

If multiple service VNFs are deployed in the same HA, multiple bandwidth-hungry VMs may be assigned to the same host, and the host may not be able to provide sufficient bandwidth resources, causing congestion issues. To address these issues, Huawei FusionSphere OpenStack and CloudOpera CSM have been designed to control the traffic flow, ensuring maximized resource usage without affecting performance.

The following illustrates how CloudCore and CloudEdge are co-deployed when their control planes share an HA and their media planes share another. In this example, CloudCore and CloudEdge are deployed in the same VDC[2] but in different VPCs[3].

[Figure: The management zone (VDC-MNG, VPC MNG) hosts the U2000, VNFM, USCDB, and Cloud OS in a shared management HA (HA_MNG). In VDC1, the CloudCore & CloudEdge control plane (VPC-IMS, VPC-PCRF, VPC-UIC, VPC-USN) shares HA1, and the CloudCore & CloudEdge media plane (VPC-UGW, VPC-MSE, VPC-SBC) shares HA2. Everything runs on FusionSphere over COTS E9000 servers and shared storage.]

*By default, HAs are planned as follows:

HA                                      Example solutions in the HA
Management HA                           FS, VNFM, and U2000
Control plane HA in the trusted zone    IMS, UPCC, DRA, UC (RCS), MSOFTX3000, SDM, DSP, CaaS, EC (excluding MCU), UIC, etc.
Media plane HA 1 in the trusted zone    SBC
Media plane HA 2 in the trusted zone    USN, CG, UGW, MSE, ePDG, C-SGN, etc.
MCU HA                                  EC MCU
Control plane HA in the DMZ             DRA, DSP, EC USM-Proxy and MediaX-Proxy, SDM UIM, UC RCS, and other VNFs whose traffic can optionally pass through a firewall
Control plane HA in the DMZ_FW          VNFs whose traffic must pass through a firewall, for example, CaaS
Media plane HA in the DMZ               SBC
CDM HA                                  CDM

*HA plans may vary depending on project requirements and live network design.

Huawei CloudCore and CloudEdge solutions use the same hardware, networking, and FusionSphere OpenStack configurations, and can be co-deployed.

[1] Host aggregate: a group of hosts that have the same properties, for example, the same compute, storage, and network
performance, or the same SR-IOV capability. Resources in an HA are physically isolated from others.
[2] VDC: virtual data center, which is a virtual resource pool containing compute, network, and storage resources for users to
deploy applications. Resources in a VDC are isolated from others.
[3] VPC: virtual private cloud created in a VDC. A VPC is used to build an isolated virtual network environment which can be
configured and managed by users themselves. It enhances resource security and simplifies network deployment.
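The default HA plan in the table can be represented as a simple lookup. Only a subset of rows is shown here and, as the note says, real HA plans vary per project, so this is purely illustrative:

```python
HA_PLAN = {
    "Management HA": {"FS", "VNFM", "U2000"},
    "Control plane HA (trusted zone)": {"IMS", "UPCC", "DRA", "SDM"},
    "Media plane HA 1 (trusted zone)": {"SBC"},
    "Media plane HA 2 (trusted zone)": {"USN", "CG", "UGW", "MSE"},
}

def ha_for(vnf: str) -> str:
    """Return the HA a VNF is planned into, per the default plan above."""
    for ha, vnfs in HA_PLAN.items():
        if vnf in vnfs:
            return ha
    raise KeyError(f"no HA planned for {vnf}")

print(ha_for("SBC"))  # Media plane HA 1 (trusted zone)
```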


Automated VNF Deployment

How are VNFs automatically deployed and managed through a unified interface?

Background

The NFV architecture is divided into three layers from top down: SaaS, PaaS, and IaaS. A VNFM is required to schedule NFVI resources to quickly and automatically deploy VNFs and elastically scale VMs in or out.

Huawei CloudOpera CSM provides VNFM functions. It manages VNF lifecycles and helps carriers simply and efficiently operate and maintain their cloud networks.

[Figure: MANO. The OSS/BSS and the Huawei CloudOpera Orchestrator sit above the Huawei CloudOpera CSM, which contains the EMS adaptor, VNF lifecycle management, VNF packages (VNFD, image, and software on a simple SFTP server), and the VIM adaptor. The VIM (FusionSphere OpenStack OM, vCloud Director, vCenter, or a third-party VIM) manages the NFVI (hardware and virtualization layers), where the VNFs run. Steps 1-7 of the VNF creation process are marked in the figure.]

Technical Solution

Huawei CloudOpera CSM features:

Efficient, automated deployment

The CSM flexibly selects VNFs according to service plans and automatically deploys the VNFs in the cloud environment, which significantly improves VNF deployment efficiency and accelerates the commercial use of telecom clouds.

Comprehensive VNF lifecycle management

The CSM helps carriers easily manage VNF lifecycles, including VNF deployment, monitoring, scaling, and termination.

Identification of affected VNFs in case of faults

The CSM receives hardware alarms and locates affected VMs and their services. VNFs trigger their protection mechanisms the moment they receive the notification from the CSM, assisting maintenance engineers in quickly rectifying faults.

The CSM uses VNFD[1] files to automatically deploy VNFs. After the VNFD file and VNF software packages are uploaded, the CSM automatically deploys the VNF by creating and powering on VMs and installing the software, according to the VNF description in the VNFD file.

VNF creation process:

1. Upload the VNFD: A user manually uploads the VNFD, image, and software files to the VNFM.

2. Create a task: The VNFM creates a VNF deployment task.

3. Instantiate the VNF: The VNFM applies for compute, storage, and network resources from the VIM to create VMs.

4. Create the VMs: The VIM reserves the required resources and creates the VMs.

5. Install VNF software: After the VMs start up, the VNF downloads software packages from the SFTP server and installs the software. The VNF then notifies the VNFM that the installation is complete.

6. Notify the EMS: The VNFM notifies the EMS that the VNF has been deployed.

7. Configure the VNF: The EMS configures the VNF.

[1] VNFD: virtualized network function descriptor, which describes a VNF (for example, the VNF deployment policies, dependent
software packages, and scaling policies) and defines the compute, storage, and network resources required by the VNF. One
VNFD applies to only one VNF.


One-Click Deployment Tool

How can VNFs be deployed with just one click?

Background

NFV projects include delivery of hardware and VNFs, which requires complex manual operations, takes a long delivery period, and poses high requirements on operating personnel. To address this issue, Huawei developed a one-click deployment tool.

[Figure: A laptop running the one-click deployment tool connects to the hardware (computing, storage, and network devices), automatically configures the hardware (step 3), and deploys FusionSphere OpenStack and the OM (step 4), the VNFM (step 5), and the EMS (step 6), after network design data and software packages are prepared (step 1) and uploaded (step 2). The OSS/BSS, NFVO, EMS, VNFM, and VNFs sit above the virtualized infrastructure.]

Technical Solution

On-site customer service engineers can install the tool on their laptops. After hardware devices are installed, they can manually import network design data and software packages, connect the laptop to the devices through serial and Ethernet ports, and enable the tool to automatically configure hardware data on storage devices, switches, and servers, and automatically deploy FusionSphere OpenStack, the OM, MANO, and the U2000.

After running the one-click deployment tool, engineers can create a deployment task on the tool's WebUI and customize the steps of the whole automated process. They can conveniently set the models and versions of the hardware and software to be deployed, specify the paths of the software packages, and enter or upload network design data.

In addition, deployment preparations, including creating a task, configuring network design data, and uploading software packages, can be made in advance, so that engineers can directly execute the task in the equipment room.

1. Prepare network design data: Install hardware devices and connect the corresponding cables. Prepare network design data.

2. Upload the data: Install the tool on your laptop. Upload the network design data and software packages to the tool.

3. Configure hardware: Connect the laptop to a switch through a serial cable, and enable the tool to automatically configure the switch data. Connect the laptop to a disk array through a network cable (to the management port on the disk array) or a serial cable, and enable the tool to automatically configure the disk array data. Use a network cable to connect the laptop to the management port on an E9000 server, and enable the tool to automatically configure the server data.

4. Deploy FS: Enable the tool to automatically deploy FusionSphere OpenStack and the OM and connect them.

5. Deploy the VNFM: Enable the tool to automatically deploy the VNFM and complete the initial configuration and interworking data.

6. Deploy the EMS: Enable the tool to automatically deploy the U2000 and complete the initial configuration and interworking data.
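The six deployment steps behave like an ordered task pipeline that can be prepared in advance and executed later. A minimal sketch follows (step names come from the text; the tool's internals are not public, so the resumable-pipeline shape is our assumption):

```python
STEPS = [
    "prepare network design data",
    "upload data and software packages",
    "configure hardware (switches, disk arrays, servers)",
    "deploy FusionSphere OpenStack and the OM",
    "deploy the VNFM",
    "deploy the EMS (U2000)",
]

def run_deployment(steps, executed=None):
    """Run steps in order, skipping any already completed earlier."""
    executed = list(executed or [])
    for step in steps:
        if step not in executed:
            executed.append(step)   # a real tool would invoke the step here
    return executed

done = run_deployment(STEPS)
print(len(done))  # 6
```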

Maintenance

Unified O&M

How do I efficiently manage so many components in the NFV architecture?

Background

The NFV architecture comprises various components from multiple layers, including the hardware (compute/storage/network resources), virtualization, and VNF layers. Each component is maintained using an independent O&M UI. Multi-UI management cannot present a coherent view of the relationships between resources at each layer. It also compromises O&M efficiency and increases O&M complexity. A unified UI is needed.

[Figure: Alarm and KPI reporting paths. VNFs (Cloud IMS, Cloud DB, Cloud SBC), the VNFM, and PNFs report to the U2000 over SNMP, RESTful, and sFTP interfaces. NFVI components (virtual compute/storage/network, servers, network devices, and third-party hardware) report to eSight over SNMP/RESTful, and FusionSphere OpenStack (Ceilometer, Neutron) reports through the FusionSphere OpenStack OM; eSight forwards these alarms to the U2000.]

Technical Solution

eSight is introduced to streamline NFVI-level O&M. It provides the following functions:

1. Topology monitoring: provides unified topology views that present real-time device and link status in color-coded severities, and identifies the scope of affected areas to better aid maintenance engineers.

2. Resource management: allows users to create groups, to which they can add devices belonging to different subnets, thereby improving device management efficiency.

3. Performance management: displays device performance statistics and enables users to analyze historical data, create performance analysis models, and estimate future network performance, from which they can modify network configurations and parameters to optimize network operations.

4. Alarm queries: presents NFVI alarms for network-wide device monitoring and data queries.

The U2000 is introduced to centralize NFV-level O&M. All system alarms and KPIs are reported to the U2000 via the following paths:

VNF alarm/KPI report (VNF -> U2000): Each VNF reports alarms to the U2000.

VNFM alarm/KPI report (VNFM -> U2000): The VNFM reports alarms to the U2000.

eSight alarm/KPI report (eSight -> U2000): eSight reports alarms to the U2000.

Hardware alarm/KPI report (COTS hardware -> eSight -> U2000): eSight forwards hardware alarms to the U2000.

FusionSphere alarm/KPI report (FusionSphere OpenStack -> FusionSphere OpenStack OM -> eSight -> U2000): eSight forwards FusionSphere OpenStack alarms to the U2000.

eSight reports some VNF-affecting hardware alarms to the VNFM, which then associates the alarms with the specific VNFs. The VNFM does not forward NFVI alarms to the U2000; NFVI alarms are still issued from eSight to the U2000.

The result is unified topology monitoring, resource and performance management, and alarm queries.
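The reporting paths above amount to a routing table: each alarm source has a fixed forwarding chain that ends at the U2000. A minimal sketch (illustrative; the real systems use SNMP and RESTful interfaces):

```python
PATHS = {
    "vnf":          ["VNF", "U2000"],
    "vnfm":         ["VNFM", "U2000"],
    "esight":       ["eSight", "U2000"],
    "hardware":     ["COTS hardware", "eSight", "U2000"],
    "fusionsphere": ["FusionSphere OpenStack", "FusionSphere OpenStack OM",
                     "eSight", "U2000"],
}

def route(source: str) -> list:
    """Return the hop-by-hop path an alarm from `source` travels."""
    return PATHS[source]

print(route("hardware"))  # ['COTS hardware', 'eSight', 'U2000']
```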


Intuitive NFV Topology Views


How do I easily find out the usage of physical and virtual resources for each VNF?


Background

NFV leverages virtualization software to abstract physical compute, storage, and network
resources into virtual resources and deploys VNFs using these resources. If users cannot view
each VNF's physical and virtual resource usage and the resource dependencies at each layer, they
may find it difficult to locate problems when a VNF becomes faulty. eSight can tackle this
problem using its topology views.

Technical Solution

eSight presents the mappings between a specific VM and its physical and virtual resources.
When a VM is faulty, these mappings help quickly locate the root cause of the fault. The figure
on the right uses a VM topology as an example.

Cloud platform layer

Displays the cloud platform system, host, and host aggregate specific to a VM, records the
resource status and total number of alarms in real time, provides links to the alarm list, and
identifies the physical server that houses the VM.

Virtualization layer

Displays VM-specific virtual storage (disks and backend storage) and compute (vNICs) resource
usage, and displays their virtual resource status and KPIs in real time.

Hardware layer

Presents VM-specific physical storage (disk arrays) and compute (blades and their ports)
resource usage.
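The three layers above can be read as a per-VM mapping that is walked top-down during fault localization. The sketch below is a minimal illustration; all identifiers (`vm-01`, `blade-12`, `san-401`, and the function names) are hypothetical placeholders, not real eSight inventory data.

```python
# Minimal sketch of the cross-layer mappings eSight presents for a VM.
# All names below are illustrative placeholders.
VM_TOPOLOGY = {
    "vm-01": {
        "cloud_platform": {"host": "blade-12", "host_aggregate": "az1.dc1"},
        "virtualization": {"vnics": ["vnic1", "vnic2"], "disks": ["vol-a"]},
        "hardware": {"blade": "blade-12", "storage_array": "san-401"},
    },
}

def locate_layers(vm: str):
    """Yield (layer, resources) pairs in the order a maintenance
    engineer would walk them when diagnosing a faulty VM."""
    for layer in ("cloud_platform", "virtualization", "hardware"):
        yield layer, VM_TOPOLOGY[vm][layer]

for layer, resources in locate_layers("vm-01"):
    print(layer, resources)
```

Keeping the three layers in one structure is what lets a single faulty-VM query surface the physical server and storage array behind it.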


Auto Scaling
How do I deploy services on demand and maximize resource utilization?

Background

The NFV architecture abstracts physical telecom devices into software applications that can be
run on VMs. This technological breakthrough makes on-demand resource allocation a reality.
Resource utilization can be maximized by adjusting resources based on teleservice demands.

Technical Solution

When VNFs are deployed using a VNFD file, their auto scaling policies are also specified in the VNFD
file. The policies include triggering conditions and scaling methods. The MANO records these
policies when analyzing the VNFD file.

The system automatically checks whether the scaling conditions are met. If they are, the system
automatically triggers the operation to dynamically schedule resources.

A scale-out is used as an example:


1 A user configures scale-out policies in the VNFD file, including triggering conditions,
threshold, and the number of VMs required, and enables auto scaling on the MANO.

2 The MANO periodically obtains real-time service KPIs from VNFs and CPU usage from the
NFVI layer. If scale-out conditions are met, for example, the VM CPU usage exceeds the
scale-out threshold, the MANO starts scaling out VMs.

3 The MANO determines the number of VMs to be added, confirms the resource condition,
and requests VM resources from the NFVI layer.

4 After the requested VMs are allocated, the MANO powers on these new VMs.
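The trigger check in step 2 can be sketched as a small decision function. The policy fields and threshold values below are illustrative assumptions, not fields of a real VNFD schema.

```python
# Sketch of the MANO's scale-out check described in steps 1-4 above.
# Policy fields and values are illustrative assumptions.
def check_scale_out(cpu_usage: float, policy: dict) -> int:
    """Return the number of VMs to add, or 0 if the trigger is not met."""
    if cpu_usage > policy["cpu_threshold"]:
        return policy["vms_per_scale_out"]
    return 0

# Hypothetical policy recorded by the MANO when parsing the VNFD
policy = {"cpu_threshold": 0.8, "vms_per_scale_out": 2}

print(check_scale_out(0.85, policy))  # threshold exceeded: add 2 VMs
print(check_scale_out(0.40, policy))  # threshold not met: add 0
```

A real MANO would evaluate service KPIs alongside CPU usage and confirm resource availability (step 3) before powering on the new VMs.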
Auto Scaling Policies in the VNFD

Triggering Condition
· VM CPU usage
· Service KPIs, such as the number of users, traffic, and media resource usage

Method
Scale-out/-in or scale-up/-down[1]

[1] Scale-out/-in: To add/remove resource instances (VMs, for example) to/from a system.
Scale-up/-down: To change allocated resources, for example, by increasing/decreasing memory, CPU capacity, or storage
size. To date, this method has not been supported.


High VM Availability
Are COTS hardware and VMs highly reliable?

Background

One of the challenges the NFV architecture faces is to provide E2E high availability (HA) services
without interrupting services on its COTS hardware. Huawei FusionSphere OpenStack uses VM
HA and migration technologies to ensure compute reliability.

Technical Solution

Service NEs are deployed as VMs running on COTS hardware. To a large extent, NE availability
relies on these VMs.

VM HA allows all VMs on a faulty host to be automatically reconstructed on an operational
host. When any VM becomes faulty, the HA process is also triggered to recreate this VM on
another host.

When the controller node detects a compute node failure, VMs on the faulty node are not
immediately reconstructed on the destination compute node. FusionSphere OpenStack uses a
service-plane VLAN check to further analyze whether the VMs are truly faulty. This prevents VM
split-brain[1] when service VMs are still running but the compute node's control plane has been
disconnected. VMs are reconstructed on the destination host only after they are confirmed to
be truly faulty.

If a VM's HA process fails, users can manually enable cold or live migration[2] on FusionSphere
OpenStack OM to recreate these VMs on another host. The HA process is also used when an
exception occurs on a physical server and VMs need to be manually migrated from this server
to another before the exception turns into a failure.
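The split-brain guard described above reduces to a two-signal decision: a VM is rebuilt elsewhere only when both the control plane and the service plane confirm it is down. The sketch below is a simplification with assumed function and parameter names, not FusionSphere's actual logic.

```python
# Sketch of the HA rebuild decision: after a compute-node failure is
# detected, a service-plane check confirms the VM is truly faulty
# before it is reconstructed, preventing split-brain.
def should_rebuild(control_plane_ok: bool, service_plane_ok: bool) -> bool:
    """Rebuild a VM on another host only when it is truly faulty."""
    if control_plane_ok:
        return False  # node is reachable; no HA action needed
    if service_plane_ok:
        # Control plane lost but services still running: rebuilding now
        # would create two VMs with the same IP and volume (split-brain).
        return False
    return True  # confirmed faulty on both planes; safe to rebuild

print(should_rebuild(False, True))   # control-plane loss only: do not rebuild
print(should_rebuild(False, False))  # truly faulty: rebuild on another host
```

The key design choice is that a control-plane disconnection alone is never treated as proof of VM failure; the service-plane VLAN check is the deciding signal.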

[1] VM split-brain: a condition where an OpenStack network exception is incorrectly recognized as a VM exception, while the VM
is still processing services. After VM HA is triggered, there will be two VMs using the same IP address and operating the same
remote volume. The two split-brain VMs may cause unexpected issues in the environment.
[2] VM live migration: a process of migrating a VM from one physical server to another without interrupting services. The VM
manager adopts quick memory-data replication and shared storage technologies to ensure that data remains unchanged after
the VM is live migrated to the destination host.

05 Troubleshooting

Cross-Layer Alarm Correlation


If a fault results in multiple reported alarms, is there a way to find out if they are related and how?
How do I quickly find out the root alarm?
Background

Each layer in the NFV architecture has a failure detection mechanism and separately reports
failure alarms. A single point of failure may set off multiple alarms from different layers. For
example, a board failure may be reported by the hardware layer using blade and port failure
alarms, and also by the VNF layer using process, link, and communication failure alarms.
It is difficult and time-consuming for users to identify faults, how they are related, and which of
them can be ignored.

Technical Solution

The correlation between all alarms generated by each layer is summarized, and the U2000 is
used as the single point of access to present the results.

Users can define how the alarms at each layer are presented on the U2000 based on
the correlation between these alarms. An alarm with correlative alarms is marked with
a bell-shaped icon. Users can right-click this alarm (and continue right-clicking for more
details) to display the root alarm or other correlated alarms.
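Correlation rollup of this kind can be pictured as following a chain of "caused by" rules until an alarm with no known cause remains: that is the root alarm. The sketch below is a toy model; the rule contents and function name are illustrative, not the U2000's actual correlation rules.

```python
# Toy correlation pass: roll reported alarms up to their root cause.
# Rule contents are illustrative examples only.
CAUSED_BY = {  # child alarm -> the alarm that caused it
    "VNF link failure": "FusionSphere VM failure",
    "FusionSphere VM failure": "Hardware blade failure",
}

def root_alarm(alarm: str) -> str:
    """Follow the causal chain until an alarm with no known cause remains."""
    while alarm in CAUSED_BY:
        alarm = CAUSED_BY[alarm]
    return alarm

print(root_alarm("VNF link failure"))
# -> Hardware blade failure
```

In this model, every correlative alarm reported by the VNF or virtualization layer resolves to the single hardware root alarm, which is exactly what the bell-shaped icon lets the user drill down to.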


Failure Detection and Recovery


How do I automate O&M and quickly detect and rectify faults based on service KPIs?

Background

A fully cloudified network features pooled hardware, distributed software, and automated O&M.
One of the challenges this network poses to customers is to automate O&M while ensuring
99.999% availability of VNFs running on 99.9% reliable hardware. Additionally, VNFs are
required to quickly detect NFVI failures and restore services.

The figure shows the recovery framework: the U2000/CCE, acting as the network-level
KPI-based quick restore center, receives KPI statistics from the OMU over the RESTful interface
and requests VNFs to restore services through the MML interface. Each CGP OMU functions as
the VNF-level quick restore center. The U2000/CCE can also request the VNFM to quickly restore
VNF services through the Ve-Vnfm interface; however, to avoid conflicts with the OMU, this
VNFM feature has been disabled. If the OMU discovers that any VNF has detected the fault
and initiated recovery operations, it rejects the U2000/CCE's service restore requests.
Technical Solution

Huawei's CloudCore solution defines KPIs and KPI thresholds for VNF services during initial
service configuration and allows the monitored data to be reported to the monitor center in
real time.

When a device at the NFVI layer becomes faulty, the monitor center detects that the decline
of the KPIs has exceeded the predefined threshold. It isolates and restarts the specific VMs
without distributing any services to them. Additionally, the monitor center reports alarms to
the U2000.

The U2000 or Core Network Configuration Express (CCE) functions as the network-level quick
restore center. The U2000/CCE checks for VNF sub-health[1] based on service KPIs and issues
service restore policies to the OMU.

CGP Functions

1 Monitors each VNF's service KPIs in real time and sends them to the CCE.

2 Receives the service recovery requests from the U2000/CCE and initiates or suppresses the
VNF recovery based on site conditions.

CSCF Functions

Registers the KPIs to be monitored and reports each process's KPI statistics to the CCE via the
OMU. The statistics include the traffic, number of successful services, and failure causes.
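The interaction between the KPI threshold check and the OMU's suppression rule can be sketched as follows. The function names, return values, and thresholds are illustrative assumptions, not the product's actual interface.

```python
# Sketch of the KPI-based restore decision: the U2000/CCE flags a VNF
# as sub-healthy when a KPI falls below its threshold, but the OMU
# rejects the restore request if the VNF has already started its own
# recovery. All names and values are illustrative.
def needs_restore(kpi_value: float, threshold: float) -> bool:
    """Sub-health check: a KPI below its threshold indicates degradation."""
    return kpi_value < threshold

def handle_restore_request(kpi_value: float, threshold: float,
                           vnf_recovery_started: bool) -> str:
    if not needs_restore(kpi_value, threshold):
        return "no action"
    if vnf_recovery_started:
        return "rejected"  # OMU suppresses the U2000/CCE request
    return "restore"

# Example: service success rate drops below a 95% threshold
print(handle_restore_request(0.90, 0.95, vnf_recovery_started=False))  # restore
print(handle_restore_request(0.90, 0.95, vnf_recovery_started=True))   # rejected
```

The suppression branch matters because it prevents two recovery mechanisms (network-level and VNF-level) from acting on the same fault at once.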

[1] Sub-health: a condition in which services are interrupted because of hardware or service module failures or service logic
errors. The interruption cannot be detected by a traditional heartbeat check, so no switchover is triggered and services
remain interrupted.


IP Packet Coloring

How do I quickly identify the increasing number of IP network problems after software and hardware
are decoupled?

Background

The cloudified core network is facing more sub-health[1] issues than the legacy ATCA
architecture because the new platform uses COTS hardware and has a third-party virtualization
layer and open-source components. Identifying the root cause is technically demanding.

For example, when IP packets between VMs on different hosts are lost or transmitted with a
long delay, it is difficult to determine whether the problem lies in the platform layer, the NFVI
layer, or possibly the IP network. Accurate maintenance is required to identify and troubleshoot
the problem.

Technical Solution

IP packet coloring uses dedicated bits to color IP packets transmitted at each layer on the
network. The colored packets are analyzed to quickly identify problems.

The IP packets of each service VM are colored, with related statistics collected at the platform
layer. The statistics of the packets are also collected at the NFVI layer. All statistics are then
analyzed to check for any lost packets between the platform and NFVI layers. When this
mechanism is used for IP packet streams between two or more VMs, the cause of the network
sub-health issue can be quickly identified.

Testing Transmission Quality

To check the transmission quality of a link between VNFs, users can start the IP packet coloring
and measurement task in the MML Command – CGP window. The CGP sends the task to the
target VMs, where the PMUs[2] receive it. The CGP also sends the task to FusionSphere
OpenStack OM via the MANO, and the OM forwards the task to the vSwitches.

Coloring IP Packets and Collecting Statistics

The VM sends signaling messages, colors specific IP packets in the message procedure, and
collects statistics. The forwarding and terminating VMs and the forwarding vSwitches identify
the colored packets and collect statistics.

Displaying Statistics

The CGP issues commands to query each VM's and vSwitch's IP packet coloring statistics. The
CGP then determines the network quality and presents it in the MML command window.
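The per-hop statistics comparison can be sketched as follows: the colored-packet counters are read at each hop along the path, and the first hop where the count drops pinpoints the segment that is losing packets. The hop names, counts, and function name below are illustrative assumptions.

```python
# Sketch of the statistics comparison used to localize packet loss.
# Hop names and counter values are illustrative.
def find_loss_segment(counters: list):
    """counters: ordered (hop_name, colored_packet_count) pairs along
    the path. Return the (from_hop, to_hop) segment where colored
    packets disappear, or None if no loss is seen."""
    for (hop_a, n_a), (hop_b, n_b) in zip(counters, counters[1:]):
        if n_b < n_a:
            return (hop_a, hop_b)
    return None

path = [
    ("service VM", 1000),   # platform layer: packets colored here
    ("vSwitch A", 1000),    # NFVI layer: all colored packets seen
    ("vSwitch B", 968),     # NFVI layer: 32 colored packets missing
    ("peer VM", 968),       # platform layer on the far end
]

print(find_loss_segment(path))
# -> ('vSwitch A', 'vSwitch B')
```

Because every hop counts the same colored stream, a mismatch between adjacent counters isolates the faulty segment without any packet capture on the live network.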

[1] Network sub-health: an intermediate state between normal and faulty, with no specific symptom appearing. A sub-healthy
product runs with errors, deteriorated system reliability and processing capability, and a low service success rate. In contrast, a
faulty product can be detected quickly, with recovery measures taken immediately and faulty nodes isolated.
[2] PMU: the Platform Management Unit used to determine the reliability of service processes and initially deploy these
processes. It connects service processes to backend management modules.
