
53-1003608-01

31 October 2014

Data Center Solution-Storage
Gen 5 Fibre Channel Distance Extension
Using ADVA FSP 3000 WDM Platform
Design Guide
© 2014, Brocade Communications Systems, Inc. All Rights Reserved.

Brocade, the B-wing symbol, Brocade Assurance, ADX, AnyIO, DCX, Fabric OS, FastIron, HyperEdge, ICX, MLX, MyBrocade, NetIron,
OpenScript, VCS, VDX, and Vyatta are registered trademarks, and The Effortless Network and the On-Demand Data Center are trademarks
of Brocade Communications Systems, Inc., in the United States and in other countries. Other brands and product names mentioned may be
trademarks of others.
Notice: This document is for informational purposes only and does not set forth any warranty, expressed or implied, concerning any
equipment, equipment feature, or service offered or to be offered by Brocade. Brocade reserves the right to make changes to this document
at any time, without notice, and assumes no responsibility for its use. This informational document describes features that may not be
currently available. Contact a Brocade sales office for information on feature and product availability. Export of technical data contained in
this document may require an export license from the United States government.
The authors and Brocade Communications Systems, Inc. assume no liability or responsibility to any person or entity with respect to the
accuracy of this document or any loss, cost, liability, or damages arising from the information contained herein or the computer programs that
accompany it.
The product described by this document may contain open source software covered by the GNU General Public License or other open
source license agreements. To find out which open source software is included in Brocade products, view the licensing terms applicable to
the open source software, and obtain a copy of the programming source code, please visit http://www.brocade.com/support/oscd.
Contents

Data Center Solution-Storage Gen 5 Fibre Channel Distance Extension Using ADVA FSP
3000 WDM Platform............................................................................................................5
Preface..............................................................................................................5
Overview............................................................................................... 5
Reference Architecture..................................................................................... 7
Business Requirements........................................................................ 8
Special Considerations......................................................................... 9
Overview of Active WDM Systems..................................................................14
WDM System Building Blocks.........................................................................14
The Control Unit.................................................................................. 15
WDM Transponder and Muxponder Modules..................................... 15
The WDM Optical Layer......................................................................17
ADVA FSP 3000 WDM Platform..................................................................... 18
ADVA FSP 3000 Optical Layer........................................................... 19
Protection Modules............................................................................. 21
ADVA FSP 3000 Transponder and Muxponder Module Types ......... 22
WDM Error Forwarding Settings ........................................................ 27
SX and LX Optical Interfaces.............................................................. 31
Buffer Credit Calculation and Settings................................................ 31
Brocade Switch Port Settings .............................................................32
Dual Fabric Over Distance ................................................................. 33
Troubleshooting.................................................................................. 34
Appendix A......................................................................................................35
Latency ...............................................................................................35
Support of Brocade port-based Fibre Channel features..................... 36

Data Center Solution-Storage Gen 5 Fibre Channel Distance
Extension Using ADVA FSP 3000 WDM Platform

Preface..............................................................................................................................5
Reference Architecture..................................................................................................... 7
Overview of Active WDM Systems..................................................................................14
WDM System Building Blocks.........................................................................................14
ADVA FSP 3000 WDM Platform..................................................................................... 18
Appendix A......................................................................................................................35

Preface
Overview
The most common reason for extending a Fibre Channel (FC) storage area network (SAN) over
extended distances is to safeguard critical business data and provide near-continuous access to
applications and services in the event of a localized disaster. Designing a distance extension solution
involves a number of considerations, both business and technical.
From the business perspective, applications and their data need to be classified by how critical they are
for business operation, how often data must be backed up, and how quickly it needs to be recovered in
the event of failure. Two key metrics are the Recovery Point Objective (RPO) and the Recovery Time
Objective (RTO). The RPO is the time period between backup points and describes the acceptable loss
of data after a failure has occurred. For example, if a remote backup occurs every day at midnight and a
site failure occurs at 11 pm, changes to data made within the last 23 hours will be lost. RTO describes
the time to restore the data after the disaster. RTO determines the maximum outage that can occur with
an acceptable impact to the business.
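The RPO arithmetic above (a nightly midnight backup and an 11 pm failure losing 23 hours of changes) can be sketched in a few lines. The timestamps below are hypothetical values chosen only to match that example.

```python
# Worst-case data-loss window for a fixed backup schedule (RPO illustration).
from datetime import datetime

def worst_case_loss(last_backup, failure_time):
    """Data changed between the last backup and the failure is lost."""
    return failure_time - last_backup

last_backup = datetime(2014, 10, 30, 0, 0)    # nightly backup at midnight
failure = datetime(2014, 10, 30, 23, 0)       # site failure at 11 pm
print(worst_case_loss(last_backup, failure))  # 23:00:00 -> 23 hours of changes lost
```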
From a technology perspective, there are several choices for the optical transport network and
configuration options for the FC SAN when it is extended over distance. Applications with strict RTO
and RPO require high-speed synchronous or near-synchronous replication between sites with
application clustering over distance for immediate service recovery. Less critical applications may only
require high-speed replication that could be asynchronous to meet the RPO/RTO metrics. Lower priority
applications that don't need immediate recovery after a failure can be restored from backup tapes from
remote vaults.
Brocade is a leader in Fibre Channel SAN switching, providing a broad product portfolio with unique features that the designer can leverage for cost-effective and efficient SAN distance extension. Inter-Switch Links (ISLs) are used to connect two SAN switches together. By stretching ISLs over extended distances (from a few kilometers to as much as 200 km), data replication traffic can use Fibre Channel as the transport to a remote data center. For this reason, SAN distance extension over ISLs is a common method of transporting replicated storage data for mission-critical application disaster recovery.

Purpose of this Document


This guide contains design guidance for SAN distance extension using Brocade Gen 5 Fibre Channel SAN products, the Brocade Fabric Operating System (FOS), and a wavelength-division multiplexing (WDM) platform from


ADVA, the ADVA FSP 3000. Brocade Gen 5 products have a number of features designed to optimize SAN extension using ISL connections.
Design best practices are included for SAN extension with ISL connections. The topology shown in the reference architecture has been validated in Brocade's Strategic Solution Validation Lab.
This design can be used with array-based replication and/or tape backup systems due to its excellent scalability, high performance, and very low latency at distances up to 200 km.

Audience
This document is intended for disaster recovery planners and SAN architects who are evaluating and
deploying DR solutions that use SAN distance extension for storage data.

Objectives
This design guide is intended to provide guidance and recommendations based on best practices for a
two-site data center disaster recovery solution using Fibre Channel ISL connections over extended
distance.

Restrictions and Limitations


This design guide addresses only SAN distance extension using ISL connections over WDM links on the ADVA FSP 3000 WDM platform. Separate design guides are available for other WDM vendors.

Related Documents
The following documents are valuable resources for the designer. This design is based on the Data
Center Infrastructure Base Reference Architecture which includes SAN building blocks and templates:
Data Center Infrastructure Base Reference Architecture
SAN Blocks
Fibre Channel Core Blocks
Brocade Fabric OS Administrator Guide, v7.3.0
Brocade Fabric OS Command Reference, v7.3.0
Data Center Infrastructure, Storage-Design Guide: SAN Distance Extension Using ISLs

About Brocade
Brocade networking solutions help the world's leading organizations transition smoothly to a world
where applications and information reside anywhere. This vision is realized through the Brocade
One strategy, which is designed to deliver key business benefits such as unmatched simplicity, non-
stop networking, application optimization, and investment protection.
Innovative Ethernet and storage networking solutions for data center, campus, and service provider
networks help reduce complexity and cost while enabling virtualization and cloud computing to
increase business agility.
To help ensure a complete solution, Brocade partners with world-class IT companies and provides
comprehensive education, support, and professional services offerings.
To learn more, visit www.brocade.com


About ADVA
Our company began with a single vision: to transport data, storage, voice and video signals at native
speeds and lowest latency. A lot's changed since that time, but our vision remains the same. Our
products are the building blocks for tomorrow's networks, enabling the transport of increasing amounts
of data across the globe. From the access to the metro core to the long haul, we create intelligent,
software-automated solutions that will provide future generations with networks that can scale to meet
increasing bandwidth demands.
To learn more, visit www.advaoptical.com

Document History

Date Version Description

2014-10-31 1.0 Initial release

Reference Architecture
This design guide is based on the Data Center Infrastructure Base Reference Architecture building
blocks. Shown below is the SAN Template used for this design.

FIGURE 1 SAN Core Template with ADVA Optical Network Building Blocks


This template illustrates a common Core/Edge SAN topology with two edge blocks, an Edge Switch block and an Edge Access Gateway block, and a Core Backbone block. Both edge blocks connect to the Core Backbone block using ISL Trunks, providing automatic frame-based traffic balancing over multiple ISL links for the highest utilization with mixed traffic flows. The Edge Access Gateway block is commonly used with embedded FC switches found in blade servers. Access Gateway mode can also be used with Top-of-Rack SAN switches serving rack-mount servers. When a switch is configured for Access Gateway mode, it does not consume a fabric Domain ID, which simplifies the fabric design. Similar to ISL Trunks, Access Gateway provides Access Gateway Trunks for excellent link utilization and automatic failover should a link in the trunk fail.
Fabric A and Fabric B indicate the use of two physically independent SAN fabrics to connect servers to storage arrays. This is a SAN best practice for high availability and resiliency. Each server and storage array uses dual connections, one going to Fabric A and the other to Fabric B. Servers are configured with IO adaptors and multipath IO device drivers for active/active IO from both adaptors to both fabrics. Should a path in one fabric fail for any reason (HBA, cable, FC switch port, FC switch, array port, configuration error, power outage, etc.), IO continues on the remaining path.
Fabric C and Fabric D are shown in the Core Backbone block. The Brocade DCX Backbone switches support virtual fabrics, allowing ports on the same switch to be allocated to logically isolated fabrics. Again, dual physically independent fabrics are connected to the long-distance optical transport network for high availability and resiliency.
The Core Backbone block uses ISL links over a long-distance optical network, shown by the cloud labeled SAN Distance Extension with ISLs. Server IO at the edge blocks flows to the Core Backbone block and then to the storage arrays. The storage arrays replicate changes to the data blocks to arrays in a remote data center. The replication traffic flows over the ISL links in Fabrics C and D that are attached to the long-distance optical network.
Note that Backbone ICL links connect the core switches in each fabric. This innovative feature, available on Brocade DCX Backbone switches, provides very high-bandwidth trunks between core switches without consuming ports on port cards, leaving all port-card ports available for connecting arrays and for ISL Trunks to edge switches.

References

Data Center Infrastructure, Base Reference Architecture:

SAN Blocks

Fibre Channel Core Blocks

Brocade DCX Backbone Switch Data Sheet

Business Requirements
As more applications drive business value, and the associated data becomes key to competitive
advantage, cost-effective protection of the applications and data from site disasters and extended
outages has become the norm. Modern storage arrays provide synchronous as well as asynchronous
array-to-array replication over extended distances. When the array provides block-level storage for
applications, Fibre Channel is the primary network technology used to connect the storage arrays to
servers, both physical and virtual. For this reason, cost-effective disaster recovery designs leverage
Fibre Channel to transport replicated data between arrays in different data centers over distances
spanning a few to more than 100 kilometers. Therefore, SAN distance extension using Fibre Channel
is an important part of a comprehensive, cost-effective and effective disaster recovery design.


Special Considerations
It is helpful to review the following special considerations, which apply to Fibre Channel SAN distance extension. It is important to understand the Fibre Channel protocol, the optical transport technology, and how they interact.

Optical Fiber Cabling


There are two basic types of optical fiber: multimode fiber (MMF) and single-mode fiber (SMF). Multimode fiber is generally used for short-distance spans and is common for interconnecting SAN equipment within the data center. Single-mode fiber has a smaller core diameter of 9 µm and carries only a single mode of light through the waveguide. It is better at retaining the fidelity of each light pulse over long distances and results in lower attenuation. Single-mode fiber is always used for long-distance extension over optical networks and is often used even within the data center for FICON installations.
There are several types of single-mode fiber, each with different characteristics that should be taken into consideration when deploying a SAN extension solution. Non-Dispersion Shifted Fiber (NDSF) is the
oldest type of fiber and was optimized for wavelengths operating at 1310 nm, but performed poorly in
the 1550 nm range, limiting maximum transmission rate and distance. To address this problem,
Dispersion Shifted Fiber (DSF) was introduced. DSF was optimized for 1550 nm, but introduced
additional problems when deployed in Dense Wavelength Division Multiplexing (DWDM) environments.
The most recent type of single-mode fiber, Non-Zero Dispersion Shifted Fiber (NZ-DSF) addresses the
problems associated with the previous types and is the fiber of choice in new deployments.
As light travels through fiber, the intensity of the signal degrades; this loss is called attenuation. The three main transmission windows, in which loss is minimal, are in the 850, 1310, and 1550 nm ranges. The table below lists common fiber types and the average optical loss incurred by distance for both multimode (MM) and single-mode (SM) fiber.

TABLE 1 Average attenuation of optical fiber due to distance

Fiber Size   Fiber Type   Optical Loss (dB/km)
                          850 nm    1310 nm   1550 nm
9/125        SM           -         0.35      0.2
50/125       MM           3.0       -         -
62.5/125     MM           3.0       -         -

Optical Power Budget, Fiber Loss


A key part of designing SANs over long-distance optical networks involves analyzing fiber loss and optical power budgets. The decibel (dB) is the unit of measure for signal power in a fiber link. The dB loss can be determined by comparing the launch power of a device to the receive power. Launch and receive power are expressed in decibel-milliwatt (dBm) units, which express the ratio of the measured signal power in milliwatts (mW) to 1 mW.
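The dBm definition above translates directly into a conversion pair. The sketch below is for illustration only:

```python
# dBm expresses power as a ratio to 1 mW: dBm = 10 * log10(P_mW / 1 mW).
import math

def mw_to_dbm(p_mw):
    return 10 * math.log10(p_mw / 1.0)

def dbm_to_mw(p_dbm):
    return 10 ** (p_dbm / 10.0)

print(mw_to_dbm(1.0))             # 0.0  (1 mW is exactly 0 dBm)
print(round(dbm_to_mw(-3.0), 3))  # 0.501 (a -3 dB drop roughly halves the power)
```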

References

Definition of dB, Wikipedia

Definition of dBm, Wikipedia

Definition of Optical Power Budget, Wikipedia


The optical power budget identifies how much attenuation can occur across a fiber span while still maintaining sufficient output power for the receiver. It is determined by finding the difference between the worst-case launch power and the worst-case receiver sensitivity. Transceiver and other optical equipment vendors typically provide these specifications for their equipment. A loss value of 0.5 dB can be used to approximate the attenuation caused by each connector/patch panel. It is useful to subtract an additional 2 dB for safety margin.

Optical Power Budget = (Worst-Case Launch Power) − (Worst-Case Receiver Sensitivity)
Signal loss is the total sum of all losses due to attenuation across the fiber span. This value should be
within the power budget to maintain a valid connection between devices. To calculate the maximum
signal loss across an existing fiber segment, use the following equation:
Signal Loss = (Fiber Attenuation/km * Distance in km) + (Connector Attenuation) + (Safety Margin)
The previous table showed average optical loss characteristics of various fiber types that can be used
in this equation, although loss may vary depending on fiber type and quality. It is always better to
measure the actual optical loss of the fiber with an optical power meter.
Some receivers have a maximum input power that should not be exceeded. If the optical signal is stronger than this maximum, the receiver may become oversaturated and be unable to decode the signal, causing link errors or even total failure of the connection. Fiber attenuators can be used to resolve this problem. This is often necessary when connecting FC switches to DWDM equipment using single-mode FC transceivers.
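The power budget and signal loss equations above can be combined into a quick feasibility check. The sketch below uses placeholder figures for launch power, receiver sensitivity, and fiber attenuation; a real design should use the vendor's specifications and a measured fiber loss.

```python
# Link feasibility check based on the power budget and signal loss equations.
# All numeric figures below are illustrative placeholders, not vendor specs.

def optical_power_budget(launch_dbm, sensitivity_dbm):
    """Worst-case launch power minus worst-case receiver sensitivity."""
    return launch_dbm - sensitivity_dbm

def signal_loss(atten_db_per_km, distance_km, connectors, safety_db=2.0):
    """Fiber loss plus ~0.5 dB per connector plus a safety margin."""
    return atten_db_per_km * distance_km + 0.5 * connectors + safety_db

budget = optical_power_budget(launch_dbm=-1.0, sensitivity_dbm=-20.0)  # 19.0 dB
loss = signal_loss(atten_db_per_km=0.2, distance_km=80, connectors=4)  # 20.0 dB
print(f"budget={budget} dB, loss={loss} dB, link viable: {loss <= budget}")
```

The link is viable only when the total signal loss stays within the optical power budget; in the placeholder case above it does not, so the span would need lower-loss fiber, fewer connectors, or amplification.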

FC Transceivers for Extended Distances


Optical Small Form-factor Pluggable (SFP) transceivers are available in short- and long-wavelength types. Short-wavelength transceivers transmit at 850 nm and are used with 50 or 62.5 µm multimode fiber cabling. For fiber spans greater than several hundred meters without regeneration, use long-wavelength transceivers with 9 µm single-mode fiber. Long-wavelength SFP transceivers typically operate in the 1310 or 1550 nm range.
Optical transceivers often provide monitoring capabilities that can be viewed through FC switch
management tools, allowing some level of diagnostics of the actual optical transceiver itself.

NOTE
Brocade 8 and 16 Gbps products enforce the use of Brocade-branded optics, plus a restricted list of specialized third-party options to meet requirements for extended-distance or CWDM/DWDM optics. Other Brocade products do not enforce optics rules, but only qualified or certified optics should be used, as shown in the latest Brocade Compatibility Matrix, Transceivers Quick Reference section (see the References below).

References

Brocade Compatibility Matrix: Network Solutions Section

FC Protocol over Extended Distance Considerations

Flow Control

Brocade switches support two methods of flow control over an ISL:


Virtual Channel (VC_RDY): VC_RDY is the default method and uses multiple lanes, or channels, each with its own buffer credit allocation, to prioritize traffic types and prevent head-of-line blocking. VC_RDY flow control differentiates traffic across an ISL. It serves two main purposes:
- To differentiate fabric-internal traffic from end-to-end device traffic.
- To differentiate different data flows of end-to-end device traffic to avoid head-of-line blocking.
Fabric-internal traffic is generated by switches that communicate with each other to exchange state information (such as link state information for routing and device information for the Name Service). This type of traffic is given a higher priority so that switches can distribute the most up-to-date information across the fabric even under heavy device traffic. Additionally, multiple IOs are multiplexed over a single ISL by assigning different VCs to different IOs and giving them the same priority (unless QoS is enabled). Each IO gets a fair share of the bandwidth, so that a large IO will not consume the whole bandwidth and starve a small IO, thus balancing the performance of different devices communicating across the ISL.

Receiver Ready (R_RDY): R_RDY is defined in the ANSI T11 standards and uses a single lane, or channel, for all frame types.

NOTE
When Brocade switches are configured to use R_RDY flow control, other mechanisms are used to
enable QoS and prevent head-of-line blocking.
When connecting switches across dark fiber or wavelength-division multiplexing (WDM) optical links, VC_RDY is the preferred method, but some distance extension devices require that the E_Port use R_RDY. To configure R_RDY flow control on Brocade switches, use the portCfgISLMode command.

References

Brocade FOS Administrator Guide, v7.3.0: Inter-switch Links (ISL)

Brocade FOS Command Reference, v7.0.1: portCfgISLMode command

Quality of Service

Starting with FOS release 6.0, Brocade Virtual Channel technology can be used to prioritize traffic
between initiator/target pairs by mapping traffic flows to High, Medium, or Low priority queues. QoS
support with Virtual Channels is enabled with the Adaptive Networking license. QoS is supported over
long-distance ISLs that utilize up to 255 buffers. When an E_Port is allocated more than 255 buffers, the
remaining buffers are allocated to the medium priority queue.

References

Brocade FOS Administrator Guide, v7.3.0: Optimizing Fabric Behavior

Brocade FOS Administrator Guide, v7.3.0: QoS SID/DID Traffic Prioritization

Brocade FOS Administrator Guide, v7.3.0: QoS Zones

Brocade FOS Command Reference, v7.3.0: portCfgQoS command


Buffer Allocation

Before considering FC-level buffer allocation, note that the availability of sufficient FC-level buffering does not by itself guarantee full bandwidth utilization. Other limitations, particularly at the SCSI level of the storage initiator and/or target, are often the limiting factor. The I/O size, I/Os per second (IOPS) limit, and concurrent or outstanding I/O capability at the SCSI level of the initiators and targets can be, and often are, gating factors.
While exact calculations are possible, a simple rule of thumb is often used to calculate the BB credit requirement of a given link. Based on the speed of light in an optical cable, a full-size FC frame spans approximately 4 km at 1 Gbps, 2 km at 2 Gbps, 1 km at 4 Gbps, 500 m at 8 Gbps, 400 m at 10 Gbps, or 200 m at 16 Gbps. The rule of thumb is this: 1 credit is required for every kilometer at 2 Gbps; therefore, half a credit is required for every kilometer at 1 Gbps, and 2 credits are required for every kilometer at 4 Gbps. With this simple set of guidelines, it is easy to estimate the number of credits required per link to maintain line speed.
Having insufficient BB credits will not cause link failure, but it will reduce the maximum throughput. For example, a link with 1 ms of latency running at 4 Gbps with only 100 BB credits can achieve a maximum throughput of approximately 2 Gbps.
Using the LS option, the portCfgLongDistance command can be used to allocate the required buffers
for the link distance.
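The rule of thumb above (one credit per kilometer at 2 Gbps, scaled linearly with link speed) can be sketched as a small estimator. This reflects only the rule-of-thumb estimate, not the FOS allocation algorithm; actual buffer allocation is done with portCfgLongDistance, and real links should include headroom.

```python
# BB credit estimate: 1 credit/km at 2 Gbps, scaled linearly with link speed.

def required_credits(distance_km, speed_gbps):
    return distance_km * (speed_gbps / 2.0)

def max_throughput_gbps(credits, distance_km, speed_gbps):
    """Throughput is capped by the fraction of required credits available."""
    needed = required_credits(distance_km, speed_gbps)
    return min(1.0, credits / needed) * speed_gbps

print(required_credits(100, 4))          # 200.0 credits for 100 km at 4 Gbps
print(max_throughput_gbps(100, 100, 4))  # 2.0 Gbps with only half the credits
```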

References

Brocade FOS Administrator Guide, v7.3.0: Buffer Credit Management

Brocade FOS Administrator Guide, v7.3.0: Long Distance Link Modes

Brocade FOS Command Reference, v7.3.0: portCfgLongDistance command

Frame-Based Trunking

Long-distance links using VC_RDY flow control can be part of an ISL trunk group if they are configured for the same speed and distance and the distances of all links are nearly equal. Within a frame-based trunk, the maximum allowed difference between the shortest and longest links is approximately 400 meters.
When R_RDY flow control is used, frame-based trunking is disabled. The exchange-based routing policy, used to interleave FC exchanges across multiple ISLs, can be used with either type of flow control.
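The trunking constraints above can be expressed as a simple eligibility check. The sketch reflects only the two conditions stated here (equal speed, length spread within roughly 400 m); it is illustrative, not a FOS algorithm.

```python
# Eligibility check for one frame-based trunk group: all links at the same
# speed, and the length spread within ~400 m (0.4 km).

def can_trunk(links, max_spread_km=0.4):
    """links: list of (length_km, speed_gbps) tuples."""
    speeds = {speed for _, speed in links}
    lengths = [length for length, _ in links]
    return len(speeds) == 1 and (max(lengths) - min(lengths)) <= max_spread_km

print(can_trunk([(50.0, 16), (50.3, 16)]))  # True: same speed, 300 m spread
print(can_trunk([(50.0, 16), (50.5, 16)]))  # False: 500 m spread
```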

References

Brocade FOS Administrator Guide, v7.3.0: Managing Trunking Connections, ISL Trunking over Long-Distance Fabrics

Brocade FOS Administrator Guide, v7.3.0: Routing Traffic, Routing Policies, Exchange Based Routing

Dynamic Path Selection

Dynamic Path Selection (DPS), also called Exchange-Based Routing, is a feature first available on 4 Gbps and later products. DPS applies at a fabric level and has no restrictions on co-location of ports on a given switch or on their taking the same route through the fabric. However, as with some other configuration options, DPS is not supported in certain limited cases, specifically FICON and HP EVA/CA. Where frame-based ISL Trunks cannot be used, DPS is a good alternative for high availability with multiple ISL connections.


References

Brocade FOS Administrator Guide, v7.3.0: Routing Traffic, Routing Policies, Exchange Based Routing

D-port Advanced Diagnostics for Brocade 16G SFP+

A Brocade D_Port is used to diagnose optics and cables. It does not carry any FC control or data traffic and is supported on E_Ports, and also on F_Ports if a Brocade 1860 adaptor is used in the server. When a port is in D_Port mode, the following diagnostic tests can be conducted (refer to the diagram below; C3 ASIC refers to 16 Gbps products):
- Electrical loopback
- Optical loopback
- Link distance measurement
- Link traffic test

FIGURE 2 D_Port Diagnostic Test Paths

References

Brocade FOS Command Reference, v7.3.0: portCfgDport command

Brocade FOS Command Reference, v7.3.0: portDPortTest command

In-flight Encryption and Compression over 16 Gbps ISLs

With 16 Gbps products such as the DCX 8510 Backbone switch, in-flight encryption and compression can be applied at the egress E_Port of an ISL between two Brocade switches. The E_Port on the receiving side of the ISL decrypts and decompresses the traffic. A maximum of two ports per ASIC can have in-flight encryption and compression enabled.

References

Brocade FOS Administrator Guide, v7.3.0: In-flight Encryption & Compression

Forward Error Correction (FEC) on 16 Gbps Ports

FEC can recover bit errors on 10 Gbps and 16 Gbps ports for both FC frames and FC primitives. FEC on 16 Gbps ports has the following capabilities:
- Corrects up to 11 error bits in every 2112-bit transmission block
- Enhances the reliability of transmission, and thus performance


- Enabled by default on back-end links for 16 Gbps blades in 8510-8/8510-4 chassis
- Supported on E_Ports/EX_Ports between 16 Gbps ports at either 16 Gbps or 10 Gbps link speed

References

Brocade FOS Administrator Guide, v7.3.0: Performing Advanced Configuration Tasks, Enabling Forward Error
Correction

Overview of Active WDM Systems


Networks based on Wavelength Division Multiplexing (WDM) are an integral part of the picture when it comes to connecting geographically dispersed data centers. In today's networks, WDM devices and optical services based on WDM technology are becoming more and more of a commodity. As a result, most manufacturers and service providers are able to transport all kinds of signals over distance. Modern WDM systems seem to be relatively interchangeable. In general, this statement is true, but looking more closely at specific applications, there are small but important differences among the available platforms.
This chapter takes a closer look at WDM system technology to provide a better understanding of the ADVA FSP 3000 platform. To follow this chapter, you should be familiar with the basics of WDM and adjacent technologies such as TDM.
For this design guide it is not necessary to differentiate between CWDM and DWDM, since the technology is the same from a transmission and data processing point of view. The abbreviation WDM and the term WDM system are used synonymously.

WDM System Building Blocks


Nearly all active WDM systems consist of standard building blocks. WDM systems usually use a modular approach, in which modules are plugged into one or more chassis. Typically, those modules are:
- One or more control units for external communication, e.g., provisioning
- Optical multiplexer (MUX) and demultiplexer (DEMUX) modules
- Transponder and/or muxponder modules
- Optical amplifiers, ROADMs, dispersion compensation, and other specialized modules
The figure below shows the generic architecture of a WDM system and its building blocks.


FIGURE 3 Architecture of a WDM System

The Control Unit


The control unit is a module that enables external and internal communication and is mainly used for provisioning and OAM (operations, administration, and maintenance). Communication is established using Ethernet and/or serial interfaces. You can connect to the WDM equipment using protocols such as HTTP, Telnet, SSH, and SNMP.

WDM Transponder and Muxponder Modules


The main parts of active WDM systems are the transponder and muxponder modules. These are used
to connect end (client) devices with the optical layer of the WDM system. Client data rates range from
several Mbit/s up to 100 Gbit/s. WDM network data rates are usually similar, but are typically
between 10 and 100 Gbit/s for modern systems.

WDM Transponders
A transponder converts the incoming signal from the end or client device to a WDM wavelength or
lambda. Transponders are available with single or multiple lanes per module. A quadruple transponder,
for example, has four client and four WDM network ports per module; client and network ports typically
come in equal numbers. The building blocks of a datacenter-optimized transponder design are shown in
the figure below. The transponder takes the weak grey signal coming from a client device (e.g. a Fibre
Channel switch), regenerates the signal and launches it towards the WDM optical stage using a
high-power WDM interface. These interfaces can be built in or pluggable. Such transponders usually
have a maximum reach of 200 kilometers.


FIGURE 4 Datacenter optimized transponder design (quadruple transponder)

A typical transponder for Telco/ISP use is shown below. The design is similar to the datacenter-
optimized transponder, but the WDM output signal is standardized for use in a Telco operator's
network. Since the mapping procedure into standardized protocols is mandatory, such a device is
much more complex. Therefore, latency is much higher and MTBF values are lower. Additionally,
power consumption is much higher compared to a simple design. The maximum reach for this kind of
module is around 200 kilometers without optical-electrical-optical (OEO) conversion.

FIGURE 5 Telco/ISP transponder design (twin transponder)

WDM Muxponders
A muxponder is a hybrid between a TDM (Time Division Multiplexing) multiplexer and a WDM
interface. Thus a muxponder has several client interfaces (usually two to ten) and typically one
network interface as shown in the figure below.

FIGURE 6 Datacenter Optimized Muxponder Design (5x Muxponder with AES Encryption)

The TDM algorithm electrically aggregates the incoming client signals into a sum signal that is fed to
the WDM network interface. Like transponders, muxponders are available in two different varieties:
a Telco/ISP design and a more lightweight datacenter-optimized design.


A datacenter-optimized muxponder usually consists of a unique design where all parameters can be
controlled by the manufacturer. The use of proprietary techniques like high-speed framing allows
support of special protocols like InfiniBand or Fibre Channel with the vendor's various protocol
extensions (trunking, VSAN, etc.).
Highly standardized techniques are used by Telco/ISP muxponders in order to fit seamlessly into
current telecommunications networks as shown in the figure below.

FIGURE 7 Telco/ISP Compliant Muxponder Design (10x Muxponder with Protected East/West Network
Interface)

With standardized mapping and framing procedures like GFP-T or ODU-based mapping, optimization
towards datacenter-focused protocols is quite limited. Thus only fully standards-conformant protocols
might be accepted on the client ports. This can lead to feature loss or unexpected issues if you try to
run, for example, feature-rich Fibre Channel ISL connections over such card types. The benefit of such
a design is the support of a much wider range of client protocols like OC3/12/48. Those modules are
also designed to span thousands of kilometers and are specifically built to interact with 3rd-party telco
devices natively.

Recommendation for Transponder/Muxponders


Datacenter-optimized transponders and muxponders are the first choice for connecting geographically
dispersed datacenters over distance, since they are low in latency and high in MTBF. This is especially
true for Fibre Channel and other latency-sensitive protocols.
But if you have to use a Telco's network, or if you need a fully standards-conformant network interface
like SDH, SONET or OTH, you should use an ISP-compliant WDM design. Please keep in mind
that this could limit the features and capabilities of your Fibre Channel network.

The WDM Optical Layer

Optical Multiplexer/De-multiplexer
Optical multiplexers and de-multiplexers are passive optical modules used for combining (multiplexing)
and separating (de-multiplexing) optical WDM signals into or out of an optical fiber. Port counts of
modern optical systems typically range from 40 to 120. Those filters are also known as Optical Add/Drop
Multiplexers (OADM). Optical filters operate within a so-called optical grid. This grid and the
wavelengths used within it are standardized by the ITU. Typically, WDM systems use a 100GHz or a
50GHz grid, where the WDM wavelengths are separated by the grid value. This is also known as
channel spacing.
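To illustrate the channel-spacing arithmetic, the sketch below derives grid wavelengths from the ITU anchor frequency of 193.1 THz. The anchor value and the simple channel numbering are assumptions based on the ITU-T G.694.1 grid, not this guide's own channel plan:

```python
# Sketch: channel wavelengths on an ITU DWDM grid.
# Assumes the ITU-T G.694.1 anchor of 193.1 THz; channel numbering
# here is illustrative, not ADVA's channel plan.
C = 299_792_458  # speed of light in vacuum, m/s

def channel_wavelength_nm(n, spacing_ghz=100):
    """Wavelength of grid channel n (n = 0 at the 193.1 THz anchor)."""
    freq_hz = 193.1e12 + n * spacing_ghz * 1e9
    return C / freq_hz * 1e9

# On a 100GHz grid, adjacent channels sit roughly 0.8 nm apart
# around 1552.5 nm in the C band.
print(round(channel_wavelength_nm(0), 2))  # anchor channel
print(round(channel_wavelength_nm(1), 2))  # next 100GHz channel
```

Halving the spacing to 50GHz, as the interleaver-based designs later in this chapter do, simply doubles the number of channels in the same band.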


Amplifiers
Optical amplifiers such as EDFAs (Erbium-Doped Fiber Amplifiers) or Raman amplifiers are used to
extend the optical WDM signal over longer spans. The maximum single-span distance you can
achieve is around 200 kilometers. With amplifier chains, you can span several thousands of
kilometers.

Dispersion Compensation
Dispersion is a fiber-based physical effect that spreads the optical pulse while it travels down the
optical fiber. This effect worsens the signal quality and thus limits the maximum distance you can
achieve with your optical system. To counter this effect, dispersion compensation modules are
typically used. Those are available in two different flavors:
• Dispersion compensating fiber (DCF) on a reel
• Fiber Bragg gratings (FBG)
The difference between those technologies is that FBGs perform the compensation within tens of
nanoseconds, whereas the fiber-based modules add several tens of microseconds to the overall link
delay. For example, dispersion compensation for a 100-kilometer link adds about 60 µs with DCF and
45 ns with an FBG-based module.
Therefore, FBG-based compensation is preferable for latency-sensitive protocols. For example, IBM
stipulates that DCG is required for dispersion compensation when implementing WDM connectivity in
IBM mainframe environments.
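The latency difference can be put in perspective against the fiber's own propagation delay. The sketch below assumes a typical group index of about 1.468 for standard single-mode fiber (an assumption, not a figure from this guide) and uses the 100 km example numbers from the text:

```python
# Rough sketch of link delay contributions for a 100 km span.
# Group index ~1.468 for G.652 fiber is an assumed typical value;
# the DCF/FBG penalties are the 60 us / 45 ns figures from the text.
C_KM_PER_S = 299_792.458
GROUP_INDEX = 1.468

def fiber_delay_us(km):
    """One-way propagation delay in microseconds for km of fiber."""
    return km / (C_KM_PER_S / GROUP_INDEX) * 1e6

link = fiber_delay_us(100)  # roughly 490 us one way for 100 km
dcf = 60.0                  # us added by DCF compensation
fbg = 0.045                 # us added by FBG compensation (45 ns)
print(f"fiber {link:.0f} us, +DCF {link + dcf:.0f} us, +FBG {link + fbg:.2f} us")
# The DCF penalty is the delay equivalent of roughly a dozen
# extra kilometers of fiber; the FBG penalty is negligible.
print(round(dcf / fiber_delay_us(1)))
```

This is why the guide treats the WDM system's added latency as a "virtual fiber distance" when buffer credits are calculated later in this chapter.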

ADVA FSP 3000 WDM Platform


The ADVA FSP 3000 is a scalable and fully modular active Wavelength Division Multiplexing (WDM)
platform specifically designed for large enterprises and service providers requiring a flexible and
cost-effective solution that will multiplex, transport and protect high-speed data, storage, voice and
video applications.
In combination with the unique optical layer design, the FSP 3000 offers complete deployment
flexibility. Up to 120 wavelengths per fiber pair and fully integrated transponder and muxponder
options ranging from 8 Mbit/s to 100 Gbit/s optimize the spectral efficiency in the transmission fiber,
eliminate fiber exhaust and reduce power and space consumption.


FIGURE 8 FSP 3000 7HU shelf

For the ADVA FSP 3000, a special set of dedicated enterprise modules is available to enable
Brocade's customers to cost-effectively transfer all kinds of storage and server-related traffic up to
hundreds of kilometers between their data centers. Additionally, all datacenter-optimized modules are
designed and tested to fully support all ISL enhancements implemented in Brocade's SAN switch
technology (e.g. frame-based trunking, credit buffer recovery, FEC).
The transparent high-speed transmission technology of the ADVA FSP 3000 eliminates the need for
costly gateways or routers and provides the lowest latency in the industry. Customers benefit from a
more reliable disaster recovery plan at much lower cost and higher data rates when using the ADVA
FSP 3000 between data centers over fiber.

ADVA FSP 3000 Optical Layer

Optical Filter Modules


Optical filters for WDM are either fixed or variable. The variable filters are called ROADMs
(Reconfigurable Optical Add/Drop Multiplexers) and are active components which require power to
operate. They are mainly used in ring and/or fully meshed optical networks which require remote
configuration.
Fixed optical filters (also known as FOADMs) are purely passive and are used mainly for Point-to-Point
(PtP) networks. Datacenter interconnects are usually built this way. Today, FOADMs come with WDM
port counts ranging from 40 up to 100 WDM channels per entity.
In this chapter, only a few modules are shown and explained. The correct choice for the optical layer
design depends on many parameters such as distance, fiber types, maximum achievable bandwidth,
speed per WDM channel, etc. Therefore, careful and proper planning is mandatory. This is commonly done by


the WDM vendor itself or by specially trained consultants. Such planning will not be discussed within
this guide.
A typical optical setup for an 80-WDM-channel system built with two 40-channel FOADMs, an optical
interleaver and amplifier units is shown below.

FIGURE 9 Optical filter and amplifiers for an 80 WDM channel system using passive optical filters and
amplifiers

Please note that all connections are bi-directional. Only one fiber is shown for clarity. The
following FSP 3000 components are used for this design:

40CSM/2HU: A 40-channel DWDM mux/demux. One with odd and one with even channels. Channel spacing is
100GHz.

ILM50: A 2-port interleaver module combining the two 40CSMs. The ILM combines two 100GHz grids
into a 50GHz channel grid sum signal.

EDFA-C-D20: A double-stage erbium-doped fiber amplifier with up to 20dBm output power.

This optical layer configuration is only a small portion of the ADVA FSP 3000 optical layer portfolio. At
any rate, this would be a typical setup for a high-bandwidth Point-to-Point network layout.

Optical Amplifiers
The ADVA FSP 3000 comes with a variety of optical amplifiers suited for different use cases within the
optical networking space. Relevant for datacenter networks are the following types:


EDFA-C-D20: A double-stage erbium-doped fiber amplifier with up to 20dBm output power. This module can be
used in booster and/or preamplifier configurations. A dispersion compensating module can be
inserted between the two stages.

EDFA-C-S20: A single-stage erbium-doped fiber amplifier with up to 20dBm output power. This module would be
used in booster configurations.

RAMAN types: Raman amplifiers can be used in addition to classic EDFAs for long single spans of up to 200
kilometers without the need for an amplification hut in between.

Dispersion Compensation Modules


There are different dispersion compensating modules available for the ADVA FSP 3000, which are either
DCG or fiber based. We'll focus on the DCG-based modules due to the latency sensitivity of the SAN
protocols. Mainly, there are two different types available, compensating different fiber distances. The
distance statement which comes with the module's name is only valid for G.652 standard single-mode
fiber. For other fiber types, for example G.655 or TrueWave, a detailed dispersion calculation of
the whole optical system is mandatory.

DCG-M: DCG-based modules with less than 50ns latency for a 100GHz optical grid. Available for 60, 80 and
100km dispersion load according to a G.652 fiber.

DCG50-M: DCG-based modules with less than 50ns latency for a 50GHz optical grid. Available for
20, 40, 60, 80 and 100km dispersion load according to a G.652 fiber.

Protection Modules
A fiber break can cause a complete outage for fiber optical networks. Since the fiber is subject to
external events out of the control of the data center staff, it's important to design the WDM system to
survive fiber failures.
There are several different WDM protection options available. Almost all of these techniques are based
on a Telco operator's needs and requirements rather than a data center operator's. The requirements of
Telcos/ISPs are usually different from the requirements for protecting WDM used for datacenter
interconnect. Therefore, the protection options available might not be as good a fit.
Most of the time, datacenter equipment such as Fibre Channel switches requires Client Layer Protection
(CLP) that follows the best practice of using dual fabrics for high-availability SANs. Two independent
WDM systems transporting data over two independent fiber ducts and routes is a proven design choice
when interconnecting datacenters, as shown in the figures below.

FIGURE 10 Dual WDM Optical Transport Network


FIGURE 11 Remote Switch Module (RSM)

For additional security within these networks, fiber switch modules can be used on top of a CLP-based
protection. Typically, the Remote Switch Module (RSM) or the Versatile Switch Module (VSM)
would be used for the ADVA FSP 3000. With this combination, double failure protection is achieved,
and even if a fiber route has an outage, all services can still operate at one hundred percent capacity.
Those modules consist of an opto-mechanical switch (similar to a relay) which switches the light path
from a defective long-distance fiber to a backup fiber. This is done automatically, and the client devices
only see a short loss of sync or loss of light, depending on the settings of the WDM system. The
drawback of such a configuration is that four long-distance fiber paths between the two sites are
mandatory.
Please note that other options are available as well; e.g. multisite protection or optical restoration
might be the option for your connectivity needs. Protecting optical networks, and especially the
influence of optical protection switching on the end devices (e.g., storage, servers), must be analyzed
before choosing the optical protection scheme.

ADVA FSP 3000 Transponder and Muxponder Module Types


In order to serve a wide range of applications and customer needs, there is a variety of modules with
different functionality and dedicated market segments available. To get a brief overview, the xPonders
are divided into three classifications:
• Core modules: xPonders dedicated to the Telco/ISP market
• Access modules: xPonders, cost optimized and for metro distances
• Enterprise modules: xPonders optimized for datacenter connectivity solutions
The portfolio of datacenter-optimized modules is the full set of Enterprise modules and a subset of
the Access modules. A more detailed overview is shown below.


FIGURE 12 FSP 3000 xPonder Module Types and Naming Convention

All ADVA FSP 3000 modules follow a similar naming convention. The first letter is an indicator that
separates the modules into muxponders and transponders. A W translates to transponder, while a T
stands for muxponder.
The number in front of the triple-letter name shows the number of client ports if higher than one. For
example, a 4WCE would translate to a 4-client-port transponder for enterprise applications; a 2TCC
would be a 2-client-port muxponder intended for Telco/ISP use.
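As an entirely hypothetical helper, the naming rule just described can be sketched in a few lines. Real part numbers such as 4WCE-PCN-16G carry additional fields that this sketch does not attempt to decode:

```python
import re

# Hypothetical parser for the FSP 3000 xPonder naming convention:
# leading digit(s) = client port count (omitted means one),
# W = transponder, T = muxponder. Illustrative only.
def classify(module):
    m = re.match(r"(\d*)([WT])", module)
    if not m:
        return None
    count = int(m.group(1)) if m.group(1) else 1
    kind = "transponder" if m.group(2) == "W" else "muxponder"
    return count, kind

print(classify("4WCE"))  # (4, 'transponder')
print(classify("2TCC"))  # (2, 'muxponder')
print(classify("WCE"))   # (1, 'transponder')
```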
Now let's focus on the datacenter-optimized modules, which are shown in the following table:

FIGURE 13 FSP 3000 Portfolio of Datacenter Optimized xPonder Modules

The technology of the six datacenter-optimized modules is explained in more detail in the next
sections.

ADVA FSP 3000 4TCA Module


The 4TCA module (4TCA-PCN-4GU+4G), shown below, is an access-type muxponder with a
maximum network data rate of 4G. It is equipped with four client ports and two network ports, which
can be used in parallel or for network protection options.


For datacenter applications, usually both network ports are used in parallel to achieve a maximum
bandwidth of 8G per module. Thus 2x 4G, 4x 2G or any combination of 1G, 2G and 4G services is
possible.

FIGURE 14 FSP 3000 4TCA Muxponder Module

Specification of the 4TCA-PCN-4GU+4G:

• Services: GbE; 1, 2, 4G FC; ISC-3 peer (IBM mainframe signal)
• Network and client ports SFP based
• Built-in protection option available
• Module is one slot wide
• 80 DWDM channels or 16 CWDM channels available

ADVA FSP 3000 2WCA Module


The 2WCA module (2WCA-PCN-10G), shown below, is an access-type dual transponder with a
maximum network data rate of 10G. It is equipped with two client and two network ports which can be
used in parallel or for protection options.
For datacenter applications, usually both network ports are used in parallel to achieve a maximum
bandwidth of 20G per module. Thus 2x 4G, 2x 8G, 2x 10G or any combination of 4G, 8G and 10G
services is possible.

FIGURE 15 ADVA FSP 3000 2WCA dual transponder module

Specification of the 2WCA-PCN-10G:

• Services: 4, 8, 10G FC; 10GbE; STM-64/OC-192; OTU2
• Network and client ports XFP based
• Built-in protection option available
• Module is one slot wide
• 80 DWDM channels or 8 CWDM channels available


ADVA FSP 3000 4WCE Module


The 4WCE module (4WCE-PCN-16G), shown below, is an enterprise-type quadruple transponder with
a maximum network data rate of 16G. It is equipped with four client and four network ports. For
datacenter applications, usually all ports are used in parallel to achieve a maximum bandwidth of 64G
per module. Thus 4x 8G, 4x 10G, 4x 16G or any combination of 8G, 10G and 16G services is
possible.

FIGURE 16 ADVA FSP 3000 4WCE quad transponder module

Specification of the 4WCE-PCN-16G:

• Services: 8, 16G FC; 10GbE
• Network ports XFP based and client ports SFP+ based
• Module is one slot wide
• 80 DWDM channels available

ADVA FSP 3000 5TCE Module Family


The 5TCE module, shown below, is available in several different HW variants. The two main variants are:
• Standard muxponder with transponder functionality (5TCE-PCTN-10GU+10G)
• AES 256 encryption muxponder with transponder functionality (5TCE-PCTN-10GU+AES10G)
This module is an enterprise-type muxponder with a maximum network data rate of 10G. It is equipped
with five client ports and one network port. The module can be used as a muxponder for client services
smaller than 10G, like 3x 4G, 5x 2G, etc. or any combination. A low-latency transponder mode is also
available for 10G-based services.

FIGURE 17 ADVA FSP 3000 5TCE muxponder module with AES 256 encryption

Specification of the 5TCE-PCTN-10GU+(AES)10G:

• Services: 1, 2, 4, 8, 10G FC; 10GbE; 5, 10G InfiniBand; ISC-3 (IBM mainframe)
• Client ports SFP(+) based; network port tunable, built in


• Module is one slot wide
• Active roundtrip latency measurement
• GFEC available
• 120 DWDM channels available

ADVA FSP 3000 10TCE Module


The 10TCE module (10TCE-PCN-10G+100G), shown below, is an enterprise-type muxponder with a
maximum network data rate of 100G. It is equipped with ten client ports and four network ports. The
network ports are combined and cannot be used independently. Each network port runs at 28G. For
datacenter applications, usually all ports are used in order to achieve the maximum bandwidth of
100G per module. All client port combinations of 8G/10G services are possible.

FIGURE 18 ADVA FSP 3000 10TCE 100G Muxponder Module

Specification of the 10TCE-PCN-10G+100G:

• Services: 8G FC; 10GbE; STM-64/OC-192
• Client ports SFP(+) based; network ports CFP based
• Module is two slots wide
• Enhanced FEC available

ADVA FSP 3000 WCE-100G Module


The WCE-100G module (WCE-PCN-100G), shown below, is an enterprise-type transponder with a
maximum network data rate of 100G. It is equipped with one client port and four network ports. The
network ports are combined and cannot be used independently. Each network port runs at 28G.

FIGURE 19 ADVA FSP 3000 WCE 100G transponder module


Specification of the WCE-PCN-100G:

• Services: 100GbE LR4, LR10, SR10; OTU4
• Client and network ports CFP based
• Module is three slots wide
• EFEC available
The six modules presented above are specifically designed to support datacenter applications
and protocols. All modules capable of transporting Fibre Channel are tested and qualified by various
vendors (e.g., EMC, IBM, Brocade, HP) in order to support the necessary functionalities.
In general, all of these modules can cover unrepeated distances of 200 to 500 kilometers maximum.
Unrepeated links are links where only pure optical amplification is used. On a repeated link, by
contrast, a repeater takes the optical signal and converts it to the electrical domain, where the signal
gets regenerated before being converted back to an optical signal. This is also known as OEO
conversion.
Single-span links are usually possible up to 200 kilometers on G.652 fiber, depending on the fiber
characteristics, the optical interface and the interface speed.

NOTE
More detailed capabilities of the various FSP 3000 modules can be found in Appendix A.

WDM Error Forwarding Settings


If two devices are directly connected via cables, it is easy to troubleshoot any outage that occurs.
Trouble with the fiber connections usually causes a loss of sync or a loss of light. Those events are
handled quite differently by a switch and can lead to data loss and/or fabric separation. But
nevertheless, the problem is quite easy to determine.
When a WDM system sits between Fibre Channel switches, error determination becomes more
complex and more difficult. Thus error forwarding and its effect on the Fibre Channel fabric should be
taken into account when planning a WDM network. The fabric behavior can also be influenced by
several settings on the switch/director. For example, the portCfgLossTov command can change the
behavior of the fabric in case of a loss-of-light event.

Error Forwarding Schemes


If a fiber outage or other transport-related problem occurs on the WDM network path, as shown below,
the client transmitter has to tell the attached switch that an error has occurred on the light path. This
can be done either by switching the laser off or by sending a special error propagation signal towards
the client device.


FIGURE 20 Error Forwarding for Datacenter-optimized Transponders

For TDM-based modules the implementation is slightly different, as shown below. Please note that
events like loss of clock, WDM signal degradation or code violations could also trigger the error
forwarding. This functionality is available for all client ports on the FSP 3000 muxponder and
transponder modules.

FIGURE 21 Error forwarding for datacenter-optimized muxponders

There are basically two different settings for error forwarding on the various modules:
• LOS: If an error is detected by the WDM system, the client laser will be switched off to forward this
error state to the FC switch. The switch will detect a LOL (loss of light) event on its port.
• EPC: If an error is detected by the WDM system, the client transmitter will send an error
propagation signal to the FC switch. This signal is a special code generated by the FSP 3000 card.
For 8B/10B-coded signals, it is a 10-bit unrecognized code word with neutral disparity. The switch
detects a signal with errors, which leads to a No_Sync state at the switch.


In order to ride through WDM-based protection switching events, there is another setting which has to
be taken into account:
• Laser off delay: This delays the LOS error forwarding by ~70ms in order to ride out protection
switching in the WDM domain. Basically, it is a hold-off time for the client laser(s). During the hold-off
time, there is no valid signal transmitted towards the Fibre Channel switch; only the light stays
switched on.

FIGURE 22 Error Forwarding Settings and Its Effect on the FSP 3000 Client Port

WDM Failover Switching and Error Forwarding


Failover switching within the WDM domain is an approach mainly used by service providers in order to
protect a link in case of a fiber outage. This can be done using various techniques which will not be
discussed here due to their complexity. Only protection using the FSP 3000 RSM module and its
possible impacts is shown within this guide. Please see Figure 8b.
All of these protection mechanisms have something in common: the switchover time is 50ms or less.
This is the time from the event until the light path is fully recovered and a valid and stable signal is
transmitted out of the FSP 3000 client ports.
There is also a technique called (optical) restoration, which takes significantly longer. Restoration can
take up to 1 second or more until a valid signal is re-established.
If you do not have WDM failover or WDM protection installed, the effect of the different error forwarding
settings is not relevant. The different client port, and therefore fabric, behavior is only observable with a
sub-50ms switchover in place.


FIGURE 23 Error Forwarding Settings and Its effect on The FSP 3000 Client Port During WDM
Failover Switching

Fibre Channel Fabric Behavior


The error forwarding settings at the FSP 3000 client ports can have different effects on the fabric.
If LOS is set at the FSP 3000 client port, the switch will detect a loss of light and immediately take
down all of the ports routed over the WDM system. This is also the case if a WDM failover option is
installed, since the sub-50ms switchover is reproduced at the FSP 3000 client laser. To mask this
switch behavior, there are three options: setting the port to EPC, using the FSP 3000 laser off delay
option, or using the portCfgLossTov feature on the switch.
Be aware that during a WDM switchover you always lose data, which can lead to higher-layer events
like link resets, buffer credit recovery triggers, etc.
If you have only one WDM system available, where WDM protection is installed, and all your ISLs are
routed this way, the loss of light at all ports simultaneously will lead to fabric separation. But this is only
the case if these are the only connections between the two parts of a fabric. So masking this seems to
be a good choice.
But masking the loss-of-light event can lead to buffer credit starvation or to problems with trunked
ISLs. It can also lead to unexpected fabric behavior, for example if the normal link is 10km and the
backup path is 50km: the switch has no way to determine that it is running on a 50km ISL after the
short loss-of-sync event.
On the other hand, masking shortens the time between the event and traffic flow being re-established
fabric-wide.
Some customers prefer having the port taken down completely and starting over again; some prefer
the short event-to-traffic-restored time achieved by masking the switchover. Therefore, there is no real
right or wrong when it comes to deciding which error forwarding settings should be used.


SX and LX Optical Interfaces


There are two types of optical interfaces that connect switches and FSP 3000 WDM systems: SX and
LX interfaces.

Multi-mode Optical Cables and SX Interfaces


Multimode or SX interfaces are the first choice for most customers, since the cost of the interfaces and
the optical cables is quite low. This is true for Brocade and ADVA Optical Networking components in
general. From an optical signal quality point of view, those SX connections are less stable and have a
speed-dependent length limitation. Please see the datasheets of both vendors for distance, speed and
fiber. This is also described in detail in Brocade's SAN Distance Extension Reference.

Single Mode Optical Cables and LX Interfaces


Singlemode or LX transceivers and cables are much more expensive and can reach up to 10km, or
40km with special components. The optical signal has better quality and is less error-prone compared
to an SX-based installation.
Generally, both options are possible, but each has pros and cons. For most installations, a system
connected via SX components is sufficient and offers a good balance between price and performance.
Nevertheless, for the highest speeds like 16G, an LX-based infrastructure can lead to a more stable
system if done properly.

Buffer Credit Calculation and Settings


The buffer credit technology used by the Fibre Channel protocol will not be discussed within this guide.
Please refer to the Fibre Channel specifications or to several guides issued by Brocade.

Buffer Credit Usage and Calculations


The number of buffer credits needed for proper and performant operation of each Fibre Channel port
is a function of port speed, average frame size and latency between the two switches. The latency is
the sum of the latency introduced by the fiber plus the latency introduced by the WDM system. The
latency values of the FSP 3000 are shown in Table 1.
As a rule of thumb, you should calculate the minimum buffer credits for your E-ports in the following
way:
• 1G: 0.5 BB credits per km
• 2G: 1 BB credit per km
• 4G: 2 BB credits per km
• 8G: 4 BB credits per km
• 10G: 6.5 BB credits per km
• 16G: 8.5 BB credits per km

NOTE
The values above are only valid for full-sized frames (2112 bytes payload). If your average payload is
smaller, you have to increase the number of buffer credits according to your needs. As a best
practice, you should double the rule-of-thumb amount. Also, please be aware that a WDM system can
add some latency (a virtual fiber distance equivalent) to your link. This virtual length must be taken into
account when calculating the buffer credits. Please see Table 2 below.
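The rule of thumb and the note above can be combined into a small calculator. The per-km factors come from the list above; the doubling for small frames and the WDM virtual-distance term follow the note. Function and parameter names are illustrative, not from this guide:

```python
import math

# Rule-of-thumb minimum BB credits per km of link distance,
# per the list above (full-sized frames).
BB_PER_KM = {1: 0.5, 2: 1.0, 4: 2.0, 8: 4.0, 10: 6.5, 16: 8.5}

def min_bb_credits(speed_g, fiber_km, wdm_virtual_km=0.0,
                   full_size_frames=True):
    """Minimum E-port BB credits for a WDM-extended ISL (sketch)."""
    km = fiber_km + wdm_virtual_km  # include the WDM's virtual distance
    credits = BB_PER_KM[speed_g] * km
    if not full_size_frames:
        credits *= 2  # best practice: double for smaller frames
    return math.ceil(credits)

print(min_bb_credits(16, 50))                          # 425
print(min_bb_credits(8, 50, wdm_virtual_km=12))        # 248
print(min_bb_credits(8, 50, full_size_frames=False))   # 400
```

The 12 km virtual-distance value in the second call is a placeholder; take the real figure for your module from Table 2.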


Buffer Credits for Protected Systems


For WDM systems equipped with protection options, the buffers should always be set to meet the
criteria of the link with the higher latency. Dynamic buffer credit determination would be possible, but
is not recommended. Please use fixed buffer credit settings instead.

Brocade Switch Port Settings


For connecting Fibre Channel switches over long distances using WDM, you should start with a clean
port configuration and add features only after you have a stable connection. The WDM system
must be set up correctly, and all connections should be measured before the switches are attached to
the WDMs. Usually the company installing the WDM, or the company providing the service, will
hand over a measurement protocol. From this protocol you can see whether your links are in good
condition and running error free.

General Settings
As a best practice, please use the step-by-step approach described below:
portCfgDefault; ensure all ports that will be connected to a WDM have a defined configuration
before applying more port-based settings.
portCfgSpeed; for connections over WDM, the speed should be fixed on all ports. Autonegotiation
is not recommended.
portCfgFillword; this command is optional and is not available in all FOS and hardware versions of
the switch. If available, set it to mode 1 (-arbff -arbff).
portCfgLongDistance; for this setting you should use the LS option, set VC_link_init to 1, and then
specify the desired distance or the desired number of buffer credits.
After applying these basic settings on both sides of the link, the WDM link should come up
immediately without any problems. If you have trouble, please refer to the troubleshooting section
of this guide.
For more information on the syntax of the FOS commands, please refer to the Fabric OS Command
Reference for the FOS version loaded on your equipment.
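Putting the steps above together, a session on a Gen 5 switch might look like the following. The slot/port (2/15), the speed, and the distance value are placeholders for your environment, and the exact option syntax depends on your FOS version; always verify against the Fabric OS Command Reference.

```
switch:admin> portcfgdefault 2/15
switch:admin> portcfgspeed 2/15 16
switch:admin> portcfglongdistance 2/15 LS 1 -distance 100
switch:admin> portshow 2/15
```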

Trunking
Trunking over distance works fine with the cards listed on the Brocade compatibility matrix. However,
due to the different latencies of the various card types, a trunk must use only one type of
module with exactly the same settings for FEC and transmission modes. The error forwarding
settings should also be set to the same values for all client ports within one trunk.

Advanced Settings
After establishing the basic link, you have to decide whether you need or want to use other port/link/
fabric based features.
Feel free to alter the port settings and enable features like FEC, Encryption, Compression and so on.


Dual Fabric Over Distance


Most installations use a so-called dual-fabric approach when connecting their SANs. This methodology
is usually maintained for long-distance connectivity via WDM systems as well. Depending on the overall
SAN design, there are different options for what can be done at the WDM site.

Sample Configurations
The strict approach follows the same rules you would probably follow when connecting the switches
directly via optical cable. If the WDM link breaks, all ISLs go down immediately and the whole fabric
collapses. All ISLs can be trunked, since they run down the same light path. The remaining fabric will
stay fully intact, and no error will occur there. This configuration is shown below.

FIGURE 24 Strict Dual Fabric Over WDM Approach

The mixed approach gives you a different characteristic and behavior. Both fabrics will stay up if one
of the WDM links breaks, but since the different WDM links are never exactly the same length, features
like trunking will not work across both long-distance paths. Also, in case of a WDM or fiber outage, you
will lose half of your connectivity bandwidth on both fabrics. Additionally, both fabrics are affected at
once, which could lead to problems. However, please note that the two fabrics will not merge, since the
links are physically separated. This configuration is shown below.


FIGURE 25 Mixed Dual Fabric Over WDM Approach

Both scenarios can gain availability through WDM-based protection, which recovers from a fiber
outage so that both fabrics operate at full speed after the WDM failover.

Troubleshooting
Troubleshooting a WDM system is quite difficult, since it is an analogue technology. In-depth
troubleshooting requires a lot of knowledge, experience, and special tools. Hence, do not attempt
to work on a WDM system unless you are a trained professional.
The focus of this chapter is therefore limited to problems you might experience with WDM
systems in conjunction with Fibre Channel switches.

Bit-Errors
In optical transport systems, a bit error is the most common error. For Fibre Channel data
transmission, the light is amplitude modulated (AM) and transported over optical fiber. Those links will
never be 100% error free. The amount of errors is given by the Bit Error Rate (BER). The maximum
BER allowed for Fibre Channel is 10⁻¹², as defined by T11. This is one bit error in 10¹² bits, or in other
words about 14 errors per hour at 4G Fibre Channel speed.
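The "errors per hour" figure follows directly from the BER and the bit rate. A quick check, using the nominal 4 Gbit/s figure as this guide does:

```python
# Expected bit errors per hour at a given bit rate and Bit Error Rate.
def errors_per_hour(bit_rate_bps, ber):
    return bit_rate_bps * 3600 * ber

# Nominal 4G Fibre Channel at the T11 limit of 1e-12:
print(round(errors_per_hour(4e9, 1e-12)))  # 14
```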
Usually WDM systems perform much better, and if properly configured and set up, you usually won't
see an error for several days, even running at 16G speed. However, if you see increasing error counts
on your ISLs, the cause could be the WDM long-distance link or the links between the switches and the
WDMs. As shown in the figure below, there are three links that could have problems, but they appear
as one link from the switch perspective.

FIGURE 26 ISL links Between Two Switches via WDM


Checking the WDM link is the tricky part. You should check with your service provider (internal or
external) on the health of the WDM link and the local legs. The WDM system itself might be helpful for
error determination as well.
Checking the local links (links one and three) is easier and can be done without checking the WDM.
You should carefully clean all fiber connectors as well as SFPs with a fiber cleaning tool, and check
that correct power levels are set. This can be done using the switch and/or the WDM local port
readings. A better way to check power levels is to use an optical power meter. The optical min and max
values for the interfaces used can be found in the corresponding Brocade and ADVA specifications.

Throughput
Throughput problems on WDM links are almost always caused by wrong buffer credit settings.
100% throughput on an ISL is possible only if the BB credits are set correctly. Since the correct setting
depends on the average frame size, and is thus not easy to determine, the buffer credits should be
calculated as described above. The FOS command portStatShow provides a buffer-credit-zero
counter, tim_txcrd_z, which gives an indication of insufficient buffer credits.
If this counter is increasing, you should increase the number of buffer credits on that particular port.
An additional problem might be buffer credit starvation, which occurs when R_RDYs are lost during
data transport. For example, if a bit error occurs, an R_RDY might be lost. This could also
happen during a WDM failover.
To avoid buffer credit starvation, you might consider using the buffer credit recovery function. Please
use the portCfgCreditRecovery command to enable or disable this feature.
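Because one BB credit is consumed per frame regardless of frame size, the required credits can also be estimated from first principles: enough frames must be in flight to cover the round-trip time. The sketch below assumes the 5 µs/km fiber latency figure from Appendix A; the function name, the effective-rate value, and the frame-size figures are illustrative assumptions, not Brocade formulas.

```python
import math

def required_credits(distance_km, data_rate_gbps, avg_frame_bytes=2148):
    """Estimate BB credits needed to keep a link at full utilization.

    distance_km     -- one-way link length (physical fiber plus WDM virtual distance)
    data_rate_gbps  -- effective data rate on the wire
    avg_frame_bytes -- average frame size; 2148 bytes assumed here for a
                       full-sized frame (2112-byte payload plus overhead)
    """
    round_trip_s = 2 * distance_km * 5e-6               # 5 us per km, both directions
    frame_time_s = avg_frame_bytes * 8 / (data_rate_gbps * 1e9)
    return math.ceil(round_trip_s / frame_time_s)

# Full-size frames over 100 km at 8G FC (~6.8 Gbit/s effective):
print(required_credits(100, 6.8))                        # 396
# Halving the average frame size roughly doubles the requirement:
print(required_credits(100, 6.8, avg_frame_bytes=1074))  # 792
```

Note that the first result agrees well with the rule of thumb above (4 BB per km at 8G gives 400 for 100 km), while the second shows why smaller frames demand the doubled figure.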

Trunking
Trunking over distance is possible, but you have to follow some rules:
The cables between the switches and the WDMs should have the same length to minimize skew.
Use qualified WDM modules of the same type with exactly the same settings for your trunk.
When using WDM failover, the WDM system should signal a Loss of Light event towards the Fibre
Channel switch to bounce the ports. Do not use the LOS_TOV feature. (This is a best practice only and
might not be the right choice for your specific requirements.)

Appendix A
This appendix provides ADVA latency and virtual fiber distances for combinations of FSP 3000
transponder and muxponder module types and link rates; Brocade Fibre Channel switch port settings
for transponder and muxponder module types; and error forwarding settings for transponder and
muxponder module types.

Latency
The overall latency between two switches is caused by the optical cable and the device(s) within the
link.
Since the speed of light is reduced to about 200,000 km/s in an optical fiber, one meter of fiber is equal
to 5 ns of latency; one kilometer of fiber is equal to 5 µs. For all other devices, you should ask the
appropriate vendor for specific latency figures. Please be aware that latency figures, like the latency
values for the FSP 3000 below, might vary depending on the card settings. For example, forward error
correction, which is quite often used in WDM systems, can add quite a bit of latency when switched on.
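The 5 ns/m figure also lets you express any device latency as an equivalent length of fiber for buffer credit planning; a minimal sketch (the function name is illustrative):

```python
# Convert device latency to an equivalent length of fiber,
# assuming ~200,000 km/s signal speed (5 ns per meter, 5 us per km).
def latency_to_virtual_km(latency_seconds):
    return latency_seconds / 5e-6   # 5 us of latency per km of fiber

print(latency_to_virtual_km(5e-6))   # 1.0 -> 5 us of latency looks like 1 km of fiber
print(latency_to_virtual_km(16e-9))  # ~0.0032 km, i.e. about 3 m
```

Note that the published virtual fiber distances in Table 3 may be more conservative than this raw conversion; when in doubt, use the vendor's figures.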


TABLE 2 Latency values for FSP 3000 modules per link

FSP 3000 module / IF speed | 4TCA   | 2WCA  | 4WCE  | 5TCE / 5TCE-AES | 10TCE     | WCE-100G
1G FC                      | 4 µs   | N/A   | N/A   | 5 µs*           | N/A       | N/A
2G FC                      | 2.5 µs | N/A   | N/A   | 3.5 µs*         | N/A       | N/A
4G FC                      | 2 µs   | 10 ns | N/A   | 2.5 µs*         | N/A       | N/A
8G FC                      | N/A    | 10 ns | 16 ns | 1.5 µs*         | 9-15 µs** | N/A
10G FC                     | N/A    | 10 ns | N/A   | 1.5 µs*         | N/A       | N/A
16G FC                     | N/A    | N/A   | 16 ns | N/A             | N/A       | N/A
10GbE                      | N/A    | 10 ns | 16 ns | 1.5 µs*         | 9-15 µs** | N/A
40/100GbE                  | N/A    | N/A   | N/A   | N/A             | 9-15 µs** | 4.5-10 µs**

TABLE 3 Virtual fiber distances for FSP 3000 modules per link

FSP 3000 module / IF speed | 4TCA   | 2WCA    | 4WCE    | 5TCE / 5TCE-AES | 10TCE   | WCE-100G
1G FC                      | 2 km   | N/A     | N/A     | 2.5 km*         | N/A     | N/A
2G FC                      | 1.3 km | N/A     | N/A     | 1.8 km*         | N/A     | N/A
4G FC                      | 1 km   | 0.05 km | N/A     | 1.3 km*         | N/A     | N/A
8G FC                      | N/A    | 0.05 km | 0.08 km | 0.8 km*         | 5-8 km** | N/A
10G FC                     | N/A    | 0.05 km | N/A     | 0.8 km*         | N/A     | N/A
16G FC                     | N/A    | N/A     | 0.08 km | N/A             | N/A     | N/A
10GbE                      | N/A    | 0.05 km | 0.08 km | 0.8 km*         | 5-8 km** | N/A
40/100GbE                  | N/A    | N/A     | N/A     | N/A             | 5-8 km** | 2-5 km**

Support of Brocade port-based Fibre Channel features


TABLE 4 Brocade port features versus FSP 3000 modules

FSP 3000 module / Feature | 2WCA 4G/8G | 2WCA 10G | 4WCE 8G | 4WCE 16G | 5TCE (AES) 4G/8G | 5TCE (AES) 10G | 10TCE 8G
VC-RDY
R-RDY
Trunking
FEC **  N/A  N/A  N/A
In-Flight Encryption *
In-Flight Compression *
In-Flight Enc. + Comp *

TABLE 5 Error forwarding settings available on the FSP 3000 modules

FSP 3000 module / Error forwarding feature | 4TCA | 2WCA | 4WCE | 5TCE / 5TCE-AES transparent mux | 5TCE / 5TCE-AES framed mux | 10TCE
LOS
Laser off delay
EPC  N/A  N/A  N/A  N/A
