The next-generation virtual datacenter from VMware enables efficient collaboration between network administrators and VMware administrators through vNetwork Distributed Switches. By replacing an existing virtual switch with the Cisco Nexus 1000V and providing the familiar Cisco NX-OS, the Cisco Nexus 1000V preserves the traditional boundaries between server and network administrators, allowing network administrators to also manage virtual switches. This lab will augment your knowledge of the Cisco Nexus 1000V with a considerable amount of hands-on experience.
Contents
Lab Overview
    Objectives
    Cisco CloudLab
    Lab Exercises
    Network Admin vs. Server Admin
Lab Topology and Access
    Logical Topology
    Access
    Connecting via the vSphere Client
Deployment
    Connect to the Cisco Nexus 1000V Virtual Supervisor Module (VSM)
    Creating an uplink port profile for the Management Traffic
    Adding an ESX host to the Distributed Virtual Switch
Attaching a Virtual Machine to the Network
    Creating a port profile for virtual machines
    Verify the successful creation of the port-group
    Network Administrator view of Virtual Machine connectivity
VMotion and Visibility
    VMotion Configuration
    Network Administrator's view of VMotion
    Perform a VMotion
    Verify the new Network Administrator's view on the Virtual Machine
Policy-based Virtual Machine Connectivity
    Verify open ports within your virtual machine
    Configuration of an IP-based access list
    Verify the application of the IP-based access list
Mobile VM Security
    Private VLANs
    Removing the Private VLAN configuration
Traffic Inspection of Individual Virtual Machines
    Configure an ERSPAN monitor session
    Create an ERSPAN Session on the Nexus 1000V
    Configuring a VMkernel Interface to transport the ERSPAN Session
    Test the session and VMotion the VM
Conclusion
Feedback
Lab proctors
In the highly agile VMware environment, the new Cisco Virtual Network Link (VN-Link) technology on the Nexus 1000V will integrate with VMware's vNetwork Distributed Switch framework to create a logical network infrastructure across multiple physical hosts that will provide full visibility, control and consistency of the network.
Mobile VM security and network policy
- Policy moves with a virtual machine during live migration, ensuring persistent network, security, and storage compliance
- Ensures that live migration won't be affected by disparate network configurations
- Improves business continuance, performance management, and security compliance
Non-disruptive operational model for your server virtualization and networking teams
- Aligns the management and operations environment for virtual machines and physical server connectivity in the data center
- Maintains the existing VMware operational model
- Reduces total cost of ownership (TCO) by providing operational consistency and visibility throughout the network
Lab Overview
Objectives
The goal of this manual is to give you hands-on experience with a subset of the features of the Cisco Nexus 1000V Distributed Virtual Switch (DVS). The Cisco Nexus 1000V introduces many new features and capabilities. This lab will give you an overview of these features and introduce you to the main concepts.
Cisco CloudLab
This lab is hosted in Cisco's cloud-based hands-on and demo lab. Within this cloud you are provided with your personal dedicated virtual pod (vPod). You connect via RDP to a so-called control center within this vPod and walk through the lab steps below. All necessary tools to complete this lab can be found in the control center. Refer to the separate documentation for Cisco CloudLab for details on how to reach the control center within your vPod.
Figure 1. Logical Lab Topology
The username and password to access the Control Center of this vPod are listed below: User Name: VPOD\administrator Password: <Refer to the CloudLab Portal>
Lab Exercises
This lab was designed to be completed in sequential order. As some steps rely on the successful completion of previous steps, you are required to complete all steps before moving on. The individual lab steps are:
- Cisco Nexus 1000V deployment
- Attaching Virtual Machines to the Cisco Nexus 1000V
- VMotion and Visibility
- Policy-based Virtual Machine connectivity
- Traffic Inspection of a Virtual Machine
- Quality of Service (QoS) for Virtual Machines
Logical Topology
The diagram below represents the logical lab setup of a vPod as it pertains to the Cisco Nexus 1000V.
Figure 2. Logical Pod Design
Your pod consists of:
- Two physical VMware ESX servers, called esx01.vpod.local and esx02.vpod.local.
- One VMware vCenter, reachable at vcenter.vpod.local via the vSphere client.
- One Cisco Nexus 1000V Virtual Supervisor Module, reachable at vsm.vpod.local via SSH.
- One pre-configured upstream switch to which you do not have access.
Access
During this lab, configuration steps need to be performed on the VMware vCenter as well as on the Cisco Nexus 1000V Virtual Supervisor Module (VSM) within the CloudLab Virtual Pod. The VMware vCenter is accessible through the vSphere Client application. The VSM is accessible through an SSH connection. Use the usernames and passwords listed below for accessing your vPod's elements.

Usernames and Passwords

vCenter
    Login: VPOD\Administrator
    Password: Cisco123
    (Use the vSphere client feature Use Windows session credentials for easier login.)

Nexus 1000V VSM
    Login: admin
    Password: Cisco123
All necessary applications used within this lab are available on the desktop of the control center machine to which you are connected via Remote Desktop Protocol (RDP).
Please tick Use Windows session credentials and click on Login for vSphere Client authentication. After a successful login you'll see the following vSphere Client application screen.
Deployment
While the Nexus 1000V has already been registered in vCenter, it is still necessary to connect the different ESX hosts to the Nexus 1000V. The necessary Virtual Ethernet Module (VEM) of the Cisco Nexus 1000V can be installed into the ESX hosts automatically by VMware Update Manager (VUM). In a vSphere setup, VUM is used to stage and apply patches and updates to ESX hosts. The goal of this step is to add the two hosts to the Nexus 1000V. In this lab you will:
- Create an uplink port-profile and apply it on the uplink interface of the ESX hosts
- Add the two hosts to the Nexus 1000V switch
Lab Setup
In order to add a new host to the Distributed Switch, we need to create a port-profile to enable the communication between the Virtual Supervisor Module and the different Virtual Ethernet Modules. On top of that we want to enable the VMotion traffic on the same interface. Each pod is composed of two ESX hosts, one Virtual Supervisor Module and one vCenter. Both ESX hosts are connected to an upstream switch using four different NICs. One of these NICs will be used with the Nexus 1000V to carry all the management, VMotion and VM application traffic.
The uplink port-profile already includes a configuration line for Private VLANs. This configuration is necessary for a later lab step and will be explained in the corresponding section. It already has to be included at this stage, as certain settings cannot be altered once the uplink port profile is in use.
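The actual configuration listing for the uplink port-profile fell on a missing page. As a rough sketch only, a profile with the characteristics discussed below might look as follows; the trunk VLAN numbers (11 for VM traffic, 12 for VMotion) are assumptions based on later lab steps, and the private-VLAN line mentioned above is omitted until the Mobile VM Security section:

Nexus1000V# conf t
Nexus1000V(config)# port-profile type ethernet Uplink
Nexus1000V(config-port-prof)# vmware port-group
Nexus1000V(config-port-prof)# switchport mode trunk
Nexus1000V(config-port-prof)# switchport trunk allowed vlan 11-12
Nexus1000V(config-port-prof)# channel-group auto
Nexus1000V(config-port-prof)# no shutdown
Nexus1000V(config-port-prof)# state enabled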
Two special characteristics of the uplink port profile should be pointed out at this stage:
- type ethernet: This configuration line means that the corresponding port-profile can only be applied to a physical Ethernet port. This is also indicated through a special icon in the vSphere client.
- channel-group auto: This configuration line activates the feature virtual port-channel host mode (vPC-HM). It allows the Nexus 1000V to form a port-channel with upstream switches that do not support multichassis EtherChannel.
Congratulations, you just configured your first port-profile!
3. You are presented with all hosts that are part of the data center but not yet part of the DVS. The VEM component has already been pre-installed on the ESX hosts. An alternative would be the usage of VMware Update Manager (VUM), which would make the integration of the ESX hosts into the Nexus 1000V completely automated and transparent.
4. Select the hosts and the NICs that will be assigned to the DVS. Currently vmnic0 is already in use by the traditional vSwitch to enable the initial management of your ESX hosts, while vmnic1 is used for iSCSI storage traffic and vmnic2 provides network access to the existing VMs through a vSwitch. Please choose only vmnic3 to become part of the Cisco Nexus 1000V DVS. Assign the uplink port profile Uplink that you created in the previous step to vmnic3 on host esx01.vpod.local and click on Next.
Note:
In real-life scenarios, uplink port-profiles are configured by the network administrator to match the settings of the physical upstream switches. This ensures that there is no misconfiguration between the physical network and the virtual network. It also enables network administrators to use features for this uplink that are available on other Cisco switches (e.g. QoS, EtherChannel, etc.).
5. The next screen offers you the possibility to migrate existing VMkernel ports to the Nexus 1000V. For the purpose of this lab, do not choose to migrate any VMkernel ports and click on Next.
Note:
Migrating the Management Network and/or iSCSI will result in a loss of management and storage connectivity of the hosts. In a real-life scenario it is possible to even migrate the service console to the Cisco Nexus 1000V and thereby completely decommission the VMware vSwitch. But this lab has not been prepared to do so. Therefore under no circumstances choose vmnic0 and/or vmnic1 to become part of the Cisco Nexus 1000V DVS.
6. Similar to the previous screen, this screen allows you to migrate existing Virtual Machine Networks to the Nexus 1000V. For the purpose of this lab, do not choose to migrate any existing Virtual Machine Networks and click Next.
7. You are presented with an overview of the uplink ports that are created. By default, VMware creates 32 uplink ports per host and leaves it to the Nexus 1000V VSM to map them to the actual physical ports.
8. Acknowledge these settings by clicking on Finish. After a few seconds this ESX host esx01.vpod.local will appear in the Hosts view of the Distributed Virtual Switch.
Repeat the same steps to add the host esx02.vpod.local to the Cisco Nexus 1000V.
Attaching a Virtual Machine to the Network
This lab step consists of:
- Configuring a port profile for Virtual Machines (Network Administrator)
- Assigning a VM to a port profile (VMware Administrator)
Creating a port profile for virtual machines
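The walkthrough for creating this port-profile fell on the missing pages. A minimal sketch of what the VM-Client port-profile may have looked like, inferred from the VLAN 11 access mode visible later in the show interface brief output (the exact lines are an assumption):

Nexus1000V# conf t
Nexus1000V(config)# port-profile VM-Client
Nexus1000V(config-port-prof)# vmware port-group
Nexus1000V(config-port-prof)# switchport mode access
Nexus1000V(config-port-prof)# switchport access vlan 11
Nexus1000V(config-port-prof)# no shutdown
Nexus1000V(config-port-prof)# state enabled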
Add a vNIC to the VM inside your pod by associating it with the port-group VM-Client.
1. In VMware vCenter, open the settings dialog of the first VM by clicking on Edit Settings. Navigate to the Virtual NIC section, choose the port group VM-Client as the network label and finalize by clicking on OK.
4. Click on the Cisco Systems, Inc. link, which you can find on the desktop inside the VM. This opens the web page www.cisco.com with the internet browser and verifies the network connectivity of the VM.
5. Close the Virtual Machine Console.
6. Repeat steps 1 to 4 for the Virtual Machine Windows 7 - B.
Congratulations, you successfully configured the network connectivity for a Virtual Machine! This step demonstrated that the workflow introduced by the Cisco Nexus 1000V is much more efficient than the traditional approach using vSwitches: the network team configures the network for the server team, and the server team only needs to apply the prepared settings.
Nexus1000V# show module
...
Mod  MAC-Address(es)                          Server-IP
---  ---------------------------------------  ----------
1    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8   10.2.11.5
3    02-00-0c-00-03-00 to 02-00-0c-00-03-80   10.2.11.12
4    02-00-0c-00-04-00 to 02-00-0c-00-04-80   10.2.11.11
In the output of the show module command you can see the different familiar components:
- Module 1 and module 2 are reserved for the Virtual Supervisor Module (VSM). The Cisco Nexus 1000V supports a model where the supervisor can run in an active/standby high-availability mechanism. Your lab's pod is only equipped with a primary VSM, not a secondary VSM.
- Module 3 and module 4 each represent a Virtual Ethernet Module (VEM). As shown at the bottom of the screen, each VEM corresponds to a physical ESX host, identified by the server IP address and name. This mapping of a virtual line-card to a physical server eases the communication between the network and server teams.
3. Let's have a look at the interfaces next by using the show interface brief command:
Nexus1000V# show interface brief

--------------------------------------------------------------------------------
Port    VRF          Status  IP Address       Speed    MTU
--------------------------------------------------------------------------------
mgmt0   --           up      10.2.11.5        1000     1500

--------------------------------------------------------------------------------
Ethernet      VLAN  Type  Mode   Status  Reason  Speed      Port
Interface                                                   Ch #
--------------------------------------------------------------------------------
Eth3/4        1     eth   trunk  up      none    1000(D)    1
Eth4/4        1     eth   trunk  up      none    1000(D)    2

--------------------------------------------------------------------------------
Port-channel  VLAN  Type  Mode   Status  Reason  Speed      Protocol
Interface
--------------------------------------------------------------------------------
Po1           1     eth   trunk  up      none    a-1000(D)  none
Po2           1     eth   trunk  up      none    a-1000(D)  none

--------------------------------------------------------------------------------
Interface     VLAN  Type  Mode    Status  Reason  MTU
--------------------------------------------------------------------------------
Veth1         11    virt  access  up      none    1500
Veth2         11    virt  access  up      none    1500

--------------------------------------------------------------------------------
Port    VRF          Status  IP Address       Speed    MTU
--------------------------------------------------------------------------------
ctrl0   --           up      --               1000     1500
Nexus1000V#
The output of the command show interface brief shows you the different interface types that are used within the Cisco Nexus 1000V:
- mgmt0: This interface is used for out-of-band management and corresponds to the second vNIC of the VSM.
- Ethernet interfaces: These are physical Ethernet interfaces and correspond to the physical NICs of the ESX hosts. The numbering scheme lets you easily identify the corresponding module and NIC.
- Port-channels: Ethernet interfaces can be bound manually, or automatically through vPC-HM, into port-channels. When using the uplink port-profile configuration mac-pinning (see the sketch after this list), there is no need to configure a traditional port-channel on the upstream switch(es). Nonetheless, a virtual port-channel is still formed on the Nexus 1000V.
- Veths: Virtual Ethernet interfaces connect to VMs and are independent of the host that the VM runs on. The numbering scheme therefore does not include any module information. The Veth identifier remains with the VM during its entire lifetime, even while the VM is powered down.
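As referenced in the Port-channels bullet above, mac-pinning is enabled with a single additional line inside an Ethernet-type port-profile. The line below is shown purely as an illustration; whether this lab's Uplink profile actually uses it is not visible here:

Nexus1000V(config-port-prof)# channel-group auto mode on mac-pinning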
4. Verify on the Nexus 1000V CLI that the corresponding Virtual Ethernet interface has been created for the two virtual machines by issuing the command show interface virtual.
Nexus1000V# show interface virtual

-------------------------------------------------------------------------------
Port    Adapter        Owner              Mod  Host
-------------------------------------------------------------------------------
Veth1   Net Adapter 1  Windows 7 - A      3    esx01.vpod.local
Veth2   Net Adapter 1  Windows 7 - B      4    esx02.vpod.local
Nexus1000V#
The output of the above command gives you a mapping of the VM name to its Veth interface.
5. On top of that, the Network Administrator can see at any given time which VM is in use and which port-profile it is attached to by using the show port-profile usage command:
Nexus1000V# show port-profile usage

-------------------------------------------------------------------------------
Port Profile        Port     Adapter        Owner
-------------------------------------------------------------------------------
Uplink              Po1
                    Po2
                    Eth3/4   vmnic3         esx01.vpod.local
                    Eth4/4   vmnic3         esx02.vpod.local
VM-Client           Veth1    Net Adapter 1  Windows 7 - A
                    Veth2    Net Adapter 1  Windows 7 - B
Nexus1000V#
Note:
The Network Administrator can manage the virtual Ethernet interfaces shown above in the same way as a physical interface on a Cisco switch.
Congratulations! You have successfully added Virtual Machines to the Nexus 1000V distributed virtual switch! As a result the network team now has complete insight into the network part of the Server Virtualization infrastructure.
VMotion and Visibility
VMotion Configuration
You will now create a VMkernel interface that will be used for VMotion. VMotion is a well-known VMware feature which allows users to move a Virtual Machine from one physical host to another while the VM remains operational; therefore this feature is also called live migration. In this step you will configure the VMkernel VMotion interface for both servers.
1. The first step is to provision a port-profile for the VMotion interface. Let's call this port-profile VMotion:
Nexus1000V# conf t Nexus1000V(config)# port-profile VMotion Nexus1000V(config-port-prof)# vmware port-group Nexus1000V(config-port-prof)# switchport mode access Nexus1000V(config-port-prof)# switchport access vlan 12 Nexus1000V(config-port-prof)# no shutdown Nexus1000V(config-port-prof)# system vlan 12 Nexus1000V(config-port-prof)# state enabled
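Optionally, before returning to the vSphere client, you can verify that the new profile exists on the VSM (output omitted here):

Nexus1000V# show port-profile name VMotion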
2. Go to the Home -> Inventory -> Hosts and Clusters tab and choose the first server esx01 of your pod.
3. Click on the Configuration tab and, within the Hardware area, on Networking. Under View choose Distributed Virtual Switch.
4. In order to add the VMkernel VMotion interface, choose Manage Virtual Adapters... and afterwards click on Add within the Manage Virtual Adapters dialog. In the Add Virtual Adapter wizard choose to create a New Virtual Adapter, then click on the Next button.
5. As the Virtual Adapter Type you can only choose VMkernel. Click Next.
6. Choose VMotion as the port group name. Also check the box right next to Use this virtual NIC for VMotion.
7. For the host esx01 choose the IP address 192.168.12.11 and for host esx02 the IP address 192.168.12.12. For both hosts choose the Subnet Mask 255.255.255.0. Do not change the VMkernel Default Gateway and click on the Next button.
8. Before finishing the wizard you are presented with an overview of your settings. Verify the correctness of these settings and choose Finish.
9. You have now successfully added the VMkernel VMotion interface. Close the Manage Virtual Adapters window.
Congratulations! You successfully configured the VMkernel VMotion interface leveraging the Cisco Nexus 1000V.
10. Repeat steps 3 to 8 to configure the VMkernel VMotion interface on the second host esx02.
Network Administrator's view of VMotion
2. Make note of the associated Veth port, the Module, and the ESX hostname currently associated with the Virtual Machine.
Perform a VMotion
Test your previous VMotion configuration by performing a VMotion process.
1. Go to the Home -> Inventory -> Hosts and Clusters tab.
2. Drag & drop the Virtual Machine Windows 7 - A from the first ESX host of your setup to your second ESX host.
3. Walk through the appearing VMotion wizard by leaving the default settings, clicking on Next, and finally Finish.
5. Open the Virtual Machine Console again and verify that the Virtual Machine still has network connectivity by reloading the default webpage.
Congratulations! You are now able to trace a VM moving across physical ESX hosts via VMotion. The resulting output shows you the current mapping of a Veth port to the Virtual Machine. By comparing the output before and after the VMotion process, you will notice that the Virtual Machine still uses the same Veth port, while the Module and Host entries change. The Cisco Nexus 1000V provides all the monitoring capabilities that the network team is used to for a virtual Ethernet port, even while the attached VM is live migrated. On top of that, all configuration and statistics follow the VM across the VMotion process. Please migrate the Virtual Machine Windows 7 - A back to the host esx01.vpod.local before progressing to the next lab step.
Policy-based Virtual Machine Connectivity

Verify open ports within your virtual machine
2. Click on the Cisco Systems, Inc. icon to load the default webpage and choose the link for the Host Port-Status Analyzer.
3. Verify that ports 135 (Windows RPC) and 445 (Windows CIFS) are open.
Configuration of an IP-based access list
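1. The creation of the access list itself fell on a missing page. Reconstructed from the description below, the ProtectVM definition would look roughly like this sketch (the exact lines are an assumption):

Nexus1000V# conf t
Nexus1000V(config)# ip access-list ProtectVM
Nexus1000V(config-acl)# deny tcp any any eq 135
Nexus1000V(config-acl)# deny tcp any any eq 445
Nexus1000V(config-acl)# permit ip any any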
This access list denies all TCP traffic to ports 135 (Windows RPC) and 445 (Windows CIFS) while permitting any other IP traffic.
2. You will now apply the access list ProtectVM as an outbound rule to the virtual Ethernet interfaces (Veths) of the existing VMs running Windows 7. Here the concept of port-profiles comes in very handy in simplifying the work: as the Veth interfaces of the Windows 7 VMs leverage the port profile VM-Client, adding the access list to this port profile will automatically update all associated Veth interfaces and assign the access list to them.
Nexus1000V(config-acl)# port-profile VM-Client Nexus1000V(config-port-prof)# ip port access-group ProtectVM out
As a result access to both open ports within your Virtual Machine has been blocked.
Note:
The directions "in" and "out" of an ACL have to be seen from the perspective of the Virtual Ethernet Module (VEM), not the Virtual Machine. Thus "in" specifies traffic flowing into the VEM from the VM, while "out" specifies traffic flowing out of the VEM to the VM.
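As an illustration only (this is not part of the lab steps), filtering traffic sourced by the VM would instead apply the same access list in the in direction:

Nexus1000V(config-port-prof)# ip port access-group ProtectVM in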
Congratulations! You have successfully created, applied and verified an IP-based access list. This exercise demonstrated that the features usually applied to a physical switch interface can now be applied to a Veth, and that the concept of port-profiles makes the network configuration much easier: changes to a port-profile are propagated on the fly to all the VMs using it.
Mobile VM Security
Another key differentiator of the Cisco Nexus 1000V over the VMware DVS is its advanced Private VLAN capability. This section demonstrates the capabilities of Private VLANs by placing individual VMs in a Private VLAN while utilizing the uplink port as a promiscuous PVLAN trunk. Thus VMs will not be able to communicate with each other, but only with the default gateway and any other peer beyond the default gateway. The upstream switch does not need to be configured for this. This can, for example, be used to deploy server virtualization within a DMZ. The content of this step includes:
- Configuring Private VLANs
- Removing the Private VLAN configuration
Private VLANs
This section demonstrates the configuration of a Private VLAN towards the connected VMs. First we will update the VLAN to run in isolated mode. Then we will configure the VM and uplink port-profiles to do the translation between the isolated and the promiscuous VLAN. To avoid having to configure the PVLAN merging on the upstream switch, the new feature of promiscuous PVLAN trunks is showcased on the uplink port. This means that the primary and secondary VLANs are merged before leaving the uplink port.
Note:
When a VLAN is specified to be a primary VLAN for usage with Private VLANs, it instantly becomes unusable as a regular VLAN. As your Virtual Machines are still using VLAN 11 for network connectivity, your VMs will encounter connectivity issues while you perform the configuration steps below. It is therefore recommended not to change an in-use VLAN from non-PVLAN usage to PVLAN usage in a production environment.
1. First, you will prepare the primary and secondary VLAN on the VSM.
Nexus1000V# conf t
Nexus1000V(config)# vlan 11
Nexus1000V(config-vlan)# private-vlan primary
Nexus1000V(config-vlan)# vlan 111
Nexus1000V(config-vlan)# private-vlan isolated
Nexus1000V(config-vlan)# vlan 11
Nexus1000V(config-vlan)# private-vlan association add 111
You can check that the configuration has been successfully applied by issuing the show vlan private-vlan command:
Nexus1000V# show vlan private-vlan
Primary Secondary Type            Ports
------- --------- --------------- ------------------------------------------
11      111       isolated
2. As a next step, configure the uplink port profile as a promiscuous PVLAN trunk with the primary VLAN 11 and the secondary VLAN 111. The promiscuous trunk mode itself was already configured during the creation of the uplink port-profile, so only the PVLAN mapping has to be added:
Nexus1000V(config)# port-profile type ethernet Uplink Nexus1000V(config-port-prof)# switchport private-vlan mapping trunk 11 111
3. After this step has been completed, configure the port profile VM-pvlan, which connects the Virtual Machines, as a private VLAN host port, thus isolating the individual VMs from each other:
Nexus1000V(config)# port-profile VM-pvlan Nexus1000V(config-port-prof)# vmware port-group Nexus1000V(config-port-prof)# switchport mode private-vlan host Nexus1000V(config-port-prof)# switchport private-vlan host-association 11 111 Nexus1000V(config-port-prof)# no shutdown Nexus1000V(config-port-prof)# state enabled
5. Applying a new port-profile to a Virtual Machine creates a new Veth interface. Therefore the VMs Windows 7 - A and Windows 7 - B will no longer be connected to Veth1 and Veth2 respectively, as shown in a previous lab step. Verify the current Veth mapping of the VMs and the usage of the PVLAN:
Nexus1000V(config-port)# show interface virtual

-------------------------------------------------------------------------------
Port    Adapter        Owner              Mod  Host
-------------------------------------------------------------------------------
Veth3   vmk2           VMware VMkernel    3    esx01.vpod.local
Veth4   vmk2           VMware VMkernel    4    esx02.vpod.local
Veth5   Net Adapter 1  Windows 7 - A      3    esx01.vpod.local
Veth6   Net Adapter 1  Windows 7 - B      4    esx02.vpod.local

Nexus1000V(config-port)# show interface brief

--------------------------------------------------------------------------------
Port    VRF          Status  IP Address       Speed    MTU
--------------------------------------------------------------------------------
mgmt0   --           up      10.2.11.5        1000     1500

--------------------------------------------------------------------------------
Ethernet      VLAN  Type  Mode   Status  Reason  Speed      Port
Interface                                                   Ch #
--------------------------------------------------------------------------------
Eth3/4        1     eth   trunk  up      none    1000(D)    1
Eth4/4        1     eth   trunk  up      none    1000(D)    2

--------------------------------------------------------------------------------
Port-channel  VLAN  Type  Mode   Status  Reason  Speed      Protocol
Interface
--------------------------------------------------------------------------------
Po1           1     eth   trunk  up      none    a-1000(D)  none
Po2           1     eth   trunk  up      none    a-1000(D)  none

--------------------------------------------------------------------------------
Interface     VLAN  Type  Mode    Status  Reason            MTU
--------------------------------------------------------------------------------
Veth1         11    virt  access  down    nonParticipating  1500
Veth2         11    virt  access  down    nonParticipating  1500
Veth3         12    virt  access  up      none              1500
Veth4         12    virt  access  up      none              1500
Veth5         111   virt  pvlan   up      none              1500
Veth6         111   virt  pvlan   up      none              1500

--------------------------------------------------------------------------------
Port    VRF          Status  IP Address       Speed    MTU
--------------------------------------------------------------------------------
ctrl0   --           up      --               1000     1500
6. The expected behavior of the above configuration is that the first two virtual machines of your pod should both still be able to reach the default gateway and all hosts beyond this gateway; however, they should not be able to reach each other. This can be verified by pinging the default gateway 192.168.1.1 from Windows 7 - A. To do so, log in to the VM, click on the Command Prompt icon on the desktop and issue the command ping 192.168.1.1.
Now try to ping Windows 7 - B from Windows 7 - A. The IP address of Windows 7 - B is 192.168.1.12. Issue the command ping 192.168.1.12. This ping should fail, since both VMs reside in the isolated VLAN.
7. You can now change the isolated VLAN to a community VLAN. Ports in a community VLAN can talk to each other as well as to the promiscuous port; however, they cannot talk to an isolated port.
Nexus1000V(config-port)# vlan 111 Nexus1000V(config-vlan)# private-vlan community
Note:
The Virtual Machines using the port-profile VM-pvlan will lose network connectivity for a brief moment (interface flap) when changing the PVLAN mode.
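You can confirm the mode change on the VSM; the Type column of the output should now read community instead of isolated:

Nexus1000V# show vlan private-vlan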
8. Again try to ping the second VM from the first. This time the ping will work.
Congratulations, you have successfully configured a Private VLAN with a promiscuous PVLAN trunk on the uplink! This feature allows you to utilize server virtualization in new areas, such as the deployment of a DMZ. Feel free to move the VMs between the two ESX hosts via VMotion. You will notice that no matter where the two VMs reside, the network policies are enforced the same way.
Note:
After configuring the VMs to use the original port-profile VM-Client again, the Veth mapping will correspond again to the original mapping as outlined in the Attaching a Virtual Machine to the Network lab step.
Traffic Inspection of Individual Virtual Machines

Configure an ERSPAN monitor session
1. Find out which Veth interface is being used by the VM named Windows 7 - A. In the above example it is associated with Veth3.
Note:
Changing the association of a Virtual Machine to a different port-group will create a new Veth interface for this VM. Should you change the port-group, you would therefore have to go through the following steps again and update the ERSPAN configuration with the new Veth interface information.
2. On the VSM, configure a new ERSPAN session by issuing the commands below. Note that vethZZ corresponds to the Veth number of Windows 7 - A as identified in step 1; in the above case, ZZ would be replaced by 3.
Nexus1000V# conf t Nexus1000V(config)# monitor session 1 type erspan-source Nexus1000V(config-erspan-src)# description Monitor Windows 7 - A VM Nexus1000V(config-erspan-src)# source interface vethZZ both Nexus1000V(config-erspan-src)# destination ip 192.168.1.12 Nexus1000V(config-erspan-src)# erspan-id 999 Nexus1000V(config-erspan-src)# mtu 128 Nexus1000V(config-erspan-src)# no shut
192.168.1.12 is the IP address of Windows 7 - B. We will use this VM as our ERSPAN target, where the packet sniffer is installed.
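At this point you can check the session on the VSM with the command below. Note that the session may not be fully operational until the ERSPAN-capable VMkernel interface is configured in the next section:

Nexus1000V# show monitor session 1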
Note:
One of the powerful features of the Nexus 1000V is the ability to use truncated ERSPAN. Since it is a software switch, the Nexus 1000V can reduce the size of the ERSPAN packets so that only the information useful to the network administrator is carried. By setting the MTU to 128, only the GRE header plus part of the packet header is sent, so the link is not saturated with too much information.
Configuring a VMkernel Interface to transport the ERSPAN Session
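1. The first step is to create a port-profile for the VMkernel interface that will transport the ERSPAN traffic; its listing fell on a missing page. The following is only a rough sketch of what it may have looked like, where the profile name ERSPAN and the VLAN number are assumptions:

Nexus1000V# conf t
Nexus1000V(config)# port-profile ERSPAN
Nexus1000V(config-port-prof)# vmware port-group
Nexus1000V(config-port-prof)# switchport mode access
Nexus1000V(config-port-prof)# switchport access vlan 11
Nexus1000V(config-port-prof)# capability l3control
Nexus1000V(config-port-prof)# no shutdown
Nexus1000V(config-port-prof)# system vlan 11
Nexus1000V(config-port-prof)# state enabled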
The keyword capability l3control indicates to the Cisco Nexus 1000V that the interface will be used to carry Layer 3 traffic.
2. Create a new VMkernel interface using vCenter and apply the newly created port-profile.
5. Choose the IP address 192.168.1.101 with a Subnet Mask of 255.255.255.0 on the host esx01, and the IP address 192.168.1.102 with the same Subnet Mask of 255.255.255.0 on the host esx02.
6. Click on Next and Finish.
7. Repeat steps 2 to 4 to add the new VMkernel ERSPAN interface on server 2 as well.
Congratulations! You configured your first ERSPAN session. Now you can monitor and troubleshoot the traffic of a particular Virtual Machine. As the source of the ERSPAN session is a Veth interface, you will still be able to span traffic even if the VM moves to another host due to a VMotion.
2. From the Windows 7 - A console, issue a continuous ping to the default gateway at 192.168.1.1. To do so, type ping -t 192.168.1.1.
3. Open the console to control the VM called Windows 7 - B.
4. Start Wireshark by double-clicking its icon on the desktop. Click on Intel(R) PRO/1000 MT Network Connection under Interface List to start capturing packets.
5. You will see various traffic received by the sniffer. Fine-tune the selection of traffic by applying the filter erspan.spanid == 999 && (icmp.type == 0 || icmp.type == 8).
As a result of the filter you will only see the ICMP requests and replies received via ERSPAN.
6. Initiate a VMotion of Windows 7 - A from one ESX host to the other by dragging the VM icon to the new ESX host. Observe that even during the VMotion, Wireshark keeps receiving the spanned traffic. Only while the VM named Windows 7 - A is stunned for a very brief moment (at around 78% progress) as part of the VMotion will you lose a minimal amount of packets (1-2). This is the moment when VMware briefly halts (stuns) all components such as CPU and I/O (NICs) and transfers control from the original VM to the VMotioned VM.
Congratulations! You have successfully monitored the traffic of a particular VM using ERSPAN. Furthermore, you saw that you can do this even across a VMotion.
Conclusion
You are now familiar with the Nexus 1000V. As you have experienced during the lab, the Nexus 1000V is based on three important pillars:
- Security
- Mobility of the network
- Non-disruptive operational model
In this lab you:
- Got familiar with the Cisco Nexus 1000V Distributed Virtual Switch for VMware ESX:
  o Installed and configured the Nexus 1000V
  o Added physical ESX hosts to the DVS
  o Attached Virtual Machines to the Distributed Virtual Switch
  o Tested the VMotion capability
- Familiarized yourself with advanced features of the Cisco Nexus 1000V:
  o IP-based access lists
  o ERSPAN sessions to troubleshoot VM traffic
  o Private VLANs
Feedback
We would like to improve this lab to better suit your needs. To do so, we need your feedback. Please take 5 minutes to complete the online feedback for this lab. Just click on the link below and answer the online questionnaire. Online Feedback
Lab proctors
Christian Elsen Kishan Pallapothu Cuong Tran
Revision: 1.1
Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 527-0883
VMware, Inc 3401 Hillview Ave Palo Alto, CA 94304 USA www.vmware.com Tel: 1-877-486-9273 or 650-427-5000 Fax: 650-427-5001
Copyright 2008. VMware, Inc. All rights reserved. Protected by one or more U.S. Patent Nos. 6,397,242, 6,496,847, 6,704,925, 6,711,672, 6,725,289, 6,735,601, 6,785,886, 6,789,156, 6,795,966, 6,880,022, 6,944,699, 6,961,806, 6,961,941, 7,069,413, 7,082,598, 7,089,377, 7,111,086, 7,111,145, 7,117,481, 7,149, 843, 7,155,558, 7,222,221, 7,260,815, 7,260,820, 7,269,683, 7,275,136, 7,277,998,7,277,999, 7,278,030, 7,281,102, 7,290,253, 7,356,679 and patents pending. Cisco, the Cisco logo, and Cisco Systems are registered trademarks or trademarks of Cisco Systems, Inc. and/or its affiliates in the United States and certain other countries. All other trademarks mentioned in this document or Website are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (0807R) 09/08