How The Nicira NVP ESXi vApp Works

Earlier this week, I posted a primer on the Nicira Network Virtualization Platform (NVP). One thing William Lam pointed out, which I have since noted in that post, is that the Open vSwitch for ESXi is not “integrated” into the hypervisor.  This is in fact true, but there is an Open vSwitch “vApp”, which is really a Virtual Machine you build using an ISO image from Nicira.  I wanted to explain briefly how this works and what you need to think about when deploying the current version of the Open vSwitch for ESXi.  Bear in mind this is how it is done today with NVP 2.2.1, so I cannot speak to what changes will come in the future; this is just to show how it works in the current release.

Open vSwitch vApp Considerations

  1. The Open vSwitch vApp needs to be created and placed 1:1 with the ESXi hypervisors.  Simply put, if you have 10 ESXi hypervisors, each one needs a copy of the OVS vApp running on it.  This Virtual Machine also needs to be EXCLUDED from DRS migrations and “pinned” to its respective ESXi hypervisor.
  2. The Open vSwitch vApp needs at least three network adapters for the various connections: Management, Data Tunnel, and Trunk Access
  3. The Open vSwitch Trunk Port uses Promiscuous mode to see any Virtual Machines attached to the vSwitch on the Virtual Machine Integration Bridge port group
  4. All Virtual Machines connect to the same Port Group that acts as an Integration Bridge for NVP
  5. A Read Only account needs to be established on each ESXi host so the OVS vApp can be configured to connect to the ESXi SDK and gather information on the vSphere Switches and Virtual Machines (a sketch of this kind of SDK query follows this list)
  6. There are recommended configurations for this Virtual Machine in order to maintain performance, and I would even add in some of my own, like Memory Reservations, to ensure that the OVS vApp gets the physical resources it needs
  7. The Current Nicira NVP OVS vApp is supported and works with ESXi 5.0 and 5.0U1 at this time.
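To make item 5 a little more concrete, below is a minimal sketch of the kind of read-only SDK query the OVS vApp depends on, written with the pyVmomi Python bindings for the vSphere API.  The host name, the “ovs-readonly” account, and the password are illustrative assumptions, not values from Nicira’s documentation, and this is not the vApp’s actual code.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect

    # Lab use only: skip certificate verification for the self-signed ESXi cert.
    # Host name and account are assumptions for illustration.
    context = ssl._create_unverified_context()
    si = SmartConnect(host="esxi01.lab.local", user="ovs-readonly",
                      pwd="secret", sslContext=context)
    content = si.RetrieveContent()

    # Connecting directly to an ESXi host exposes a single datacenter and host;
    # this assumes that simple standalone-host inventory layout.
    host = content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]

    # Read-only view of the host's virtual switches and port groups
    for vswitch in host.config.network.vswitch:
        print("vSwitch:", vswitch.name)
    for pg in host.config.network.portgroup:
        print("Port group:", pg.spec.name, "VLAN:", pg.spec.vlanId)

    # Read-only view of the Virtual Machines registered on this host
    for vm in host.vm:
        print("VM:", vm.name)

    Disconnect(si)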

I have illustrated the Open vSwitch vApp Connectivity below.

Key Things to Point Out:

  • vSphere DVS: The reason for using two separate distributed switches is that one does not require ANY uplinks, as it is just a connection point for the Virtual Machines and the OVS, while the other provides physical connectivity to the IP Network
    • NOTE:  vSphere 5.0 and 5.1 currently have a bug where VMs on a DVS with no uplinks cannot vMotion.  This is being addressed in an upcoming patch release, so for vMotion to work you do need uplinks, but they can later be removed.  These uplinks can and should be Access ports carrying NO VLANs, just to meet this requirement for vMotion
  • VM Port Group:  This is where ALL Virtual Machines connect for access to the NVP fabric.  This is the same as the “Integration Bridge” used in Xen and KVM.
  • Trunk Port Group: This port group is set to VLAN Trunk 0-4094 as well as Promiscuous mode.  This is how the OVS interrogates the vSphere DVS for the Virtual Machines that are running so it can assign NVP Interfaces to them.  Currently this is a requirement, but it is also why I would, at some point, use a separate DVS with no uplinks
  • Management Port Group:  This is for the OVS Management IP address connectivity
  • Data Port Group:  This is the interface that will establish the encapsulation tunnels to other transport nodes.
  • Virtual Machine Switch Ports:  Each Virtual Machine’s port is configured to be blocked on the vSphere distributed switch, and each one requires a unique per-virtual-machine port VLAN.  I will cover this further in a configuration post with some pre-configuration options so the ports are already pre-configured for any virtual machines that are attached.  It seems most efficient to pre-allocate the VLANs per port and then unblock all the ports (see the sketch following this list).
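To illustrate that last bullet, here is a hedged pyVmomi sketch of pre-setting a VLAN override on a single distributed switch port and unblocking it.  The vCenter name, credentials, the DVS name “NVP-DVS”, the port key “10”, and VLAN 101 are all assumptions for illustration; the real values depend on your environment.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Lab use only: skip certificate verification
    context = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.lab.local", user="administrator",
                      pwd="secret", sslContext=context)
    content = si.RetrieveContent()

    # Find the DVS by name (the name "NVP-DVS" is an assumption)
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True)
    dvs = next(d for d in view.view if d.name == "NVP-DVS")
    view.Destroy()

    # Fetch the single port to pre-configure (port key "10" is an assumption)
    port = dvs.FetchDVPorts(vim.dvs.PortCriteria(portKey=["10"]))[0]

    # Override the VLAN on this one port and unblock it
    setting = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
    setting.vlan = vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
        vlanId=101, inherited=False)
    setting.blocked = vim.BoolPolicy(value=False, inherited=False)

    spec = vim.dvs.DistributedVirtualPort.ConfigSpec(
        operation="edit", key=port.key,
        configVersion=port.config.configVersion, setting=setting)
    dvs.ReconfigureDVPort_Task(port=[spec])

    Disconnect(si)

In practice you would repeat this for one port per Virtual Machine, which is why pre-allocating the VLANs per port up front and then unblocking them all looks like the most efficient approach.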

The Data Path

As you can guess, with this current implementation the Virtual Machine data actually flows through the VM Port Group to the Trunk Port Group and ultimately out the Data Port Group through the established tunnel to another Transport Node, so it is in fact passing through the Open vSwitch.  This is functional, and I do have it working in my home lab.  It also means that moving Virtual Machines from standard vSphere networking to Nicira NVP may not be that complicated once the fabric is in place.  In theory it could be as easy as changing the Virtual Machine’s Port Group connection (sketched below), after which the Virtual Machine will be communicating over the NVP Fabric.  This assumes the other parts of the Nicira Fabric and the other vSphere DVS requirements are configured first, but I can already see a way to migrate from one network to the other.  I may even record a video of this in my lab at some point to show that it can be done.
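For what that Port Group change could look like in practice, here is a minimal pyVmomi sketch that points a Virtual Machine’s first virtual NIC at a distributed port group.  The vCenter name, credentials, the VM name “web01”, and the port group name “NVP-Integration-Bridge” are assumptions for illustration only, not names from Nicira or VMware documentation.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Lab use only: skip certificate verification
    context = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.lab.local", user="administrator",
                      pwd="secret", sslContext=context)
    content = si.RetrieveContent()

    def find_by_name(vimtype, name):
        """Return the first inventory object of the given type with the given name."""
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vimtype], True)
        try:
            return next(obj for obj in view.view if obj.name == name)
        finally:
            view.Destroy()

    vm = find_by_name(vim.VirtualMachine, "web01")
    pg = find_by_name(vim.dvs.DistributedVirtualPortgroup, "NVP-Integration-Bridge")

    # Take the VM's first virtual NIC and point it at the integration bridge
    # port group on the distributed switch
    nic = next(dev for dev in vm.config.hardware.device
               if isinstance(dev, vim.vm.device.VirtualEthernetCard))
    nic.backing = vim.vm.device.VirtualEthernetCard.DistributedVirtualPortBackingInfo(
        port=vim.dvs.PortConnection(
            portgroupKey=pg.key,
            switchUuid=pg.config.distributedVirtualSwitch.uuid))

    change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.edit, device=nic)
    vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))

    Disconnect(si)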


8 comments

  1. Chris,

    Probably worth pointing out that the links between VMs and the VM port group are configured with individual VLAN IDs per VM. This is not obvious but is critical for correct separation of virtual network segments.

    — Dmitri

  2. Gangadhar Singh

    Could you please suggest the recommended configuration for OVSvAPP for optimal performance?

    • Unfortunately I have not worked with it for some time so I cannot give any thoughts on it. Also I am not sure if and how much it has changed since I played with it. You might want to reach out to one of the NSX folks to see what their opinion is. Cheers!
