The Nicira NVP Component Architecture

I thought it would be useful to walk through the actual components that go into a Nicira NVP deployment.  This is just meant to be a rundown of the various parts needed to deploy the Nicira NVP infrastructure.  In doing a few VMware SE knowledge transfer sessions, things certainly seemed to “click” with people once they understood what goes into deploying the infrastructure, even before getting into how it works.  The icons used below are ones you may see in presentations; they are the ones Nicira has used in the past to represent the various components.

Controller Cluster – The Nicira NVP controller cluster is made up of three servers that essentially run the Nicira NVP APIs that the Cloud Management System (CMS) interfaces with.  With the current release there are always three of these deployed, and the database information is replicated between them on a regular basis.  For maintenance, one controller node at a time can be shut down while the CMS still maintains access to the APIs.  It should be noted that the controller cluster simply maintains and programs the various Open vSwitches; it never actively participates in the traffic flows of Virtual Machines between transport nodes.
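
To make that concrete, here is a minimal, hedged sketch of how a CMS might talk to the controller cluster over its REST API.  The host name, credentials, authentication style, endpoint path, and response fields shown are assumptions for illustration, not documented NVP values; substitute whatever your deployment actually uses.

```python
# Hypothetical sketch of a CMS querying the NVP controller cluster REST API.
# The host, credentials, auth method, and endpoint path below are illustrative
# assumptions, not documented NVP values.
import requests

NVP_CONTROLLER = "https://nvp-controller-01.example.com"  # any reachable cluster node
API_USER = "admin"
API_PASSWORD = "changeme"

session = requests.Session()
session.auth = (API_USER, API_PASSWORD)
session.verify = False  # lab only; use proper certificates in production

# Example: list the logical switches known to the control cluster.
# ("/ws.v1/lswitch" is the style of path NVP uses; treat it as an assumption here.)
response = session.get(f"{NVP_CONTROLLER}/ws.v1/lswitch")
response.raise_for_status()

for lswitch in response.json().get("results", []):
    print(lswitch.get("display_name"), lswitch.get("uuid"))
```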
Open vSwitch (OVS) – Open vSwitch is the intelligent edge component that is integrated into various hypervisors like XenServer and KVM, and it is what is programmatically controlled by the controller cluster.  Once it has been registered with the controller cluster, the OVS is aware of each controller cluster node, so if one node becomes unavailable it can still reach the others.
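
As a rough illustration of that registration idea, the sketch below shells out to ovs-vsctl to point a hypervisor's OVS instance at all of the controller cluster nodes.  The IP addresses and port are placeholders, and the full NVP registration workflow involves more than this one command, so treat it as an assumption-laden example rather than the documented procedure.

```python
# Illustrative only: point a hypervisor's Open vSwitch at the NVP controllers.
# The controller IPs and the port number below are placeholder assumptions.
import subprocess

CONTROLLERS = ["192.0.2.11", "192.0.2.12", "192.0.2.13"]

# ovs-vsctl set-manager accepts multiple targets, so OVS knows every cluster
# node and can still reach the others if one becomes unavailable.
targets = [f"ssl:{ip}:6632" for ip in CONTROLLERS]
subprocess.run(["ovs-vsctl", "set-manager", *targets], check=True)

# Show the resulting configuration for verification.
print(subprocess.run(["ovs-vsctl", "show"], check=True,
                     capture_output=True, text=True).stdout)
```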
NVP OVS vApp – This is the Virtual Machine for ESXi deployments.  It is essentially Open vSwitch installed into a Virtual Machine running on ESXi for the purposes of utilizing the overlay network.  One OVS vApp is required for each ESXi hypervisor in a cluster, and it needs to be excluded from any DRS rules. I have actually documented more detail on how the OVS vApp works here.
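
One way to keep DRS from touching the vApp is a per-VM DRS override.  Below is a rough pyVmomi sketch of that approach; the vCenter host, credentials, cluster, and VM names are placeholders, and disabling the per-VM DRS automation is only one way to accomplish the exclusion, so take it as an assumption-based example.

```python
# Rough pyVmomi sketch: disable DRS automation for an OVS vApp VM so DRS leaves
# it pinned to its host. Host, credentials, and object names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Walk the inventory for the first managed object of this type with this name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

cluster = find_by_name(vim.ClusterComputeResource, "Compute-Cluster-01")
ovs_vapp = find_by_name(vim.VirtualMachine, "nvp-ovs-vapp-esx01")

# Add a per-VM DRS override with DRS disabled for this one VM.
vm_override = vim.cluster.DrsVmConfigSpec(
    operation=vim.option.ArrayUpdateSpec.Operation.add,
    info=vim.cluster.DrsVmConfigInfo(key=ovs_vapp, enabled=False))
spec = vim.cluster.ConfigSpecEx(drsVmConfigSpec=[vm_override])
cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)

Disconnect(si)
```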
Service Node – The service node is used to offload various processes from the hypervisor nodes.  To be more specific, broadcast, multicast, and unknown unicast traffic flows via the Service Node.  It can also initiate and terminate IPsec tunnels to remote gateways, serve as a relay point for hypervisor-to-hypervisor traffic when the hypervisors are not part of the same transport zone, and act as a termination point for Multi-Domain Interconnect.  You can have multiple Service Nodes for High Availability, and NVP will utilize them accordingly.
NVP Manager – This is the management server with a basic web interface used mainly to troubleshoot and verify connections.  It also has an API Inspector that can be used not only to see the various API commands available, but to actually execute them from the inspector.  The Web UI essentially uses the REST API calls on the back-end for everything you do within it manually.  However, from what I have seen the real power is in the automation, so long-term the NVP Manager is something you want to use just for troubleshooting.
L2 or L3 Gateway – This component is a single installation, but its identity can vary based on how it is configured and added to the Nicira NVP infrastructure.  What makes it unique is that either a Layer 2 or Layer 3 Gateway is also configured as a Gateway Service within NVP, with one or more gateways behind that service to provide High Availability.  This also means that tenants can share the gateway services without needing individual gateways per tenant.  The Layer 3 Gateway is exactly what you think it is: a NAT router between networks.  Additionally, the Layer 2 Gateway serves many interesting use cases I will talk about in another post.
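
To illustrate the Gateway Service idea, here is a hedged sketch of posting a gateway service definition that references two gateway nodes, so the one shared service has High Availability.  The endpoint path, payload field names, and UUIDs are assumptions made for illustration, not the documented NVP schema.

```python
# Hypothetical: define a Layer 2 Gateway Service backed by two gateway nodes
# for HA. The endpoint path and field names are assumptions, not the NVP schema.
import requests

NVP_CONTROLLER = "https://nvp-controller-01.example.com"

gateway_service = {
    "display_name": "l2-gateway-service-01",
    "type": "L2GatewayServiceConfig",
    # Two gateway nodes back the one service, so tenants can share it and the
    # service survives the loss of a single gateway.
    "gateways": [
        {"transport_node_uuid": "11111111-1111-1111-1111-111111111111"},
        {"transport_node_uuid": "22222222-2222-2222-2222-222222222222"},
    ],
}

with requests.Session() as s:
    s.auth = ("admin", "changeme")
    s.verify = False  # lab only
    r = s.post(f"{NVP_CONTROLLER}/ws.v1/gwservice", json=gateway_service)
    r.raise_for_status()
    print("Created gateway service:", r.json().get("uuid"))
```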

Depending on the design, most or all of these components will be required to achieve the right architecture for you.  For example, maybe you can do everything you want with the Layer 2 Gateway Service without the need for Layer 3, or vice versa, so you may only need one or the other.  I will be posting a separate article on some of the interesting ways the Layer 2 Gateway can be deployed and used once I can get some more gear set up in my lab.

