VMware NSX: Physical (VLAN) to Virtual (VXLAN) Bridging Configuration

I came across a scenario that required connecting traditional workloads on legacy VLANs to virtualized networks built on VXLAN, and thought I would write a quick blog post on the subject.

VMware NSX provides in-kernel software L2 bridging capabilities that allow you to connect VLAN-backed VMs to VMs on NSX-based logical networks (virtual wires).

Prior to NSX version 6.2, it was not possible to bridge a Logical Switch that was connected to a Distributed Logical Router: for that scenario it was required to connect the Logical Switch directly to an Edge Services Gateway.

With NSX 6.2, distributed logical routing and L2 bridging can co-exist on the same logical switch.


In my scenario, I have a database VM “AMS” which is connected to the VLAN-backed port group “VxRACK MGMT” with VLAN ID 36.


You can see that the database VM “AMS” is connected to the VxRACK MGMT port group:


And an application VM “App-Windows” is connected to the “App-Tier” VXLAN-backed logical switch, which is attached to a Distributed Logical Router (DLR).



To verify that “AMS” is isolated and cannot reach the application VM, let me try to ping the default gateway of the application VM.



The ping fails, which confirms that the VM is isolated and L2 bridging is not yet configured.

Now let’s configure NSX L2 bridging:

We will enable NSX L2 bridging between VLAN 36 and the “App-Tier” Logical Switch, so that the VM “AMS” will be able to communicate with the rest of the network. With NSX-V 6.2 it is now possible to have an L2 bridge and a Distributed Logical Router connected to the same Logical Switch. This is an important enhancement, as it simplifies the integration of NSX in brownfield environments as well as the migration from legacy to virtual networking.


Select the “App-Tier” logical switch and click OK:


Click on Distributed port group and select “VxRACK-MGMT” port group:

To enable the L2 bridging, click the Publish Changes button and wait until the page refreshes.


Verify the published configuration. You will notice the “Routing Enabled” message: it means that this L2 bridge is also connected to a Distributed Logical Router, which is an enhancement in NSX-V 6.2.
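For readers who prefer automation, the same bridge can also be configured through the NSX-V REST API by issuing a PUT against the DLR's bridging config endpoint. The sketch below only builds the XML request body; the edge ID, virtual wire ID, and dvPortgroup ID are placeholders for your environment, and the exact payload shape should be double-checked against the NSX 6.2 API guide.

```python
def bridge_payload(name: str, virtual_wire_id: str, dvportgroup_id: str) -> str:
    """Build the XML body for PUT /api/4.0/edges/{edge-id}/bridging/config
    (NSX-V 6.2). Object IDs are looked up from vCenter / NSX Manager."""
    return (
        "<bridges>"
        "<enabled>true</enabled>"
        "<bridge>"
        f"<name>{name}</name>"
        f"<virtualWire>{virtual_wire_id}</virtualWire>"
        f"<dvportGroup>{dvportgroup_id}</dvportGroup>"
        "</bridge>"
        "</bridges>"
    )

# Placeholder object IDs -- substitute the real ones from your environment.
body = bridge_payload("VLAN36-to-App-Tier", "virtualwire-5", "dvportgroup-36")
# PUT this body to https://<nsx-manager>/api/4.0/edges/<dlr-edge-id>/bridging/config
# with Content-Type: application/xml and NSX Manager credentials.
print(body)
```

This produces the same result as the Publish Changes workflow in the Web Client, which is handy when bridging many VLANs in a brownfield migration.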

Let’s verify L2 connectivity between the “AMS” VM, attached to VLAN 36, and the machines connected to the “App-Tier” Logical Switch (App-Windows). First, let me ping the default gateway of the “App-Tier” logical switch:


Boom… ping successful: we have verified connectivity between a VM attached to VLAN 36 and the Distributed Logical Router that is the default gateway of the network, through an L2 bridge provided by NSX!

Now let’s ping the Application VM “App-Windows” from Database VM “AMS” which is on VLAN 36:



NSX L2 bridging has been verified successfully. I hope you enjoyed the blog; if you think it’s worth sharing, please do. Keep learning and sharing knowledge.

VMware NSX 6.2 Installation and Configuration: A to Z

This is a long-pending series of blog posts on VMware NSX (6.2.2) installation and configuration that I wanted to share. Last month I installed NSX 6.2.2 in my lab, and this series captures my experience.

I have written 12 blog posts covering the complete procedure for NSX installation and configuration in a vSphere environment from scratch.

Below is the list of blog posts:


(1) VMware NSX Installation and Configuration Part 1 – Prerequisites for Deploying NSX in a vSphere Environment

(2) VMware NSX Installation and Configuration Part 2 – Deployment of the NSX Manager Virtual Appliance

(3) VMware NSX Installation and Configuration Part 3 – NSX Manager vCenter Integration, SSO, Syslog & License Configuration

(4) VMware NSX Installation and Configuration Part 4 – Deploy NSX Controller Cluster

(5) VMware NSX Installation and Configuration Part 5 – Exclude Virtual Machines from NSX Firewall Protection

(6) VMware NSX Installation and Configuration Part 6 – Prepare Host Clusters for NSX

(7) VMware NSX Installation and Configuration Part 7 – VXLAN Transport Parameters Configuration

(8) VMware NSX Installation and Configuration Part 8 – Creating a Logical Switch

(9) VMware NSX Installation and Configuration Part 9 – Adding a Distributed Logical Router

(10) VMware NSX Installation and Configuration Part 10 – Adding an Edge Services Gateway

(11) VMware NSX Installation and Configuration Part 11 – Configuring OSPF on a Logical (Distributed) Router

(12) VMware NSX Installation and Configuration Part 12 – Configure OSPF on an Edge Services Gateway


Hope you liked the posts; do share, comment, and like if you find them helpful. Till then, keep learning and sharing.




VMware NSX Installation and Configuration Part 7 – VXLAN Transport Parameters Configuration


1 In vCenter, navigate to Home > Networking & Security > Installation and select the Host Preparation tab.

2 Click Not Configured in the VXLAN column.

3 Set up logical networking. This involves selecting a VDS, a VLAN ID, an MTU size, an IP addressing mechanism, and a NIC teaming policy. The MTU for each switch must be set to 1550 or higher. By default, it is set to 1600.

If the vSphere distributed switch (VDS) MTU size is larger than the VXLAN MTU, the VDS MTU will not be adjusted down. If it is set to a lower value, it will be adjusted to match the VXLAN MTU. For example, if the VDS MTU is set to 2000 and you accept the default VXLAN MTU of 1600, no changes to the VDS MTU will be made. If the VDS MTU is 1500 and the VXLAN MTU is 1600, the VDS MTU will be changed to 1600.
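The MTU adjustment rule above amounts to a one-liner: the VDS MTU is raised to the VXLAN MTU when it is lower, and otherwise left alone. A minimal sketch (the function name is my own, not an NSX API):

```python
def effective_vds_mtu(vds_mtu: int, vxlan_mtu: int) -> int:
    """Model of the NSX behavior: the VDS MTU is raised up to the VXLAN
    MTU when it is lower, but a larger VDS MTU is never adjusted down."""
    return max(vds_mtu, vxlan_mtu)

# The two examples from the text:
print(effective_vds_mtu(2000, 1600))  # 2000 -- VDS MTU left untouched
print(effective_vds_mtu(1500, 1600))  # 1600 -- VDS MTU raised to match
```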




Configuring VXLAN results in the creation of new distributed port groups.


Assign a Segment ID Pool and Multicast Address Range:

VXLAN segments are built between VXLAN tunnel end points (VTEPs). A hypervisor host is an example of a typical VTEP. Each VXLAN tunnel has a segment ID. You must specify a segment ID pool for each NSX Manager to isolate your network traffic. If an NSX controller is not deployed in your environment, you must also add a multicast address range to spread traffic across your network and avoid overloading a single multicast address.


1 In vCenter, navigate to Home > Networking & Security > Installation and select the Logical Network Preparation tab.

2 Click Segment ID > Edit.

3 Enter a range of segment IDs (segment IDs must be 5000 or higher).



4 If any of your transport zones will use multicast or hybrid replication mode, add a multicast address or a range of multicast addresses.

Having a range of multicast addresses spreads traffic across your network, prevents the overloading of a single multicast address, and better contains BUM replication.
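The segment ID pool and multicast range can also be set through the NSX-V REST API. The sketch below only builds the XML request bodies; the endpoint paths reflect my reading of the NSX-V 6.2 API guide and should be verified against it, and the names and ranges shown are placeholders.

```python
def segment_pool_payload(name: str, begin: int, end: int) -> str:
    """XML body for POST /api/2.0/vdn/config/segments (segment ID pool)."""
    return (
        "<segmentRange>"
        f"<name>{name}</name>"
        f"<begin>{begin}</begin>"
        f"<end>{end}</end>"
        "</segmentRange>"
    )

def multicast_range_payload(desc: str, begin: str, end: str) -> str:
    """XML body for POST /api/2.0/vdn/config/multicastaddresses."""
    return (
        "<multicastRange>"
        f"<desc>{desc}</desc>"
        f"<begin>{begin}</begin>"
        f"<end>{end}</end>"
        "</multicastRange>"
    )

# Segment IDs must be 5000 or higher; the multicast range is only needed
# for multicast or hybrid control-plane modes.
seg = segment_pool_payload("Pool-1", 5000, 5999)
mc = multicast_range_payload("BUM replication range", "239.1.1.1", "239.1.1.100")
print(seg)
print(mc)
```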

Add a Transport Zone:

A transport zone controls which hosts a logical switch can reach. It can span one or more vSphere clusters. Transport zones dictate which clusters and, therefore, which VMs can participate in the use of a particular network.

An NSX environment can contain one or more transport zones based on your requirements. A host cluster can belong to multiple transport zones. A logical switch can belong to only one transport zone. NSX does not allow connection of VMs that are in different transport zones. The span of a logical switch is limited to a transport zone, so virtual machines in different transport zones cannot be on the same Layer 2 network.

A distributed logical router cannot connect to logical switches that are in different transport zones. After you connect the first logical switch, the selection of further logical switches is limited to those that are in the same transport zone. Similarly, an edge services gateway (ESG) has access to logical switches from only one transport zone.


1 In vCenter, navigate to Home > Networking & Security > Installation and select the Logical Network Preparation tab.

2 Click Transport Zones and click the New Transport Zone (+) icon.


3 In the New Transport Zone dialog box, type a name and an optional description for the transport zone.

4 Depending on whether you have a controller node in your environment, or you want to use multicast addresses, select the control plane mode:

  • Multicast: Multicast IP addresses in the physical network are used for the control plane. This mode is recommended only when you are upgrading from older VXLAN deployments. Requires PIM/IGMP in the physical network.
  • Unicast: The control plane is handled by an NSX controller. All unicast traffic leverages optimized headend replication. No multicast IP addresses or special network configuration is required.
  • Hybrid: Offloads local traffic replication to the physical network (L2 multicast). This requires IGMP snooping on the first-hop switch and access to an IGMP querier in each VTEP subnet, but does not require PIM. The first-hop switch handles traffic replication for the subnet.

5 Select the clusters to be added to the transport zone.
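A transport zone can likewise be created via the REST API (POST to /api/2.0/vdn/scopes). The sketch below only assembles the XML body; the payload shape and the control-plane mode strings (UNICAST_MODE, MULTICAST_MODE, HYBRID_MODE) follow my reading of the NSX-V 6.2 API guide, and the cluster MoRef IDs are placeholders.

```python
def transport_zone_payload(name: str, control_plane_mode: str,
                           cluster_ids: list) -> str:
    """XML body for POST /api/2.0/vdn/scopes (create transport zone).
    control_plane_mode: UNICAST_MODE, MULTICAST_MODE, or HYBRID_MODE."""
    clusters = "".join(
        f"<cluster><cluster><objectId>{c}</objectId></cluster></cluster>"
        for c in cluster_ids
    )
    return (
        "<vdnScope>"
        f"<name>{name}</name>"
        f"<clusters>{clusters}</clusters>"
        f"<controlPlaneMode>{control_plane_mode}</controlPlaneMode>"
        "</vdnScope>"
    )

# Placeholder cluster MoRef IDs -- look these up in your own vCenter.
tz = transport_zone_payload("TZ-Lab", "UNICAST_MODE", ["domain-c7", "domain-c9"])
print(tz)
```

Unicast mode is the usual choice when a controller cluster is deployed, since it needs no PIM or IGMP configuration in the physical network.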


That was it for the VXLAN configuration. In the next blog, I will cover the creation of a VXLAN-based logical switch.

VMware NSX Installation and Configuration Part 6 – Prepare Host Clusters for NSX

Host preparation is the process in which the NSX Manager

1) Installs NSX kernel modules on ESXi hosts that are members of vCenter clusters and

2) Builds the NSX control-plane and management-plane fabric. NSX kernel modules packaged in VIB files run within the hypervisor kernel and provide services such as distributed routing, distributed firewall, and VXLAN bridging capabilities.

To prepare your environment for network virtualization, you must install network infrastructure components on a per-cluster level for each vCenter server where needed. This deploys the required software on all hosts in the cluster. When a new host is added to this cluster, the required software is automatically installed on the newly added host.
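Host preparation can also be triggered per cluster through the REST API (POST to /api/2.0/nwfabric/configure). The sketch below only builds the XML body; the endpoint and payload shape reflect my reading of the NSX-V 6.2 API guide, and the cluster MoRef ID is a placeholder.

```python
def host_prep_payload(cluster_moid: str) -> str:
    """XML body for POST /api/2.0/nwfabric/configure, which installs the
    NSX VIBs on every host in the given vCenter cluster."""
    return (
        "<nwFabricFeatureConfig>"
        "<resourceConfig>"
        f"<resourceId>{cluster_moid}</resourceId>"
        "</resourceConfig>"
        "</nwFabricFeatureConfig>"
    )

# Placeholder cluster MoRef ID -- substitute your own.
prep = host_prep_payload("domain-c7")
# POST this body to https://<nsx-manager>/api/2.0/nwfabric/configure
# with Content-Type: application/xml and NSX Manager credentials.
print(prep)
```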



1 In vCenter, navigate to Home > Networking & Security > Installation and select the Host Preparation tab.

2 For all clusters that will require NSX logical switching, routing, and firewalls, click the gear icon and click Install.


When the installation is complete, the Installation Status column displays the installed version (6.2) along with an Uninstall option, and the Firewall column displays Enabled. Both columns have a green check mark. If you see Resolve in the Installation Status column, click Resolve and then refresh your browser window.

The following VIBs are installed and registered with all hosts within the prepared cluster:

  • esx-vsip
  • esx-vxlan

To verify, SSH to each host and run:

esxcli software vib list | grep esx

In addition to listing the VIBs, this command shows the version of each VIB installed.
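When verifying many hosts, it can help to parse that command's output programmatically. The sample lines below are illustrative only (the version strings are placeholders, not real build numbers); the column layout of `esxcli software vib list` is Name, Version, Vendor, Acceptance Level, Install Date.

```python
def parse_vib_list(output: str) -> dict:
    """Parse `esxcli software vib list | grep esx` output into
    a {vib_name: version} mapping, keeping only the NSX esx-* VIBs."""
    vibs = {}
    for line in output.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0].startswith("esx-"):
            vibs[parts[0]] = parts[1]
    return vibs

# Illustrative sample output (placeholder version/date values):
sample = """\
esx-vsip   6.0.0-0.0.0000000  VMware  VMwareCertified  2016-05-20
esx-vxlan  6.0.0-0.0.0000000  VMware  VMwareCertified  2016-05-20"""

vibs = parse_vib_list(sample)
print(sorted(vibs))  # confirm both NSX VIBs are present
```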


In the next post, we will look into the VXLAN configuration parameters.