Part 2 – How to list vSwitch “MAC Address table” on ESXi host?
Another way to list the MAC addresses of the ports open on the vSwitches of an ESXi host is based on the net-stats tool. Use this one-liner:
for VSWITCH in $(vsish -e ls /net/portsets/ | cut -c 1-8); do net-stats -S $VSWITCH | grep \{\"name | sed 's/[{,"]//g' | awk '{$9=$10=$11=$12=""; print $0}'; done
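For readability, the same one-liner can also be written out as a short script. This is just a restatement of the command above; the awk part blanks out columns 9-12 of each matching line, exactly as the original command does:
#!/bin/sh
# List the port entries reported by net-stats for every portset (vSwitch),
# stripping JSON punctuation and dropping the trailing columns
for VSWITCH in $(vsish -e ls /net/portsets/ | cut -c 1-8)
do
    net-stats -S $VSWITCH | grep '{"name' | sed 's/[{,"]//g' | awk '{$9=$10=$11=$12=""; print $0}'
done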
This is not the last word on the subject 🙂
Part 1 – How to list vSwitch “MAC Address table” on ESXi host?
Sometimes you need to list the MAC addresses logged on a host's vSwitches in order to eliminate duplicate VM MAC addresses.
- Create a shell script:
- Copy and paste the code listed below:
- Change the file’s permissions
- Run the script
vi mac_address_list.sh
#!/bin/sh
#vmrale
# Walk every portset (vSwitch) known to vsish and print the client name
# and unicast MAC address of each connected port
for VSWITCH in `vsish -e ls /net/portsets/ | cut -c 1-8`
do
    echo $VSWITCH
    for PORT in `vsish -e ls /net/portsets/$VSWITCH/ports | cut -c 1-8`
    do
        CLIENT_NAME=`vsish -e get /net/portsets/$VSWITCH/ports/$PORT/status | grep clientName | uniq`
        ADDRESS=`vsish -e get /net/portsets/$VSWITCH/ports/$PORT/status | grep unicastAdd | uniq`
        echo -e "\t$PORT\t$CLIENT_NAME\t$ADDRESS"
    done
done
chmod 755 mac_address_list.sh
./mac_address_list.sh
Simple, but useful! 🙂
… but this is not the only possible method 🙂
vSphere 6.5 – What’s new in networking
In this article I will try to review all the new networking features.
1. vmknic gateway
- Each VMkernel port can have its own gateway.
- This makes it easier for vSphere features to function seamlessly.
- It eliminates the need for adding and maintaining static routes.
Before vSphere 6.5 only one default gateway was allowed for all VMkernel ports on an ESXi host. vSphere features such as DRS, iSCSI and vMotion that leverage VMkernel ports were constrained by this limitation: VMkernel ports in a subnet other than the one with the default gateway were not routable without static routes. Those static routes had to be created manually and were hard to maintain.
vSphere 6.5 provides the capability to have a separate default gateway for every VMkernel port. This simplifies management of VMkernel ports and eliminates the need for static routes.
Prior to vSphere 6.5, VMware services like DRS, iSCSI, vMotion and provisioning had to share that single gateway, and one needed to add static routes on all hosts to get around the problem. Managing these routes was a cumbersome process and did not scale.
vSphere 6.5 lets different services use different default gateways. This makes it easy for end users to consume these features without adding static routes, and it completely eliminates the need for static routes for all VMkernel based services, making the setup simpler and more scalable.
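As a quick sketch of what this looks like from the ESXi shell, assuming a 6.5 build that exposes the gateway option of esxcli network ip interface ipv4 set (vmk2 and all addresses below are placeholder lab values):
# Give vmk2 a static address and its own default gateway (6.5 and later)
esxcli network ip interface ipv4 set --interface-name=vmk2 --type=static --ipv4=10.10.20.11 --netmask=255.255.255.0 --gateway=10.10.20.1
# Review the per-interface IPv4 configuration, including the gateway
esxcli network ip interface ipv4 get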
2. SR-IOV provisioning
Prior to vSphere 6.5, the VM provisioning workflow for SR-IOV devices required the user to manually assign the SR-IOV NIC. This made VM provisioning operations inflexible and not amenable to automation at scale. In vSphere 6.5, SR-IOV devices can be added to virtual machines like any other device, making them easier to manage and automate.
3. Support for ERSPAN
ERSPAN mirrors traffic on one or more “source” ports and delivers the mirrored traffic to one or more “destination” ports on another switch. vSphere 6.5 includes support for the ERSPAN protocol.
4. Improvements in the datapath
vSphere 6.5 brings datapath improvements to handle heavy load. In order to process large numbers of packets, the CPU needs to be used optimally; in 6.5, ESXi hosts leverage CPU resources more efficiently in order to maximize the packet rate of VMs.
Where are the improvements being made?
- VMXNET3 optimization
  - Using copy TX for small message sizes (<= 256 B)
  - Optimized usage of pinned memory
- Physical NIC improvements
  - Native driver support for Intel cards (removes the overhead of translating from VMkernel to VMKLinux data structures)
- CPU scheduling improvements
  - Up to 8 separate threads can be created per vNIC
  - To enable this at the VM level, add ethernetX.ctxPerDev = "3" to the vmx file (a sample entry is shown below)
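As an illustration only, the entry for a VM's first vNIC would look like this in the .vmx file (ethernet0 is just an example device name; the setting can also be added through the VM's advanced configuration parameters):
ethernet0.ctxPerDev = "3"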
Summary:
- Optimizing code to improve efficiency
- Allowing the ability to increase thread count for networking
- Introducing support for more native drivers (Intel)
- VMXNET3 enhancements
How to monitor the virtual network – a story about NetFlow in a vSphere environment
Before we start talking about NetFlow configuration on VMware vSphere, let's go back to basics and review the protocol itself. NetFlow was originally developed by Cisco and has become a reasonably standard mechanism for network analysis. NetFlow collects network traffic statistics on designated interfaces. It is commonly used in the physical world to gain visibility into traffic and to understand who is sending what, and where to.
NetFlow comes in a variety of versions, from v1 to v10. VMware uses the IPFIX version of NetFlow, which is version 10. Each NetFlow monitoring environment needs an exporter (the device sending NetFlow flows), a collector (the main component) and, of course, some network to monitor and analyze 🙂
Below you can see a basic environment diagram:
We can describe a flow as a sequence of TCP/IP packets (in a single direction) that have the following in common:
- Input interface
- Source IP
- Destination IP
- IP protocol (e.g. TCP or UDP)
- Source Port (TCP/UDP)
- Destination Port (TCP/UDP)
- IP ToS (Type of Service)
Note: vSphere 5.0 uses NetFlow version 5, while vSphere 5.1 and beyond use IPFIX (version 10).
OK, we know that a Distributed Virtual Switch is needed to configure NetFlow on vSphere, but what about the main component, the NetFlow collector? As usual we have a couple of options, which we can roughly divide into commercial software with fancy graphical interfaces and open source stuff for admins who still like the good old CLI 🙂
Below I will show simple implementation steps with examples of both approaches:
I) ManageEngine NetFlow Analyzer v12.2 – more about the software at https://www.manageengine.com/products/netflow/?gclid=CP3HlJbyv9ACFSQz0wod_UcDCw. My lab VM setup:
- Guest OS: Windows 2008 R2
- 4GB RAM
- 2vCPU
- 60 GB HDD
- vNIC interface connected to ESXi management network
Installation (using the embedded database, just for demo purposes) is really simple and straightforward. Let's start by launching the installer:
- accept license agreements
- choose installation folder on vm hdd
- choose the installation component option – for this demo we go with a simple environment with only one collector server; central reporting is not necessary
- choose the web server and collector service TCP/IP ports
- provide communication details – again, in this demo we have all components on one server and can simply go with localhost
- optional – configure proxy server details
- select the database – for this demo I used the embedded PostgreSQL, but if you choose a MS SQL database remember about the ODBC configuration
- installation is quite fast – a couple more minutes and the solution will be ready and available to start work:
… The web client, much like VMware's, needs a couple of CPU cycles to start 🙂
… and finally we can see the fancy ManageEngine NetFlow collector:
II) Open source nfdump tool – nfdump is distributed under the BSD license and can be downloaded at http://sourceforge.net/projects/nfdump/. My lab VM setup:
- Guest OS: Debian 8.6
- 4GB RAM
- 2vCPU
- 60 GB HDD
- vNIC interface connected to ESXi management network
- We need to start by adding some sources to our Debian distribution:
- Install the nfdump package from the CLI:
- Run a simple flow capture to verify that the collector is running and creating output flow statistics files (you can see that I use the same port 9995 and a folder on my desktop as the output destination; a rough sketch of these commands follows after this list):
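A rough sketch of those CLI steps on the Debian collector (nfdump comes from the standard repositories; port 9995 matches the lab value above, while the output folder is just an example path):
# Install the nfdump tools
apt-get update
apt-get install nfdump
# Start a collector daemon listening on port 9995, writing capture files to a local folder
nfcapd -D -p 9995 -l /root/netflow
# Read the captured flow files back later
nfdump -R /root/netflow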
OK, now it is time to go back to vSphere and configure the DVS to send network traffic to the collector:
- IP Address: the IP address of the NetFlow collector.
- Port: the port used by the NetFlow collector.
- Switch IP Address: this one can be confusing – by assigning an IP address here, the NetFlow collector will treat the VDS as one single entity. It does not need to be a valid, routable IP; it is merely used as an identifier.
- Active flow export timeout in seconds: the amount of time that must pass before the switch fragments the flow and ships it off to the collector.
- Idle flow export timeout in seconds: similar to the active flow timeout, but for flows that have entered an idle state.
- Sampling rate: this determines the interval of packets to collect. By default the value is 0, meaning collect all packets. If you set the value to something other than 0, it will collect every Xth packet.
- Process internal flows only: enabling this ensures that the only flows collected are ones that occur between VMs on the same host.
And enable it at the designated port group level:
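To double check from the ESXi shell that the IPFIX settings really reached the host, the unsupported net-dvs tool can be used. A sketch only, with the caveat that the exact property names in its output vary between builds:
# Dump the distributed switch configuration known to this host and pick out the IPFIX (NetFlow) properties
net-dvs -l | grep -i ipfix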
Finally, we can create a simple lab scenario and capture some FTP flow statistics between two VMs on different ESXi hosts:
The VMs are running in a dedicated VLAN on the same DVS port group; the collector is running on the management network to communicate with vCenter and the ESXi hosts. I used an FTP connection to generate traffic between the VMs. Below are example outputs from the two collectors (the tests were run separately, as both collectors share the same IP):
FTP client on the first VM:
FTP server on the second VM:
flow statistics example from nfdump:
flow statistics from ManageEngine:
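For reference, a minimal nfdump query that narrows the stored flows down to the FTP control traffic from this test could look like the line below (the capture directory is the example path from the collector sketch earlier; the filter uses nfdump's tcpdump-like syntax):
# Show only TCP flows to or from port 21 (the FTP control connection)
nfdump -R /root/netflow 'proto tcp and port 21'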
Network virtualization for Dummies – from VMware
VMware shared a free ebook – Network Virtualization for Dummies. It's the next book in the “for Dummies” series. The main goal of the series is to describe technical topics as clearly and simply as possible. I haven't read this one yet, but “Virtualization for Dummies” was quite good in my opinion.
If you are keen on network virtualization topic, I strongly encourage you to download it here.