One of the members of the VMware User Community (VMTN) inspired me to build a configuration in which two VMs use PVRDMA network adapters to communicate. The goal was to establish communication between the VMs without Host Channel Adapter (HCA) cards installed in the hosts. This is possible, as stated in the VMware vSphere documentation:
For virtual machines on the same ESXi hosts or virtual machines using the TCP-based fallback, the HCA is not required.
For this task, I prepared one ESXi host (6.7U1) managed by a vCSA (6.7U1). One of the requirements for PVRDMA is a vSphere Distributed Switch (vDS). First, I configured a dedicated vDS for RDMA communication: a basic vDS configuration (DSwitch-DVUplinks-34) with a default port group (DPortGroup), equipped with just one uplink.
In vSphere, a virtual machine can use a PVRDMA network adapter to communicate with other virtual machines that have PVRDMA devices. The virtual machines must be connected to the same vSphere Distributed Switch.
Second, I created a VMkernel port (vmk1) dedicated to RDMA traffic in this DPortGroup, without assigning an IP address to the port (No IPv4 settings).
Third, I set the advanced system setting Net.PVRDMAVmknic on the ESXi host to a value pointing to the VMkernel port (vmk1), as described in "Tag a VMkernel Adapter for PVRDMA" in the documentation.
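The same tagging can also be done from the ESXi shell with esxcli. This is a sketch, assuming the advanced option path is /Net/PVRDMAVmknic and vmk1 is the port created above:

```shell
# Tag vmk1 for PVRDMA via the host's advanced settings (run in the ESXi shell)
esxcli system settings advanced set -o /Net/PVRDMAVmknic -s "vmk1"

# Verify the value that was set
esxcli system settings advanced list -o /Net/PVRDMAVmknic
```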
Then, I enabled the "pvrdma" firewall rule on the host in the Edit Security Profile window, as described in "Enable the Firewall Rule for PVRDMA" in the documentation.
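The firewall rule can also be enabled from the ESXi shell. A sketch, assuming the ruleset is named pvrdma, as it appears in the Security Profile:

```shell
# Allow PVRDMA traffic through the ESXi firewall
esxcli network firewall ruleset set -e true -r pvrdma

# Confirm the ruleset is now enabled
esxcli network firewall ruleset list -r pvrdma
```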
The next steps concern the configuration of the VMs. First, I created a new virtual machine. Then I added another network adapter to it and connected it to DPortGroup on the vDS. For the adapter type of this network adapter I chose PVRDMA, and for the device protocol, RoCE v2 (see "Assign a PVRDMA Adapter to a Virtual Machine" in the documentation).
Then, I installed Fedora 29 on the first VM. I chose it because there are many tools available for easily testing RDMA communication. After the OS installation, another network interface showed up in the VM; I addressed it in a different IP subnet. Each VM uses two network interfaces: the first for SSH access and the second for testing RDMA communication.
Then I set “Reserve all guest memory (All locked)” in VM’s Edit Settings window.
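For reference, the same setting can be expressed directly in the VM's .vmx file. A config sketch; to my knowledge, sched.mem.pin is the option behind the "Reserve all guest memory (All locked)" checkbox:

```
sched.mem.pin = "TRUE"
```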
The VM was now configured enough – at the infrastructure layer – to communicate using RDMA.
To test it, I had to install the appropriate tools, which I found in the rdma-core project on GitHub. I installed them using the procedure described on that project's page.
dnf install cmake gcc libnl3-devel libudev-devel pkgconfig valgrind-devel ninja-build python3-devel python3-Cython
Next, I installed the git client using the following command.
yum install git
Then I cloned the git project to a local directory.
mkdir /home/rdma
git clone https://github.com/linux-rdma/rdma-core.git /home/rdma
I built it.
cd /home/rdma
bash build.sh
Afterwards, I cloned the VM to have a communication partner for the first one. After cloning, I reconfigured the appropriate IP address in the clone.
Finally I could test the communication using RDMA.
On the VM that functioned as a server, I ran a listener service on the interface mapped to the PVRDMA virtual adapter:
cd /home/rdma/build/bin
./rping -s -a 192.168.0.200 -P
On the client VM, I ran this command to connect it to the server:
./rping -c -I 192.168.0.100 -a 192.168.0.200 -v
It was working beautifully!
Sometimes you want to shut down a vCSA or PSC gracefully, but you don't have access to the GUI through the vSphere Client or VAMI.
How can you do it from the CLI? I'm going to show you right now using dcli, because I'm exploring the potential of this tool and can't get enough.
- Open an SSH session to the vCSA and log in as the root user.
- Run dcli in interactive mode.
- Use the shutdown API call to shut down the appliance, giving a delay value (0 means now) and a description of the task.
- Enter an appropriate administrator user name, e.g. email@example.com, and a password.
- Decide whether you want to save the credentials in the credstore. You can enter 'y' for yes.
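Put together, the session looks roughly like this. This is a sketch assuming the 6.7 appliance management API, where the shutdown operation is com.vmware.appliance.shutdown.poweroff:

```shell
# Start dcli in interactive mode on the vCSA
dcli +interactive

# Inside the interactive shell: power off the appliance now (delay 0),
# with a short description of the task
com vmware appliance shutdown poweroff --delay 0 --reason "planned maintenance"
```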
Wait until the appliance goes down. 🙂
I know it's not the quickest way, but the point is to have fun.
Orphaned VMs in the vCenter inventory are an unusual sight in an experienced administrator's Web/vSphere Client window. But in large environments, where many people manage hosts and VMs, it happens sometimes.
You probably know how to get rid of them using the traditional methods described in VMware KB articles and by other well-known bloggers, but there's quite an elegant new method using dcli.
This handy tool is available in the vCLI package, in the 6.5/6.7 vCSA shell, and at the vCenter Server on Windows command prompt. Dcli uses APIs to give an administrator an interface for calling methods to perform or automate tasks.
How to use it to remove orphaned VMs from vCenter inventory?
- Open an SSH session to the vCSA and log in as the root user.
- Run dcli in interactive mode.
- Get a list of VMs registered in vCenter's inventory. Log in as an administrator in your SSO domain. You can save the credentials in the credstore for future use.
- From the displayed list, get the MoID (Managed Object ID) of the affected VM, e.g. vm-103.
- Run the delete command to remove the record of the affected VM, identified by its MoID, from vCenter's database.
- Using the Web/vSphere Client, check vCenter's inventory to confirm the affected VM has been removed.
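The whole procedure in dcli looks roughly like this. A sketch using the vcenter vm namespace of the vSphere Automation API; vm-103 stands in for the MoID of your orphaned VM:

```shell
# Start dcli in interactive mode on the vCSA
dcli +interactive

# List the VMs registered in the inventory (shows name, MoID, power state)
com vmware vcenter vm list

# Remove the affected VM's record, identified by its MoID
com vmware vcenter vm delete --vm vm-103
```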
Another way to list the MAC addresses of open ports on the vSwitches of an ESXi host is based on the net-stats tool.
Use this one-liner.
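As a sketch of what such a one-liner can look like (an assumption: net-stats -l prints one header row, then one row per open port, with the MAC address and client name in the last two columns):

```shell
# List MAC address and client (VM / vmk / vmnic) for every open vSwitch port
net-stats -l | awk 'NR>1 {print $5, $6}'
```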
This is not a final word. 🙂
Sometimes you need to list the MAC addresses present on a host's vSwitches to eliminate duplicate VM MAC addresses.
- Create a shell script:
- Copy and paste the code listed below:
- Change the file’s permissions
- Run the script
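A minimal sketch of such a script, built on esxcli (vm-macs.sh is a hypothetical name; it assumes the standard output layout of esxcli network vm list, with data starting on the third line):

```shell
#!/bin/sh
# List the MAC addresses of all VM ports on this host's vSwitches.
# For every VM world ID, dump its port list and keep the MAC address lines.
for world in $(esxcli network vm list | awk 'NR>2 {print $1}'); do
  esxcli network vm port list -w "$world" | grep -i 'MAC Address'
done
```

Make it executable with chmod +x vm-macs.sh and run it directly in the ESXi shell.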
Simple, but useful! 🙂
… but this is not the only possible method 🙂
Creating a virtual switch through the GUI is well described in the documentation and pretty intuitive. However, sometimes it's useful to know how to do it with the CLI or PowerCLI, making the process part of a script that automates the initial configuration of an ESXi host after installation.
Here you will find the commands necessary to create and configure a standard virtual switch using the CLI and PowerCLI. The examples describe creating a vSwitch for vMotion traffic, which involves VMkernel adapter creation.
I. vSwitch configuration through CLI
- Create a vSwitch named “vMotion”
esxcli network vswitch standard add -v vMotion
- Check whether your newly created vSwitch was configured and is available on the list.
esxcli network vswitch standard list
- Add physical uplink (vmnic) to your vSwitch
esxcli network vswitch standard uplink add -u vmnic4 -v vMotion
- Designate an uplink to be used as active.
esxcli network vswitch standard policy failover set -a vmnic4 -v vMotion
- Add a port group named “vMotion-PG” to previously created vSwitch
esxcli network vswitch standard portgroup add -v vMotion -p vMotion-PG
- Add a VMkernel interface to a port group (Optional – not necessary if you are creating a vSwitch just for VM traffic)
esxcli network ip interface add -p vMotion-PG -i vmk9
- Configure IP settings of a VMkernel adapter.
esxcli network ip interface ipv4 set -i vmk9 -t static -I 172.20.14.11 -N 255.255.255.0
- Tag the VMkernel adapter for the vMotion service. NOTE – the service tag is case-sensitive.
esxcli network ip interface tag add -i vmk9 -t VMotion
Done, your vSwitch is configured and ready to service vMotion traffic.
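Before relying on the new interface, it may be worth verifying it from the ESXi shell. A sketch; 172.20.14.12 is a hypothetical address standing in for another host's vMotion VMkernel port:

```shell
# Show the new VMkernel interface's IPv4 configuration
esxcli network ip interface ipv4 get -i vmk9

# Test reachability over the vMotion network (hypothetical peer address)
vmkping -I vmk9 172.20.14.12
```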
II. vSwitch configuration through PowerCLI
- First thing is to connect to vCenter server.
Connect-VIServer -Server vcsa.vclass.local -User firstname.lastname@example.org -Password VMware1!
- Indicate specific host and create new virtual switch, assigning vmnic at the same time.
$vswitch1 = New-VirtualSwitch -VMHost sa-esx01.vclass.local -Name vMotion -NIC vmnic4
- Create port group and add it to new virtual switch.
New-VirtualPortGroup -VirtualSwitch $vswitch1 -Name vMotion-PG
- Create and configure VMkernel adapter.
New-VMHostNetworkAdapter -VMHost sa-esx01.vclass.local -PortGroup vMotion-PG -VirtualSwitch vMotion -IP 172.20.11.11 -SubnetMask 255.255.255.0 -VMotionEnabled:$true
vSAN 6.6 is the 6th generation of the product, and there are more than 20 new features and enhancements in this release, such as:
- Native encryption for data-at-rest
- Compliance certifications
- Resilient management independent of vCenter
- Degraded Disk Handling v2.0 (DDHv2)
- Smart repairs and enhanced rebalancing
- Intelligent rebuilds using partial repairs
- Certified file service & data protection solutions
- Stretched clusters with local failure protection
- Site affinity for stretched clusters
- 1-click witness change for Stretched Cluster
- vSAN Management Pack for vRealize
- Enhanced vSAN SDK and PowerCLI
- Simple networking with Unicast
- vSAN Cloud Analytics with real-time support notification and recommendations
- vSAN Config Assist with 1-click hardware lifecycle management
- Extended vSAN Health Services
- vSAN Easy Install with 1-click fixes
- Up to 50% greater IOPS for all-flash with optimized checksum and dedupe
- Support for new next-gen workloads
- vSAN for Photon in Photon Platform 1.1
- Day 0 support for latest flash technologies
- Expanded caching tier choice
- Docker Volume Driver 1.1
… OK, now let's review the main enhancements:
vSAN 6.6 introduces the industry's first native HCI security solution. vSAN now offers data-at-rest encryption that is completely hardware-agnostic. No more concern about someone walking off with a drive or breaking into a less-secure edge IT location and stealing hardware. Encryption is applied at the cluster level, and any data written to a vSAN storage device, at both the cache layer and the persistence layer, can now be fully encrypted. vSAN 6.6 also supports two-factor authentication, including SecurID and CAC.
Certified file services and data protection solutions are available from 3rd-party partners in the VMware Ready for vSAN Program, enabling customers to extend and complement their vSAN environment with proven, industry-leading solutions. These solutions provide customers with detailed guidance on how to complement vSAN. (EMC NetWorker is available today, with new solutions coming soon.)
vSAN stretched cluster was released in Q3’15 to provide an Active-Active solution. vSAN 6.6 adds a major new capability that will deliver a highly-available stretched cluster that addresses the highest resiliency requirements of data centers. vSAN 6.6 adds support for local failure protection that can provide resiliency against both site failures and local component failures.
PowerCLI Updates: Full-featured vSAN PowerCLI cmdlets enable complete automation covering all the latest features. SDK/API updates also enable enterprise-class automation that brings cloud-management flexibility to storage by supporting REST APIs.
The VMware vRealize Operations Management Pack for vSAN, released recently, provides customers with native integration for simplified management and monitoring. The vSAN management pack is specifically designed to accelerate time to production with vSAN, optimize application performance for workloads running on vSAN, and provide unified management for the Software-Defined Data Center (SDDC). It provides additional options for monitoring, managing, and troubleshooting vSAN along with end-to-end infrastructure solutions.
Finally, vSAN 6.6 is well suited for next-generation applications. Performance improvements, especially when combined with new flash technologies for write-intensive applications, enable vSAN to address more emerging applications like Big Data. The vSAN team has also tested and released numerous reference architectures for these types of solutions, including Big Data, Splunk and InterSystems Cache.
- Splunk Reference Architecture: http://www.emc.com/collateral/service-overviews/h15699-splunk-vxrail-sg.pdf
- Citrix XenDesktop/XenApp Blog: https://blogs.vmware.com/virtualblocks/2017/02/27/citrix-xenapp-xendesktop-7-12-vmware-vsan-6-5-flash/
- vSAN, VxRail and Pivotal Cloud Foundry RA: https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/products/vsan/vmware-pcf-vxrail-reference-architeture.pdf
- vSAN and InterSystems Blog: https://community.intersystems.com/post/intersystems-data-platforms-and-performance-%E2%80%93-part-8-hyper-converged-infrastructure-capacity
- Intel, vSAN and Big Data Hadoop: https://builders.intel.com/docs/storagebuilders/Hyper-Converged_big_data_using_Hadoop_with_All-Flash_VMware_vSAN.pdf
Recently we had some strange problems with our 6.5 lab vCenter (Windows version with an MSSQL Server database), which frequently crashed. After some digging in the vpxd logs, the crashes seemed to be related to VC database permissions:
"17-05-28T19:36:53.443+02:00 error vpxd [Originator@6876 sub=Default] [VdbStatement] SQLError was thrown: "ODBC error: (42000) - [Microsoft][SQL Server Native Client 11.0][SQL Server]VIEW SERVER STATE permission was denied on object 'server', database 'master'." is returned when executing SQL statement "SELECT DB_NAME(mf.DATABASE_ID) Db_Name, CASE mf.FILE_ID WHEN 1 THEN 'DATA' WHEN 2 THEN 'LOG' END File_Type, vol.VOLUME_MOUNT_POINT AS Drive, CONVERT(INT,vol.AVAILABLE_BYTES/1048576.0) FreeSpaceInMB, (mf.SIZE*8)/1024 VCDB_Space_Mb, mf.PHYSICAL_NAME Physical_Name, SERVERPROPERTY('edition') Sql_Server_Edition, SERVERPROPERTY('productversion') Sql_Server_Version FROM SYS.M" action."
The SQL execution fails because the vCenter Server database user has no permissions on the 'master' database. To resolve the issue, grant additional privileges to the vCenter Server database user:
GRANT VIEW SERVER STATE TO [vCenter_database_user]
GRANT VIEW ANY DEFINITION TO [vCenter_database_user]
This is a mini-article starting our Q&A set – a set of real-life questions whose answers are not easy to find 😉
Recently I received a question related to an advanced setting for SAP applications on the vSphere platform:
“One of our customer ask us to set the following option to their virtual system: Misc.GuestLibAllowHostInfo This is according to SAP note: 1606643 where SAP requires reconfigure virtual system default configuration. I can’t find details information, which host data would be exposed to virtual system. Could you please point me to documentation or describe which information is being transferred from HOST to virtual systems?“
- After some research, I was able to find the answer:
"Misc.GuestLibAllowHostInfo" and "tools.guestlib.enableHostInfo" – when enabled, these settings allow the guest OS to access some of the ESXi host's information, mainly performance metrics, e.g. how many CPU cores the host has, their utilization and contention, etc. No confidential information from other customers is visible; however, the settings may give users of those SAP VMs access to performance/resource information you may not want to share.
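For reference, the two settings live at different levels. A hedged sketch: to my understanding, Misc.GuestLibAllowHostInfo is a host-wide advanced setting, while tools.guestlib.enableHostInfo belongs in the individual VM's configuration:

```shell
# Host side: allow guests to read host information (run in the ESXi shell)
esxcli system settings advanced set -o /Misc/GuestLibAllowHostInfo -i 1
```

On the VM side, the corresponding line in the .vmx file (or as a VM advanced configuration parameter) would be tools.guestlib.enableHostInfo = "TRUE".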
The following document outlines the effect of the changes as I have described above.
I believe the "might use the information to perform further attacks on the host" wording could only apply to other vulnerabilities that may exist for the particular hardware information the guest OS can gather from the ESXi host.
Other than that I am not sure there is any other concern to worry about.
Do you have any interesting virtualization related question?