
Part 1 – PVRDMA and how to test it in a home lab.

One of the members of the VMware User Community (VMTN) inspired me to build a configuration where two VMs use PVRDMA network adapters to communicate. The goal I wanted to achieve was to establish communication between the VMs without Host Channel Adapter (HCA) cards installed in the hosts. It's possible to configure this, as stated in the VMware vSphere documentation:

For virtual machines on the same ESXi hosts or virtual machines using the TCP-based fallback, the HCA is not required.

To do this, I prepared one ESXi host (6.7U1) managed by vCSA (6.7U1). One of the requirements for PVRDMA is a vSphere Distributed Switch (vDS). First, I configured a dedicated vDS for RDMA communication: a basic vDS configuration (DSwitch-DVUplinks-34) with a default portgroup (DPortGroup), equipped with just one uplink.

In vSphere, a virtual machine can use a PVRDMA network adapter to communicate with other virtual machines that have PVRDMA devices. The virtual machines must be connected to the same vSphere Distributed Switch.

Second, I created a VMkernel port (vmk1) dedicated to RDMA traffic in this DPortGroup without assigning an IP address to this port (No IPv4 settings).

Third, I set the advanced system setting Net.PVRDMAVmknic on the ESXi host and pointed its value at the VMkernel port (vmk1), as described in the documentation section “Tag a VMkernel Adapter for PVRDMA”.
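For reference, the same tagging can also be done from the ESXi shell with esxcli; a minimal sketch of the equivalent command, assuming the vmk1 name used above:

# tag vmk1 as the VMkernel adapter for PVRDMA traffic
esxcli system settings advanced set -o /Net/PVRDMAVmknic -s vmk1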

Then, I enabled the “pvrdma” firewall rule on the host in the Edit Security Profile window, as described in “Enable the Firewall Rule for PVRDMA”.
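The firewall rule, too, can be enabled from the ESXi shell instead of the UI; a sketch, assuming the ruleset carries the same pvrdma name shown in the Security Profile:

# enable the pvrdma ruleset on the host firewall
esxcli network firewall ruleset set --ruleset-id pvrdma --enabled true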

The next steps are related to the configuration of the VMs. First, I created a new virtual machine. Then, I added another network adapter to it and connected it to the DPortGroup on the vDS. For the Adapter Type of this network adapter I chose PVRDMA, with Device Protocol RoCE v2, as described in “Assign a PVRDMA Adapter to a Virtual Machine”.

Then, I installed Fedora 29 on the first VM. I chose it because there are many tools available to easily test RDMA communication. After the OS installation, the additional network interface showed up in the VM, and I gave it an address in a different IP subnet. Each VM uses two network interfaces: the first for SSH access and the second for testing RDMA communication.
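For completeness, this is roughly how the second interface can be addressed with nmcli; the interface name ens224 is an assumption (check the actual name first), and the 192.168.0.100/24 address matches the rping examples later in this post:

# check the real device name first with: nmcli device status
nmcli connection add type ethernet ifname ens224 con-name rdma ipv4.method manual ipv4.addresses 192.168.0.100/24
nmcli connection up rdma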

Then I set “Reserve all guest memory (All locked)” in the VM’s Edit Settings window.
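For reference, after these two changes the VM’s .vmx file should contain entries along these lines; I’m quoting the keys from memory, and the adapter index depends on your VM, so treat this as an assumption rather than an official reference:

ethernet1.virtualDev = "pvrdma"
sched.mem.pin = "TRUE"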

At this point, the VMs were configured enough – at the infrastructure layer – to communicate using RDMA.

To do that, I needed the appropriate tools. I found them in the rdma-core project on GitHub and installed them using the procedure described on that page.

dnf install cmake gcc libnl3-devel libudev-devel pkgconfig valgrind-devel ninja-build python3-devel python3-Cython	

Next, I installed the git client using the following command.

yum install git

Then I cloned the git project to a local directory.

mkdir /home/rdma
git clone https://github.com/linux-rdma/rdma-core.git /home/rdma

I built it.

cd /home/rdma
bash build.sh
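Before going further, it is worth checking that the guest actually sees the PVRDMA device with one of the freshly built tools; a quick sanity check (the exact device name reported by the vmw_pvrdma driver may differ in your guest):

cd /home/rdma/build/bin
./ibv_devinfo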

Afterwards, I cloned the VM to get a communication partner for the first one. After cloning, I reconfigured the IP address in the cloned VM appropriately.

Finally I could test the communication using RDMA.

On the VM functioning as the server, I started a listener on the interface mapped to the PVRDMA virtual adapter:

cd /home/rdma/build/bin
./rping -s -a 192.168.0.200 -P

Then, on the client VM, I ran this command to connect to the server:

./rping -c -I 192.168.0.100 -a 192.168.0.200 -v

It was working beautifully!


dcli and how to shutdown vCSA

Sometimes you want to shut down a vCSA or PSC gracefully, but you don’t have access to the GUI through the vSphere Client or the VAMI.

How to do it from the CLI? I’m going to show you right now using dcli, because I’m exploring the potential of this tool and I can’t get enough of it.

  1. Open an SSH session to vCSA and log in as root user.
  2. Run the dcli command in interactive mode.

    dcli +i
  3. Use the shutdown API call to shut down the appliance, giving a delay value (0 means now) and a description of the task.

    com vmware appliance shutdown poweroff --delay 0 --reason 'Shutdown now'
  4. Enter an appropriate administrator user name, e.g. administrator@vsphere.local, and the password.
  5. Decide if you want to save the credentials in the credstore. You can enter ‘y’ for yes.

Wait until the appliance goes down. 🙂
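The same call can also be made non-interactively in a single line; a sketch, assuming the +username option available in the 6.7 dcli (it will still prompt for the password unless the credentials are stored in the credstore):

dcli +username administrator@vsphere.local com vmware appliance shutdown poweroff --delay 0 --reason 'Shutdown now'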

I know it’s not the quickest way, but the point is to have fun.

VCSA Tools – Part 1 – journalctl. A better way to review vCSA logs.

There are plenty of great CLI tools in VCSA that a modern vSphere administrator should know, so I decided to share my knowledge and describe them in a series of articles.

The first one is journalctl, a tool that simplifies and speeds up the VCSA troubleshooting process.

Below I’m presenting how I use it to filter log records.

Log in to the VCSA shell and run the commands below, depending on the result you want to achieve.

The logs from the current boot:

journalctl -b


The boots that journald is aware of:

journalctl --list-boots

It will show results like these, one line representing each known boot:

-3 26c7f0e356eb4d0bbd8d9fad1b457808 Wed 2019-01-09 15:18:12 UTC—Wed 2019-01-09 16:58:18 UTC
-2 124d73cf46d441908db7609813e0c49a Wed 2019-01-09 17:01:08 UTC—Fri 2019-01-11 10:22:39 UTC
-1 ebf1ba1936404fb086727b61ac825d47 Fri 2019-01-11 11:22:59 UTC—Fri 2019-01-11 15:48:37 UTC
 0 bd70a6d99a7245dc9b7859c5f2a7ef6f Mon 2019-01-14 10:19:47 UTC—Tue 2019-01-15 12:38:16 UTC

Now you can use the offset from the first column to display the logs of the chosen boot:

journalctl -b -1


To limit the results to a given timeframe, use dates or keywords:

journalctl --since "2019-01-10" --until "2019-01-14 03:00"

or:

journalctl --since yesterday

or:

journalctl --since 09:00 --until "2 hours ago"


Limiting the logs by service name:

Get the service names first:

systemctl list-unit-files | grep -i vmware

To show only the records for the vpxd service in the current boot:

journalctl -b -u vmware-vpxd.service
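To keep watching new vpxd messages as they arrive, the same unit filter can be combined with follow mode:

journalctl -f -u vmware-vpxd.service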


Filtering by Id (Process, User or Group):

Get the user Id, e.g.:

cat /etc/passwd | grep -i updatemgr

then:

journalctl _UID=1017 --since today


Filtering by the binary path:

journalctl -b --since "20 minutes ago" /usr/sbin/vpxd


Displaying kernel messages:

journalctl -k


Limiting messages by their priorities:

journalctl -p err -b

where the priority codes are:

0: emerg
1: alert
2: crit
3: err
4: warning
5: notice
6: info
7: debug
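The filters shown above can also be combined, and the output format changed, for example to JSON for further processing:

journalctl -b -p warning -u vmware-vpxd.service -o json-pretty --no-pager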

Of course, these examples don’t exhaust all the possibilities this tool offers, but consider them a starting point. Feel free to add new, useful examples in the comments below.

That’s all for today!

dcli and orphaned VMs in vCenter Server inventory

Orphaned VMs in the vCenter inventory are an unusual sight in an experienced administrator’s Web/vSphere Client window. But in large environments, where many people manage hosts and VMs, it happens from time to time.

You probably know how to get rid of them using the traditional methods described in VMware KB articles and by other well-known bloggers, but there is also a quite elegant method using dcli.
This handy tool is available in the vCLI package, in the 6.5/6.7 vCSA shell, and in the vCenter Server on Windows command prompt. Dcli uses the vSphere Automation APIs to give an administrator an interface for calling methods that carry out or automate various tasks.

How to use it to remove orphaned VMs from vCenter inventory?

  1. Open an SSH session to vCSA and log in as root user.
  2. Run the dcli command in interactive mode.

    dcli +i
  3. Get a list of the VMs registered in vCenter’s inventory. Log in as an administrator user in your SSO domain. You can save the credentials in the credstore for future use.

    com vmware vcenter vm list
  4. From the displayed list, get the MoID (Managed Object ID) of the affected VM, e.g. vm-103.
  5. Run this command to delete the record of the affected VM from vCenter’s database, using its MoID (a non-interactive variant is sketched after this list).

    com vmware vcenter vm delete --vm vm-103
  6. Using the Web/vSphere Client, check the vCenter inventory to verify that the affected VM is gone.
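The same two calls can be scripted non-interactively as well; a sketch, assuming dcli’s +username option and a hypothetical VM name lostvm (vm-103 is just the example MoID from the list above):

# find the MoID of the orphaned VM by its (hypothetical) name
dcli +username administrator@vsphere.local com vmware vcenter vm list | grep -i lostvm
# remove its record from the inventory
dcli +username administrator@vsphere.local com vmware vcenter vm delete --vm vm-103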

It’s working!

Part 2 – How to list vSwitch “MAC Address table” on ESXi host?

Another way to list the MAC addresses of open ports on vSwitches on an ESXi host is based on the net-stats tool.

Use this one-liner.

		
for VSWITCH in $(vsish -e ls /net/portsets/ | cut -c 1-8); do net-stats -S $VSWITCH | grep \{\"name | sed 's/[{,"]//g' | awk '{$9=$10=$11=$12=""; print $0}'; done

This is not the final word. 🙂

Part 1 – How to list vSwitch “MAC Address table” on ESXi host?

Sometimes you need to list the MAC addresses logged on a host’s vSwitches to eliminate duplicate VM MAC addresses.

  1. Create a shell script:

    vi mac_address_list.sh
  2. Copy and paste the code listed below:

    #!/bin/sh
    #vmrale
    # walk every vSwitch (portset) on the host
    for VSWITCH in `vsish -e ls /net/portsets/ | cut -c 1-8`
    do
            echo $VSWITCH
            # list each port's client name and unicast MAC address
            for PORT in `vsish -e ls /net/portsets/$VSWITCH/ports | cut -c 1-8`
            do
                    CLIENT_NAME=`vsish -e get /net/portsets/$VSWITCH/ports/$PORT/status | grep clientName | uniq`
                    ADDRESS=`vsish -e get /net/portsets/$VSWITCH/ports/$PORT/status | grep unicastAdd | uniq`
                    echo -e "\t$PORT\t$CLIENT_NAME\t$ADDRESS"
            done
    done
  3. Change the file’s permissions:

    chmod 755 mac_address_list.sh
  4. Run the script:

    ./mac_address_list.sh

Simple, but useful! 🙂
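Since the goal is to spot duplicate MAC addresses, the script’s output can be post-processed a bit further; a sketch, assuming the host’s busybox grep supports the -o and -E options (it prints only the MAC addresses that occur more than once):

./mac_address_list.sh | grep -oiE '([0-9a-f]{2}:){5}[0-9a-f]{2}' | sort | uniq -d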

… but this is not the only possible method 🙂