
Part 2 – How to list vSwitch “MAC Address table” on ESXi host?

Another way to list the MAC addresses of the ports open on an ESXi host’s vSwitches is based on the net-stats tool.

Use this one-liner:

    for VSWITCH in $(vsish -e ls /net/portsets/ | cut -c 1-8); do net-stats -S $VSWITCH | grep \{\"name | sed 's/[{,"]//g' | awk '{$9=$10=$11=$12=""; print $0}'; done
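
For readability, the same logic can also be written as a short script – a sketch based on the one-liner above, with the vSwitch name echoed before each block of ports:

    #!/bin/sh
    # List client names and MAC addresses per vSwitch using net-stats
    for VSWITCH in $(vsish -e ls /net/portsets/ | cut -c 1-8)
    do
            echo $VSWITCH
            # Keep only the per-port "name" lines, strip braces/quotes/commas
            # and blank columns 9-12 to shorten the output
            net-stats -S $VSWITCH | grep \{\"name | sed 's/[{,"]//g' | awk '{$9=$10=$11=$12=""; print $0}'
    done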

This is not the final word. 🙂

Part 1 – How to list vSwitch “MAC Address table” on ESXi host?

Sometimes you need to list the MAC addresses logged on a host’s vSwitches, for example to track down duplicate VM MAC addresses.

  1. Create a shell script: vi mac_address_list.sh
  2. Copy and paste the code listed below:
    #!/bin/sh
    #vmrale
    # Walk through all portsets (vSwitches) on the host
    for VSWITCH in `vsish -e ls /net/portsets/ | cut -c 1-8`
    do
            echo $VSWITCH
            # Walk through all ports of the current vSwitch
            for PORT in `vsish -e ls /net/portsets/$VSWITCH/ports | cut -c 1-8`
            do
                    # Read the port client name (VM or vmkernel NIC) and its unicast MAC address
                    CLIENT_NAME=`vsish -e get /net/portsets/$VSWITCH/ports/$PORT/status | grep clientName | uniq`
                    ADDRESS=`vsish -e get /net/portsets/$VSWITCH/ports/$PORT/status | grep unicastAdd | uniq`
                    echo -e "\t$PORT\t$CLIENT_NAME\t$ADDRESS"
            done
    done
    
    
  3. Change the file’s permissions: chmod 755 mac_address_list.sh
  4. Run the script: ./mac_address_list.sh
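
To hunt for a suspected duplicate, you can then filter the script’s output for the MAC address in question (the address below is just a placeholder):

    ./mac_address_list.sh | grep -i "00:50:56:aa:bb:cc"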

Simple, but useful! 🙂

… but this is not the only possible method 🙂

vMotion fails to migrate VMs between ESXi hosts that have the same configuration

As a rule, vMotion requires the same CPU family across the hosts involved, which ensures that the same feature set is presented and the migration can succeed.

This is an obvious statement: if you have, for example, Intel Xeon v3 and v4 CPU generations in your cluster, you need EVC to make it work. But recently I came across an issue where vMotion was failing to migrate VMs between hosts with identical configurations – Dell R730s with Intel v3 CPUs, to be more precise.

The error message stated as follows:

The target host does not support the virtual machine’s current hardware requirements.
To resolve CPU incompatibilities, use a cluster with Enhanced vMotion Compatibility (EVC) enabled. See KB article 1003212.
com.vmware.vim.vmfeature.cpuid.stibp
com.vmware.vim.vmfeature.cpuid.ibrs
com.vmware.vim.vmfeature.cpuid.ibpb

Turn on EVC, it says. But wait a minute – EVC for the same CPUs? That sounds ridiculous, given that there were three exactly identical hosts in the cluster. To make it even more unusual, I was unable to migrate VMs only from host 02 to the others, while online migrations between 01 and 03 worked fine. So it was definitely related to host 02 itself.

So I ran additional tests, which revealed even weirder behaviour, for example:

  • I was able to cold migrate a VM from host 02 to 01 and then migrate it back from 01 to 02, this time online.
  • I was able to migrate VMs between 02 and 03 without any issues.
  • All configuration, communication and so on were correct.
  • I was not able to migrate VMs using Shared-Nothing vMotion.

But after a few such attempts I realized that host 02 had a different build than the others – a small difference, but it turned out to be the key here.

The build number of host 02 was 7526125, whilst the others had 7388607. Not a big deal at first glance – as long as vCenter has a higher build, it should not be an issue.
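
A quick way to spot such a mismatch is to compare what each host reports from the ESXi shell (run it on every host in the cluster):

    # Prints the full version string including the build number, e.g. "VMware ESXi 6.5.0 build-7388607"
    vmware -v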

The clue here is that 7526125 is the build number of the Spectre/Meltdown fixes, which were later withdrawn, so they were not installed on the rest of the hosts in the cluster. This resulted in a different capability set presented by host 02, namely:

  • “Capability Found: cpuid.IBRS”
  • “Capability Found: cpuid.IBPB”
  • “Capability Found: cpuid.STIBP”

There are currently two ways to deal with such an issue:

  1. Cold migrate your VMs if you need to, or simply wait for new patches from VMware.
  2. Reinstall that single host to ensure the same capabilities. That’s the way I chose, because in my case that server had some additional hardware issues that had to be addressed anyway.

For additional information take a look at:

  • https://kb.vmware.com/s/article/52085
  • https://kb.vmware.com/s/article/52345
  • https://kb.vmware.com/s/article/52245


Perennial reservations – weird behaviour when not configured correctly

Whilst using RDM disks in your environment you might notice long (even extremely long) boot times on your ESXi hosts. That’s because the ESXi host uses a different technique to determine whether Raw Device Mapped (RDM) LUNs are used for MSCS cluster devices, introducing a configuration flag that marks each device participating in an MSCS cluster as perennially reserved. During the start of an ESXi host, the storage mid-layer attempts to discover all devices presented to the host during the device-claiming phase. However, MSCS LUNs that hold a permanent SCSI reservation cause the start process to lengthen, because the ESXi host cannot interrogate the LUN due to the persistent SCSI reservation placed on the device by an active MSCS node hosted on another ESXi host.

Configuring the device to be perennially reserved is local to each ESXi host, and must be performed on every ESXi host that has visibility to each device participating in an MSCS cluster. This improves the start time for all ESXi hosts that have visibility to the devices.

The process is described in this KB and requires issuing the following command on each ESXi host:

esxcli storage core device setconfig -d naa.id --perennially-reserved=true
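
If a host sees several MSCS RDM LUNs, a small loop saves some typing – the naa identifiers below are only placeholders for your actual device IDs:

    # Mark a list of RDM devices as perennially reserved on this host
    for DEVICE in naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx naa.yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy
    do
            esxcli storage core device setconfig -d $DEVICE --perennially-reserved=true
    done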

You can check the status using the following command:

esxcli storage core device list -d naa.id

In the output of the esxcli command, search for the entry Is Perennially Reserved: true. This shows that the device is marked as perennially reserved.
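
To review the flag across all devices on a host at once, you can filter the full device list – a quick sketch rather than an official procedure:

    # Show each device's display name together with its perennial reservation flag
    esxcli storage core device list | grep -E "Display Name|Is Perennially Reserved"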

However, recently I came across a problem with snapshot consolidation; even Storage vMotion was not possible for a particular VM.

Whilst checking the VM settings, one of the disks was locked and indicated that it was running on a delta disk, which means there was a snapshot. However, Snapshot Manager didn’t show any snapshots at all. Moreover, creating a new snapshot and then deleting all snapshots, which in most cases solves the consolidation problem, didn’t help either.


In vmkernel.log, while trying to consolidate the VM, lots of perennial reservation entries were present. Initially I ignored them, because there were RDMs intentionally configured as perennially reserved to prevent long ESXi boot times.
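
A quick way to see which device those warnings point at is to grep the live log for the reservation entries (the exact wording may differ between builds):

    # Pull the perennial-reservation-related messages out of the current vmkernel log
    grep -i perennial /var/log/vmkernel.log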


However, after digging deeper and checking a few things, I returned to the perennial reservations and decided to check which LUN was generating these warnings and why it produced them, especially during consolidation or Storage vMotion of the VM.

To my surprise, I realised that the datastore on which the VM’s disks reside was configured as perennially reserved! It was due to a mistake: when a PowerCLI script was prepared, someone accidentally configured all available LUNs as perennially reserved. Changing the value back to false happily solved the problem.
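
The fix itself was simply the same setconfig call with the flag reversed, run against the affected datastore device (naa.id stands for that LUN’s identifier):

    # Clear the perennially reserved flag that was set by mistake
    esxcli storage core device setconfig -d naa.id --perennially-reserved=false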

The moral of the story is simple – logs are not there to be ignored 🙂

vSphere 6.5 – Update Manager changes

Going through our list of articles about new features in vSphere 6.5, the last one is vSphere Update Manager for the vCenter Server Appliance. Since vSphere 6.5 it is fully embedded and integrated with the vCenter Server Appliance, with no Windows dependencies. This means that the vCenter Server Appliance now delivers Update Manager as an optional service, similar to Auto Deploy, etc.

Since vSphere 6.5 it is no longer possible to connect an Update Manager instance installed on a Windows Server machine to the vCenter Server Appliance.

That means you have two ways to use the Update Manager component:
• You can install the Update Manager server component either on the same Windows server where the vCenter Server is installed or on a separate machine. To install Update Manager, you must have Windows administrator credentials for the computer on which you install Update Manager.
• You can deploy vSphere Update Manager in a secured network without Internet access. In such a case, you can use the vSphere Update Manager download service to download update metadata and update binaries.

In the facelifted Web Client, Update Manager appears as an Update Manager tab under the Configure tab.

The overall management processes are much the same, so there isn’t anything special worth noting here – the product is pretty simple and easy to use, and of course it does the job.