Tag: esxi 6.5

HTML5 Client – the new way of managing vSphere environment?

With vSphere 6.5, VMware killed the standard Windows vSphere Client. It had been announced in advance, so we should not be surprised (although I am still a bit shocked ;)).

Fortunately, every cloud has a silver lining. I reckon VMware is aware that the current Web Client is not a perfect solution. That’s why they released a completely new HTML5 vSphere Client, which seems to be quite useful and intuitive and, most importantly, works as it should in terms of response times. Some administrators claim it reminds them of the old GSX console.

The darker side of the new client is that it is constrained in terms of functionality and will not let you perform all administrative tasks. But do not worry – it is the first release, and I hope VMware will expand the functionality quickly.

The HTML5 Client can be accessed by entering the FQDN or IP address of your vCenter in a web browser; you will then see two possible options – the classic Web Client and the new one. You will also notice a caution saying that it has only partial functionality.

html5_1
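
As a side note, if I remember correctly both clients can also be reached directly under dedicated paths (the FQDN below is just an example – use your own vCenter address):

  # HTML5 vSphere Client (partial functionality in 6.5)
  https://vcenter.example.local/ui

  # Flash-based vSphere Web Client (full functionality)
  https://vcenter.example.local/vsphere-client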

You will find the list of unsupported functionalities here.

After you sign in to the new administration interface, you will see a quite grey and simple but, in my opinion, still good-looking interface.

html5_2

The whole structure is designed to be intuitive, especially for those admins who still mostly use the standard vSphere Client. In my opinion the design combines the best parts of the Web and Windows clients in one interface. The problem is just the lack of functionality. I decided to try it out, starting with configuring iSCSI in my new nested lab. However, I was quickly brought to heel – there was no option to add a software iSCSI adapter. That suddenly ended my adventure with the new HTML5 Client 🙂
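
By the way, if you hit the same wall, the software iSCSI adapter can still be enabled from the ESXi command line. A minimal sketch, run over SSH on the host (verify the exact options against your ESXi 6.5 build):

  # Enable the software iSCSI initiator on the host
  esxcli iscsi software set --enabled=true

  # Confirm that the software iSCSI vmhba adapter has appeared
  esxcli iscsi adapter list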

To sum up, it will be a handy tool in the future; it just needs to be completed in terms of functionality. Unfortunately, those who migrate to vSphere 6.5 will still need to use the Web Client.

vSphere 6.5 – Stronger security with NFS 4.1

NFS 4.1 has been supported since vSphere 6.0, but its security has now been strengthened. In vSphere 6.5 we get better security thanks to strong cryptographic algorithms with Kerberos (AES), a Kerberos integrity check, and IPv6 support together with Kerberos.

As we know, the vSphere 6.0 NFS client did not support the more advanced AES encryption types. So let’s take a look at what is new in the vSphere 6.5 NFS client in terms of encryption standards:
storage5

Summary:

  • NFS 4.1 has been supported since vSphere 6.0,
  • vSphere 6.5 supports stronger cryptographic algorithms with Kerberos authentication using AES,
  • A Kerberos integrity check (SEC_KRB5i) is introduced alongside Kerberos authentication in vSphere 6.5,
  • Support for IPv6 with Kerberos has been added,
  • Host Profiles support for NFS 4.1 has been added,
  • Better security for customer environments.
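
For illustration, mounting an NFS 4.1 datastore with Kerberos from the ESXi command line could look roughly like the sketch below. It assumes the host is already joined to Active Directory with NFS Kerberos credentials configured; the server address, export path and datastore name are made-up examples, and the --sec values should be verified against your build:

  # Mount an NFS 4.1 export with Kerberos authentication and integrity checking
  esxcli storage nfs41 add --hosts=nfs01.example.local --share=/export/vols --volume-name=nfs41-krb-ds --sec=SEC_KRB5i

  # List mounted NFS 4.1 datastores
  esxcli storage nfs41 list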

 

vSphere 6.5 – New scale limits for paths & LUNs

In vSphere 6.5 VMware doubled the previous limits and continues working to increase scale in this area. The pre-6.5 limits posed a challenge: for example, some customers have 8 paths to each LUN, and in that configuration a host could address a maximum of 128 LUNs. Also, many customers tend to use smaller LUNs to segregate important data for easy backup and restore, an approach that can also exhaust the LUN and path limits.
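
To get a feel for how close an existing host is to these limits, you can count its devices and paths from the command line. A rough sketch – the grep patterns simply rely on the usual field names in the esxcli output and may need adjusting:

  # Number of storage devices (LUNs) visible to the host
  esxcli storage core device list | grep -c "Display Name:"

  # Number of storage paths on the host
  esxcli storage core path list | grep -c "Runtime Name:"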

Larger LUN limits enable larger cluster sizes and hence reduce management overhead.

storage4

SUMMARY:

  • The previous limit was 256 LUNs and 1024 paths per host,
  • This limited customer deployments requiring higher path counts,
  • Customers using many small LUNs for important files/data need higher LUN limits to work with,
  • Larger path/LUN limits enable larger cluster sizes, reducing the overhead of managing multiple clusters,
  • vSphere 6.5 supports 512 LUNs and 2048 paths per host.

 

vSphere 6.5 – vSphere HA Orchestrated Restart

VMware announced a new feature in vSphere 6.5 called HA Orchestrated Restart. But wait a minute – wasn’t that already available in previous versions, where you could set the restart priority for specific VMs or groups of VMs? So what’s going on with this “new feature”? As always, the devil is in the details 🙂

Let’s start with the old behavior. Using VM overrides in previous versions of vSphere, we could set one of three available priorities – High, Medium (default) and Low. However, this did not guarantee that the restart order would be right for a three-tier app, because HA was only really concerned with securing resources for the VM; once the VM had received its resources, HA’s job was done. The restart priority defined the order in which VMs secured their resources, but if there were plenty of resources for everyone, the VMs received their allocations in quick succession and could start powering on almost simultaneously. For example, if the DB server takes longer to boot than the App server, the App will not be able to access the DB and may fail.

vSphere 6.5 now allows you to create VM-to-VM dependency chains. These dependency rules are also enforced when vSphere HA is used to restart VMs from failed hosts. That gives you the ability to configure the right chain of dependencies, where the App server will wait until the DB server boots up. The VM-to-VM rules must also be created in a way that complies with the restart priority level. In this example, if the App server depends on the database server, the database server needs to be configured with a priority level higher than or equal to the App server’s.

orchestrated-restart

Validation checks are also performed automatically when this feature is configured, to ensure that circular dependencies or conflicting rules are not created unknowingly.

There are a number of conditions that HA can check to determine the readiness of a VM, and the administrator chooses which of them counts as the acceptable readiness state for orchestrated restarts.

Conditions:

  1. VM has resources secured (same as old behavior)
  2. VM is powered on
  3. VMware Tools heartbeat detected
  4. VMware Tools Application heartbeat detected

Post condition delays:

  1. User-configurable delay – e.g. wait 10 minutes after power on

The configuration of the dependency chain is very simple. In the cluster configuration of the Web Client, you first create the VM groups under VM/Host Groups. Each group should include only a single VM.

orchestrated-restart2-jpg

The next thing to configure is the VM rules in the VM/Host Rules section. This is where you define the dependencies between the VM groups. Since each group contains only a single VM, you are essentially creating a VM-to-VM rule.

orchestrated-restart3

In previous releases we were able to achieve such behavior using e.g. SRM during a failover to a recovery site. However, there are plenty of use cases where it is necessary to enforce the correct restart order within a single site and HA cluster. Fortunately, now it is possible 🙂

vSphere 6.5 – Automatic UNMAP

In vSphere 6.5 VMware automates the UNMAP process: VMFS tracks the deleted blocks and reclaims the freed space from the backend array in the background. This background operation is designed to keep the storage I/O impact of UNMAP operations minimal.

storage3

Just to remind you – UNMAP is a VAAI primitive with which we can reclaim dead or stranded space on a thinly provisioned VMFS volume. Currently this can be initiated by running a simple ESXCLI command, which frees the deleted blocks on the storage side.
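
For reference, a manual reclamation on a VMFS 5 datastore looks roughly like this (the datastore name is an example; --reclaim-unit controls how many blocks are unmapped per iteration):

  # Manually reclaim dead space on a thin-provisioned VMFS volume
  esxcli storage vmfs unmap --volume-label=Datastore01 --reclaim-unit=200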


Let’s go through an UNMAP example illustrating the thought process:

  1. A VM is provisioned on a vSphere host and assigned a 6 TB VMDK.
  2. Thin-provisioned VMDK storage space is allocated on the storage array.
  3. The user installs a POC data analytics application and creates a 400 GB database VM.
  4. Once the work with this database is done, the user deletes the DB VM – VMFS initiates space reclamation in the background.
  5. The 400 GB of space on the array side is freed and claimed back.

One of the design goals is to make sure there is minimal storage I/O impact due to UNMAP. VMware is also looking into using the new SESparse format as the snapshot file format to enable this.

Space reclamation is critical when customers use all-flash storage: due to the higher cost of flash, any storage usage optimization provides a better ROI.

Summary:

  • Automatic UNMAP does not require any manual intervention or scripts
  • Space reclamation happens in the background
  • CLI based UNMAP continues to be supported
  • Storage I/O impact due to automatic UNMAP is minimal

Automatic UNMAP is supported in vSphere 6.5 with new VMFS 6 datastores.
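
On a VMFS 6 datastore the automatic reclamation behaviour is exposed as a per-datastore setting. A sketch of checking and adjusting it (the datastore name is an example and the exact option names should be verified against your 6.5 build):

  # Show the automatic space reclamation settings of a VMFS 6 datastore
  esxcli storage vmfs reclaim config get --volume-label=Datastore01

  # Example: set the reclamation priority back to the default "low"
  esxcli storage vmfs reclaim config set --volume-label=Datastore01 --reclaim-priority=low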

vSphere 6.5 – VMFS6 & 512e HDD support

vSphere 6.5 introduces the new VMFS 6 – but why do we need a new version, you ask? The answer: to support a new HDD type, which brings us to the current storage market situation. With 512-byte sector HDDs, vendors are hitting drive capacity limits – they cannot go beyond a certain size without compromising resilience and reliability (not the best option in the case of our data). To provide large-capacity drives, the storage industry is moving to Advanced Format (AF) drives, which use a large physical sector size of 4096 bytes.

storage1

So how does it help? With the new AF (4K sector size) format, disk drive vendors can create more reliable, larger-capacity HDDs to support growing storage needs. These drives are also more cost effective, as they provide a better $/GB ratio.

Two kinds of 4k drives:

  1. 512-byte emulation (512e) mode – these are 4Kn drives that expose a logical sector size of 512 bytes while the physical sector size is 4K. This mode is important because it continues to work with legacy OSes and applications while providing large-capacity drives. The main disadvantage of these drives is that they trigger a read-modify-write (RMW) for storage I/O smaller than 4K. This RMW happens in the drive firmware and may have some performance impact when a large number of storage I/Os are smaller than 4K.

storage2

  2. 4Kn drives – these drives expose both the logical and physical sector size as 4K. They cannot work with legacy OSes and applications; the whole stack, from the VM guest OS through ESXi to the storage, has to be 4Kn-aware.

Let’s now look at a few advantages of 4K drives.

  • 4K drives require less space for error correction codes than regular 512-byte sector drives. This results in greater data density, which provides a better TCO (total cost of ownership),
  • 4K drives have a larger ECC field for error correction codes and so inherently provide better data integrity,
  • 4K drives are expected to perform better than current 512n drives; however, this is only true when the guest OS issues I/Os aligned to the 4K sector size.

 

New VMFS 6 SUMMARY:

  • VMFS 5 does not support 4K drives even in emulation mode. If a 512e drive is formatted with VMFS-5 it is still recognized, but this configuration is not supported by VMware,
  • VMFS-6 is designed from the ground up to support AF drives in 512e mode,
  • VMFS-6 metadata is aligned with the 4K sector size,
  • 512e drives can only be used with VMFS-6.
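
To check which sector format a given device reports to ESXi, there is a capacity listing in esxcli (available in 6.5, if I recall correctly – verify on your build):

  # List devices with their logical/physical sector sizes and format type (512n / 512e)
  esxcli storage core device capacity list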

vSphere 6.5 – Network-aware DRS

VMware Distributed Resource Scheduler (DRS) is a well-known VMware feature and one of the most helpful, especially in bigger environments. It is used to balance the load (CPU and memory) across the ESXi hosts in a cluster. However, in previous releases it had an imperfection.

Let’s imagine the following situation, shown below:

networkdrs1

Assume you have three hosts in the cluster with 6 VMs powered on. If you power on another VM, DRS will place it on the first host.

Although host 1 has its network 100% saturated, the VMs running on it are not consuming a large amount of CPU/memory, so the next VM will still be placed on it – which will make the network troubles even bigger.

Fortunately, in vSphere 6.5 DRS will help us avoid such situations, thanks to a new feature called Network-aware DRS, which uses a new DRS algorithm. DRS will now also consider network bandwidth when making placement recommendations. It calculates the Tx and Rx of the connected physical uplinks and avoids placing new VMs on hosts whose network is more than 80% utilized. This is an additional placement consideration applied after all other placement decisions are made.
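
If you want to eyeball the uplink utilization that this 80% threshold refers to, the counters are available on the host itself. A quick sketch (vmnic0 is an example uplink name; the esxcli output gives cumulative counters, so sample it twice to derive throughput, or simply watch it live in esxtop):

  # Cumulative Tx/Rx counters for a physical uplink
  esxcli network nic stats get -n vmnic0

  # Live per-uplink utilization: run esxtop and press 'n' for the network view
  esxtop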

 

Caution! DRS will not reactively balance the hosts based on network utilization. Perhaps it will in future releases?

 

To sum up – Network-aware DRS:

  • Adds network bandwidth considerations by calculating host network saturation (Tx & Rx of the connected physical uplinks),
  • Avoids over-subscribing a host’s network links, although this is not guaranteed – it is a best-effort approach, and CPU & memory performance is still prioritized over network.

What’s New in vSphere 6.5 – ProactiveHA

Proactive HA is a new feature available in the recently released vSphere 6.5. It is the kind of feature that will help you protect your environment even better in case of hardware failure.

Nowadays all of the hardware components are redundant, including power supplies, fans, network cards, etc. However, a whole-server failure most often occurs when one of these theoretically redundant components fails. To picture it, think about a power supply failure: the second one is still there, but while it is the only one working it is much more heavily loaded. (You can observe something similar with hard disks in a RAID group – the probability of a disk failure is highest during a RAID rebuild.)

Proactive HA will help you protect the environment in such situations. It detects degraded hardware conditions on a host and allows you to evacuate the VMs before a trivial issue causes a serious outage. For this feature to function, the hardware vendor must participate: their hardware monitoring solution advertises the health of the hardware, and vCenter queries that system to get the status of components such as fans, memory and power supplies. vSphere can then be configured to respond according to the failure.

 

To make this work, there is a new ESXi host state in vSphere 6.5 – Quarantine Mode. It is similar to Maintenance Mode but not as severe. It means that DRS will attempt to evacuate all VMs from the host, but only if:

  • There is no performance impact on any virtual machine in the cluster,
  • None of the business rules is disregarded,
  • Additionally, any soft affinity or anti-affinity rules will not be overridden by the evacuation. However, DRS will seek to avoid placing any new VMs on the quarantined host.

To configure the Proactive HA features, find the Partial Failures and Responses section and set how vSphere should respond to partial failures. The options are to place a degraded host into Quarantine Mode, Maintenance Mode, or Mixed Mode.

Mixed Mode means that for moderate degradation the host will be placed into Quarantine Mode, while for severe failures it will be placed into Maintenance Mode.

proactiveha

At the time of writing and of vSphere 6.5 GA availability, the supported failure condition types are:

  • Memory
  • Power
  • Fan
  • Network
  • Storage

vSphere 6.5 – VM Encryption

 

The next new security functionality in vSphere 6.5 – VM encryption – is implemented via storage policies. If you assign an encryption storage policy to a VM, its disks will be encrypted.

security7

Key features:

  • No modification within the Guest.
  • VM Agnostic
    • Guest OS
    • DataStore
    • HW Version
    • Policy driven
  • Encrypts both VMDK and VM files
  • No access to encryption keys by the Guest
  • Full support of vMotion

Diagram below shows how it works:

 

security8

  1. Register a VM on a host and configure the (new or existing) VM with an Encryption Enabled storage policy; vCenter is configured with a KMIP server.
  2. vCenter gets a key from the KMIP server. That key is used to encrypt the VM files and the VM disks.
  3. vCenter loads the key into the ESXi hosts. All hosts that don’t have the key will get it, to support DRS/HA.
  4. Once the key is loaded into the key cache on the ESXi host, encryption and decryption of the disk happen at the I/O Filter level (introduced in 6.0 U1).

But let’s ask: who can manage VM encryption?

… Security administrators will manage the KMS and the keys; only a subset of vSphere admins will (or should) manage encryption within vSphere. There is a new default role, "No Cryptography Administrator", and additionally we get new vCenter crypto privileges such as Encrypt, Decrypt, Manage Keys and Clone. So we can delegate encryption privileges to various admins via custom roles, in the way we know well from previous releases – an example below:

security9

Let’s see how to add the KMS configuration in our vCenter – it is straightforward, you just need to find the new tab in the Web Client and add a new connection:

security10

security11

… and finally some examples of supported KMS servers (the list below is not complete).

Note!

Most KMIP 1.1-compliant key managers are approved – but, as usual, verify against the VMware interoperability matrix to be 100% sure.

security12

vSphere 6.5 Security Enhancements  

 

In this article I will try to point out the most important security enhancements in the recently released vSphere 6.5 platform. As we could hear from the pre-GA sneak-peek information, VMware is building security in 3 areas:

  • Secure access – log monitoring and audit,
  • Secure infrastructure – a hypervisor with a minimal footprint (= minimal attack surface) and a cryptographic option to provide Secure Boot,
  • Secure data – hypervisor-level encryption of VM data.

Let’s go deeper into the technology – below is a list of security features implemented in vSphere 6.5 that we will discuss in detail:

  • Enhanced Logging
  • VM Encryption
  • Backup and Restore encrypted VMs
  • Encrypted vMotion
  • Secure Boot – ESXi and VMs

 

I’ll provide links to the features above in the near future. Please stay tuned 🙂