
vSphere 6.5 – Stronger security with NFS 4.1

NFS 4.1 has been supported since vSphere 6.0, and the focus now is on stronger security. vSphere 6.5 improves security by providing strong cryptographic algorithms with Kerberos (AES). IPv6 is also supported, but not yet in combination with Kerberos – that is another area being worked on, along with support for integrity checking.

As we know, the vSphere 6 NFS client does not support the more advanced AES encryption types. So let's take a look at what is new for NFS in vSphere 6.5 in terms of encryption standards:

Summary:

  • NFS 4.1 has been supported since vSphere 6.0,
  • vSphere 6.5 supports stronger cryptographic algorithms with Kerberos authentication, using AES,
  • Kerberos integrity checking (SEC_KRB5i) is introduced alongside Kerberos authentication in vSphere 6.5,
  • Support for IPv6 with Kerberos is added,
  • Host Profiles support for NFS 4.1 is added,
  • Better security for customer environments.
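
For illustration, mounting an NFS 4.1 datastore with Kerberos security from the ESXi command line could look roughly like the sketch below. The server name, export path and datastore name are placeholders, and the exact esxcli flag names (in particular the security option) should be verified against the esxcli reference for your build:

//hypothetical example – mount an NFS 4.1 datastore with Kerberos authentication and integrity checking

#esxcli storage nfs41 add --hosts=nfs01.example.local --share=/export/ds01 --volume-name=NFS41-KRB5I --sec=SEC_KRB5I

//verify the mounted NFS 4.1 datastores

#esxcli storage nfs41 list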

 

vSphere 6.5 – New scale limits for paths & LUNs

In vSphere 6.5 VMware doubled the previous limits and continues to work on further scale improvements in this area. The pre-6.5 limits pose a challenge: for example, some customers have 8 paths to a LUN, and in that configuration a cluster can have a maximum of 128 LUNs. Many customers also tend to use smaller LUNs to segregate important data for easy backup and restore, and this approach can likewise exhaust the current LUN and path limits.

Larger LUN limits enable larger cluster sizes and hence reduce management overhead.

SUMMARY:

  • The previous limit was 256 LUNs and 1024 paths,
  • This limits customer deployments requiring higher path counts,
  • Customers who need small LUNs for important files/data require larger LUN limits to work with,
  • Larger path/LUN limits enable larger cluster sizes, reducing the overhead of managing multiple clusters,
  • vSphere 6.5 supports 512 LUNs and 2K paths.
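
To see how close a host is to these limits, the standard esxcli storage inventory commands can be used. A minimal sketch, assuming the field labels shown by your build match the ones grepped for here:

//count SCSI devices (LUNs) visible to the host

#esxcli storage core device list | grep -c "Devfs Path"

//count all storage paths on the host

#esxcli storage core path list | grep -c "Runtime Name"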

 

vSphere 6.5 – Automatic UNMAP

In vSphere 6.5 VMware is looking into automating the UNMAP process, where VMFS would track deleted blocks and reclaim the deleted space from the backend array in the background. This background operation should ensure that UNMAP operations have minimal storage I/O impact.


Just as a reminder – UNMAP is a VAAI primitive with which we can reclaim dead or stranded space on a thinly provisioned VMFS volume. Currently this has to be initiated by running a simple esxcli command, which frees the deleted blocks on the storage array, as sketched below.
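
A minimal sketch of the manual approach, assuming a datastore labelled Datastore01 (the label and the reclaim unit size are placeholders):

//manually reclaim unused blocks on a thin-provisioned VMFS datastore, 200 VMFS blocks per iteration

#esxcli storage vmfs unmap --volume-label=Datastore01 --reclaim-unit=200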

Let's go through an UNMAP example illustrating the thought process:

  1. A VM is provisioned on a vSphere host and assigned a 6 TB VMDK.
  2. Thin-provisioned storage space for the VMDK is allocated on the storage array.
  3. The user installs a POC data analytics application and creates a 400 GB database VM.
  4. Once the work with this database is done, the user deletes the DB VM – VMFS initiates space reclamation in the background.
  5. The 400 GB of space on the array side should be freed or claimed back.

One of the design goals is to ensure minimal impact of UNMAP on storage I/O. VMware is also looking into using the new SESparse format as the snapshot file format to enable this.

Space reclamation is especially critical for customers using all-flash storage: given the higher cost of flash, any storage usage optimization provides a better ROI.

Summary:

  • Automatic UNMAP does not require any manual intervention or scripts
  • Space reclamation happens in the background
  • CLI based UNMAP continues to be supported
  • Storage I/O impact due to automatic UNMAP is minimal

Automatic UNMAP is supported in vSphere 6.5 with new VMFS 6 datastores.
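
As a hedged sketch of how this surfaces on the command line, the automatic reclamation behaviour is expected to be configurable per VMFS 6 datastore. The datastore label is a placeholder and the exact flag names should be checked against the 6.5 esxcli reference:

//hypothetical example – inspect the automatic space-reclamation settings of a VMFS 6 datastore

#esxcli storage vmfs reclaim config get --volume-label=Datastore01

//lower the reclamation priority, or set it to none to disable background UNMAP

#esxcli storage vmfs reclaim config set --volume-label=Datastore01 --reclaim-priority=none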

vSphere 6.5 – VMFS6 & 512e HDD support

vSphere 6.5 introduces a new VMFS 6 – but why do we need a new version, you ask? The answer: to support a new HDD type, which points us to the current storage market situation. With 512-byte sector HDDs, vendors are hitting drive capacity limits – they cannot go beyond a certain size without compromising resilience and reliability (not the best option when it comes to our data). To provide large-capacity drives, the storage industry is moving to Advanced Format (AF) drives, which use a large physical sector size of 4096 bytes.


So how does it help? With the new AF (4K sector size) format, disk drive vendors can create more reliable, larger-capacity HDDs to support growing storage needs. These drives are also more cost effective, as they provide a better $/GB ratio.

Two kinds of 4k drives:

  1. 512-byte emulation (512e) mode – these drives have a 4K physical sector size but expose a 512-byte logical sector size. This mode is important because it continues to work with legacy OSes and applications while providing large-capacity drives. The main disadvantage of these drives is that they trigger a read-modify-write (RMW) for any storage I/O smaller than 4K. This RMW happens in the drive firmware and may have some performance impact in cases where a large share of the storage I/O is smaller than 4K.


  2. 4K native (4KN) drives – these drives expose both the logical and the physical sector size as 4K. They cannot work with legacy OSes and applications; the whole stack, from the VM guest OS through ESXi to the storage, has to be 4KN-aware.

Let's now look at a few advantages of 4K drives.

  • 4K drives require less space for error correction codes than regular 512-byte sector drives. This results in greater data density, which provides a better TCO (total cost of ownership),
  • 4K drives have a larger ECC field for error correction codes and so inherently provide better data integrity,
  • 4K drives are expected to perform better than the current 512n drives. However, this is only true when the guest OS has been configured to issue I/Os aligned to the 4K sector size.

 

New VMFS 6 SUMMARY:

  • VMFS-5 does not support 4K drives, even in emulation mode. If a 512e drive is formatted with VMFS-5 it is still recognized, but this configuration is not supported by VMware,
  • VMFS-6 is designed from the ground up to support AF drives in 512e mode,
  • VMFS-6 metadata is designed to be aligned with the 4K sector size,
  • 512e drives are only supported with VMFS-6.
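
To check which sector format a device reports on an ESXi 6.5 host, a command along the following lines should help; treat it as a hedged sketch and verify its availability in your build:

//hypothetical example – show logical/physical block sizes and the format type (512n, 512e, 4Kn) per device

#esxcli storage core device capacity list
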
Increase VMware ESXi iSCSI storage performance? – let's demystify all the tips and tricks

 

Before we start, I would like to describe the main motivation for writing this article, which is quite simple: to gather in one place all the basic theoretical background on the iSCSI protocol and the best practices for its implementation on the vSphere platform, with special consideration of potential performance tuning tips & tricks. This is the first part of a series in which we (I'm counting on readers' participation) will try to gather and verify all these "magical" parameters, often treated as myths by many admins.

To begin, let's start with something boring but, as usual, necessary 😉 … the theoretical background.

iSCSI is a network-based storage standard that enables connectivity between an iSCSI initiator (client) and a target (storage device) over a well-known IP network. To explain this storage standard very simply: SCSI packets are encapsulated in IP packets and sent over a traditional TCP/IP network, where targets and initiators de-encapsulate the TCP/IP datagrams to read the SCSI commands. There are a couple of implementation options, because the TCP/IP network components transporting the SCSI commands can be realized in software and/or hardware.

 

Important iSCSI standard concepts and terminology:

  • Initiator – functions as an iSCSI client. An initiator typically serves the same purpose to a computer as a SCSI bus adapter would, except that, instead of physically cabling SCSI devices (like hard drives and tape changers), an iSCSI initiator sends SCSI commands over an IP network. Initiators can be divided into two broad types:
    • A software initiator implements iSCSI in a code component that uses an existing network card to emulate a SCSI device and communicate via the iSCSI protocol. Software initiators are available for most popular operating systems and are the simplest and most economical way of deploying iSCSI.
    • A hardware initiator is based on dedicated hardware, typically running special firmware that implements iSCSI on top of the network adapter, which acts as an HBA in the server. Hardware offloads the CPU overhead of iSCSI and TCP/IP processing, which is why it may improve the performance of servers that use the iSCSI protocol to communicate with storage devices.
  • Target – functions as a resource located on an iSCSI server, most often a dedicated network-connected storage device (well known as a storage array) that provides the target as an access gateway to its resources. It may also be a “general-purpose” computer or even a virtual machine, because, as with initiators, an iSCSI target can be realized in software.
  • Logical unit number – in iSCSI terms a LUN stands for a logical unit and is identified by a unique number. A LUN is a representation of an individual SCSI (logical) device that is provided/accessible through a target. Once the iSCSI connection is established (emulating a connection to a SCSI HDD), initiators treat iSCSI LUNs as they would a raw SCSI or IDE hard drive. In many deployments a LUN represents a slice of a large RAID (Redundant Array of Independent Disks) array, leaving the underlying filesystem to the operating system that uses it.
  • Addressing – iSCSI uses TCP/IP ports (usually 860 and 3260) for the protocol, and objects are addressed with special names that refer to both iSCSI initiators and targets. iSCSI provides the following name formats:
    • iSCSI Qualified Name (IQN), built from:
      • the literal "iqn" prefix,
      • the date (yyyy-mm) that the naming authority took ownership of the domain,
      • the reversed domain name of the naming authority,
      • an optional ":" prefixing a storage target name specified by the naming authority.
    • Extended Unique Identifier (EUI)

Format: eui.{EUI-64 bit address} (eui.xxxxxxxxx)

  • T11 Network Address Authority (NAA)

Format: naa.{NAA 64 or 128 bit identifier} (naa.xxxxxxxxxxx)

Note: IQN-format addresses are the most common.
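
For illustration, the three formats look roughly like this (hypothetical identifiers; the VMware software initiator generates IQNs of a similar shape):

//illustrative, hypothetical identifiers
iqn.1998-01.com.vmware:esx-host01-5b3f2c1a   (IQN)
eui.02004567a425678d                         (EUI)
naa.600508b1001c83d2e7f1a2b3c4d5e6f7         (NAA)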

  • iSNS – iSCSI initiators can locate appropriate storage resources using the Internet Storage Name Service (iSNS) protocol. iSNS provides iSCSI SANs with the same management model as dedicated Fibre Channel SANs. In practice, administrators can meet many iSCSI deployment goals without using iSNS.

The iSCSI protocol is maintained by the IETF – for more information please see RFCs 3720, 3721, 3722, 3723, 3747, 3780, 3783, 4018, 4173, 4544, 4850, 4939, 5046, 5047, 5048 and 7143:

http://tools.ietf.org

  

And finally, for those who dared to read through all the boring theory – the main dish: my list of performance "tips and tricks" to demystify over the course of this blog series:

  1. iSCSI initiator (hardware or software) queue depth:

        //example for software iSCSI initiator

#esxcfg-module -s iscsivmk_LunQDepth=64 iscsi_vmk
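
        //hypothetical check – list the iscsi_vmk module parameters to confirm the queue-depth setting (esxcfg-module changes take effect after a reboot)

#esxcli system module parameters list -m iscsi_vmk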

  2. Adjusting the Round Robin IOPS limit:

        //example for max iops and bytes parameter

#esxcli storage nmp psp roundrobin deviceconfig set --type=iops --iops=10 --device=naa.xxxxxxxxxxxx

#esxcli storage nmp psp roundrobin deviceconfig set --type=bytes --bytes=8972 --device=naa.xxxxxxxxxxx
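
        //hypothetical check (the device identifier is a placeholder) – confirm the active PSP and its round-robin settings

#esxcli storage nmp device list -d naa.xxxxxxxxxxxx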

  3. NIC/HBA driver and firmware version on the ESXi hypervisor

      // https://www.vmware.com/resources/compatibility/search.php

  4. Using jumbo frames for iSCSI

      // https://kb.vmware.com/kb/1007654
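
      //hypothetical example (vSwitch1 and vmk1 are placeholders) – the MTU has to be raised to 9000 end-to-end, on the vSwitch and on the iSCSI VMkernel port

#esxcli network vswitch standard set -v vSwitch1 -m 9000

#esxcli network ip interface set -i vmk1 -m 9000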

  5. Controlling LUN queue depth throttling

//example based on kb: http://kb.vmware.com/kb/1008113

#esxcli storage core device set --device naa.xxxxxxxxxx --queue-full-threshold 8 --queue-full-sample-size 32

  6. Delayed ACK enable/disable

// https://kb.vmware.com/kb/1002598

  7. Port binding considerations – to use or not to use

     // https://kb.vmware.com/kb/2038869
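
      //hypothetical example (vmhba33, vmk1 and vmk2 are placeholders) – bind the iSCSI VMkernel ports to the software iSCSI adapter, then verify

#esxcli iscsi networkportal add -A vmhba33 -n vmk1

#esxcli iscsi networkportal add -A vmhba33 -n vmk2

#esxcli iscsi networkportal list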

  

In the next article I will try to gather all the ESXi hypervisor-level configuration best practices and describe the test environment and test methodology.

So let's end this pilot episode with an open question – is it worth using/implementing any of these in a vSphere environment?