Increase VMware ESXi iSCSI storage performance? – let's demystify all tips and tricks


Before we start, I would like to describe the main motivation for writing this article, which is quite simple – to gather in one place the basic theoretical background on the iSCSI protocol and best practices for its implementation on the vSphere platform, with special consideration for potential performance tuning tips & tricks. This is the first part of a series in which we (I'm counting on reader participation) try to gather and verify all these “magical” parameters, often treated as myths by many admins.

To begin, let's start with something boring but, as usual, necessary 😉 … the theoretical background.

iSCSI is a network-based storage standard that enables connectivity between an iSCSI initiator (client) and a target (storage device) over a standard IP network. To explain this storage standard in a very simple way: SCSI commands are encapsulated in TCP/IP packets and sent over a traditional TCP/IP network, where targets and initiators de-encapsulate the TCP/IP datagrams to read the SCSI commands. We have a couple of options when implementing this standard, because the TCP/IP network components transporting SCSI commands can be realized in software and/or hardware.


Important iSCSI standard concepts and terminology:

  • Initiator – functions as an iSCSI client. An initiator typically serves the same purpose to a computer as a SCSI bus adapter would, except that, instead of physically cabling SCSI devices (like hard drives and tape changers), an iSCSI initiator sends SCSI commands over an IP network. Initiators can be divided into two broad types:
    • A software initiator implements iSCSI as a code component that uses an existing network card to emulate a SCSI device and communicate via the iSCSI protocol. Software initiators are available for most popular operating systems and are the simplest and most economical method of deploying iSCSI.
    • A hardware initiator is based on dedicated hardware, typically running special firmware that implements iSCSI on top of the network adapter, which acts as an HBA card in the server. Hardware decreases the CPU overhead of iSCSI and TCP/IP processing, which is why it may improve the performance of servers that use the iSCSI protocol to communicate with storage devices.
  • Target – functions as a resource located on an iSCSI server, most often a dedicated network-connected storage device (well known as a storage array) that provides the target as an access gateway to its resources. But it may also be a “general-purpose” computer or even a virtual machine – because, as with initiators, an iSCSI target can be realized in software.
  • Logical unit number – in iSCSI terms, LUN stands for logical unit and is specified by a unique number. A LUN is a representation of an individual SCSI (logical) device that is provided/accessible through a target. After an iSCSI connection is established (emulating a connection to a SCSI hard drive), initiators treat iSCSI LUNs as they would a raw SCSI or IDE hard drive. In many deployments a LUN represents part of a large RAID (Redundant Array of Independent Disks) array, and the choice of filesystem on top of it is left to the operating system that uses it.
  • Addressing – iSCSI uses TCP ports (usually 860 and 3260) for the protocol, and special names to address objects, referring to both iSCSI initiators and targets. iSCSI provides the following name formats:
    • iSCSI Qualified Name (IQN), built from:
      • the literal prefix iqn
      • the date (yyyy-mm) that the naming authority took ownership of the domain
      • the reversed domain name of the authority
      • an optional “:” prefixing a storage target name specified by the naming authority
    • Extended Unique Identifier (EUI)

Format: eui.{EUI-64 bit address} (eui.xxxxxxxxx)

  • T11 Network Address Authority (NAA)

Format: naa.{NAA 64 or 128 bit identifier} (naa.xxxxxxxxxxx)

Note: IQN-format addresses are the most common.
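On an ESXi host you can inspect the IQN the software initiator actually uses straight from the shell. A minimal sketch – the adapter name vmhba33 is an assumption (software iSCSI adapters are usually numbered vmhba3x; check with the list command first):

```shell
//list all iSCSI adapters with their iqn.* names
#esxcli iscsi adapter list

//show the details of a single adapter; vmhba33 is just an example name
#esxcli iscsi adapter get --adapter=vmhba33
```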

  • iSNS – iSCSI initiators can locate appropriate storage resources using the Internet Storage Name Service (iSNS) protocol. iSNS provides iSCSI SANs with the same management model as dedicated Fibre Channel SANs. In practice, administrators can implement many deployment goals for iSCSI without using iSNS.

The iSCSI protocol is under IETF responsibility – for more information please see RFCs 3720, 3721, 3722, 3723, 3747, 3780, 3783, 4018, 4173, 4544, 4850, 4939, 5046, 5047, 5048, 7143.



And finally, for those who dared to read all the boring theory – the main dish: my performance “tips and tricks” list to demystify over this blog series:

  1. iSCSI initiator (hardware or software) queue depth:

        //example for software iSCSI initiator

#esxcfg-module -s iscsivmk_LunQDepth=64 iscsi_vmk
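After setting the module option (a reboot is required for it to take effect), it is worth reading the value back. A sketch, assuming the software iSCSI module name iscsi_vmk and the same placeholder device ID used elsewhere in this post:

```shell
//show the options currently set on the software iSCSI module
#esxcfg-module -g iscsi_vmk

//per-device view: "Device Max Queue Depth" should reflect the new value after reboot
#esxcli storage core device list --device naa.xxxxxxxxxxxx
```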

  2. Adjusting the Round Robin IOPS limit:

        //examples for the IOPS and bytes limit parameters

#esxcli storage nmp psp roundrobin deviceconfig set --type=iops --iops=10 --device=naa.xxxxxxxxxxxx

#esxcli storage nmp psp roundrobin deviceconfig set --type=bytes --bytes=8972 --device=naa.xxxxxxxxxxx
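Before and after changing the limit, read the active policy configuration back – a sketch using the same placeholder device ID:

```shell
//show the active Round Robin settings (IOPS/bytes limits) for one device
#esxcli storage nmp psp roundrobin deviceconfig get --device=naa.xxxxxxxxxxxx

//confirm the device actually uses the Round Robin (VMW_PSP_RR) path selection policy
#esxcli storage nmp device list --device=naa.xxxxxxxxxxxx
```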

  3. NIC/HBA driver and firmware versions on the ESXi hypervisor
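A quick way to read the driver and firmware versions without leaving the shell – vmnic0 is an example NIC name, adjust it to your host:

```shell
//NIC driver name, driver version and firmware version
#esxcli network nic get --nic-name=vmnic0

//list storage adapters (HBAs) with the drivers they use
#esxcli storage core adapter list
```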


  4. Using jumbo frames for iSCSI
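Jumbo frames must be enabled end-to-end (vSwitch, VMkernel port, physical switches, array). A hedged sketch – vSwitch1, vmk1 and the target IP 10.0.0.50 are assumptions, not values from this post:

```shell
//raise the MTU on the standard vSwitch and on the iSCSI VMkernel interface
#esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
#esxcli network ip interface set --interface-name=vmk1 --mtu=9000

//test the whole path: 8972 bytes payload + 28 bytes of headers = 9000; -d forbids fragmentation
#vmkping -I vmk1 -d -s 8972 10.0.0.50
```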


  5. Controlling LUN queue depth throttling

//example based on the VMware KB:

#esxcli storage core device set --device naa.xxxxxxxxxx --queue-full-threshold 8 --queue-full-sample-size 32

  6. Delayed ACK enable/disable
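Delayed ACK is toggled per iSCSI adapter (or per target), and changing it is usually done through the vSphere Client advanced options. The current value can be read from the shell – vmhba33 is an example adapter name, and I am assuming the parameter is listed under the name DelayedAck:

```shell
//list adapter-level iSCSI parameters; look for the DelayedAck row
#esxcli iscsi adapter param get --adapter=vmhba33
```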


  7. Port binding considerations – to use or not to use
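Port binding ties each VMkernel interface to the software iSCSI adapter, so every vmk becomes a separate path. A sketch, assuming vmhba33 as the software adapter and vmk1/vmk2 as dedicated iSCSI VMkernel ports:

```shell
//bind two VMkernel ports to the software iSCSI adapter
#esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
#esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

//verify the bindings and their compliance status
#esxcli iscsi networkportal list --adapter=vmhba33
```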



In the next article I will try to gather all the ESXi hypervisor-level configuration best practices and describe the test environment and test methodology.

So let's end this pilot episode with an open question – is it worth implementing any of these in a vSphere environment?

