
Infinio Accelerator – how does it work?

In my last post about Infinio Accelerator we introduced the product and its basics. Now it is time to go deeper – how does this server-side cache work?

Infinio’s cache inserts server RAM (and optionally, flash devices) transparently into the I/O stream. By dynamically populating server-side media with the hottest data, Infinio’s software reduces storage requirements to a small fraction of the workload size. Infinio is built on VMware’s vSphere APIs for I/O Filtering (VAIO) framework. This enables administrators to use VMware’s Storage Policy Based Management to apply Infinio’s storage acceleration filter to VMs, VMDKs, or groups of VMs transparently.
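To give a feel for what that looks like in practice, here is a minimal PowerCLI sketch of attaching a storage policy to a VM through SPBM. The policy name "Infinio-Acceleration" and the VM name are purely hypothetical placeholders for whatever policy you build around Infinio's I/O filter:

# assign a (hypothetical) Infinio-backed storage policy to a VM via SPBM
$policy = Get-SpbmStoragePolicy -Name "Infinio-Acceleration"   # placeholder policy name
$vm     = Get-VM -Name "app-vm01"                              # placeholder VM name
Get-SpbmEntityConfiguration -VM $vm |
    Set-SpbmEntityConfiguration -StoragePolicy $policy         # applies the policy to the VM home object

For the individual VMDKs you would do the same with Get-SpbmEntityConfiguration -HardDisk (Get-HardDisk $vm).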

infinio3

An Infinio cluster seamlessly supports typical cluster-wide VMware operations, such as vMotion, HA, and DRS. Introduction of Infinio doesn’t require any changes to the environment. Datastore configuration, snapshot and replication setup, backup scripts, and integration with VMware features like VAAI and vMotion all remain the same.

infinio4

 

Infinio’s core engine is a content-based memory cache that scales out to accommodate expanding workloads and additional nodes. Deduplication enables the memory-first design, which can be complemented with flash devices for large working sets. In a tiered configuration such as this, the cache is persistent, enabling fast warming after either planned or unplanned downtime.

infinio5

Note: Infinio’s transparent server-side cache doesn’t require any changes to the environment!

Let’s go through the installation – it is easy and entirely non-disruptive, with no reboots or downtime. It can be completed in just a few steps via an automated installation wizard. The installation wizard collects the vCenter credentials and location and the desired Management Console information, then automatically deploys the console:

  1. Run the Infinio setup and agree to the license terms

infinio6

2. Add the vCenter FQDN and user credentials (in this example we go with the SSO admin)

infinio7

3. Select the destination ESXi host and other parameters (datastore and network) to deploy the OVF-based Management Console VM

infinio8

  4. Set the Management Console hostname and network information (IP address, DNS)

infinio9

  5. Create an admin user for the Management Console

infinio10

  6. Set up auto-support (in our trial scenario we skip this step)

infinio11

  7. Preview the configuration and deploy the Management Console.

infinio12

infinio13

  8. Log in to the Management Console

infinio14

infinio15

In the next article we will provide some real performance results from our lab tests – so stay tuned 🙂

 

Mysterious Infinio – Product overview

Shared storage performance and characteristics (IOPS, latency) are crucial for overall vSphere platform performance and user satisfaction. With the advent of SSD and memory cache solutions we have many options to choose from for storage acceleration (local SSD, array-side SSD, server-side SSD). Let’s discuss server-side caching further – the act of caching data on the server.

Data can be cached anywhere and at any point on the server where it makes sense. It is common to cache frequently used data from the database to prevent hitting the database every time the data is required. We cache the results of expensive operations, such as competition scores, since recalculating them is costly in terms of both processor and database usage. It is also common to cache pages or page fragments so that they don’t need to be generated for every visitor.

In this article I would like to introduce one of the commercial server-side caching solutions: Infinio Accelerator 3 from Infinio.

infinio1

Infinio Accelerator increases IOPS and decreases latency by caching a copy of the hottest data on server-side resources such as RAM and flash devices. Native inline deduplication ensures that all local storage resources are used as efficiently as possible, reducing the cost of performance. Infinio is built on VMware’s VAIO (vSphere APIs for I/O Filtering) framework, which is the fastest and most secure way to intercept I/O coming from a virtual machine. Its benefits can be realized on any storage that VMware supports; in addition, integrations with VMware features like DRS, SDRS, VAAI and vMotion all continue to function the same way once Infinio is installed. Finally, future storage innovations that VMware releases will be available immediately through I/O Filter integration.

infinio2

The I/O Filter is the most direct path to storage for capabilities like caching and replication that need to intercept the data path. (Image courtesy of VMware)

Licensing

Infinio is licensed per ESXi host in an Infinio cluster. Software may be purchased for perpetual or term use:

  • A perpetual license allows the use of the licensed software indefinitely with an annual cost for support and maintenance.
  • A term license allows the use of software for one year, including support and maintenance.

For more information on licensing and pricing, contact sales@infinio.com.

System requirements

Infinio Accelerator requires at minimum VMware vSphere ESXi 6.0 U2 (Standard, Enterprise, or Enterprise Plus) and VMware vCenter Server 6.0 U2 (a quick PowerCLI version check is sketched below).

Note! vSphere 6.5 is supported and on the VMware HCL!

Infinio works with any VMware supported datastore, including a variety of SAN, NAS, and DAS hardware supporting VMFS, Virtual Volumes (VVOLs), and Virtual SAN (vSAN).

  • Infinio’s cluster size mirrors that of VMware vSphere’s, scaling out to 64 nodes.
  • Infinio’s Management Console VM requires 1 vCPU, 8GB RAM, and 80GB of HDD space.
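If you want to double-check those versions from PowerCLI before installing (assuming you are already connected with Connect-VIServer), a quick sketch:

# list vCenter and ESXi versions/builds to confirm the 6.0 U2 minimum is met
$global:DefaultVIServer | Select-Object Name, Version, Build
Get-VMHost | Select-Object Name, Version, Build | Sort-Object Name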

I’m very happy to announce that we received a very friendly response from Infinio support and got the option to download a trial version of the software – the next articles will describe the product in more depth and show “real life” examples of its use in our lab environment.

Please, stay tuned 🙂

 

VMware Auto Deploy Configuration in vSphere 6.5

 

 

 

The architecture of Auto Deploy has changed in vSphere 6.5. One of the main differences is the Image Builder service built into vCenter and the fact that you can create image profiles through the GUI instead of PowerCLI. That is really good news for those who are not keen on PowerCLI. But let’s go through the new configuration process of Auto Deploy. Below I gathered all the necessary steps to configure Auto Deploy in your environment.

  1. Enable the Auto Deploy services on vCenter Server. Go to Administration -> System Configuration -> Related Objects, then look for and start the following services:
  • Auto Deploy
  • ImageBuilder Service

You can change the startup type to start them with the vCenter server automatically as well.

Caution! In case you do not see any services like on the screen below, the vmonapi and vmware-sca services are probably stopped.

ad1

To start them, log in to vCenter Server through SSH and use the following commands:

# service-control --status          // verify the status of these services

# service-control --start vmonapi vmware-sca          // start the services

ad2

Next, go back to Web Client and refresh the page.

 

  2. Prepare the DHCP server and configure a DHCP scope, including the default gateway. A Dynamic Host Configuration Protocol (DHCP) scope is the consecutive range of possible IP addresses that the DHCP server can lease to clients on a subnet. Scopes typically define a single physical subnet on your network to which DHCP services are offered. Scopes are the primary way for the DHCP server to manage the distribution and assignment of IP addresses and any related configuration parameters to DHCP clients on the network.

When the basic DHCP scope settings are ready, you need to configure additional options (a PowerShell sketch for a Windows DHCP server follows below):

  • Option 066 – with the Boot Server Host Name
  • Option 067 – with the Bootfile Name (this is the file name shown on the Auto Deploy Configuration tab in vCenter Server – kpxe.vmw-hardwired)
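If your DHCP server runs on Windows Server, the scope and both options can also be set from PowerShell. A hedged sketch only – the addresses are made up and the boot file name is the one shown on the Auto Deploy Configuration tab:

# hypothetical subnet, gateway and TFTP server – adjust to your environment
Add-DhcpServerv4Scope -Name "AutoDeploy" -StartRange 192.168.10.50 -EndRange 192.168.10.99 -SubnetMask 255.255.255.0
Set-DhcpServerv4OptionValue -ScopeId 192.168.10.0 -Router 192.168.10.1                        # default gateway
Set-DhcpServerv4OptionValue -ScopeId 192.168.10.0 -OptionId 66 -Value "192.168.10.20"         # Boot Server Host Name (TFTP)
Set-DhcpServerv4OptionValue -ScopeId 192.168.10.0 -OptionId 67 -Value "kpxe.vmw-hardwired"    # Bootfile Name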

ad3

  3. Configure the TFTP server. For lab purposes I nearly always use the SolarWinds TFTP server; it is very easy to manage. You need to copy the TFTP Boot Zip files, available on the Auto Deploy Configuration page observed in step 2, to the TFTP server file folder and start the TFTP service.

ad4

At this stage, when you try to boot your fresh server, it should get an IP address and connect to the TFTP server. In the Discovered Hosts tab of the Auto Deploy configuration you will be able to see the hosts which received IP addresses and some information from the TFTP server, but have no Deploy Rule assigned to them yet.

ad5

  4. Create an Image Profile.

Go to the Auto Deploy Configuration page -> Software Depots tab and Import Software Depot.

ad6

 

Click on Image Profiles to see the Image Profiles that are defined in this Software Depot.

ad7

The ESXi software depot contains the image profiles and software packages (VIBs) that are used to run ESXi. An image profile is a list of VIBs.

 

Image profiles define the set of VIBs to boot ESXi hosts with. VMware and VMware partners make image profiles and VIBs available in public depots. Use the Image Builder PowerCLI to  examine the depot and the Auto Deploy rule engine to specify which image profile to assign to which host. VMware customers can create a custom image profile based on the public image profiles and VIBs in the depot and apply that image profile to the host.
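For comparison, the classic Image Builder PowerCLI way of examining a depot and cloning a profile looks roughly like this. The depot URL is VMware's public online depot; the profile and vendor names below are only examples:

# add the public VMware depot (a local offline-bundle zip path works here too)
Add-EsxSoftwareDepot https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
# list the image profiles the depot contains
Get-EsxImageProfile | Select-Object Name, Vendor, CreationTime
# clone one of them as a starting point for a custom profile
New-EsxImageProfile -CloneProfile "ESXi-6.5.0-4564106-standard" -Name "Custom-ESXi-6.5" -Vendor "virtualvillage"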

 

  5. Add Software Depot.

Click on Add Software Depot icon and add custom depot.

ad8

Next, in the newly created custom software depot, select Image Profiles and click New Image Profile.

ad9

I selected the minimum required VIBs to boot an ESXi host, which are:

  • esx-base 6.5.0-0.0.4073352 – VMware ESXi is a thin hypervisor integrated into server hardware.
  • misc-drivers 6.5.0-0.0.4073352 – This package contains miscellaneous vmklinux drivers.
  • net-vmxnet3 1.1.3.0-3vmw.650.0.0.4073352 – VMware vmxnet3.
  • scsi-mptspi 4.23.01.00-10vmw.650.0.0.4073352 – LSI Logic Fusion MPT SPI driver.
  • shim-vmklinux-9-2-2-0 6.5.0-0.0.4073352 – Package for driver vmklinux_9_2_2_0.
  • shim-vmklinux-9-2-3-0 6.5.0-0.0.4073352 – Package for driver vmklinux_9_2_3_0.
  • vmkplexer-vmkplexer 6.5.0-0.0.4073352 – Package for driver vmkplexer.
  • vsan 6.5.0-0.0.4073352 – VSAN for ESXi.
  • vsanhealth 6.5.0-0.0.4073352 – VSAN Health for ESXi.
  • ehci-ehci-hcd 1.0-3vmw.650.0.0.4073352 – USB 2.0 ehci host driver.
  • xhci-xhci 1.0-3vmw.650.0.0.4073352 – USB 3.0 xhci host driver.
  • usbcore-usb 1.0-3vmw.650.0.0.4073352 – USB core driver.
  • vmkusb 0.1-1vmw.650.0.0.4073352 – USB Native Driver for VMware.

But the list could be different for you.
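A rough PowerCLI equivalent of building such a minimal profile from scratch (a sketch only – the package names come from the list above, while the profile name, vendor and acceptance level are examples):

# create a new profile containing only the minimal set of packages listed above
New-EsxImageProfile -NewProfile -Name "Minimal-ESXi-6.5" -Vendor "virtualvillage" -AcceptanceLevel PartnerSupported `
    -SoftwarePackage esx-base, misc-drivers, net-vmxnet3, scsi-mptspi, shim-vmklinux-9-2-2-0, `
                     shim-vmklinux-9-2-3-0, vmkplexer-vmkplexer, vsan, vsanhealth, `
                     ehci-ehci-hcd, xhci-xhci, usbcore-usb, vmkusb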

 

ad10

  6. Create a Deploy Rule.

ad11

ad12

ad13

ad14

ad15

  7. Activate the Deploy Rule (a PowerCLI equivalent of steps 6 and 7 is sketched below).

ad16
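Steps 6 and 7 can also be done from PowerCLI. A minimal sketch, assuming the image profile created earlier, a target cluster called "Lab-Cluster" and the management subnet used as the match pattern:

# map hosts booting from the 192.168.10.0/24 range to the image profile and cluster
New-DeployRule -Name "Lab-AutoDeploy" -Item "Minimal-ESXi-6.5", "Lab-Cluster" -Pattern "ipv4=192.168.10.50-192.168.10.99"
# activate the rule by adding it to the working rule set
Add-DeployRule -DeployRule "Lab-AutoDeploy"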

  8. That’s it – now you can restart your host; it should boot and install according to your configuration.

VMware Auto Deploy considerations

According to VMware’s definition, vSphere Auto Deploy can provision hundreds of physical hosts with ESXi software. You can specify the image to deploy and the hosts to provision with the image. Optionally, you can specify host profiles to apply to the hosts, a vCenter Server location (datacenter, folder or cluster), and assign a script bundle to each host. In short, it is a tool to automate your ESXi deployments or upgrades.

As far as I know, particularly on the Polish market, it is not a widely used tool. However, it can help integrator companies improve and significantly speed up deployments of new environments. Furthermore, VMware claims that scripted or automated deployment should be used for every deployment of 5 or more hosts. Nonetheless, even if you are working as a system engineer or in another implementation position, I believe you are not installing new deployments every week. If it is every month – lucky you.

Well, is it really worth preparing an Auto Deploy environment to deploy, for instance, 8 new hosts? It depends.

IMHO, for such small deployments, if you are really keen on making it a little bit faster, the better way is to use kickstart scripts. It can be much quicker, especially if you use them at least from time to time and have prepared a good template. (With vSphere 6.5 I’m changing my mind a little bit, due to the changes which make Auto Deploy preparation much quicker.)

However, Auto Deploy is not only about deployment. It can also be a kind of environment and change management. That requires a specific kind of infrastructure, where you use Auto Deploy to boot ESXi hosts instead of booting from local hard drives or SD cards.

Nevertheless, in Poland it is more common to see classic PXE deployment booting from SAN than Auto Deploy. Is the same trend seen around the world?

I am looking forward to hearing about your experience with Auto Deploy.

VirtualVillage’s Lab environment

We have received a few questions about our lab, which is rather extraordinary 🙂 Some of you wanted us to publish a picture of it. Unfortunately, I’ve got only an old one (nowadays the cables are better organised, so it looks far better). I’m sorry about the quality of the picture as well.

vvs-lab

Anyway, at this moment we are in the implementation phase – the management cluster is going to be expanded to a four-host cluster. We are planning to implement NSX in the physical environment to expand our basic knowledge about the product. Unfortunately, these kinds of toys for big boys aren’t cheap, and we are looking for some price cuts or good offers on refurbished components. However, the CPU is already waiting, so it shouldn’t take much time.

When the upgrade of the environment is finished, I’ll post a new picture of the whole lab 🙂

Taking advantage of the occasion of the last day of 2016, I wish you a Happy New Year and a remarkable party! It’s high time to begin preparations 😉

 

VirtualVillage’s home LAB

It is possible to learn, especially about VMware products, using just books, official training, blogs, etc. However, we believe that real knowledge comes only from practice, and not everything can be tested or verified using production environments 🙂

And again, you can test a lot just using Workstation on your notebook (providing it is powerful enough), but these days there are more and more virtual infrastructure components which require a lot of resources. Furthermore, having real servers and a storage array is also a little bit different from deploying a few small virtual machines running on a notebook.

That is why a few years ago we decided to join forces and build a real laboratory where we are able to test even the most sophisticated deployments, not only with VMware products, without being constrained by resources.

The main hardware components of our lab infrastructure are included in the table below.

Hardware Component        | Quantity | Details                     | Purpose
Server Fujitsu TX200 S7   | 2        | 2x CPU E5-4220, 128 GB RAM  | Payload Cluster
Server Fujitsu TX100 S1   | 2        | –                           | Router/Firewall and Backup
Server Fujitsu TX100 S3   | 3        | 1x CPU E3-1240, 32 GB RAM   | Management Cluster
NAS Synology DS2413+      | 1        | 12 x 1 TB SATA 7.2K         | Gold Storage
NAS Synology RS3617+      | 1        | 12 x 600 GB SAS 15K         | Silver Storage
NAS QNAP T410             | 1        | 4 x 1 TB SATA 5.4K          | Bronze Storage (ISO)
Switch HPE 1910           | 1        | 48x 1 Gbps                  | Connectivity

 

Of course we didn’t buy it all at once. The environment evolves with increasing needs. (In the near future we are going to expand the management cluster to four hosts and deploy NSX.)

The logical topology looks like this:

lab

 

Despite the fact that most of our servers use tower cases, we installed them in a self-made 42U rack. Unfortunately, especially during the summer it cannot go without air conditioning (this is one of the most power-consuming parts of the lab...).

 

Later, either I or Daniel will describe the software layer of our lab. I hope it will give inspiration to anyone who is thinking about their own lab.

 

How to monitor a virtual network – a story about NetFlow in a vSphere environment

Before we start talking about NetFlow configuration in VMware vSphere, let’s go back to basics and review the protocol itself. NetFlow was originally developed by Cisco and has become a reasonably standard mechanism to perform network analysis. NetFlow collects network traffic statistics on designated interfaces. It is commonly used in the physical world to help gain visibility into traffic and to understand just who is sending what and to where.

NetFlow comes in a variety of versions, from v1 to v10. VMware uses the IPFIX version of NetFlow, which is version 10. Each NetFlow monitoring environment needs to have an exporter (the device sending the flows), a collector (the main component) and, of course, some network to monitor and analyze 😉

Below you can see a basic environment diagram:

netflow1

We can describe a flow as a sequence of TCP/IP packets (in a single direction) that share a common:

  • Input interface
  • Source IP
  • Destination IP
  • TCP/IP Protocol
  • Source Port (TCP/UDP)
  • Destination Port (TCP/UDP)
  • IP ToS (Type of Service)

Note: vSphere 5.0 uses NetFlow version 5, while vSphere 5.1 and later use IPFIX (version 10).

OK, we know that a distributed virtual switch is needed to configure NetFlow on vSphere, but what about the main component, the NetFlow collector? As usual we have a couple of options, which we can simply divide into commercial software with fancy graphical interfaces and open-source stuff for admins who still like the good old CLI 😉

Below I will show simple implementation steps with examples of both approaches:

I) ManageEngine NetFlow Analyzer v12.2 – more about the software at https://www.manageengine.com/products/netflow/ ; my lab VM setup:

  • Guest OS: Windows 2008 R2
  • 4GB RAM
  • 2vCPU
  • 60 GB HDD
  • vNIC interface connected to ESXi management network

Installation (using the embedded database, just for demo purposes) is really simple and straightforward. Let’s start by launching the installer:

netflow2

 

  1. Accept the license agreements

netflow3

  2. Choose the installation folder on the VM’s HDD

netflow4

  3. Choose the installation component option – for this demo we go with a simple environment with only one collector server; central reporting is not necessary

netflow5

  4. Choose the web server and collector service TCP/IP ports

netflow6

  5. Provide communication details – again, in this demo we have all components on one server and can simply go with localhost

netflow7

 

  6. Optional – configure proxy server details

netflow8

  7. Select the database – in this demo I used the embedded PostgreSQL, but if you choose a Microsoft SQL database remember about the ODBC configuration

netflow9

  8. The installation is quite fast – a couple more minutes and the solution will be ready to start work:

netflow10

 

… The web client, like in VMware, needs a couple of CPU cycles to start 😉

netflow11

… and finally we can see the fancy ManageEngine NetFlow collector:

netflow12

II) Open-source nfdump tool – nfdump is distributed under the BSD license and can be downloaded at http://sourceforge.net/projects/nfdump/ ; my lab VM setup:

  • Guest OS: Debian 8.6
  • 4GB RAM
  • 2vCPU
  • 60 GB HDD
  • vNIC interface connected to ESXi management network

 

  1. We need to start by adding some sources to our Debian distribution:

netflow13

  2. Install the nfdump package from the CLI:

netflow15

netflow14

  3. Run a simple flow capture to verify that the collector is running and creating output flow statistics files (you can see that I use the same port 9995 and a folder on my desktop as the output destination):

netflow16

 

OK, now it is time to go back to vSphere and configure the DVS to send network traffic to the collector (a PowerCLI sketch of the same settings follows the parameter list below):

netflow17

 

  • IP Address: the IP of the NetFlow collector.
  • Port: the port used by the NetFlow collector.
  • Switch IP Address: this one can be confusing – by assigning an IP address here, the NetFlow collector will treat the VDS as one single entity. It does not need to be a valid, routable IP; it is merely used as an identifier.
  • Active flow export timeout in seconds: the amount of time that must pass before the switch fragments the flow and ships it off to the collector.
  • Idle flow export timeout in seconds: similar to the active flow timeout, but for flows that have entered an idle state.
  • Sampling rate: this determines the interval of packets to collect. By default the value is 0, meaning collect all packets. If you set the value to something other than 0, it will collect every Xth packet.
  • Process internal flows only: enabling this ensures that the only flows collected are ones that occur between VMs on the same host.
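The same DVS-level settings can also be pushed from PowerCLI through the vSphere API. A rough sketch, assuming a VDS named "DSwitch-Lab" and the collector values used in this lab; since base PowerCLI has no dedicated NetFlow parameter that I know of, it goes through ExtensionData:

$vds  = Get-VDSwitch -Name "DSwitch-Lab"                        # hypothetical VDS name
$spec = New-Object VMware.Vim.VMwareDVSConfigSpec
$spec.ConfigVersion = $vds.ExtensionData.Config.ConfigVersion   # required so the reconfigure is accepted
$spec.IpfixConfig = New-Object VMware.Vim.VMwareIpfixConfig
$spec.IpfixConfig.CollectorIpAddress = "192.168.1.50"           # NetFlow collector IP (example)
$spec.IpfixConfig.CollectorPort      = 9995                     # collector port used earlier
$spec.IpfixConfig.ActiveFlowTimeout  = 60
$spec.IpfixConfig.IdleFlowTimeout    = 15
$spec.IpfixConfig.SamplingRate       = 0                        # 0 = collect every packet
$spec.IpfixConfig.InternalFlowsOnly  = $false
$spec.SwitchIpAddress = "10.10.10.10"                           # identifier only, need not be routable
$vds.ExtensionData.ReconfigureDvs($spec)

Enabling NetFlow per port group (the next step below) is a separate flag – ipfixEnabled in the port group’s VMwareDVSPortSetting – which the GUI toggles for you.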

And enable it at the designated port group level:

netflow18

Finally, we can create a simple lab scenario and capture some FTP flow statistics between two VMs on different ESXi hosts:

netflow19

The VMs are running in a dedicated VLAN on the same DVS port group; the collectors are running on the management network to communicate with vCenter and the ESXi hosts. I used an FTP connection to generate traffic between the VMs. Below are example outputs from the two collectors (the tests were run separately, as the collectors share the same IP):

 

FTP client on the first VM:

netflow20

FTP server on the second VM:

netflow21

Flow statistics example from nfdump:

netflow22

Flow statistics from ManageEngine:

netflow23

 

VLAN Discovery Failed

Sometimes you can see plenty of strange FCoE-related entries while observing vmkernel.log. That wouldn’t be unusual if you used FCoE. However, if you don’t, you could be a little bit curious or even worried about it – the timeouts or “link down” entries aren’t normal for most vSphere admins.

The problem can be seen when you are using some kinds of converged network cards. In my case it was an HPE C7000 with Virtual Connect modules and QLogic 57840 adapters. It’s a 10 Gb/s NIC which is also capable of FCoE and iSCSI offload. Anyway, FCoE isn’t used in any part of this infrastructure, therefore the following entries were a little bit strange to me:

<3>bnx2fc:vmhba32:0000:87:00.0: bnx2fc_vlan_disc_timeout:218 VLAN 1002 failed. Trying VLAN Discovery.
<3>bnx2fc:vmhba32:0000:87:00.0: bnx2fc_start_disc:3260 Entered bnx2fc_start_disc
<3>bnx2fc:vmhba32:0000:87:00.0: bnx2fc_vlan_disc_timeout:193 VLAN Discovery Failed. Trying default VLAN 1002
<6>host4: fip: link down.
<6>host4: libfc: Link down on port ( 0)
<3>bnx2fc:vmhba32:0000:87:00.0: bnx2fc_vlan_disc_cmpl:266 vmnic2: vlan_disc_cmpl: hba is on vlan_id 1002

Furthermore, I realized that two unfamiliar adapters are listed in the HBA statistics: vmhba32 and vmhba33. What’s more, they are listed with a different driver and with no traffic passed.

The bnx2fc driver name indicates that it belongs to my network card. That means the FCoE driver is loaded even if you do not use FCoE. The driver used for my network card is bnx2x, but bnx2fc, bnx2i, bnx2 and cnic are also available and installed. I was determined to keep my vmkernel log as clean as possible, so I decided to turn it off.

After some investigation and tests I managed to do it and get rid of these rubbish entries.

To turn off FCoE in case you do not use it, you have to perform the following steps (a PowerCLI sketch for doing this on many hosts at once follows the list):

1. Remove the bnx2fc VIB:

        # esxcli software vib remove --vibname=scsi-bnx2fc

2. Move to /etc/rc.local.d and remove a script called 99bnx2fc.sh which is responsible for loading the driver when the host boots.

3. Disable the FCoE on all network cards involved:

     # esxcli fcoe nic disable -n vmnicX

4. Reboot the host and check that the errors aren’t present anymore in the logs.
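If you have more hosts to clean up, step 3 can be scripted from PowerCLI with the esxcli v2 interface. A hedged sketch only – the cluster name and vmnic list are examples, and the argument key should be verified against what CreateArgs() returns in your environment:

# disable FCoE on the chosen uplinks of every host in a cluster (sketch only)
foreach ($vmhost in (Get-Cluster "Prod-Cluster" | Get-VMHost)) {
    $esxcli = Get-EsxCli -VMHost $vmhost -V2
    foreach ($nic in "vmnic2", "vmnic3") {                      # example FCoE-capable uplinks
        $arguments = $esxcli.fcoe.nic.disable.CreateArgs()
        $arguments.nicname = $nic                               # assumed key name; inspect CreateArgs() output to confirm
        $esxcli.fcoe.nic.disable.Invoke($arguments)
    }
}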

 

According to the release notes (which can be found here), the problem should be resolved in driver version 2.713.10.v60.4; however, in my case it wasn’t.

ESXi host connection lost due to CDP/LLDP protocol

You can observe random and intermittent loss of connection to ESXi 6.0 hosts running on Dell servers (both rack and blade). It’s caused by a bug related to the Cisco Discovery Protocol / Link Layer Discovery Protocol. It can also be seen while generating a VMware support log bundle, because during this process these protocols are used to gather information about the network.

 

What are these protocols for? Both of them perform similar roles in the local area network. They are used by network devices to advertise their identity, capabilities and neighbors. The main difference is that CDP is a Cisco proprietary protocol while LLDP is vendor-neutral. There are also other niche protocols like the Nortel Discovery Protocol, the Foundry Discovery Protocol or Link Layer Topology Discovery.

CDP and LLDP are also compatible with VMware virtual switches, and thereby they can gather and display information about the physical switches. CDP is available for both standard and distributed switches, whilst LLDP has been available only for distributed virtual switches since vSphere 5.0.

cdp

Cisco Discovery Protocol information displayed at the vSwitch level.

 
There is currently no resolution for this bug, but thanks to VMware Technical Support the workaround described below is available.

 

Turn off CDP for each vSwitch:

# esxcfg-vswitch -B down vSwitchX

You can also verify the current status of CDP using the following command (a PowerCLI alternative for checking all hosts is shown below):

# esxcfg-vswitch -b vSwitchX
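If you prefer to check (rather than change) the current setting on all hosts from PowerCLI instead of SSH, a quick sketch – a value of "none" corresponds to esxcfg-vswitch -B down:

# report the link discovery protocol operation configured on each standard vSwitch
Get-VMHost | Get-VirtualSwitch -Standard |
    Select-Object VMHost, Name,
        @{N="LinkDiscoveryOperation"; E={$_.ExtensionData.Spec.Bridge.LinkDiscoveryProtocolConfig.Operation}}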

This simple task will resolve the problem of random connection loss on ESXi hosts. However, it will not solve the problem of losing the connection during log bundle generation.

To confirm that the problem exists you can simply run the following command:

# vm-support -w /vmfs/volumes/datastore_name

Even though we turned off CDP, during the log generation process ESXi still uses these discovery protocols to gather information about the network topology.

To fix it you have to download the script called disablelldp2.py and perform the steps below:

  1. Copy the script to a datastore which is shared with all hosts,
  2. Open an SSH session to an ESXi host:
    • Move to the location where you copied the script,
    • Grant the permission: # chmod 555 disablelldp2.py,
  3. Run the script: # ./disablelldp2.py,
  4. After the script is executed, move to /etc/rc.local.d and edit the local.sh file. It should look like this:

#!/bin/sh

# local configuration options

# Note: modify at your own risk!  If you do/use anything in this
# script that is not part of a stable API (relying on files to be in
# specific places, specific tools, specific output, etc) there is a
# possibility you will end up with a broken system after patching or
# upgrading.  Changes are not supported unless under direction of
# VMware support.

ORIGINAL_FILE=/sbin/lldpnetmap
MODIFIED_FILE=/sbin/lldpnetmap.original

if test -e "$MODIFIED_FILE"
then
  echo "$MODIFIED_FILE already exists."
else
  mv "$ORIGINAL_FILE" "$MODIFIED_FILE"
  echo "Omitting LLDP Script." > "$ORIGINAL_FILE"
  chmod 555 "$ORIGINAL_FILE"
fi
exit 0

  5. Restart the ESXi server and run the vm-support command to confirm that the problem is solved.

 

 

 

 

 

 

Virtual SAN Storage performance tests in action

Virtual SAN provides Storage Performance Proactive Tests, which let you check the parameters of your environment in an easy way. You just need a few clicks to run a test.

Well, we can see a lot of tests from nested labs on the Internet; however, there are not so many real examples.

I decided to share some results from such a real environment, which consists of 5 ESXi hosts. Each is equipped with 1x 1.92 TB SSD and 5x 4 TB HDDs.

That is almost 10% of the raw capacity reserved for cache on SSD. VMware claims that the minimum is 10%, so it shouldn’t be so bad.
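Just to show where that figure comes from, a quick back-of-the-envelope calculation per host, using raw capacity:

# 1.92 TB of SSD cache against 5 x 4 TB of raw HDD capacity per host
$cacheTB    = 1.92
$capacityTB = 5 * 4
"{0:P1}" -f ($cacheTB / $capacityTB)    # ≈ 9.6%, just under the 10% rule of thumb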

Since Virtual SAN 6.2, released with vSphere 6.0 Update 2, VMware has made performance testing much easier. That’s possible thanks to the Storage Performance Proactive Tests, which let you check the parameters of your environment in an easy way. They are available from the Web Client; you just need a few clicks to run a test. Perhaps they aren’t the most sophisticated, but they are really easy to use.

To start some tests, simply go to the Monitor > Virtual SAN > Proactive Tests tab at the VSAN cluster level and click the run test button (green triangle).

As you will quickly realise, there are a few kinds of tests:

  • Stress Test
  • Low Stress test
  • Basic Sanity Test, focus on Flash cache layer
  • Performance characterization – 100% Read, optimal RC usage
  • Performance characterization – 100% Write, optimal WB usage
  • Performance characterization – 100% Read, optimal RC usage after warm-up
  • Performance characterization – 70/30 read/write mix, realistic, optimal flash cache usage
  • Performance characterization – 70/30 read/write mix, realistic, High I/O Size, optimal flash cache usage
  • Performance characterization – 100% read, Low RC hit rate / All-Flash Demo
  • Performance characterization – 100% streaming reads
  • Performance characterization – 100% streaming writes

 

Let’s start with a multicast performance test of our network. If the received bandwidth is below 75 MB/s, the rest of the tests will fail.

multicast-performance-test

Caution!!! VMware doesn’t recommend running these tests on production environments, especially during business hours.

Test number 1 – Low stress test, the duration set to 5 minutes.

low-stress

As we can see, the IOPS count is around 40K for my cluster of five hosts. The average throughput is around 30 MB/s per host, which gives ca. 155 MB/s in total.

Test number 2 – Stress test, duration set to 5 minutes.

stress-test

Here we can see that my VSAN reached about 37K IOPS and almost 260 MB/s of throughput.

Test number 3 – Basic Sanity test, focus on Flash cache layer, duration set to 5 minutes

basic-sanity-test-focus-on-flash-cache-layer

Test number 4 – 100% read, optimal RC usage, duration set to 5 minutes.

100-percent-read-optimal-rc-usage

Here we can see how the SSD performs when it serves most of the reads.

Test number 5 –  100% read, optimal RC usage after warm-up

100-percent-read-optimal-rc-usage-after-warmup

Test number 6 – 100 % write, optimal WB usage

100-percent-write-optimal-wb-usage

 

 

If you have any other real results from your VSAN, I’d be glad to see them and compare different configurations.