
VSAN real capacity utilization

A few caveats make calculating and planning VSAN capacity tough, and it gets even harder when you try to map the plan to real consumption at the VSAN datastore level.

  1. VSAN disk objects are thin provisioned by default.
  2. Configuring a full reservation of storage space through the Object Space Reservation rule in a Storage Policy does not mean the disk object's blocks will be inflated on the datastore. It only means the space will be reserved and shown as used in the VSAN Datastore Capacity pane.

This makes it even harder to figure out why the size of "files" on this datastore does not match other capacity-related information.

  3. In order to plan capacity you need to include the overhead of Storage Policies (plural, as I haven't met an environment that uses only one policy for all kinds of workloads). This means planning should start with dividing workloads into groups that may require different levels of protection.
  4. Apart from disk objects there are other objects, especially SWAP, which are not displayed in the GUI and can easily be forgotten. Depending on the size of the environment, they might consume a considerable amount of storage space.
  5. The VM SWAP object does not adhere to the Storage Policy assigned to the VM. What does this mean? Even if you configure your VM's disks with PFTT=0, SWAP will always use PFTT=1, unless you set the advanced host option SwapThickProvisionDisabled to disable that behaviour (a PowerCLI sketch follows below).
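For reference, here is a minimal PowerCLI sketch of flipping that option, assuming vSphere 6.0 U2 or later (where the VSAN.SwapThickProvisionDisabled host option exists) and a placeholder cluster name of "VSAN-Cluster":

    # Enable thin-provisioned vSAN swap objects on every host in the cluster.
    # Connect to vCenter first, e.g. Connect-VIServer -Server vcsa.lab.local
    foreach ($esx in Get-Cluster "VSAN-Cluster" | Get-VMHost) {
        Get-AdvancedSetting -Entity $esx -Name "VSAN.SwapThickProvisionDisabled" |
            Set-AdvancedSetting -Value 1 -Confirm:$false
    }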

I made a test to check how much space an empty VM will consume (empty meaning here without even an operating system installed).

For this purpose a VM called Prod-01 was created with 1 GB of memory, a 2 GB hard disk, and the default storage policy assigned (PFTT=1).

Based on the Edit Settings window, the VM's disk size on the datastore is 4 GB (the maximum size based on disk size and policy). However, used storage space is 8 MB, which means there are 2 replicas of 4 MB each, which is fine as there is no OS installed at all.
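The math behind those numbers is plain mirroring; a quick sketch in PowerShell, using the values from this example (it ignores witness components and the VM home namespace):

    # PFTT=1 means RAID-1 mirroring: 2 full copies of each disk object
    $diskGB = 2                  # provisioned virtual disk size
    $copies = 1 + 1              # PFTT + 1 replicas
    $maxGB  = $diskGB * $copies  # 4 GB maximum datastore usage
    $usedMB = 4 * $copies        # 4 MB written so far, times 2 replicas = 8 MB
    "$maxGB GB provisioned at most, $usedMB MB actually used"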

[Screenshot: VM powered off]

However, when you open the datastore file browser, you will notice that the Virtual Disk object's size is listed as 36,864 KB, which gives us 36 MB. So it is neither the 4 GB nor the 8 MB displayed in Edit Settings.

[Screenshot: VM files on the vSAN datastore]

Meanwhile, the datastore provisioned space is listed as 5.07 GB.
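These figures can also be pulled outside the GUI. A hedged PowerCLI sketch (the VM name comes from this example; vsanDatastore is the default vSAN datastore name):

    # Provisioned vs. used space as vCenter reports it for the VM
    Get-VM "Prod-01" | Select-Object Name, ProvisionedSpaceGB, UsedSpaceGB

    # The datastore-level view: provisioned = capacity - free + uncommitted
    $s = (Get-Datastore "vsanDatastore").ExtensionData.Summary
    [math]::Round(($s.Capacity - $s.FreeSpace + $s.Uncommitted) / 1GB, 2)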

[Screenshot: VM with 2 GB disk, default policy and 1 GB RAM, powered off]

So let’s power on that VM.

Now the disk size remains intact, but other files appear: for instance, SWAP has been created, as well as logs and other temporary files.

[Screenshot: VM powered on]

Looking at the datastore provisioned space now, it shows 5.9 GB, which again is confusing. Even if we forget about the previous findings, powering on a VM triggers SWAP creation, which according to the theory should be protected with PFTT=1 and thick provisioned. But if that were the case, the provisioned storage consumption should increase by 2 GB, not 0.83 GB (where some of that space is consumed by logs and other small files included in the Home namespace object).

[Screenshot: VM with 2 GB disk, default policy and 1 GB RAM, powered on]
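The expectation in the paragraph above is simple arithmetic:

    # A thick, PFTT=1 swap object for 1 GB of RAM should add 2 full copies
    $memGB      = 1
    $expectedGB = $memGB * 2        # 2 GB expected growth
    $observedGB = 5.9 - 5.07        # 0.83 GB actually observed
    "expected $expectedGB GB, observed $([math]::Round($observedGB, 2)) GB"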

Moreover, during these observations I noticed that while the VM is booting, the provisioned space peaks at 7.11 GB for a very short period of time, and after a few seconds this value decreases to 5.07 GB. Even after a few reboots these values stay consistent.
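Catching such a short-lived peak is easier with a polling loop than with GUI refreshes. A minimal sketch, again assuming the default vsanDatastore name:

    # Sample the provisioned-space figure once per second during VM boot
    while ($true) {
        $ds = Get-Datastore "vsanDatastore"
        $ds.ExtensionData.RefreshDatastoreStorageInfo()   # force a fresh summary
        $s = $ds.ExtensionData.Summary
        $provGB = [math]::Round(($s.Capacity - $s.FreeSpace + $s.Uncommitted) / 1GB, 2)
        "{0:HH:mm:ss}  {1} GB" -f (Get-Date), $provGB
        Start-Sleep -Seconds 1
    }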

[Screenshot: VM with 2 GB disk, default policy and 1 GB RAM, during boot]

The question is why this information is not consistent, and what happens during the boot of the VM that causes the peak in provisioned space.

That's the quest for now: to figure it out 🙂

VMware Virtual SAN 6.6 what's new


vSAN 6.6 is the 6th generation of the product, and there are more than 20 new features and enhancements in this release, such as:

  • Native encryption for data-at-rest
  • Compliance certifications
  • Resilient management independent of vCenter
  • Degraded Disk Handling v2.0 (DDHv2)
  • Smart repairs and enhanced rebalancing
  • Intelligent rebuilds using partial repairs
  • Certified file service & data protection solutions
  • Stretched clusters with local failure protection
  • Site affinity for stretched clusters
  • 1-click witness change for Stretched Cluster
  • vSAN Management Pack for vRealize
  • Enhanced vSAN SDK and PowerCLI
  • Simple networking with Unicast
  • vSAN Cloud Analytics with real-time support notification and recommendations
  • vSAN Config Assist with 1-click hardware lifecycle management
  • Extended vSAN Health Services
  • vSAN Easy Install with 1-click fixes
  • Up to 50% greater IOPS for all-flash with optimized checksum and dedupe
  • Support for new next-gen workloads
  • vSAN for Photon in Photon Platform 1.1
  • Day 0 support for latest flash technologies
  • Expanded caching tier choice
  • Docker Volume Driver 1.1

… OK, now let's review the main enhancements:

vSAN 6.6 introduces the industry's first native HCI security solution. vSAN now offers data-at-rest encryption that is completely hardware-agnostic. No more concern about someone walking off with a drive or breaking into a less secure, edge IT location and stealing hardware. Encryption is applied at the cluster level, and any data written to a vSAN storage device, both at the cache layer and the persistent layer, can now be fully encrypted. And vSAN 6.6 supports 2-factor authentication, including SecurID and CAC.
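As a rough sketch of what enabling it looks like in PowerCLI 6.5.1+ (the cluster and KMS names are placeholders, a KMS cluster must already be registered in vCenter, and parameter names may differ slightly between releases):

    # Turn on vSAN data-at-rest encryption for a cluster
    $kms = Get-KmsCluster -Name "KMS-Cluster"
    Set-VsanClusterConfiguration -Configuration (Get-Cluster "VSAN-Cluster") `
        -EncryptionEnabled $true -KmsCluster $kms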


Certified file services and data protection solutions are available from 3rd-party partners in the VMware Ready for vSAN Program to enable customers to extend and complement their vSAN environment with proven, industry-leading solutions. These solutions provide customers with detailed guidance on how to complement vSAN. (EMC NetWorker is available today, with new solutions coming soon.)


vSAN stretched cluster was released in Q3’15 to provide an Active-Active solution. vSAN 6.6 adds a major new capability that will deliver a highly-available stretched cluster that addresses the highest resiliency requirements of data centers. vSAN 6.6 adds support for local failure protection that can provide resiliency against both site failures and local component failures.


PowerCLI Updates: Full featured vSAN PowerCLI cmdlets enable full automation that includes all the latest features. SDK/API updates also enable enterprise-class automation that brings cloud management flexibility to storage by supporting REST APIs.
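A small taste of that cmdlet coverage (the cluster name is a placeholder; the vSAN cmdlets ship with PowerCLI 6.5.1 and later):

    # Inspect a vSAN cluster's configuration and space usage from PowerCLI
    $cluster = Get-Cluster "VSAN-Cluster"
    Get-VsanClusterConfiguration -Cluster $cluster
    Get-VsanSpaceUsage -Cluster $cluster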

VMware vRealize Operations Management Pack for vSAN, released recently, provides customers with native integration for simplified management and monitoring. The vSAN management pack is specifically designed to accelerate time to production with vSAN, optimize application performance for workloads running on vSAN, and provide unified management for the Software-Defined Datacenter (SDDC). It provides additional options for monitoring, managing and troubleshooting vSAN along with the end-to-end infrastructure solutions.


Finally, vSAN 6.6 is well suited for next-generation applications. Performance improvements, especially when combined with new flash technologies for write-intensive applications, enable vSAN to address more emerging applications like Big Data. The vSAN team has also tested and released numerous reference architectures for these types of solutions, including Big Data, Splunk and InterSystems Cache.

RESOURCES:

  • Splunk Reference Architecture: http://www.emc.com/collateral/service-overviews/h15699-splunk-vxrail-sg.pdf
  • Citrix XenDesktop/XenApp Blog: https://blogs.vmware.com/virtualblocks/2017/02/27/citrix-xenapp-xendesktop-7-12-vmware-vsan-6-5-flash/
  • vSAN, VxRail and Pivotal Cloud Foundry RA: https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/products/vsan/vmware-pcf-vxrail-reference-architeture.pdf
  • vSAN and InterSystems Blog: https://community.intersystems.com/post/intersystems-data-platforms-and-performance-%E2%80%93-part-8-hyper-converged-infrastructure-capacity
  • Intel, vSAN and Big Data Hadoop: https://builders.intel.com/docs/storagebuilders/Hyper-Converged_big_data_using_Hadoop_with_All-Flash_VMware_vSAN.pdf

Virtual SAN Storage performance tests in action

Virtual SAN provides Storage Performance Proactive Tests, which let you check the parameters of your environment in an easy way. You just need a few clicks to run a test.

Well, we can see a lot of tests from nested labs on the Internet, but there are not many examples from real environments.

I decided to share some results from such a real environment, which consists of 5 ESXi hosts. Each is equipped with 1x 1.92 TB SSD and 5x 4 TB HDDs.

That gives almost 10% of space reserved for cache on the SSD. VMware states that the minimum is 10%, so it shouldn't be bad.
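A quick check of that ratio:

    # Cache-to-capacity ratio per host
    $ssdTB = 1.92           # cache tier: 1x 1.92 TB SSD
    $hddTB = 5 * 4          # capacity tier: 5x 4 TB HDD = 20 TB
    "{0:P1}" -f ($ssdTB / $hddTB)   # ~9.6%, just under the 10% guideline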

Since Virtual SAN 6.2, released with vSphere 6.0 Update 2, VMware has made performance testing much easier thanks to these Proactive Tests. They are available from the Web Client, and while they may not be the most sophisticated tools, they are really easy to use.

To start a test, simply go to the Monitor > Virtual SAN > Proactive Tests tab at the VSAN cluster level and click the Run Test button (the green triangle).

As you will quickly realise, there are a few kinds of tests:

  • Stress Test
  • Low Stress test
  • Basic Sanity Test, focus on Flash cache layer
  • Performance characterization – 100% Read, optimal RC usage
  • Performance characterization – 100% Write, optimal WB usage
  • Performance characterization – 100% Read, optimal RC usage after warm-up
  • Performance characterization – 70/30 read/write mix, realistic, optimal flash cache usage
  • Performance characterization – 70/30 read/write mix, realistic, High I/O Size, optimal flash cache usage
  • Performance characterization – 100% read, Low RC hit rate / All-Flash Demo
  • Performance characterization – 100% streaming reads
  • Performance characterization – 100% streaming writes

Let's start with the multicast performance test of our network. If the received bandwidth is below 75 MB/s, the remaining tests will fail.

[Screenshot: multicast performance test]

Caution! VMware does not recommend running these tests on production environments, especially during business hours.

Test number 1 – Low stress test, duration set to 5 minutes.

[Screenshot: Low Stress Test results]

As we can see, the IOPS count is around 40K for my cluster of five hosts. The average throughput of around 30 MB/s per host gives ca. 155 MB/s in total.
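That total is just the per-host average multiplied out, assuming the chart reports throughput per host:

    # Aggregate read throughput across the cluster
    $hosts   = 5
    $perHost = 31                  # ~30 MB/s average per host on the chart
    $totalMB = $hosts * $perHost   # ca. 155 MB/s cluster-wide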

Test number 2 – Stress test, duration set to 5 minutes.

[Screenshot: Stress Test results]

Here we can see that my VSAN reached about 37K IOPS and almost 260 MB/s of throughput.

Test number 3 – Basic Sanity test, focus on Flash cache layer, duration set to 5 minutes.

[Screenshot: Basic Sanity Test results]

Test number 4 – 100% read, optimal RC usage, duration set to 5 minutes.

[Screenshot: 100% read, optimal RC usage results]

Here we can see how the SSD performs while serving most of the reads.

Test number 5 – 100% read, optimal RC usage after warm-up.

[Screenshot: 100% read, optimal RC usage after warm-up results]

Test number 6 – 100% write, optimal WB usage.

[Screenshot: 100% write, optimal WB usage results]

If you have any real results from your own VSAN, I'd be glad to see them and compare different configurations.