

Infinio Accelerator – how it works?

In my last post about Infinio Accelerator we introduced the product and its basics. Now it is time to go deeper – how does this server-side cache actually work?

Infinio’s cache inserts server RAM (and optionally, flash devices) transparently into the I/O stream. By dynamically populating server-side media with the hottest data, Infinio’s software reduces storage requirements to a small fraction of the workload size. Infinio is built on VMware’s vSphere APIs for I/O Filtering (VAIO) framework. This enables administrators to use VMware’s Storage Policy Based Management to apply Infinio’s storage acceleration filter to VMs, VMDKs, or groups of VMs transparently.

infinio3

An Infinio cluster seamlessly supports typical cluster-wide VMware operations, such as vMotion, HA, and DRS. Introduction of Infinio doesn’t require any changes to the environment. Datastore configuration, snapshot and replication setup, backup scripts, and integration with VMware features like VAAI and vMotion all remain the same.

infinio4

 

Infinio’s core engine is a content-based memory cache that scales out to accommodate expanding workloads and additional nodes. Deduplication enables the memory-first design, which can be complemented with flash devices for large working sets. In a tiered configuration such as this, the cache is persistent, enabling fast warming after either planned or unplanned downtime.

infinio5

  Note: Infinio’s transparent server-side cache doesn’t require any changes to the environment!

Let’s go through the installation – it is easy and entirely non-disruptive, with no reboots or downtime. It can be completed in just a few steps via an automated installation wizard. The wizard collects the vCenter credentials and location and the desired Management Console information, then automatically deploys the console:

  1. Run the Infinio setup and agree to the license terms

infinio6

2. Add the vCenter FQDN and user credentials (in this example we use the SSO administrator)

infinio7

3. Select the destination ESXi host and other parameters (datastore and network) for deploying the OVF Management Console VM

infinio8

  4. Set the Management Console hostname and network information (IP address, DNS)

infinio9

  5. Create an admin user for the Management Console

infinio10

  6. Set up auto-support (in our trial scenario we skip this step)

infinio11

  7. Preview the configuration and deploy the Management Console.

infinio12

infinio13

  8. Log in to the Management Console

infinio14

infinio15

In the next article we will provide some real performance results from our lab tests – so stay tuned 🙂

 


Mysterious Infinio – Product overview

Shared storage performance and its characteristics (IOPS, latency) are crucial for overall vSphere platform performance and user satisfaction. With the advent of SSD and memory cache solutions we have many options to choose from for storage acceleration (local SSD, array-side SSD, server-side SSD). Let’s discuss server-side caching further – the act of caching data on the server.

Data can be cached anywhere and at any point on the server where it makes sense. It is common to cache frequently used data from the database to prevent hitting the database every time the data is required. We cache results such as competition scores, since computing them is expensive in terms of both processor and database usage. It is also common to cache pages or page fragments so that they don’t need to be generated for every visitor.

In this article I would like to introduce one of the commercial server-side caching solutions – Infinio Accelerator 3 from Infinio.

infinio1

Infinio Accelerator increases IOPS and decreases latency by caching a copy of the hottest data on server-side resources such as RAM and flash devices. Native inline deduplication ensures that all local storage resources are used as efficiently as possible, reducing the cost of performance. Infinio is built on VMware’s VAIO (vSphere APIs for I/O Filtering) framework, which is the fastest and most secure way to intercept I/O coming from a virtual machine. Its benefits can be realized on any storage that VMware supports; in addition, integration with VMware features like DRS, SDRS, VAAI and vMotion all continues to function the same way once Infinio is installed. Finally, future storage innovations that VMware releases will be available immediately through the I/O Filter integration.

infinio2

The I/O Filter is the most direct path to storage for capabilities like caching and replication that need to intercept the data path. (Image courtesy of VMware)

Licensing

Infinio is licensed per ESXi host in an Infinio cluster. Software may be purchased for perpetual or term use:

  • A perpetual license allows the use of the licensed software indefinitely with an annual cost for support and maintenance.
  • A term license allows the use of software for one year, including support and maintenance.

For more information on licensing and pricing, contact sales@infinio.com.

System requirements

Infinio Accelerator requires at minimum VMware vSphere ESXi 6.0 U2 (Standard, Enterprise, or Enterprise Plus) and VMware vCenter Server 6.0 U2.

Note! vSphere 6.5 is supported and the solution is listed on the VMware HCL!

Infinio works with any VMware supported datastore, including a variety of SAN, NAS, and DAS hardware supporting VMFS, Virtual Volumes (VVOLs), and Virtual SAN (vSAN).

  • Infinio’s cluster size mirrors that of VMware vSphere’s, scaling out to 64 nodes.
  • Infinio’s Management Console VM requires 1 vCPU, 8GB RAM, and 80GB of HDD space.
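
If you want to quickly confirm that a host meets the minimum version requirement, you can check the version and build directly from the ESXi shell (a minimal sketch – either command will do, and the exact output depends on your build):

# vmware -vl

# esxcli system version get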

I’m very happy to announce that we received a very friendly response from Infinio support and got the option to download a trial version of the software – the next articles will describe the product in more depth and show “real life” examples of its use in our lab environment.

Please, stay tuned 🙂

 


VM Consolidation – Survival Guide

survival-guide

A survival guide for any VM snapshot consolidation problems, all in one place:

Note! Make sure any backup software is turned off or that all jobs are stopped. A reboot of the backup server is required to clear any potential residual locks.

  1. Restart the vCenter Server service – https://kb.vmware.com/kb/1003895
  2. Restart the management agents on the ESXi hosts where the problematic VMs are running:

# services.sh restart – https://kb.vmware.com/kb/1003490, or manually verify to determine “who” is holding the lock
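
If you prefer not to restart everything via services.sh, the hostd and vpxa agents can also be restarted individually from the ESXi shell (a quick sketch using the standard init scripts – as always, be careful on production hosts):

# /etc/init.d/hostd restart

# /etc/init.d/vpxa restart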

3. Use the vmkfstools -D command against the VM snapshot files:

/vmfs/volumes/<datastore># vmkfstools -D <file name>
You will see output similar to:

[root@test-esx1 testvm]# vmkfstools -D test-000008-delta.vmdk
Lock [type 10c00001 offset 45842432 v 33232, hb offset 4116480
gen 2397, mode 2, owner 00000000-00000000-0000-000000000000 mtime 5436998] <-- MAC address of lock owner
RO Owner[0] HB offset 3293184 xxxxxxxx-xxxxxxxx-xxx-xxxxxxxxxxxx <-- MAC address of read-only lock owner
Addr <4, 80, 160>, gen 33179, links 1, type reg, flags 0, uid 0, gid 0, mode 100600
len 738242560, nb 353 tbz 0, cow 0, zla 3, bs 2097152

//more information in kb: https://kb.vmware.com/kb/10051
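
The MAC address embedded in the owner field typically corresponds to a VMkernel (management) interface of the ESXi host holding the lock. To map it to a particular host, you can list the VMkernel interfaces and their MAC addresses on each host and compare (a simple sketch – either command works):

# esxcfg-vmknic -l

# esxcli network ip interface list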

If an ESXi host is holding the lock, you can restart the management agents as per the advice above, migrate all VMs and reboot the host, or determine which process is holding the lock – just run one of these commands:

# lsof file

# lsof | grep -i file

For example:

# lsof | grep test02-flat.vmdk

You should see an output similar to:

COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME

71fd60b6- 3661 root 4r REG 0,9 10737418240 23533 Test02-flat.vmdk

Check the process with the PID returned above; in our example:

# ps -ef | grep 3661

To kill the process, run the command with the PID identified above; in our example:

# kill 3661

All in all, once the lock problems are solved, we can continue with the VM consolidation process:

  1. Connect directly to the ESXi host where the problematic VM resides
  2. Power off the problematic VM
  3. Disable CBT for the virtual machine (very often the ctk files are corrupt, for example when a backup job is run against a VM with an active snapshot – an unsupported configuration). For more information, see: http://kb.vmware.com/kb/1031873 (a sketch of the relevant .vmx parameters is shown right after the last step of this list)
  4. Remove any files ending with the *-ctk.vmdk file extension in the virtual machine directory
  5. Enable CBT for the virtual machine again, see: http://kb.vmware.com/kb/1031873
  6. Remove and re-add the VM to the inventory (just to verify the VM configuration integrity; in case of any VMX problems you will get an error message and will need to correct the VM config), more information in KB: https://kb.vmware.com/kb/1003743
  7. Create a snapshot:

Right-click the virtual machine.

Click Snapshot.

Click Take Snapshot.

  8. Perform a Delete All operation:

Right-click the virtual machine.

Click Snapshot.

Click Snapshot Manager.

Click Delete All.

TIP: To verify that the snapshots are being consolidated, run the commands:

# watch "ls -lhut --time-style=full-iso *-delta.vmdk"

# watch "ls -lh --full-time *-delta.vmdk *-flat.vmdk"

//more info in kb: https://kb.vmware.com/kb/1007566

  9. Power on the VM and verify the fix
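
As mentioned in step 3, disabling CBT boils down to a couple of .vmx parameters (a sketch based on KB 1031873 – the per-disk entries such as scsi0:0 depend on your VM's actual disk layout, and the VM must be powered off while you change them):

ctkEnabled = "false"
scsi0:0.ctkEnabled = "false"

Set them back to "true" (or re-enable CBT through the vSphere client as per the KB) once the stale *-ctk.vmdk files have been removed.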

 

However, if the above does not solve the problem, we have two alternative options:

  a) Clone or Storage vMotion the problematic VMs to a different datastore
  b) Use VMware Converter and perform a V2V operation

That’s it – my survival guide for VM snapshot consolidation problems. I am wondering if you have any additions or a different approach?


Adding a sound card to ESXi hosted VM

A sound card in a vSphere virtual machine is an unsupported configuration. This feature is dedicated to virtual machines created in VMware Workstation. However, you can still add an HD Audio device to a vSphere virtual machine by manually editing the .vmx file. I have tested it in our lab environment and it works just fine.

Below is the procedure for how to do this:

1. Verify the datastore where the VM without a sound card resides

soundcard1

  2. Log in as root to the ESXi host where the VM resides, using SSH.
  3. Navigate to /vmfs/volumes/<VM LUN>/<VM folder>
    In my example it was:
    ~# cd /vmfs/volumes/Local_03esx-mgmt_b/V11_GSS_DO
  4. Shut down the problematic VM.
  5. Edit the .vmx file using the vi editor.

IMPORTANT:
Make a backup copy of the .vmx file. If your edits break the virtual machine, you can roll back to the original version of the file.
For more information about editing files on an ESXi host, refer to the KB article: https://kb.vmware.com/kb/1020302
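
Making the backup copy is a one-liner from the VM folder (assuming the .vmx file is named after the VM folder, as in our example – adjust to your own VM):

~# cp V11_GSS_DO.vmx V11_GSS_DO.vmx.bak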

  6. Once you have opened the .vmx file for editing, navigate to the bottom of the file and add the following lines to the .vmx configuration file:
    sound.present = "true"
    sound.allowGuestConnectionControl = "false"
    sound.virtualDev = "hdaudio"
    sound.fileName = "-1"
    sound.autodetect = "true"
  7. Save the file and power on the virtual machine.
  8. Once it has booted and you have enabled the Windows Audio service, sound will work fine.

If you go to “Edit Settings” for the VM, you can see information that the device is unsupported. Please be aware that after adding a sound card to your virtual machine you may experience unexpected behavior (tip: in our lab environment this configuration works without issues).


vCenter Server content library

Content Library was introduced in vSphere 6.0 as a way to centrally store and manage VM templates, ISOs, and even scripts. Content Library operates with a Publisher/Subscriber model where multiple vCenter Servers can subscribe to another vCenter Server’s published Content Library so that the data stored within that Content Library is replicated across for local usage. For example, if there are two data centers each with their own vCenter Server a customer could create a Content Library to store their VM templates, ISOs, and scripts in and then the vCenter Server in the other data center could subscribe and have all of those items replicated to a local datastore or even NAS storage. Any changes made to the files in data center 1 would be replicated down to data center 2.

vcenter13

With vSphere 6.5 VMware has added the ability to mount an ISO directly from the Content Library instead of having to copy it out to a local datastore prior to mounting. Customers also now have the ability to run VM customizations against a VM during deployment from a VM template within a Content Library. Previously, customers needed to pull the template out of the Content Library if customization was required. Customers can now easily import an updated version of a template, as opposed to replacing templates, which could disrupt automated processes.

There are now additional optimizations related to the synchronization between vCenter Servers reducing the bandwidth and time required for synchronization to complete.

Customers can also take comfort in knowing that their Content Libraries are also included in the new file-based backup and recovery functionality as well as handled by vCenter HA.

SUMMARY:

  • Improved operational features
    • Mount an ISO file from a Content Library
    • OS customization during VM deployment from a library
    • Update an existing template with a new version
  • Optimized HTTP sync between vCenter Servers
  • Part of VC backup/restore and VC HA

vCenter Server HA – changes in vSphere 6.5

In vSphere 6.5 vCenter has a new native high availability solution that is available exclusively for the vCenter Server Appliance. This solution consists of Active, Passive, and Witness nodes which are cloned from the existing vCenter Server. The vCenter HA cluster can be enabled, disabled, or destroyed at any time. There is also a maintenance mode so planned maintenance does not cause an unwanted failover.

vcenter10

vCenter HA supports both an external PSC as well as an embedded PSC. Note, however, that in vSphere 6.5 at GA an embedded PSC cannot be used to replicate to any other PSC. Thus, if using an embedded PSC the vCenter Server cannot participate in Enhanced Linked Mode.

vCenter HA has some basic network requirements. A vCenter HA network must be established, and it must be separate from the subnet currently used by the primary network interface (eth0) of the vCenter Server Appliance. If using the Basic workflow, a new interface, eth1, will be added to the appliance automatically prior to the cloning process. eth1 will be attached to the vCenter HA private network. The port group connecting to this network may reside on either a VMware vSphere Standard Switch (VSS) or a VMware vSphere Distributed Switch (VDS). There are no specific TCP/IP requirements for the vCenter HA network other than latency within the prescribed 10 ms RTT. Layer 2 connectivity is not required.

Failover can occur when an entire node is lost (a host failure, for example) or when certain key services fail. For the initial release of vCenter HA an RTO of about 5 minutes is expected, but this may vary slightly depending on load, size, and capabilities of the underlying hardware. During a failover event a temporary web page will be displayed indicating that a failover is in progress. That page will then refresh to the vSphere Web Client login page once vCenter Server is back online. In cases where a user is not active during the failover, they may not be prompted to log in again. When compared to other high availability solutions, vCenter HA has several advantages:

vcenter11

PSC High Availability

After making vCenter Server highly available we also need to consider the availability options for the Platform Services Controller.

As you may remember, in vSphere 6.0 a supported load balancer was required to provide HA for the PSC. If automated failover is not required, there is the option to manually repoint a vCenter Server between PSCs within an SSO site.

vcenter12

In vSphere 6.5 VMware is providing a PSC HA solution that doesn’t require a load balancer, but there is some integration work to be completed with other products in the SDDC portfolio before native PSC HA can be enabled.

I plan to test the new vCenter and PSC HA features in our lab environment and will provide a separate article with my configuration details. For the moment, let me point you to these VMware KBs as additional references:

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1024051

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2147672

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2147018


VCSA monitoring and recovery options

The new vCenter Server Appliance Management Interface is still accessed via port 5480 for any vCenter Server or Platform Services Controller appliance. This refreshed UI now includes additional resource utilization graphs to provide a simple-to-consume visualization of CPU, Memory, Disk, and Database metrics:

vcenter7

The screenshot above shows the new vCenter database monitoring screen, which provides some insight into PostgreSQL database disk usage to help prevent crashes due to running out of space. There are also new default warnings presented in the vSphere Web Client to alert administrators when the database is getting close to running out of space, and a graceful shutdown mechanism at 95% full to prevent database corruption. Customers can also configure syslog in this improved VAMI.

SUMMARY

  • New vCenter Server Appliance Management Interface
  • Built-in monitoring: Network, CPU, and Memory
  • Visibility to vPostgres DB
  • Remote syslog configuration

New in vCenter Server 6.5 is native backup and restore for the vCenter Server Appliance. This new out-of-the-box functionality enables customers to back up vCenter Server and Platform Services Controller appliances directly from the VAMI or API. The backup consists of a set of files that are streamed to a storage device of the customer’s choosing using the SCP, HTTP(S), or FTP(S) protocols. This backup fully supports vCenter Server Appliances with embedded and external Platform Services Controllers.

vcenter8

vcenter9

The Restore workflow is launched from the same ISO from which the vCenter Server Appliance or PSC was originally deployed or upgraded. You can see from the lower screenshot that we have a new option to restore right from the deployment UI. The restore process deploys a new appliance and then uses the desired network protocol to ingest the backup files. It is important to note that the vCenter Server UUID and all configuration settings will be retained.

There is also an option to encrypt the backup files using symmetric key encryption. A simple checkbox and an encryption password are used to create the backup set, and that same password must then be used to decrypt the backup set during a restore procedure. If the password is lost there is no way to recover those backup files, as the password is not stored and the encryption is not reversible.

SUMMARY:

  • Restore vCenter Server instance to a brand new appliance
  • Supports backup/restore of VCSA & PSC appliances
  • Includes embedded and external deployments
  • Supported Protocols include:
    • HTTP/S
    • SCP
    • FTP/S
  • Option for Encryption
  • Restore directly from VCSA ISO

VCSA deployment and migration options

The vCenter Server Appliance deployment experience has been enhanced in the vSphere 6.5 release. Installation workflow is now performed in 2 stages. The first stage deploys an appliance with the basic configuration parameters: IP, hostname, and sizing information including storage, memory, and CPU resources.
vcenter4

Stage 2 then completes the configuration by setting up SSO and role-specific settings. Once Stage 1 is complete we can snapshot the VM and roll back if any mistakes are made in Stage 2. This prevents having to start completely over if anything goes wrong during the deployment process.

NOTE!!! There are versions of the deployment application available for Windows, Linux, and macOS.

 vcenter5

A new feature in vSphere 6.5 is the ability to migrate a Windows vCenter Server 5.5 or 6.0 to a vCenter Server Appliance 6.5. The migration process starts by running the Migration Assistant, which serves two purposes. First, it runs pre-checks on the source Windows vCenter Server 5.5 or 6.0 to determine whether it meets the criteria for migration. Second, it is the data transport mechanism that migrates data from the source Windows vCenter Server 5.5 or 6.0 to the target vCenter Server Appliance 6.5.

The Migration tool will automatically deploy a new vCenter Server Appliance 6.5 and migrate configuration, inventory, and alarm data by default from a Windows vCenter Server 5.5 or 6.0. If you want to keep your historical and performance data (stats, events, tasks) along with configuration, inventory, and alarm data there is the option to also migrate that information. The vSphere 6.5 release of the Migration Tool provides granularity for historical and performance data selection.

vcenter6

Both embedded and external topologies are supported, but the Migration Tool will not allow changing your topology during the migration process. Changing topologies will need to be done before the migration process if consolidation of your vSphere SSO domain is required.

SUMMARY:

  • Migration support for Windows vCenter Server 5.5 or 6.0 to 6.5
  • Migrations for both embedded and external topologies
  • VUM included
  • Embedded and external Database support: MSSQL, MSSQL Express, Oracle
  • Option to select historical and performance data

vCenter Server Appliance 6.5 – new default deployment choice

vcenter1

The vCenter Server Appliance 6.5 is the first VMware appliance to run on Photon OS, a Linux OS optimized for virtualization which will in the near future become the standard for all VMware virtual appliances. Photon OS provides many benefits to the performance of the vCenter Server Appliance, including roughly a 3x performance gain over its Windows counterpart and significantly reduced boot and restart times. This also means no more dependency on a 3rd party for OS patching, and it should greatly reduce the amount of time it takes VMware to deliver security patches and updates to the vCenter Server Appliance.

VCSA – main features:

  • Native High Availability
  • VMware Update Manager
  • Improved Appliance Management
  • Native Backup / Restore

In vSphere 6.0 we saw performance and scalability parity for the vCenter Server Appliance when compared to its Windows-based counterpart. With vSphere 6.5 we now see feature parity and even new features that are exclusive to the vCenter Server Appliance. Let’s take a quick look at each of these new features before addressing them in more detail later:

vcenter2

vcenter3

Let’s start with vCenter High Availability which is a native HA solution built right into the appliance. Using an Active/Passive/Witness architecture, vCenter is no longer a single point of failure and can provide a 5-minute RTO. This HA capability is available out of the box and has no dependency on shared storage, RDMs or external databases.

Next, we have the integration of VMware Update Manager into the vCenter Server Appliance. VMware Update Manager is now included by default in the vCenter Server Appliance, which makes deployment and configuration a snap.

Another exclusive feature of the vCenter Server Appliance 6.5 is the improved appliance management capabilities. The vCenter Server Appliance Management Interface continues its evolution and exposes additional health and configuration information. This simple user interface now shows network and database statistics, disk space, and health in addition to CPU and memory statistics, which reduces the reliance on a command-line interface for simple monitoring and operational tasks.

Finally, VMware has added a native backup and restore capability to the vCenter Server Appliance in 6.5 to allow for simple out-of-the-box backup options in addition to the traditional supported methods including VMware Data Protection and VMware vSphere Storage APIs – Data Protection (formerly known as VMware vStorage APIs for Data Protection or VADP). This new backup and restore mechanism allows customers to use a simple user interface and removes the reliance on 3rd party backup solutions to protect their vCenter Servers and Platform Services Controllers.

Note!!! All these new features are only available in the vCenter Server Appliance.


vSphere 6.5 – What’s new in networking  

 

In this article I will try to review all the new networking features.

1. vmknic gateway

  • Each VMKERNEL port can have its own Gateway.
  • This will make it easy for vSphere features to function seamlessly.
  • This eliminates the need for adding and maintaining static routes.

network1

Before vSphere 6.5 only one default gateway was allowed for all VMkernel ports on an ESXi host. vSphere features such as DRS, iSCSI, vMotion, etc. that use VMkernel ports are constrained by this limitation: many VMkernel ports were not routable without static routes if they belonged to a subnet other than the one with the default gateway. These static routes had to be created manually and were hard to maintain.

vSphere 6.5 provides the capability to have a separate default gateway for every VMkernel port. This simplifies management of VMkernel ports and eliminates the need for static routes.

Prior to vSphere 6.5, VMware services like DRS, iSCSI, vMotion and provisioning leveraged a single gateway. This has been an impediment, as one needed to add static routes on all hosts to get around the problem. Managing these routes can be a cumbersome process and does not scale.

vSphere 6.5 provides the capability for different services to use different default gateways. This makes it easy for end users to consume these features without the need to add static routes. vSphere 6.5 completely eliminates the need for static routes for all VMkernel-based services, making the configuration simpler and more scalable.
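
To make the difference concrete, here is a quick sketch with made-up addresses. Pre-6.5, you would maintain a static route on every host:

# esxcli network ip route ipv4 add --network 192.168.50.0/24 --gateway 192.168.40.1

In vSphere 6.5 a dedicated default gateway can be set on the VMkernel port itself (the -g/--gateway option is, to my knowledge, only available from ESXi 6.5 onwards – verify it against your build):

# esxcli network ip interface ipv4 set -i vmk2 -t static -I 192.168.40.11 -N 255.255.255.0 -g 192.168.40.1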

 

2. SR-IOV provisioning:

Prior to vSphere 6.5, the VM provisioning workflow for SR-IOV devices required the user to manually assign the SR-IOV NIC. This resulted in VM provisioning operations being inflexible and not amenable to automation at scale. In vSphere 6.5, SR-IOV devices can be added to virtual machines like any other device, making them easier to manage and automate.

 

3. Support for ERSPAN:

ERSPAN mirrors traffic on one or more “source” ports and delivers the mirrored traffic to one or more “destination” ports on another switch. vSphere 6.5 includes support for the ERSPAN protocol.

network2

 

4. Improvements in DATAPATH:

vSphere 6.5 has data path improvements to handle heavy load. In order to process large numbers of packets, the CPU needs to be performing optimally; in 6.5, ESXi hosts leverage CPU resources in order to maximize the packet rate of VMs.

network3

Where are the improvements being made?

  1. VMXNET 3 optimization
    1. Using copy TX for small message sizes (<=256B)
    2. Optimized usage of pinned memory
  2. Physical NIC improvements
    1. Native driver support for Intel cards (removes overhead of translating from VMkernel to VMKLinux data structures)
  3. CPU Scheduling Improvements
    1. Up to 8 separate threads can be created per vNIC
      • To enable this at the VM level, add the following line to the .vmx file:

ethernetX.ctxPerDev = "3"
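
For example, assuming the VM's first network adapter is ethernet0, the entry would look like the line below (our assumption: the VM needs to be power-cycled for the change to take effect):

ethernet0.ctxPerDev = "3"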

 

Summary:

  • Optimizing code to improve efficiency
  • Allowing the ability to increase thread count for networking
  • Introducing support for more native drivers (Intel)
  • VMXNET3 enhancements