Month: November 2016

Another free eBook from VMware!


After Network Virtualization for Dummies, VMware has published another book in the series – Cloud Management for Dummies.

It can be downloaded from this VMware site. The only thing you need to do is sign in and download it.

Download this book to learn how to meet these cloud-driven challenges, including:

  • Lifecycle management – from deployment to maintenance
  • Hybrid landscape – manage local and remote services
  • Quality of service – improve uptime and performance
  • Cost containment – capture and communicate usage


vSphere 6.5 – vCenter Configuration Backup


In vSphere 6.5 a new feature to back up the vCenter Server Appliance is available. You can back it up using the built-in file-based solution, which backs up the core configuration and inventory into a few files. You can also decide which historical data you want to include in such a backup.

The backup is available from the VAMI interface (at port 5480).

backup1
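If you just want to confirm that the VAMI is reachable before starting, a quick PowerShell check works (the appliance hostname below is an assumption from my lab):

    # verify the vCSA management interface answers on the VAMI port
    Test-NetConnection -ComputerName 'vcsa.lab.local' -Port 5480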

The available locations where you can store the configuration backup are:

  • FTP and FTPS
  • SCP
  • HTTP and HTTPS

backup2

As I mentioned before, you can choose whether to back up the historical data as well. The common part (inventory and configuration) is always selected by default.

backup3

The backed-up files look like this:

backup4


In case you are forced to use your backup, you have to use the vCSA ISO file downloadable from the VMware site and then select the Restore option. The process is quite similar to a normal deployment (2 stages in the process).

How to monitor a virtual network – a story about NetFlow in a vSphere environment


Before we start talking about NetFlow configuration in VMware vSphere, let's get back to basics and review the protocol itself. NetFlow was originally developed by Cisco and has become a fairly standard mechanism for network analysis. NetFlow collects network traffic statistics on designated interfaces. It is commonly used in the physical world to gain visibility into traffic and to understand who is sending what, and to where.

NetFlow comes in a variety of versions, from v1 to v10. VMware uses the IPFIX version of NetFlow, which is version 10. Each NetFlow monitoring environment needs an exporter (the device emitting flows), a collector (the main component) and, of course, some network to monitor and analyze 😉

Below you can see a basic environment diagram:

netflow1

We can describe a flow as a sequence of TCP/IP packets (in one direction) that share a common set of fields (a toy grouping example follows the list):

  • Input interface
  • Source IP address
  • Destination IP address
  • IP protocol
  • Source port (TCP/UDP)
  • Destination port (TCP/UDP)
  • IP Type of Service (ToS)
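To make the definition concrete, here is a toy PowerShell sketch (not vSphere code, just an illustration) that groups packet records into flows by exactly this 7-tuple:

    # sample packet records; the first two share all seven key fields, so they form one flow
    $packets = @(
        [pscustomobject]@{ InIf='vmnic0'; SrcIP='10.0.0.5'; DstIP='10.0.0.9'; Proto='TCP'; SrcPort=51522; DstPort=21; Tos=0; Bytes=1500 }
        [pscustomobject]@{ InIf='vmnic0'; SrcIP='10.0.0.5'; DstIP='10.0.0.9'; Proto='TCP'; SrcPort=51522; DstPort=21; Tos=0; Bytes=900 }
        [pscustomobject]@{ InIf='vmnic0'; SrcIP='10.0.0.9'; DstIP='10.0.0.5'; Proto='TCP'; SrcPort=21; DstPort=51522; Tos=0; Bytes=60 }
    )
    # aggregate packets into flow records keyed on the 7-tuple
    $packets | Group-Object InIf, SrcIP, DstIP, Proto, SrcPort, DstPort, Tos | ForEach-Object {
        [pscustomobject]@{ FlowKey = $_.Name; Packets = $_.Count; Bytes = ($_.Group | Measure-Object -Property Bytes -Sum).Sum }
    }

Note that the return traffic (the third packet) forms a separate flow, since flows are unidirectional.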

Note: NetFlow in vSphere is configured on the vSphere Distributed Switch. vSphere 5.0 uses NetFlow version 5, while vSphere 5.1 and later use IPFIX (version 10).

OK, we know that a distributed virtual switch is needed to configure NetFlow on vSphere, but what about the main component, the NetFlow collector? As usual, we have a couple of options, which we can simply divide into commercial software with fancy graphical interfaces and open-source stuff for admins who still like the good old CLI 😉

Below I will show simple implementation steps with examples of both approaches:

I) ManageEngine NetFlow Analyzer v12.2 – more about the software at https://www.manageengine.com/products/netflow/. My lab VM setup:

  • Guest OS: Windows 2008 R2
  • 4GB RAM
  • 2vCPU
  • 60 GB HDD
  • vNIC interface connected to ESXi management network

Installation (using the embedded database, just for demo purposes) is really simple and straightforward. Let's begin by starting the installer:

netflow2


  1. Accept the license agreement.

netflow3

  2. Choose the installation folder on the VM's hard disk.

netflow4

  3. Choose the installation components – for this demo we go with a simple environment with only one collector server; central reporting is not necessary.

netflow5

  4. Choose the web server and collector service TCP/IP ports.

netflow6

  5. Provide communication details – again, in this demo we have all components on one server, so we can simply go with localhost.

netflow7


  6. Optional – configure proxy server details.

netflow8

  7. Select the database – in this demo I used the embedded PostgreSQL, but if you choose a Microsoft SQL database, remember about the ODBC configuration.

netflow9

  8. The installation is quite fast – a couple more minutes and the solution will be ready to start work:

netflow10


… the web client, like VMware's, needs a couple of CPU cycles to start 😉

netflow11

… and finally we can see the fancy ManageEngine NetFlow collector:

netflow12

II) Open-source nfdump tool – nfdump is distributed under the BSD license and can be downloaded at http://sourceforge.net/projects/nfdump/. My lab VM setup:

  • Guest OS: Debian 8.6
  • 4GB RAM
  • 2vCPU
  • 60 GB HDD
  • vNIC interface connected to ESXi management network


  1. We need to start by adding some sources to our Debian distribution:

netflow13

  2. Install the nfdump package from the CLI:

netflow15

netflow14
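On Debian this boils down to two commands (the package is called nfdump in the standard repositories):

    apt-get update
    apt-get install nfdump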

  3. Run a simple flow capture to verify that the collector is running and creating output flow statistics files (you can see that I use the same port, 9995, and a folder on my desktop as the output destination):

netflow16
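For reference, the commands look roughly like this (the port and output directory are my lab choices; flags per the nfdump documentation):

    # start the capture daemon: -D daemonize, -p listen port, -l output directory
    nfcapd -D -p 9995 -l /root/Desktop/flows
    # later, read all collected files back and print the flows
    nfdump -R /root/Desktop/flows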


OK, now it is time to get back to vSphere and configure the DVS to send network traffic statistics to the collector (a scripted PowerCLI equivalent follows the settings list below):

netflow17


  • IP Address: the IP of the NetFlow collector.
  • Port: the port used by the NetFlow collector.
  • Switch IP Address: this one can be confusing – by assigning an IP address here, the NetFlow collector will treat the VDS as one single entity. It does not need to be a valid, routable IP; it is merely used as an identifier.
  • Active flow export timeout in seconds: the amount of time that must pass before the switch fragments the flow and ships it off to the collector.
  • Idle flow export timeout in seconds: similar to the active flow timeout, but for flows that have entered an idle state.
  • Sampling rate: determines which packets to collect. By default the value is 0, meaning all packets are collected. If you set the value to something other than 0, every Xth packet is collected.
  • Process internal flows only: when enabled, the only flows collected are those that occur between VMs on the same host.
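If you prefer to script these settings, here is a rough PowerCLI sketch using the vSphere API objects (the switch name, collector address and timeout values are assumptions from my lab):

    # fetch the distributed switch and prepare a reconfiguration spec
    $vds = Get-VDSwitch -Name 'DSwitch-Lab'
    $spec = New-Object VMware.Vim.VMwareDVSConfigSpec
    $spec.ConfigVersion = $vds.ExtensionData.Config.ConfigVersion

    # IPFIX (NetFlow v10) settings mirroring the options described above
    $ipfix = New-Object VMware.Vim.VMwareIpfixConfig
    $ipfix.CollectorIpAddress = '192.168.1.50'   # NetFlow collector IP
    $ipfix.CollectorPort      = 9995             # NetFlow collector port
    $ipfix.ActiveFlowTimeout  = 60
    $ipfix.IdleFlowTimeout    = 15
    $ipfix.SamplingRate       = 0                # 0 = collect every packet
    $ipfix.InternalFlowsOnly  = $false
    $spec.IpfixConfig = $ipfix
    $spec.SwitchIpAddress = '10.10.10.10'        # identifier only, need not be routable

    # push the new configuration to the VDS
    $vds.ExtensionData.ReconfigureDvs($spec)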

And enable it at the designated port group level:

netflow18
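And the scripted equivalent of the port group step, again as a hedged sketch (the port group name is made up):

    # enable IPFIX monitoring on a specific distributed port group
    $pg = Get-VDPortgroup -Name 'PG-VM-Traffic'
    $pgSpec = New-Object VMware.Vim.DVPortgroupConfigSpec
    $pgSpec.ConfigVersion = $pg.ExtensionData.Config.ConfigVersion
    $setting = New-Object VMware.Vim.VMwareDVSPortSetting
    $setting.IpfixEnabled = New-Object VMware.Vim.BoolPolicy
    $setting.IpfixEnabled.Inherited = $false
    $setting.IpfixEnabled.Value = $true
    $pgSpec.DefaultPortConfig = $setting
    $pg.ExtensionData.ReconfigureDVPortgroup($pgSpec)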

Finally, we can create a simple lab scenario and capture some FTP flow statistics between two VMs on different ESXi hosts:

netflow19

The VMs are running in a dedicated VLAN on the same DVS port group; the collector is running on the management network so it can communicate with vCenter and the ESXi hosts. I used an FTP connection to generate traffic between the VMs. Below are example outputs from the two collectors (the tests were run separately, as both collectors share the same IP):


FTP client on the first VM:

netflow20

FTP server on the second VM:

netflow21

Flow statistics example from nfdump:

netflow22

Flow statistics from ManageEngine:

netflow23


vSphere 6.5 – Stronger security with NFS 4.1


NFS 4.1 has been supported since vSphere 6.0, but its security options were limited. In vSphere 6.5 we get better security in the form of strong cryptographic algorithms with Kerberos (AES). IPv6, which previously was not supported together with Kerberos, now is, and Kerberos integrity checking is supported as well.

As we know, the vSphere 6.0 NFS client does not support the more advanced encryption type known as AES. So let's take a look at what is new in vSphere 6.5 NFS in terms of encryption standards:
storage5
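As an example, mounting an NFS 4.1 datastore with Kerberos authentication plus integrity checking from the ESXi shell would look roughly like this (the server, export path and datastore name are made up; the host must already be joined to Active Directory with NFS Kerberos credentials configured):

    esxcli storage nfs41 add --hosts=nfs1.lab.local --share=/export/ds01 --volume-name=NFS41-DS01 --sec=SEC_KRB5I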

Summary:

  • NFS 4.1 has been supported since vSphere 6.0
  • vSphere 6.5 supports stronger cryptographic algorithms with Kerberos authentication using AES
  • Kerberos integrity checking (SEC_KRB5I) is introduced alongside Kerberos authentication in vSphere 6.5
  • Support for IPv6 with Kerberos has been added
  • Host Profiles support for NFS 4.1 has been added
  • Better security for customer environments


vSphere 6.5 – New scale limits for paths & LUNs


In vSphere 6.5 VMware doubled the previous limits and continues to work on raising them further. The old limits (before 6.5) posed a challenge: for example, some customers have 8 paths to a LUN, and in that configuration you can have a maximum of 128 LUNs in a cluster. Also, many customers tend to use smaller LUNs to segregate important data for easy backup and restore. This approach can also exhaust the old LUN and path limits.

Larger LUN limits enable larger cluster sizes and hence reduce management overhead.

storage4
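To see how close a host is to these limits, a quick PowerCLI check can help (the host name is a placeholder):

    # count disk LUNs and their paths on a single ESXi host
    $vmhost = Get-VMHost -Name 'esxi01.lab.local'
    $luns   = Get-ScsiLun -VmHost $vmhost -LunType disk
    $paths  = $luns | Get-ScsiLunPath
    "{0} LUNs, {1} paths" -f $luns.Count, $paths.Count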

Summary:

  • The previous limit was 256 LUNs and 1024 paths
  • That limited customer deployments requiring higher path counts
  • Customers using small LUNs for important files/data need larger LUN limits to work with
  • Larger path/LUN limits enable larger cluster sizes, reducing the overhead of managing multiple clusters
  • vSphere 6.5 supports 512 LUNs and 2,000 paths


PowerCLI course


I was always keen on getting deeper knowledge of PowerCLI or, in other words, on starting to use it in daily administrative tasks. I decided to do something about it, and I think the best way is to write my own guide in the form of structured notes and share it here with you. Perhaps someone will find it useful.

Therefore, in the PowerCLI & VMA tab you can find the agenda of this course, which will be systematically updated with the next parts.

The following parts are planned:

  1. VMware PowerCLI – Introduction
  2. Useful Tools
  3. Basic commands to generate and export reports
  4. Monitoring VMs with PowerCLI
  5. Managing VMs using PowerCLI
  6. Managing multiple VMs based on their tags
  7. Monitoring ESXi hosts with PowerCLI
  8. Managing ESXi hosts using PowerCLI
  9. Managing virtual networks using PowerCLI
  10. Managing Cluster-wide settings using PowerCLI
  11. Complete ESXi configuration with a single script
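Just to give a taste of where the course will start, a minimal first report might look like this (the server name and output path are placeholders):

    # connect to vCenter and dump a basic VM inventory to CSV
    Connect-VIServer -Server 'vcenter.lab.local'
    Get-VM |
        Select-Object Name, PowerState, NumCpu, MemoryGB |
        Export-Csv -Path 'C:\Reports\vm-inventory.csv' -NoTypeInformation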


If you have any suggestions about what else should be included in such a course, do not hesitate to contact me via the comments.

Network virtualization for Dummies – from VMware


VMware shared a free eBook – Network Virtualization for Dummies. It's the next book in the "for Dummies" series. The main goal of the series is to describe technical topics in the clearest and easiest way possible. I haven't read this one yet, but "Virtualization for Dummies" was quite good in my opinion.

If you are keen on the network virtualization topic, I strongly encourage you to download it here.


vSphere 6.5 – vSphere HA Orchestrated Restart


VMware announced a new feature in vSphere 6.5 called HA Orchestrated Restart. But wait a minute – wasn't this already available in previous versions, where you were able to set the restart priority for specific VMs or groups of VMs? So what's going on with this "new feature"? As always, the devil is in the details 🙂

Let's start with the old behavior. Using VM overrides in previous versions of vSphere, we could set one of three available priorities – High, Medium (default) and Low. However, this doesn't guarantee that the restart order will work for our three-tier apps, because HA is only really concerned with securing resources for the VM; once the VM has received its resources, HA's job is done. The restart priority defined the order in which VMs would secure their resources, but if there were plenty of resources for everyone, the VMs would receive their allocations in pretty quick succession and could start powering on almost simultaneously. For example, if the DB server takes longer to boot than the App server, the App will not be able to access the DB and may fail.

vSphere 6.5 now allows you to create VM-to-VM dependency chains. These dependency rules are also enforced when vSphere HA is used to restart VMs from failed hosts. That gives you the ability to configure the right chain of dependencies, where the App server will wait for the DB server until it boots up. The VM-to-VM rules must also be created so that they comply with the restart priority level. In this example, if the app server depends on the database server, the database server needs to be configured with a priority level higher than or equal to the app server.

orchestrated-restart

Validation checks are also automatically done when this feature is configured to ensure circular dependencies or conflicting rules are not unknowingly created.

There are a number of conditions that HA can check to determine the readiness of a VM, and the administrator can choose which of them counts as the acceptable readiness state for orchestrated restarts.

Conditions:

  1. VM has resources secured (same as old behavior)
  2. VM is powered on
  3. VMware Tools heartbeat detected
  4. VMware Tools Application heartbeat detected

Post condition delays:

  1. User-configurable delay – e.g. wait 10 minutes after power on

The configuration of the dependency chain is very simple.  In the Cluster configuration of the Web Client, you would first create the VM groups under VM/Host Groups.  For each group, you would include only a single VM.

orchestrated-restart2-jpg

The next thing to configure is the VM Rules in VM/Host Rules section.  This is where you can define the dependency between the VM Groups.  Since each group only contains a single VM, you are essentially creating a VM to VM rule.

orchestrated-restart3
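The same two steps can also be scripted. The sketch below uses the raw vSphere API through PowerCLI and is only a rough illustration – the cluster, group and VM names are all made up; ClusterDependencyRuleInfo is the vSphere 6.5 API type behind these restart dependency rules:

    $cluster = Get-Cluster -Name 'LabCluster'
    $spec = New-Object VMware.Vim.ClusterConfigSpecEx

    # one VM group per VM, as in the Web Client example
    $dbGroup = New-Object VMware.Vim.ClusterVmGroup
    $dbGroup.Name = 'DB-Group'
    $dbGroup.Vm = @((Get-VM -Name 'db01').ExtensionData.MoRef)
    $appGroup = New-Object VMware.Vim.ClusterVmGroup
    $appGroup.Name = 'App-Group'
    $appGroup.Vm = @((Get-VM -Name 'app01').ExtensionData.MoRef)
    $spec.GroupSpec = foreach ($g in $dbGroup, $appGroup) {
        $gs = New-Object VMware.Vim.ClusterGroupSpec
        $gs.Operation = 'add'
        $gs.Info = $g
        $gs
    }

    # restart dependency: App-Group waits for DB-Group
    $rule = New-Object VMware.Vim.ClusterDependencyRuleInfo
    $rule.Name = 'App-depends-on-DB'
    $rule.Enabled = $true
    $rule.VmGroup = 'App-Group'
    $rule.DependsOnVmGroup = 'DB-Group'
    $ruleSpec = New-Object VMware.Vim.ClusterRuleSpec
    $ruleSpec.Operation = 'add'
    $ruleSpec.Info = $rule
    $spec.RulesSpec = @($ruleSpec)

    $cluster.ExtensionData.ReconfigureComputeResource($spec, $true)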

In previous releases we were able to manage such behavior using, for example, SRM during a failover to a recovery site. However, there are plenty of use cases where it's necessary to provide the correct restart order within a single site and HA cluster. Fortunately, now it's possible 🙂

vSphere 6.5 – Automatic UNMAP


In vSphere 6.5 VMware automates the UNMAP process: VMFS tracks the deleted blocks and reclaims the deleted space from the backend array in the background. This background operation should ensure that there is minimal storage I/O impact from UNMAP operations.

storage3

Just to remind – UNMAP is a VAAI primitive with which we can reclaim dead or stranded space on a thinly provisioned VMFS volume. Until now, this had to be initiated by running a simple esxcli command, which frees up deleted blocks on the storage.
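The manual variant looks like this when run in an ESXi shell (the datastore label is made up; the reclaim unit is optional). The second command, for checking the new automatic reclamation setting on a VMFS 6 datastore, is my assumption based on the 6.5 CLI namespace:

    # manually reclaim free space on a thin-provisioned VMFS datastore
    esxcli storage vmfs unmap --volume-label=DS01 --reclaim-unit=200

    # vSphere 6.5 / VMFS 6: show the automatic space reclamation configuration
    esxcli storage vmfs reclaim config get --volume-label=DS01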


Let's go through an UNMAP example illustrating the process:

  1. A VM is provisioned on a vSphere host and assigned a 6 TB VMDK.
  2. Thin-provisioned VMDK storage space is allocated on the storage array.
  3. The user installs a PoC data analytics application and creates a 400 GB database VM.
  4. Once the work with this database is done, the user deletes the DB VM – VMFS initiates space reclamation in the background.
  5. The 400 GB of space on the array side should be freed and claimed back.

One of the design goals is to make sure there is minimal impact from UNMAP on storage I/O. VMware is also looking into using the new SESparse format as the snapshot file format to enable this.

Space reclamation is critical when customers are using all-flash storage due to the higher cost of flash, and any storage usage optimization will provide better ROI for customers.

Summary:

  • Automatic UNMAP does not require any manual intervention or scripts
  • Space reclamation happens in the background
  • CLI based UNMAP continues to be supported
  • Storage I/O impact due to automatic UNMAP is minimal

  • Supported in vSphere 6.5 with new VMFS 6 datastores

Quick desktop restoration using VMware Mirage


Backup of physical desktops is one of the primary functions of VMware Mirage. Some people say that real men don't make backups 🙂

Although sometimes it's a good idea to have some kind of backup. Thanks to VMware Mirage, IT administrators can easily back up, migrate, provision and do a lot of the administrative tasks related to users' desktops and laptops much faster.

VMware Mirage’s main features are:

  • Centralized Virtual Desktops (CVDs)
  • Streamlined Windows Image Management
  • Application Layering
  • Network and Storage Optimizations
  • Branch Reflector
  • Driver Library

Mirage allows you to restore an entire device from a CVD snapshot in case of hard drive or device failure, or when a replacement is needed (e.g. a stolen laptop). That's the case I tested recently, and I have to admit it's pretty straightforward.

Let's take a look at this. VMware Mirage provides an embedded wizard called Disaster Recovery, which can be used to replace a hard disk or a complete machine.

mirage1

When you choose that option you can select which CVD will be used to restore the machine.

mirage2

In the next step you can pick the right option. There are three options available:

  • Full System Restore – includes the operating system, applications, and user data and settings. That's the option you will use to restore a complete machine.
  • Restore applications, user data and settings
  • Restore user data and settings

mirage4

During the restore process you can also switch the base layer from your repository. Currently my test environment lacks base layers; anyway, my goal is to restore the original image without any changes.

mirage5

It is also possible to change the computer name, domain membership, etc., but again I want to have my new desktop in exactly the same state as the old one.
mirage7

After these steps you will see the summary. The length of the process largely depends on your network and the size of the CVD.

mirage9