
Adding a sound card to an ESXi-hosted VM


A sound card in a vSphere virtual machine is an unsupported configuration – this feature is dedicated to virtual machines created in VMware Workstation. However, you can still add an HD Audio device to a vSphere virtual machine by manually editing the .vmx file. I have tested it in our lab environment and it works just fine.

Below is the procedure:

  1. Verify the datastore where the VM without a sound card resides.

soundcard1

  2. Log in as root to the ESXi host where the VM resides, using SSH.
  3. Navigate to /vmfs/volumes/<VM LUN>/<VM folder>
     In my example it was:
     ~# cd /vmfs/volumes/Local_03esx-mgmt_b/V11_GSS_DO
  4. Shut down the problematic VM.
  5. Edit the .vmx file using the vi editor.

IMPORTANT:
Make a backup copy of the .vmx file. If your edits break the virtual machine, you can roll back to the original version of the file.
For more information about editing files on an ESXi host, refer to KB article: https://kb.vmware.com/kb/1020302

  1. Once you have opened the .vmx file for editing, navigate to the bottom of the file and add the following lines to the .vmx configuration:
    sound.present = "true"
    sound.allowGuestConnectionControl = "false"
    sound.virtualDev = "hdaudio"
    sound.fileName = "-1"
    sound.autodetect = "true"
  2. Save the file and power on the virtual machine (a quick way to verify the entries is shown right after this list).
  3. Once it has booted and you have enabled the Windows Audio service, sound will work fine.
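A quick way to verify that the entries were saved – run it from the VM's directory (the wildcard avoids assuming the exact .vmx file name):

~# grep "^sound\." *.vmx

It should print the five sound.* lines added above; if nothing is returned, the edit was not saved.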

If you go to "Edit Settings" of the VM, you will see information that the device is unsupported. Please be aware that after adding a sound card to your virtual machine you may experience unexpected behavior (tip: in our lab environment this configuration works without issues).

VCAP6-DCV Design exam experience(s)


Finally, I'm proud to announce that the VCIX6-DCV goal is achieved!

Previously I passed the Deploy exam (you can read about it in this post), which for me personally was far more intuitive and effortless. If you are more of a practitioner than a visionary and designer, it can be quite tough to get used to this kind of question and reasoning. In my opinion there are a few points I cannot agree with, and I would be glad to discuss these points of view with the authors of the questions 🙂

However, as I read on one of the blogs, this is a VMware exam and they are entitled to their own point of view and opinions about best practices in designing virtual environments.

As you noticed, I used the plural of the word experience – it's not hard to guess why. Yes, I had to take the exam twice. Although I finished the first attempt quite satisfied and full of hope, the reality was brutal. 243 points turned out not to be enough to pass it… That was food for thought.

It made me aware that I needed to prepare better and figure out the key used in the design questions. It's not exactly a key, but rather the way the designs are constructed. As usual, the Internet was priceless. First of all, I found tips that the exam is similar to the VCAP5 version, and following this idea I read the VCAP5-DCD Official Cert Guide. This was quite useful. Then I tried to think back over the design questions I had encountered and work out what could have been wrong there.

After a few more white papers, blog articles and other reading I took the second attempt and, happily, this time the result was much better – I finally managed to pass and gain the complete VCIX title.

A few tips from me:

  1. Be fresh and rested on exam day (there are 205 minutes, which is quite a long time to sit in front of the screen).
  2. Stay focused and read all the questions and instructions carefully, at least twice.
  3. Start with the design questions, which will take you a bit more time.
  4. Be prepared.

Materials I found useful during my preparation:

  1. The VCAP6-Design blueprint and all associated documents – especially those from objectives 1.2 and 1.3, which should be read more than once
  2. VCAP5-DCD Official Cert Guide
  3. Study Guides of other people
  4. Google+ VCAP-DCD Study Group

I also recommend getting familiar with the scoring methodology described on The Cloud JAR's blog.

 

 

 

VirtualVillage’s home LAB

VirtualVillage’s home LAB

It is possible to learn, especially about VMware products, using just books, official trainings, blogs, etc. However, we believe that real knowledge comes only from practice, and not everything can be tested or verified in production environments 🙂

And again, you can test a lot using just Workstation on your notebook (provided it is powerful enough), but these days there are more and more virtual infrastructure components which require a lot of resources. Furthermore, having real servers and a storage array is also a little different from deploying a few small virtual machines on a notebook.

That is why a few years ago we decided to join forces and build a real laboratory where we are able to test even the most sophisticated deployments, not only with VMware products, without being constrained by resources.

The main hardware components of our lab infrastructure are listed in the table below.

Hardware Component      | Quantity | Details                     | Purpose
Server Fujitsu TX200 S7 | 2        | 2x CPU E5-4220, 128 GB RAM  | Payload Cluster
Server Fujitsu TX100 S1 | 2        |                             | Router/Firewall and Backup
Server Fujitsu TX100 S3 | 3        | 1x CPU E3-1240, 32 GB RAM   | Management Cluster
NAS Synology DS2413+    | 1        | 12 x 1 TB SATA 7.2K         | Gold Storage
NAS Synology RS3617+    | 1        | 12 x 600 GB SAS 15K         | Silver Storage
NAS QNAP T410           | 1        | 4 x 1 TB SATA 5.4K          | Bronze Storage (ISO)
Switch HPE 1910         | 1        | 48x 1 Gbps                  | Connectivity

 

Of course we didn't buy it all at once – the environment evolves with our growing needs. (In the near future we are going to expand the management cluster to four hosts and deploy NSX.)

The logical topology looks like this:

lab

 

Despite the fact that most of our servers use tower cases, we installed them in a self-made 42U rack. Unfortunately, especially during the summer, it cannot do without air conditioning (which is one of the most power-consuming parts of the lab…).

 

Later, either Daniel or I will describe the software layer of our lab. I hope it will give some inspiration to anyone who is thinking about building their own lab.

 

VMware PowerCLI – Introduction


To begin the journey with PowerCLI we need to start with the installation of PowerCLI itself.

The installation can be done on a Windows-based system, which could be some kind of administration server. The installation files can be found on this VMware site.

There are a few versions available; they are released asynchronously with vSphere and the version numbers do not exactly correspond to vSphere versions. The most recent version is 6.5, while others such as 6.3, 6.0 or 5.8 are also available.

Before you install PowerCLI I recommend changing the PowerShell execution policy – it is required to run scripts. To do this, run Windows PowerShell as administrator and execute the following command:

Set-ExecutionPolicy RemoteSigned

The installation process is really straightforward, which is why I will not spam installation screenshots here.

After you finish the installation you can run it and see the first Welcome screen like this:

powercli1

 

The first command I suggest using is:

Get-VICommand

It lists all the available commands. However, to display any information about the virtual infrastructure you need to connect to a vCenter Server or an ESXi host. We will do that in the next part, after introducing some useful tools which can be used in conjunction with PowerCLI.
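As a small teaser, connecting and listing virtual machines takes just a couple of cmdlets – a minimal sketch (the vCenter address and credentials below are placeholders for your own):

# Connect to a vCenter Server (or an ESXi host) – address and credentials are placeholders
Connect-VIServer -Server vcenter.lab.local -User administrator@vsphere.local -Password 'VMware1!'

# List all virtual machines with a few basic properties
Get-VM | Select-Object Name, PowerState, NumCpu, MemoryGB

# Disconnect when finished
Disconnect-VIServer -Server vcenter.lab.local -Confirm:$false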

How to monitor a virtual network – a story about NetFlow in a vSphere environment


Before we start talking about NetFlow configuration on VMware vSphere, let's get back to basics and review the protocol itself. NetFlow was originally developed by Cisco and has become a fairly standard mechanism for performing network analysis. NetFlow collects network traffic statistics on designated interfaces. It is commonly used in the physical world to help gain visibility into traffic and to understand who is sending what, and to where.

NetFlow comes in a variety of versions, from v1 to v10. VMware uses the IPFIX variant of NetFlow, which is version 10. Each NetFlow monitoring environment needs an exporter (the device sending the NetFlow flows), a collector (the main component) and of course some network to monitor and analyze 😉

Below you can see a basic environment diagram:

netflow1

We can describe a flow as a unidirectional sequence of TCP/IP packets that share a common:

  • Input interface
  • Source IP
  • Destination IP
  • IP protocol
  • Source port (TCP/UDP)
  • Destination port (TCP/UDP)
  • IP Type of Service (ToS)

Note: vSphere 5.0 uses NetFlow version 5, while vSphere 5.1 and later use IPFIX (version 10).

OK, we know that a distributed virtual switch is needed to configure NetFlow on vSphere, but what about the main component, the NetFlow collector? As usual we have a couple of options, which we can roughly divide into commercial software with fancy graphical interfaces and open-source stuff for admins who still like the good old CLI 😉

Below I will show simple implementation steps with examples of both approaches:

I) ManageEngine NetFlow Analyzer v12.2 – more about the software at https://www.manageengine.com/products/netflow/. My lab VM setup:

  • Guest OS: Windows 2008 R2
  • 4 GB RAM
  • 2 vCPU
  • 60 GB HDD
  • vNIC interface connected to the ESXi management network

Installation (using the embedded database, just for demo purposes) is really simple and straightforward. Let's begin by launching the installer:

netflow2

 

  1. Accept the license agreement.

netflow3

  2. Choose the installation folder on the VM's disk.

netflow4

  3. Choose the installation components – for this demo we go with a simple environment with only one collector server; central reporting is not necessary.

netflow5

  4. Choose the TCP/IP ports for the web server and collector services.

netflow6

  5. Provide communication details – again, in this demo we have all components on one server, so we can simply go with localhost.

netflow7

  6. Optional – configure proxy server details.

netflow8

  7. Select the database – for this demo I used the embedded PostgreSQL, but if you choose a Microsoft SQL Server database, remember about the ODBC configuration.

netflow9

  8. The installation is quite fast – a couple more minutes and the solution will be ready to work:

netflow10

 

… the web client, like in VMware, needs a couple of CPU cycles to start 😉

netflow11

.. and finally we can see the fancy ManageEngine NetFlow collector:

netflow12

II) Open-source nfdump tool – nfdump is distributed under the BSD license and can be downloaded at http://sourceforge.net/projects/nfdump/. My lab VM setup:

  • Guest OS: Debian 8.6
  • 4 GB RAM
  • 2 vCPU
  • 60 GB HDD
  • vNIC interface connected to the ESXi management network

 

  1. We need to start by adding some package sources to our Debian distribution:

netflow13

  2. Install the nfdump package from the CLI:

netflow15

netflow14

  3. Run a simple flow capture to verify that the collector is running and creating output flow statistics files (you can see that I use the same port, 9995, and a folder on my desktop as the output destination) – a CLI sketch of these steps follows below:

netflow16
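For reference, the same steps look roughly like this on the command line (the package name, port and output folder match my lab setup – adjust them to yours):

// install the nfdump tools from the Debian repositories
# apt-get update && apt-get install -y nfdump

// start a capture daemon listening on port 9995 and writing flow files to a local folder
# nfcapd -D -p 9995 -l /root/Desktop/flows

// later: summarize the collected flows, e.g. top talkers by bytes
# nfdump -R /root/Desktop/flows -s ip/bytes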

 

OK, now it is time to get back to vSphere and configure the DVS to send flow data to the collector. The available settings are described below, and a scripted alternative is sketched right after the list:

netflow17

 

  • IP Address: the IP of the NetFlow collector.
  • Port: the port used by the NetFlow collector.
  • Switch IP Address: this one can be confusing – by assigning an IP address here, the NetFlow collector will treat the VDS as one single entity. It does not need to be a valid, routable IP; it is merely used as an identifier.
  • Active flow export timeout in seconds: the amount of time that must pass before the switch fragments the flow and ships it off to the collector.
  • Idle flow export timeout in seconds: similar to the active flow timeout, but for flows that have entered an idle state.
  • Sampling rate: determines which packets to collect. By default the value is 0, meaning collect all packets. If you set the value to something other than 0, it will collect every Xth packet.
  • Process internal flows only: enabling this ensures that the only flows collected are ones that occur between VMs on the same host.
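For those who prefer scripting, the same switch-level settings can in principle be pushed through the vSphere API from PowerCLI. The snippet below is a rough, hypothetical sketch (object and property names come from the VMwareIpfixConfig API data object – verify them against your vSphere version; the switch name and IP addresses are placeholders):

# Fetch the distributed switch and prepare a reconfiguration spec
$vds  = Get-VDSwitch -Name "DVS-Lab"
$spec = New-Object VMware.Vim.VMwareDVSConfigSpec
$spec.ConfigVersion   = $vds.ExtensionData.Config.ConfigVersion
$spec.SwitchIpAddress = "10.0.0.10"            # identifier only, does not have to be routable

# NetFlow (IPFIX) settings, mirroring the options described above
$ipfix = New-Object VMware.Vim.VMwareIpfixConfig
$ipfix.CollectorIpAddress = "192.168.1.50"     # NetFlow collector IP
$ipfix.CollectorPort      = 9995
$ipfix.ActiveFlowTimeout  = 60
$ipfix.IdleFlowTimeout    = 15
$ipfix.SamplingRate       = 0                  # 0 = collect all packets
$ipfix.InternalFlowsOnly  = $false
$spec.IpfixConfig = $ipfix

# Apply the change to the VDS
$vds.ExtensionData.ReconfigureDvs_Task($spec)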

And enable it at the designated port group level:

netflow18

Finally, we can create a simple lab scenario and capture some FTP flow statistics between two VMs running on different ESXi hosts:

netflow19

The VMs are running in a dedicated VLAN on the same DVS port group; the collector runs on the management network so it can communicate with vCenter and the ESXi hosts. I used an FTP connection to generate traffic between the VMs. Below are example outputs from the two collectors (the tests were run separately, as the collectors share the same IP):
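If you only want to see the FTP session itself, nfdump's tcpdump-like filter syntax makes it easy to narrow the output – for example (flow folder as in the earlier capture; a sketch, FTP control traffic only):

# nfdump -R /root/Desktop/flows 'proto tcp and port 21'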

 

FTP client on the first VM:

netflow20

FTP server on the second VM:

netflow21

Flow statistics example from nfdump:

netflow22

Flow statistics from ManageEngine:

netflow23

 

Increase VMware ESXi iSCSI storage performance? – let's demystify the tips and tricks


 

Before we start, I would like to describe the main motivation for writing this article, which is quite simple – to gather in one place all the basic theoretical background about the iSCSI protocol and best practices for implementing it on the vSphere platform, with special attention to potential performance tuning tips & tricks. This is the first part of a series in which we (I'm counting on reader participation) try to gather and verify all these "magical" parameters, often treated as myths by many admins.

To begin, let's start with something boring but, as usual, necessary 😉 … the theoretical background.

iSCSI is a network-based storage standard that enables connectivity between an iSCSI initiator (client) and a target (storage device) over a well-known IP network. To explain this storage standard in a very simple way: SCSI commands are encapsulated in IP packets and sent over a traditional TCP/IP network, where targets and initiators de-encapsulate the TCP/IP datagrams to read the SCSI commands. We have a couple of implementation options, because the TCP/IP network components transporting SCSI commands can be realized in software and/or hardware.

 

Important iSCSI standard concepts and terminology:

  • Initiator – functions as an iSCSI client. An initiator typically serves the same purpose to a computer as a SCSI bus adapter would, except that, instead of physically cabling SCSI devices (like hard drives and tape changers), an iSCSI initiator sends SCSI commands over an IP network. Initiators can be divided into two broad types:
    • A software initiator implements iSCSI in code, using an existing network card to emulate a SCSI device and communicate via the iSCSI protocol. Software initiators are available for most popular operating systems and are the simplest and most economical method of deploying iSCSI.
    • A hardware initiator is based on dedicated hardware, typically special firmware running on that hardware, implementing iSCSI on top of a network adapter which acts as an HBA card in the server. Hardware decreases the CPU overhead of iSCSI and TCP/IP processing, which is why it may improve the performance of servers that use the iSCSI protocol to communicate with storage devices.
  • Target – a resource located on an iSCSI server, most often a dedicated network-connected storage device (well known as a storage array) that provides a target as an access gateway to its resources. It may also be a "general-purpose" computer or even a virtual machine, because, as with initiators, an iSCSI target can be realized in software.
  • Logical unit number – in iSCSI terms, a LUN is a logical unit identified by a unique number. A LUN is a representation of an individual SCSI (logical) device that is made accessible through a target. Once the iSCSI connection is established (emulating a connection to a SCSI hard drive), initiators treat iSCSI LUNs as they would a raw SCSI or IDE hard drive. In many deployments a LUN represents part of a large RAID (Redundant Array of Independent Disks) array, leaving the choice of filesystem to the operating system that uses it.
  • Addressing – iSCSI uses TCP ports (usually 860 and 3260) for the protocol and uses special names to address both iSCSI initiators and targets. iSCSI provides the following name formats:
    • iSCSI Qualified Name (IQN) – for example iqn.1998-01.com.vmware:esx01, consisting of:
      • iqn – the literal "iSCSI qualified name" prefix
      • the date (year-month) that the naming authority took ownership of the domain
      • the reversed domain name of the authority
      • an optional ":" prefixing a storage target name specified by the naming authority
    • Extended Unique Identifier (EUI) – format: eui.{EUI-64 bit address} (eui.xxxxxxxxx)
    • T11 Network Address Authority (NAA) – format: naa.{NAA 64 or 128 bit identifier} (naa.xxxxxxxxxxx)

Note: IQN format addresses occur most commonly.

  • iSNS – iSCSI initiators can locate appropriate storage resources using the Internet Storage Name Service (iSNS) protocol. iSNS provides iSCSI SANs with the same management model as dedicated Fibre Channel SANs. In practice, administrators can implement many deployment goals for iSCSI without using iSNS.

The iSCSI protocol is maintained by the IETF – for more information please see RFCs 3720, 3721, 3722, 3723, 3747, 3780, 3783, 4018, 4173, 4544, 4850, 4939, 5046, 5047, 5048 and 7143 at http://tools.ietf.org

  

And finally, for those who dared to read the whole boring theory part – the main dish: my list of performance "tips and tricks" to demystify over this blog series. A few commands to check the current values follow the list:

  1. iSCSI initiator (hardware or software) queue depth:

     // example for the software iSCSI initiator
     # esxcfg-module -s iscsivmk_LunQDepth=64 iscsi_vmk

  2. Adjusting the Round Robin IOPS limit:

     // examples for the "iops" and "bytes" parameters
     # esxcli storage nmp psp roundrobin deviceconfig set -t=iops -I=10 -d=naa.xxxxxxxxxxxx
     # esxcli storage nmp psp roundrobin deviceconfig set -t=bytes -B 8972 -d=naa.xxxxxxxxxxx

  3. NIC/HBA driver and firmware versions on the ESXi hypervisor:

     // https://www.vmware.com/resources/compatibility/search.php

  4. Using jumbo frames for iSCSI:

     // https://kb.vmware.com/kb/1007654

  5. Controlling LUN queue depth throttling:

     // example based on KB: http://kb.vmware.com/kb/1008113
     # esxcli storage core device set --device naa.xxxxxxxxxx --queue-full-threshold 8 --queue-full-sample-size 32

  6. Delayed ACK enable/disable:

     // https://kb.vmware.com/kb/1002598

  7. Port binding considerations – to use or not to use:

     // https://kb.vmware.com/kb/2038869
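Before changing any of these it is worth checking the current values first. Two read-only commands I use for that (the naa identifier is a placeholder, as in the examples above):

// current software iSCSI adapter module parameters (including the LUN queue depth)
# esxcli system module parameters list -m iscsi_vmk

// current Round Robin path selection settings for a given device
# esxcli storage nmp psp roundrobin deviceconfig get -d naa.xxxxxxxxxxxx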

  

In the next article I will try to gather all the best practices at the ESXi hypervisor configuration level and describe the test environment and test methodology.

So let's end this pilot episode with an open question – is it worth using or implementing any of these in a vSphere environment?

 

ESXi host connection lost due to CDP/LLDP protocol


You may observe random and intermittent loss of connection to ESXi 6.0 hosts running on Dell servers (both rack and blade). It is caused by a bug related to the Cisco Discovery Protocol / Link Layer Discovery Protocol. It can also be seen while generating a VMware support log bundle, because during this process these protocols are used to include information about the network.

 

What are these protocols for? Both of them perform similar roles in the local area network: they are used by network devices to advertise their identity, capabilities and neighbors. The main difference is that CDP is a Cisco proprietary protocol while LLDP is vendor-neutral. There are also other niche protocols like the Nortel Discovery Protocol, the Foundry Discovery Protocol or Link Layer Topology Discovery.

CDP and LLDP are also supported by VMware virtual switches, which can thereby gather and display information about the physical switches. CDP is available for both standard and distributed switches, whilst LLDP is available only for distributed virtual switches, since vSphere 5.0.

cdp

Cisco Discovery Protocol information displayed at the vSwitch level.

 
There is currently no resolution for this bug, but thanks to VMware Technical Support the workaround described below is available.

 

Turn off CDP for each vSwitch:

# esxcfg-vswitch -B down vSwitchX

You can also verify the current CDP status using the following command:

# esxcfg-vswitch -b vSwitchX
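If you have more than one standard vSwitch, a small loop saves some typing – a rough sketch (it assumes the default vSwitchN naming in the esxcfg-vswitch -l output; adjust the awk pattern for custom names):

# for sw in $(esxcfg-vswitch -l | awk '/^vSwitch/ {print $1}'); do esxcfg-vswitch -B down "$sw"; esxcfg-vswitch -b "$sw"; done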

This simple task will resolve the problem of random connection loss on ESXi hosts. However, it will not solve the problem of losing the connection while generating a log bundle.

To confirm that the problem exists you can simply run the following command:

# vm-support -w /vmfs/volumes/datastore_name

Even though we have turned off CDP, during the log-generation process ESXi still runs the LLDP tooling to gather information about the network topology.

To fix it you have to download the script called disablelldp2.py and perform the steps below:

  1. Copy the script to a datastore which is shared with all hosts.
  2. Open an SSH session to an ESXi host:
    1. Change to the location where you copied the script.
    2. Grant execute permission: # chmod 555 disablelldp2.py
  3. Run the script: # ./disablelldp2.py
  4. After the script has executed, move to /etc/rc.local.d and edit the local.sh file. It should look like this:

#!/bin/sh

# local configuration options

# Note: modify at your own risk!  If you do/use anything in this
# script that is not part of a stable API (relying on files to be in
# specific places, specific tools, specific output, etc) there is a
# possibility you will end up with a broken system after patching or
# upgrading.  Changes are not supported unless under direction of
# VMware support.

ORIGINAL_FILE=/sbin/lldpnetmap
MODIFIED_FILE=/sbin/lldpnetmap.original

if test -e "$MODIFIED_FILE"
then
  echo "$MODIFIED_FILE already exists."
else
  mv "$ORIGINAL_FILE" "$MODIFIED_FILE"
  echo "Omitting LLDP Script." > "$ORIGINAL_FILE"
  chmod 555 "$ORIGINAL_FILE"
fi
exit 0

  5. Restart the ESXi server and run the vm-support command again to confirm that the problem is solved.