Here is a white paper on best practices for distributed virtual switches:
Distributed vSwitches offer an extra choice for NIC Teaming, “Route based on physical NIC load”. Here is a good explanation, plus a comparison to IP-hash based teaming.
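For reference, recent PowerCLI releases include VDS cmdlets that can set this policy on a distributed port group; a sketch, with a hypothetical port group name:
Get-VDPortgroup -Name 'dvPG-Production' | Get-VDUplinkTeamingPolicy | Set-VDUplinkTeamingPolicy -LoadBalancingPolicy LoadBalanceLoadBased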
DVS information is stored on each ESX host located at /etc/vmware/dvsdata.db. This is a binary file (database) that can be dumped with the net-dvs command and the “-f” switch. The data is also automatically stored on a shared VMFS volume in a folder named “.dvsData”. Here is an interesting article concerning how DVS data stored on vCenter can get out of sync with a host and what action to take to correct the issue: http://kb.vmware.com/kb/1010913
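For example, to dump the database on a host:
net-dvs -f /etc/vmware/dvsdata.db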
Cisco has created the world’s first 3rd-party distributed vSwitch for use with vSphere. Here are some details:
DMZ virtualization using the Cisco Nexus 1000V:
A new virtual NIC choice in vSphere 5 is the e1000e adapter:
VMSafe is a technology and set of APIs that allow 3rd party vendors to invent new products to secure vSphere-based servers and applications in ways that could not be accomplished in non-virtualized environments.
This document provides the results of tests performed by NetApp on NetApp storage using FC, iSCSI, and NFS access. The goal of the tests was not to measure maximum performance, but to measure performance under low, medium, and high workloads. The results indicate:
Test results - http://media.netapp.com/documents/tr-3808.pdf
Here is a link to a good whitepaper on configuring iSCSI, written as a collaboration among multiple vendors: http://virtualgeek.typepad.com/virtual_geek/2009/09/a-multivendor-post-on-using-iscsi-with-vmware-vsphere.html
Here is a simple illustration of using multiple VMkernel ports on the same virtual switch, but in separate port groups, with port binding to distribute I/O manually across multiple paths: http://goingvirtual.wordpress.com/2009/07/17/vsphere-4-0-with-software-iscsi-and-2-paths/
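In vSphere 4, the port-binding step itself is performed from the command line; a sketch, assuming the software iSCSI adapter is vmhba33 and the second VMkernel port is vmk1:
esxcli swiscsi nexus add -n vmk1 -d vmhba33
In vSphere 5, the equivalent is esxcli iscsi networkportal add -A vmhba33 -n vmk1.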
TPGS and ALUA:
Explanation of Target Port Group Support (TPGS) and Asymmetric Logical Unit Access (ALUA): http://developers.sun.com/solaris/articles/tpgs_support.html
The native multi-pathing modules in vSphere do not provide true load balancing; however, vSphere provides vStorage APIs and a pluggable architecture permitting partner storage vendors to produce unique multi-pathing modules. The primary example of this is EMC PowerPath/VE, which can be used to provide true load balancing in a vSphere environment.
Here is some really good information on understanding how to analyze and troubleshoot storage performance issues in vSphere:
Some vSphere features rely on Changed Block Tracking (CBT), a vmkernel feature introduced in vSphere 4.0.
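CBT is enabled per VM (and per virtual disk) through advanced configuration parameters. A sketch of the .vmx entries, assuming the disk of interest is scsi0:0; the VM must be powered off, or the values added through the vSphere Client's advanced settings dialog:
ctkEnabled = "TRUE"
scsi0:0.ctkEnabled = "TRUE"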
Improper alignment of a VMFS partition can certainly result in poor performance of applications. Typically, using the vSphere Client to create VMFS datastores results in proper alignment, but this may not always be the case. Here are documents that discuss the impact and the steps to ensure proper alignment: http://www.vmware.com/resources/techresources/608 and http://www.vmware.com/pdf/esx3_partition_align.pdf.
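To check where an existing VMFS partition begins, the partition table can be inspected from the host; a sketch for ESXi 5, using a hypothetical device name (an aligned partition typically starts at sector 2048, or 128 on older VMFS volumes):
partedUtil getptbl /vmfs/devices/disks/naa.60a98000486e2f66
On older ESX hosts, fdisk -lu reports the same starting-sector information.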
Guest O/S Track Alignment: Likewise, the guest operating system’s file system should also be properly aligned for best performance. Typically, this rule only needs to be applied to non-system disks. For example, when adding a second disk to a Windows server that is intended to hold application software, the following diskpart commands could be used:
select disk 1
create partition primary align=1024
assign letter=E
format fs=ntfs unit=64K label="Applications" nowait
In this example, the disk to be partitioned is Disk 1 (the first disk is Disk 0), the partition begins at a 1024 KB offset, its drive letter is E:, and the allocation unit size is 64 KB.
Here is a good reference: http://msdn.microsoft.com/en-us/library/dd758814(v=sql.100).aspx
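Alignment can then be verified from within Windows; the referenced article uses a query like this, where StartingOffset should divide evenly by the RAID stripe size:
wmic partition get BlockSize, StartingOffset, Name, Index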
Some administrators like to span a VMFS volume across multiple LUNs and partitions (VMFS extents), while others prefer to avoid spanning at all costs. Here is a good blog to help you decide:
Certain activities within a VMFS datastore may result in SCSI reservations that temporarily lock the underlying LUN, impacting other hosts sharing the datastore. Here is a reference: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1005009
Here is information about a free tool from Dell that easily extends the size of partitions within Windows 2003 servers, including VMs:
This is the same tool that is currently used in VMware certification course labs.
ESX servers use monolithic virtual disk files (.vmdk files) that are pre-allocated to the configured size. These files do not grow. Having said that, templates can be stored in various formats, and other VMware products allow additional formats. Here are the options:
Thin-provisioned virtual disks do not automatically shrink whenever files in the virtual disk are deleted. Here is a link that provides a manual method for shrinking the disk: http://www.virtualizationteam.com/virtualization-vmware/vsphere-virtualization-vmware/vmware-esx-4-reclaiming-thin-provisioned-disk-unused-space.html
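A sketch of the usual manual approach, using hypothetical file names: first zero the freed blocks inside the guest (Sysinternals sdelete can do this on Windows), then, with the VM powered off, clone the disk to a new thin-provisioned vmdk from the host:
sdelete -z C:
vmkfstools -i oldvm.vmdk -d thin newvm.vmdk
Note that older sdelete versions use -c rather than -z to zero free space.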
The standalone version has more features than the Server component used in class. For example, it provides options to automatically shut down the source server, re-synchronize final data changes, and boot up the new VM.
Many newer CPUs use a NUMA architecture. Here is a good document concerning NUMA and large VM configurations:
ESX Server provides a ballooning mechanism to borrow RAM from a memory-rich VM and give it to a memory-poor VM. Here is a link to a good, detailed article that includes an explanation of ballooning and other memory-related information:
Memory compression – New Feature in vSphere 4.1
Under memory pressure, the vmkernel now tries to compress a VM's memory pages into a per-VM compression cache. This step occurs just prior to swapping pages to disk (see the example after the links below):
· Memory compression summary: http://www.gabesvirtualworld.com/memory-management-and-compression-in-vsphere-4-1/
· Memory compression whitepaper: http://www.vmware.com/files/pdf/techpaper/vsp_41_perf_memory_mgmt.pdf
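The compression cache is sized as a percentage of each VM's memory and is governed by advanced host settings (Mem.MemZipEnable, Mem.MemZipMaxPct) described in the whitepaper above. A sketch for viewing the current maximum cache percentage on a 4.1 host, assuming the setting path below:
esxcfg-advcfg -g /Mem/MemZipMaxPct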
The vscsiStats command can be run from the Service Console prompt on an ESX server to collect performance data on the activity of the virtual SCSI disks used by VMs.
To list all VMs, their world IDs, and their virtual SCSI devices, use this command:
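vscsiStats -l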
Use the following commands to start collecting SCSI-related performance data for a specific VM and then save the histogram data in a CSV file:
vscsiStats -s -w <worldID>
vscsiStats -p all -c -w <worldID> > /tmp/vmstats-<vmname>.csv
When finished, be sure to stop the data collection with this command:
vscsiStats -x -w <worldID>
Macro used to build histograms from a CSV file produced by vscsiStats:
The vmkernel can use binary translation to run the virtual CPU of a VM, but newer physical CPUs allow the Virtual Machine Monitor (VMM) to use hardware assistance (the VMM runs directly on the pCPU, beneath Ring 0). Likewise, newer pCPUs can virtualize the Memory Management Unit (MMU). Hardware-assisted CPU and MMU virtualization allows the VM to execute more efficiently. This should occur automatically whenever the hardware supports it. To determine whether a VM is actually utilizing hardware-based virtualization, follow the advice in this link:
The paravirtualized SCSI (PVSCSI) adapter has demonstrated improved throughput with less CPU utilization compared to the standard LSI Logic adapter. References:
For special cases where a VM-based application requires extremely high performance, higher than normally expected of a VM, VMDirectPath may be useful. Basically, some I/O devices (a very limited list) can be configured for direct access by a single VM. In other words, the guest O/S accesses the device directly, rather than through the vmkernel.
ESXi 5 introduces a new feature called Host Cache. It allows swapping to host cache, which means swapping to a solid-state drive. Here is a nice article:
Here is the latest released official VMware Security Hardening Guide for vSphere:
vCenter Configuration Manager is worth a look as a tool to help verify compliance with regulatory standards and industry best practices, such as the VMware Hardening Guidelines, PCI, FISMA, HIPAA, and SOX:
Typically, I prefer not to P2V domain controllers, but instead run dcpromo in a Windows VM and then decommission the old physical domain controllers. Here are links with details on the challenges and options for performing P2V migrations of domain controllers:
Here is a link that explains the purpose of many of the log files located on ESXi servers:
ESXi log files do not persist across reboots, so it is best to configure a datastore to hold the log files and/or configure central logging via syslog (using a syslog receiver) or via vilogger (using the vSphere Management Assistant). Here are some details for syslog:
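On ESXi 5, for example, the remote log host can be set with esxcli; the server name and port below are hypothetical:
esxcli system syslog config set --loghost='udp://syslog01.example.com:514'
esxcli system syslog reload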
And more details, including descriptions of the log files:
And details for changing the scratch partition:
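One common approach is to point the advanced setting ScratchConfig.ConfiguredScratchLocation at a folder on a persistent datastore and reboot the host. A sketch from the ESXi shell, with a hypothetical datastore path:
vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /vmfs/volumes/datastore1/.locker-esx01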
Document with information for configuring and supporting VDR: http://myvirtualcloud.net/?p=88
What really happens during a VMotion migration? You will likely see that the following links provide more detail than the descriptions provided in the course materials:
Here is a good link for CPU Compatibility for VMotion:
· VMware vSphere API Reference: http://www.vmware.com/support/developer/vc-sdk/visdk41pubs/ApiReference/index.html
Several automation tools are available for vSphere. The most powerful is probably PowerCLI. Generally, if a vCenter task requires automation beyond what the vSphere Client provides, or if the client does not adequately provide a desired function, the first place to turn may be PowerCLI. It provides a set of cmdlets that can be combined into scripts to run operations against vCenter and the ESX servers.
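A trivial sketch, with a hypothetical vCenter server name, that lists all powered-off VMs:
Connect-VIServer vcenter01
Get-VM | Where-Object { $_.PowerState -eq 'PoweredOff' } | Select-Object Name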
vSphere 5 offers Auto Deploy and Image Builder, which can be used to automate the deployment and configuration of ESXi hosts.
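Image Builder is itself delivered as PowerCLI cmdlets; a sketch of exporting an image profile to a bootable ISO, with hypothetical depot and profile names:
Add-EsxSoftwareDepot C:\depot\ESXi500-depot.zip
Get-EsxImageProfile
Export-EsxImageProfile -ImageProfile 'ESXi-5.0.0-standard' -ExportToIso -FilePath C:\esxi50.iso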
This link provides a list of VM-based storage products that may be useful for proof-of-concept labs, where shared storage is required but a real SAN does not exist:
Here is a tool that may simplify deploying a nested vSphere infrastructure (running ESXi in VMs) in a lab environment for those wanting to practice for the VCP exam or gain familiarity.
Here is some information to supplement the VMware View Install and Configure course: