VM Sizing Example for EVO:RAIL / EMC VSPEX Blue Performance Appliance

I have been thinking about writing a blog post on how many VMs can be hosted on an EMC VSPEX Blue HCIA. There is no straight answer to this; it depends on the VM resource requirements. I have considered a general-purpose VM profile of 2 vCPUs, 4GB of memory, and a single 40GB VMDK.

Before going into the sizing details, let us review the total hardware resources available in the EMC VSPEX Blue Performance appliance powered by VMware EVO:RAIL.


Total resources per appliance are:

Processor: 48 Cores {4 x 12 cores}

Memory: 768 GB {4 x 192 GB}

Storage: 1.6 TB of SSD {4 x 400 GB; used as read cache and write buffer only}

14.4 TB Raw Storage on 10K SAS drives {4 x 3.6 TB}

Network: 8 x 10GbE NIC

4 x 1GbE NIC {for remote management}

In the following example, I am considering deploying 100 virtual machines in a hybrid Virtual SAN cluster. Each virtual machine requires 2 vCPUs, 4GB of memory, and a single 40GB VMDK. This deployment is on a hybrid configuration running Virtual SAN 6.0 and on-disk format v2. I am going with a conservative approach and a vCPU-to-core consolidation ratio of 5:1. The estimation is that the guest OS and application will consume 50% of the storage.

However, the requirement is to have enough storage to allow the VMs to eventually consume 100% of the storage. The only VM Storage Policy setting is NumberOfFailuresToTolerate set to 1. All other policy settings are left at the defaults. The hosts boot from the SATADOM that is present in every node.

Note that we are not including the capacity consumption from component metadata or witnesses; both of these are negligible. Taking into account the considerations above, the calculation for a valid configuration would be as follows:

Host Requirements: 4 hosts for Virtual SAN

Total CPU Requirements: 100 x 2 vCPUs = 200 vCPUs

vCPU-to-core ratio: 5:1

Total CPU Core Requirements: 200 / 5 = 40 cores required

Cores per socket: 6

Total Memory Requirements: 100 x 4GB = 400GB

Total Storage Requirements (without FTT):* 100 x 40GB = 4TB

Total Storage Requirements (with FTT):* 4TB x 2 = 8TB

Total Storage Requirements (with FTT) + VM Swap (with FTT):* 8TB + {2 x (100 x 4GB)} = 8TB + 800GB = 8.8TB

Since all VMs are thinly provisioned on the VSAN datastore, the estimated storage consumption should take the thin provisioning aspect into account before the flash requirement can be calculated.

Estimated Storage Consumption (without FTT) for cache calculation: 50% of 4TB (50% of total storage before FTT) = 2TB

  • Cache Required (10% of Estimated Storage Consumption): 200GB
  • Estimated Snapshot Storage Consumption: 0 (keeping this example simple)
  • Total Storage Requirements (VMs + Snapshots): 8.8TB

Required capacity slack space: 30% {VMware recommendation for VSAN}

Total Storage Requirement + Slack space: 8.8TB + 2.64TB = 11.44TB

Estimated on-disk format overhead (1%): 114GB**

* Thin provisioning/VM storage consumption is not considered here.

** On-disk format overhead calculation is based on the total storage requirements of capacity layer, so may differ slightly based on final capacity layer size.
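
To make the arithmetic above easy to re-run for a different VM profile, here is a minimal Python sketch of the same calculation. It is illustrative only, not an official sizing tool; the inputs are the figures used in this example, and it uses decimal units (1TB = 1,000GB) as in the calculation above.

```python
# Sizing sketch for the 100-VM example above (figures from this post; illustrative only).
vm_count = 100
vcpu_per_vm = 2
mem_per_vm_gb = 4
vmdk_per_vm_gb = 40
consolidation_ratio = 5      # vCPU-to-core ratio of 5:1
ftt = 1                      # NumberOfFailuresToTolerate
slack = 0.30                 # VMware-recommended VSAN slack space
ondisk_overhead = 0.01       # v2 on-disk format overhead
est_consumption = 0.50       # guest OS/app expected to use 50% of the VMDK

# CPU and memory
total_vcpus = vm_count * vcpu_per_vm                  # 200 vCPUs
cores_required = total_vcpus / consolidation_ratio    # 40 cores
total_mem_gb = vm_count * mem_per_vm_gb               # 400 GB

# Storage (FTT=1 means two replicas of every object)
storage_no_ftt_gb = vm_count * vmdk_per_vm_gb         # 4,000 GB
storage_ftt_gb = storage_no_ftt_gb * (ftt + 1)        # 8,000 GB
swap_ftt_gb = vm_count * mem_per_vm_gb * (ftt + 1)    # 800 GB
storage_total_gb = storage_ftt_gb + swap_ftt_gb       # 8,800 GB

# Flash cache: 10% of the estimated (thin) consumption before FTT
cache_gb = storage_no_ftt_gb * est_consumption * 0.10          # 200 GB

# Slack space and on-disk format overhead
with_slack_gb = storage_total_gb * (1 + slack)                 # 11,440 GB
format_overhead_gb = with_slack_gb * ondisk_overhead           # ~114 GB

print(f"Cores required:        {cores_required:.0f}")
print(f"Memory required:       {total_mem_gb} GB")
print(f"Capacity (VMs + swap): {storage_total_gb / 1000:.1f} TB")
print(f"Flash cache required:  {cache_gb:.0f} GB")
print(f"Capacity + 30% slack:  {with_slack_gb / 1000:.2f} TB")
print(f"On-disk overhead (1%): {format_overhead_gb:.0f} GB")
```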

CPU Configuration

In this example, the customer requires 40 cores overall. If we take the 10% Virtual SAN overhead into account, this brings the total number of cores to 44. The VSPEX Blue appliance has 4 nodes, where each node is a dual-socket system providing 12 cores. That gives a total of 48 cores across the 4-node cluster, which is enough for our 44-core requirement across 4 servers. It also meets the requirements of our virtual machines should one host fail and all VMs need to run on just three hosts, with minimal impact to their CPU performance.
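
As a quick sanity check of this CPU math (illustrative Python, figures from this post):

```python
# CPU sanity check (illustrative; figures from this post).
cores_required = 40
vsan_overhead = 0.10
cores_with_overhead = cores_required * (1 + vsan_overhead)    # 44 cores

nodes = 4
cores_per_node = 12                                           # dual socket x 6 cores
cluster_cores = nodes * cores_per_node                        # 48 cores

print(cores_with_overhead <= cluster_cores)                   # True - fits on 4 nodes
# With one node down only 36 cores remain, so the effective vCPU-to-core
# ratio rises above the planned 5:1 - the "minimal impact" mentioned above.
print(cores_with_overhead <= (nodes - 1) * cores_per_node)    # False
```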

Memory Configuration

Each of the 4 nodes would need to contain at least 100GB of memory to meet the running requirements. Again, if a host fails, we want to be able to run all 100 VMs on the remaining three nodes, so we should really consider around 140GB of memory per node. This also provides roughly a 10% overhead for ESXi and Virtual SAN from a memory perspective. Each VSPEX Blue Performance node contains 192GB, so we are good to go from a memory requirement point of view.
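
A similar check for memory per node (illustrative Python, figures from this post):

```python
# Memory-per-node sanity check (illustrative; figures from this post).
total_vm_mem_gb = 100 * 4                             # 400 GB for all VMs
nodes = 4

per_node_normal_gb = total_vm_mem_gb / nodes          # 100 GB with all 4 nodes up
per_node_failover_gb = total_vm_mem_gb / (nodes - 1)  # ~133 GB with one node down

planned_per_node_gb = 140                             # leaves headroom for ESXi and Virtual SAN
node_capacity_gb = 192                                # per VSPEX Blue Performance node

print(round(per_node_normal_gb), round(per_node_failover_gb, 1))   # 100 133.3
print(planned_per_node_gb <= node_capacity_gb)                     # True
```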

Storage Configuration

For this configuration, a total of 8.8TB of magnetic disk and 200GB of flash is required, spread across 4 nodes. To allow for 30% slack space, the actual capacity of the cluster must be 11.44TB. Added to this is the formatting overhead of the v2 Virtual SAN datastore, which is approximately 1% and equates to 114GB. The capacity required is now 11.55TB.

Since we have already factored in the "failures to tolerate" setting, each host would need to be configured to contain approximately 2.9TB of magnetic disk and approximately 50GB of flash. We advocate following the Virtual SAN best practice of having uniformly configured hosts. Each node in the VSPEX Blue appliance has 400GB of flash and 3.6TB of capacity on 10K SAS drives, which will easily support the storage requirements of 100 VMs with the given profile.
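
And the per-host storage distribution check (illustrative Python, figures from this post):

```python
# Per-host storage sanity check (illustrative; figures from this post).
capacity_required_tb = 11.55      # 8.8 TB + 30% slack + ~1% on-disk format overhead
flash_required_gb = 200
nodes = 4

per_host_capacity_tb = capacity_required_tb / nodes   # ~2.9 TB
per_host_flash_gb = flash_required_gb / nodes         # 50 GB

# Each VSPEX Blue Performance node ships with 3.6 TB of 10K SAS capacity and 400 GB of flash.
print(round(per_host_capacity_tb, 2), per_host_flash_gb)            # 2.89 50.0
print(per_host_capacity_tb <= 3.6 and per_host_flash_gb <= 400)     # True
```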

Component Count

The next step is to check whether or not the component count of this configuration would exceed the 3,000 components per host maximum in Virtual SAN 5.5, or the 9,000 components per host maximum in Virtual SAN 6.0 (disk format v2). This 4-node Virtual SAN cluster supports running 100 virtual machines, each virtual machine containing a single VMDK. There is no snapshot requirement in this deployment.

This means that each virtual machine will have the following objects:

  • 1 x VM Home Namespace
  • 1 x VMDK
  •  1 x VM Swap
  • 0 x Snapshot deltas

This implies that there are 3 objects per VM. Now we need to work out how many components per object, considering that we are using a VM Storage Policy setting of Number of Host Failures to Tolerate = 1 (FTT). It should be noted that only the VM Home Namespace and the VMDK inherit the FTT setting from the policy; the VM Swap object ignores the policy setting but is still protected with FTT=1. Therefore, when we look at the number of components per object on each VM, we get the following:

  •  2 x VM Home Namespace + 1 witness
  •  2 x VMDK + 1 witness
  •  2 x VM Swap + 1 witness
  • 0 x Snapshot deltas

Now we have a total of 9 components per VM. If we plan to deploy 100 VMs, then we will have a maximum of 900 components. This is well within our limits of 3,000 components per host in Virtual SAN 5.5 and 9,000 per host in 6.0.
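
The component math can be sketched the same way (illustrative Python; objects per VM and FTT value as listed above):

```python
# Component-count sketch for this example (illustrative; objects per VM as listed above).
vms = 100
ftt = 1                                     # NumberOfFailuresToTolerate = 1

objects_per_vm = 3                          # VM home namespace, VMDK, VM swap
components_per_object = (ftt + 1) + 1       # 2 replicas + 1 witness = 3

components_per_vm = objects_per_vm * components_per_object    # 9
total_components = vms * components_per_vm                     # 900 across the cluster

print(total_components)                     # 900
print(total_components <= 3000)             # True - below the Virtual SAN 5.5 per-host limit
print(total_components <= 9000)             # True - below the Virtual SAN 6.0 per-host limit
```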

Conclusion: The EMC VSPEX Blue Performance appliance can run 100 general-purpose server workloads with FTT = 1 {failures to tolerate} without any performance impact. It also shows that you can start small with one appliance and grow linearly as your requirements grow.


I hope this blog was helpful.

vCenter Server Enhancements in vSphere 6


With the release of vSphere 6, there are a few significant changes in the vCenter Server architecture and the way it is deployed. As far as I can see, the deployment has been simplified compared to the previous versions.

There are two deployment models for vCenter Server:

  • Embedded
  • External

Embedded:

In the embedded configuration, vCenter Server and the Platform Services Controller are installed on the same physical or virtual machine.


The embedded vCenter Server configuration comes with its own advantages and disadvantages. Let me cover the advantages first.

  • The biggest advantage is that the connection between vCenter Server and the Platform Services Controller is not over the network; therefore vCenter Server is not prone to outages caused by connectivity and name resolution issues between vCenter Server and the Platform Services Controller
  • In case you are doing a Windows-based vCenter Server installation, you will need fewer Windows licenses
  • No need for a load balancer to distribute the load across Platform Services Controllers
  • You will have to manage fewer virtual machines or physical servers

Disadvantages:

  • There is a Platform Services Controller for each product, which might be more than required. This consumes more resources.
  • The model is not scalable and is suited for small-scale environments.

External:

In the external configuration, vCenter Server and the Platform Services Controller are installed on different physical or virtual machines.

Installing vCenter Server with an external Platform Services Controller has the following advantages:

  • Fewer resources are consumed by the combined services in the Platform Services Controllers, enabling a reduced footprint and reduced maintenance.
  • Your environment can consist of more vCenter Server instances.

Installing vCenter Server with an external Platform Services Controller has the following disadvantages:

  • The connection between vCenter Server and Platform Services Controller is over the network and is prone to connectivity and name resolution issues.
  • If you install vCenter Server on Windows virtual machines or physical servers, you need more Microsoft Windows licenses
  • You must manage more virtual machines or physical servers.

With the new release, the Platform Services Controller (PSC) is responsible for the following vCenter services:

  • VMware vCenter Single Sign-On
  • VMware Certificate Authority (CA)
  • License service
  • Lookup service
  • VMware Directory Services

vCenter Server takes care of the remainder of the services, which are:

  • vCenter Server
  • vSphere Web Client
  • Inventory Service
  • VMware vSphere Auto Deploy
  • VMware vSphere ESXi Dump Collector
  • vSphere Syslog Collector on Windows and vSphere Syslog Service for the VMware vCenter Server Appliance

We can also install multiple instances of the PSC for high availability. In this scenario, the Platform Services Controller replicates information such as licenses, roles and permissions, and tags with the other Platform Services Controllers. This allows for a single pane of glass across the environment with Enhanced Linked Mode.

Enhanced Linked Mode:

Linked Mode using Microsoft ADS/ADAM has been replaced with Enhanced Linked Mode. Platform Services Controllers now replicate all information required for Linked Mode.


  • Enhanced Linked mode is now enabled by default in an environment
  • vCenter Appliance now supported with Enhanced Linked mode
  • Mixing Windows and Appliance platforms supported

VMware Certificate Authority (CA)

  • VMware CA is a solution to the certificate complexity of previous releases, as it now acts as the root certificate authority for vSphere from which all certificates are generated
  • Allows for enhanced security as all certificates for components are signed and valid
  • Root certificate can be replaced with one from a corporate CA to integrate vSphere into an existing infrastructure

VMware Endpoint Certificate Store

  • Certificate store on each Platform Services Controller or vCenter host that stores all certificates for components on the server

Individual certificates no longer required for each component

  • In previous releases each component (vCenter Service, Inventory Service, and so on) required a unique certificate
  • In vSphere 6.0 all communication is directed through the Reverse Proxy Endpoint, therefore, only a single certificate per server is required

vCenter Server for Windows and the vCenter Server Appliance now support the same scalability numbers and features.

Virtual SAN 6.0 Hardware Requirements

Of late I have seen a number of people asking about the prerequisites for setting up a VSAN cluster from the hardware perspective. Although this information is available in the VSAN 6.0 Design and Sizing Guide, I thought of writing a short and crisp article.


Hardware:
– Minimum of 3 hosts in a cluster configuration
– All 3 hosts must contribute storage
– Recommended that hosts are configured with similar hardware
– Hosts: scales up to 64 nodes
– Disks: locally-attached disks
– Hybrid: magnetic disks and flash devices
– All-flash: flash devices only

– SAS/SATA/PCIe SSD {at least one per host}
– SAS/NL-SAS/SATA HDD {at least one per host}
– 1 Gb/10 Gb NIC
– SAS/SATA controllers (RAID controllers must work in “pass-through” or “RAID 0” mode)
– 4 GB to 8 GB USB or SD card {for ESXi boot}

Network:
– 1 Gb Ethernet or
– 10 Gb Ethernet (preferred) (required for all-flash)
– “Witness” component (only metadata) acts as tie-breaker during availability decisions

Any server which is on the VMware Compatibility Guide (VMware Compatibility Guide > Virtual SAN) can be used to set up a VSAN cluster.

A VSAN cluster can be set up in either a hybrid configuration or an all-flash configuration.

VSAN Hybrid configuration:

– In Virtual SAN hybrid, all read and write operations always go directly to the flash tier
– Flash-based devices serve two purposes in the Virtual SAN hybrid architecture (see the sketch after this list):
– Non-volatile write buffer (30%) {writes are acknowledged when they enter the prepare stage on the flash-based devices}
– Read cache (70%) {cache hits reduce read latency}
– Cache miss – retrieves data from the magnetic devices
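
As an illustration of that 70/30 split, here is a tiny Python sketch using the 400GB flash device found in each VSPEX Blue Performance node (figures from this post, for illustration only):

```python
# Hybrid flash split sketch (illustrative): 70% read cache / 30% write buffer,
# using the 400 GB flash device in each VSPEX Blue Performance node as an example.
flash_gb = 400
read_cache_gb = flash_gb * 0.70    # 280 GB read cache
write_buffer_gb = flash_gb * 0.30  # 120 GB write buffer
print(read_cache_gb, write_buffer_gb)   # 280.0 120.0
```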

VSAN All Flash configuration:

– In Virtual SAN all-flash, read and write operations always go directly to the flash devices
– Flash-based devices serve two purposes in Virtual SAN all-flash:
– Cache tier (write buffer) {high-endurance flash devices are recommended for the cache tier}
– Capacity tier {lower-endurance flash devices}

Magnetic Disks (HDD)

– SAS/NL-SAS/SATA HDDs supported
– 7200 RPM for capacity
– 10,000 RPM balance between capacity and performance
– 15,000 RPM for additional performance

– NL-SAS will provide higher HDD controller queue depth at the same drive rotational speed and a similar price point
– NL-SAS is recommended if choosing between SATA and NL-SAS

Storage Controllers:

– SAS/SATA storage controllers
– Pass-through or “RAID0” mode supported
– Performance using pass-through mode is controller dependent
– Check with your vendor for PCI-e device performance behind a RAID-controller
– Replacing devices for upgrade or failure purposes might require host downtime
– Support for hot-plug devices
– Storage controller queue depth matters
– Higher storage controller queue depth will increase performance
– Minimum queue support of 256
– Validate number of drives supported for each controller

Network:

1 Gb / 10 Gb supported for hybrid architecture
– 10 Gb shared with NetIOC for QoS is recommended for most environments
– If 1 Gb, dedicated links for Virtual SAN are recommended
Only 10 Gb supported for all-flash architecture
– 10 Gb shared with NIOC for QoS will support most environments
Jumbo frames will provide nominal performance increase
– Enable for greenfield deployments
– Enable in large deployments to reduce CPU overhead
Virtual SAN supports both VMware vSphere standard switch and VMware vSphere Distributed Switch™ products
– NetIOC requires VDS
Network bandwidth performance has more impact on host evacuation and rebuild times than on workload performance

Firewall Ports:

Virtual SAN Vendor Provider (VSANVP)
– Inbound and outbound – TCP 8080

Virtual SAN Clustering Service (CMMDS)
– Inbound and outbound – UDP 12345, 23451

Virtual SAN Transport (RDT)
– Inbound and outbound – TCP 2233

Hope this post was useful. More info here:

Click to access VSAN_Design_and_Sizing_Guide.pdf