XtremIO and the Hands-on Labs: By The Numbers

One of my most popular posts last year covered the storage infrastructure behind the VMworld HoL, so this year I'd like to go into even more depth, with more data.

This year, roughly 80% of the HoL were powered by Cisco UCS servers and EMC XtremIO storage. Now, that number varies depending on how you measure (do you count labs provisioned? labs used? number of servers connected?), but however you slice it, it's somewhere in the ~80% range. The remainder were serviced by technical-preview EVO:RACK systems.

By The Numbers

  • Datacenters: 3 (Santa Clara, CA (us20), Wenatchee, WA (us03), Amsterdam)
  • Total Hosts: 318
  • Total XtremIO Arrays: 7 (5 active)
  • Total Virtual Machines Deployed (entire week): > 103,000
  • Maximum Concurrent Labs Deployed: > 425
  • Maximum VMs Deployed: 4,100

  • Wenatchee, WA (us03): 172 hosts, 332 sockets, 45TB memory
  • Santa Clara, CA (us20): 106 hosts, 210 sockets, 35TB memory
  • Amsterdam: 40 hosts, 80 sockets, 20TB memory

Storage Connectivity

This year, for the first time, the storage subsystem was connected entirely via iSCSI rather than Fibre Channel. Combine this with the iSCSI connectivity used by the vCloud Air team, and I think it's pretty clear that iSCSI can handle even the most demanding workloads.

Storage Design

The OneCloud team (the team that runs the HoL infrastructure) worked closely with the EMC team, and we arrived at a final design of 7 LUNs of 4TB each assigned to each 10-host cluster. Beyond configuring the iSCSI environment, there's very little to actually decide on an XtremIO system...you choose the LUN size and that's about it. The content-addressing architecture consistently spreads the workload across all the SSDs in the system, and the XDP protection technology keeps everything safe.

Sure makes a storage admin's life boringly easy when there are so few things that need to be configured to get optimal performance.
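
For a rough sense of what that design adds up to, here's a quick back-of-the-envelope sketch. The host counts come from the numbers above; assuming every host lands in a full 10-host cluster is a simplification on my part.

```python
# Back-of-the-envelope capacity implied by "7 x 4TB LUNs per 10-host cluster".
# Assumes every host sits in a full 10-host cluster, which is a simplification.
LUNS_PER_CLUSTER = 7
LUN_TB = 4
HOSTS_PER_CLUSTER = 10

hosts_per_dc = {"Wenatchee (us03)": 172, "Santa Clara (us20)": 106, "Amsterdam": 40}

for dc, hosts in sorted(hosts_per_dc.items()):
    clusters = hosts // HOSTS_PER_CLUSTER              # ignore any partial cluster
    tb = clusters * LUNS_PER_CLUSTER * LUN_TB
    print("%-20s ~%2d clusters -> ~%3d TB provisioned" % (dc, clusters, tb))
```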

Storage Performance

This year, I was able to grab a significant amount of performance data from the environment to analyze, and I found it fascinating. Some other time I'll post the Python code I wrote to actually pull this information out of vCenter Operations.
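
In the meantime, here's a rough sketch of the general shape of that pipeline; it is not the code I'll eventually post. The vC Ops host, endpoint, and metric key below are placeholders (the real API details vary by version), and the response format is assumed to be simple timestamp/value CSV. The pygal half is roughly how the charts below were rendered.

```python
# Rough sketch of the collection/plotting pipeline -- placeholder details only.
import csv
import requests
import pygal

VCOPS = "https://vcops.example.local"    # placeholder vC Ops host
RESOURCE = "us03-xms1"                   # the array-backed datastore resource
METRIC = "datastore|iops"                # placeholder metric key

def fetch_samples(resource, metric):
    """Pull (timestamp, value) samples for one metric (placeholder API)."""
    resp = requests.get(
        "%s/metrics" % VCOPS,            # placeholder endpoint
        params={"resource": resource, "metric": metric},
        auth=("admin", "password"),
        verify=False,
    )
    resp.raise_for_status()
    # Assume the body is CSV rows of "timestamp,value"
    rows = csv.reader(resp.text.splitlines())
    return [(ts, float(val)) for ts, val in rows]

def plot(samples, title, filename):
    """Render a simple line chart with pygal."""
    chart = pygal.Line(show_dots=False)
    chart.title = title
    chart.add("IO/s", [val for _, val in samples])
    chart.render_to_file(filename)

if __name__ == "__main__":
    plot(fetch_samples(RESOURCE, METRIC), "us03-xms1 IOPs", "us03-xms1-iops.svg")
```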

Here's an example:

us03-xms1 IOPs

We can see here that this cluster (in us03, or Wenatchee) generally pushed around 20K IO/s (this is one of the 3 active X-Bricks in that datacenter). In other words, given that these bricks are easily capable of 150K+ IO/s, this array never even got pushed to half its capability. During the show and the tours that the OneCloud team gave, they repeatedly stated that they expect to halve the infrastructure next year. After 2 years of 100% storage availability on the part of the XtremIO systems, they're comfortable increasing the size of the failure domain.
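
If you want to put a number on that headroom, it's only a couple of lines of Python. `iops_samples` here is just the list of per-interval IO/s values pulled from vC Ops as above (a hypothetical name, not anything vC Ops hands you directly).

```python
def headroom_report(iops_samples, rated_iops=150000):
    """Print peak and 95th-percentile IO/s against the array's rated capability."""
    ordered = sorted(iops_samples)
    peak = ordered[-1]
    p95 = ordered[int(0.95 * (len(ordered) - 1))]
    print("peak:        %8.0f IO/s (%.1f%% of rated)" % (peak, 100.0 * peak / rated_iops))
    print("95th pctile: %8.0f IO/s (%.1f%% of rated)" % (p95, 100.0 * p95 / rated_iops))
```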

To go with that IO/s metric, we also have the throughput metric:

us03-xms1 throughput

Now, this one is measured in KB/s, so it's a little harder to read. However, if you look at the y-axis labels on the left, you'll see they're in the hundreds of thousands; in other words, 400k on that graph represents about 400MB/sec. So the array in question sustained just shy of 400MB/sec of throughput, with brief spikes up into the gigabyte-per-second range. Again, not even close to making this array breathe hard.
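
If you'd rather read the axis directly in MB/s, it's a one-line rescale before handing the series to pygal, assuming the raw samples really are in KB/s as vC Ops labels them:

```python
# Convert raw KB/s samples to MB/s so 400,000 on the y-axis reads as 400.
throughput_kbps = [398220.0, 412345.0, 1203400.0]   # hypothetical raw samples
throughput_mbps = [v / 1000.0 for v in throughput_kbps]
```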

The last image I'd like to share is the latency measurements for all of this:

us03-xms1 latency

Again, you can see that with the exception of a couple of brief spikes up to 3-4ms, the system stayed below 500µs (half a millisecond!) for end-to-end IO (and that includes time spent on the network!) for the entire show. Pretty impressive.

At first I was a bit confused, because 3-4ms (even very briefly) would be well outside what we expect from an XtremIO. I realized after a few moments, however, that this latency is measured at the *datastore* level, meaning it includes any latency introduced in the host iSCSI stack and both directions of the network (in and out).

When I analyzed those brief latency spikes (4 measurements out of the 506,219 collected for that array, or roughly 0.0008%), I found significantly higher network latency at the same timestamps, meaning the network was the most likely cause of the latency. This is fascinating, and something that only vCenter Operations, with its collection at scale, could provide insight into.
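
The spike hunt itself was nothing fancy; here's the gist of it. The inputs are hypothetical, just dicts mapping timestamps to milliseconds built from the exported datastore and network latency metrics.

```python
def correlate_spikes(datastore_latency, network_latency, threshold_ms=3.0):
    """Find datastore-latency samples above a threshold and show the network
    latency recorded at the same timestamps."""
    spikes = {ts: ms for ts, ms in datastore_latency.items() if ms >= threshold_ms}
    for ts in sorted(spikes):
        print("%s  datastore=%5.2fms  network=%5.2fms"
              % (ts, spikes[ts], network_latency.get(ts, float("nan"))))
    return spikes
```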

Every other cluster showed these same patterns of moderate usage and ultra low latency.

I'd love to hear your comments, and to take requests for specific data or analysis; I've got a ton of data and have only scratched the surface.

Finally, I'd like to thank Patrick Noia (OneCloud), Josh Schnee (Performance Engineering), and Pablo Roesch and Joey Dieckhans (Tech Marketing) for all their help in getting this data!

Today's post was brought to you by Python 2.7.6, vCenter Operations, and pygal.