Part 1: Building A Large VMAX & NFS Environment

I've recently been working with a customer to design and implement a pretty sizeable VMware environment that's well in excess of 256 hosts.  I did much of the storage design for this, with some notable help from fellow EMCer Joel Sprouse on the NFS-related pieces.  As I went through it, I thought it would be interesting to present the requirements and methods for building such an environment.

So, welcome to Part 1: The Requirements

  • Support at least 400K IO/s (small block, random, 30/70 R/W ratio) in a single system.
  • Support the above IO at less than 3ms latency.
  • Support NFS as the sole access method to the storage from the hosts.
  • Support N+1 failure scenarios with 0% performance loss.
  • Support the above performance assuming near-worst-case skew (in other words, don't assume the optimal 5%/70%/25% tiering mix where 5% of the space does 90% of the IO).
  • Proven 5-nines or better availability.
  • No "science project" storage (e.g. GA gear).

The EMC team looked at a number of our options when designing this.  Our initial idea, because the data on this system would be very dedupe-friendly, was to use XtremIO, our badass new deduping, all-flash box.  However, that broke the last two rules: it's not yet GA, and it hasn't proven 5 nines in the field (yet!).

Next, we looked at our old standby, the VNX7500 platform.  It could certainly meet the vast majority of the requirements, and meet them well.  But it missed on #1 - we can't do 400K IO/s in a single VNX7500; it just lacks the horsepower.

So, we set our sights on a third option - a VMAX 40K fronted with a VNX VG8 gateway.  This gives us all the requirements - plenty of IOPS, NFS access, and all the availability that the VMAX and VG8 are known for.  Eventually, we ended up with a configuration that looked like this:

VMAX

  • 40K - 8 Engines
  • 2 TB total cache
  • 128 x 8 Gb FC ports
  • 352 x 400 GB EFD (SSD) drives.  8 of those are spares.
  • 408 x 600 GB 15K FC drives.  8 of those are spares.
  • 124 x 2 TB 7.2K SATA drives.  4 of those are spares.
  • 100% using FAST and VP (Virtual Provisioning)
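
As a sanity check on that drive mix, here's a quick sketch using rule-of-thumb per-drive IO/s figures.  The per-drive numbers are my own assumptions for illustration, not EMC sizing guidance, and they ignore RAID and cache effects:

    # Sanity check on the VMAX drive mix above, with assumed per-drive IO/s.
    # Counts exclude spares; per-drive figures are rule-of-thumb assumptions.
    DRIVES = {
        "400 GB EFD":     (352 - 8, 2500),   # flash: thousands of IO/s each
        "600 GB 15K FC":  (408 - 8, 180),
        "2 TB 7.2K SATA": (124 - 4, 80),
    }

    total_iops = 0
    for tier, (count, iops_per_drive) in DRIVES.items():
        tier_iops = count * iops_per_drive
        total_iops += tier_iops
        print(f"{tier}: {count} drives x {iops_per_drive} IO/s = {tier_iops:,}")

    print(f"Aggregate raw back-end capability: ~{total_iops:,} IO/s")

The near-worst-case skew requirement is exactly why the FC and SATA tiers still matter here: you can't assume the flash tier soaks up almost all of the IO, so the slower tiers have to carry real load while the system stays under 3ms.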

VNX VG8

  • VG8 Model
  • 8 Datamovers
    • 2 x 10 GbE Ethernet frontend
    • 2 x 8 Gb FC backend
  • 1 Datamover reserved as warm spare (aka standby)
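
On the file-serving side, raw wire speed isn't the hard part.  A quick sketch, assuming an 8 KB average IO size (the requirement only says "small block", so that size is my assumption), shows the 10 GbE front end has plenty of headroom even with one datamover idle as the standby:

    # Front-end throughput check for the VG8, with the warm spare excluded.
    # The 8 KB average IO size is an assumption; the requirement only says
    # "small block".
    ACTIVE_DATAMOVERS = 8 - 1          # 1 of 8 datamovers reserved as standby
    LINKS_PER_DM = 2                   # 2 x 10 GbE per datamover
    LINK_GBITS = 10

    TARGET_IOPS = 400_000
    IO_SIZE_BYTES = 8 * 1024           # assumed 8 KB average IO

    required_gbits = TARGET_IOPS * IO_SIZE_BYTES * 8 / 1e9
    available_gbits = ACTIVE_DATAMOVERS * LINKS_PER_DM * LINK_GBITS

    print(f"Required:  ~{required_gbits:.1f} Gb/s for {TARGET_IOPS:,} x 8 KB IOs")
    print(f"Available: {available_gbits} Gb/s across {ACTIVE_DATAMOVERS} active datamovers")

Bandwidth clearly isn't the constraint; in practice it's the per-datamover NFS operation rate that drives the datamover count, and the warm spare satisfies the N+1 requirement without giving up any of that capacity, since it carries no load until a failover.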

In the next post in the series, I'll outline the initial configuration of the VMAX, including the 'BIN' file.