Part 3: Building A Large VMAX & NFS Environment

In Part 1 of this series, I gave an overview of this design and discussed its requirements.  In Part 2, I went into depth on the design questions and decisions for the gateway side of the solution.  In this, Part 3, I will do the same for the VMAX side.

Make / Model / Color

Model: The VMAX is available in a number of different models.  They all run the same operating system (Enginuity), but they differ in performance, scaling and feature levels.  For this particular environment, necessity dictated the model.  With a requirement of 400K front-end IO/s, we needed a system that could keep up.  In a RAID1 system, each host write becomes two disk writes, so 400K IO/s at the front end of the array corresponds to nearly 800K on the back end, or a total of about 1.2M IO/s across the array.  Only the largest VMAX 40K would be able to reach these numbers while also meeting the other requirements: sub-3ms response times and 0% performance degradation during a component failure.
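
To make that arithmetic explicit, here is a minimal sketch of the front-end-to-back-end conversion.  The 90% write ratio is an assumption for illustration (the workload is only described as write-heavy); the write penalties are the standard ones for each RAID type.

```python
# Rough back-end IOPS math for common RAID types.
# The 90% write mix is a hypothetical figure chosen to illustrate the
# write-heavy workload described above.

RAID_WRITE_PENALTY = {"RAID1": 2, "RAID5": 4, "RAID6": 6}  # disk IOs per host write

def backend_iops(frontend_iops, write_ratio, raid="RAID1"):
    reads = frontend_iops * (1 - write_ratio)   # each host read = 1 disk read
    writes = frontend_iops * write_ratio        # each host write fans out per RAID type
    return reads + writes * RAID_WRITE_PENALTY[raid]

fe = 400_000
be = backend_iops(fe, write_ratio=0.9, raid="RAID1")
print(f"front-end: {fe:,}  back-end: {be:,.0f}  total: {fe + be:,.0f}")
# -> back-end ~760K, total ~1.16M: in line with the "nearly 800K / about 1.2M" figures
```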

Options: We also elected to use the largest cache configuration available: 2TB.  While not cheap by any means, a bursty, write-heavy workload like this one can really benefit from the extra breathing room.  We also chose to go with a large, 8 engine (aka 16 director) system for maximum CPU horsepower.
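
As a back-of-envelope illustration of why the big cache matters for a bursty, write-heavy profile: assuming a hypothetical 8KB average write size and roughly half the cache available to buffer writes (both assumptions for illustration, not array specs), 2TB buys a few minutes of pure burst absorption before destage to the back end has to keep pace.

```python
# Back-of-envelope: how long can the cache absorb a write burst before
# destage to disk has to keep up?  The 8KB IO size and the usable-cache
# fraction are assumptions for illustration, not figures from the array spec.

cache_bytes  = 2 * 1024**4    # 2 TB of DRAM cache
usable_write = 0.5            # assume roughly half is available to buffer writes
io_size      = 8 * 1024       # assume 8 KB average write
write_iops   = 400_000        # worst case: the whole front-end load is writes

ingest_rate = write_iops * io_size                    # bytes/sec arriving
seconds = cache_bytes * usable_write / ingest_rate
print(f"~{ingest_rate / 1024**3:.1f} GB/s inbound, ~{seconds:.0f}s of burst headroom")
# -> roughly 3 GB/s inbound and about five minutes of headroom before the
#    back-end EFDs/spindles must sustain the destage rate themselves
```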

Color: The VMAX comes in any color you like, as long as it's black with blue light.

Disks & Protection

Obviously, we have to put some serious disk muscle behind this thing to deliver all that performance, and so we did.  The final disk configuration looks like this:

  • EFD: 400 GB eMLC.  Count: 352.   RAID: R5 3+1

    Obviously the fastest disk option is the EFDs.  In order to maximize available space, we chose the 400GB model, which, unlike the 200GB and 100GB models, uses MLC rather than SLC flash technology.  We rate the performance of these drives equal to that of the regular SLC drives, however.  We also rate them at a similar level of longevity to the SLC drives, due to the over-provisioning we do on them (i.e. they contain substantially more flash than the rated 400GB).

    We chose RAID5 3+1 protection for these drives.  Normally, for a write-intensive workload this would be a poor choice, but we anticipate that the raw speed and number of these drives, along with the substantial impact of the DRAM cache, will make this a non-issue.  Remember, again, that the goal here was not raw performance, but consistent performance.  So if we spend an extra 2ms (still sub-3ms) due to RAID5 overheads but gain consistency across a much larger set of data (thanks to RAID5's capacity advantage over RAID10), we've won.


  • FC: 600 GB 15K.  Count: 408.   RAID: R10

    Next, we needed a significant amount of space to land recently-used-but-currently-idle data on.  We needed north of 100TB to do this, so we ended up with 408 FC drives (see the capacity sketch after this list).  It also helps that a large 8-engine system requires a minimum of 320 drives, and those drives should be regular magnetic media (just to avoid using pricey EFDs for Vault space that is rarely used).

    We decided on RAID10 to maximize performance.

  • SATA: 2TB 7.2K.  Count: 124.  RAID: R10

    Last, we needed approx. 100TB to store very old datasets that were unlikely to ever be accessed again.  For these kinds of workloads, SATA is pretty clearly the best choice, so we went with it.

    Again, R10 to protect the data.  We decided against RAID6 due to the write performance penalty.
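
As a quick sanity check on the tier sizes above, here is a rough usable-capacity calculation from the drive counts and RAID schemes.  It ignores spares, vault drives, and formatting overhead, so the real usable numbers on the array will be somewhat lower.

```python
# Rough usable capacity per tier from the drive counts above.
# Ignores spares, vault drives, and formatting overhead.

def usable_tb(count, drive_tb, raid):
    if raid == "R5_3+1":
        return count * drive_tb * 3 / 4   # 1 parity drive per 4-drive group
    if raid == "R10":
        return count * drive_tb / 2       # mirrored
    raise ValueError(raid)

tiers = [
    ("EFD 400GB", 352, 0.4, "R5_3+1"),
    ("FC 600GB",  408, 0.6, "R10"),
    ("SATA 2TB",  124, 2.0, "R10"),
]

for name, count, size, raid in tiers:
    print(f"{name:10s} -> ~{usable_tb(count, size, raid):6.1f} TB usable")
# EFD -> ~105.6 TB, FC -> ~122.4 TB, SATA -> ~124.0 TB, which lines up with
# the "north of 100TB" FC target and the ~100TB SATA target
```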

Tiering

So with the disks out of the way, we need to talk about tiering.  No single tier of disks in this system can accommodate the full workload on its own, so we have to tier it.  Also, we want to: we don't want the admins for this environment manually moving hundreds of VMs around per day.

So, we'll run this entire thing with FAST VP (EMC's goofy marketing name for tiering technology on VMAX).

Of course, it goes without saying at this point that the entire environment will be wide-striped across all relevant disks (that's the VP, Virtual Provisioning, part).  With the latest Enginuity code (5876), VP has very little overhead and lots of advantages (wide striping, more granular tiering, etc.).
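
To make the idea concrete, here is a toy sketch of what sub-LUN tiering does conceptually: score extents by recent activity and fill the fastest tier first.  This is emphatically not FAST VP's actual algorithm, granularity, or policy model (that's what the product automates, and the real commands come in Part 4); it just illustrates the placement decision.

```python
# Toy illustration of sub-LUN tiering: rank extents by recent activity and
# fill the fastest tier first.  NOT FAST VP's actual algorithm or granularity;
# it only shows the placement idea the policy engine automates.

from dataclasses import dataclass

@dataclass
class Extent:
    id: int
    gb: float
    recent_iops: float   # activity score gathered over the analysis window

def place(extents, tier_budgets_gb):
    """tier_budgets_gb: ordered fastest-to-slowest, e.g. {'EFD': ..., 'FC': ..., 'SATA': ...}"""
    placement, remaining = {}, dict(tier_budgets_gb)
    for ext in sorted(extents, key=lambda e: e.recent_iops, reverse=True):
        for tier in tier_budgets_gb:               # try the fastest tier first
            if remaining[tier] >= ext.gb:
                placement[ext.id] = tier
                remaining[tier] -= ext.gb
                break
    return placement

extents = [Extent(1, 10, 5000), Extent(2, 10, 40), Extent(3, 10, 900), Extent(4, 10, 1)]
print(place(extents, {"EFD": 10, "FC": 20, "SATA": 100}))
# -> extent 1 lands on EFD, extents 3 and 2 on FC, extent 4 on SATA
```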

Next up, in Part 4, we'll discuss the actual commands and process required to make the above possible.