
Quick ScaleIO Tests

I managed to get my hands on the latest ScaleIO 1.2 beta bits this week, and wanted to share some of the testing results.  I've been pretty impressed.

I installed a cluster of 4 total nodes: 3 store data, and the fourth provides iSCSI services and acts as a 'tie breaker' in case the management cluster partitions.

Each of the 3 data nodes (ScaleIO calls these SDS nodes) was a single VM with 2 vCPUs and 1GB of allocated memory.  Very small, and I suspect I could knock them down to 1 vCPU given the CPU usage I saw during the tests.  Each one also had a single pRDM to the host's local disk (varying sizes, but all 7200 RPM SATA).  Building the cluster was fairly simple - I just used the OVA that ScaleIO provides (although a CentOS or SuSE VM works too) and ran their installer script.  The script asks for a bunch of information, then installs the requisite packages on the VMs and builds the cluster based on your answers.  Of course, this can all be done manually, but the handy script is nice.

Once it was installed, the cluster was up and running and ready to use.  I built a volume and exported it to a relevant client device (the one serving iSCSI).  From there, I decided to run some tests.

The basic IO patterns were the first ones I tried, and it did pretty well:

  1. 125 MB/s sustained read
  2. 45 MB/s sustained write
  3. 385 IO/s for a 50:50 R:W 8K workload (very database-like)
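For context, here's my own back-of-the-envelope arithmetic (not from the ScaleIO docs) on why the random 8K number looks so different from the sequential MB/s figures:

```python
# Back-of-envelope math for the results above (my assumptions: MB = 10^6 bytes,
# and the random workload spread evenly across the 3 data drives).
iops = 385             # measured 50:50 R:W 8K result
block_size = 8 * 1024  # 8 KiB per IO

# Aggregate bandwidth implied by the random 8K workload.
mixed_mb_s = iops * block_size / 1e6
print(f"8K mixed bandwidth: {mixed_mb_s:.1f} MB/s")  # ≈ 3.2 MB/s

# Per-drive share of the random IOPS (writes are mirrored, so the real
# per-drive load is actually a bit higher than this).
per_drive = iops / 3
print(f"IOPS per drive: {per_drive:.0f}")  # ≈ 128
```

Roughly 128 random IOPS per spindle is on the high side of what a single 7200 RPM SATA drive usually delivers, which is part of why these results impressed me.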

These are pretty great numbers for just 3 slow consumer-class drives.  Normally, we'd rate a set of 3 drives like this at about 60% of those numbers.  Check out the dashboard during the write test:

[Screenshot: ScaleIO dashboard during the write test, 2013-10-08]

After that basic test, I decided to get more creative.  I tried removing one of the nodes from the cluster (in a controlled manner) on the fly.  There was about 56GB of data on the cluster at that point, and the total time to remove?  6 min, 44 sec.  Not bad for shuffling around that much data.  I then added that system back (as a clean system), and the rebalance took only 9 min, 38 sec - averaging about 48 MB/s (about the peak a SATA drive can sustain).
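As a quick sanity check on that rebalance timing (my arithmetic, assuming decimal MB/GB):

```python
# Sanity-check the rebalance timing quoted above (assumed: decimal MB/GB).
rebalance_secs = 9 * 60 + 38  # 9 min, 38 sec
avg_mb_s = 48                 # quoted average rate

moved_gb = avg_mb_s * rebalance_secs / 1000
print(f"Data moved during rebalance: ~{moved_gb:.0f} GB")  # ~28 GB
# That's roughly half the 56 GB of user data on the cluster - the new node
# didn't have to receive the whole data set, just its share of it.
```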

The last set of tests I decided to run were some uncontrolled failure tests, where I simply hard powered off one of the SDS VMs to see how the system would react.  I was impressed that the cluster noticed the failure within about 5 seconds of the event and instantly began moving data around to reprotect it (peaking around 54 MB/s).  It took about 7 minutes to rebuild...not bad!  I've included a little screencast of that below.
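That ~5-second detection is in line with heartbeat-style failure detection, which is how clustered storage systems generally notice a dead node.  Here's a toy sketch of the general idea - this is NOT ScaleIO's actual protocol, and the names and 5-second timeout are purely illustrative:

```python
import time

# Toy heartbeat-timeout failure detector. A generic illustration only -
# not ScaleIO's actual mechanism; names and timeout are my assumptions.
HEARTBEAT_TIMEOUT = 5.0  # seconds without a heartbeat before declaring failure

class NodeMonitor:
    def __init__(self):
        self.last_seen = {}  # node name -> timestamp of last heartbeat

    def heartbeat(self, node, now=None):
        self.last_seen[node] = now if now is not None else time.monotonic()

    def failed_nodes(self, now=None):
        now = now if now is not None else time.monotonic()
        return [n for n, t in self.last_seen.items()
                if now - t > HEARTBEAT_TIMEOUT]

# Simulated timeline: sds3 stops sending heartbeats after t=10.
mon = NodeMonitor()
for node in ("sds1", "sds2", "sds3"):
    mon.heartbeat(node, now=10.0)
mon.heartbeat("sds1", now=14.0)
mon.heartbeat("sds2", now=14.0)
print(mon.failed_nodes(now=16.0))  # ['sds3'] - flagged ~6 s after its last beat
```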


I then powered that host back on to see what the rebalance procedure looks like (remember, it's not a rebuild anymore, because that data has already been reprotected - it's pretty much the same as adding a net-new host).  I have another screencast for that too.


All told, I'm pretty impressed.  Can't wait to get some heavier-duty hardware (Chad Sakac, are you listening?) to really push the limits.