Exaforge

Cloud, DevOps, Evangelism

Shared VMFS Volumes on non-clustered hosts

There was an interesting post on the EMC Community Network "Everything VMware" forums yesterday about the nature of VMFS locking and how it is affected when multiple hosts access the same volumes without being part of a cluster together (or even while being members of different clusters).

The first really important piece to understand about this question is how VMFS implements locking.  Obviously the hardest part of any clustered filesystem is ensuring that disk writes (and to a lesser extent, reads) are done in a sane, coordinated and reliable fashion.  Generally this means that any given file can/should only be accessed by one host at a time (again, except for read requests).  There are a million different ways this can be done, and many filesystems / volume managers rely on network access and configuration files to achieve this.

VMFS is different - VMFS relies exclusively on on-disk locking semantics and SCSI protocols to achieve its needs.  I will leave the discussion of how reservations and locking work to others who have done great work describing those mechanisms.  You should read those posts - understanding VMFS at a low level will help you every day in your work.

So, this property of VMFS - its exclusive use of on-disk semantics for locks - means that hosts don't need to know about each other (via network or config files) in order to safely use a shared VMFS volume.  Everything about the locking lives on the disk, and if an ESX host can see the disk, it knows everything it needs to know.

So, the short answer is that it's perfectly functional to have multiple hosts and even multiple clusters accessing the same VMFS volume, even if those hosts are in different clusters, folders, or even datacenters.  It's even OK if none of them are managed by vCenter.  It's even OK if they are all running the free ESXi license.  From a technical perspective, as long as you are not exceeding the maximums (32 hosts per volume, etc.), you are in a reasonable (and even supported) configuration.
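If you want to sanity-check how many hosts are actually mounting each VMFS volume against that maximum, a quick inventory script is enough.  Here's a minimal sketch using pyVmomi against vCenter - the hostname and credentials are placeholders, so adapt the connection details to your environment:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details - replace with your own vCenter and credentials.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE  # lab-only shortcut; validate certs in production

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret",
                  sslContext=context)
try:
    content = si.RetrieveContent()
    ds_view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)

    for ds in ds_view.view:
        if ds.summary.type != "VMFS":
            continue
        # Each entry in ds.host is a DatastoreHostMount pointing at a HostSystem.
        mounted_hosts = [mount.key.name for mount in ds.host]
        flag = "  <-- over the 32-host maximum!" if len(mounted_hosts) > 32 else ""
        print(f"{ds.summary.name}: {len(mounted_hosts)} host(s){flag}")
finally:
    # The container view is cleaned up when the session ends.
    Disconnect(si)
```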

Now, the relevant question is not "can you?" but "should you?"  I would argue that you shouldn't, with three notable exceptions.

You should stick with a given set of VMFS volumes masked/accessible only to the cluster on which their VMs primarily run.  The reason for this is really around management.  Do you want to have to keep a spreadsheet of which volumes are primary for which cluster?  Do you want a new admin to accidentally put something on a non-preferred volume?  What if you want to isolate performance issues?  Sure, this method can strand a little bit of storage, but honestly, modern dedupe and thin provisioning make that less of an issue.
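To keep yourself honest on that rule, you can audit whether any VMFS volume is visible to hosts in more than one cluster.  Below is a rough sketch along the same lines as the script above (pyVmomi, placeholder connection details); it maps each VMFS datastore to the clusters whose hosts mount it and flags anything that crosses a cluster boundary:

```python
import ssl
from collections import defaultdict
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details - replace with your own vCenter and credentials.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret",
                  sslContext=context)
try:
    content = si.RetrieveContent()
    ds_view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)

    # Map each VMFS datastore to the compute resources (clusters or
    # standalone hosts) whose members mount it.
    visibility = defaultdict(set)
    for ds in ds_view.view:
        if ds.summary.type != "VMFS":
            continue
        for mount in ds.host:
            # A HostSystem's parent is its ClusterComputeResource (or a
            # plain ComputeResource for a standalone host).
            visibility[ds.summary.name].add(mount.key.parent.name)

    for name, clusters in sorted(visibility.items()):
        if len(clusters) > 1:
            print(f"{name} is visible to multiple clusters: {sorted(clusters)}")
finally:
    Disconnect(si)
```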

What are the exceptions (in my mind)?

  1. A volume containing only templates/ISOs (very little risk here, and you don't want to duplicate all that).  I think NFS also works very well here.
  2. A "swing" volume used just for moving virtual machines between clusters.
  3. Home / very small business clusters where you don't have any storage to spare and you aren't using advanced features like vMotion anyway.

So there are my thoughts.  Comments?