Configure VMware vSAN 6.7 iSCSI Target for Windows Server Failover Cluster File Server & SQL

Introduction:

With the release of vCenter 6.7, vSphere 6.7, and vSAN 6.7, a long-awaited vSAN feature is finally here: SCSI-3 Persistent Reservations for iSCSI shared disks. This means that iSCSI target shared disks hosted on vSAN can now be presented to virtual machines running on the same vSAN cluster, with official support for Microsoft WSFC (Windows Server Failover Clusters).

Why is this major news?

From a technical perspective, we can now support highly available file servers and SQL Servers that require a Microsoft failover cluster, which in turn requires shared storage. DFS-R and AlwaysOn are not an option for many customers, mainly because of licensing concerns in the case of SQL and storage capacity requirements in the case of DFS-R, so supporting a failover cluster on the same vSAN cluster that hosts those VMs mitigates those concerns when transitioning to VMware HCI.

From a business perspective, Dell EMC VxRail and VMware vSAN ReadyNodes, both of which are powered by VMware vSAN for SDS functionality, do not have built-in file and block storage services in their HCI offering. Nutanix, on the other hand, has supported this for some time with Acropolis File Services, which gave it a clear advantage with SMB customers that cannot afford alternatives or are looking to fully transition to HCI without the need for separate SAN/NAS devices.

When Dell EMC releases the VxRail upgrade package for 6.7, which should be around the time VMware releases 6.7 Update 1 (90% of customers wait for Update 1 anyhow), block storage services will be officially available and supported on VxRail, putting that side of the Nutanix story to bed. File storage services, on the other hand, WILL be available, BUT with a minor difference: as a customer, you would have to build and manage those file servers yourself, as a file server failover cluster for CIFS, or use another technology for NFS.

The point here, and I have relayed this to Dell EMC recently, is that Nutanix simply automated the creation of those services and eased their management from Prism; the result and the backend setup are the same as if you had manually presented iSCSI disks, created an HA file server failover cluster, and managed it yourself. Creating iSCSI shared disks is very easy with the vCenter HTML5 client, so I want Dell EMC and VMware to just automate the file services part (via Orchestrator, and only give us CIFS, no need for NFS for now) and embed it into vSAN or VxRail Manager (some surprises coming soon) to put the management side of file services versus Nutanix to bed as well.

In this post, I am going to walk you through the deployment of a two-node highly available Microsoft file server failover cluster using vSAN iSCSI shared disks. I have already upgraded my lab to 6.7 using the post below, and I highly recommend you read it first to understand the impact of the upgrade as of now. Check out these links from VMware before using iSCSI shared disks:

Upgrade to VMware vCenter 6.7, vSphere 6.7, and VSAN 6.7

Introducing vSAN 6.7

Using SQL Server Failover Clustering on a vSphere 6.7 vSAN Datastore with vSAN iSCSI Target Service: Guidelines for supported configurations

Configuration in VM to set up iSCSI initiator and cluster shared disks for SQL Server or File Server

VMware created a script to configure the VMs that will be part of a failover cluster, so that the iSCSI shared disks get the right multipath settings. I am not going to use it, just for the sake of showing you the manual way, though some PowerShell won't hurt to verify that the settings are configured correctly.

Configuration:

1- I have two domain-joined Windows Server 2016 virtual machines, and I am going to run the SAME exact procedure on both to install failover clustering, the File Server role, and Multipath I/O for iSCSI, and to apply the recommended multipath settings from VMware. Do the following on all machines participating in the cluster:

# Install the File Server role
Install-WindowsFeature File-Services

# Install failover clustering with all sub-features and the management tools
Install-WindowsFeature -Name Failover-Clustering -IncludeAllSubFeature -IncludeManagementTools

# Check, install, and enable Multipath I/O (MPIO)
Get-WindowsOptionalFeature -Online -FeatureName MultiPathIO
Install-WindowsFeature -Name Multipath-IO
Enable-WindowsOptionalFeature -Online -FeatureName MultiPathIO -NoRestart

Restart VM:

# Let MSDSM claim iSCSI-attached devices for MPIO
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Apply the VMware-recommended MPIO timers for the vSAN iSCSI target
Set-MPIOSetting -CustomPathRecovery Enabled -NewPathRecoveryInterval 20 -NewRetryCount 60 -NewPDORemovePeriod 60 -NewPathVerificationPeriod 30 -NewDiskTimeout 60

Restart VM

After applying the commands above on all cluster VMs, MPIO is enabled for iSCSI devices and the VMware-recommended MPIO settings are applied. Now start the iSCSI Initiator on all VMs and note down the initiator name (IQN):
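
If you prefer PowerShell over the GUI for this step, here is a minimal verification sketch using the built-in MPIO, Storage, and iSCSI cmdlets (nothing here is vSAN-specific):

# Confirm the MPIO timers match the values set above
Get-MPIOSetting

# Confirm MSDSM is set to claim iSCSI-attached devices
Get-MSDSMAutomaticClaimSettings

# Make sure the Microsoft iSCSI service is running, then print this node's IQN
Set-Service MSiSCSI -StartupType Automatic
Start-Service MSiSCSI
(Get-InitiatorPort | Where-Object ConnectionType -eq 'iSCSI').NodeAddress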

2- Let's head over to the VMware side of things and configure the iSCSI target service for vSAN in the cluster settings (make sure your vSAN license is applied and covers iSCSI Target).

Dedicate a distributed network port group for iSCSI traffic and choose the VMkernel adapter accordingly (at the very least, it should be separated from vSAN traffic).

In my lab, the current owner of the vSAN iSCSI target service is esxi-1-01, and the service is running on vmk1.

Click on Add under vSAN iSCSI LUNs to create the shared disks that will be part of the cluster. It is not recommended to create more than 128 LUNs in one vSAN cluster. If you have multiple LUN requirements, it is also good practice (not mandatory) to create a separate target owned by a different ESXi server within the vSAN cluster and create the LUNs there, though that introduces management overhead.

For example, if you have a file server cluster and an SQL cluster, put each on a different iSCSI target, though all would live on the same vSAN cluster (and remain highly available with failover multipathing). I will create two disks here, one for Quorum and one for Data, to represent the storage for my file server failover cluster.

Head over to Initiator Groups to give the failover cluster VMs access to those two disks, based on the initiator names noted earlier.

This is it from the VMware side, "Easy, Easy, Easy". Now just head over to the network configuration on every vSAN cluster ESXi server and note down the IP of vmk1, or whichever vmk was assigned to the vSAN iSCSI target earlier:
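
If you have PowerCLI handy, you can pull those addresses in one shot instead of clicking through every host. A sketch, assuming the cluster is named vSAN-Cluster and the iSCSI VMkernel adapter is vmk1 on every host (adjust both to your environment):

# List the vmk1 IPv4 address of every host in the vSAN cluster
Connect-VIServer vcenter.example.com
Get-Cluster 'vSAN-Cluster' | Get-VMHost |
    Get-VMHostNetworkAdapter -VMKernel -Name vmk1 |
    Select-Object VMHost, Name, IP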

3- Time to correctly configure iSCSI and multipath on the cluster VMs to make sure the settings are supported and optimal. Do this procedure on all VMs that are part of the failover cluster:

Open iSCSI Initiator Properties, head over to the Discovery tab, and add the vmk1 IPs of all ESXi servers that are part of the vSAN cluster:
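
The discovery step can also be scripted with the iSCSI PowerShell module. A sketch, with hypothetical vmk1 IPs that you should replace with the ones noted above:

# Hypothetical vmk1 addresses of the vSAN cluster hosts
$portals = '192.168.10.11', '192.168.10.12', '192.168.10.13', '192.168.10.14'

# Register every portal with the initiator
foreach ($portal in $portals) {
    New-IscsiTargetPortal -TargetPortalAddress $portal
}

# The vSAN target should now show up as discovered but not connected
Get-IscsiTarget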

Now head back to the Targets tab and you should see the discovered target in an Inactive state. Click on Properties and add four sessions, each session pointing to one of the vmk1 IPs we just added:
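
Scripted, the same thing looks roughly like this, reusing the $portals list from the discovery step to create one persistent, MPIO-enabled session per portal:

# Assumes the vSAN target is the only one discovered
$target = Get-IscsiTarget

# One session per ESXi host portal, persisted across reboots
foreach ($portal in $portals) {
    Connect-IscsiTarget -NodeAddress $target.NodeAddress -TargetPortalAddress $portal -IsPersistent $true -IsMultipathEnabled $true
}

# One session per portal should now be listed
Get-IscsiSession | Select-Object TargetNodeAddress, InitiatorPortalAddress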

So we added a session for every ESXi server that is part of the vSAN cluster, to make sure that if the server currently hosting the iSCSI target fails, any of the remaining servers can take over the role, and all VMs in the failover cluster can reconnect to the new owner automatically. The active iSCSI connection will always go to the server hosting the iSCSI target service; the rest act as failover paths. Note that you can run multiple iSCSI target services owned by different servers and point each disk at its active host while putting the rest into failover mode, thereby utilizing all servers as active iSCSI targets, but this incurs additional management overhead.

In order to support the iSCSI target HA introduced by vSAN, we need to change the MPIO policy for the presented iSCSI disks to Fail Over Only, since only one ESXi host owns the iSCSI target service at any given time. To do so, click on Devices, choose the disks one by one, click MPIO, and change the policy to Fail Over Only. This has to be done on all presented disks, just once:
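
The MPIO policy can also be set from PowerShell; note that the global default below only applies to devices claimed after it is set, so disks that are already present still need the per-disk change described above. A sketch, where FOO stands for Fail Over Only:

# Make Fail Over Only the default load-balance policy for MSDSM-claimed devices
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy FOO
Get-MSDSMGlobalDefaultLoadBalancePolicy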

Head over to Volumes and Devices and click on Auto Configure, which should map the iSCSI disks as volumes.

4- All of the above procedure should be conducted on ALL VMs that are part of the Microsoft Windows Server failover cluster. The configuration steps below are conducted from a single VM:

Open Disk Management, initialize the disks as MBR, bring them online, and format them with NTFS. Do not assign a drive letter to the Quorum disk; do assign one to the Data disk:
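
Here is the same preparation as a PowerShell sketch, assuming the Quorum and Data disks show up as disk numbers 1 and 2 and that F: is free; check the Get-Disk output before running:

# Identify the iSCSI disks presented by the vSAN target
Get-Disk | Where-Object BusType -eq 'iSCSI'

# Quorum disk: online, MBR, NTFS, no drive letter
Set-Disk -Number 1 -IsOffline $false
Initialize-Disk -Number 1 -PartitionStyle MBR
New-Partition -DiskNumber 1 -UseMaximumSize | Format-Volume -FileSystem NTFS -NewFileSystemLabel 'Quorum'

# Data disk: online, MBR, NTFS, with a drive letter
Set-Disk -Number 2 -IsOffline $false
Initialize-Disk -Number 2 -PartitionStyle MBR
New-Partition -DiskNumber 2 -UseMaximumSize -DriveLetter F | Format-Volume -FileSystem NTFS -NewFileSystemLabel 'Data'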

Open Failover Cluster Manager and run the Create Cluster wizard, then add a clustered file server role. All validation tests should be run, and the end result should be successful for the failover cluster to be supported by Microsoft:
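
The wizard work can be scripted with the FailoverClusters module too. A sketch with hypothetical node names, cluster name, and IPs (only the role name SF matches my lab, taken from the share path used later), run from one node:

# Validate first; the report should come back clean for a supported cluster
Test-Cluster -Node FS01, FS02

# Create the cluster; the small Quorum disk is normally picked up as the disk witness automatically
New-Cluster -Name FSCLUSTER -Node FS01, FS02 -StaticAddress 192.168.10.50

# Add the clustered file server role on the Data disk
Add-ClusterFileServerRole -Name SF -Storage 'Cluster Disk 2' -StaticAddress 192.168.10.51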

Let us try a quick owner node change and a simulated disk failure to make sure that both VMs can take ownership of the shared disks and the File Server role without any downtime:
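
The owner node change can be driven from PowerShell as well, assuming the role and node names from the sketch above:

# Move the file server role to the other node and confirm ownership
Move-ClusterGroup -Name SF -Node FS02
Get-ClusterGroup | Select-Object Name, OwnerNode, State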

Add a simple share to the clustered file server role to make sure reads and writes work fine. I will also run a test to make sure latency is not something we have to worry about:
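
The share can also be created from PowerShell on the current owner node. A sketch, assuming the Data disk is mounted as F: and the clustered role is named SF; -ScopeName ties the share to the clustered network name rather than to the local server, and the share name VSAN matches the UNC path used in the diskspd test below:

# Create the folder and publish it through the clustered file server role
New-Item -Path F:\VSAN -ItemType Directory
New-SmbShare -Name VSAN -Path F:\VSAN -ScopeName SF -FullAccess 'Everyone'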

A small performance test to confirm latency is not an issue. In simple terms, diskspd creates a 1 GB file and uses 2 threads to issue 64K random IO at 70% read / 30% write for 2 minutes while measuring IOPS, latency, and MB/s. Guess what!? Latency is sub-1 ms.

diskspd.exe -b64K -c1G -t2 -r -o2 -d120 -Sr -ft -L -w30 \\SF.diyarunited.com\VSAN\Test.Test

5- With the vCenter HTML5 client reaching full functionality, very cool performance monitoring has been added for vSAN, including views specifically for iSCSI targets.

6- So what happens if the ESXi server hosting the iSCSI target service goes down in a planned or unplanned event?

In both planned and unplanned downtime, the clustered file server was not affected, and the iSCSI paths on the clustered VMs failed over as expected with the iSCSI MPIO Fail Over Only policy, so that is good to know.

Conclusion:

The VMware side of things is as easy as it gets, and so is the Microsoft side, though a bit of manual work has to be put in just once while creating the cluster. If VMware and Dell EMC put the effort into automating this procedure from within vCenter, for both plain iSCSI block storage (the script is a step in the right direction and enough for block storage) and CIFS file storage, while giving us the ability to create and manage a file server cluster from within the vCenter vSAN view (create shares, assign permissions, ...), it would give vSAN a tremendous boost in the war against Nutanix.

Salam.

15 thoughts

  1. A very helpful post, thanks for sharing.
    Quick question: was there any temporary device loss when you performed the failover test and ESXi 03 took over the iSCSI target services?
    The Windows event indicates that the connection to the target was lost, but it came back online after reconnecting to the new target.

    1. Based on the multipath iSCSI settings recommended by VMware, it should fail over within 30 seconds. I did not see any service interruption while this was happening, noting that I didn't have continuous reads/writes on the share itself, but from an accessibility perspective the iSCSI target change was not noticeable.

  2. Hi,

    Failover cluster using the vSAN iSCSI target: when I try to configure a 3 GB Quorum iSCSI disk (device directly connected from GOS to the vSAN target), the disk is not listed.

    Is that configuration supported?

    vSphere 6.7, vSAN 6.7

    1. Hi Manoj, not quite sure what you mean by connected from GOS to vSAN target, can you elaborate a bit more? I have physical and virtual servers connected to the vSAN iSCSI target, both of which are Windows based.

  3. Is it possible to format the cluster shared disk with the new ReFS file format?

    1. I am not aware of any limitation when using NTFS or ReFS, but that needs to be tested first. Let me verify this and get back to you.

  4. To implement a Windows 2016 failover cluster (WSFC), is using iSCSI disks the only way now with vSAN 6.7? What about a shared vmdk using the multi-writer feature plus a shared SCSI controller bus?

    1. Yes, that is the only way. No, the multi-writer approach is not supported; I tried it extensively while waiting for the shared iSCSI service to be supported by VMware, and it caused lots of issues because there is no SCSI persistent reservation support.

  5. Great post, the topic has been covered in great detail. Many thanks for the efforts, my friend.
    Question: a customer wants to use a 40 TB volume for the file server; is the GPT format supported? The example above is 100 GB formatted as MBR.

    Regards

    Shetty

  6. Great post Saadallah, explained in great detail, thanks a ton.

    Question: I have a customer who wants to use a 40 TB LUN; is the GPT format supported, as opposed to the MBR mentioned in your post?

    regards

    Shetty

    1. Thank you Prakash, let me test this soon and get back to you. I believe it should be supported, but I am not sure about the size limits and performance stability.
