File Server migration strategy in Azure with zero downtime for end users and no ACL loss

Are you planning to move your on-premises file servers to Azure? If yes, this post can help you plan the steps required for a seamless move of your file shares to Azure. Before you plan the actual move, let's look at the most important factors to consider for a file server migration:

Does the new file share support security, authentication, and ACLs (Access Control Lists)?

As per our testing and multiple Microsoft articles, Azure File Share currently doesn't meet all of the above requirements. For example, Active Directory-based authentication and ACL support, one of the most important requirements, is not present in Azure File Share.

How are the end users accessing the data?

These days, most Windows-based enterprise file servers use DFSR as the file share technology. In Azure, if we mount the file share by using SMB, we don't have folder-level control over permissions; instead, we can use shared access signatures (SAS) to generate tokens that have specific file permissions and are valid for a specified time interval. That would be completely new to the users and a complete change from the way file shares are implemented in your current on-premises environment.
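For illustration, here is a minimal sketch of how such a SAS token can be generated with the Az PowerShell module; the storage account name, key, and share name below are placeholders, not values from our environment:

# A minimal sketch, assuming the Az.Storage module; all names below are placeholders
$ctx = New-AzStorageContext -StorageAccountName "whyazurestorage" -StorageAccountKey "<account-key>"
# Generate a read/list SAS token for the share, valid for 8 hours
New-AzStorageShareSASToken -ShareName "abnu-fs-a" -Permission "rl" -ExpiryTime (Get-Date).AddHours(8) -Context $ctx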

How many users/clients can access the file simultaneously?

The current limit in Azure File Share is 2,000 concurrent open handles on a single file.

What is the maximum size of the File Share?

Currently, Azure File Share supports a maximum of 5 TiB of data per share. In the future it may support up to 100 TiB.

Sample Use Case:

Let's consider a very common use case, the one we used for this article: a large enterprise with multiple locations around the globe and more than 100 file servers currently in use. The individual file servers are not very big, but the total data size is around 40 TB. In this use case we consolidated the data onto 12 Azure VMs in different Azure regions instead of the 100 on-premises servers, and we achieved this with the help of DFSR.

Steps we have followed:

To achieve this, we followed the steps below.

Fig: Migration steps to move on-premises Windows-based file servers to Azure IaaS

Why DFSR is still the best option: it copies files with the same set of permissions (the same file hash), and it keeps replicating files with the latest changes.

DFSR components: DFS Namespace – this is used to publish the data to end users. Users access the virtual namespace and are directed to the files and folders on a DFSR server.
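As a sketch, this is roughly how the namespace side can be set up with the DFSN PowerShell cmdlets; the namespace and server names below are illustrative:

# A minimal sketch, assuming the DFSN module (DFS Management Tools); names are illustrative
New-DfsnRoot -Path "\\whyazure.in\Files" -TargetPath "\\WAI-FS01\Files" -Type DomainV2
# Publish a folder in the namespace that points to the share on a DFSR server
New-DfsnFolder -Path "\\whyazure.in\Files\ABNU-FS-A" -TargetPath "\\WAI-FS01\ABNU-FS-A"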

DFS Replication: This is used to replicate the data between the servers. We can control the schedule and bandwidth of DFSR replication, and we can also mark servers as read-only; this forces the read-only attribute on that server so that no one can make changes on it. DFSR replication works with replication groups: in a replication group we define the folders to be replicated between two or more servers. The topology can be a full mesh, or we can control it (for example, hub and spoke) via connections. DFSR creates some hidden folders under the replicated folders and stores internal data there before processing; we should not add or remove content in these folders manually.
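For illustration, a replication group like the ones we used can be created with the DFSR PowerShell cmdlets; this is a minimal sketch with placeholder group, server, and path names:

# A minimal sketch, assuming the DFSR module; group, server, and path names are placeholders
New-DfsReplicationGroup -GroupName "RG-ABNU" |
    New-DfsReplicatedFolder -FolderName "ABNU-FS-A" |
    Add-DfsrMember -ComputerName "WAI-FS01","AZ-FS01"
# Connect the two members (hub-and-spoke topologies simply add more connections)
Add-DfsrConnection -GroupName "RG-ABNU" -SourceComputerName "WAI-FS01" -DestinationComputerName "AZ-FS01"
# Point each member at its local content path; the on-premises server holds the authoritative copy
Set-DfsrMembership -GroupName "RG-ABNU" -FolderName "ABNU-FS-A" -ComputerName "WAI-FS01" -ContentPath "J:\DFSR\ABNU-FS-A" -PrimaryMember $true
Set-DfsrMembership -GroupName "RG-ABNU" -FolderName "ABNU-FS-A" -ComputerName "AZ-FS01" -ContentPath "E:\ABNU-FS-A"
# Set-DfsrMembership also accepts -ReadOnly $true to enforce the read-only behaviour described above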

Comparison test between RoboCopy and AzCopy

The question came to our mind: should we use Robocopy or AzCopy to stage the data? To test the speed, we ran the following comparison.
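For reference, the elapsed time of each run can be captured with PowerShell's Measure-Command; a minimal sketch, assuming a hypothetical 1 GB test folder:

# A minimal sketch for timing a copy run; the test paths are hypothetical
Measure-Command { robocopy "D:\TestData" "E:\TestData" /e /copyall /MT:16 } |
    Select-Object Minutes, Seconds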

Here is the test result:

Tool       Size (GB)   Time (Min.)   Time (Sec.)   ACL (Permissions)
RoboCopy   1           17            19            Intact
AzCopy     1           2             8             Lost

It's very clear that you can't use AzCopy, since the ACLs (permissions) are lost. (Probably that is the reason why DoubleTake uses Robocopy internally in their application.)

We used Robocopy to copy the data from one server to the other to reduce the time needed for DFSR replication. You can read this small article to understand how much faster it is to pre-seed the data with Robocopy rather than letting DFSR replicate all of it.

The example command we used to pre-populate the data is:

robocopy.exe "\\WAI-FS01.whyazure.in\j$\DFSR\ABNU-FS-A" "E:\ABNU-FS-A" /e /b /copyall /r:6 /w:5 /MT:64 /xd DfsrPrivate /tee /log:E:\RobocopyLogs\servername.log

The above command copies the folder ABNU-FS-A from the remote server to the local E: drive of the server where we run the command.
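For readers unfamiliar with Robocopy, the switches in the command do the following (standard Robocopy options):

# /e               copy subdirectories, including empty ones
# /b               backup mode, so NTFS permissions cannot block the copy (needs backup rights)
# /copyall         copy all file info: data, attributes, timestamps, NTFS ACLs, owner, auditing
# /r:6 /w:5        retry a failed file 6 times, waiting 5 seconds between retries
# /MT:64           use 64 copy threads (the default is 8)
# /xd DfsrPrivate  exclude the hidden DfsrPrivate working folder
# /tee             show output on the console while also writing it to the log file
# /log:<file>      write the log to the given file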

/MT:64 sets the thread count (the default is 8); even with /MT:16 we could copy 200 MB in a few seconds. However, as we faced some issues with the network, we now usually run 16 threads to make sure Robocopy will not hang.

Once we have copied the data with Robocopy, we check the file hashes. Examples are below.

To check the file hashes on the remote source server:

Get-DfsrFileHash \\WAI-FS01.whyazure.in\j$\DFSR\ABNU-FS-A\* – this checks the file hashes for everything under ABNU-FS-A.

Get-DfsrFileHash E:\ABNU-FS-A\* – and this checks the same on the local destination server.
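To automate that check, the two hash lists can be compared directly; a minimal sketch, assuming the default Path and FileHash output properties of Get-DfsrFileHash:

# A minimal sketch: flag any file whose hash differs between source and destination
$src = Get-DfsrFileHash "\\WAI-FS01.whyazure.in\j$\DFSR\ABNU-FS-A\*"
$dst = Get-DfsrFileHash "E:\ABNU-FS-A\*"
# No output means the pre-seeded copies match and DFSR will not re-replicate them
Compare-Object -ReferenceObject $src -DifferenceObject $dst -Property FileHash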

Note: we need the DFSR PowerShell module (installed with the DFS Management Tools) to run the above commands. Once this is done, we add the E: drive folder to the replication group and let it sync with DFSR. As we have already copied the data and the file hashes match, it will take just a few hours for GBs of data. That's all.

Now, people may wonder why we have not used the new Azure File Sync, which is the buzzword nowadays for file shares.

Although we have not used Azure File Sync, let's discuss a few things about it.

What is Azure File Sync?

With Azure File Sync, shares can be replicated to Windows Servers on-premises or in Azure. The users would access the file share through the Windows Server, such as through an SMB or NFS share. This is useful for scenarios in which data will be accessed and modified far away from an Azure datacenter, such as in a branch office scenario. Data may be replicated between multiple Windows Server endpoints, such as between multiple branch offices.
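For completeness, Azure File Sync is wired up through a sync group with one cloud endpoint (the Azure file share) and one or more server endpoints. The sketch below uses the Az.StorageSync PowerShell module; every resource name in it is a placeholder, and the Windows Server is assumed to be already registered:

# A rough sketch, assuming the Az.StorageSync module and an already registered server; all names are placeholders
$svc = @{ ResourceGroupName = "rg-files"; StorageSyncServiceName = "sync-svc" }
$storageAccount = Get-AzStorageAccount -ResourceGroupName "rg-files" -Name "whyazurestorage"
$server = Get-AzStorageSyncServer @svc          # the Windows Server registered earlier
New-AzStorageSyncGroup @svc -Name "sg-abnu"
# The cloud endpoint ties the sync group to an Azure file share
New-AzStorageSyncCloudEndpoint @svc -SyncGroupName "sg-abnu" -Name "cloud-ep" -StorageAccountResourceId $storageAccount.Id -AzureFileShareName "abnu-fs-a"
# The server endpoint ties it to a local path on the registered server
New-AzStorageSyncServerEndpoint @svc -SyncGroupName "sg-abnu" -Name "server-ep" -ServerResourceId $server.ResourceId -ServerLocalPath "E:\ABNU-FS-A"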

Why is this not the right fit for the work we are doing?

The main use case for Azure File Sync is when you have multiple branch offices with very slow network connections: on-premises Windows, Linux, and macOS clients can mount a local on-premises Windows file share that acts as a fast cache of the Azure file share. Since we have very good bandwidth to Azure from all the branches, with Site-to-Site connectivity, Azure File Sync doesn't fit here.

The data transfer methods available for pre-staging the files are as follows:

  • Azure Import/Export
  • RoboCopy
  • AzCopy
  • Azure File Sync

Conclusion:

There are multiple options to transfer file server data from on-premises to Azure for staging, but if you want a very smooth migration where end users will not see any downtime, this is the best approach. Note that the ACLs and matching file hashes are preserved only by Robocopy and Azure File Sync. An Azure File Share could be created without the need to manage hardware or an OS, instead of building the Azure IaaS VMs as we did here, but that is not a possible option for this use case, because we need to preserve the ACLs and unfortunately that is still not supported by Azure File Share at the moment.

Thanks to Archit Bahuguna for giving his input while doing this exercise.
