Using FlexCache volumes to accelerate access to Windows shares

Starting with the ONTAP 9.8 release, NetApp added support for the Windows SMB protocol to FlexCache technology. At last!

In this blog post, I will create a source volume to act as the origin and a FlexCache volume on a remote cluster. In the lab example, I will also validate the benefit of being able to natively extend a central CIFS share to a remote site.

I used the NetApp documentation as a reference to define what a FlexCache volume is and what it is used for.

A FlexCache volume is a sparsely populated volume that is backed by an origin volume. The FlexCache volume can be on the same cluster as or on a different cluster than that of the origin volume. The FlexCache volume provides access to data in the origin volume without requiring that all of the data be in the FlexCache volume. Starting in ONTAP 9.8, a FlexCache volume also supports SMB protocol.

NetApp Documentation Portal

To begin with, I used as a reference the following diagram, which shows an Active Directory domain with two sites named Gurabo and Ponce. Both sites have an ONTAP cluster running version 9.8P4. FlexCache requires intercluster LIFs on both clusters; a sample creation command is shown after the note below.

Note: The ONTAP simulator was used for this lab.
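
For reference, the intercluster LIFs on each cluster can be created with commands along these lines. This is only a sketch: the LIF name, port, and IP address below are placeholders, since the intercluster network setup of this lab is not shown here.

OnPrem-EDGE::> network interface create -vserver OnPrem-EDGE -lif IC_01 -service-policy default-intercluster -home-node OnPrem-EDGE-01 -home-port e0d -address 10.10.34.20 -netmask-length 24

OnPrem-EDGE::> network interface show -service-policy default-intercluster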

The configuration I performed on the remote vserver NAS-EDGE is documented below in case you are interested in seeing how to create an SVM from scratch. To access it, just click on the “+” icon.

Prerequisites – vserver and network setup

Step I: Add the SVM NAS-EDGE to the remote cluster.

OnPrem-EDGE::> vserver create -vserver NAS-EDGE -rootvolume NAS_EDGE_root -aggregate OnPrem_DR_01_VM_DISK_1 
[Job 577] Job succeeded: Success                                               
Vserver creation completed.

OnPrem-EDGE::> 

Reference: vserver create

Step II: Add the logical network interfaces (LIF).

OnPrem-EDGE::> network interface create -vserver NAS-EDGE -lif NAS_EDGE_01 -address 10.10.33.20 -netmask-length 24 -home-node OnPrem-EDGE-01 -home-port e0c -service-policy default-data-files

OnPrem-EDGE::> network interface create -vserver NAS-EDGE -lif NAS_EDGE_02 -address 10.10.33.21 -netmask-length 24 -home-node OnPrem-EDGE-02 -home-port e0c -service-policy default-data-files

OnPrem-EDGE::> network interface show -curr-port e0c -vserver NAS-EDGE 
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
NAS-EDGE
            NAS_EDGE_01  up/up    10.10.33.20/24     OnPrem-EDGE-01 e0c     true
            NAS_EDGE_02  up/up    10.10.33.21/24     OnPrem-EDGE-02 e0c     true
2 entries were displayed.

OnPrem-EDGE::> 

Reference: network interface create

Step III: Create the network route.

OnPrem-EDGE::> network route create -vserver NAS-EDGE -destination 0.0.0.0/0 -gateway 10.10.33.254

OnPrem-EDGE::> network route show -vserver NAS-EDGE
Vserver             Destination     Gateway         Metric
------------------- --------------- --------------- ------
NAS-EDGE
                    0.0.0.0/0       10.10.33.254    20

OnPrem-EDGE::> 

Reference: network route create

Step IV: Add the DNS parameters to the SVM.

OnPrem-EDGE::> vserver services dns create -domains zenprsolutions.local -name-servers 192.168.5.1 -vserver NAS-EDGE 

Warning: Only one DNS server is configured. Configure more than one DNS server
         to avoid a single-point-of-failure.

OnPrem-EDGE::> vserver services dns show -vserver NAS-EDGE 

                        Vserver: NAS-EDGE
                        Domains: zenprsolutions.local
                   Name Servers: 192.168.5.1
                 Timeout (secs): 2
               Maximum Attempts: 1

OnPrem-EDGE::> 

Reference: vserver services dns create

Step V: Configure the CIFS protocol and join the vserver to the local Active Directory domain.

OnPrem-EDGE::> vserver cifs create -vserver NAS-EDGE -domain zenprsolutions.local -cifs-server NAS-EDGE              

In order to create an Active Directory machine account for the CIFS server, you
must supply the name and password of a Windows account with sufficient
privileges to add computers to the "CN=Computers" container within the
"ZENPRSOLUTIONS.LOCAL" domain. 

Enter the user name: administrator

Enter the password: xxxxxxxxxxxx

Notice: SMB1 protocol version is obsolete and considered insecure. Therefore it
is deprecated and disabled on this CIFS server. Support for SMB1 might be
removed in a future release. If required, use the (privilege: advanced)
"vserver cifs options modify -vserver NAS-EDGE -smb1-enabled true" to enable
it.

OnPrem-EDGE::> vserver cifs show                                                                
            Server          Status    Domain/Workgroup Authentication
Vserver     Name            Admin     Name             Style
----------- --------------- --------- ---------------- --------------
NAS-EDGE    NAS-EDGE        up        ZENPRSOLUTIONS          domain
2 entries were displayed.

OnPrem-EDGE::>

Reference: vserver cifs create

Step VI: Validate that the SVM computer object was created in Active Directory (PowerShell).

PS C:\Users\Administrator> Get-ADComputer -Identity NAS-EDGE

DistinguishedName : CN=NAS-EDGE,CN=Computers,DC=zenprsolutions,DC=local
DNSHostName       : NAS-EDGE.zenprsolutions.local
Enabled           : True
Name              : NAS-EDGE
ObjectClass       : computer
ObjectGUID        : 3cfec085-1417-4bac-bff7-d734e4e30049
SamAccountName    : NAS-EDGE$
SID               : S-1-5-21-2867495315-1194516362-180967319-2665
UserPrincipalName : 

PS C:\Users\Administrator> 

Step VII: Validate connectivity and name resolution (PowerShell).

PS C:\Users\Administrator> ping NAS-EDGE.zenprsolutions.local
Ping request could not find host NAS-EDGE.zenprsolutions.local. Please check the name and try again.

PS C:\Users\Administrator> Add-DnsServerResourceRecordA -Name NAS-EDGE -IPv4Address 10.10.33.20 -CreatePtr -ZoneName zenprsolutions.local

PS C:\Users\Administrator> Add-DnsServerResourceRecordA -Name NAS-EDGE -IPv4Address 10.10.33.21 -CreatePtr -ZoneName zenprsolutions.local

PS C:\Users\Administrator> 
PS C:\Users\Administrator> nslookup NAS-EDGE.zenprsolutions.local
	primary name server = 192.168.5.1
	responsible mail addr = (root)
	serial  = 0
	refresh = 28800 (8 hours)
	retry   = 7200 (2 hours)
	expire  = 604800 (7 days)
	default TTL = 86400 (1 day)
Server:  SERVER-DC-01V.zenprsolutions.local
Address:  192.168.5.1

Name:    NAS-EDGE.zenprsolutions.local
Addresses: 10.10.33.20
	   10.10.33.21


PS C:\Users\Administrator> 

To start the lab, it is necessary to create a peer relationship between the local and remote vservers. To achieve this, I used the <vserver peer create> command, specifying “flexcache” in the “-applications” option.

Reference: vserver peer create.

Note: A cluster-level peer relationship was previously established with the <cluster peer create> command.
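
For completeness, that cluster-level peering is established with commands along these lines, run on each cluster and pointing at the peer's intercluster LIF addresses (ONTAP prompts for a shared passphrase during this step). The IP addresses here are placeholders, so treat this as a sketch rather than the exact commands used in the lab:

OnPrem-HQ::> cluster peer create -address-family ipv4 -peer-addrs 10.10.34.20,10.10.34.21

OnPrem-EDGE::> cluster peer create -address-family ipv4 -peer-addrs 10.10.32.20,10.10.32.21

OnPrem-EDGE::> cluster peer show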

OnPrem-HQ::> vserver peer create -vserver NAS -peer-cluster OnPrem-EDGE -peer-vserver NAS-EDGE -applications flexcache 

Info: [Job 883] 'vserver peer create' job queued 
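
Before moving on, you can confirm that the vserver peer relationship reached the “peered” state. This is simply the verification command I would run at this point; no lab output is reproduced here:

OnPrem-HQ::> vserver peer show -vserver NAS -peer-vserver NAS-EDGE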

Once the peer relationship has been created between both vservers, you can continue by validating that the source volume exists as required. To validate the volume, the <volume show> command is used from the local cluster shell. In this lab I am going to use the volume named share as the origin.

OnPrem-HQ::*> volume show -vserver NAS                
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
NAS       NAS_root     OnPrem_HQ_01_SSD_1 online RW      20MB    17.66MB    7%
NAS       share        OnPrem_HQ_01_SSD_1 online RW      10.3GB   8.04GB   20%
19 entries were displayed.

OnPrem-HQ::*> 

Once the volume is identified, you can create the FlexCache volume using the <volume flexcache create> command. It is important to mention that FlexCache technology uses FlexGroup volumes as a dependency when creating the cache volume; that is why the -aggr-list option is used to specify which aggregates will be used to create the constituent FlexGroup volumes.

OnPrem-EDGE::> volume flexcache create -vserver NAS-EDGE -volume share_edge -aggr-list OnPrem_EDGE_0* -origin-vserver NAS -origin-volume share -size 10GB -junction-path /share_edge
[Job 595] Job succeeded: Successful.                                           

OnPrem-EDGE::>
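
Because the FlexCache volume is built on top of a FlexGroup, you can optionally list its constituent member volumes. This is only a verification sketch, not output captured from the lab:

OnPrem-EDGE::> volume show -vserver NAS-EDGE -volume share_edge* -is-constituent true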

From the remote cluster shell you can verify the created volume by using the <vol flexcache show> command.

OnPrem-EDGE::> vol flexcache show
Vserver Volume      Size       Origin-Vserver Origin-Volume Origin-Cluster
------- ----------- ---------- -------------- ------------- --------------
NAS-EDGE share_edge 10GB       NAS            share         OnPrem-HQ

OnPrem-EDGE::> 

From the local cluster shell, you can see the caches attached to the origin volume with the <volume flexcache origin show-caches> command. The previously created FlexCache volume can be seen in the command output.

OnPrem-HQ::*> volume flexcache origin show-caches
Origin-Vserver Origin-Volume  Cache-Vserver  Cache-Volume  Cache-Cluster
-------------- -------------- -------------- ------------- --------------
NAS            share         NAS-EDGE       share_edge    OnPrem-EDGE
1 entries were displayed.

OnPrem-HQ::*> 

Now I proceed to share the share_edge cache volume using the SMB protocol. The <vserver cifs share create> command is used with the <-path /share_edge> option to specify the junction path of the FlexCache volume.

OnPrem-EDGE::> vserver cifs share create -vserver NAS-EDGE -share-name share_edge -path /share_edge

OnPrem-EDGE::>

Now you can see that the share was created on the share_edge volume.

OnPrem-EDGE::> vserver cifs share show -share-name share_edge
Vserver        Share         Path              Properties Comment  ACL
-------------- ------------- ----------------- ---------- -------- -----------
NAS-EDGE       share_edge    /share_edge       oplocks    -        Everyone / Full Control
                                               browsable
                                               changenotify
                                               show-previous-versions

OnPrem-EDGE::> 

I have used the smbmap tool to validate that the shared folder can be accessed over the network.

[rebelinux@blabla ~]$ smbmap.py -H 10.10.33.20 -p "XXXXX" -d ZENPRSOLUTIONS -u administrator 
[+] IP: 10.10.33.20:445	Name: NAS-EDGE.zenprsolutions.local                            
        Disk                                                  	Permissions	Comment
	----                                                  	-----------	-------
	share_edge                                        	READ, WRITE	
	ipc$                                              	NO ACCESS	
	c$                                                	READ, WRITE	
[rebelinux@blabla ~]$

For the test performed, I copied the “Very_Big_File.iso” file from the “SHARE” volume of each site's cluster.

Note: I modified the original diagram to show how the clients are connected.

In this section you can see the commands used to connect the clients to the “SHARE” volume.

Note: Ubuntu Linux 20.04 was used for this lab scenario.

CLIENT-HQ-01V
root@CLIENT-HQ-01V:/home/godadmin# mount -t cifs -o username=administrator@zenprsolutions.local,password=XXXXXXXX //nas/shares /mnt/share/
root@CLIENT-HQ-01V:/home/godadmin# cd /mnt/share/
root@CLIENT-HQ-01V:/mnt/share# ls
RecApp-2021-02-20.webm   RecApp-2021-02-27.webm   Very_Big_File.iso   WSUS-Cleanup.ps1
root@CLIENT-HQ-01V:/mnt/share#

CLIENT-EDGE-01V
root@CLIENT-EDGE-01V:/home/godadmin# mount -t cifs -o username=administrator@zenprsolutions.local,password=XXXXXXXX //nas-edge/share_edge /mnt/share_edge/
root@CLIENT-EDGE-01V:/home/godadmin# cd /mnt/share_edge/
root@CLIENT-EDGE-01V:/mnt/share_edge# ls
RecApp-2021-02-20.webm   RecApp-2021-02-27.webm   Very_Big_File.iso   WSUS-Cleanup.ps1
root@CLIENT-EDGE-01V:/mnt/share_edge#

CLIENT-EDGE-02V
root@CLIENT-EDGE-02V:/home/godadmin# mount -t cifs -o username=administrator@zenprsolutions.local,password=XXXXXXXX //nas-edge/share_edge /mnt/share_edge/
root@CLIENT-EDGE-02V:/home/godadmin# cd /mnt/share_edge/
root@CLIENT-EDGE-02V:/mnt/share_edge# ls
RecApp-2021-02-20.webm   RecApp-2021-02-27.webm   Very_Big_File.iso   WSUS-Cleanup.ps1
root@CLIENT-EDGE-02V:/mnt/share_edge#

In this last step, the <cp> command was used to copy the “Very_Big_File.iso” file from the cluster to a local folder on each client. To measure the elapsed transfer time, the Linux <time> command was used.

CLIENT-HQ-01V
root@CLIENT-HQ-01V:/mnt/share# time cp Very_Big_File.iso /home/godadmin/

real	2m7.513s
user	0m0.016s
sys	0m6.236s
root@CLIENT-HQ-01V:/mnt/share#

CLIENT-EDGE-01V
root@CLIENT-EDGE-01V:/mnt/share_edge# time cp Very_Big_File.iso /home/godadmin/

real	4m2.391s
user	0m0.021s
sys	0m6.902s
root@CLIENT-EDGE-01V:/mnt/share_edge#

CLIENT-EDGE-02V
root@CLIENT-EDGE-02V:/mnt/share_edge# time cp Very_Big_File.iso /home/godadmin/

real	2m16.169s
user	0m0.054s
sys	0m6.128s
root@CLIENT-EDGE-02V:/mnt/share_edge# 

The following table summarizes the elapsed transfer time of each test performed. As you can see, CLIENT-HQ-01V, located at the Gurabo site, has direct access to the shared folder on the origin volume, which helped it achieve a lower transfer time of 2m7.513s. CLIENT-EDGE-01V connects at the Ponce site through the shared folder on the FlexCache volume; since the content was not initially in the cache, the transfer time was higher at 4m2.391s. This behavior is due to the need to fetch the entire contents of “Very_Big_File.iso” from the origin volume over the intercluster LIF connection. Finally, CLIENT-EDGE-02V had a transfer time similar to CLIENT-HQ-01V (2m16.169s), since the contents of “Very_Big_File.iso” were already present in the FlexCache volume's cache.

Client Name         Elapsed Time
CLIENT-HQ-01V       2m7.513s
CLIENT-EDGE-01V     4m2.391s
CLIENT-EDGE-02V     2m16.169s
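
As an additional check that I did not capture in this lab, you can get a rough idea of how sparsely populated the cache is by comparing the space used on the FlexCache volume against the origin volume:

OnPrem-EDGE::> volume show -vserver NAS-EDGE -volume share_edge -fields size,used,percent-used

OnPrem-HQ::> volume show -vserver NAS -volume share -fields size,used,percent-used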

Till next time!
