
NetApp Ontap Mediator Installation and Configuration

Hello everyone,

Today I will be talking a bit about how to install and configure the “Ontap Mediator” application, which is used as an alternate way to validate the health status of the peered clusters. To describe the role of this application I will use the NetApp portal documentation as a reference:

ONTAP Mediator provides an alternate health path to the peer cluster, with the intercluster LIFs providing the other health path. With the Mediator’s health information, clusters can differentiate between intercluster LIF failure and site failure. When the site goes down, Mediator passes on the health information to the peer cluster on demand, facilitating the peer cluster to fail over. With the Mediator-provided information and the intercluster LIF health check information, ONTAP determines whether to perform an auto failover, if it is failover incapable, continue or stop.

Role of ONTAP Mediator

This application can be used in “MetroCluster” scenarios as well as with “SnapMirror Business Continuity” (SM-BC) technology. As of ONTAP 9.8, SnapMirror Business Continuity (SM-BC) can be used to protect applications with LUNs, allowing applications to migrate transparently, ensuring business continuity in the event of a disaster. SM-BC uses “SnapMirror Synchronous” technology that allows data to be replicated to the target as soon as it is written to the source volume.

In this lab I will show you the Mediator application with the purpose of being able to perform in the future a lab on SM-BC in a VMware environment. The following image shows the role of Ontap Mediator within the SM-BC technology architecture.

As you can see the “Mediator” is constantly evaluating the Datacenter status to identify possible failures and to be able to react by migrating access to the volumes to the Datacenter that is up and running. It may be useful to understand some of the basics of SM-BC recovery and restoration.

Planned failover:

A manual operation to change the access roles of the volumes in an SM-BC relationship. The primary becomes the secondary and the secondary becomes the primary. ALUA reporting is also modified according to the state of the relationship.

Automatic unplanned failover (AUFO):

An automatic operation to fail over to the mirror copy. The operation requires the assistance of the Ontap Mediator to detect that the primary copy is unavailable.

Here are the requirements to install the application.

Requirements

To see the complete list of requirements, you can visit the “Ontap Mediator” documentation.

For this lab I am going to use Red Hat Enterprise Linux 8.1 running on a vSphere VM. The first thing to do is to download the application installation package. This is done by accessing the NetApp support portal as shown in the following image.

Link to Ontap Mediator:

https://mysupport.netapp.com/site/products/all/details/ontap-mediator/downloads-tab

After downloading the installation package, copy the “ONTAP-MEDIATOR-1.3” file to the server to be used for this purpose. Then proceed to change the installation file to executable mode with the command chmod +x.

[root@NTAPMED-01V ~]# ls
anaconda-ks.cfg  ONTAP-MEDIATOR-1.3
[root@NTAPMED-01V ~]# chmod +x ONTAP-MEDIATOR-1.3 
[root@NTAPMED-01V ~]#

Next, proceed to install the application dependencies with the yum install command as shown below.

[root@NTAPMED-01V ~]# yum install openssl openssl-devel kernel-devel gcc libselinux-utils make redhat-lsb-core patch bzip2 python36 python36-devel perl-Data-Dumper perl-ExtUtils-MakeMaker python3-pip elfutils-libelf-devel policycoreutils-python-utils -y
Last metadata expiration check: 0:13:59 ago on Tue 29 Jun 2021 10:01:36 PM AST.
Package openssl-1:1.1.1g-15.el8_3.x86_64 is already installed.
Package libselinux-utils-2.9-5.el8.x86_64 is already installed.
Dependencies resolved.
...............
Installed:

Really long Output                                                              

Complete!
[root@NTAPMED-01V ~]#

Once all dependencies are installed, you can start running the application installation file. To do this use the command ./ONTAP-MEDIATOR-1.3.

Note: This command must be executed in the location where the installation file was stored.

[root@NTAPMED-01V ~]# ./ONTAP-MEDIATOR-1.3 

ONTAP Mediator: Self Extracting Installer

ONTAP Mediator requires two user accounts. One for the service (netapp), and one for use by ONTAP to the mediator API (mediatoradmin).
Would you like to use the default account names: netapp + mediatoradmin? (Y(es)/n(o)): Yes
Enter ONTAP Mediator user account (mediatoradmin) password: XXXXXX 

Re-Enter ONTAP Mediator user account (mediatoradmin) password: XXXXX

Checking if SELinux is in enforcing mode
SELinux is set to Enforcing. ONTAP Mediator server requires modifying the SELinux context of the file
/opt/netapp/lib/ontap_mediator/pyenv/bin/uwsgi from type 'lib_t' to 'bin_t'.
This is neccessary to start the ONTAP Mediator service while SELinux is set to Enforcing.
Allow SELinux context change?  Y(es)/n(o): Yes
The installer will change the SELinux context type of
/opt/netapp/lib/ontap_mediator/pyenv/bin/uwsgi from type 'lib_t' to 'bin_t'.




Checking for default Linux firewall
Linux firewall is running. Open ports 31784 and 3260? Y(es)/n(o): Yes
success
success
success


###############################################################
Preparing for installation of ONTAP Mediator packages.


Do you wish to continue? Y(es)/n(o): 

The installer asks several questions about the passwords of the accounts used by the ONTAP Mediator service and about the TCP ports that will be opened in the server’s local “Firewall”. Once everything is properly specified, the installer validates that all the application prerequisites are installed.

Do you wish to continue? Y(es)/n(o): Y


+ Installing required packages.


Updating Subscription Management repositories.

Really long Output                                                              

Dependencies resolved.
Nothing to do.
Complete!
OS package installations finished
+ Installing ONTAP Mediator. (Log: /tmp/ontap_mediator.7atkl8/ontap-mediator/install_20210709162016.log)
    This step will take several minutes. Use the log file to view progress.
#includedir /etc/sudoers.d
Sudo include verified
ONTAP Mediator logging enabled
+ Install successful. (Moving log to /opt/netapp/lib/ontap_mediator/log/install_20210709162016.log)
+ Note: ONTAP Mediator uses a kernel module compiled specifically for the current
        system OS. Using 'yum update' to upgrade the kernel may cause a service
        interruption.
    For more information, see /opt/netapp/lib/ontap_mediator/README
[root@NTAPMED-01V ~]#

After installing the application it is important to validate that the services of the Ontap Mediator are activated and functional. To validate the services use the command <systemctl status ontap_mediator mediator-scst>.

[root@NTAPMED-01V ~]# systemctl status ontap_mediator mediator-scst
 ontap_mediator.service - ONTAP Mediator
   Loaded: loaded (/etc/systemd/system/ontap_mediator.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2021-07-09 14:21:31 AST; 11min ago
  Process: 1296 ExecStop=/bin/kill -s INT $MAINPID (code=exited, status=0/SUCCESS)
 Main PID: 1298 (uwsgi)
   Status: "uWSGI is ready"
    Tasks: 3 (limit: 23832)
   Memory: 61.4M
 Started ONTAP Mediator.

 mediator-scst.service
   Loaded: loaded (/etc/systemd/system/mediator-scst.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2021-07-09 14:21:30 AST; 11min ago
  Process: 1164 ExecStart=/etc/init.d/scst start (code=exited, status=0/SUCCESS)
 Main PID: 1250 (iscsi-scstd)
    Tasks: 1 (limit: 23832)
   Memory: 3.3M
 Started mediator-scst.service.
[root@NTAPMED-01V ~]# 

Additionally, it is important to ensure that the services are using the correct TCP ports. With the command <netstat -anlt | grep -E '3260|31784'> you can validate that ports 3260 and 31784 are in “LISTEN” mode.

[root@NTAPMED-01V ~]# netstat -anlt | grep -E '3260|31784'
tcp        0      0 0.0.0.0:3260            0.0.0.0:*               LISTEN     
tcp        0      0 0.0.0.0:31784           0.0.0.0:*               LISTEN     
tcp6       0      0 :::3260                 :::*                    LISTEN     
[root@NTAPMED-01V ~]# 

With the command firewall-cmd --list-all you can validate that the rules for ports 31784/tcp and 3260/tcp are properly configured in the server’s local firewall.

[root@NTAPMED-01V ~]# firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: ens192
  sources: 
  services: cockpit dhcpv6-client ssh
  ports: 31784/tcp 3260/tcp
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 
	
[root@NTAPMED-01V ~]# 
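If these firewall rules ever need to be created manually, for example on a server where the installer’s firewall step was skipped, the standard firewalld commands below should do it. This is only a sketch based on the ports listed above; adjust the zone if your server does not use the default “public” zone.

[root@NTAPMED-01V ~]# firewall-cmd --permanent --add-port=31784/tcp --add-port=3260/tcp
[root@NTAPMED-01V ~]# firewall-cmd --reload
[root@NTAPMED-01V ~]# firewall-cmd --list-ports
31784/tcp 3260/tcp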

Once the installation process has been successfully completed, add the Ontap Mediator to the configuration of the clusters where you have chosen to use the “SnapMirror Business Continuity” (SM-BC) technology. To add the configuration, go to [Protection] => [Overview] => [Mediator] => [Configure]. Then add the configuration as shown in the following images. It is important to mention that the certificate to be added in this configuration is the CA certificate located in:

/opt/netapp/lib/ontap_mediator/ontap_mediator/server_config/ca.crt
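The certificate text that System Manager expects is simply the contents of that file, so you can print it on the Mediator server and copy the whole block, including the BEGIN/END lines. For example:

[root@NTAPMED-01V ~]# cat /opt/netapp/lib/ontap_mediator/ontap_mediator/server_config/ca.crt
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----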

Note: It is important to mention that for this configuration to work there must be a “cluster peer” and “vserver peer” relationship previously established.

Through the Ontap console you can also validate that the Ontap Mediator configuration is working correctly. With the <snapmirror mediator show> command you can validate that the Connection Status is “connected” and the Quorum Status is “true”.

Note: This command must be used in both clusters to validate that the connection is correctly established.

OnPrem-HQ::> snapmirror mediator show                    
Mediator Address Peer Cluster     Connection Status Quorum Status
---------------- ---------------- ----------------- -------------
192.168.6.16     OnPrem-DR       connected         true

OnPrem-HQ::*> 
OnPrem-DR::> snapmirror mediator show
Mediator Address Peer Cluster     Connection Status Quorum Status
---------------- ---------------- ----------------- -------------
192.168.6.16     OnPrem-HQ       connected       true

OnPrem-DR::> 

Here is how to add the Ontap Mediator to the cluster through Ontap’s console.

Ontap Mediator CLI Setup

With the snapmirror mediator add command you can add the Ontap Mediator with the IP address 192.168.6.16 to the Onprem-HQ cluster. It is important to mention that for this configuration to work there must be a “Cluster peer” and “Vserver peer” relationship previously established.

OnPrem-HQ::> snapmirror mediator add -mediator-address 192.168.7.167 -peer-cluster OnPrem-DR -username mediatoradmin 

Notice: Enter the mediator password.

Enter the password: XXXXX
Enter the password again: XXXXX

Info: [Job: 171] 'mediator add' job queued 

OnPrem-HQ::> 

With the snapmirror mediator show command you can validate that the Connection Status is “connected” and the Quorum Status is set to “true”.

OnPrem-HQ::> snapmirror mediator show                    
Mediator Address Peer Cluster     Connection Status Quorum Status
---------------- ---------------- ----------------- -------------
192.168.6.16     OnPrem-DR       connected         true

OnPrem-HQ::*> 
OnPrem-DR::> snapmirror mediator show
Mediator Address Peer Cluster     Connection Status Quorum Status
---------------- ---------------- ----------------- -------------
192.168.6.16     OnPrem-HQ       connected       true

OnPrem-DR::> 

Additionally, I will show you how to replace the SSL certificate of the Ontap Mediator service with one generated from a Microsoft Certificate Authority.

Optional SSL Certificate Replacement

Step 1: Generate a configuration file to create the Certificate Signing Request (CSR). In this step it is important to set the CN and DNS with the fully qualified domain name (FQDN) of the server name. In my case the server name is NTAPMED-01V.

[root@NTAPMED-01V ~]# nano -w req.conf 
[req]
distinguished_name = req_distinguished_name
req_extensions = v3_req
prompt = no
[req_distinguished_name]
C = US
ST = PR
L = SJ
O = Zen PR Solutions
OU = IT
CN = NTAPMED-01V
[v3_req]
keyUsage = keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1 = NTAPMED-01V.zenprsolutions.local

Step 2: Use the openssl command to generate the CSR file that will be used as a template to create the certificate that the Ontap Mediator service will use.

Note: If the openssl command is not available on your system, you can use the yum install openssl command to install the necessary packages.

[root@NTAPMED-01V ~]# openssl req -new -out ntapmed.csr -newkey rsa:2048 -nodes -sha256 -keyout ntapmed.key -config req.conf

Once the openssl command has finished, two files will be created: ntapmed.csr, the signing request that will be used to generate the certificate, and ntapmed.key, the private key.

[root@NTAPMED-01V ~]# ls -al ntapmed.*
-rw-r--r-- 1 root      root      1123 Jul  9 16:53 ntapmed.csr #Certificate Signing Request
-rw-r--r-- 1 rebelinux rebelinux 1704 Jul  9 16:53 ntapmed.key #Private Key
[root@rebelpc rebelinux]# 
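If you want to double-check the request before sending it to the CA, openssl can decode it. This is only an optional sanity check to confirm that the CN and the subjectAltName defined in req.conf made it into the CSR:

[root@NTAPMED-01V ~]# openssl req -noout -text -in ntapmed.csr | grep -A1 -E 'Subject:|Alternative'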

Step 3: Access Microsoft’s Certificate Authority server and use the certreq.exe command to generate the certificate using the ntapmed.csr file as template.

C:\>certreq.exe -submit -attrib "CertificateTemplate:WebServer" ntapmed.csr ntapmed.cer

Once the process is completed, a file will be created with the name ntapmed.cer that is used for the Ontap Mediator service.

Step 4: To replace the SSL certificate it is also necessary to replace the public certificate of the CA. To obtain this certificate from the CA use the command certutil -ca.cert ca.cer, which will save the certificate in the ca.cer file.

C:\>certutil -ca.cert ca.cer

Once this process is completed simply copy all the files (ca.cer, ntapmed.cer and ntapmed.key) to the Ontap Mediator server.

Step 5: Move to the /opt/netapp/lib/ontap_mediator/ontap_mediator/server_config/ folder and modify the certificate files as shown below.

[root@NTAPMED-01V ~]# cd /opt/netapp/lib/ontap_mediator/ontap_mediator/server_config/
[root@NTAPMED-01V server_config]# ls
ca.crt  ca.srl            config.pyc    logging.conf.yaml  ontap_mediator.config.yaml     ontap_mediator_schema.yaml  ontap_mediator_server.csr  ontap_mediator.user_config.yaml
ca.key  config_migration  __init__.pyc  netapp_sudoers     ontap_mediator.constants.yaml  ontap_mediator_server.crt   ontap_mediator_server.key
[root@NTAPMED-01V server_config]# cp -R /opt/netapp/lib/ontap_mediator/ontap_mediator/server_config /root/
[root@NTAPMED-01V server_config]#
[root@NTAPMED-01V server_config]# nano -w ca.crt
[root@NTAPMED-01V server_config]# openssl x509 -noout -serial -in ca.crt 
serial=5D2E25D9AFFDE4904A05D70BEB7ACBD2
[root@NTAPMED-01V server_config]# 
[root@NTAPMED-01V server_config]# nano -w ontap_mediator_server.crt
[root@NTAPMED-01V server_config]# nano -w ontap_mediator_server.key
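Before restarting the services it is worth confirming that the new certificate and private key actually belong together; if they do not, the Mediator service will fail to start with the new files. A common check is to compare the public-key modulus of both files, which should produce the same hash:

[root@NTAPMED-01V server_config]# openssl x509 -noout -modulus -in ontap_mediator_server.crt | openssl md5
[root@NTAPMED-01V server_config]# openssl rsa -noout -modulus -in ontap_mediator_server.key | openssl md5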

After making the changes, it is necessary to restart the services using the command systemctl restart ontap_mediator mediator-scst.

[root@NTAPMED-01V server_config]# systemctl restart ontap_mediator mediator-scst
[root@NTAPMED-01V server_config]# systemctl status ontap_mediator mediator-scst
 ontap_mediator.service - ONTAP Mediator
   Loaded: loaded (/etc/systemd/system/ontap_mediator.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2021-07-09 20:31:48 AST; 8s ago
  Process: 22222 ExecStop=/bin/kill -s INT $MAINPID (code=exited, status=0/SUCCESS)
 Main PID: 22232 (uwsgi)
   Status: "uWSGI is ready"
    Tasks: 3 (limit: 23832)
   Memory: 56.5M

 mediator-scst.service
   Loaded: loaded (/etc/systemd/system/mediator-scst.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2021-07-09 20:31:50 AST; 5s ago
  Process: 22223 ExecStop=/etc/init.d/scst stop (code=exited, status=0/SUCCESS)
  Process: 22309 ExecStart=/etc/init.d/scst start (code=exited, status=0/SUCCESS)
 Main PID: 22389 (iscsi-scstd)
    Tasks: 1 (limit: 23832)
   Memory: 1.0M

Summary

In this lab I showed you how to install and configure Ontap Mediator. In the future I will use this service to do a lab on “SnapMirror Business Continuity” (SM-BC) together with VMware. I hope you liked this lab. If you have any doubts or questions about it, leave them in the comments. Regards.

vSphere 7 Update 2 NFS Array Snapshots Offload Support

The vSphere 7.0 U2 release provides the ability to use native array snapshots when the NFS protocol is used as the access mechanism. As described on the VMware blog:

NFS Improvements

NFS required a clone to be created first for a newly created VM and the subsequent ones could be offloaded to the array. With the release of vSphere 7.0 U2, we have enabled NFS array snapshots of full, non-cloned VMs to not use redo logs but instead use the snapshot technology of the NFS array in order to provide better snapshot performance. The improvement here will remove the requirement/limitation of creating a clone and enables the first snapshot also to be offloaded to the array.

What’s New in vSphere 7 Update 2 Core Storage

In this blog I explain the configuration needed to test this new feature. To start we should validate the prerequisites to be able to implement this solution. According to the VMware documentation portal the prerequisites are as follows:

  • Verify that the NAS array supports the fast file clone operation with the VAAI NAS program.
  • On your ESXi host, install vendor-specific NAS plug-in that supports the fast file cloning with VAAI.
  • Follow the recommendations of your NAS storage vendor to configure any required settings on both the NAS array and ESXi.

The NFS configuration will be done in NetApp Ontap using the “NetApp NFS Plug-in for VMware VAAI”, which recently added native NFS snapshot offload support.

Note: The plug-in can be downloaded from the NetApp support portal at the following link “NetApp Support”.

© 2021 NetApp

Once in the NetApp support portal, download version 2.0 of the plugin as shown in the following image. To install the plugin, unzip the downloaded file and rename the file with the .vib extension found inside the vib20 folder to NetAppNasPlugin.vib.

© 2021 NetApp
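As a reference, the unzip and rename steps can be done from any Linux workstation with something like the commands below. The archive name and the .vib file name are placeholders here; use whatever names the downloaded bundle actually contains.

$ unzip NetAppNasPlugin_2.0.zip                          # hypothetical archive name
$ cp vib20/*/NetApp*NasPlugin*.vib NetAppNasPlugin.vib   # rename the .vib for the upload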

In the next step I used the NetApp Ontap Tools to install the plugin but it can also be installed using VMware Lifecycle Manager.

To install the plugin go to [ONTAP tools => Settings => NFS VAAI tools] and in the “Existing version:” section press “BROWSE” to select where the “NetAppNasPlugin.vib” file is stored. Once the file is located press “UPLOAD” to load and install the plugin.

In this step we can see how to install the plugin to the ESXi servers by pressing the “INSTALL” button.

The following image shows that the installation of the plugin was successful. An advantage of the new version of the plugin is that no reboot of the ESXi server is required.
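If you prefer to confirm from the ESXi shell that the VIB really landed on the host, a quick query like the one below should list it; the exact version string will depend on the package you installed.

[root@comp-01a:~] esxcli software vib list | grep -i netappnasplugin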

After installing the plugin we will proceed to validate that the Ontap Storage has support for VMware vStorage APIs for Array Integration (VAAI) features in the NFS environment. This can be verified with the command <vserver nfs show -fields vstorage>. As you can see the vStorage function is currently disabled in the SVM called NFS. To enable the vStorage function use the <vserver nfs modify -vstorage enabled> command.

OnPrem-HQ::> vserver nfs show -fields vstorage 
vserver vstorage 
------- -------- 
NFS     disabled  

OnPrem-HQ::> vserver nfs modify -vstorage enabled -vserver NFS 

OnPrem-HQ::> vserver nfs show -fields vstorage                 
vserver vstorage 
------- -------- 
NFS     enabled  

OnPrem-HQ::> 

The next requirement to be able to use native snapshot offload is the creation of an advanced setting in the VM configuration called snapshot.alwaysAllowNative. To add this value you have to go to the VM properties then to [VM Options => Advanced => EDIT CONFIGURATION].

The following image shows the value of the <snapshot.alwaysAllowNative> variable, which according to the VMware documentation must be set to “TRUE”. You can use the following link as a reference: “VMware Documentation”.
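Besides the vSphere UI, you can also confirm that the option was saved by looking at the VM’s .vmx file from the ESXi shell, since advanced settings end up there as key/value pairs. The datastore path below is the one from my lab and will differ in yours:

[root@comp-01a:~] grep -i alwaysallownative /vmfs/volumes/55ab62ec-2abeb31b/RocaWeb/RocaWeb.vmx
snapshot.alwaysAllowNative = "TRUE"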

Now I start testing to validate that the native snapshot offload is working in Ontap. First I will create a snapshot with the <snapshot.alwaysAllowNative> option set to FALSE. Then I will make changes to the VM so that I can measure how fast the snapshot changes are deleted and applied to the base disk. In the example shown below, the <New-Snapshot> command in PowerCLI was used to create a snapshot of the VM named RocaWeb.

PS /home/rebelinux> get-vm -Name RocaWeb | New-Snapshot -Name PRE_Native_Array_Snapshot | Format-Table -Wrap -AutoSize  
Name                      Description PowerState
----                      ----------- ----------
PRE_Native_Array_Snapshot             PoweredOff
PS /home/rebelinux> 

In this step a 10GB file was copied to the VM to grow the snapshot so that I can measure how fast the changes are applied to the base disk when the snapshot is deleted. In this example the file “RocaWeb_2-000001-delta.vmdk” represents the delta where the snapshot changes are saved. This represents a traditional VMware snapshot.

[root@comp-01a:/vmfs/volumes/55ab62ec-2abeb31b/RocaWeb] ls -alh
total 35180596
drwxr-xr-x    2 root     root        4.0K May 31 23:40 .
drwxr-xr-x    7 root     root        4.0K May 31 19:02 ..
-rw-------    1 root     root      276.0K May 31 23:40 RocaWeb-Snapshot15.vmsn
-rw-------    1 root     root        4.0G May 31 23:40 RocaWeb-a03f2017.vswp
-rw-------    1 root     root      264.5K May 31 23:40 RocaWeb.nvram
-rw-------    1 root     root         394 May 31 23:40 RocaWeb.vmsd
-rwxr-xr-x    1 root     root        3.4K May 31 23:40 RocaWeb.vmx
-rw-------    1 root     root       10.0G May 31 23:51 RocaWeb_2-000001-delta.vmdk #Delta (VMFS Based Snapshot)
-rw-------    1 root     root         301 May 31 23:40 RocaWeb_2-000001.vmdk
-rw-------    1 root     root      500.0G May 31 23:37 RocaWeb_2-flat.vmdk
-rw-------    1 root     root         631 May 31 23:37 RocaWeb_2.vmdk
[root@comp-01a:/vmfs/volumes/55ab62ec-2abeb31b/RocaWeb]

The following image shows the time it took to apply the snapshot changes to the base disk when the snapshot was removed. In summary the operation took 9 minutes in total using traditional VMware snapshot.

Note: The Ontap simulator was used for this lab.

In this last example the <New-Snapshot> command was also used to create the snapshot, but with the <snapshot.alwaysAllowNative> option set to “TRUE”. That way I can test the use of Native Snapshot Offload over NFS. Here again, a 10GB file was copied to the VM to grow the snapshot, so I can measure how quickly changes are applied to the base disk when the snapshot is deleted.

PS /home/rebelinux> get-vm -Name RocaWeb | New-Snapshot -Name POST_Native_Array_Snapshot | Format-Table -Wrap -AutoSize
Name                       Description PowerState
----                       ----------- ----------
POST_Native_Array_Snapshot             PoweredOff
PS /home/rebelinux> 

Here we can see that there is no “-delta.vmdk” file, but there is a file named “RocaWeb_2-000001-flat.vmdk” with the same 500GB size as the “RocaWeb_2-flat.vmdk” file. This confirms that the NFS Native Snapshot Offload feature is working in Ontap.

[root@comp-01a:/vmfs/volumes/55ab62ec-2abeb31b/RocaWeb] ls -alh
total 49419672
drwxr-xr-x    2 root     root        4.0K Jun  1 00:07 .
drwxr-xr-x    7 root     root        4.0K May 31 19:02 ..
-rw-------    1 root     root      276.0K Jun  1 00:07 RocaWeb-Snapshot16.vmsn
-rw-------    1 root     root        4.0G Jun  1 00:07 RocaWeb-a03f2017.vswp
-rw-------    1 root     root      264.5K Jun  1 00:07 RocaWeb.nvram
-rw-------    1 root     root         393 Jun  1 00:07 RocaWeb.vmsd
-rwxr-xr-x    1 root     root        3.4K Jun  1 00:07 RocaWeb.vmx
-rw-------    1 root     root      500.0G Jun  1 00:09 RocaWeb_2-000001-flat.vmdk #No Delta (Array Based Snapshot OffLoad)
-rw-------    1 root     root         650 Jun  1 00:07 RocaWeb_2-000001.vmdk
-rw-------    1 root     root      500.0G Jun  1 00:03 RocaWeb_2-flat.vmdk
-rw-------    1 root     root         631 Jun  1 00:07 RocaWeb_2.vmdk
[root@comp-01a:/vmfs/volumes/55ab62ec-2abeb31b/RocaWeb] 

The following image shows the time it took to apply the snapshot changes to the base disk when the snapshot was removed using the NFS Native Snapshot Offload. In summary, you can see that applying the snapshot changes to the base disk took no time at all to finish.

Summary

NFS native snapshot offload operations are so fast because ONTAP references metadata when it creates a Snapshot copy rather than copying data blocks, which is why Snapshot copies are so efficient. Doing so eliminates the seek time that other systems incur in locating the blocks to copy, as well as the cost of making the copy itself.

Using Flexcache volumes to accelerate Windows shares data access

Starting with the Ontap 9.8 release, NetApp added support for the Windows SMB protocol to the FlexCache technology. At last…..

In this blog, I will use a source volume as origin and create a flexcache volume on a remote cluster. In the lab example I will also validate the benefit offered by the ability to natively extend a central CIFS share.

I used the NetApp documentation as a reference to define what a Flexcache volume is and what it is used for.

A FlexCache volume is a sparsely populated volume that is backed by an origin volume. The FlexCache volume can be on the same cluster as or on a different cluster than that of the origin volume. The FlexCache volume provides access to data in the origin volume without requiring that all of the data be in the FlexCache volume. Starting in ONTAP 9.8, a FlexCache volume also supports SMB protocol.

NetApp Documentation Portal

To begin with, I used as a reference the following diagram showing an Active Directory domain with two sites named Gurabo and Ponce. Both sites have an Ontap cluster running version 9.8P4. Flexcache requires the creation of “Intercluster” type interfaces, as sketched below.
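For reference, an intercluster LIF is created like any regular LIF but with the intercluster service policy. A minimal sketch, where the node, port and address are placeholders from my lab, would look like this:

OnPrem-EDGE::> network interface create -vserver OnPrem-EDGE -lif IC_01 -service-policy default-intercluster -home-node OnPrem-EDGE-01 -home-port e0d -address 10.10.34.20 -netmask-length 24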

Note: The Ontap simulator was used for the lab.

The configuration I performed on the remote NAS-EDGE <vserver> is documented below, in case you are interested in seeing how to create an SVM from scratch.

Prerequisites – vserver and network setup

Step I: Add the SVM NAS-EDGE to the remote cluster.

OnPrem-EDGE::> vserver create -vserver NAS-EDGE -rootvolume NAS_EDGE_root -aggregate OnPrem_DR_01_VM_DISK_1 
[Job 577] Job succeeded: Success                                               
Vserver creation completed.

OnPrem-DR::> 

Reference: vserver create

Step II: Add the logical network interfaces (LIF).

OnPrem-EDGE::> network interface create -vserver NAS-EDGE -lif NAS_EDGE_01 -address 10.10.33.20 -netmask-length 24 -home-node OnPrem-DR-01 -home-port e0c -service-policy default-data-files    

OnPrem-EDGE::> network interface create -vserver NAS-EDGE -lif NAS_EDGE_02 -address 10.10.33.21 -netmask-length 24 -home-node OnPrem-DR-02 -home-port e0c -service-policy default-data-files

OnPrem-EDGE::> network interface show -curr-port e0c -vserver NAS-EDGE 
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
NAS-EDGE
            NAS_EDGE_01  up/up    10.10.33.20/24     OnPrem-EDGE-01 e0c     true
            NAS_EDGE_02  up/up    10.10.33.21/24     OnPrem-EDGE-02 e0c     true
2 entries were displayed.

OnPrem-EDGE::> 

Reference: network interface create

Step III: Network route creation.

OnPrem-EDGE::> network route create -vserver NAS-EDGE -destination 0.0.0.0/0 -gateway 10.10.33.254

OnPrem-EDGE::> network route show -vserver NAS-EDGE
Vserver             Destination     Gateway         Metric
------------------- --------------- --------------- ------
NAS-EDGE
                    0.0.0.0/0       10.10.33.254    20

OnPrem-EDGE::> 

Reference: network route create

Step IV: Add the DNS parameters to the SVM.

OnPrem-EDGE::> vserver services dns create -domains zenprsolutions.local -name-servers 192.168.5.1 -vserver NAS-EDGE 

Warning: Only one DNS server is configured. Configure more than one DNS server
         to avoid a single-point-of-failure.

OnPrem-EDGE::> vserver services dns show -vserver NAS-EDGE 

                        Vserver: NAS-EDGE
                        Domains: zenprsolutions.local
                   Name Servers: 192.168.5.1
                 Timeout (secs): 2
               Maximum Attempts: 1

OnPrem-EDGE::> 

Reference: vserver services dns create

Step V: Configure CIFS protocol and add the vserver to the local domain.

OnPrem-EDGE::> vserver cifs create -vserver NAS-EDGE -domain zenprsolutions.local -cifs-server NAS-EDGE              

In order to create an Active Directory machine account for the CIFS server, you
must supply the name and password of a Windows account with sufficient
privileges to add computers to the "CN=Computers" container within the
"ZENPRSOLUTIONS.LOCAL" domain. 

Enter the user name: administrator

Enter the password: xxxxxxxxxxxx

Notice: SMB1 protocol version is obsolete and considered insecure. Therefore it
is deprecated and disabled on this CIFS server. Support for SMB1 might be
removed in a future release. If required, use the (privilege: advanced)
"vserver cifs options modify -vserver NAS-EDGE -smb1-enabled true" to enable
it.

OnPrem-EDGE::> vserver cifs show                                                                
            Server          Status    Domain/Workgroup Authentication
Vserver     Name            Admin     Name             Style
----------- --------------- --------- ---------------- --------------
NAS-EDGE    NAS-EDGE        up        ZENPRSOLUTIONS          domain
2 entries were displayed.

OnPrem-EDGE::>

Reference: vserver cifs create

Step VI: Validate the SVM computer object creation in Active Directory (Powershell).

PS C:\Users\Administrator> Get-ADComputer -Identity NAS-EDGE

DistinguishedName : CN=NAS-EDGE,CN=Computers,DC=zenprsolutions,DC=local
DNSHostName       : NAS-EDGE.zenprsolutions.local
Enabled           : True
Name              : NAS-EDGE
ObjectClass       : computer
ObjectGUID        : 3cfec085-1417-4bac-bff7-d734e4e30049
SamAccountName    : NAS-EDGE$
SID               : S-1-5-21-2867495315-1194516362-180967319-2665
UserPrincipalName : 

PS C:\Users\Administrator> 

Step VII: Validate connectivity and name resolution (Powershell).

PS C:\Users\Administrator> ping NAS-EDGE.zenprsolutions.local
Ping request could not find host NAS-EDGE.zenprsolutions.local. Please check the name and try again.

PS C:\Users\Administrator> Add-DnsServerResourceRecordA -Name NAS-EDGE -IPv4Address 10.10.33.20 -CreatePtr -ZoneName zenprsolutions.local

PS C:\Users\Administrator> Add-DnsServerResourceRecordA -Name NAS-EDGE -IPv4Address 10.10.33.21 -CreatePtr -ZoneName zenprsolutions.local

PS C:\Users\Administrator> 
PS C:\Users\Administrator> nslookup NAS-EDGE.zenprsolutions.local
	primary name server = 192.168.5.1
	responsible mail addr = (root)
	serial  = 0
	refresh = 28800 (8 hours)
	retry   = 7200 (2 hours)
	expire  = 604800 (7 days)
	default TTL = 86400 (1 day)
Server:  SERVER-DC-01V.zenprsolutions.local
Address:  192.168.5.1

Name:    NAS-EDGE.zenprsolutions.local
Addresses: 10.10.33.20
	   10.10.33.21


PS C:\Users\Administrator> 

In order to start with the lab, it is necessary to create a peer relationship between the local and remote vservers. To achieve this I use the command <vserver peer create>, specifying “flexcache” in the “applications” field.

Reference: vserver peer create.

Note: Previously, a cluster level peer relationship was performed with the <cluster peer create> command.
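That cluster-level peering was, roughly, a single command run against the intercluster LIF addresses of the remote cluster; a hedged sketch (the addresses are placeholders) is shown below. Ontap then prompts for a passphrase that has to be confirmed from the other cluster.

OnPrem-HQ::> cluster peer create -address-family ipv4 -peer-addrs 10.10.34.20,10.10.34.21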

OnPrem-HQ::> vserver peer create -vserver NAS -peer-cluster OnPrem-EDGE -peer-vserver NAS-EDGE -applications flexcache 

Info: [Job 883] 'vserver peer create' job queued 

Once the peer relationship has been created between both vservers, you can continue by validating that the volume to be used as origin exists. To validate the volume, the <volume show> command is used from the local cluster shell. In this lab I am going to use the volume named share.

OnPrem-HQ::*> volume show -vserver NAS                
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
NAS       NAS_root     OnPrem_HQ_01_SSD_1 online RW      20MB    17.66MB    7%
NAS       share        OnPrem_HQ_01_SSD_1 online RW      10.3GB   8.04GB   20%
19 entries were displayed.

OnPrem-HQ::*> 

Once the volume is identified, you can create the flexcache volume using the command <volume flexcache create>. It is important to mention that flexcache technology uses “FlexGroup” as a dependency when creating a volume. It is for this reason that the aggr-list option is used to specify which aggregates will be used to create the “FlexGroup” type volumes.

OnPrem-EDGE::> volume flexcache create -vserver NAS-EDGE -volume share_edge -aggr-list OnPrem_EDGE_0* -origin-vserver NAS -origin-volume share -size 10GB -junction-path /share_edge
[Job 595] Job succeeded: Successful.                                           

OnPrem-EDGE::>

From the remote cluster shell you can verify the created volume by using the <vol flexcache show> command.

OnPrem-EDGE::> vol flexcache show
Vserver Volume      Size       Origin-Vserver Origin-Volume Origin-Cluster
------- ----------- ---------- -------------- ------------- --------------
NAS-EDGE share_edge 10GB       NAS            shares            OnPrem-HQ

OnPrem-EDGE::> 

From the local cluster shell you can see the source volume with the command <volume flexcache origin show-caches>. The flexcache volume previously created can be validated in the command result.

OnPrem-HQ::*> volume flexcache origin show-caches
Origin-Vserver Origin-Volume  Cache-Vserver  Cache-Volume  Cache-Cluster
-------------- -------------- -------------- ------------- --------------
NAS            share         NAS-EDGE       share_edge    OnPrem-EDGE
1 entries were displayed.

OnPrem-HQ::*> 

Now I proceed to share the share_edge cache volume using the SMB protocol. The command <vserver cifs share create> is used with the <-path /share_edge> option to specify the “junction-path” of the flexcache volume.

OnPrem-EDGE::> vserver cifs share create -vserver NAS-EDGE -share-name share_edge -path /share_edge

OnPrem-EDGE::>

Now you can see that the “Share” was created in the share_edge volume.

OnPrem-EDGE::> vserver cifs share show -share-name share_edge
Vserver        Share         Path              Properties Comment  ACL
-------------- ------------- ----------------- ---------- -------- -----------
NAS-EDGE       share_edge    /share_edge       oplocks    -        Everyone / Full Control
                                               browsable
                                               changenotify
                                               show-previous-versions

OnPrem-EDGE::> 

I have used the smbmap tool to validate that the shared folder can be accessed over the network.

[rebelinux@blabla ~]$ smbmap.py -H 10.10.33.20 -p "XXXXX" -d ZENPRSOLUTIONS -u administrator 
[+] IP: 10.10.33.20:445	Name: NAS-EDGE.zenprsolutions.local                            
        Disk                                                  	Permissions	Comment
	----                                                  	-----------	-------
	share_edge                                        	READ, WRITE	
	ipc$                                              	NO ACCESS	
	c$                                                	READ, WRITE	
[rebelinux@blabla ~]$

The test performed consists of copying the “Very_Big_File.iso” file from the “SHARE” volume of each site cluster.

Note: I modified the original diagram to show how the clients are connected.

In this section you can see the commands used to connect the clients to the “SHARE” volume.

Note: Ubuntu Linux 20.04 was used for this lab scenario.

CLIENT-HQ-01V
root@CLIENT-HQ-01V:/home/godadmin# mount -t cifs -o username=administrator@zenprsolutions.local,password=XXXXXXXX //nas/shares /mnt/share/
root@CLIENT-HQ-01V:/home/godadmin# cd /mnt/share/
root@CLIENT-HQ-01V:/mnt/share# ls
RecApp-2021-02-20.webm   RecApp-2021-02-27.webm   Very_Big_File.iso   WSUS-Cleanup.ps1
root@CLIENT-HQ-01V:/mnt/share#

CLIENT-EDGE-01V
root@CLIENT-EDGE-01V:/home/godadmin# mount -t cifs -o username=administrator@zenprsolutions.local,password=XXXXXXXX //nas-edge/share_edge /mnt/share_edge/
root@CLIENT-EDGE-01V:/home/godadmin# cd /mnt/share_edge/
root@CLIENT-EDGE-01V:/mnt/share_edge# ls
RecApp-2021-02-20.webm   RecApp-2021-02-27.webm   Very_Big_File.iso   WSUS-Cleanup.ps1
root@CLIENT-EDGE-01V:/mnt/share_edge#
CLIENT-EDGE-02V
root@CLIENT-EDGE-02V:/home/godadmin# mount -t cifs -o username=administrator@zenprsolutions.local,password=XXXXXXXX //nas-edge/share_edge /mnt/share_edge/
root@CLIENT-EDGE-02V:/home/godadmin# cd /mnt/share_edge/
root@CLIENT-EDGE-02V:/mnt/share_edge# ls
RecApp-2021-02-20.webm   RecApp-2021-02-27.webm   Very_Big_File.iso   WSUS-Cleanup.ps1
root@CLIENT-EDGE-02V:/mnt/share_edge#

In this last step the <cp> command was used to copy the “Very_Big_File.iso” file from the cluster to a local folder on the client. To measure the elapsed time of transfer the Linux <time> command was used.

CLIENT-HQ-01V
root@CLIENT-HQ-01V:/mnt/share# time cp Very_Big_File.iso /home/godadmin/

real	2m7.513s
user	0m0.016s
sys	0m6.236s
root@CLIENT-HQ-01V:/mnt/share#
CLIENT-EDGE-01V
root@CLIENT-EDGE-01V:/mnt/share_edge# time cp Very_Big_File.iso /home/godadmin/

real	4m2.391s
user	0m0.021s
sys	0m6.902s
root@CLIENT-EDGE-01V:/mnt/share_edge#
CLIENT-EDGE-02V
root@CLIENT-EDGE-02V:/mnt/share_edge# time cp Very_Big_File.iso /home/godadmin/

real	2m16.169s
user	0m0.054s
sys	0m6.128s
root@CLIENT-EDGE-02V:/mnt/share_edge# 

Further on, the following table shows the elapsed transfer time of each test performed. As you can see, CLIENT-HQ-01V, located at the Gurabo site, has direct access to the shared folder on the origin volume, which helps it achieve the lowest transfer time, 2m7.513s. CLIENT-EDGE-01V is connected to the Ponce site using the shared folder on the flexcache volume; since the content was not initially in the cache, the transfer time was higher, 4m2.391s. This behavior is caused by the need to fetch the entire contents of “Very_Big_File.iso” from the origin volume over the InterCluster LIF connection. Finally, CLIENT-EDGE-02V had a transfer time similar to CLIENT-HQ-01V (2m16.169s) because the content of the “Very_Big_File.iso” file was already in the cache of the flexcache volume.

Client Name       Elapsed Time
---------------   ------------
CLIENT-HQ-01V     2m7.513s
CLIENT-EDGE-01V   4m2.391s
CLIENT-EDGE-02V   2m16.169s
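A quick way to observe this caching effect from the Ontap side is to check how much space the flexcache volume is actually consuming after the first read, since only the blocks that were requested get populated. A hedged example from the remote cluster:

OnPrem-EDGE::> volume show -vserver NAS-EDGE -volume share_edge -fields size,used,percent-used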

Till next time!

NetApp Aggregate Encryption (NAE) in ONTAP

© 2021 NetApp

Previously in a post I explained how to set up an encrypted volume using an external key manager (KMS), specifically one from the company HyTrust. In that scenario each volume is encrypted individually using independent keys. A disadvantage of this method is that it reduces the effectiveness of data-reduction features such as compression, compaction and deduplication (cross-volume dedupe).

To eliminate this disadvantage, the NetApp gurus came up with the idea of applying the encryption feature at the aggregate level, allowing the volumes residing within the same aggregate to share the encryption key. This technology is known as “NetApp Aggregate Encryption” (NAE). It gives customers the option to take advantage of storage efficiency technologies in conjunction with the encryption process.

Now it’s time to talk about how we can create an encrypted aggregate in Ontap but first of all… What is an aggregate within Ontap?

Using the NetApp Knowledge Base portal as a reference:

An aggregate is a collection of disks (or partitions) arranged into one or more RAID groups.  It is the most basic storage object within ONTAP and is required to allow for the provisioning of space for connected hosts.

NetApp Knowledge Base
© 2021 flackbox.com

Step 1: Validate Ontap requirements.

In order to use the encryption option at the aggregate level, a version of Ontap 9.6 or higher is required; also make sure the required licenses are installed in the cluster. In this case we use the <version> command to validate the current version of the cluster and the <license show -package VE> command to display the license information.

OnPrem-HQ::> version
NetApp Release 9.9.1RC1: Fri Apr 30 06:35:11 UTC 2021
 
OnPrem-HQ::> license show -package VE -fields package,owner,description,type  
  (system license show)
serial-number                  package owner         description               type    
------------------------------ ------- ------------- ------------------------- ------- 
X-XX-XXXXXXXXXXXXXXXXXXXXXXXXX VE      OnPrem-HQ-01 Volume Encryption License license 
X-XX-XXXXXXXXXXXXXXXXXXXXXXXXX VE      OnPrem-HQ-02 Volume Encryption License license 
2 entries were displayed.

OnPrem-HQ::> 

Note: I had previously done the external KMS setup in Ontap. Link

Step 2: Validate the available “Spare” disks.

To begin with, there are two ways to encrypt an aggregate: at creation time, or by live converting an existing one. Initially I will be creating a new aggregate, and then in another tutorial I will show you how easy it is to convert an existing one (a one-command sketch of that conversion is shown right after this paragraph). To create an aggregate you need to have disk drives available, or in the “spare” state as NetApp commonly calls it.
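For reference, the live conversion mentioned above boils down to a single command against the existing aggregate; the sketch below uses the second aggregate of my lab only as an illustration, and the detailed walk-through is left for that future post.

OnPrem-HQ::> storage aggregate modify -aggregate OnPrem_HQ_02_SSD_1 -node OnPrem-HQ-02 -encrypt-with-aggr-key true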

The <storage aggregate show-spare-disks> command allows us to see how many partitioned disks are available on the node where I will create the new encrypted aggregate. In this particular case you can see that there are 24 partitioned disks using the “Root-Data1-Data2” option. To learn more about this disk strategy please follow the link below:

ADP(v1) and ADPv2 in a nutshell, it’s delicious!

 © 2021 Chris Maki
OnPrem-HQ::> storage aggregate show-spare-disks -original-owner OnPrem-HQ-01 
                                                                      
Original Owner: OnPrem-HQ-01
 Pool0
  Root-Data1-Data2 Partitioned Spares
                                                              Local    Local
                                                               Data     Root Physical
 Disk             Type   Class          RPM Checksum         Usable   Usable     Size Status
 ---------------- ------ ----------- ------ -------------- -------- -------- -------- --------
 VMw-1.1          SSD    solid-state      - block           11.63GB       0B  26.67GB zeroed
 VMw-1.2          SSD    solid-state      - block           11.63GB       0B  26.67GB zeroed
 VMw-1.3          SSD    solid-state      - block           11.63GB       0B  26.67GB zeroed
 VMw-1.4          SSD    solid-state      - block           11.63GB       0B  26.67GB zeroed
 VMw-1.5          SSD    solid-state      - block           11.63GB       0B  26.67GB zeroed
 VMw-1.6          SSD    solid-state      - block           11.63GB       0B  26.67GB zeroed
 VMw-1.7          SSD    solid-state      - block           11.63GB       0B  26.67GB zeroed
 VMw-1.8          SSD    solid-state      - block           11.63GB       0B  26.67GB zeroed
 VMw-1.9          SSD    solid-state      - block           11.63GB       0B  26.67GB zeroed
 VMw-1.10         SSD    solid-state      - block           11.63GB       0B  26.67GB zeroed
 VMw-1.11         SSD    solid-state      - block           11.63GB   3.35GB  26.67GB zeroed
 VMw-1.12         SSD    solid-state      - block           11.63GB   3.35GB  26.67GB zeroed
 VMw-1.13         SSD    solid-state      - block           11.63GB       0B  26.67GB zeroed
 VMw-1.14         SSD    solid-state      - block           11.63GB       0B  26.67GB zeroed
 VMw-1.15         SSD    solid-state      - block           11.63GB       0B  26.67GB zeroed
 VMw-1.16         SSD    solid-state      - block           11.63GB       0B  26.67GB zeroed
 VMw-1.17         SSD    solid-state      - block           11.63GB       0B  26.67GB zeroed
 VMw-1.18         SSD    solid-state      - block           11.63GB       0B  26.67GB zeroed
 VMw-1.19         SSD    solid-state      - block           11.63GB       0B  26.67GB zeroed
 VMw-1.20         SSD    solid-state      - block           11.63GB       0B  26.67GB zeroed
 VMw-1.21         SSD    solid-state      - block           11.63GB       0B  26.67GB zeroed
 VMw-1.22         SSD    solid-state      - block           11.63GB       0B  26.67GB zeroed
 VMw-1.23         SSD    solid-state      - block           11.63GB       0B  26.67GB zeroed
 VMw-1.24         SSD    solid-state      - block           11.63GB       0B  26.67GB zeroed
24 entries were displayed.

OnPrem-HQ::> 

Step 3: Create an encrypted aggregate.

To create the encrypted aggregate we use the <storage aggregate create> command with the <encrypt-with-aggr-key true> option turned on. In this case we create a secure aggregate composed of 23 disk “partitions”.

Note: For this example the RAID type “Dual Parity” was used.

OnPrem-HQ::> storage aggregate create -aggregate OnPrem_HQ_01_SSD_1 -diskcount 23 -node OnPrem-HQ-01 -raidtype raid_dp -encrypt-with-aggr-key true 

Info: The layout for aggregate "OnPrem_HQ_01_SSD_1" on node "OnPrem-HQ-01"
      would be:
      
      First Plex
      
        RAID Group rg0, 23 disks (block checksum, raid_dp)
                                                            Usable Physical
          Position   Disk                      Type           Size     Size
          ---------- ------------------------- ---------- -------- --------
          shared     VMw-1.1                   SSD               -        -
          shared     VMw-1.2                   SSD               -        -
          shared     VMw-1.3                   SSD         11.61GB  11.64GB
          shared     VMw-1.4                   SSD         11.61GB  11.64GB
          shared     VMw-1.5                   SSD         11.61GB  11.64GB
          shared     VMw-1.6                   SSD         11.61GB  11.64GB
          shared     VMw-1.7                   SSD         11.61GB  11.64GB
          shared     VMw-1.8                   SSD         11.61GB  11.64GB
          shared     VMw-1.9                   SSD         11.61GB  11.64GB
          shared     VMw-1.10                  SSD         11.61GB  11.64GB
          shared     VMw-1.18                  SSD         11.61GB  11.64GB
          shared     VMw-1.16                  SSD         11.61GB  11.64GB
          shared     VMw-1.13                  SSD         11.61GB  11.64GB
          shared     VMw-1.14                  SSD         11.61GB  11.64GB
          shared     VMw-1.15                  SSD         11.61GB  11.64GB
          shared     VMw-1.19                  SSD         11.61GB  11.64GB
          shared     VMw-1.20                  SSD         11.61GB  11.64GB
          shared     VMw-1.21                  SSD         11.61GB  11.64GB
          shared     VMw-1.17                  SSD         11.61GB  11.64GB
          shared     VMw-1.22                  SSD         11.61GB  11.64GB
          shared     VMw-1.11                  SSD         11.61GB  11.64GB
          shared     VMw-1.12                  SSD         11.61GB  11.64GB
          shared     VMw-1.23                  SSD         11.61GB  11.64GB
      
      Aggregate capacity available for volume use would be 219.5GB.
      
Do you want to continue? {y|n}: y
[Job 817] Job succeeded: DONE                                                  

OnPrem-HQ::> 

Once created, you should validate the aggregate; to do so, use the command <storage aggregate show>, filtering the result with the <encrypt-with-aggr-key> field.

OnPrem-HQ::> storage aggregate show -fields aggregate,size,availsize,usedsize,state,node,raidstatus,encrypt-with-aggr-key 
aggregate           node          availsize raidstatus      size    state  usedsize encrypt-with-aggr-key 
------------------- ------------- --------- --------------- ------- ------ -------- --------------------- 
OnPrem_HQ_01_SSD_1 OnPrem-HQ-01 219.5GB   raid_dp, normal 219.5GB online 480KB    true                  
OnPrem_HQ_02_SSD_1 OnPrem-HQ-02 209.3GB   raid_dp, normal 219.5GB online 10.12GB  false                 
aggr0_OnPrem_HQ_01 OnPrem-HQ-01 1.11GB    raid_dp, normal 22.80GB online 21.69GB  false                 
aggr0_OnPrem_HQ_02 OnPrem-HQ-02 1.11GB    raid_dp, normal 22.80GB online 21.69GB  false                 
4 entries were displayed.

OnPrem-HQ::> 

In the command result you can see that the aggregate was created with encryption capability enabled.

Step 4: Create a volume within the encrypted aggregate.

Unlike volume-level encryption (NVE), when using aggregate-level encryption it is not required to specify the encrypt option to create the volume. The <vol create> command creates an encrypted volume by default when the volume resides in an aggregate configured with NAE.

OnPrem-HQ::> vol create -vserver SAN -volume Secure_Vol -aggregate OnPrem_HQ_01_SSD_1 -size 10GB -space-guarantee none 
[Job 818] Job succeeded: Successful                                            

OnPrem-HQ::>

By using the <vol show> command with the <encryption-state full> filter option you can see the volume was created encrypted by default.

OnPrem-HQ::> vol show -encryption-state full -aggregate OnPrem_HQ_01_SSD_1 -fields Vserver,Volume,encrypt,encryption-type,encryption-state 
vserver volume     encryption-type encrypt encryption-state 
------- ---------- --------------- ------- ---------------- 
SAN     Secure_Vol aggregate       true    full             

OnPrem-HQ::>

Summary

In this tutorial I showed you how to configure the aggregate-level encryption technology within Ontap, which allows us to use a single security key to create encrypted volumes. This lets us use data-reduction technologies in conjunction with security mechanisms that enhance or strengthen the security posture of the organization.

NetApp Volume Encryption Setup with External Key Manager

Using NetApp documentation as a reference:

NetApp Volume Encryption (NVE) is a software-based technology for encrypting data at rest one volume at a time. An encryption key that can only be accessed by the storage system ensures that the data on the volume cannot be read if the underlying device is reused, returned, lost or stolen.

NetApp Documentation

In this tutorial I explain how easy it is to configure and manage this impressive security feature.

Before starting to configure this feature it is necessary to have an existing “Key Management Service”. For the purpose of this tutorial I will use the “HyTrust KeyControl” KMS which I previously explained in the blog. If you want to know more about it, read the following post.

Step 1: Create a client certificate for authentication purposes:

Go to [KMIP > Client Certificate] and select Create Certificate in the Actions menu.

Select a name for the certificate and press Create.

Once the certificate has been created, it should be stored in a safe place.

The downloaded file contains the Client and Root CA certificate needed to configure the KMS option in Ontap.
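Before configuring anything in Ontap it can be useful to peek at the downloaded files with openssl, just to be sure which one is the client certificate and which one is the root CA. The file names below are the ones referenced later in this post and may differ in your download:

$ openssl x509 -noout -subject -issuer -dates -in ONTAPEncryption.pem
$ openssl x509 -noout -subject -issuer -dates -in cacert.pem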

Step 2: Validation of the Ontap cluster

The important things to consider before you can configure this security feature are the cluster status and the necessary license to support the encryption feature.

The cluster show command displays the overall status of the Ontap cluster.

OnPrem-HQ::> cluster show 
Node                  Health  Eligibility
--------------------- ------- ------------
OnPrem-HQ-01         true    true
OnPrem-HQ-02         true    true
2 entries were displayed.

The system node show command displays the node health and the system model.

OnPrem-HQ::> system node show
Node      Health Eligibility Uptime        Model       Owner    Location  
--------- ------ ----------- ------------- ----------- -------- ---------------
OnPrem-HQ-01 true true           00:18:10 SIMBOX
OnPrem-HQ-02 true true           00:18:08 SIMBOX
2 entries were displayed.

With the system license show command you can validate the license installed on the cluster. Here you can see that the volume encryption license is installed on both nodes.

OnPrem-HQ::> system license show -package VE

Serial Number: X-XX-XXXXXXXXXXXXXXXXXXXXXXXXXX
Owner: OnPrem-HQ-01
Installed License: Legacy Key
Capacity: -
Package           Type     Description           Expiration
----------------- -------- --------------------- -------------------
VE                license  Volume Encryption License 
                                                 -

Serial Number: X-XX-XXXXXXXXXXXXXXXXXXXXXXXXXX
Owner: OnPrem-HQ-02
Installed License: Legacy Key
Capacity: -
Package           Type     Description           Expiration
----------------- -------- --------------------- -------------------
VE                license  Volume Encryption License 
                                                 -
2 entries were displayed.

OnPrem-HQ::> 

Step 3: Configuration of the KMS certificate in Ontap:

As stated in the NetApp documentation:

The cluster and the KMIP server use KMIP SSL certificates to verify each other’s identity and establish an SSL connection. Before setting up the SSL connection to the KMIP server, you must install the KMIP client SSL certificates for the cluster and the public SSL certificate for the KMIP server’s root Certificate Authority (CA).

NetApp KMIP Documentation

To install the KMIP client certificate on the NetApp cluster, run the following commands:

Note: The contents of the certificates are extracted from the previously downloaded files. The ONTAPEncryption.pem and cacert.pem files contain the necessary information.

OnPrem-HQ::> security certificate install -vserver OnPrem-HQ -type client -subtype kmip-cert
Please enter Certificate: Press <Enter> when done
-----BEGIN CERTIFICATE----- 
Certificate Content
-----END CERTIFICATE-----
Please enter Private Key: Press <Enter> when done
-----BEGIN PRIVATE KEY-----  
Certificate Private Key Content
-----END PRIVATE KEY-----
You should keep a copy of the private key and the CA-signed digital certificate for future
reference.

The installed certificate's CA and serial number for reference:
CA: HyTrust KeyControl Certificate Authority
serial: C9A148B9

The certificate's generated name for reference: ONTAPEncryption
OnPrem-HQ::> security certificate install -vserver OnPrem-HQ -type server-ca -subtype kmip-cert 

Please enter Certificate: Press <Enter> when done
-----BEGIN CERTIFICATE-----  
Certificate Content
-----END CERTIFICATE-----
You should keep a copy of the CA-signed digital certificate for future reference.

The installed certificate's CA and serial number for reference:
CA: HyTrust KeyControl Certificate Authority
serial: 60A148B6

The certificate's generated name for reference: HyTrustKeyControlCertificateAuthority
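
To double-check that both certificates were installed correctly before moving on, the security certificate show command can be used, filtering by type (output omitted here):

OnPrem-HQ::> security certificate show -vserver OnPrem-HQ -type client
OnPrem-HQ::> security certificate show -vserver OnPrem-HQ -type server-ca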

Step 4: Configure the NetApp Volume Encryption solution:

For this tutorial it is necessary to configure an external key management server (KMS) so that the storage system can securely store and retrieve the authentication keys for the NetApp Volume Encryption (NVE) solution.

Note: NetApp recommends a minimum of two servers for redundancy and disaster recovery.

The following command adds the KMS server to the Ontap system using IP address 192.168.7.201, TCP port 5696, and the previously installed certificates.

OnPrem-HQ::> security key-manager external enable -vserver OnPrem-HQ -key-servers 192.168.7.201:5696 -client-cert ONTAPEncryption -server-ca-certs HyTrustKeyControlCertificateAuthority 

OnPrem-HQ::> security key-manager external show                                                                                                                                           

                  Vserver: OnPrem-HQ
       Client Certificate: ONTAPEncryption
   Server CA Certificates: HyTrustKeyControlCertificateAuthority
          Security Policy: -

Key Server
------------------------------------------------
192.168.7.201:5696
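
Since NetApp recommends at least two key servers, a second KMS could be registered later with the security key-manager external add-servers command. A sketch of the syntax, using 192.168.7.202 as a hypothetical second KeyControl node:

OnPrem-HQ::> security key-manager external add-servers -vserver OnPrem-HQ -key-servers 192.168.7.202:5696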

It is important to validate that the KMS service is “available” before proceeding to create encrypted volumes. The security key-manager external show-status command lets you check the status of the service.

OnPrem-HQ::> security key-manager external show-status

Node  Vserver  Key Server                                   Status
----  -------  -------------------------------------------  ---------------
OnPrem-HQ-01
      OnPrem-HQ
               192.168.7.201:5696                           available
OnPrem-HQ-02
      OnPrem-HQ
               192.168.7.201:5696                           available
2 entries were displayed.

OnPrem-HQ::> 

Step 5: Create an encrypted volume (NVE)

In this part of the tutorial we validate that the configuration is correct by creating an encrypted volume in Ontap. For this step I will use the vol create command with the -encrypt true option.

OnPrem-HQ::> vol create TEST_Encryption -vserver SAN -size 10G -aggregate OnPrem_HQ_01_SSD_1 -encrypt true 
  
[Job 763] Job succeeded: Successful 

With the vol show command I can verify that the volume was created with encryption enabled.

OnPrem-HQ::> vol show -encryption -vserver SAN -encryption-state full 
Vserver   Volume       Aggregate    State      Encryption State
--------- ------------ ------------ ---------- ----------------
SAN       TEST_Encryption OnPrem_HQ_01_SSD_1 online full

OnPrem-HQ::> 
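
Existing volumes can also be encrypted in place once the external key manager is enabled. As a hedged example (the volume name below is only illustrative), the conversion would be started and monitored with:

OnPrem-HQ::> volume encryption conversion start -vserver SAN -volume Existing_Vol
OnPrem-HQ::> volume encryption conversion show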

Step 6: Validate the Encryption information on the KMS server.

In the last step, log in to the administration portal of the “HyTrust KeyControl” application to confirm that the encryption keys are stored on the platform. Go to the [KMIP > Objects] menu, where you can verify that the keys were created when the volume was created in Ontap.
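
The same information can be cross-checked from the Ontap side. The security key-manager key query command lists the keys known to the cluster, and the key IDs reported there should correspond to the objects shown in KeyControl:

OnPrem-HQ::> security key-manager key query -vserver OnPrem-HQ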

Summary

In this tutorial I showed you how to configure the KMS service within Ontap so that we can create encrypted volumes. This simple solution helps improve the security posture of our organization.

NetApp PowerShell Toolkit – Get Lun Mapping Information

Recently, in a post on the NetApp forum, a user asked for help creating a PowerShell function using the DataONTAP libraries. Here I show how these libraries can be used to join multiple objects containing information about the LUNs mapped in NetApp.

An interesting aspect of this request is that the Ontap libraries do not natively let you filter the required information and display it in a single table. To work around this, we build a custom object in PowerShell so the information can be presented in a format that makes more sense.

NetApp libraries can be installed from the PowerShell Gallery:

https://www.powershellgallery.com/packages/DataONTAP/9.8.0
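
If the module is not yet present, it can be installed and imported directly from the gallery before running the script below, for example:

Install-Module -Name DataONTAP -Scope CurrentUser
Import-Module DataONTAP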

[Code of the <get-luninfo> function.]

Import-Module DataONTAP

# Connect to the Ontap storage system
Connect-NcController -Name <cluster> -Vserver <vserver>

# Get the list of LUN paths
$luntable = Get-NcLun | Select-Object -ExpandProperty Path

# Declare the function
function Get-LunInfo {
    param(
        # Path of the LUN to report on
        [string]$lunpath
    )
    # Check if the LUN is mapped to any host (igroup)
    if (Get-NcLunMap $lunpath) {
        # Get the LUN and igroup information
        $lunid            = Get-NcLunMap $lunpath | Select-Object -ExpandProperty LunId
        $lunigroup        = Get-NcLunMap $lunpath | Select-Object -ExpandProperty InitiatorGroup
        $vserver          = Get-NcLun $lunpath | Select-Object -ExpandProperty Vserver
        $igroup           = Get-NcIgroup -Name $lunigroup
        $lunigrouptype    = $igroup.InitiatorGroupType
        $lunigrouptypeOS  = $igroup.InitiatorGroupOsType
        $lunigroupAluaEna = $igroup.InitiatorGroupAluaEnabled

        # Loop through the initiators to find their login (online) status
        $initiatorstatus = @()
        foreach ($in in $igroup.Initiators.InitiatorName) {
            $status = Confirm-NcLunInitiatorLoggedIn -VserverContext $vserver -Initiator $in | Select-Object -ExpandProperty Value
            $initiatorstatus += [PSCustomObject]@{Initiator = $in; Online = $status}
        }

        # Create an object to glue the information together for display
        $obj = New-Object -TypeName PSObject
        $obj | Add-Member -MemberType NoteProperty -Name "vServer" -Value $vserver
        $obj | Add-Member -MemberType NoteProperty -Name "Lun ID" -Value $lunid
        $obj | Add-Member -MemberType NoteProperty -Name "IGROUP Name" -Value $lunigroup
        $obj | Add-Member -MemberType NoteProperty -Name "IGROUP TYPE" -Value $lunigrouptype
        $obj | Add-Member -MemberType NoteProperty -Name "IGROUP TYPE OS" -Value $lunigrouptypeOS
        $obj | Add-Member -MemberType NoteProperty -Name "IGROUP ALUA ENABLE" -Value $lunigroupAluaEna
        $obj | Add-Member -MemberType NoteProperty -Name "Lun Path" -Value $lunpath

        # Return the formatted information
        Write-Output $obj | Format-Table
        Write-Output $initiatorstatus
    }
    # If the LUN isn't mapped to any host, display the available information
    else {
        $vserver = Get-NcLun $lunpath | Select-Object -ExpandProperty Vserver
        $obj = New-Object -TypeName PSObject
        $obj | Add-Member -MemberType NoteProperty -Name "vServer" -Value $vserver
        $obj | Add-Member -MemberType NoteProperty -Name "Lun Path" -Value $lunpath
        $obj | Add-Member -MemberType NoteProperty -Name "Lun Mapping" -Value "Lun Not Mapped"
        Write-Output $obj | Format-Table -Wrap -AutoSize
    }
}

# Call the function for every LUN
foreach ($lun in $luntable) {
    Get-LunInfo $lun
}

Example of the retrieved information