ONTAP is a unified storage system, meaning it supports both block (FC, FCoE, NVMeoF and iSCSI) and file (NFS, pNFS, CIFS/SMB) protocols for its clients. The software-defined versions of ONTAP (ONTAP Select & Cloud Volumes ONTAP) do not support the FC, FCoE or FC-NVMe protocols because of their software-defined nature.
Physical ports, VLANs and ifgroups are all considered “ports” in ONTAP, and each port can host multiple LIFs. The hierarchy restricts where LIFs can live: if a port has at least one VLAN, LIFs can be created only on the VLANs, no longer on the port itself; if a port is part of an ifgroup, LIFs can be created only on top of the ifgroup, not on the port itself; and if one or more VLANs are created on top of an ifgroup, LIFs can be created only on those VLANs, not on the ifgroup or the underlying physical ports.
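This hierarchy can be illustrated with the ONTAP CLI. The following is a hedged sketch, assuming a node named node1, physical ports e0c/e0d, VLAN ID 100 and an SVM named svm1 (all invented for illustration; exact option names may vary slightly between ONTAP 9 releases):

```shell
# Create a dynamic multimode (LACP) ifgroup from two physical ports
network port ifgrp create -node node1 -ifgrp a0a -distr-func ip -mode multimode_lacp
network port ifgrp add-port -node node1 -ifgrp a0a -port e0c
network port ifgrp add-port -node node1 -ifgrp a0a -port e0d

# Create a VLAN on top of the ifgroup (VLAN ID 100)
network port vlan create -node node1 -vlan-name a0a-100

# From now on, LIFs can be created only on the VLAN,
# not on a0a, e0c or e0d themselves
network interface create -vserver svm1 -lif nfs_lif1 -role data \
    -data-protocol nfs -home-node node1 -home-port a0a-100 \
    -address 10.0.100.10 -netmask 255.255.255.0
```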
It is a very common configuration to have two ports in an ifgroup with a few VLANs created on top for different protocols. Here are two popular examples; in these examples the storage system is configured with:
- SMB for PC users (MTU 1500)
  - Use case: user home directories
- SMB for Windows Servers (MTU 9000)
  - Use case: MS SQL & Hyper-V
- NFS for VMware & Linux Servers (MTU 9000)
  - Use case: VMware NFS datastores
- iSCSI for VMware, Linux & Windows Servers (MTU 9000)
  - Use case: block devices for OS boot and some other configurations, like Oracle ASM
Example 1 is for customers who want to use all of the Ethernet-based protocols but have only 2 ports per node. Example 2, with dedicated Ethernet ports for iSCSI traffic, is preferable when a storage node has a sufficient number of ports.
Notice that Example 1 has two iSCSI VLANs, A & B, which are not strictly necessary. I would use two in order to increase the number of connections over the ifgroup and improve load balancing, but that is up to the storage & network admins. Normally iSCSI-A & iSCSI-B would each use a separate IP subnet. See the ifgroup section for the network load-balancing explanation.
VLANs in ONTAP separate IP networks from one another and are often used with the NFS, CIFS and iSCSI protocols, though multiple IP addresses are allowed on a single port or VLAN. A VLAN can be created on a physical Ethernet port or on an ifgroup.
An interface group (ifgroup) is a collection of ports (typically with the same speed). Ports in a single ifgroup must be located on a single node. An ifgroup provides network redundancy for Ethernet, and each ifgroup is perceived and used as a physical port. The most notable and widely used type of ifgroup is dynamic multimode, which enables the LACP protocol on its ports: if one port in the group dies, another takes over, fully transparently to upper-level protocols like NFS, SMB and iSCSI. The most notable LACP feature is the ability to distribute data across links in an attempt to load all the links in the ifgroup equally.
Port names starting with the letter “e” denote physical ports: “e”, then the number of the PCIe bus (0 means on-board ports), then a letter starting with “a” that represents the index of the port on that PCIe bus. Ifgroup (virtual aggregated) port names start with “a” (any letter is allowed), then a number, and then another letter, for example a1a; this keeps the same naming convention as physical ports, even though the number and the trailing letter no longer represent anything in particular (i.e. PCIe bus or port position on the bus) and serve only as an index to distinguish the ifgroup from other ports.
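To make the naming scheme concrete, here is a small parser of my own (not anything NetApp ships) that splits a port name into its parts under the convention described above:

```python
import re

def parse_port_name(name: str) -> dict:
    """Split an ONTAP-style port name like 'e0a' or 'a1a' into its parts.

    Illustrative only: 'e' marks a physical port; any other leading
    letter is treated as an ifgroup, where the bus number and index
    carry no physical meaning.
    """
    m = re.fullmatch(r"([a-z])(\d+)([a-z])", name)
    if not m:
        raise ValueError(f"unexpected port name: {name}")
    prefix, bus, index = m.groups()
    return {
        "physical": prefix == "e",   # 'e' => physical port
        "pcie_bus": int(bus),        # 0 => on-board (meaningful for physical only)
        "port_index": index,         # position on the bus (physical only)
    }

print(parse_port_name("e0a"))  # physical on-board port, first on the bus
print(parse_port_name("a1a"))  # ifgroup: number and letter are just an index
```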
Unfortunately, LACP load distribution is far from perfect: the more hosts in the network communicate with an ifgroup, the higher the probability that traffic is distributed equally across the network ports. LACP uses a single static formula based on the source and destination information of a network packet; there is no intelligent analysis or decision-making as in, for example, SAN protocols, and there is no feedback from the lower Ethernet level to upper-level protocols. LACP is also often used in conjunction with Multi-Chassis EtherChannel (vPC is another commercial name) to distribute links across a few switches and provide switch redundancy, which requires additional effort in switch configuration and from the switches themselves. SAN protocols, on the other hand, do not need switches to provide this type of redundancy because it is done at the upper protocol level. This is the primary reason why the SMB and NFS protocols developed their own extensions providing similar functionality: to distribute load across links more intelligently and equally, and to be aware of network path status.
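The static nature of that distribution formula can be sketched in a few lines of Python. The hash below is invented for illustration (real switches and ONTAP use vendor-specific functions over MAC/IP/port fields), but the behaviour it demonstrates is the same: one flow is always pinned to the same link, and only many flows tend toward an even spread:

```python
import hashlib
from collections import Counter

def pick_link(src_ip: str, dst_ip: str, num_links: int) -> int:
    # Static hash of packet source/destination, as in LACP-style
    # distribution; no feedback about actual link utilization.
    key = f"{src_ip}->{dst_ip}".encode()
    return int(hashlib.sha256(key).hexdigest(), 16) % num_links

# A single host<->storage flow lands on the same link every time:
flow_links = {pick_link("10.0.0.5", "10.0.100.10", 2) for _ in range(1000)}
print(len(flow_links))  # 1

# Many hosts only *tend* to spread evenly across the two links:
spread = Counter(pick_link(f"10.0.0.{i}", "10.0.100.10", 2) for i in range(1, 100))
print(spread)
```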
One day these protocol extensions will fully remove the need for Ethernet LACP & Multi-Chassis EtherChannel: pNFS, NFS session trunking (NFS multipathing), SMB Multichannel and SMB Continuous Availability. Until then, we will keep using ifgroups in configurations that do not support those protocols.
NFS was the first protocol available in ONTAP. The latest versions of ONTAP 9 support NFSv2, NFSv3, NFSv4 (4.0 and 4.1) and pNFS. Starting with 9.5, ONTAP supports 4-byte UTF-8 sequences in the names of files and directories.
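For reference, mounting an ONTAP NFS export from a Linux host looks like any other NFS mount. A minimal sketch, with an assumed LIF address and volume junction path:

```shell
# Mount an ONTAP export over NFSv4.1 (LIF IP and junction path are examples)
mount -t nfs -o vers=4.1 10.0.100.10:/vol_nfs /mnt/vol_nfs
```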
A network switch is technically not required for NFS traffic and direct host connection is possible, but a network switch is used in all configurations to provide an additional level of network redundancy and to make it easier to add new hosts when needed.
ONTAP supports SMB 2.0 and higher, up to SMB 3.1. Starting with ONTAP 9.4, SMB Multichannel is supported, which provides functionality similar to multipathing in SAN protocols. Starting with ONTAP 8.2, the SMB protocol supports Continuous Availability (CA) with SMB 3.0 for Microsoft Hyper-V and SQL Server. SMB is a session-based protocol and by default does not tolerate session breaks, so SMB CA helps to tolerate unexpected session loss, for example when a network port goes down. ONTAP supports SMB encryption, which is also known as sealing; hardware-accelerated AES encryption (Intel AES NI) is supported with SMB 3.0 and later. Starting with ONTAP 9.6, FlexGroup volumes support SMB CA and thus MS SQL & Hyper-V on FlexGroup.
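On the ONTAP side these SMB features are toggled per SVM and per share. A hedged sketch, assuming an SVM named svm1 and a share named sql_data (both invented; option names may differ slightly between ONTAP releases):

```shell
# Enable SMB Multichannel for the SVM (ONTAP 9.4+)
vserver cifs options modify -vserver svm1 -is-multichannel-enabled true

# Mark a share as Continuously Available for Hyper-V / MS SQL workloads
vserver cifs share properties add -vserver svm1 -share-name sql_data \
    -share-properties continuously-available
```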
A network switch is technically not required for SMB traffic and direct host connection is possible, but a network switch is used in all configurations to provide an additional level of network redundancy and to make it easier to add new hosts when needed.
Here is the hierarchy visualization & logical representation of NAS protocols in an ONTAP cluster and the corresponding commands (highlighted in grey):
Each NAS LIF can accept both NFS & SMB traffic, but engineers usually tend to separate them onto dedicated LIFs.
ONTAP on physical appliances supports the SCSI-based FCoE and FC protocols, depending on HBA port speed. Both FC & FCoE are known under the single umbrella name FCP. An igroup is a collection of WWPN addresses of initiator hosts that are allowed to access storage LUNs. A WWPN address is an interface on an FC or FCoE port; typically you need to add all the initiator host ports to the igroup for multipathing to work properly. ONTAP uses N_Port ID Virtualization (NPIV) for FC data and therefore requires an FC network switch (typically at least two) that also supports NPIV; direct FC connections from host to storage are not supported.
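A hedged sketch of creating an igroup and mapping a LUN to it, with invented SVM/igroup names, WWPNs and paths:

```shell
# Collect the host's initiator WWPNs into an igroup
# (one port per fabric, so multipathing can work)
lun igroup create -vserver svm1 -igroup esx01 -protocol fcp -ostype vmware \
    -initiator 10:00:00:00:c9:30:15:6a,10:00:00:00:c9:30:15:6b

# Map a LUN to the igroup so the host is allowed to see it
lun mapping create -vserver svm1 -path /vol/vol1/lun1 -igroup esx01
```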
Here is the hierarchy visualization & logical representation of FCP protocols in an ONTAP cluster and the corresponding commands (highlighted in grey):
Read more about ONTAP Zoning here.
iSCSI is another SCSI-based protocol, encapsulated in IP/Ethernet transport. NetApp also supports the Data Center Bridging (DCB) protocol on some models, depending on the Ethernet port chips.
A network switch is technically not required for iSCSI traffic, and direct host connection is possible if the number of storage ports allows it, but a network switch is recommended to make it easier to add new hosts when needed.
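On the host side, iSCSI uses IQN addresses instead of WWPNs; with open-iscsi on a Linux host, discovery and login against an ONTAP iSCSI LIF look roughly like this (the LIF IP is an example):

```shell
# Discover targets behind the ONTAP iSCSI LIF, then log in to them
iscsiadm -m discovery -t sendtargets -p 10.0.200.10
iscsiadm -m node -p 10.0.200.10 --login
```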
Here is the hierarchy visualization & logical representation of the iSCSI protocol in an ONTAP cluster and the corresponding commands (highlighted in grey):
NVMe over Fabrics (NVMeoF) refers to the ability to run the NVMe protocol over existing network infrastructure such as Ethernet (converged or traditional), TCP, Fibre Channel or InfiniBand for transport, as opposed to plain NVMe, where no extra transport layer is used and devices are connected directly to the PCIe bus. NVMe is a SAN block data protocol.
Starting with ONTAP 9.5, the NVMe ANA protocol is supported, which provides multipathing functionality to NVMe similarly to ALUA. ANA for NVMe is currently supported only with SUSE Linux Enterprise 15, VMware 6.7 and Windows Server 2012/2016. FC-NVMe without ANA is supported with SUSE Linux Enterprise 12 SP3 and Red Hat Enterprise Linux 7.6.
NVMeoF is supported only on All-Flash FAS systems, and not on the entry-level A200 and A220 systems due to the lack of 32 Gb FC ports. A subsystem in NVMe is used for the same purpose as an igroup: it lists the NQN addresses of initiator hosts that are allowed to access a namespace. A namespace in this context is very similar to a LUN in FCP or iSCSI. Do not confuse the namespace term in NVMe with the single namespace of an ONTAP cluster.
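A hedged sketch of the NVMe subsystem and namespace setup, with invented SVM/subsystem names, path and an example host NQN (commands as of ONTAP 9.4/9.5; exact syntax may vary):

```shell
# Create an NVMe subsystem (plays the role an igroup plays for FCP/iSCSI)
vserver nvme subsystem create -vserver svm1 -subsystem linux01 -ostype linux

# Allow an initiator host by its NQN
vserver nvme subsystem host add -vserver svm1 -subsystem linux01 \
    -host-nqn nqn.2014-08.org.nvmexpress:uuid:1a2b3c4d-0000-0000-0000-000000000001

# Create a namespace (the NVMe analogue of a LUN) and map it to the subsystem
vserver nvme namespace create -vserver svm1 -path /vol/nvme_vol/ns1 \
    -size 100g -ostype linux
vserver nvme subsystem map add -vserver svm1 -subsystem linux01 \
    -path /vol/nvme_vol/ns1
```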
Here is the hierarchy visualization & logical representation of the FC-NVMe protocol in an ONTAP cluster and the corresponding commands (highlighted in grey):
Spirit of this article
This article explains the principles, architecture, NetApp’s unique approaches and maybe even the spirit of ONTAP clusterization. Each configuration, model and appliance has its nuances, which are left out of the scope of this article. I’ve tried to give a general sense of the ideas behind NetApp’s innovative technologies, while (trying) not to put too many details into the equation, to keep it simple without losing important system architecture details. This is far from a complete story about ONTAP: to make complex information easier to consume, I didn’t mention, for example, the 7-Mode Transition Tool (7MTT) required for the transition to Clustered ONTAP, nor did I go into a detailed explanation of WAFL. Therefore, some things might be a bit different in your case.
Clustered ONTAP was first introduced around 2010 in version 8. After almost 10 years of hardening, ONTAP (the clustered version) has become a mature, highly scalable and flexible solution, offering not just unique, unprecedented-on-the-market functionality in a single product, but also impressive performance thanks to its WAFL versatility & clusterization capabilities.
Continue to read
Please note that in this article I described my own understanding of the internal organization of ONTAP systems. Therefore, this information might be outdated, or I simply might be wrong in some aspects and details. I will greatly appreciate any contribution you make toward improving this article; please leave your ideas and suggestions about this topic in the comments below.
All product names, logos, and brands are property of their respective owners. All company, product and service names used in this website are for identification purposes only.