How does the ONTAP cluster work? (part 2)

This article is part of the series How does the ONTAP cluster work? The previous series of articles, How ONTAP Memory Works, is also a good companion to this one.

High Availability

I will call HA the first type of clusterization; its main purpose is data availability. Even though a single HA pair consists of two nodes (or controllers), NetApp has designed it in such a way that it appears as a single storage system from the client's point of view. HA configurations in ONTAP use several techniques to present the two nodes of the pair as a single system. This allows the storage system to provide its clients with nearly uninterrupted access to their data should a node fail unexpectedly.

For example: on the network level, ONTAP will temporarily migrate the IP address of the failed node to the surviving node, and where applicable it will also temporarily switch ownership of disk drives from the downed node to the surviving node. On the data level, the contents of the disks that are assigned to the downed node will automatically be available for use via the surviving node.

An aggregate can include only disks owned by a single node; therefore, each aggregate is owned by one node, and upper-level objects such as FlexVol volumes, LUNs, and file shares are served by a single controller (until FlexGroup). Since each node in an HA pair can have its own disks and aggregates and serve them independently, such configurations are called Active/Active: both nodes are utilized simultaneously even though they are not serving the same data. If one node fails, the other takes over and serves its partner's disks, aggregates, and FlexVol volumes alongside its own. HA configurations where only one controller has data aggregates are called Active/Passive: the passive node has only a root aggregate and simply waits to take over if the active node fails.

Once the downed node of the HA pair has booted and is up and running, a "giveback" command (issued automatically by default) returns disks, aggregates, and FlexVol resources to the original node.
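The takeover/giveback flow described above can be sketched as a toy model. All names here are illustrative, not real ONTAP APIs, and the real logic is of course far more involved:

```python
# Toy model of an ONTAP-style HA pair: on takeover, the surviving node
# temporarily serves its partner's IP addresses and aggregates; giveback
# returns them once the partner is healthy again.

class Node:
    def __init__(self, name, ips, aggregates):
        self.name = name
        self.ips = list(ips)             # data IPs homed on this node
        self.aggregates = list(aggregates)
        self.taken_over = []             # resources held for a downed partner

    def takeover(self, partner):
        """Surviving node serves the failed partner's IPs and aggregates."""
        self.taken_over = partner.ips + partner.aggregates
        # Clients keep using the same IPs, now answered by this node.

    def giveback(self, partner):
        """Return resources once the partner is booted and healthy."""
        self.taken_over = []

a = Node("node-a", ["10.0.0.1"], ["aggr1_a"])
b = Node("node-b", ["10.0.0.2"], ["aggr1_b"])

b.takeover(a)                            # node-a failed unexpectedly
assert "10.0.0.1" in b.taken_over and "aggr1_a" in b.taken_over
b.giveback(a)                            # node-a is back; giveback issued
assert b.taken_over == []
```

The key point the sketch captures is that clients never see a second system: the same addresses and data simply move between the two controllers.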

Shared-nothing architecture

There are storage architectures where each node serves the same data with symmetric client access; ONTAP is not one of them. In ONTAP, only one node serves each disk/aggregate/FlexVol at a time, and if that node fails, another takes over. ONTAP uses an architecture known as shared-nothing, meaning no special equipment is really needed for it. Even though hardware appliances include "special" devices like NVRAM/NVDIMM and dual-ported disks, each node in an HA pair runs its own instance of ONTAP on a separate controller, with only NVLogs shared over the HA-IC Ethernet connection between HA partners. Though ONTAP uses these special devices in its hardware appliances, the SDS version of ONTAP works perfectly well without them: NVLogs are still replicated between HA partners, and instead of two controllers having two data ports and accessing the same disk drives, ONTAP SDS can simply replicate data and keep two copies of it, as in MCC configurations. Shared-nothing architectures are particularly useful in scale-out clusters: you can add controllers of different models and configurations, with different disks, and even with slightly different OS versions if needed.

By contrast, storage systems with symmetric data access are often built on monolithic architectures, which in turn suit only SAN protocols. While symmetric access and a monolithic architecture might sound "cool" and seem to deliver more performance at first sight, in practice shared-nothing architectures have shown no less performance, while monolithic architectures have shown plenty of inflexibility and disadvantages. For example, when the industry moved to flash media, it turned out that disks are no longer the performance bottleneck; controllers and CPUs are. That means you need to add more nodes to your storage system to increase performance. This problem can be solved with scale-out clusters, but monolithic architectures are particularly bad on that front. Let me explain why.

First, in a monolithic architecture with symmetric data access, each controller needs access to every disk, so when you add new controllers you have to rearrange disk-shelf connections to satisfy that requirement. Second, all the controllers in such a cluster have to be the same model with the same firmware, which makes these clusters very hard to maintain and expand, and such architectures are usually very limited in the maximum number of controllers you can add to a cluster. Another example, closer to practice than to theory: imagine that after 3 years you need to add a new node to your cluster to increase performance. More powerful controllers are most probably available on the market at the same price you paid for your old ones, but you can add only the old model to the cluster.

Due to its monolithic nature, such an architecture becomes very complex to scale. Most vendors, of course, try to hide this underlying complexity from their customers to simplify the use of such systems, and some A-brand systems are excellent on that front. But monolithic inflexibility still makes such systems complex at a low level and thus very expensive, because it requires specially designed hardware, main boards, and buses. Shared-nothing architectures, on the other hand, need no modifications to commodity servers to be used as storage controllers or hardware storage appliances, and neither scalability nor performance is a problem for them.

HA interconnect

High-availability clusters (HA clusters) are the first type of clusterization introduced in ONTAP systems (that's why I call it the first). The first, second, and third types of ONTAP clusterization are not official or well-known industry terms; I use them only to differentiate ONTAP capabilities while keeping them under the same umbrella, because at some level they are all clusterization technologies. HA aims to ensure an agreed level of operations. People often confuse HA with the horizontal-scaling ONTAP clusterization that came from the Spinnaker acquisition; therefore NetApp, in its documentation for Clustered ONTAP systems, refers to an HA configuration as an HA pair rather than an HA cluster. I will refer to the horizontal-scaling ONTAP (Spinnaker) clusterization as the third type of clusterization, to make it even more difficult. I am just kidding; by doing so I'm drawing parallels among all three types of clusterization so you can easily see the differences between them.

An HA pair uses network connectivity between the partners called the High Availability interconnect (HA-IC). The HA interconnect can use Ethernet (in some older systems you might find InfiniBand) as the communication medium. The HA interconnect is used for non-volatile memory log (NVLog) replication between the two nodes of an HA pair, using RDMA technology, to ensure an agreed level of operations during events like unexpected reboots. Usually ONTAP assigns dedicated, non-sharable ports for the HA interconnect, which can be external or built into the storage chassis (and not visible from the outside). We should not confuse the HA-IC with the inter-cluster or intra-cluster interconnects that are used for SnapMirror. Inter-cluster and intra-cluster interfaces can coexist with interfaces used for data protocols on data ports. Also, HA-IC traffic should not be confused with the cluster interconnect traffic used for horizontal scaling and online data migration across a multi-node cluster; usually these two interfaces live on different ports. HA-IC interfaces are visible only at the node shell level. Starting with the A320, HA-IC and cluster interconnect traffic use the same ports.


MetroCluster is free functionality for ONTAP systems providing metro high availability with synchronous replication between two sites; this configuration might require some additional equipment. There can be only two sites. To distinguish the "old" MetroCluster in 7-Mode from the "new" MetroCluster in Cluster-Mode, the latter is shortened to MCC. I will call MCC the second type of ONTAP clusterization. The primary purpose of MCC clusterization is to provide data protection and data availability across two geographic locations, and to switch clients from one site to the other in case of a disaster so they can continue to access the data.

MetroCluster (MCC) adds a level of data availability to HA configurations and was initially supported only with FAS and AFF storage systems; later an SDS version of MetroCluster was introduced with the ONTAP Select & Cloud Volumes ONTAP products. An MCC configuration consists of two sites (each site can have a single node or an HA pair) that together form the MetroCluster. The distance between sites can reach up to 300 km (186 miles) or even 700 km (435 miles), so it is called a geo-distributed system. Plex and SyncMirror are the critical underlying technologies for MetroCluster that synchronize data between the two sites. In MCC configurations, NVLogs are also replicated between the storage systems across sites; in this article I will refer to this traffic as MetroCluster traffic, to distinguish it from HA interconnect and cluster interconnect traffic.

MetroCluster uses RAID SyncMirror (RSM) and the plex technique: on one site, a number of disks form one or more RAID groups aggregated into a plex, while the second site has the same number of disks of the same type and RAID configuration aggregated into a second plex, and one plex replicates data to the other. Alongside NVLogs, ONTAP replicates Configuration Replication Service (CRS) metadata. NVLogs are replicated from one system to the other as part of the SyncMirror process; on the destination system the NVLogs are restored to the MBUF and dumped to disks as part of the next CP process, so from a logical point of view it looks as if data is synchronously replicated between the two plexes. To simplify things, NetApp usually shows one plex synchronously replicating to the other, but in reality NVLogs are synchronously replicated between the non-volatile memories of the two sites. The two plexes form an aggregate, and in case of a disaster on one site, the second site provides read-write access to the data. MetroCluster supports FlexArray technology and ONTAP SDS.

As part of the third type of clusterization, individual data volumes, LUNs, and LIFs can migrate online across storage nodes in the MetroCluster, but only within the site where the data originated: it is not possible to migrate individual volumes, LUNs, or LIFs across sites using cluster capabilities. Instead, the MetroCluster switchover operation (the second type of clusterization) switches an entire half of the cluster, with all its data, volumes, LIFs, and storage configuration, so that clients and applications access all the data from the other location.

With MCC it is possible to have one, two, or four storage nodes per site: one node per site is known as a 2-node configuration (or two-pack), two nodes per site as a 4-node configuration, and an 8-node configuration has 4 nodes per site. The local HA partner (if one exists) and the remote partner must be the same model: in 2- and 4-node configurations, all nodes must be the same model and configuration. In an MCC configuration, one remote and one local storage node form a Disaster Recovery pair (DR pair) across the two sites, while two local nodes (if there is a partner) form a local HA pair; thus each node synchronously replicates its non-volatile memory to two nodes: one remote and one local (if there is one). In other words, a 4-node configuration consists of two HA pairs, and in this case NVLogs are replicated to the remote site and to the local HA partner, as in a normal non-MCC HA system, while in a 2-node configuration NVLogs are replicated only to the remote partner.

An MCC configuration with one node on each site is called a two-pack (or 2-node) MCC configuration.
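The NVLog replication targets for 2- and 4-node MCC described above can be illustrated with a small sketch (a hypothetical helper, not a NetApp tool; node names are made up):

```python
# For each node in an MCC config, compute where its NVLogs are replicated:
# always to the remote DR partner, and also to the local HA partner when
# the site has two nodes.

def nvlog_targets(site_a, site_b):
    """site_a/site_b: lists of node names (1 or 2 nodes per site)."""
    targets = {}
    for local, remote in ((site_a, site_b), (site_b, site_a)):
        for i, node in enumerate(local):
            t = [remote[i]]              # remote DR partner
            if len(local) == 2:          # local HA partner, if any
                t.append(local[1 - i])
            targets[node] = t
    return targets

# 4-node MCC: two HA pairs; NVLogs go to the DR partner and the HA partner.
four = nvlog_targets(["a1", "a2"], ["b1", "b2"])
assert four["a1"] == ["b1", "a2"]

# 2-node (two-pack) MCC: NVLogs go only to the remote partner.
two = nvlog_targets(["a1"], ["b1"])
assert two["a1"] == ["b1"]
```

So in a 4-node configuration every node keeps its non-volatile memory mirrored in two other places, which is what allows both local HA takeover and cross-site switchover.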

8-Node MCC

An 8-node MCC configuration consists of two almost independent 4-node MCCs (each with two HA pairs); as in a 4-node configuration, each storage node has only one remote partner and only one local HA partner. The only difference between two completely independent 4-node MCCs and an 8-node MetroCluster configuration is that the 8-node configuration shares cluster interconnect switches, so the entire 8-node cluster is seen by clients as a single namespace, and the system administrator can move data online between all the nodes in the MetroCluster within a local site. An example of an 8-node MCC is four nodes of AFF A700 and four nodes of FAS8200, with two A700 nodes and two FAS8200 nodes on one site and the second half on the other site.

MCC network transport: FC & IP

MCC can use two network transports for synchronization: FC or IP. Most FC configurations require dedicated FC-VI ports, usually located on an FC-VI card, but some FAS/AFF models can convert on-board FC ports to FC-VI mode. IP requires iWARP interfaces, which live on Ethernet ports (25 GbE or higher), usually on an iWARP card. Some models, such as the entry-level A220, can use onboard ports and share them with cluster interconnect traffic, while MCC-FC does not support entry-level systems.

MCC: Fabric & Stretched

Fabric configurations are configurations with switches, while stretched configurations are those without a switch. The terms Fabric and Stretched usually apply only to the FC network transport, because the IP transport always requires a switch. Stretched configs can use only 2 nodes in a MetroCluster. With MCC-FC stretched configs it is possible to build a 2-node cluster stretched up to 300 meters (984 feet) without a switch; such configurations require special optical cables with multiple fibers, because of the necessity to cross-connect all controllers and all disk shelves. To reduce the number of fibers, stretched configurations can use FC-SAS bridges to connect the disk shelves and then cross-connect the controllers with the FC-SAS bridges; the second option to reduce the number of required fiber links is to use FlexArray technology instead of NetApp disk shelves.

Fabric MCC-FC

FAS and AFF systems with ONTAP software versions 9.2 and older utilize FC-VI ports and, for long distances, require four Fibre Channel switches dedicated to the MetroCluster (2 on each site) and 2 FC-SAS bridges per disk shelf stack (thus a minimum of 4 total for the 2 sites), plus a minimum of 2 dark fiber ISL links, with optional DWDMs for long distances. Fabric MCC requires FC-SAS bridges. 4-node and 8-node configurations require a pair of cluster interconnect switches.


Starting with ONTAP 9.3, MetroCluster over IP (MCC-IP) was introduced, with no need for the dedicated back-end Fibre Channel switches, FC-SAS bridges, or dedicated dark fiber ISLs that were previously needed for MCC-FC configurations. In such a configuration, disk shelves are directly connected to the controllers, and cluster switches are used for both MetroCluster (iWARP) and cluster interconnect traffic. Initially, only A700 & FAS9000 systems supported MCC-IP. MCC-IP is available only in a 4-node configuration: a 2-node highly available system on each site, with two sites total. With ONTAP 9.4, MCC-IP supports the A800 system and Advanced Drive Partitioning in the form of Root-Data-Data (RD2) partitioning for AFF systems, also known as ADPv2. ADPv2 is supported only on all-flash systems. MCC-IP configurations support a single disk shelf per site, where the SSD drives are partitioned with ADPv2. MetroCluster over IP requires Ethernet cluster switches with installed ISL SFP modules to connect to the remote location, and utilizes iWARP cards in each storage controller for synchronous replication. Starting with ONTAP 9.5, MCC-IP supports distances up to 700 km, the SVM-DR feature, and the AFF A300 and FAS8200 systems. Beginning with ONTAP 9.6, MCC-IP supports the entry-level systems A220 and FAS2750; in these systems the MCC (iWARP), HA, and cluster interconnect interfaces live on the onboard cluster interconnect ports, while mid-range and high-end systems still require a dedicated iWARP card.


Similar to RAID-1, plexes in ONTAP systems can keep mirrored data in two places, but while conventional RAID-1 must exist within the bounds of one storage system, two plexes can be distributed between two storage systems. Each aggregate consists of one or two plexes. Ordinary HA or single-node storage systems have only one plex per aggregate, while SyncMirror local or MetroCluster configurations have two plexes per aggregate. Each plex includes underlying storage space from one or more NetApp RAID groups, or from LUNs from third-party storage systems (see FlexArray), combined within the plex similarly to RAID-0. If an aggregate consists of two plexes, one plex is considered the master and the second the slave; the slave must have the same RAID configuration and drives. For example, if we have an aggregate whose master plex consists of 21 data and 3 parity 1.8 TB SAS 10k drives in RAID-TEC, then the slave plex must also consist of 21 data and 3 parity 1.8 TB SAS 10k drives in RAID-TEC. A second example, with hybrid aggregates: if the master plex consists of one RAID group of 17 data and 3 parity 1.8 TB SAS 10k drives configured as RAID-TEC and a second RAID group of 2 data and 2 parity 960 GB SSDs in RAID-DP, then the slave plex must have the same configuration: one RAID group of 17 data and 3 parity 1.8 TB SAS 10k drives in RAID-TEC and a second RAID group of 2 data and 2 parity 960 GB SSDs in RAID-DP. MetroCluster configurations use SyncMirror technology for synchronous data replication.
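The rule that both plexes of a mirrored aggregate must have identical RAID configurations can be expressed as a small check (a sketch with made-up data structures, not ONTAP code):

```python
# Each plex is described as a list of RAID groups; a RAID group is a tuple of
# (raid_type, data_drives, parity_drives, drive_model). SyncMirror requires
# the slave plex to mirror the master's configuration exactly.

def plexes_symmetric(master, slave):
    """True when both plexes have the same RAID groups."""
    return sorted(master) == sorted(slave)

master = [
    ("RAID-TEC", 17, 3, "SAS 10k 1.8TB"),
    ("RAID-DP",   2, 2, "SSD 960GB"),
]
slave = [
    ("RAID-TEC", 17, 3, "SAS 10k 1.8TB"),
    ("RAID-DP",   2, 2, "SSD 960GB"),
]
assert plexes_symmetric(master, slave)

# A mismatched slave (RAID-DP where the master uses RAID-TEC) is rejected:
bad_slave = [
    ("RAID-DP", 17, 3, "SAS 10k 1.8TB"),
    ("RAID-DP",  2, 2, "SSD 960GB"),
]
assert not plexes_symmetric(master, bad_slave)
```

This mirrors the hybrid-aggregate example in the text: every RAID group in the master plex must have an identical counterpart in the slave plex.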

There are two SyncMirror options: MetroCluster and Local SyncMirror; both use the same plex technique for synchronous replication of data between two plexes. Local SyncMirror keeps both plexes in a single controller and is often used for additional protection against the failure of an entire disk shelf in a storage system. MetroCluster allows data to be replicated between two storage systems.

MetroCluster SDS

MetroCluster SDS (MC SDS) is a feature of ONTAP Select software. Similarly to MetroCluster on FAS/AFF systems, it allows data to be synchronously replicated between two sites using Plex & SyncMirror, and it automatically switches to the surviving node transparently to users and applications. MetroCluster SDS works as an ordinary HA pair, so data volumes, LUNs, and LIFs can be moved online between aggregates and controllers on both sites; this differs from traditional MetroCluster on FAS/AFF systems, where data can be moved across storage cluster nodes only within the site where it was located initially. In traditional MetroCluster, the only way for applications to access data locally on the remote site is to disable an entire site, a process called switchover, whereas in MC SDS the ordinary HA process occurs. MCC supports 2-, 4-, and even 8-node configurations, while MC SDS supports only a 2-node configuration. MetroCluster SDS uses ONTAP Deploy as the mediator (in the FAS and AFF world this built-in software is known as the MetroCluster Tiebreaker); ONTAP Deploy comes bundled with ONTAP Select and is generally used for deploying clusters and installing and monitoring licenses.

Continue to read

How ONTAP Memory work

Zoning for ONTAP Cluster


Please note that in this article I describe my own understanding of the internal organization of ONTAP systems. Therefore, this information might be outdated, or I might simply be wrong in some aspects and details. I will greatly appreciate any contribution to make this article better; please leave your ideas and suggestions about this topic in the comments below.

All product names, logos, and brands are property of their respective owners. All company, product and service names used in this website are for identification purposes only.

How does the ONTAP cluster work? (part 1)

In my previous series of articles I explained how ONTAP system memory works and talked about:
NVRAM/NVMEM, NVLogs, Memory Buffer, HA & HA interconnects, Consistency Points, WAFL iNodes, MetroCluster data availability, Mailbox disks, Takeover, Active/Active and Active/Passive configurations, Write-Through, Write Allocation, Volume Affinities (Wafinity), FlexGroup, RAID & WAFL interaction, Tetris and IO redirection, Read Cache, NVRAM size, the role of the system battery and boot flash drive, and Flash media and WAFL compatibility. Those articles are a good addition and will help you understand this one, so go check them out too.

First, I need to say that clusterization is a very broad term that means different things for different vendors and technologies. ONTAP uses three different types of clusterization, and one of the primary purposes of this article is to explain each one: how they differ from one another, how they complement each other, and what additional benefits ONTAP gets out of them. Before we get to clusterization, we need to go deeper and explore other ONTAP components to understand how it works.

When someone speaks about an ONTAP cluster, they most probably mean the horizontal-scaling clusterization used to scale out storage (the third type of clusterization).


There are a few platforms ONTAP supports: FAS appliances, AFF appliances, and the SDS virtual appliance. NetApp FAS storage systems that contain only SSD drives and run the SSD-optimized ONTAP OS are called All-Flash FAS (AFF). There is also the Lenovo DM line of products, which uses ONTAP. NetApp hardware appliances use SATA, Fibre Channel, SAS, or SSD disk drives, or LUNs from 3rd-party storage arrays, which they group into RAID groups and then into aggregates to combine disk performance and capacity into a single pool of storage resources. SDS appliances can use space from the hypervisor as virtual disks and join that space into aggregates, or they can use physical disk drives passed through to ONTAP, build RAID out of them, and then create aggregates.


A FlexVol volume is a logical space placed on top of an aggregate; each volume can expand or shrink in size, and we can apply and change performance limits on each volume. A FlexVol helps to carve performance and capacity out of an aggregate's pool of resources and flexibly distribute them as needed. Some volumes need to be big but slow, and some very fast but small; volumes can be resized and performance rebalanced. FlexVol is the technology that achieves this goal in the ONTAP architecture. Clients access storage in FlexVol volumes over SAN & NAS protocols. Each volume lives on a single aggregate and is served by a single storage node (controller).

If two FlexVol volumes are created, one on each of two aggregates owned by two different controllers, and the system admin needs to use space from these volumes through a NAS protocol, the admin will create two file shares, one on each volume. In this case the admin will most probably even create different IP addresses, each used to access a dedicated file share. Each volume will have a single write waffinity, and there will be two buckets of space. Even if both volumes reside on a single controller, and for example on a single aggregate (so if a second aggregate exists, it is not used in this case), and both volumes are accessed through a single IP address, there will still be two write affinities, one per volume, and two separate buckets of space. So the more volumes you have, the more write waffinities you have (better parallelization and thus more even CPU utilization, which is good), but also the more volumes you must manage (and the more buckets of space, and thus file shares).


FlexGroup is a free feature introduced in version 9 that uses the clustered architecture of the ONTAP operating system. FlexGroup provides cluster-wide scalable NAS access with the NFS and CIFS protocols. A FlexGroup creates multiple write affinities and, unlike a FlexVol, combines space and performance from all the volumes underneath it (thus from multiple aggregates and nodes). A FlexGroup volume is a collection of constituent FlexVol volumes, called "constituents," distributed across nodes in the cluster and transparently joined into a single space. A FlexGroup volume combines performance and capacity from all the constituent volumes, and thus from all the nodes of the cluster where they are located. To the end user, each FlexGroup volume is represented as a single, ordinary file share whose space equals the sum of the space of all the constituent volumes (which are not visible to clients), with multiple read & write waffinities.
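The way a FlexGroup presents the summed capacity of its constituents, while a FlexVol is bound to one aggregate and node, can be sketched roughly like this (all names and sizes are illustrative):

```python
# A FlexGroup's visible size is the sum of its constituent FlexVol volumes,
# which may live on different aggregates and nodes of the cluster.

constituents = [
    {"name": "fg__0001", "node": "node-a", "size_tb": 10},
    {"name": "fg__0002", "node": "node-a", "size_tb": 10},
    {"name": "fg__0003", "node": "node-b", "size_tb": 10},
    {"name": "fg__0004", "node": "node-b", "size_tb": 10},
]

total_tb = sum(c["size_tb"] for c in constituents)
nodes = {c["node"] for c in constituents}

# Clients see one 40 TB file share; writes are spread across both nodes,
# giving multiple write affinities instead of a single FlexVol's one.
assert total_tb == 40
assert nodes == {"node-a", "node-b"}
```

The constituents themselves stay hidden: the client mounts one share, and ONTAP distributes files among the underlying volumes.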

NetApp will reveal the full potential of FlexGroup with technologies like NFS multipathing, SMB multichannel, pNFS, SMB CA, and VIP.

Technology Name                             FlexVol   FlexGroup
1. NFS multipathing (session trunking)      No        No
2. SMB multichannel                         No        No
3. pNFS                                     Yes       No
4. VIP (BGP)                                Yes       Yes
5. SMB Continuous Availability (SMB CA)     Yes       Yes*

*Added in ONTAP 9.6

The FlexGroup feature in ONTAP 9 allows massive scaling in a single namespace to over 20 PB with over 400 billion files, while evenly spreading the performance across the cluster. Starting with ONTAP 9.5, FabricPool is supported with FlexGroup; in this case it is recommended that all the constituent volumes back up to a single S3 object storage bucket. FlexGroup supports SMB features for native file auditing, FPolicy, Storage-Level Access Guard (SLAG), copy offload (ODX), and inherited watches of change notifications, plus quotas and qtrees. SMB Continuous Availability (CA), supported with FlexGroup in ONTAP 9.6, allows running MS SQL & Hyper-V. FlexGroup is also supported with MetroCluster.

Clustered ONTAP

Today's OS for NetApp AFF, FAS, the Lenovo DM line, and cloud appliances is known simply as ONTAP 9, but before version 9 there were Clustered ONTAP (also called Cluster-Mode, Clustered Data ONTAP, or cDOT) and 7-Mode ONTAP. 7-Mode is the old firmware, which had the capabilities of the first and second types of clusterization (High Availability and MetroCluster), while Clustered ONTAP 9 has all three: HA, MCC, plus horizontal-scaling clusterization. The reason the two existed in parallel was that Clustered ONTAP 8 didn't have all the rich functionality of 7-Mode, so for a while it was possible to run either mode (one at a time) on the same NetApp hardware. NetApp spent some time bringing all the functionality to Cluster-Mode; once the transition was finished, 7-Mode was deprecated, and with that milestone Clustered ONTAP was updated to the next version and became "just" ONTAP 9. ONTAP 8.2.4 was the last version with 7-Mode. 7-Mode and Cluster-Mode share many similarities; for example, the WAFL file system was used in both. But most components were not compatible with one another; in this example the WAFL versions and functionality were different and thus incompatible, and only limited compatibility was introduced, mostly for migration from 7-Mode to Cluster-Mode. The last 7-Mode release, ONTAP 8.2.4, contains a WAFL version compatible with Cluster-Mode, to enable a fast but offline in-place upgrade to the newest versions of ONTAP.

In version 9, nearly all the features from 7-Mode were successfully implemented in (Clustered) ONTAP, including SnapLock, FlexCache, MetroCluster, and SnapMirror Synchronous, while many new features not available in 7-Mode were introduced, such as FlexGroup, FabricPool, fast workload provisioning, flash optimization, NDAS, data compaction, AFF, and many others. The uniqueness of NetApp's Clustered ONTAP is the ability to add heterogeneous systems (all systems in a single cluster do not have to be the same model or generation) to a single (third-type) cluster. This provides a single pane of glass for managing all the nodes in a cluster, and non-disruptive operations such as adding new models to a cluster, removing old nodes, and online migration of volumes and LUNs while data remains continuously available to clients.

Node and controller (head, or physical appliance) are very similar terms and are often used interchangeably. The difference is that a controller is a physical server with a CPU, main board, memory, and NVRAM, while a node is an instance of the ONTAP OS running on top of a controller. A node can migrate, for example, when a controller is replaced with a new one. An ONTAP (third-type) cluster consists of nodes.

FAS appliances

FAS appliances are NetApp custom-built & OEM hardware. Controllers in FAS systems are computers running the ONTAP OS. FAS systems are used with HDD and SSD drives. SSDs are often used for caching but can be used in all-SSD aggregates as well. FAS systems can use NetApp disk shelves to add capacity, or 3rd-party arrays. Each disk shelf is connected to one storage system, which consists of one or two controllers (an HA pair).

AFF appliances

An All-Flash FAS appliance is also known as AFF. Usually NetApp All-Flash systems are based on the same hardware as FAS, but the former's ONTAP OS is optimized for, and works only with, SSD media on the back end, while a FAS appliance can use HDDs, or HDDs with SSDs as cache. Here are pairs of appliances that use the same hardware: AFF A700 & FAS9000, A300 & FAS8200, A200 & FAS2600, A220 & FAS2700. AFF systems do not include FlashCache cards, since there is no sense in caching operations from flash media on flash media. Also, AFF systems do not support the FlexArray third-party storage array virtualization functionality. Both AFF & FAS use the same firmware image, and nearly all functionality noticeable to the end user is the same on both. However, data is processed and handled differently in ONTAP on AFF systems; for example, different Write Allocation algorithms are used than on FAS systems. Because AFF systems have faster underlying SSD drives, inline data deduplication in ONTAP systems is nearly unnoticeable (no more than a 2% performance impact on low-end systems).


FAS and AFF systems use enterprise-level HDD and SSD (e.g., NVMe SSD) physical drives with two ports, each port connected to one controller of an HA pair. HDD and SSD drives can only be bought from NetApp and installed in NetApp's disk shelves for the FAS/AFF platform. Physical HDD and SSD drives, partitions on disk drives, and even LUNs imported from third-party arrays with FlexArray functionality are all considered Disks in ONTAP. In SDS systems like ONTAP Select & ONTAP Cloud, logical block storage such as a virtual disk or an RDM is also considered a Disk inside ONTAP. Do not confuse the general term "disk drive" with the ONTAP term "disk," because in ONTAP a disk can be an entire physical HDD or SSD drive, a LUN, or a partition on a physical HDD or SSD drive. A LUN imported from a third-party array with FlexArray functionality in an HA pair configuration must be accessible from both nodes of the HA pair, like an HDD or SSD drive. Each ONTAP disk has ownership set on it to show which controller owns and serves the disk. An aggregate can include only disks owned by a single node; therefore each aggregate is owned by one node, and any objects on top of it, such as FlexVol volumes, LUNs, and file shares, are served by a single controller. Each node has its own disks and aggregates and serves them, so both nodes can be utilized simultaneously even though they are not serving the same data.


Advanced Drive Partitioning (ADP) can be used in AFF & FAS systems depending on the platform and use case. FlexArray technology does not support ADP. This technique is mainly used to overcome some architectural requirements and reduce the number of disk drives in NetApp FAS & AFF storage systems. There are three types of ADP:

  • Root-Data partitioning
  • Root-Data-Data partitioning (RD2 also known as ADPv2)
  • and Storage Pool.

Root-Data partitioning is used in FAS & AFF systems to create small root partitions on drives, which are used to build the system root aggregates, so that two entire physical disk drives are not spent on that purpose, while the bigger portion of each disk drive is used for a data aggregate. Root-Data-Data partitioning is used in All-Flash systems only, for the same reason as Root-Data partitioning, with the difference that the bigger portion of the drive left after root partitioning is divided equally into two partitions, each assigned to one of the two nodes, thereby reducing the minimum number of drives required for an All-Flash system and reducing the waste of expensive SSD space. Storage Pool partitioning technology is used in FAS systems to divide each SSD drive equally into four pieces, which can later be used (only) for FlashPool cache acceleration; with Storage Pool, just a few SSD drives can be shared among up to 4 data aggregates, which will benefit from FlashPool caching technology, reducing the minimum number of SSD drives required for FlashPool.
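The Root-Data-Data (ADPv2) split described above can be illustrated with rough arithmetic. The root-partition size used here is an arbitrary example; real sizes vary by platform and drive:

```python
# Rough illustration of ADPv2 (Root-Data-Data): a small root partition is
# carved off each SSD, and the remainder is split equally between the two
# nodes of the HA pair. The 60 GiB root size is a made-up example value.

drive_gib = 3840          # e.g. a ~3.84 TB SSD, expressed in GiB
root_gib = 60             # hypothetical root partition size

remainder = drive_gib - root_gib
data_partition = remainder / 2      # one data partition per node

assert remainder == 3780
assert data_partition == 1890.0
# Each node gets one 1890 GiB data partition from this drive, so a single
# shelf of SSDs can supply root and data aggregates for both nodes.
```

This is why ADPv2 reduces the minimum drive count: without it, whole drives would have to be dedicated to each node's root aggregate.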


In NetApp ONTAP systems, RAID and WAFL are tightly integrated. There are several RAID types available within NetApp FAS and AFF systems:

  • RAID-4 with 1 dedicated parity disk allowing any 1 drive to fail in a RAID group.
  • RAID-DP with 2 dedicated parity disks allowing any 2 drives to fail simultaneously in a RAID group.
  • RAID-TEC (US patent 7640484) with 3 dedicated parity drives, allowing any 3 drives to fail simultaneously in a RAID group.

RAID-DP’s double parity gives disk-loss resiliency similar to that of RAID-6. NetApp overcomes the write performance penalty of traditional RAID-4-style dedicated parity disks with WAFL and innovative use of its nonvolatile memory (NVRAM) within each storage system. Each aggregate consists of one or two plexes, and a plex consists of one or more RAID groups. A typical NetApp FAS or AFF storage system has only one plex in each aggregate; two plexes are used in local SyncMirror or MetroCluster configurations. Therefore, in systems without MetroCluster or local SyncMirror, engineers may say "aggregates consist of RAID groups" to simplify things a bit, because the plex does not play a vital role in such configurations; in reality, an aggregate always has one or two plexes, and a plex consists of one or more RAID groups (see the picture with the aggregate diagram). Each RAID group usually consists of disk drives of the same type, speed, geometry, and capacity, though NetApp Support may allow a user to temporarily install a drive of the same or bigger size but different type, speed, or geometry into a RAID group. RAID can be used with partitions, too. If a data aggregate contains more than one RAID group, all RAID groups in the aggregate must have the same RAID type; the same RAID group size is also recommended, but NetApp allows an exception for the last RAID group, which can be as small as half of the RAID group size used across the aggregate. For example, such an aggregate might consist of 3 RAID groups: RG0:16+2, RG1:16+2, RG2:7+2. Within aggregates, ONTAP sets up flexible volumes (FlexVol) to store data that users can access. The reason ONTAP has a "default" RAID group size smaller than the maximum RAID group size is to let the admin later add just a few disk drives to existing RAID groups instead of adding a new RAID group with a full set of drives.
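The usable capacity of such an aggregate is easy to reason about: parity drives hold no user data, so only the data drives count. A quick sketch for the RG0:16+2, RG1:16+2, RG2:7+2 example above:

```python
# Count data vs. total drives for an aggregate whose RAID groups are
# given as (data, parity) pairs, e.g. 16+2 means 16 data + 2 parity.

def usable_drives(raid_groups):
    # Parity drives hold no user data; capacity comes from data drives only.
    return sum(data for data, parity in raid_groups)

groups = [(16, 2), (16, 2), (7, 2)]       # RAID-DP: 2 parity drives per group
print(usable_drives(groups))              # → 39 data drives
print(sum(d + p for d, p in groups))      # → 45 drives in total
```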

Aggregates enabled as FlashPool consist of both HDD and SSD drives; they are called hybrid aggregates and are used in FAS systems. In FlashPool aggregates the same rules apply as to ordinary aggregates, but separately to the HDD and SSD drives, so it is allowed to have two different RAID types: one RAID type for all HDD drives and one RAID type for all SSD drives in a single hybrid aggregate. For example, SAS HDDs with RAID-TEC (RG0:18+3, RG1:18+3) and SSDs with RAID-DP (RG3:6+2). NetApp storage systems running ONTAP combine the underlying RAID groups similarly to RAID-0 in plexes and aggregates, while in hybrid aggregates the SSD portion is used for cache, so capacity from the flash media does not contribute to overall aggregate space. In NetApp FAS systems with the FlexArray feature, third-party LUNs can likewise be combined in a plex/aggregate similarly to RAID-0. NetApp storage systems running ONTAP can also be deployed in MetroCluster and local SyncMirror configurations, which use a technique comparable to RAID-1, mirroring data between the two plexes of an aggregate.

Note that ADPv2 does not support RAID-4. RAID-TEC is recommended if the size of the disks used in an aggregate is greater than 4 TiB. The RAID type of a storage pool cannot be changed. RAID minimums for the root aggregate (with force-small-aggregate true) are:

  • RAID-4 is 2 drives (1d + 1p)
  • RAID-DP is 3 drives (1d + 2p)
  • RAID-TEC is 5 drives (2d + 3p)


One or multiple RAID groups form an "aggregate," and within aggregates the ONTAP operating system sets up "flexible volumes" (FlexVol) to store data that hosts can access.

Similarly to RAID-0, each aggregate merges space from the underlying protected RAID groups into one logical piece of storage for flexible volumes; an aggregate therefore does not provide data protection mechanisms, but rather another layer of abstraction. Alongside aggregates consisting of disks and RAID groups, other aggregates can consist of LUNs already protected by third-party storage systems and connected to ONTAP with FlexArray, and it works the same way in ONTAP Select or Cloud Volumes ONTAP. Each aggregate can consist of either LUNs or NetApp RAID groups. Flexible volumes offer the advantage that many of them can be created on a single aggregate and resized at any time. Smaller volumes can then share all the space & disk performance available in the underlying aggregate, and QoS allows changing the performance of flexible volumes on the fly. Aggregates can only be expanded, never downsized. The current maximum physical usable space in an aggregate is 800 TiB for All-Flash FAS systems; the limit applies to space in the aggregate rather than the number of disk drives, and may differ between AFF & FAS systems.


NetApp FlashPool is a feature of hybrid NetApp FAS systems which allows creating a hybrid aggregate with HDD drives and SSD drives in a single data aggregate. The HDD and SSD drives form separate RAID groups. Since the SSDs are also used for write operations, they require RAID redundancy, unlike FlashCache, which accelerates only read operations. In a hybrid aggregate the system allows different RAID types for HDD and SSD; for example, it is possible to have 20 HDDs of 8 TB in RAID-TEC and 4 SSDs of 960 GB in RAID-DP (or even RAID-4) in a single aggregate. The SSD RAID is used as cache and improves read-write performance for the FlexVol volumes on the aggregate to which the SSDs were added as cache. FlashPool cache, similarly to FlashCache, has policies for read operations but also covers write operations, and the system administrator can apply those policies to each FlexVol volume located on the hybrid aggregate; caching can therefore be disabled on some volumes while others benefit from the SSD cache. Both FlashCache & FlashPool can be used simultaneously to cache data from a single FlexVol. To enable an aggregate with FlashPool technology, a minimum of 4 SSD disks is required (2 data, 1 parity, and 1 hot spare); it is also possible to use ADP technology to partition the SSDs into 4 pieces (Storage Pool) and distribute those pieces between the two controllers, so each controller's aggregates can benefit from SSD cache when there is only a small number of SSDs. FlashPool is not available with FlexArray; it works only with native NetApp FAS disk drives in NetApp's disk shelves.


FabricPool technology is available for all-SSD aggregates in FAS/AFF systems and in Cloud Volumes ONTAP on SSD media. Starting with ONTAP 9.4, FabricPool is supported on the ONTAP Select platform. Cloud Volumes ONTAP also supports an HDD + S3 FabricPool configuration. FabricPool provides automatic storage tiering of cold data blocks from fast media on the ONTAP storage (hot tier) to an S3 object storage (cold tier) and back. Each FlexVol volume on a FabricPool-enabled all-SSD aggregate can have one of four policies:

  • None – Does not tier data from a volume
  • Snapshot – Migrate cold data blocks captured in snapshots
  • Auto – Migrates cold data blocks from an active file system and snapshots to cold tier
  • All – Tiers all data, writing it through directly to the S3 object storage; metadata, though, always stays on the SSD hot tier.

FabricPool preserves offline deduplication & offline compression savings. FabricPool tiers off blocks from the active file system (by default, data not accessed for 31 days) & supports data compaction savings. The trigger for tiering from the hot tier can be adjusted. The recommended ratio of inodes to data files is 1:10. For clients connected to the ONTAP storage system, all FabricPool data-tiering operations are completely transparent, and if data blocks become hot again, they are copied back to the fast media on the ONTAP system. FabricPool is compatible with the following:

  • NetApp StorageGRID
  • Amazon S3 and Amazon Commercial Cloud Services (C2S)
  • Google Cloud
  • Alibaba object storage services
  • Azure Blob storage
  • IBM Cloud Object Storage (ICOS) in the cloud
  • IBM Cleversafe (on-prem object storage)

Other object-based software & services can be used if requested by the customer and validated by NetApp. The FabricPool feature in FAS/AFF systems is free when used with NetApp StorageGRID external object storage. For other object storage such as Amazon S3 & Azure Blob, FabricPool must be licensed per TB to function (alongside the FabricPool licensing costs, the customer also pays for consumed object space). With the Cloud Volumes ONTAP storage system, FabricPool does not require licensing; costs apply only for the consumed space on the object storage. FlexGroup volumes and SVM-DR are supported with FabricPool, and SVM-DR is supported with FlexGroups.
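The four tiering policies can be summarized as a small decision function. This is my own simplified model of the behavior described above, not ONTAP source code; "cold" here means a block has not been accessed for the cooling period (31 days by default, adjustable).

```python
# Simplified model of FabricPool volume tiering policies:
# none / snapshot / auto / all. Metadata always stays on the hot tier.

def tiers_to_object_store(policy, in_snapshot_only, cold, is_metadata):
    if is_metadata:            # metadata always stays on the SSD hot tier
        return False
    if policy == "none":
        return False           # never tier data from this volume
    if policy == "all":
        return True            # written through directly to object storage
    if policy == "snapshot":
        return cold and in_snapshot_only   # only cold snapshot-captured blocks
    if policy == "auto":
        return cold            # cold blocks from active FS and snapshots
    raise ValueError(policy)

print(tiers_to_object_store("auto", False, cold=True, is_metadata=False))  # True
print(tiers_to_object_store("all", False, cold=False, is_metadata=True))   # False
```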


NetApp storage systems running ONTAP can have FlashCache cards, which reduce read latency and allow the storage system to process more read-intensive work without adding disk drives to the underlying RAID. Usually one FlashCache module is installed per controller; no mirroring is performed between nodes, and the entire space of a FlashCache card is used by a single node only, since read operations do not require redundancy in case of FlashCache failure (chip-level data protection is nonetheless available in FlashCache). If the system reboots unexpectedly, the read cache is lost, but it is rebuilt over time during regular node operation. FlashCache works at the node level and by default accelerates read operations for any volumes on that node. FlashCache caching policies are applied at the FlexVol level: the system administrator can set a cache policy on each individual volume on the controller or disable the read cache entirely. FlashCache technology is compatible with the FlexArray feature. Starting with ONTAP 9.1, a single FlexVol volume can benefit from both FlashPool & FlashCache caching simultaneously.


FlexArray is NetApp FAS functionality that allows virtualizing third-party storage systems, and other NetApp storage systems, over SAN protocols and using them instead of NetApp's disk shelves. With FlexArray functionality, RAID protection must be provided by the third-party storage array; NetApp's RAID-4, RAID-DP, and RAID-TEC are not used in such configurations. One or many LUNs from third-party arrays can be added to a single aggregate similarly to RAID-0. FlexArray is a licensed feature.

NetApp Storage Encryption

NetApp Storage Encryption (NSE) uses specialized purpose-built disks with a low-level hardware-based full disk encryption (FDE/SED) chip; some disks are FIPS-certified self-encrypting drives. NSE & FIPS drives are compatible with nearly all NetApp ONTAP features and protocols, except MetroCluster. The NSE feature has nearly zero overall performance impact on the storage system. NSE, similarly to NetApp Volume Encryption (NVE) in storage systems running ONTAP, can store the encryption key locally in the Onboard Key Manager, which keeps keys in the onboard TPM module, or via the KMIP protocol on dedicated key manager systems such as IBM Security Key Lifecycle Manager and SafeNet KeySecure. NSE is data-at-rest encryption, which means it protects only from physical disk theft and does not give an additional level of data security protection; in a normally running ONTAP system, this feature does not encrypt data over the wire. When the OS shuts the disks down, they lose the encryption key and become locked, and if the key manager is not available or is locked, ONTAP cannot boot. NetApp has passed the NIST Cryptographic Module Validation Program for its NetApp CryptoMod (TPM) with ONTAP 9.2.

Continue to read

How ONTAP Memory work

Zoning for ONTAP Cluster


Please note that in this article I described my own understanding of the internal organization of ONTAP systems. Therefore, this information might be outdated, or I simply might be wrong in some aspects and details. I will greatly appreciate any contribution to make this article better; please leave your ideas and suggestions about this topic in the comments below.

All product names, logos, and brands are property of their respective owners. All company, product and service names used in this website are for identification purposes only.

New NetApp platform & Licensing improvements in ONTAP 9.6 (Part 1)


All flash A320 2U platform introduced, here are a few important details for this new AFF system:

  • From the performance point of view, the most notable claim is ~100 microseconds latency on an SQL SLOB workload. If true, that is a notable improvement, because previously we've seen only sub-1-millisecond (1,000 microseconds) latency, and the new latency is several times (in the best-case scenario ~10 times) lower
    • About 20% better IOPS performance than A300
  • NVDIMM instead of traditional NVRAM in a high-end/mid-range platform. This is the second NetApp AFF platform, after the A800 system, to adopt NVDIMM instead of PCIe-based NVRAM. Strictly speaking, NVDIMM has been around in entry-level FAS/AFF systems for an extended period of time, but only because of the lack of PCIe slots & space in those controllers
  • No disk drives in the controller chassis
  • No RoCE support for hosts. Yet
  • End to End NVMe
  • Rumors from Insight 2018 confirmed about new disk shelves
    • NS224 directly connected over RoCE
    • 2 disk shelves maximum
    • 1.9 TB, 3.6 TB, and 7.6 TB drives supported
    • With an upcoming ONTAP release disk shelves connected to controllers over a switch will be supported and thus more disk shelves than just two
  • Not very important to customers, but an interesting update from an engineering perspective: with this new platform, HA and cluster interconnect connectivity are now combined, unlike in any other appliance before.
  • 8x Onboard 100 GbE ports per controller:
    • 2 for cluster interconnect 100 GbE ports (and HA)
    • 2 for the first disk shelf and optionally another 2 for the second disk shelf
    • it leaves 2 or 4 100 GbE ports for host connection
  • 2 optional PCIe cards per controller with the following ports:
    • FC 16/32 Gb ports
    • RoCE capable 100/40 GbE
    • RoCE capable 25 GbE
    • Or 10 Gb BASE-T ports

Entry Level Systems

The previously released A220 system is now available with 10G BASE-T ports, thanks to the increased popularity of 10G BASE-T switches.

MCC IP for low-end platforms

MCC IP becomes available for low-end platforms: A220 & FAS2750 (not for the 2720, though) in ONTAP 9.6, and requires a 4-node configuration (as do all MCC-IP configs). The new features are designed to reduce the cost of such small configurations.

  • All AFF systems with MCC-IP supports partitioning, including A220
  • Entry-level systems do not require special iWRAP cards/ports like other storage systems
  • Mixing MCC IP & other traffic allowed (with all the MCC-IP configs?)
    • NetApp wants to ensure customers get a great experience with their solutions, so there will be some requirements your switch must meet to maintain high performance and be qualified for such an MCC IP configuration.

Brief history of MCC IP:

  • In ONTAP 9.5, the mid-range platforms FAS8200 & A300 added support for MCC IP
  • In ONTAP 9.4, MCC IP became available on the high-end A800
  • MCC IP was initially introduced in ONTAP 9.3 for the high-end A700 & FAS9000 systems.

New Cluster Switches

Two new port-dense switches with 48x 10/25 GbE SFP ports and a few 40 GbE or 100 GbE QSFP ports. You can use the same switches for MCC IP. Here is the Broadcom-based BES-53248, which will replace the CN1610:

And new Cisco Nexus 92300YC with 1.2U height.


New OSes supported with ONTAP 9.6: Oracle Linux, VMware 6.7, and Windows Server 2012/2016. Previously, ONTAP 9.5 supported SUSE Linux 15 and RedHat Enterprise Linux 7.5/7.6; RedHat still doesn't have ANA support. There is a new FlexPod config with an A800 connected over FC-NVMe to SUSE Linux. Volume move is now available with NVMe namespaces.

NVMe protocol becomes free. Again

In ONTAP 9.6 the NVMe protocol becomes free again. It was free when first introduced in 9.4, without ANA (the analog of SAN ALUA multipathing), and then it became licensed in 9.5.

SnapMirror Synchronous licensing adjusted

In 9.6 the licensing is simplified: SM-S is included in the Premium Bundle. NetApp introduced SM-S in ONTAP 9.5 and previously licensed it per TB. If you are not going to use the secondary system as a source for replication to another system, SM-S does not need licensing on the secondary system.

New services

  • SupportEdge Prestige
  • Basic, Standard and Advanced Deployment options
  • Managed Upgrade Service

Read more



ONTAP improvements in version 9.6 (Part 2)

Starting with ONTAP 9.6, all releases are long-term support (LTS). Network auto-discovery from a computer is available for cluster setup, so there is no need to connect a console to set up the IP. All bug fixes are available in P-releases (9.xPy), where "x" is a minor ONTAP version and "y" is the P-version with a bunch of bug fixes. P-releases will come out every 4 weeks.

New OnCommand System Manager based on APIs

First, System Manager no longer carries the OnCommand last name; now it is ONTAP System Manager. ONTAP System Manager shows the failed disk's position in a disk shelf and the network topology. Like some other all-flash vendors, the new dashboard shows storage efficiency as a single number, which includes clones and snapshots, but you can still find information for each efficiency mechanism separately.

Two system managers available simultaneously for ONTAP 9.6:

  • The old one
  • New API-based one (on the image below)
    • Press the “Try the new experience” button in the “old” System Manager

NetApp will base System Manager and all new Ansible modules on REST APIs only, which means NetApp is taking them rather seriously. With ONTAP 9.6, NetApp brought proprietary ZAPI functionality to REST API access for cluster management (see more here & here). ONTAP System Manager shows the list of ONTAP REST APIs that have been invoked for the performed operations, which helps to understand how it works and to use the APIs on a day-to-day basis. REST APIs are available through the System Manager web interface at https://ONTAP_ClusterIP_or_Name/docs/API; the page includes:

  • Try it out feature
  • Generate the API token to authorize external use
  • And built-in documentation with examples.

Cluster management areas available through REST APIs in ONTAP 9.6:

  • Cloud (object storage) targets
  • Cluster, nodes, jobs and cluster software
  • Physical and logical network
  • Storage virtual machines
  • SVM name services such as LDAP, NIS, and DNS
  • Resources of the storage area network (SAN)
  • Resources of Non-Volatile Memory Express.

The APIs will help service providers and companies with many deployed ONTAP instances manage them in an automated fashion. System Manager now saves historical performance info; before 9.6, you could see only data from the moment you opened the statistics window, and the statistics were lost once you closed it. See the ONTAP guide for developers.
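A minimal sketch of calling the ONTAP REST API from Python using only the standard library. The cluster name and credentials below are placeholders, and `/api/cluster` is the documented entry point for cluster information:

```python
# Build an authenticated GET request for the ONTAP 9.6 REST API.
# Cluster address and credentials are placeholders for illustration.

import base64
import urllib.request

def ontap_request(cluster, path, user, password):
    """Construct a basic-auth request for https://<cluster>/api/<path>."""
    url = f"https://{cluster}/api/{path.lstrip('/')}"
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(url, headers={
        "Authorization": f"Basic {token}",
        "Accept": "application/json",
    })

req = ontap_request("cluster1.example.com", "cluster", "admin", "secret")
print(req.full_url)   # https://cluster1.example.com/api/cluster
# urllib.request.urlopen(req) would then return cluster info as JSON
```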

Automation is the big thing now

All new Ansible modules will use only REST APIs. A Python SDK will be available soon as well, followed by SDKs for some other languages.


OnCommand Unified Manager is renamed to ActiveIQ Unified Manager. The renaming shows that Unified Manager is going to work more tightly with ActiveIQ in the NetApp cloud.

  • In this tandem, Unified Manager gives detailed, real-time analytics and simplifies key performance indicators and metrics so IT generalists can understand what's going on; it allows troubleshooting, and automating and customizing monitoring and management
  • While ActiveIQ is a cloud-based intelligence engine that provides predictive analytics and actionable intelligence, and gives recommendations to protect and optimize the NetApp environment.

Unified Manager 9.6 provides REST APIs; it not only proactively identifies risks but, most importantly, now provides remediation recommendations. It also gives recommendations to optimize workload performance and storage resource utilization:

  • Pattern recognition eliminates manual efforts
  • QoS monitoring and management
  • Real-time events and mapping of key components
  • Built-in analytics for storage performance optimizations


SnapMirror Synchronous (SM-S) does not have automatic switchover yet, as MetroCluster (MCC) does, and this is the key difference, which still keeps SM-S a DR solution rather than HA.

  • New configuration supported: SM-S and then cascade SnapMirror Async (SM-A)
  • Automatic TLS encryption over the wire between ONTAP 9.6 and higher systems
  • Workloads that have excessive file creation, directory creation, file permission changes, or directory permission changes (these are referred to as high-metadata workloads) are not suitable for SM-S
  • SM-S now supports additional protocols:
    • SMB v2 & SMB v3
    • NFS v4
  • SM-S now supports qtrees & FPolicy.


Nearly all important FlexGroup limitations compared to FlexVols are now removed:

  • SMB Continuous Availability (CA) support allows running MS SQL & Hyper-V on FlexGroup
  • Constituent volume elastic sizing (auto-size) & FlexGroup resize
    • If one constituent runs out of space, the system automatically takes space from other constituent volumes and provisions it to the one that needs it most. Previously this could result in an out-of-space error even while some space was available in other constituent volumes. Though it means you are probably short on space, and it might be a good time to add some more 😉
  • FlexGroup on MCC (FC & IP)
  • FlexGroup rename & re-size in GUI & CLI


Alibaba and Google Cloud object storage are now supported for FabricPool, and in the GUI you can now see the cloud latency of the volume.

Another piece of news that is exciting for me is the new "All" policy in FabricPool. It is exciting because I was one of those who many times insisted that writing through directly to the cold tier is a must-have feature for secondary systems. The whole idea of joining SnapMirror & FabricPool on the secondary system was about space savings, so the secondary system can also be All-Flash but with many times less space for the hot tier. We should use the secondary system in the role of DR, not as backup, because who wants to pay for a backup system as for flash, right? Then, if it is a DR system, it is assumed that someday the secondary system might become primary, and once you try to run production on the secondary, you most probably will not have enough space on that system for the hot tier, which means your DR no longer works. Now that we have this new "All" policy, the idea of joining FabricPool with SnapMirror while getting space savings and a fully functional DR is going to work.

This new “All” policy replaces the “backup” policy in ONTAP 9.6, and you can apply it on primary storage, while the backup policy was available only on a SnapMirror secondary storage system. With the All policy enabled, all data written to a FabricPool-enabled volume is written directly to object storage, while metadata remains on the performance tier of the storage system.

SVM-DR now supported with FabricPool too.

No more fixed ratio of object storage to hot tier in FabricPool

FabricPool is a technology for tiering cold data to object storage, either in the cloud or on-prem, while hot data remains on flash media. When I speak about hot "data," I mean data and metadata, where metadata is ALWAYS HOT = always stays on flash. Metadata is stored in the inode structure, which is the source of WAFL black magic. From the introduction of FabricPool until ONTAP 9.5, NetApp assumed that the hot tier (and in this context, they were mostly thinking not about hot data itself but about metadata inodes) will always need at least 5% on-prem, which means a 1:20 ratio of hot tier to object storage. However, it turns out that is not always the case: most customers do not need that much space for metadata, so NetApp re-thought this, removed the hard-coded 1:20 ratio, and instead introduced a 98% aggregate consumption model, which gives more flexibility. For instance, if the storage needs only 2% for metadata, we can have a 1:50 ratio; this, of course, will be the case only in low-file-count environments & SAN. That means if you have an 800 TiB aggregate, you can store 39.2 PiB in cold object storage.
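The arithmetic behind that example is straightforward; here is a tiny sketch of the hot-to-cold ratio under the consumption model (my own illustration of the math above, not an ONTAP formula):

```python
# If metadata needs only metadata_fraction of the aggregate, the
# hot:cold ratio becomes 1 : (1/metadata_fraction - 1).

def max_cold_tib(aggregate_tib, metadata_fraction):
    """Cold-tier capacity addressable from a hot-tier aggregate."""
    return aggregate_tib * (1 / metadata_fraction - 1)

print(max_cold_tib(800, 0.02))   # → 39200.0 TiB, i.e. ~39.2 PiB (1:50 ratio)
print(max_cold_tib(800, 0.05))   # → 15200.0 TiB under the old 1:20 assumption
```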


  • Aggregate-level encryption (NAE) helps cross-volume deduplication gain savings
  • Multi-tenant key management allows managing encryption keys within an SVM; only external key managers are supported (previously this was available only at the cluster admin level). Great news for service providers. Requires the Key Manager license in ONTAP
  • Premium XL licenses for ONTAP Select allow consuming more CPU & memory, which results in approximately 2x more performance
  • NetApp supports the 8000 series and 2500 series with ONTAP 9.6
  • Automatic Inactive Data Reporting for SSD aggregates
  • MetroCluster switchover and switchback operations from GUI
  • Trace File Access in the GUI allows tracing files on NAS accessed by users
  • Encrypted SnapMirror by default: Primary & Secondary 9.6 or newer
  • FlexCache volumes now managed through GUI: create, edit, view, and delete
  • DP_Optimized (DPO) license: Increases max FlexVol number on a system
  • QoS minimum for ONTAP Select Premium (All-Flash)
  • QoS max available for namespaces
  • NVMe drives with encryption, which, unlike NSE drives, you can mix in a system
  • FlashCache with Cloud Volumes ONTAP (CVO)
  • Cryptographic Data Sanitization
  • Volume move now available with NVMe namespaces.

Implemented the SMB 3.0 CA witness protocol by using a node's HA (SFO) partner LIF, which improves switchover time:

If two FabricPool aggregates share a single S3 bucket, volume migration between them will not rehydrate data and will move only the hot tier.

We expect 9.6RC1 around the second half of May 2019, with GA coming about six weeks later.

Read more



MAX Data: two primary use-cases

Thanks to the latest NetApp Tech ONTAP Podcast, Episode 185 – Oracle on MAX Data, I noted two main use-cases for MAX Data software when used with a database (no matter whether Oracle DB or any other). Here they are:

First configuration

When used without the Max Recovery functionality, NetApp recommends placing the DB data files on MAX FS with snapshots (MAX Snap) enabled there, and placing the DB logs on a separate LUN B on the ONTAP system. In this case, if the persistent memory or the server is damaged, it will be possible to fully restore the data by recovering from a MAX Data snapshot on LUN A and then rolling out the latest transactions from the logs on LUN B to the DB.

  • Pros & Cons: In this case, transactions execute fast but are confirmed to clients at the speed of the logs stored on LUN B; also, the restoration process might take some time because a storage LUN is usually much slower than persistent memory. Cheaper, since only one server with persistent memory is required.

Second configuration

When the logs need to be placed on the fast MAX Data FS together with the DB data files, to increase the overall performance (decrease the latency) of transactions (execution time + confirmation time to clients), NetApp recommends using the Max Recovery functionality, which synchronously copies data from the primary server's persistent memory to a recovery server's persistent memory.

  • Pros & Cons: In this case, if the primary server loses its data due to a malfunction, the data can be quickly recovered back to the primary server over the RDMA connection from the persistent memory tier, restoring normal functioning of the primary server in less time than in the first configuration. If the data had to be restored entirely from storage, it might take a few hours per 1 TB of data, versus 5–10 minutes with Max Recovery. Transaction execution latency is a few microseconds worse in this configuration due to the added network latency of synchronous replication, but the overall transaction latency (execution + client confirmation) is much better than in the first configuration, because the entire DB, including data files and logs, is stored on the fast persistent memory tier. Those few additional microseconds of execution time are a relatively small price in terms of overall transaction latency. Max Recovery requires another server with the same or a greater amount of persistent memory & an RDMA connection, which adds cost to the solution but provides better protection and faster restoration in case of a primary server malfunction. The second configuration provides much better overall transaction latency than placing the logs on a storage LUN.

Some thoughts about RAM

Speaking about the MAX Data configuration with MAX Snap enabled, where you put your DB logs on a dedicated LUN (the first configuration): it got me thinking, what if we use this configuration with ordinary memory instead of Optane?

Of course, there will be disadvantages, same as in the first configuration, but there will be some pros as well:
1) In case of a disaster, all data in the RAM will be lost, so we will need to restore from a MAX Snap and then roll out the DB logs from the LUN, which will take some time
2) Transaction confirmation speed will be equal to the speed of the LUN with the logs. However, transaction execution will run at the speed of RAM
3) The price of RAM is higher. On the other hand, you do not need new “special” servers with special CPUs

I wouldn’t do the second configuration on RAM, though.

SolidFire in a new HCI form factor

Did you know that if you buy four half-width storage nodes in one 2U NetApp HCI chassis without server nodes, they will act as ordinary SolidFire storage?

All new SolidFire storage nodes are part of the NetApp HCI architecture, but that doesn't mean you can't use them just as SolidFire storage without building an actual HCI. Yes, you can do that, even though the naming might confuse you a bit.

SolidFire form factor comparison:


                              Individual 1U storage nodes                      Storage nodes in 2U HCI chassis
Minimum nodes for a cluster   4 nodes                                          4 nodes
Rack units minimum            4U                                               2U
Each node                     1U                                               Half-width 1U
Drives                        12 SSDs in each node, 48 minimum for 4 nodes     6 SSDs in each node, 24 minimum for 4 nodes
Drive size                    960 GB / 1.92 TB / 3.84 TB                       480 GB / 960 GB / 1.92 TB
IOPS per node                 100,000                                          50,000 / 100,000
Raw space per node            11.52 TB (960 GB) / 23.04 TB (1.92 TB) / 46.08 TB (3.84 TB)     2.88 TB (480 GB) / 5.76 TB (960 GB) / 11.52 TB (1.92 TB)


Both form factors are fully compatible and can be mixed together. The 2U HCI chassis lets you serve customers who need a smaller amount of initial space and grow as needed with either 1U nodes or half-width 1U nodes. Both 1U and half-width 1U storage nodes can be used as storage-only; no compute nodes are required.

NCDA auto extension with NCIE

Did you know that if you pass an NCIE exam (DP or SAN) AND you still have a not-yet-expired NCDA, you automatically get a certification extension (from the higher-level certification) for the NCDA with the same dates as the NCIE?

To check:

In my case, both NCDA ONTAP & NCIE SAN ONTAP got the date 2018-Oct-23:

ncie san ontap


ncda ontap


  • NetApp Certified Data Administrator, ONTAP (NCDA ONTAP)
  • NetApp Certified Implementation Engineer – Data Protection Specialist (NCIE DP)
  • NetApp Certified Implementation Engineer – SAN Specialist, ONTAP (NCIE SAN)