Zoning for cluster storage in pictures

NetApp AFF & FAS storage systems can combine into a cluster of up to 12 nodes (6 HA pairs) for SAN protocols. Let’s take a look at zoning and connections using an example with 4 nodes (2 HA pairs) in the image below.

For simplicity, we will discuss the connection of a single host to the storage cluster. In this example we connect each storage node to the server, and each storage node is connected with double links for reliability reasons.

It is easy to notice that only two paths are going to be Preferred in this example (solid yellow arrows).

Since NetApp FAS/AFF systems implement a shared-nothing architecture, disk drives are assigned to a node; disks on a node are grouped into RAID groups; one or a few RAID groups are combined into a plex; and usually one plex forms an aggregate. (In some cases two plexes form an aggregate, and then both plexes must have identical RAID configurations; think of it as an analogy to RAID 1.) On aggregates you create FlexVol volumes. Each FlexVol volume is a separate WAFL file system and can serve NAS files (SMB/NFS), SAN LUNs (iSCSI, FC, FCoE), or Namespaces (NVMeoF). A FlexVol can have multiple Qtrees, and each Qtree can store a LUN or files. Read more in the series of articles How ONTAP Memory works.
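
To make this hierarchy concrete, here is a minimal cluster-shell sketch of building the stack from the bottom up; the aggregate, SVM, volume, qtree and LUN names are hypothetical, so adjust them to your environment:

NCluster::> storage aggregate create -aggregate aggr1 -node NCluster-01 -diskcount 24
NCluster::> volume create -vserver SVM01 -volume vol1 -aggregate aggr1 -size 1TB
NCluster::> volume qtree create -vserver SVM01 -volume vol1 -qtree q1
NCluster::> lun create -vserver SVM01 -path /vol/vol1/q1/lun1 -size 500GB -ostype linux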

Each drive belongs to and is served by a single node. A RAID group belongs to and is served by a single node. All objects on top of those belong to and are served by a single node as well, including aggregates, FlexVols, Qtrees, LUNs & Namespaces.

At a given time a disk can belong to only a single node, and in case of a node failure the HA partner takes over the disks, aggregates, and all the objects on top of them. Note that a “disk” in ONTAP can be an entire physical disk as well as a partition on a disk. Read more about disks and ADP (disk partitioning) here.

Though a LUN or Namespace belongs to a single node, it is possible to access it through the HA partner or even through other nodes. The most optimal paths are always through the node which owns the LUN or Namespace; if that node has more than one port, all ports on that node are considered optimal paths, while paths through the other nodes are non-optimal (also known as non-primary paths). Normally it is a good idea to have more than one optimal path to a LUN.

ALUA & ANA

ALUA (Asymmetric Logical Unit Access) is a protocol which helps hosts access LUNs through optimal paths; it also automatically changes the paths to a LUN if the LUN moves to another controller. ALUA is used with both the FCP and iSCSI protocols. Similarly to ALUA, ANA (Asymmetric Namespace Access) is the protocol for NVMe over Fabrics transports like FC-NVMe, etc.

A host can use one or more paths to a LUN; which ones depends on the host multipathing configuration and the Portset configuration on the ONTAP cluster.
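
As a minimal sketch of how that looks from the cluster shell (the igroup, portset, LIF names and the initiator WWPN are hypothetical; verify the syntax against your ONTAP release):

NCluster::> lun igroup create -vserver SVM01 -igroup ig_host1 -protocol fcp -ostype linux -initiator 10:00:00:00:c9:30:00:01
NCluster::> lun portset create -vserver SVM01 -portset ps_host1 -protocol fcp -port-name LIF1-A,LIF1-B
NCluster::> lun igroup bind -vserver SVM01 -igroup ig_host1 -portset ps_host1

Without a portset, the host sees the LUN on every FC LIF of the SVM; binding the igroup to a portset limits the reported paths to the listed LIFs.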

Since a LUN belongs to a single storage node and ONTAP provides online migration capabilities between nodes, your network configuration must provide access to the LUN from all the nodes, just in case. Read more in the series of articles How ONTAP cluster works.

According to NetApp best practices, zoning is quite simple:

  • Create one zone for each initiator (host) port on each fabric
  • Each zone must have one initiator port and all the target (storage node) ports.

Keeping one initiator per zone reduces crosstalk between initiators to zero.

Example for Fabric A, Zone for “Host_1-A-Port_A”:

Node          Port     WWPN
Host 1        Port A   Host 1 Port A WWPN
ONTAP Node1   Port A   LIF1-A (NAA-2)
ONTAP Node2   Port A   LIF2-A (NAA-2)
ONTAP Node3   Port A   LIF3-A (NAA-2)
ONTAP Node4   Port A   LIF4-A (NAA-2)

Example for Fabric B, Zone for “Host_1-B-Port_B”:

Node          Port     WWPN
Host 1        Port B   Host 1 Port B WWPN
ONTAP Node1   Port B   LIF1-B (NAA-2)
ONTAP Node2   Port B   LIF2-B (NAA-2)
ONTAP Node3   Port B   LIF3-B (NAA-2)
ONTAP Node4   Port B   LIF4-B (NAA-2)

Here is how the zoning from the tables above looks:

Vserver or SVM

An SVM in an ONTAP cluster lives on all the nodes of the cluster. SVMs are separated from one another and are used for creating a multi-tenant environment. Each SVM can be managed by a separate group of people or a separate company, and one will not interfere with another; in fact, they will not know about each other’s existence at all. Each SVM is like a separate physical storage system box. Read more about SVM, Multi-Tenancy and Non-Disruptive Operations here.

Logical Interface (LIF)

Each SVM has its own WWNN in the case of FCP, its own IQN in the case of iSCSI, or its own NQN in the case of NVMeoF. SVMs can share a physical storage node port: each SVM assigns its own network addresses (WWPN, IP address, etc.) to a physical port, and normally each SVM assigns one network address to one physical port. Therefore, if a few SVMs exist, a single physical storage node port might carry a few WWPN network addresses, each assigned to a different SVM. NPIV is crucial functionality which must be enabled on an FC switch for an ONTAP cluster to serve the FC protocol properly.

Unlike ordinary virtual machines (e.g., on ESXi or KVM), each SVM “exists” on all the nodes in the cluster, not just on a single node; the picture below shows two SVMs on a single node just for simplicity.

Make sure that each node has at least one LIF; in this case host multipathing will be able to find an optimal path and always access a LUN through the optimal route, even if the LUN migrates to another node. Each port has its own assigned “physical address”, which you cannot change, plus network addresses. Here is an example of what network & physical addresses look like in the case of the iSCSI protocol. Read more about SAN LIFs here and about SAN protocols like FC, iSCSI, NVMeoF here.

Zoning recommendations

For ONTAP 9, 8 & 7, NetApp recommends zones with a single initiator and multiple targets.

For example, in the case of FCP, each physical port has its own physical WWPN (WWPN 3 in the image above) which should not be used at all; instead, the WWPN addresses assigned to LIFs (WWPN 1 & 2 in the image above) must be used for zoning and host connections. Physical addresses look like 50:0A:09:8X:XX:XX:XX:XX; this type of address is numbered according to NAA-3 (IEEE Network Address Authority 3) and is assigned to a physical port. Example: 50:0A:09:82:86:57:D5:58. You can see NAA-3 addresses listed on network switches, but they should not be used.

When you create zones on a fabric, you should use addresses like 2X:XX:00:A0:98:XX:XX:XX; this type of address is numbered according to NAA-2 (IEEE Network Address Authority 2) and is assigned to your LIFs. Thanks to NPIV technology, the physical N_Port can register these additional WWPNs, which means NPIV must be enabled on your switch for ONTAP to serve data over the FCP protocol to your servers. Example: 20:00:00:A0:98:03:A4:6E. (A quick way to list these addresses is sketched after the OUI list below.)

  • Block 00:A0:98 is the original OUI block for ONTAP
  • Block D0:39:EA is the newly added OUI block for ONTAP
  • Block 00:A0:B8 is used on NetApp E-Series hardware
  • Block 00:80:E5 is reserved for future use.
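
To see which addresses belong to LIFs (and therefore go into your zones), you can list them straight from the cluster shell; a minimal sketch, assuming the hypothetical SVM and node names used elsewhere in this article:

NCluster::> vserver fcp show -vserver SVM01
NCluster::> network interface show -vserver SVM01 -data-protocol fcp -fields wwpn
NCluster::> network fcp adapter show -node NCluster-01

The first two commands show the SVM WWNN and the NAA-2 LIF WWPNs you should zone; the last one lists the physical adapters with their NAA-3 WWPNs, which should stay out of your zones.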


Disclaimer

Please note that in this article I described my own understanding of the internal organization of ONTAP systems. Therefore, this information might be either outdated, or I simply might be wrong in some aspects and details. I will greatly appreciate any contribution to make this article better, so please leave your ideas and suggestions about this topic in the comments below.

All product names, logos, and brands are property of their respective owners. All company, product and service names used in this website are for identification purposes only.

Why use NetApp snapshots even when you do not have Premium bundle software?

If you are extremely lazy and do not want to read any further, the answer is: use snapshots to improve RPO, use ndmpcopy to restore files and LUNs, and use SnapCreator for app-consistent snapshots.

The Premium bundle includes a good deal of software on top of the Base software in each ONTAP system, like:

  • SnapCenter
  • SnapRestore
  • FlexClone
  • And others.

So, without the Premium bundle, with only the Basic software, we have two issues:

  • You can create snapshots, but without SnapRestore or FlexClone you cannot restore from them quickly
  • And without SnapCenter you cannot make application-consistent snapshots.

And some people ask, “Do I need to use NetApp snapshots in such circumstances?”

And my answer is: Yes, you can, and you should use ONTAP snapshots.

Here is the explanation of why and how:

Snapshots without SnapRestore

Why use NetApp storage hardware snapshots? Because they have no performance penalty, and there is no such thing as snapshot consolidation, which causes a performance impact in other snapshot implementations. NetApp snapshots work pretty well, and they have other advantages too. Even though restoring data captured in snapshots is not as fast as with SnapRestore or FlexClone, you can create snaps very fast. And most of the time you need to restore something very seldom, so fast creation of snapshots with slow restoration will still give you a better RPO compared to a full backup. Of course, I have to admit that the RPO improves only for cases when your data was logically corrupted and no physical damage was done to the storage, because if your storage is physically damaged, snapshots will not help. With ONTAP you can have up to 1023 snapshots per volume, and you can create them as fast as you need with no performance degradation whatsoever, which is pretty awesome.
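
Creating and listing snapshots is basic ONTAP functionality; here is a minimal sketch with hypothetical SVM, volume and snapshot names:

NCluster::> volume snapshot create -vserver SVM01 -volume vol1 -snapshot before_upgrade
NCluster::> volume snapshot show -vserver SVM01 -volume vol1

You can also attach a snapshot policy with hourly and daily schedules to the volume, and ONTAP will create snapshots automatically.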

Snapshots with NAS 

If we are speaking about a NAS environment without a SnapRestore license, you can always go to the .snapshot folder and copy any previous version of a file you need to restore. Also, you can use the ndmpcopy command to perform file, folder or even volume restoration inside the storage without involving a host.
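
For example, restoring a single file from a snapshot with ndmpcopy might look like the sketch below; it runs in the nodeshell, and the volume, snapshot and file names here are hypothetical:

NCluster::> system node run -node NCluster-01 ndmpcopy /vol/vol1/.snapshot/daily.2019-06-01_0010/docs/report.docx /vol/vol1/docs/report.docx

The copy happens entirely inside the storage system, so no data travels through a host.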

Snapshots with SAN 

If we are speaking about a SAN environment without a SnapRestore license, you do not have the ability to simply copy a file from your LUN and restore it. There are two stages in case you need to restore something on a LUN:

  1. You copy the entire LUN from a snapshot
  2. And then you can either:
    • Restore the entire LUN in place of the last active version of your LUN
    • Or copy data from the copied LUN to the active LUN.

To do that, you can use either the ndmpcopy or lun copy commands to perform the first stage. And if you want to restore only some files from an old version of the LUN from a snapshot, you need to map that copy to a host and copy the required data back to the active LUN.
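
A minimal sketch of that flow; the paths, snapshot name and igroup are hypothetical, and the -source-snapshot parameter in particular is worth verifying against the lun copy man page for your ONTAP release:

NCluster::> lun copy start -vserver SVM01 -source-path /vol/vol1/lun1 -source-snapshot daily.2019-06-01_0010 -destination-path /vol/vol1/lun1_restored
NCluster::> lun mapping create -vserver SVM01 -path /vol/vol1/lun1_restored -igroup ig_host1

Once the host has copied the needed files back to the active LUN, unmap and delete the restored copy.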

Application consistent storage snapshots 

Why do you need application consistency in the first place? Sometimes, in an environment like a NAS file share with doc files, you do not need it at all. But if you are using applications like Oracle DB, MS SQL or VMware, you’d better have application consistency.

Imagine you have a Windows machine and you pull the hard drive out while Windows is running. Let’s forget for a moment that your Windows will stop working, this is not the point here, and let’s focus on the data protection side. Will the pulled-out hard drive be a proper copy of your data? Only kind of, because some of the data still sitting in host memory will be lost, and your file system will probably not be consistent. Even if you can recover the journaled file system, your application data may be damaged in a way that is hard to repair, precisely because of the data lost from host memory. The same happens when you create a storage snapshot without preparation: the data captured in that snapshot will be similarly incomplete. If you try to restore from such a copy, your Windows might not start, or it might start after a file system check, but your applications, and especially databases, definitely will not like such a backup, because the applications and OS did not have a chance to destage data from memory to the hard drive. So you need someone to prepare your OS & applications before creating a backup.

As you may know, application-consistent storage hardware snapshots can be created by backup software like Veeam, Commvault, and many others, or you can even trigger storage snapshot creation yourself with a relatively simple Ansible or PowerShell script. Also, you can create application-consistent snapshots with the free NetApp SnapCreator software framework. Unlike SnapCenter, it does not have the simple and straightforward GUI wizards which walk you through the process of integration with your app; most of the time you have to write a simple script for your application to benefit from online & application-consistent snapshots. Another downside is that SnapCreator is not officially supported software. But at the end of the day, it is a relatively easy setup, and it will definitely pay off once you finish setting it up.
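
To make this concrete, here is a minimal sketch of an application-consistent snapshot sequence for an Oracle database; the volume and snapshot names are hypothetical, and in practice your script or backup software runs these steps for you:

On the host:     SQL> ALTER DATABASE BEGIN BACKUP;
On the storage:  NCluster::> volume snapshot create -vserver SVM01 -volume oradata -snapshot ora_hot_backup
On the host:     SQL> ALTER DATABASE END BACKUP;

Between BEGIN and END BACKUP, Oracle keeps the data files in hot-backup mode, so the snapshot, together with the archived logs, can be recovered cleanly.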

List of other software features available in Basic software

This Basic ONTAP functionality also might be useful: 

  • Horizontal scaling, non-disruptive operations such as online volume & LUN migration, and non-disruptive upgrades by adding new nodes to the cluster
  • API automation
  • FPolicy file screening
  • Create snapshots to improve RPO
  • Storage efficiencies: Deduplication, Compression, Compaction
  • By default, ONTAP deduplicates data across the active file system and all the snapshots on the volume. Savings from snapshot data sharing scale with the number of snapshots: the more snapshots you have, the more savings you’ll get
  • Storage Multi-Tenancy
  • QoS Maximum
  • External key manager for Encryption
  • Host-based MAX Data software which works with ONTAP & SAN protocols
  • You can buy a FlexArray license to virtualize 3rd-party storage systems
  • If you have an All Flash system, then you can purchase the additional FabricPool license, which is especially useful with snapshots because it destages cold data to cheap storage like AWS S3, Google Cloud, Azure Blob, IBM Cloud, Alibaba Cloud or an on-premises StorageGRID system, etc. (a minimal configuration sketch follows this list)
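
A minimal FabricPool sketch, assuming an object store has already been defined in ONTAP; the aggregate, object-store and volume names are hypothetical:

NCluster::> storage aggregate object-store attach -aggregate aggr1 -object-store-name s3_store
NCluster::> volume modify -vserver SVM01 -volume vol1 -tiering-policy snapshot-only

With the snapshot-only tiering policy, cold snapshot blocks are destaged to the capacity tier while the active file system stays on flash.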

Summary

Even the Basic software gives you rich functionality on your ONTAP system, and you definitely should use NetApp snapshots and set up application integration to make your snapshots application-consistent. With NetApp hardware storage snapshots, you can have 1023 snapshots per volume and create them as fast as you need without sacrificing storage performance, so snapshots will improve your RPO. Application consistency with SnapCreator or any other 3rd-party backup software will build confidence that all the snapshots can be restored when needed.

ONTAP & Antivirus NAS protection

NetApp with the ONTAP OS supports antivirus integration known as Off-box Antivirus Scanning, or VSCAN. With VSCAN, the storage system scans each new file with an antivirus system, which increases corporate data security.

ONTAP supports the following antivirus software:

  • Symantec
  • Trend Micro
  • Computer Associates
  • Kaspersky
  • McAfee
  • Sophos

Also, ONTAP supports FPolicy technology, which can prevent a file from being written or read based on the file extension or file content header.
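
For a taste of native FPolicy blocking (no external FPolicy server required), here is a minimal sketch; the event, policy and extension names are hypothetical, and the exact parameters should be verified against the FPolicy documentation for your release:

NCluster::> vserver fpolicy policy event create -vserver SVM01 -event-name evt_cifs_write -protocol cifs -file-operations create,write,rename
NCluster::> vserver fpolicy policy create -vserver SVM01 -policy-name block_mp3 -events evt_cifs_write -engine native
NCluster::> vserver fpolicy policy scope create -vserver SVM01 -policy-name block_mp3 -volumes-to-include "*" -file-extensions-to-include mp3
NCluster::> vserver fpolicy enable -vserver SVM01 -policy-name block_mp3 -sequence-number 1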

This time I’d like to discuss an example of CIFS (SMB) integration with the McAfee antivirus system.


In this example I’m going to show how to set up integration with McAfee. Here are the minimum requirements for McAfee; they are approximately the same for other AVs:

  • MS Windows Server 2008 or higher
  • NetApp storage with ONTAP 8 or higher
  • SMB v2 or higher (CIFS v1.0 not supported)
  • NetApp ONTAP AV Connector (Download page)
  • McAfee VirusScan Enterprise for Storage (VSEfS)
  • For more details see NetApp Support Matrix Tool.

Diagram of antivirus integration with an ONTAP system.

Preparation

To set up this integration, we will need to configure the following software components:


VSEfS

We need to set up McAfee VSEfS, which can work in two modes: as an independent product or as managed by McAfee ePolicy Orchestrator (McAfee ePO). In this article, I will discuss how to configure it as an independent product. To set up & configure VSEfS, we need the following already installed and configured:

  • McAfee VirusScan Enterprise (VSE). Download VSE
  • McAfee ePolicy Orchestrator (ePO); not needed if VirusScan is used as an independent product.

SCAN Server

First, we need to configure a few SCAN servers to balance the workload between them. I will install each SCAN server on a separate Windows Server with McAfee VSE, McAfee VSEfS, and the ONTAP AV Connector. In this article, we will create three SCAN servers: SCAN1, SCAN2, SCAN3.

Active Directory

In the next step, we need to create the user scanuser in our domain; in this example the domain will be NetApp.

ONTAP

After ONTAP has been set up, we need to create a cluster management LIF and an SVM management LIF, set up AD integration, and configure file shares and data LIFs for the SMB protocol. Here, we will have the NCluster-mgmt LIF for cluster management and SVM01-mgmt for SVM management.

NCluster::> network interface create -vserver NCluster -home-node NCluster-01 -home-port e0M -role data -protocols none -lif NCluster-mgmt -address 10.0.0.100 -netmask 255.0.0.0
NCluster::> network interface create -vserver SVM01 -home-node NCluster-01 -home-port e0M -role data -protocols none -lif SVM01-mgmt -address 10.0.0.105 -netmask 255.0.0.0
NCluster::> security login domain-tunnel create -vserver SVM01
NCluster::> security login create -username NetApp\scanuser -application ontapi -authmethod domain -role readonly -vserver NCluster
NCluster::> security login create -username NetApp\scanuser -application ontapi -authmethod domain -role readonly -vserver SVM01

ONTAP AV Connector

On each SCAN server, we will install the ONTAP AV Connector. At the end of the installation, I will add the AD login & password for the user scanuser.


Then configure the management LIFs:

Start → All Programs → NetApp → ONTAP AV Connector → Configure ONTAP Management LIFs

In the “Management LIF” field we will add the DNS name or IP address of NCluster-mgmt or SVM01-mgmt. We fill the Account field with NetApp\scanuser. Then press “Test,” and “Update” or “Save” once the test has finished.


McAfee Network Appliance Filer AV Scanner Administrator Account

Assuming you have already installed McAfee on the three SCAN servers: on each SCAN server, we log in as an administrator, open the VirusScan Console from the Windows taskbar, then open Network Appliance Filer AV Scanner and choose the tab called “Network Appliance Filers.” In the field “This Server is processing scan request for these filers,” press the “Add” button, put “127.0.0.1” into the address field, and then also add the scanuser credentials.


Returning to the ONTAP console

Now we configure off-box scanning, enable it, and create and apply scan policies. SCAN1, SCAN2, and SCAN3 are the Windows servers with McAfee VSE, VSEfS, and the ONTAP AV Connector installed.
First, we create a pool of AV servers:

NCluster::> vserver vscan scanner-pool create -vserver SVM01 -scanner-pool POOL1 -servers SCAN1,SCAN2,SCAN3 -privileged-users NetApp\scanuser 
NCluster::> vserver vscan scanner-pool show
Vserver   Scanner Pool   Pool Owner   Servers               Privileged Users   Scanner Policy
--------  ------------   ----------   -------------------   ----------------   --------------
SVM01     POOL1          vserver      SCAN1, SCAN2, SCAN3   NetApp\scanuser    idle

NCluster::> vserver vscan scanner-pool show -instance
                             Vserver: SVM01
                        Scanner Pool: POOL1
                      Applied Policy: idle
                      Current Status: off
           Scanner Pool Config Owner: vserver
List of IPs of Allowed Vscan Servers: SCAN1, SCAN2, SCAN3
            List of Privileged Users: NetApp\scanuser

Second, we apply a scanner policy:

NCluster::> vserver vscan scanner-pool apply-policy -vserver SVM01 -scanner-pool POOL1 -scanner-policy primary
NCluster::> vserver vscan enable -vserver SVM01
NCluster::> vserver vscan connection-status show
Vserver   Node         Connected Server-Count   Connected Servers
--------  ----------   ----------------------   -------------------
SVM01     NClusterN1   3                        SCAN1, SCAN2, SCAN3

NCluster::> vserver vscan on-access-policy show
Vserver    Policy Name    Policy Owner   Protocol   Paths Excluded   File-Ext Excluded   Policy Status
---------  ------------   ------------   --------   --------------   -----------------   -------------
NCluster   default_CIFS   cluster        CIFS       -                -                   off
SVM01      default_CIFS   cluster        CIFS       -                -                   on

Licensing

There is no extra licensing needed on the ONTAP side to enable and use FPolicy & off-box anti-virus scanning; this is basic functionality available in any ONTAP system. However, you might need to license additional functionality on the antivirus side, so please check with your antivirus vendor.

Summary

Here are some advantages of integrating the storage system with your corporate AV: NAS integration with an antivirus lets you run one antivirus system on your desktops and another for your NAS shares. There is no need to do NAS scanning on workstations and waste their limited resources. All NAS data is protected, and there is no way for a user with advanced privileges to connect to the file share without antivirus protection and put unscanned files there.

NetApp in containerization era

This is not really a technical article like I usually write, but rather a short list of topics for one direction NetApp has been developing a lot recently: containers. Containerization is getting more and more popular nowadays, and I have noticed NetApp heavily investing in it, so I identified four main directions in that field. Let’s name a few NetApp products using containerization technology:

  1. E-Series with built-in containers
  2. Containerization of existing NetApp software:
    • ActiveIQ PAS
  3. Trident plugin for ONTAP, SF & E-Series (NAS & SAN):
    • NetApp Trident plug-in for Jenkins
    • CI & HCI:
      • ONTAP AI: NVIDIA, AFF systems, Docker containers & Trident with NFS
      • FlexPod DataCenter & FlexPod SF with Docker EE
      • FlexPod DataCenter with IBM Cloud Private (ICP) with Kubernetes orchestration
      • NetApp HCI with RedHat OpenShift Container Platform
      • NetApp integration with Robin Systems Container Orchestration
    • Oracle, PostgreSQL & MongoDB in containers with Trident
    • Integration with Moby Project
    • Integration with Mesosphere Project
  4. Cloud-native services & Software:
    • NetApp Kubernetes Services (NKS)
      • NKS in public clouds: AWS, Azure, GCP
      • NKS on NetApp HCI platform
    • SaaS Backup for Service Providers


Is this a full list of NetApp’s efforts towards containerization?

I bet it is far from complete. Post your thoughts and links to documents, news, howtos, architectures and best-practice guides in the comments below to expand this list if I missed something!

New NetApp platform for ONTAP 9.6 (Part 3): AFF C190

NetApp introduced the C190 for small business, following the new A320 platform with ONTAP 9.6.


This new All-Flash system has:

  • Fixed format, with no ability to connect additional disk shelves:
    • Only 960 GB SSD drives, installed only in the controller chassis
    • Only 4 configs: with 8, 12, 18 or 24 drives
      • Effective capacity respectively: 13, 24, 40 or 55 TB
    • Supports ONTAP 9.6 GA and higher
    • The C190 is built in the same chassis as the A220, so per HA pair you’ll get:
      • 4x 10Gbps SFP cluster ports
      • 8x UTA ports (10 Gbps or FC 16Gbps)
      • There is a model with 10GBASE-T ports instead of the UTA & cluster interconnect ports (12 ports total). Obviously, BASE-T ports do not support the FCP protocol
  • There will be no more “usable capacity”; NetApp will provide only “effective capacity”:
    • With dedup, compression, compaction and 24 x 960 GB drives, the system provides ~50 TiB effective capacity. 50 TiB is a pretty reliable, conservative number because it assumes even less than ~3:1 data reduction (see the rough math after this list)
    • The deduplication snapshot-sharing functionality introduced in previous ONTAP versions allows gaining even better efficiency
    • And of course FabricPool tiering can help to save a lot of space
  • The C190 comes with the Flash bundle, which adds to the Basic software:
    • SnapMirror/SnapVault for replication
    • SnapRestore for fast restoration from snapshots
    • SnapCenter for App integration with storage snapshots
    • FlexClone for thin cloning.
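
As a rough back-of-the-envelope check of the effective-capacity claim (my own math, not NetApp’s): 24 x 960 GB is about 23 TB raw, or roughly 21 TiB. After RAID-DP parity, a spare and ONTAP reserves, something like 15-17 TiB remains usable, so ~50 TiB effective corresponds to approximately a 3:1 data-reduction ratio from dedup, compression and compaction, which matches the conservative claim above.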

A fixed configuration with built-in drives is, I personally think, an excellent idea in general, taking into account the wide variety of SSD drive capacities we have nowadays, with even more to come. Is this the future format for all storage systems with flash? Note, though, that the C190 supports only 960 GB SSD drives, while the new mid-range A320 system can have more than one disk shelf.

A fixed configuration allows NetApp to manufacture & deliver the systems to clients faster and to reduce costs. The C190 will cost sub-$25k in the minimum config, according to NetApp.

Also, in my opinion, the C190 can more or less cover the marketplace left after the end-of-sale (EOS) announcement for the hardware and virtual AltaVault appliances (AVA, recently known under the new name “Cloud Backup”), thanks to FabricPool tiering. Cloud Backup appliances are still available through the AWS & Azure marketplaces. This is especially the case now that FabricPool in ONTAP 9.6 no longer has a hard-coded ratio for how much data the system can store in the cloud compared to the hot tier, and allows write-through with the “All” policy.

It turns out that information about consumed storage capacity is easier to digest in the form of effective capacity. All this usable capacity, garbage collection and other storage overheads, RAIDs and system reserves are too complicated, so hey, why not? I bet the idea of showing only effective capacity was influenced by vendors like Pure, which certainly have very effective marketing.

Cons

  • MetroCluster over IP is not supported on the C190, while the entry-level A220 & FAS2750 systems do support MCC-IP with ONTAP 9.6
  • The C190 requires ONTAP 9.6, and ONTAP 9.6 does not support 7MTT.


Disclaimer

All product names, logos, and brands are the property of their respective owners. All company, product, and service names used in this website are for identification purposes only. No one is sponsoring this article.

MAX Data: two primary use-cases

Thanks to the latest NetApp Tech ONTAP Podcast (Episode 185 – Oracle on MAX Data), I noted for myself two main use-cases for MAX Data software when used with a database (it does not really matter whether it is Oracle DB or any other). Here they are:

First configuration

When used without the Max Recovery functionality, NetApp recommends placing DB data files on the MAX FS (LUN A) with snapshots (MAX Snap) enabled there, and placing DB logs on a separate LUN B on the ONTAP system. In this case, if the persistent memory or the server is damaged, it will be possible to fully restore the data by recovering from a MAX Data snapshot on LUN A and then rolling out the latest transactions in the logs from LUN B to the DB.

  • Pros & Cons: In this case, transactions execute fast but are confirmed to clients at the speed of the logs stored on LUN B; also, the restoration process might take some time, because storage LUN speed is usually much slower than persistent memory. It is cheaper, since only one server with persistent memory is required.

Second configuration

When the logs need to be placed on the fast MAX Data FS together with the DB data files to increase overall performance (decrease latency) of transactions (execution time + confirmation time to clients), NetApp recommends using the Max Recovery functionality, which synchronously copies data from the primary server’s persistent memory to a recovery server’s persistent memory.

  • Pros & Cons: In this case, if the primary server loses its data due to a malfunction, the data can be quickly recovered back to the primary server over the RDMA connection from the recovery server’s persistent memory tier, restoring normal functioning of the primary server in less time than the first configuration. If data has to be restored entirely from storage, it might take a few hours for 1 TB of data, versus 5-10 minutes with Max Recovery. Transaction execution latency is a bit worse in this configuration, by a few microseconds, due to the added network latency of synchronous replication, but overall transaction latency (execution + client confirmation) is much better than in the first configuration, because the entire DB, including data files and logs, is stored on the fast persistent memory tier. Those few microseconds added to transaction execution time are a relatively small price in terms of overall transaction latency. Max Recovery requires another server with the same or a greater amount of persistent memory & an RDMA connection, which adds cost to the solution, but provides better protection and restoration speed in case of a primary server malfunction. The second configuration provides much better overall transaction latency than logs placed on a storage LUN.

Some thoughts about RAM

Speaking about the MAX Data configuration with MAX Snap enabled, where you put your DB logs on a dedicated LUN (the first configuration): it got me thinking, what if we used this configuration with ordinary RAM instead of Optane?

Of course, there will be disadvantages, same as in the first configuration, but there will be some pros as well:
1) In case of a disaster, all data in the RAM will be lost, so we will need to restore from a MAX Snap snapshot and then roll out the DB logs from the LUN, which will take some time
2) Transaction confirmation speed will be equal to the speed of the LUN with the logs. However, transaction execution will run at the speed of RAM
3) The price of RAM is higher. However, on the other hand, you do not need new “special” servers with special CPUs

I wouldn’t do the second configuration on RAM, though.

SolidFire in a new HCI form factor

Did you know that if you buy four half-width storage nodes in one 2U NetApp HCI chassis, without server nodes, they will act as ordinary SolidFire storage?

All new SolidFire storage nodes are part of the NetApp HCI architecture, but that doesn’t mean you can’t use them just as SolidFire storage without building an actual HCI. Yes, you can do that, even though the naming might confuse you a bit.

SolidFire form factor comparison:

 

                             Individual 1U storage nodes                       Storage nodes in 2U HCI chassis
Minimum nodes for a cluster  4 nodes                                           4 nodes
Rack units minimum           4U                                                2U
Each node                    1U                                                Half-width 1U
Drives                       12 SSDs per node, 48 minimum for 4 nodes          6 SSDs per node, 24 minimum for 4 nodes
Drive size                   960GB / 1.92TB / 3.84TB                           480GB / 960GB / 1.92TB
IOPS per node                100,000                                           50,000 / 100,000
Space with 4-node cluster    11.52TB (960) / 23.04TB (1.92) / 46.08TB (3.84)   2.88TB (480) / 5.76TB (960) / 11.52TB (1.92)

Summary

Both form factors are fully compatible and can be mixed together. The 2U HCI chassis lets you serve customers who need a smaller amount of initial space and then grow as needed with either 1U nodes or half-width nodes. Both 1U and half-width storage nodes can be used as storage-only; no compute nodes are required.