What you might have missed about NetApp from Aug-Nov 2019, including Insight in Las Vegas: Part 3

NetApp & Rubrik

NetApp and Rubrik announced a collaboration. First, StorageGRID can now be a target for Rubrik archives. Second, Rubrik now supports the NetApp SnapDiff API. SnapDiff is an ONTAP technology that compares two snapshots and returns the list of changed files, so Rubrik can copy only those files. Rubrik is not the first to work with the SnapDiff API: Catalogic, Commvault, IBM (TSM), and Veritas (NetBackup) can use it as well. But Rubrik is the first to back the data up to a public cloud. The integration will be available in Rubrik Cloud Data Management (CDM) v5.2 in 2020.
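As an illustration of the idea (not the real SnapDiff API, whose calls and data formats are NetApp's own), "compare two snapshots and return the changed files" can be sketched by diffing two file listings, where each snapshot is just a dict of path to content checksum:

```python
# Conceptual sketch only, not the real SnapDiff API: compare two snapshot
# file listings so a backup product can copy only what changed.

def snapdiff(base_snap, new_snap):
    """Return (added, changed, deleted) paths between two snapshot listings."""
    added   = [p for p in new_snap if p not in base_snap]
    changed = [p for p in new_snap
               if p in base_snap and new_snap[p] != base_snap[p]]
    deleted = [p for p in base_snap if p not in new_snap]
    return added, changed, deleted

# Hypothetical listings: path -> checksum of the file's contents.
base = {"/a.txt": "c1", "/b.txt": "c2", "/c.txt": "c3"}
new  = {"/a.txt": "c1", "/b.txt": "c9", "/d.txt": "c4"}
print(snapdiff(base, new))  # (['/d.txt'], ['/b.txt'], ['/c.txt'])
```

The payoff is the same as in the Rubrik integration: an incremental backup only has to read the files in `added` and `changed`, instead of walking the whole file system.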

NetApp & Veeam

Veeam Availability Orchestrator v3 (VAO) provides a new level of NetApp integration for data protection:

  • FULL recovery orchestration for NetApp ONTAP Snapshots
  • Automated testing and reports that have become essential to your DR strategies
  • TR-4777: Veeam & StorageGRID

Continue to read

All announcements from Aug-Nov 2019

Am I missing something?

Please let me know in the comments below!

If you spotted an error, please let me know personally 😉

Disclaimer

Opinions & observations are my own, and not official NetApp information. This post contains forward-looking statements and may contain errors. If you have spotted an error, please let me know.

What you might have missed about NetApp from Aug-Nov 2019, including Insight in Las Vegas: Part 2

MAX Data 1.5

  • Support for ONTAP 9.6 GA and later releases
  • Support for FAS storage systems and ONTAP Select systems running ONTAP 9.7, in addition to AFF storage systems
  • Resizing application memory allocation
  • Support for Red Hat Enterprise Linux 7.7
  • Support for local snapshots on server-only systems
  • Significant performance improvements, with more I/O and less latency: 5.4M 4KB read IOPS @ 12.5 µs latency

Previously in 1.4

With version 1.4 you can use MAX Data without an AFF system: tiering now works between PMEM and the SSDs installed in the server.

Some information has leaked that HCI will support MAX Data at some point.

Considering the new H615C compute node with Cascade Lake CPUs, which are, by the way, required for Optane memory, it looks like NetApp is putting the pieces together to make it happen.

Continue to read

Announcements from Aug-Nov 2019

Am I missing something?

Please let me know in the comments below!

If you spotted an error, please let me know personally 😉

Disclaimer

Opinions & observations are my own, and not official NetApp information. This post contains forward-looking statements and may contain errors. If you have spotted an error, please let me know.

What you might have missed about NetApp from Aug-Nov 2019, including Insight in Las Vegas: Part 1

E-Series

Performance

End-to-end NVMe with the EF600 – more I/O (2x the EF570) and less latency:

NVMe in EF600

  • 100Gb NVMe/RoCE
  • 100Gb NVMe/InfiniBand
  • 32Gb NVMe/FC

E-Series Performance Analyzer

An automated installation and deployment of Grafana, NetApp E-Series Web Services, and supporting software for performance monitoring of NetApp E-Series storage systems. NetApp intends this project to let you quickly and simply deploy an instance of the performance analyzer for monitoring your E-Series storage systems; it incorporates various open-source components and tools to do so. While it is primarily intended as a reference implementation for using Grafana to visualize the performance of your E-Series systems, it can also be customized and extended to your individual needs.

https://github.com/NetApp/eseries-perf-analyzer

New TR docs about EF & DB

Continue to read

announcements from Aug-Nov 2019

Am I missing something?

Please let me know in the comments below!

If you spotted an error, please let me know personally 😉

Disclaimer

Opinions & observations are my own, and not official NetApp information. This post contains forward-looking statements and may contain errors. If you have spotted an error, please let me know.

What you might have missed about NetApp from Aug-Nov 2019, including Insight in Las Vegas

Some competitors might say NetApp does not innovate anymore. Well, read this article and decide for yourself whether that is true or just yet another piece of shameless marketing.

E-Series

Performance

End-to-end NVMe with the EF600 – more I/O (2x the EF570) and less latency:

NVMe in EF600

  • 100Gb NVMe/RoCE
  • 100Gb NVMe/InfiniBand
  • 32Gb NVMe/FC

E-Series Performance Analyzer

An automated installation and deployment of Grafana, NetApp E-Series Web Services, and supporting software for performance monitoring of NetApp E-Series storage systems. NetApp intends this project to let you quickly and simply deploy an instance of the performance analyzer for monitoring your E-Series storage systems; it incorporates various open-source components and tools to do so. While it is primarily intended as a reference implementation for using Grafana to visualize the performance of your E-Series systems, it can also be customized and extended to your individual needs.

https://github.com/NetApp/eseries-perf-analyzer

New TR docs about EF & DB

MAX Data 1.5

  • Support for ONTAP 9.6 GA and later releases
  • Support for FAS storage systems and ONTAP Select systems running ONTAP 9.7, in addition to AFF storage systems
  • Resizing application memory allocation
  • Support for Red Hat Enterprise Linux 7.7
  • Support for local snapshots on server-only systems
  • Significant performance improvements, with more I/O and less latency: 5.4M 4KB read IOPS @ 12.5 µs latency

Previously in 1.4

With version 1.4 you can use MAX Data without an AFF system: tiering now works between PMEM and the SSDs installed in the server.

Some information has leaked that HCI will support MAX Data at some point.

Considering the new H615C compute node with Cascade Lake CPUs, which are, by the way, required for Optane memory, it looks like NetApp is putting the pieces together to make it happen.

NetApp & Rubrik

NetApp and Rubrik announced a collaboration. First, StorageGRID can now be a target for Rubrik archives. Second, Rubrik now supports the NetApp SnapDiff API. SnapDiff is an ONTAP technology that compares two snapshots and returns the list of changed files, so Rubrik can copy only those files. Rubrik is not the first to work with the SnapDiff API: Catalogic, Commvault, IBM (TSM), and Veritas (NetBackup) can use it as well. But Rubrik is the first to back the data up to a public cloud. The integration will be available in Rubrik Cloud Data Management (CDM) v5.2 in 2020.

NetApp & Veeam

Veeam Availability Orchestrator v3 (VAO) provides a new level of NetApp integration for data protection:

  • FULL recovery orchestration for NetApp ONTAP Snapshots
  • Automated testing and reports that have become essential to your DR strategies
  • TR-4777: Veeam & StorageGRID

Active IQ 2.0

  • AI learning algorithms based on the wisdom of the customer base provide better insights about risks
  • APIs for Active IQ
  • Now you can not only monitor but also take action and fix risks with Active IQ (which happens through Unified Manager)
  • It also decreases "noise": Active IQ tries to find the root of a problem and solve first the issues that fix the majority of your risks. Say you have 100 risks; it might turn out that a few fixes resolve 70% of them
  • OneCollect is available in a container, and now sends info to Active IQ

AFF & FAS

AFF400, FAS8300, FAS8700. All of these systems are basically the same platform, but with different amounts of memory and CPU cores. The FAS systems also have an onboard FlashCache module for read caching. All three require ONTAP 9.7 RC1 and use the latest Intel Cascade Lake processors, which gives us hope we might see Optane as a cache, as NetApp showed as part of its vision at Insight 2017:

  • Read more about new AFF400 system at Andre Schmitz’s blog
  • AFF with HW-assisted dedup & compression on a Pensando network card used for cluster interconnects
  • RoCE as data protocol available as Proof of Concept lab for testing, but not in 9.7
  • The AFF A400 now supports the NS224 disk shelf, alongside the A320

AFF & NVMe

ONTAP 9.6 is qualified with RedHat, SUSE, and Oracle Linux for FC-NVMe. Storage path failover (ANA) is supported with SUSE Linux 15 and Oracle Linux 7.6 and 7.7, and (still) not with RedHat Linux because of RedHat's slowness.

ONTAP AI with containers

Execute AI workloads at scale with Trident and Kubernetes on the "ONTAP AI" converged-infrastructure solution with NVIDIA DGX servers, networking & NetApp AFF NAS storage.

ASA

ASA uses the same AFF hardware but provides only SAN protocols. ASA systems run ONTAP with the same architecture, but with symmetric active/active network access to the block devices. Watch a short video about ASA on Twitter. The hardware for ASA is identical to standard AFF A700 and AFF A220 systems, and the software bundle includes the Flash/Premium bundle licenses minus the NAS-only features.

ONTAP

Much new stuff in ONTAP 9.7

  • User Interface Improvements
    • Simplified UI allows faster creation of LUNs, NAS shares & DP
    • Hardware health
    • Network connectivity diagram
    • Performance history (previously, only performance since the moment you opened the page)
  • FabricPool Mirrors: one aggregate connected to two buckets for added resiliency (previously, only one bucket per aggregate)
  • CVO now supports Cloud Backup Service in AWS
  • Synchronous mirroring (SM-S) with MetroCluster (MCC)
  • Map NFS clients to specific volumes and IP addresses
  • XCP for rapid file delete
  • TCP Performance Enhancements in ONTAP 9.6
  • Watch a video about ONTAP 9.7 with Skip Shapiro

ONTAP Select

Networking best practices for ONTAP Select

ONTAP SDS in embedded non-x86 systems for edge devices

A slimmed-down version of ONTAP Select called Photon is used on edge devices. See the Blocks & Files blog post.

FlexGroup

  • Fast in-place conversion FlexVol to FlexGroup
  • NFSv4, 4.1 & pNFS support with NFS security and locking
    • With pNFS, the data I/O path follows locality in the cluster
  • NDMP with FlexGroups: entire FG, directory or Qtree
    • Dump & Restores between FGs or FG & FlexVols
  • FlexGroup is now supported as an origin volume for FlexCache
  • FlexClone support with FlexGroup
  • VAAI and copy-offload support, though FlexGroups are still not recommended for VMs; we can clearly see NetApp moving in that direction

SnapMirror Sync (SM-S)

  • SM-S now replicates application-created snapshots & FlexClones. Scheduled snapshots are not replicated, though
  • NVMe/FC protocol and namespace support added

NDAS

Since NDAS replicates data together with metadata (unlike FabricPool), you can now access data directly out of the bucket

  • StorageGRID as Object storage added for NDAS besides existing AWS S3
  • NetApp is looking into the ability to connect the NetApp In-Place Analytics Module to an object storage bucket with data backed up by NDAS from ONTAP NAS, so you can run analytics on your data and on how it evolved over time (i.e., snapshots)
  • The NetApp Interoperability Matrix Tool now lists supported configurations for NetApp Data Availability Services
  • AutoSupport is now enabled by default to send NDAS diagnostic records to NetApp Support
  • 200 TB of managed capacity with 100 million files is now supported
  • ONTAP 9.6P1 & ONTAP 9.7 and later releases are now supported on the target cluster
  • Bring your own licenses (BYOL) are now available for new and upgraded NDAS instances
  • The NDAS PreChecker tool is now available to assess your environment’s readiness for deploying NDAS
  • New AWS regions are now supported for NDAS with AWS:
    • EU (Ireland)
    • EU (Frankfurt)
    • Asia Pacific (Singapore)
    • Asia Pacific (Sydney)
    • Asia Pacific (Hong Kong)

https://docs.netapp.com/us-en/netapp-data-availability-services/reference_whats_new.html#release-1-1-1-7-november-2019

SnapCenter 4.2

  • Installation in a few clicks with a Linux virtual appliance
  • Simplified addition of new hosts & RBAC
  • Cluster management LIFs (before, you had to add each individual SVM management LIF)
  • Support for the vSphere HTML5 client
  • Automatic configuration check for servers & SnapCenter plugins
  • Oracle 18c
  • The SnapCenter Plug-in for Microsoft Windows now supports disk creation for Windows Server 2019
  • Video about SnapCenter with Steven Cortez

New with VMware & VVOLs:

  • Adaptive QoS with VVOLs is disabled & no longer recommended as of VASA 9.6P1
  • Naming issues with VVOLs on NetApp fixed in 9.6P1
  • Support for ESXi 6.7 U2
  • In ONTAP 9.7 System Manager, you can create an RBAC user role for VSC
  • IOPS, latency, and throughput in VSC 9.6 for VVOLs on SAN (statistics come directly from ONTAP)
  • REST API for the VASA+VSC+SRM appliance (install & config and for storage provisioning and management)
  • Wizard in VSC 9.6 creates VVOL endpoints

SRM support with VVOLs is coming at some point in time

Virtual Storage Console (VSC)

  • Availability of VVol reports
  • Simplified provisioning of data stores
  • Support for configuring throughput floors (QoS Min) for storage capability profiles
  • Support for REST APIs
  • Video with Karl Konnerth about VSC

Active IQ Unified Manager 9.7

  • The most important improvement is actions with a simple "Fix it" button
  • New cluster security section in the dashboard
  • Integration with NetApp Service Level Manager, which is now free and included in ONTAP

FlexCache

  • Is now free
  • Supported with on-prem ONTAP systems & Cloud Volumes ONTAP
  • Thin provisioning and data compaction
  • Cache volume migration
  • ONTAP auditing of cache volume reads
  • Antivirus scanning of origin volumes
  • Any-to-Any Caching between all ONTAP platforms on-prem & in the cloud, including MCC

MCC

  • FabricPool aggregate support for MCC FC and IP
  • FlexCache support for MCC FC and IP

MetroCluster IP

Open networking: an MCC IP 4-node cluster can use the customer's own switches instead of dedicated MetroCluster switches. The switches must comply with NetApp requirements and be supported by the customer or the switch vendor.

Supported platforms:

  • AFF A800, A700, A320, A300, and even low-end system A220
  • FAS 9000, 8200, 2750

MCC-FC

Supported platforms:

  • AFF A700, A400, A300
  • FAS 9000, 8700, 8300, 8200

ONTAP Mediator instead of tie breaker

A new Mediator (instead of the Tiebreaker) for MCC & ONTAP SDS. The Mediator is a passive appliance managed from ONTAP. All the logic now lives in ONTAP, whereas previously the switchover logic was in the Tiebreaker.

StorageGRID v11.3

Cloud Storage Pools, combined with the StorageGRID industry-leading data placement policy engine (ILM policies), enable your business to automatically and intelligently move only the data that needs to move to one or more clouds. Cloud Storage Pools can be based on buckets, applications, or tenants or at the individual object level (based on metadata key and/or value pairs).

https://blog.netapp.com/introducing-azure-blob-support-tier-to-more-clouds-with-storagegrid/
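As a toy illustration of metadata-driven placement (this is not StorageGRID's actual ILM rule syntax; the rule and pool names below are made up), a policy engine boils down to matching object metadata key/value pairs against an ordered list of rules:

```python
# Toy sketch only: how an ILM-style policy engine might choose a storage pool
# based on object metadata key/value pairs. Rule and pool names are made up.

def place_object(metadata, rules, default_pool):
    """Return the first pool whose rule matches the object's metadata."""
    for required, pool in rules:  # rules are checked in priority order
        if all(metadata.get(k) == v for k, v in required.items()):
            return pool
    return default_pool

rules = [
    ({"tier": "cold"},  "aws-s3-glacier-pool"),   # hypothetical cloud pool
    ({"dept": "video"}, "azure-blob-pool"),       # hypothetical cloud pool
]
print(place_object({"tier": "cold"}, rules, "local-pool"))  # aws-s3-glacier-pool
print(place_object({"dept": "hr"},   rules, "local-pool"))  # local-pool
```

The real engine evaluates far richer conditions (bucket, tenant, age, size), but the principle is the same: only objects matching a cloud rule ever leave the grid.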

Keystone: Stuff as a Service

NetApp Launches On-Prem Data Center Storage as a Service:

  • Cloud-like pay-as-you-go Storage
  • Share risks with the Customer
  • Guarantees
  • Performance
  • Availability & Efficiency
  • With real Balance sheet

Complete Digital Advisors as part of Support Edge:

  • AI powered Insights & Smart Health Checks
  • Personalized Support as part of Support Edge
  • Predictable Support Pricing

Lab on demand

New labs for NetApp partners: NKS on NetApp HCI, MAX Data, VSC 9.6 with vSphere 6.7, Active IQ Unified Manager 9.6

Lab on demand for Customers

Even if you are not NetApp customer yet, you can test NetApp products:

  • Storage Tiering with FabricPool
  • Enterprise Application Protection in the Data Fabric with ONTAP
  • Understanding Trident
  • Others

https://www.netapp.com/us/try-and-buy/test-drive/index.aspx

There are more labs for current NetApp customers

Hands-on lab is in beta testing at the moment:

  • Data Protection
  • Enterprise Database Backups
  • FabricPool
  • ONTAP NAS Technologies
  • ONTAP SAN Technologies
  • Trident plugin for containers & Kubernetes

https://mysupport-beta.netapp.com/ui/#/tools

NAbox

With Grafana, you can visualize performance, capacity & other metrics from ONTAP & Unified Manager stored in Graphite. Download v2.5.1 (2019-09-08).

Harvest 1.6:

NetApp Harvest collects data from ONTAP & Unified Manager and sends it over to Graphite. Harvest is part of NABox.

  • Harvest Extension Manager
    • Extension for NFS connections
    • Extension for SnapMirror replications
    • Extension templates for Perl, Python, Bash
  • FlexGroup capacity metrics
  • AutoSupport logging with Harvest statistics
  • Non-critical bug fixed in plugin cdot-iscsi-lif
  • Harvest 1.6

SaaS Backup

SaaS backup for Salesforce

Now available in Salesforce marketplace

Cloud Volumes

  • Azure NetApp Files certified for AzureGov region and FedRAMP
  • SAP HANA now certified on ANF
  • Kerberos and ACL for Azure NetApp Files (ANF) and Cloud Volumes Service (CVS) with NFS v4.1
  • Cloud Manager supports NKS & Cloud Compliance
  • Cloud Manager to Deploy Trident and Cloud Volumes ONTAP
  • Cloud Volumes ONTAP (CVO) in GCP
  • Cloud Volumes Services in GCP
  • Azure VMware Solution with Azure NetApp Files (ANF)
  • VMware Cloud on AWS with CVS & CVO

Cloud Volumes On-Premises

  • Cloud Volumes Service On-Premises
  • Cloud Volumes ONTAP on HCI

Cloud Compliance

Cloud Compliance is a file classification tool for industry compliance and privacy regulations across clouds. The license is free; you pay only for the VM running at your cloud provider.

  • File classification & privacy risk tool for Cloud Volumes
  • The license is free; you pay your cloud provider only for running an additional VM
  • PCI, HIPAA, GDPR, CCPA, etc
  • Credit Cards, IBAN, Emails, SSN, TIM, EIN, etc

Cloud Insights

It is a monitoring tool for cloud & on-prem infrastructure, containers & different vendors, and the basic edition is free for all NetApp customers. Cloud Insights is available as a SaaS service, so no installation & configuration is needed. So here is what's new:

  • REST APIs
  • Kubernetes Technology
  • Cloud Secure now is part of Cloud Insights
  • Active IQ integration

Cloud Secure

Thanks to self-learning AI, Cloud Secure identifies baseline file access patterns and actively blocks & reports unusual activity with your files, protecting against insider threats and external breaches. Cloud Secure monitors, secures, and optimizes across the hybrid multi-cloud

  • It is a feature of Cloud Insights Premium edition
  • Monitor file activities at CVO & on-prem ONTAP for unusual activities

NetApp Kubernetes Services (NKS)

At the start, NKS was available in the public clouds; then NetApp added NKS to on-prem NetApp HCI. And at VMworld 2019 in Las Vegas, before Insight 2019, NetApp announced NKS on VMware (no NetApp HCI platform needed).

New Solutions

  • Healthcare and automotive AI solutions validated on ONTAP AI
  • Design and recipe using NKS for rapid deployment of CI/CD pipeline use cases in both public and private clouds
  • End-to-end DevOps CI/CD on NetApp HCI
  • FlexPod validated as a deployment target for NetApp Kubernetes Service
  • SAP HANA validated on FlexPod (Memory-Accelerated FlexPod) including Optane memory and SnapCenter data protection
  • Citrix XenDesktop 7 designs for knowledge workers and GPU assisted VDI for power users
  • Entry-level FlexPod with NetApp C190 and Cisco UCS C220, bringing enterprise robustness to medium-size businesses
  • FlexPod with FabricPool
  • FlexPod ransomware prevention and mitigation
    • Pre-emptive hardening
    • Recovery and remediation
  • Data Protection and Security Assessment
    • Identify and mitigate risks from security gaps, ransomware
    • Leverages OCI, Cloud Secure/Cloud Insights
  • And many more

HCI

New in NetApp HCI:

Two performance tests for HCI:

  1. IOmark-VM-HC: 5 storage & 18 compute nodes using data stores & VVols
  2. IOmark-VDI-HC: 5 storage nodes & 12 compute nodes with only data stores

Total 1,440 VMs with 3,200 VDI desktops.

Notice how asymmetrical the number of storage nodes is compared to the number of compute nodes. In "real" HCI architectures with local disk drives you would have to buy more equipment, while with NetApp HCI you can choose how much storage and how much compute you need and scale them separately. Dedup & compression were enabled in the tests.

Containers

Kubernetes Volume Snapshots & FlexClones with Trident for Cloud & On-Prem.

In August, NetApp took its Managed Kubernetes Service to the next level with a new partnership with VMware, allowing users to manage workloads and infrastructure across public and private environments in a single pane of glass.

NetApp Trident

Ansible

How to measure storage performance

SNIA (Storage Networking Industry Association), a hardcore 4-hour session!

Technical Support

  • Flat and predictable support pricing
  • OneCollect available in a container and now sends info to Active IQ
  • You can buy managed services so that firmware upgrades on your storage are managed for you

How to collect logs before opening a support ticket

Gartner Magic Quadrant for Primary Array

NetApp is in the top-right corner.

Will NetApp adopt QLC flash in 2020?

https://blocksandfiles.com/2019/10/30/netapp-will-adopt-qlc-flash-in-2020/

Am I missing something?

Please let me know in the comments below!

If you spotted an error, please let me know personally 😉

Disclaimer

Opinions & observations are my own, and not official NetApp information. This post contains forward-looking statements and may contain errors. If you have spotted an error, please let me know.

What you might have missed about NetApp from Aug-Nov 2019, including Insight in Las Vegas: Contents

Some competitors might say NetApp does not innovate anymore. Well, read this article and decide for yourself whether that is true or just yet another piece of shameless marketing.

Part 1

E-Series

Performance

NVMe in EF600

E-Series Performance Analyzer

New TR docs about EF & DB

Part 2

MAX Data 1.5

Previously in 1.4

Part 3

NetApp & Rubrik

NetApp & Veeam

Part 4

Active IQ 2.0

Active IQ Unified Manager 9.7

Part 5

AFF & FAS

AFF & NVMe

ONTAP AI with containers

ASA

ONTAP

ONTAP Select

ONTAP SDS in embedded non-x86 systems for edge devices

FlexGroup

SnapMirror Sync (SM-S)

NDAS

SnapCenter 4.2

New with VMware & VVOLs:

Virtual Storage Console (VSC)

FlexCache

MCC

MetroCluster IP

MCC-FC

ONTAP Mediator instead of tie breaker

Part 6

StorageGRID v11.3

Part 7

Keystone

Complete Digital Advisors as part of Support Edge:

Part 8

Lab on demand

Lab on demand for Customers

There are more labs for current NetApp customers

Part 9

NAbox

Harvest 1.6

Part 10

SaaS Backup

SaaS backup for Salesforce

Cloud Volumes

Cloud Volumes On-Premises

Cloud Compliance

Cloud Insights

Cloud Secure

NetApp Kubernetes Services (NKS)

HCI

Part 11

New Solutions

Part 12

Containers

NetApp Trident

Ansible

Part 13

Technical Support

How to collect logs before opening a support ticket

How to measure storage performance

Gartner Magic Quadrant for Primary Array

Will NetApp adopt QLC flash in 2020?

Continue to read

All announcements from Aug-Nov 2019

Am I missing something?

Please let me know in the comments below!

If you spotted an error, please let me know personally 😉

Disclaimer

Opinions & observations are my own, and not official NetApp information. This post contains forward-looking statements and may contain errors. If you have spotted an error, please let me know.

Why use NetApp snapshots even when you do not have Premium bundle software?

If you are extremely lazy and do not want to read any farther, the answer is: use snapshots to improve RPO, use ndmpcopy to restore files and LUNs, and use SnapCreator for app-consistent snapshots.

The Premium bundle includes a good deal of software on top of the Base software in each ONTAP system, like:

  • SnapCenter
  • SnapRestore
  • FlexClone
  • And others.

So, without the Premium bundle, with only the Basic software, we have two issues:

  • You can create snapshots, but without SnapRestore or FlexClone you cannot restore from them quickly
  • And without SnapCenter you cannot make application-consistent snapshots.

And some people ask, "Do I need to use NetApp snapshots in such circumstances?"

My answer is: yes, you can, and you should use ONTAP snapshots.

Here is the explanation of why and how:

Snapshots without SnapRestore

Why use NetApp storage hardware snapshots? Because they have no performance penalty, and there is no such thing as snapshot consolidation, which causes a performance impact on some other platforms. NetApp snapshots work pretty well, and they have other advantages too. Even though restoring data captured in snapshots is not as fast as with SnapRestore or FlexClone, you can create snapshots very fast, and most of the time you need to restore something very seldom, so fast snapshot creation with slow restoration still gives you a better RPO compared to a full backup. Of course, I have to admit that RPO improves only for cases where your data was logically corrupted and no physical damage was done to the storage: if your storage is physically damaged, snapshots will not help. With ONTAP you can have up to 1023 snapshots per volume, and you can create them as fast as you need with no performance degradation whatsoever, which is pretty awesome.

Snapshots with NAS 

If we are speaking about a NAS environment without a SnapRestore license, you can always go to the .snapshot folder and copy out any previous version of a file you need to restore. Also, you can use the ndmpcopy command to restore a file, a folder, or even a whole volume inside the storage without involving a host.
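To illustrate the .snapshot approach, here is a small self-contained Python sketch that simulates the volume layout locally; the directory names and snapshot name are hypothetical stand-ins for a real mounted share:

```python
# Simulated layout only: ONTAP exposes read-only snapshot copies of a NAS
# volume under a hidden .snapshot directory at the share root. The paths and
# the snapshot name "nightly.0" here are hypothetical stand-ins.
import pathlib, shutil, tempfile

share = pathlib.Path(tempfile.mkdtemp())          # stand-in for the mount point
(share / ".snapshot/nightly.0/docs").mkdir(parents=True)
(share / "docs").mkdir()
(share / ".snapshot/nightly.0/docs/report.txt").write_text("old, good version")
(share / "docs/report.txt").write_text("corrupted version")

# Restore the previous version by copying it out of the snapshot directory:
shutil.copy(share / ".snapshot/nightly.0/docs/report.txt",
            share / "docs/report.txt")
print((share / "docs/report.txt").read_text())    # old, good version
```

On a real share the copy is exactly this simple, because the snapshot directory looks like any other read-only folder to the client.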

Snapshots with SAN 

If we are speaking about a SAN environment without a SnapRestore license, you cannot simply copy a file from your LUN and restore it. There are two stages if you need to restore something on a LUN:

  1. You copy the entire LUN from a snapshot
  2. And then you can either:
    • Restore the entire LUN in place of the last active version of your LUN
    • Or copy data from the copied LUN to the active LUN.

To do that, you can use either the ndmpcopy or the lun copy command to perform the first stage. And if you want to restore only some files from an old version of the LUN in a snapshot, you need to map the LUN copy to a host and copy the required data back to the active LUN.

Application consistent storage snapshots 

Why do you need application consistency in the first place? Sometimes, in an environment like a NAS file share with documents, you do not need it at all. But if you are running applications like Oracle DB, MS SQL, or VMware, you'd better have application consistency.

Imagine you have a Windows machine and you pull the hard drive out while Windows is running. Let's forget for a moment that Windows will stop working; that is not the point here. Let's focus on the data protection side. The same thing happens when you create a plain storage snapshot: the data captured in that snapshot is similarly incomplete. Will the pulled-out hard drive be a proper copy of your data? Only kind of, because some data is lost in host memory, and the file system will probably not be consistent. Even if the journaled file system can be recovered, your application data may be damaged in a way that is hard to repair, again because of the data lost from host memory, since the applications and OS running on the machine did not have a chance to destage data from memory to the drive. If you try to restore from such a copy, Windows might not start, or it might start after a file-system check, but your applications, especially databases, definitely will not like such a backup. So, you need someone to prepare your OS & applications before creating the snapshot. As you may know, application-consistent storage hardware snapshots can be created by backup software like Veeam, Commvault, and many others, or you can even trigger storage snapshot creation yourself with a relatively simple Ansible or PowerShell script.

Also, you can create application-consistent snapshots with the free NetApp SnapCreator framework. Unlike SnapCenter, it does not have simple, straightforward GUI wizards that walk you through integrating with your application; most of the time you have to write a small script for your application to benefit from online, application-consistent snapshots. Another downside is that SnapCreator is not officially supported software. But at the end of the day, it is a relatively easy setup, and it will definitely pay off once you finish setting it up.
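Whatever tool you use, the ordering such a script has to respect is the same: quiesce the application, take the snapshot, then always unquiesce, even if the snapshot fails. A minimal sketch of that ordering follows; all three functions are hypothetical stubs, not a real ONTAP or application API:

```python
# Conceptual sketch only: the quiesce -> snapshot -> unquiesce ordering that
# any application-consistent snapshot script must follow. All callables here
# are hypothetical stubs standing in for real app/storage calls.

def app_consistent_snapshot(quiesce, create_snapshot, unquiesce):
    """Run the three phases in order; always unquiesce, even on failure."""
    quiesce()                      # e.g. put the DB in backup mode, fsfreeze
    try:
        return create_snapshot()   # e.g. an ONTAP CLI/API call
    finally:
        unquiesce()                # the app must never stay frozen

events = []
snap = app_consistent_snapshot(
    quiesce=lambda: events.append("quiesce"),
    create_snapshot=lambda: events.append("snapshot") or "snap-001",
    unquiesce=lambda: events.append("unquiesce"),
)
print(events)  # ['quiesce', 'snapshot', 'unquiesce']
print(snap)    # snap-001
```

The try/finally is the important design point: a failed snapshot call must still thaw the application, otherwise a monitoring hiccup turns into an outage.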

List of other software features available in Basic software

This Basic ONTAP functionality also might be useful: 

  • Horizontal scaling and non-disruptive operations, such as online volume & LUN migration and non-disruptive upgrades by adding new nodes to the cluster
  • API automation
  • FPolicy file screening
  • Create snapshots to improve RPO
  • Storage efficiencies: Deduplication, Compression, Compaction
  • By default, ONTAP deduplicates data across the active file system and all the snapshots on the volume. Savings from snapshot data sharing grow with the number of snapshots: the more snapshots you have, the more savings you get
  • Storage Multi-Tenancy
  • QoS Maximum
  • External key manager for Encryption
  • Host-based MAX Data software which works with ONTAP & SAN protocols
  • You can buy FlexArray license to virtualize 3rd party storage systems
  • If you have an All Flash system, you can purchase an additional FabricPool license, which is especially useful with snapshots because it destages cold data to cheap storage like AWS S3, Google Cloud, Azure Blob, IBM Cloud, Alibaba Cloud, an on-premises StorageGRID system, etc.

Summary

Even the Basic software on your ONTAP system has rich functionality, so you definitely should use NetApp snapshots and set up application integration to make your snapshots application-consistent. With hardware NetApp storage snapshots, you can keep 1023 snapshots per volume and create them as fast as you need without sacrificing storage performance, so snapshots will improve your RPO. Application consistency with SnapCreator or any other 3rd-party backup software will give you confidence that your snapshots can be restored when needed.

ONTAP & Antivirus NAS protection

NetApp with the ONTAP OS supports antivirus integration, known as off-box antivirus scanning or VSCAN. With VSCAN, the storage system scans each new file with an antivirus system. VSCAN increases corporate data security.

ONTAP supports the following antivirus software:

  • Symantec
  • Trend Micro
  • Computer Associates
  • Kaspersky
  • McAfee
  • Sophos

Also, ONTAP supports the FPolicy technology, which can prevent a file from being written or read based on its extension or content header.

This time I'd like to discuss an example of CIFS (SMB) integration with the McAfee antivirus system.

AV-1

In this example I'm going to show how to set up the integration with McAfee. Here are the minimum requirements for McAfee; they are approximately the same for other AVs:

  • MS Windows Server 2008 or higher
  • NetApp storage with ONTAP 8 or higher
  • SMB v2 or higher (CIFS v1.0 not supported)
  • NetApp ONTAP AV Connector (Download page)
  • McAfee VirusScan Enterprise for Storage (VSEfS)
  • For more details, see the NetApp Interoperability Matrix Tool.
AV-2

Diagram of antivirus integration with ONTAP system.

Preparation

To set up such an integration, we will need to configure the next software components:

AV-3

VSEfS

We need to set up McAfee VSEfS, which can work in two modes: as an independent product or managed by McAfee ePolicy Orchestrator (McAfee ePO). In this article, I will discuss how to configure it as an independent product. To set up & configure VSEfS, we need the following already installed and configured:

  • McAfee VirusScan Enterprise (VSE). Download VSE
  • McAfee ePolicy Orchestrator (ePO); not needed when VirusScan is used as an independent product.

SCAN Server

First, we need to configure a few SCAN servers to balance the workload between them. I will install each SCAN server on a separate Windows Server with McAfee VSE, McAfee VSEfS, and the ONTAP AV Connector. In this article, we will create three SCAN servers: SCAN1, SCAN2, SCAN3.

Active Directory

At the next step, we need to create the user scanuser in our domain; in this example the domain is NetApp.

ONTAP

After ONTAP has been set up, we need to create a cluster management LIF and an SVM management LIF, set up AD integration, and configure file shares and data LIFs for the SMB protocol. Here, we will have the NCluster-mgmt LIF for cluster management and SVM01-mgmt for SVM management.

NCluster::> network interface create -vserver NCluster -home-node NCluster-01 -home-port e0M -role data -protocols none -lif NCluster-mgmt -address 10.0.0.100 -netmask 255.0.0.0
NCluster::> network interface create -vserver SVM01 -home-node NCluster-01 -home-port e0M -role data -protocols none -lif SVM01-mgmt -address 10.0.0.105 -netmask 255.0.0.0
NCluster::> domain-tunnel create -vserver SVM01
NCluster::> security login create -username NetApp\scanuser -application ontapi -authmethod domain -role readonly -vserver NCluster
NCluster::> security login create -username NetApp\scanuser -application ontapi -authmethod domain -role readonly -vserver SVM01
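The file shares and SMB data LIFs mentioned above could be configured along these lines (the LIF name, home port, addresses, domain and share path are hypothetical; check the exact parameters for your ONTAP version):

NCluster::> network interface create -vserver SVM01 -lif SVM01-data1 -role data -data-protocol cifs -home-node NCluster-01 -home-port e0e -address 10.0.0.110 -netmask 255.0.0.0
NCluster::> vserver cifs create -vserver SVM01 -cifs-server SVM01 -domain netapp.local
NCluster::> vserver cifs share create -vserver SVM01 -share-name share1 -path /vol1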

ONTAP AV Connector

On each SCAN server, we will install the ONTAP AV Connector. At the end of the installation, I will add the AD login & password for the user scanuser.

AV-4

Then configure management LIFs

Start → All Programs → NetApp → ONTAP AV Connector → Configure ONTAP Management LIFs

In the “Management LIF” field we will add the DNS name or IP address of NCluster-mgmt or SVM01-mgmt. We will fill the Account field with NetApp\scanuser. Then press “Test,” and once the test has finished successfully, press “Update” or “Save.”

AV-5

McAfee Network Appliance Filer AV Scanner Administrator Account

Assuming you have already installed McAfee on the three SCAN servers: on each SCAN server, we log in as an administrator, open the VirusScan Console from the Windows taskbar, then open Network Appliance Filer AV Scanner and choose the “Network Appliance Filers” tab. In the field “This Server is processing scan request for these filers,” press the “Add” button, put “127.0.0.1” in the address field, and then also add the scanuser credentials.

AV-6

Returning to ONTAP console

Now we configure off-box scanning, enable it, and create and apply scan policies. SCAN1, SCAN2, and SCAN3 are the Windows servers with McAfee VSE, VSEfS, and the ONTAP AV Connector installed.
First, we create a pool of AV servers:

NCluster::> vserver vscan scanner-pool create -vserver SVM01 -scanner-pool POOL1 -servers SCAN1,SCAN2,SCAN3 -privileged-users NetApp\scanuser 
NCluster::> vserver vscan scanner-pool show
                                    Scanner Pool    Privileged      Scanner
Vserver   Pool        Owner         Servers         Users           Policy
--------- ----------- ------------- --------------- --------------- -------
SVM01     POOL1       vserver       SCAN1, SCAN2,   NetApp\scanuser idle
                                    SCAN3

NCluster::> vserver vscan scanner-pool show -instance
                                    Vserver: SVM01
                               Scanner Pool: POOL1
                             Applied Policy: idle
                             Current Status: off
                  Scanner Pool Config Owner: vserver
       List of IPs of Allowed Vscan Servers: SCAN1, SCAN2, SCAN3
                   List of Privileged Users: NetApp\scanuser

Second, we apply a scanner policy:

NCluster::> vserver vscan scanner-pool apply-policy -vserver SVM01 -scanner-pool POOL1 -scanner-policy primary
NCluster::> vserver vscan enable -vserver SVM01
NCluster::> vserver vscan connection-status show
                        Connected    Connected
Vserver   Node          Server-Count Servers
--------- ------------- ------------ ------------------------
SVM01     NClusterN1    3            SCAN1, SCAN2, SCAN3

NCluster::> vserver vscan on-access-policy show
           Policy        Policy   Policy    Paths     File-Ext  Policy
Vserver    Name          Owner    Protocol  Excluded  Excluded  Status
---------- ------------  -------  --------  --------  --------  ------
NCluster   default_CIFS  cluster  CIFS      -         -         off
SVM01      default_CIFS  cluster  CIFS      -         -         on

Licensing

There is no additional licensing needed on the ONTAP side to enable and use FPolicy & off-box antivirus scanning; this is basic functionality available in any ONTAP system. However, you might need to license additional functionality on the antivirus side, so please check with your antivirus vendor.

Summary

Here are some advantages of integrating your storage system with your corporate AV. NAS integration with antivirus allows you to have one antivirus system on your desktops and another for your NAS shares. There is no need to do NAS scanning on workstations and waste their limited resources. All NAS data is protected: there is no way for a user with advanced privileges to connect to the file share bypassing antivirus protection and put unscanned files there.

NetApp in containerization era

This is not really a technical article as I usually write, but rather a short list of topics for one direction NetApp has been developing a lot recently: “Containers”. Containerization is getting more and more popular nowadays, and I have noticed NetApp heavily investing in it, so I identified the main directions in that field. Let’s name a few NetApp products using containerization technology:

  1. E-Series with running containers on top of the platform
  2. Containerization of existing NetApp software:
    • ActiveIQ PAS
  3. Trident plugin for ONTAP, SF & E-Series (NAS & SAN):
    • NetApp Trident plug-in for Jenkins
    • Converged Infrastructure:
    • Oracle, PostgreSQL & MongoDB in containers with Trident
    • Integration with Moby Project
    • Integration with Mesosphere Project
  4. Cloud-native services & Software:
    • Cloud Insights
      • Monitor Kubernetes environment
      • NKS visibility in Cloud Insights
    • SaaS Backup for Service Providers
  5. Other

Documents, Howtos, Architectures, News & Best Practices:

Is it a full list of NetApp’s efforts towards containerization?

I bet it is far from complete. Post your thoughts and links to documents, news, howtos, architectures and best practice guides in the comments below to expand this list if I missed something!

New NetApp platform for ONTAP 9.6 (Part 3) AFF C190

NetApp introduced the C190 for small business, following the new A320 platform with ONTAP 9.6.

C190

This new All-Flash system has:

  • Fixed format, with no ability to connect additional disk shelves:
    • Only 960 GB SSD drives, installed only in the controller chassis
    • Only 4 configs: with 8, 12, 18 or 24 drives
      • Effective capacity respectively: 13, 24, 40 or 55 TB
    • Supports ONTAP 9.6 GA and higher
    • C190 is built with the same chassis as the A220, so per HA pair you’ll get:
      • 4x 10Gbps SFP cluster ports
      • 8x UTA ports (10 Gbps or FC 16Gbps)
      • There is a model with 10GBASE-T ports instead of UTA & cluster interconnect ports (12 ports total). Obviously BASE-T ports do not support FCP protocol
  • There will be no more “useful capacity”, NetApp will provide only “Effective capacity”:
    • With dedup, compression, compaction and 24 x 960 GB drives the system provides ~50 TiB effective capacity. 50 TiB is a pretty reliable, conservative number because it assumes even less than ~3:1 data reduction
    • Deduplication snapshot sharing functionality introduced in previous ONTAP versions allows gaining even better efficiency
    • And of course FabricPool tiering can help to save much space
  • C190 comes with Flash bundle which adds to Basic software:
    • SnapMirror/SnapVault for replication
    • SnapRestore for fast restoration from snapshots
    • SnapCenter for App integration with storage snapshots
    • FlexClone for thin cloning.

A fixed configuration with built-in drives is, I personally think, an excellent idea in general, taking into account the wide variety of SSD drive capacities nowadays, with even more to come. Is this the future format for all storage systems with flash? Though the C190 supports only 960 GB SSD drives, and the new mid-range A320 system can have more than one disk shelf.

A fixed configuration allows manufacturing & delivering the systems to clients faster and reduces costs. The C190 will cost sub-$25k in the minimum config, according to NetApp.

Also, in my opinion, the C190 can more or less cover the market niche left after the end-of-sale (EOS) announcement for the hardware and virtual AltaVault (AVA, recently known under the new name “Cloud Backup”) appliances, thanks to FabricPool tiering. Cloud Backup appliances are still available through the AWS & Azure marketplaces. This is especially the case now that FabricPool in ONTAP 9.6 no longer has a hard-coded ratio for how much data the system can store in the cloud compared to the hot tier & allows write-through with the “All” policy.

It turns out that information about “consumed” storage capacity is easier to digest in the form of effective capacity. All this useful capacity, garbage collection and other storage overheads, RAIDs and system reserves are too complicated, so hey, why not? I bet the idea of showing only effective capacity was influenced by vendors like Pure, which have very effective marketing for sure.

Cons

  • MetroCluster over IP is not supported on the C190, while the entry-level A220 & FAS2750 systems support MCC-IP with ONTAP 9.6
  • The C190 requires ONTAP 9.6, and ONTAP 9.6 does not support 7MTT.

Read more

Disclaimer

All product names, logos, and brands are the property of their respective owners. All company, product, and service names used in this website are for identification purposes only. No one is sponsoring this article.

How does the ONTAP cluster work? (Part 4)

This article is part of the series How does the ONTAP cluster work? Also, the previous series of articles, How ONTAP Memory work, will be a good addition to this one.

Data protocols

ONTAP is considered a unified storage system, meaning it supports both block (FC, FCoE, NVMeoF and iSCSI) & file (NFS, pNFS, CIFS/SMB) protocols for its clients. SDS versions of ONTAP (ONTAP Select & Cloud Volumes ONTAP) do not support the FC, FCoE or FC-NVMe protocols because of their software-defined nature.

Physical ports, VLANs and ifgroups are all considered “ports” in ONTAP. Each port can host multiple LIFs. If a port has at least one VLAN, then LIFs can be created only on the VLANs, no longer on the port itself. If a port is part of an ifgroup, LIFs can be created only on top of the ifgroup, not on the port itself. If a port is part of an ifgroup on top of which one or more VLANs are created, LIFs can be created only on top of those VLANs, not on the ifgroup or physical port.
It is a very common configuration in which two ports form an ifgroup and a few VLANs are created on it for the protocols. Here are two very popular examples; in these examples the storage system is configured with:

  • SMB for PC users (MTU 1500)
    • Popular example: User home directories
  • SMB for Windows Servers (MTU 9000)
    • Use case: MS SQL & Hyper-V
  • NFS for VMware & Linux Servers (MTU 9000)
    • Use-case: VMware NFS Datastore
  • iSCSI for VMware, Linux & Windows Servers (MTU 9000)
    • Block device for OS boot and some other configs, like Oracle ASM

Example 1 is for customers who want to use all of the Ethernet-based protocols but have only 2 ports per node. Example 2, with dedicated Ethernet ports for iSCSI traffic, is preferable if a storage node has a sufficient number of ports.

Notice that Example 1 has two iSCSI VLANs, A & B, which are not strictly necessary; I would use two in order to increase the number of connections over the ifgrp and improve load balancing, but it is up to the storage & network admins. Normally, iSCSI-A & iSCSI-B would each use a separate IP subnet. See the ifgroup section for the network load-balancing explanation.

VLANs

VLANs in ONTAP allow separating IP networks from one another and are often used with the NFS, CIFS and iSCSI protocols, though multiple IP addresses are allowed on a single port or VLAN. A VLAN can be added on a physical Ethernet port or on an ifgroup.
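As a quick sketch (the node and port names are hypothetical), a VLAN with ID 100 on a physical port e0c would be created like this:

NCluster::> network port vlan create -node NCluster-01 -vlan-name e0c-100

After that, LIFs are created on the VLAN port e0c-100 rather than on e0c itself.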

ifgroup

An interface group is a collection of a few ports (typically with the same speed). Ports in a single ifgroup must be located on a single node. An ifgroup provides network redundancy for Ethernet. Each ifgroup is perceived and used as a physical port. One of the most notable & most used types of ifgroup is dynamic multimode. Dynamic multimode enables the LACP protocol on the ports, so in case one port in a group dies, another will be used, fully transparently to upper-level protocols like NFS, SMB and iSCSI. Most notable in LACP is the ability to distribute data across links in an attempt to load all the links in the ifgroup equally.
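A dynamic multimode ifgroup could be created along these lines (the node, ifgroup and port names are hypothetical):

NCluster::> network port ifgrp create -node NCluster-01 -ifgrp a0a -distr-func ip -mode multimode_lacp
NCluster::> network port ifgrp add-port -node NCluster-01 -ifgrp a0a -port e0c
NCluster::> network port ifgrp add-port -node NCluster-01 -ifgrp a0a -port e0d

The -distr-func parameter selects the load-distribution formula (ip, mac, port or sequential).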

Port names starting with the letter “e” denote physical ports: “e”, then the number of the PCIe bus (0 means on-board ports), then another letter, starting with “a”, which represents the index of the port on that PCIe bus. Ifgroup (virtual aggregated) port names start with “a” (it can be any letter), then a number, then another letter, for example a1a, to keep the same naming format as physical ports; in this case the number and the ending letter no longer represent anything in particular (i.e. a PCIe bus or port position on the bus) and are used only as an index to distinguish the ifgroup from other ports.

Unfortunately, LACP load distribution is far from perfect: the more hosts in the network communicate with an ifgroup, the higher the probability of equally distributing traffic across the network ports. LACP uses a single static formula based on the source and destination information of a network packet; there is no intelligent analysis and decision-making as, for example, in SAN protocols, and there is no feedback from the lower Ethernet level to the upper-level protocols. Also, LACP is often used in conjunction with Multi-Chassis EtherChannel (vPC is another commercial name for it) to distribute links across a few switches and provide switch redundancy, which requires some additional effort in switch configuration and from the switches themselves. SAN protocols, on the other hand, do not need switches to provide this type of redundancy because it is done at the upper protocol level. This is the primary reason why the SMB and NFS protocols developed their own extensions: to distribute load across links more intelligently and equally, and to be aware of network path status.

One day these protocols will fully remove the necessity for Ethernet LACP & Multi-Chassis EtherChannel: pNFS, NFS Session Trunking (NFS Multipathing), SMB Multichannel and SMB Continuous Availability. Until then, we are going to use ifgroups with configurations which do not support those protocols.

NFS

NFS was the first protocol available in ONTAP. The latest versions of ONTAP 9 support NFSv2, NFSv3, NFSv4 (4.0 and 4.1) and pNFS. Starting with 9.5, ONTAP supports 4-byte UTF-8 sequences in names of files and directories.
A network switch is technically not required for NFS traffic and a direct host connection is possible, but a network switch is used in practically all configurations to provide an additional level of network redundancy and make it easier to add new hosts when needed.
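Enabling NFS on an SVM could look like this (the SVM name and the chosen set of versions are hypothetical):

NCluster::> vserver nfs create -vserver SVM01 -v3 enabled -v4.1 enabled -v4.1-pnfs enabled
NCluster::> vserver nfs show -vserver SVM01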

SMB

ONTAP supports SMB 2.0 and higher, up to SMB 3.1. Starting with ONTAP 9.4, SMB Multichannel, which provides functionality similar to multipathing in SAN protocols, is supported. Starting with ONTAP 8.2, the SMB protocol supports Continuous Availability (CA) with SMB 3.0 for Microsoft Hyper-V and SQL Server. SMB is a session-based protocol and by default does not tolerate session breaks, so SMB CA helps tolerate unexpected session loss, for example in case a network port goes down. ONTAP supports SMB encryption, which is also known as sealing. Hardware-accelerated AES (Intel AES-NI) encryption is supported with SMB 3.0 and later. Starting with ONTAP 9.6, FlexGroup volumes support SMB CA and thus MS SQL & Hyper-V on FlexGroup.
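SMB Multichannel mentioned above is enabled per SVM; a sketch (the option name may vary between ONTAP releases, so verify it in your version):

NCluster::> vserver cifs options modify -vserver SVM01 -is-multichannel-enabled true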
A network switch is technically not required for SMB traffic and a direct host connection is possible, but a network switch is used in practically all configurations to provide an additional level of network redundancy and make it easier to add new hosts when needed.
Here is the hierarchy visualization & logical representation of NAS protocols in ONTAP cluster and corresponding commands (highlighted in grey):

Each NAS LIF can accept NFS & SMB traffic, but usually engineers tend to put them on separate LIFs.

FCP

ONTAP on physical appliances supports the SCSI-based FCoE as well as FC protocols, depending on the HBA port speed. Both FC & FCoE are known under the single umbrella name FCP. An igroup is a collection of WWPN addresses of initiator hosts which are allowed to access storage LUNs. A WWPN address is an interface on an FC or FCoE port. Typically you need to add all the initiator host ports to the igroup to allow multipathing to work properly. ONTAP uses N_Port ID Virtualization (NPIV) for FC data and therefore requires an FC network switch (typically at least two) which also supports NPIV; direct FC connections from host to storage are not supported.
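An igroup with LUN mapping could be sketched like this (the WWPNs, igroup, volume and LUN names are hypothetical):

NCluster::> lun igroup create -vserver SVM01 -igroup esx01 -protocol fcp -ostype vmware -initiator 20:00:00:25:b5:00:00:0a,20:00:00:25:b5:00:00:0b
NCluster::> lun map -vserver SVM01 -path /vol/vol1/lun1 -igroup esx01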
Here is the hierarchy visualization & logical representation of FCP protocols in ONTAP cluster and corresponding commands (highlighted in grey):

Read more about ONTAP Zoning here.

iSCSI

iSCSI is another SCSI-based protocol, encapsulated in IP/Ethernet transport. NetApp also supports the Data Center Bridging (DCB) protocol on some models, depending on the Ethernet port chips.
A network switch is technically not required for iSCSI traffic and a direct host connection is possible if the number of storage ports allows it, though a network switch is recommended with iSCSI to make it easier to add new hosts when needed.
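Enabling iSCSI on an SVM and adding an iSCSI data LIF might look like this (the LIF name, VLAN port and addresses are hypothetical):

NCluster::> vserver iscsi create -vserver SVM01
NCluster::> network interface create -vserver SVM01 -lif SVM01-iscsi-a -role data -data-protocol iscsi -home-node NCluster-01 -home-port a0a-101 -address 192.168.101.10 -netmask 255.255.255.0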
Here is the hierarchy visualization & logical representation of iSCSI protocol in ONTAP cluster and corresponding commands (highlighted in grey):

NVMeoF

NVMe over Fabrics (NVMeoF) refers to the ability to use the NVMe protocol over existing network infrastructure like Ethernet (converged or traditional), TCP, Fibre Channel or InfiniBand for transport, as opposed to running NVMe over PCIe without additional encapsulation. NVMe is a SAN block data protocol; in contrast, with plain NVMe no extra transport layer is used and devices are connected directly to the PCIe bus.

Starting with ONTAP 9.5, the NVMe ANA protocol is supported, which provides, similarly to ALUA, multipathing functionality for NVMe. ANA for NVMe is currently supported only with SUSE Enterprise Linux 15, VMware 6.7 and Windows Server 2012/2016. FC-NVMe without ANA is supported with SUSE Enterprise Linux 12 SP3 and Red Hat Enterprise Linux 7.6.

FC-NVMe

NVMeoF is supported only on All-Flash FAS systems, though not on the entry-level A200 and A220 systems due to the lack of 32 Gb FC ports. A subsystem in NVMe is used for the same purpose as igroups: it lists the initiator host NQN addresses which are allowed to access a namespace. A namespace in this context is very similar to a LUN in FCP or iSCSI. Do not mix up the namespace term in NVMe with the single namespace of an ONTAP cluster.
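An NVMe subsystem with a host NQN and a mapped namespace could be sketched like this (the subsystem name, host NQN and namespace path are hypothetical):

NCluster::> vserver nvme subsystem create -vserver SVM01 -subsystem linux01 -ostype linux
NCluster::> vserver nvme subsystem host add -vserver SVM01 -subsystem linux01 -host-nqn nqn.2014-08.com.example:host01
NCluster::> vserver nvme subsystem map add -vserver SVM01 -subsystem linux01 -path /vol/vol1/ns1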
Here is the hierarchy visualization & logical representation of FC-NVMe protocol in ONTAP cluster and corresponding commands (highlighted in grey):

Spirit of this article

This article explains the principles, architecture, NetApp’s unique approaches and maybe even the spirit of ONTAP clusterization. Each configuration, model and appliance has its nuances, which are left out of the scope of this article. I’ve tried to give the general direction of the ideas behind NetApp’s innovative technologies, while (trying) not to put too many details into the equation, to keep it simpler but not lose important system architecture details. This is a far from complete story about ONTAP; for example, to make complex information easier to consume, I didn’t mention the 7-Mode Transition Tool (7MTT) required for the transition to Clustered ONTAP, nor did I go into a detailed explanation of WAFL. Therefore some things might be a bit different in your case.

Summary

Clustered ONTAP was first introduced around 2010 in version 8. After almost 10 years of hardening, ONTAP (cluster) has become a mature, highly scalable and flexible solution with not just unique, unprecedented-on-the-market functionality in a single product but also impressive performance thanks to its WAFL versatility & clusterization capabilities.

Continue to read

How ONTAP Memory work

Zoning for cluster storage in pictures

Disclaimer

Please note in this article I described my own understanding of the internal organization of ONTAP systems. Therefore, this information might be either outdated, or I simply might be wrong in some aspects and details. I will greatly appreciate any of your contribution to make this article better, please leave any of your ideas and suggestions about this topic in the comments below.

All product names, logos, and brands are property of their respective owners. All company, product and service names used in this website are for identification purposes only.