
Why is GCP Anthos on NetApp HCI a big deal?

Google Cloud & NetApp announced a new validated design with GKE running on NetApp HCI on-premises.

Read what you might have missed from NetApp announcements during Aug-Nov 2019, compressed into a single article.

Kubernetes was originally designed by Google, and Google is one of the main contributors to the container ecosystem, so GKE is arguably the most advanced, mature & stable Kubernetes offering on the market. If you have tried GKE in GCP alongside competing solutions, you know what I'm talking about.

Containers on-premises are difficult when you want to build an enterprise solution for new containerized applications, for a number of reasons: installation, configuration, management & updates of your core infrastructure components; persistent & predictable storage performance; and the fact that DevOps teams do not want to deal with infrastructure, they just want to consume it. These are the key problems to solve, and NetApp aims to do exactly that.

NVA-1141: NetApp HCI with Anthos. NVA Design/Deployment

Here is why Google Anthos on NetApp HCI is an important announcement:

  • Hybrid cloud. In line with its Data Fabric vision, NetApp continues to bring the hybrid cloud experience to its users. With Anthos on HCI, your on-prem data center becomes just another cloud zone. Software updates for GKE & Anthos are on Google's shoulders; you just consume the service. And not only can NetApp HCI maintenance such as software & firmware updates be bought as a service, but capacity can be too: with NetApp Keystone you pay as you go and consume infrastructure as a service, OPEX instead of CAPEX
  • NetApp Kubernetes Service (NKS). In addition to NKS, which allows the deployment & management of Kubernetes clusters on-premises & in the cloud, Anthos provides the ability to deploy clusters on-prem that are fully integrated with Google Cloud, including the ability to manage them from the GKE console. NKS comes bundled with Istio, Helm & many other components for your microservices, which takes DevOps to the next level. Cloud infrastructure has reached your on-premises data center
  • Storage automation. NetApp Trident is arguably the most advanced storage driver for containers on the market so far, bringing automation, an API and persistent storage to the container world, and Trident with NKS & Anthos makes total sense. Speaking of automation, NetApp's Ansible playbooks are also the most advanced on the market at the moment, with 106 published & supported modules, and SolidFire itself is known as fully API-driven storage, so you can work with it solely through its RESTful API (see the sketch after this list)
  • Simple, predictable and performant enterprise storage with QoS, whether on-prem or in the cloud: use Trident and Ansible with NetApp HCI on-prem, or with CVO or CVS in AWS, Azure or GCP; moreover, you can replicate your data to the cloud for DR or test/dev
  • NetApp HCI vs other HCI solutions. One of the most notable HCI competitors is Nutanix, so I want to use it as an example. Nutanix's storage architecture with local disk drives is certainly interesting but not unique, and it has some architectural disadvantages, scalability being one of them. Local disk drives are a blessing for tiny solutions and not such a good idea when you need to scale up: the cheapness of a small solution with commodity HW & local drives can turn into a curse at scale. That is why Nutanix eventually developed dedicated storage nodes connected over the network to overcome the issue, stepping into the very competitive land of network storage systems; and since dedicated storage nodes connected over the network are nothing new or unique, there are plenty of capable & scalable network storage systems out there. The most exciting part of Nutanix is therefore its ecosystem & simplicity, not its storage architecture. Now, thanks to Anthos, NetApp HCI gets into a unique position with scalability, ecosystem, simplicity, hybrid cloud & functionality for microservices that some other great competitors like Nutanix have not reached yet, and that gives NetApp momentum in the HCI market
  • Performance. Don't forget about NetApp's MAX Data software, which already works with VMware & SolidFire; it will take NetApp only one last step to bring DCPMM like Intel Optane to NetApp HCI. Note that NetApp just announced at Insight 2019 a compute node with Intel Cascade Lake CPUs, which are required for Optane. MAX Data is not available on NetApp HCI yet, but we can clearly see NetApp putting everything together to make it happen. Persistent memory in the form of a file system on a Linux host, with tiering of cold blocks to "slow" SSD storage, could put NetApp on top of all the competitors in terms of performance
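
Since SolidFire underneath NetApp HCI is fully API-driven, everything the UI does can be scripted. Here is a minimal Python sketch against the Element JSON-RPC-over-HTTPS API; the cluster address, credentials and API version are placeholders, and the method and field names are taken from the Element API documentation as I recall it, so treat them as assumptions:

    import requests

    MVIP = "https://sf-cluster.example.com"  # assumed management virtual IP
    AUTH = ("admin", "password")             # assumed credentials

    def element_call(method, params=None):
        """Invoke a SolidFire Element API method (JSON-RPC over HTTPS)."""
        payload = {"method": method, "params": params or {}, "id": 1}
        resp = requests.post(f"{MVIP}/json-rpc/11.0", json=payload,
                             auth=AUTH, verify=False, timeout=30)
        resp.raise_for_status()
        return resp.json()["result"]

    # List active volumes with their QoS settings; the same API drives everything.
    for vol in element_call("ListActiveVolumes")["volumes"]:
        print(vol["name"], vol["qos"]["minIOPS"], vol["qos"]["maxIOPS"])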

HCI Performance

Speaking of which, take a look at these two performance tests:

  1. IOmark-VM-HC: 5 storage & 18 compute nodes using data stores & VVols
  2. IOmark-VDI-HC: 5 storage nodes & 12 compute nodes with only data stores

Total 1,440 VMs with 3,200 VDI desktops.

Notice how asymmetrical the number of storage nodes is compared to the number of compute nodes. In "real" HCI architectures with local drives you would have to buy more equipment, while with NetApp HCI you can choose how much storage and how much compute you need and scale them separately. Dedup & compression were enabled in the tests.

Disclaimer

This article is for information purposes only; it may contain errors and personal opinions. This text is neither authorized nor sponsored by NetApp. If you have spotted an error, please let me know.

What you might have missed about NetApp from Aug-Nov 2019, including Insight in Las Vegas: Contents

Some competitors might say NetApp does not innovate anymore. Well, read this article and decide for yourself whether that is true, or whether it is just yet another piece of shameless marketing.

Part 1

E-Series

Performance

NVMe in EF600

E-Series Performance Analyzer

New TR docs about EF & DB

Part 2

MAX Data 1.5

Previously in 1.4

Part 3

NetApp & Rubrik

NetApp & Veeam

Part 4

Active IQ 2.0

Active IQ Unified Manager 9.7

Part 5

AFF & FAS

AFF & NVMe

ONTAP AI with containers

ASA

ONTAP

ONTAP Select

ONTAP SDS in embedded non-x86 systems for edge devices

FlexGroup

SnapMirror Sync (SM-S)

NDAS

SnapCenter 4.2

New with VMware & VVOLs:

Virtual Storage Console (VSC)

FlexCache

MCC

MetroCluster IP

MCC-FC

ONTAP Mediator instead of tie breaker

Part 6

StorageGRID v11.3

Part 7

Keystone

Complete Digital Advisors as part of Support Edge:

Part 8

Lab on demand

Lab on demand for Customers

There are more labs for current NetApp customers

Part 9

NAbox

Harvest 1.6

Part 10

SaaS Backup

SaaS backup for Salesforce

Cloud Volumes

Cloud Volumes On-Premises

Cloud Compliance

Cloud Insights

Cloud Secure

NetApp Kubernetes Services (NKS)

HCI

Part 11

New Solutions

Part 12

Containers

NetApp Trident

Ansible

Part 13

Technical Support

How to collect logs before opening a support ticket

How to measure storage performance

Gartner Magic Quadrant for Primary Array

Will NetApp adopt QLC flash in 2020?

Continue to read

All announcements from Aug-Nov 2019

Am I missing something?

Please let me know in the comments below!

If you have spotted an error, please let me know personally 😉

Disclaimer

Opinions & observations are my own, and not official NetApp information. This post contains forward-looking statements and may contain errors. If you have spotted an error, please let me know.

What you might have missed about NetApp from Aug-Nov 2019, including Insight in Las Vegas

Some competitors might say NetApp does not innovate anymore. Well, read this article and decide for yourself whether that is true, or whether it is just yet another piece of shameless marketing.

E-Series

Performance

End-to-end NVMe with the EF600 – more IOPS (2x more than the EF570), lower latency:

NVMe in EF600

  • 100Gb NVMe/RoCE
  • 100Gb NVMe/InfiniBand
  • 32Gb NVMe/FC

E-Series Performance Analyzer

An automated installation and deployment of Grafana, NetApp E-Series Web Services, and supporting software for performance monitoring of NetApp E-Series storage systems. NetApp intends this project to let you quickly and simply deploy an instance of its performance analyzer for monitoring your E-Series storage systems, and it incorporates various open-source components and tools to do so. While it is primarily intended as a reference implementation for using Grafana to visualize the performance of your E-Series systems, it can also be customized and extended based on your individual needs.

https://github.com/NetApp/eseries-perf-analyzer
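
If you would rather pull the raw counters yourself, the analyzer builds on the E-Series Web Services REST API, which you can query directly. A hedged Python sketch: the proxy address and credentials are placeholders, and the statistics endpoint and field names are assumptions based on the /devmgr/v2 API layout, so check them against your Web Services API reference:

    import requests

    PROXY = "https://ws.example.com:8443"  # assumed Web Services (proxy) address
    session = requests.Session()
    session.auth = ("admin", "admin")      # assumed credentials
    session.verify = False                 # lab only; use real certificates

    # Enumerate managed arrays, then pull per-volume performance counters.
    for system in session.get(f"{PROXY}/devmgr/v2/storage-systems").json():
        stats = session.get(
            f"{PROXY}/devmgr/v2/storage-systems/{system['id']}"
            "/analysed-volume-statistics"  # endpoint name is an assumption
        ).json()
        for vol in stats:
            print(system.get("name"), vol.get("volumeName"), vol.get("combinedIOps"))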

New TR docs about EF & DB

MAX Data 1.5

  • Support for ONTAP 9.6 GA and later releases
  • Support for FAS storage systems and ONTAP Select systems running ONTAP 9.7, in addition to AFF storage systems
  • Resizing application memory allocation
  • Support for Red Hat Enterprise Linux 7.7
  • Support for local snapshots on server-only systems
  • Significant performance improvements, with more IOPS at lower latency: 5.4M 4KB read IOPS @ 12.5 µs latency

Previously in 1.4

With version 1.4 you can use MAX Data without AFF: tiering now works between PMEM and the SSDs installed in the server.

Some leaked info suggests that HCI will support MAX Data at some point.

Considering the new H615C compute node with Cascade Lake CPUs, which are, by the way, required for Optane memory, it looks like NetApp is putting it all together to make that happen.

NetApp & Rubrik

NetApp & Rubrik announced a collaboration. First, StorageGRID can be a target for Rubrik archives. Second, Rubrik now supports the NetApp SnapDiff API, an ONTAP technology which compares two snapshots and returns the list of changed files, so Rubrik can copy only the changed files. Rubrik is not the first to work with the SnapDiff API (Catalogic, Commvault, IBM TSM and Veritas NetBackup work with it as well), but Rubrik is the first to back the data up to a public cloud. It will be available in Rubrik Cloud Data Management (CDM) v5.2 in 2020.

NetApp & Veeam

Veeam Availability Orchestrator v3 (VAO) provides a new level of NetApp integration for data protection:

  • FULL recovery orchestration for NetApp ONTAP Snapshots
  • Automated testing and reports that have become essential to your DR strategies
  • TR-4777: Veeam & StorageGRID

Active IQ 2.0

  • AI algorithms, trained on the collective wisdom of the customer base, provide better insights about risks
  • APIs with Active IQ
  • Now you can not just monitor but also take action and fix risks with Active IQ (this happens through Unified Manager)
  • It also decreases "noise": Active IQ tries to find the root of a problem and solve first those problems which will fix the majority of your issues. Say you have 100 risks; it might turn out that you can fix 70% of them with just a few fixes
  • OneCollect is available in a container, and now sends info to Active IQ

AFF & FAS

AFF A400, FAS8300, FAS8700. All of these systems are basically the same platform, but with different amounts of memory & numbers of CPU cores. The FAS systems also have an onboard FlashCache module for read caching. All three require ONTAP 9.7 RC1 and run the latest Intel Cascade Lake processors, and that gives us hope we might see Optane as a cache, as NetApp showed as part of its vision at Insight 2017:

  • Read more about the new AFF A400 system on Andre Schmitz's blog
  • AFF with HW-assisted dedup & compression on a Pensando network card used for cluster interconnects
  • RoCE as a data protocol is available as a proof-of-concept lab for testing, but not in 9.7
  • The AFF A400 now supports the NS224 disk shelf, alongside the A320

AFF & NVMe

ONTAP 9.6 is qualified with Red Hat, SUSE and Oracle Linux for FC-NVMe. Storage path failover (ANA) is supported with SUSE Linux 15 and Oracle Linux 7.6 and 7.7, but (still) not with Red Hat Linux, because of Red Hat's slowness.

ONTAP AI with containers

Execute AI workloads at scale with Trident and Kubernetes on the "ONTAP AI" converged-infrastructure solution with NVIDIA DGX servers, networking & NetApp AFF NAS storage.

ASA

ASA uses the same AFF hardware but provides only SAN protocols. ASA systems run ONTAP with the same architecture, but with symmetric active/active network access to the block devices. Watch a short video about ASA on Twitter. The hardware for ASA is identical to the standard AFF A700 and AFF A220 systems, and the software bundle includes the Flash/Premium bundle licenses minus the NAS-only features.

ONTAP

There is much new stuff in ONTAP 9.7:

  • User Interface Improvements
    • Simplified UI allows faster creation of LUNs, NAS shares & data protection
    • Hardware health
    • Network connectivity diagram
    • Performance history (previously you only saw performance data collected since you opened the page)
  • FabricPool Mirrors: one aggregate connected to 2 buckets for added resiliency (previously only one bucket per aggregate)
  • CVO now supports Cloud Backup Service in AWS
  • Synchronous mirroring (SM-S) with MetroCluster (MCC)
  • Map NFS clients to specific volumes and IP addresses
  • XCP for rapid file delete
  • TCP Performance Enhancements in ONTAP 9.6
  • Watch a video about ONTAP 9.7 with Skip Shapiro

ONTAP Select

Networking best practices for ONTAP Select

ONTAP SDS in embedded non-x86 systems for edge devices

A slimmed-down version of ONTAP Select called Photon is used on edge devices. See the Blocks & Files blog post.

FlexGroup

  • Fast in-place conversion of a FlexVol to a FlexGroup
  • NFSv4, 4.1 & pNFS support, with NFS security and locking
    • With pNFS, the data I/O path keeps locality in the cluster
  • NDMP with FlexGroups: an entire FG, a directory or a qtree
    • Dumps & restores between FGs, or between an FG & FlexVols
  • FlexGroup is now supported as an origin volume for FlexCache
  • FlexClone support with FlexGroup
  • VAAI and copy-offload support, though FlexGroups are still not recommended for VMs; we can clearly see NetApp moving in that direction

SnapMirror Sync (SM-S)

  • SM-S now replicates application-created snapshots & FlexClones; scheduled snapshots are not replicated though
  • NVMe/FC protocol and namespace support added

NDAS

Since NDAS replicates data together with metadata (unlike FabricPool), you can now access the data directly out of the bucket

  • StorageGRID added as an object store for NDAS, besides the existing AWS S3
  • NetApp is looking into the ability to connect the NetApp In-Place Analytics Module to an object storage bucket with data backed up by NDAS from ONTAP NAS, so you can run analytics on your data and on how it evolved over time (i.e., snapshots)
  • The NetApp Interoperability Matrix Tool now lists supported configurations for NetApp Data Availability Services
  • AutoSupport is now enabled by default to send NDAS diagnostic records to NetApp Support
  • 200 TB of managed capacity with 100 million files is now supported
  • ONTAP 9.6P1 & ONTAP 9.7 and later releases are now supported on the target cluster
  • Bring-your-own-license (BYOL) is now available for new and upgraded NDAS instances
  • The NDAS PreChecker tool is now available to assess your environment’s readiness for deploying NDAS
  • New AWS regions are now supported for NDAS:
    • EU (Ireland)
    • EU (Frankfurt)
    • Asia Pacific (Singapore)
    • Asia Pacific (Sydney)
    • Asia Pacific (Hong Kong)

https://docs.netapp.com/us-en/netapp-data-availability-services/reference_whats_new.html#release-1-1-1-7-november-2019

SnapCenter 4.2

  • Installation in a few clicks with a Linux virtual appliance
  • Simplified addition of new hosts & RBAC
  • Cluster management LIFs (before, you had to add each individual SVM management LIF)
  • Support for the vSphere HTML5 client
  • Automatic configuration check for servers & SC plug-ins
  • Oracle 18c
  • The SnapCenter Plug-in for Microsoft Windows now supports disk creation on Windows Server 2019
  • Video about SnapCenter with Steven Cortez

New with VMware & VVOLs:

  • Adaptive QoS with VVOLs is disabled & no longer recommended as of VASA 9.6P1
  • Naming issues with VVOLs on NetApp fixed in 9.6P1
  • Support for ESXi 6.7 U2
  • In ONTAP 9.7 System Manager you can create an RBAC user role for VSC
  • IOPS, latency & throughput in VSC 9.6 for VVOLs on SAN (statistics come directly from ONTAP)
  • REST API for the VASA+VSC+SRM appliance (install & config, plus storage provisioning and management)
  • A wizard in VSC 9.6 creates VVOL endpoints

SRM support with VVOLs is coming at some point in time.

Virtual Storage Console (VSC)

  • Availability of VVol reports
  • Simplified provisioning of data stores
  • Support for configuring throughput floors (QoS Min) for storage capability profiles
  • Support for REST APIs
  • Video with Karl Konnerth about VSC

Active IQ Unified Manager 9.7

  • The most important improvement: actions with a simple "Fix it" button
  • New cluster security section in the dashboard
  • Integration with NetApp Service Level Manager, which is now free and included in ONTAP

FlexCache

  • Is now free
  • Supported with on-prem ONTAP systems & Cloud Volumes ONTAP
  • Thin provisioning and data compaction
  • Cache volume migration
  • ONTAP auditing of cache volume reads
  • Antivirus scanning of origin volumes
  • Any-to-Any Caching between all ONTAP platforms on-prem & in the cloud, including MCC

MCC

  • FabricPool aggregate support for MCC FC and IP
  • FlexCache support for MCC FC and IP

MetroCluster IP

Open networking: an MCC IP 4-node cluster can now run without dedicated MetroCluster switches, using the customer's own switches instead. The switches must comply with NetApp requirements and be supported by the customer or the switch vendor.

Supported platforms:

  • AFF A800, A700, A320, A300, and even low-end system A220
  • FAS 9000, 8200, 2750

MCC-FC

Supported platforms:

  • AFF A700, A400, A300
  • FAS 9000, 8700, 8300, 8200

ONTAP Mediator instead of tie breaker

A new Mediator (instead of the Tiebreaker) for MCC & ONTAP SDS. The Mediator is a passive appliance managed from ONTAP: all the logic lives in ONTAP, whereas previously the switchover logic was on the Tiebreaker's side.

StorageGRID v11.3

Cloud Storage Pools, combined with StorageGRID's industry-leading data placement policy engine (ILM policies), enable your business to automatically and intelligently move only the data that needs to move to one or more clouds. Cloud Storage Pools can be based on buckets, applications, or tenants, or operate at the individual object level (based on metadata keys and/or values).

https://blog.netapp.com/introducing-azure-blob-support-tier-to-more-clouds-with-storagegrid/
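
Because StorageGRID speaks S3, metadata-driven placement is easy to exercise from code. A minimal boto3 sketch; the endpoint, credentials, bucket name and the metadata key a hypothetical ILM rule would match are all assumptions:

    import boto3

    # StorageGRID exposes an S3-compatible endpoint (address & keys are placeholders).
    s3 = boto3.client(
        "s3",
        endpoint_url="https://grid.example.com:8082",
        aws_access_key_id="TENANT_ACCESS_KEY",
        aws_secret_access_key="TENANT_SECRET_KEY",
    )

    # User metadata travels with the object; a hypothetical ILM rule matching
    # x-amz-meta-tier=cloud could place this object into a Cloud Storage Pool.
    with open("dump.tgz", "rb") as body:
        s3.put_object(Bucket="backups", Key="db/dump-2019-11-01.tgz",
                      Body=body, Metadata={"tier": "cloud"})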

Keystone: Stuff as a Service

NetApp Launches On-Prem Data Center Storage as a Service:

  • Cloud-like pay-as-you-go Storage
  • Shared risk with the customer
  • Guarantees
  • Performance
  • Availability & Efficiency
  • With real Balance sheet

Complete Digital Advisors as part of Support Edge:

  • AI powered Insights & Smart Health Checks
  • Personalized Support as part of Support Edge
  • Predictable Support Pricing

Lab on demand

New labs for NetApp partners: NKS on NetApp HCI, MAX Data, VSC 9.6 with vSphere 6.7, Active IQ Unified Manager 9.6

Lab on demand for Customers

Even if you are not a NetApp customer yet, you can test NetApp products:

  • Storage Tiering with FabricPool
  • Enterprise Application Protection in the Data Fabric with ONTAP
  • Understanding Trident
  • Others

https://www.netapp.com/us/try-and-buy/test-drive/index.aspx

There are more labs for current NetApp customers

A hands-on lab is in beta testing at the moment:

  • Data Protection
  • Enterprise Database Backups
  • FabricPool
  • ONTAP NAS Technologies
  • ONTAP SAN Technologies
  • Trident plugin for containers & Kubernetes

https://mysupport-beta.netapp.com/ui/#/tools

NAbox

With Grafana you can visualize performance, capacity & other metrics from ONTAP & Unified Manager, stored in Graphite. Download v2.5.1 (2019-09-08).

Harvest 1.6:

NetApp Harvest collects data from ONTAP & Unified Manager and sends it to Graphite; Harvest is part of NAbox (a query sketch follows the list below).

  • Harvest Extension Manager
    • Extension for NFS connections
    • Extension for SnapMirror replications
    • Extension templates for Perl, Python, Bash
  • FlexGroup capacity metrics
  • AutoSupport logging with Harvest statistics
  • A non-critical bug fixed in the cdot-iscsi-lif plugin
  • Harvest 1.6
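
Anything Harvest writes into Graphite can be pulled back out through Graphite's render API, which is handy for ad-hoc scripting next to the Grafana dashboards. A small Python sketch; the NAbox address and the metric path (based on Harvest's default netapp.* prefix) are assumptions:

    import requests

    GRAPHITE = "http://nabox.example.com"  # assumed NAbox/Graphite address

    # Graphite's render API returns JSON datapoints for any metric Harvest wrote.
    resp = requests.get(f"{GRAPHITE}/render", params={
        "target": "netapp.perf.*.*.avg_latency",  # metric path is an assumption
        "from": "-1h",
        "format": "json",
    }, timeout=30)

    for series in resp.json():
        values = [v for v, _ in series["datapoints"] if v is not None]
        if values:
            print(series["target"], "max over the last hour:", max(values))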

SaaS Backup

SaaS backup for Salesforce

Now available in the Salesforce marketplace

Cloud Volumes

  • Azure NetApp Files certified for AzureGov region and FedRAMP
  • SAP HANA now certified on ANF
  • Kerberos and ACL for Azure NetApp Files (ANF) and Cloud Volumes Service (CVS) with NFS v4.1
  • Cloud Manager supports NKS & Cloud Compliance
  • Cloud Manager to Deploy Trident and Cloud Volumes ONTAP
  • Cloud Volumes ONTAP (CVO) in GCP
  • Cloud Volumes Services in GCP
  • Azure VMware Solution with Azure NetApp Files (ANF)
  • VMware Cloud on AWS with CVS & CVO

Cloud Volumes On-Premises

  • Cloud Volumes Service On-Premises
  • Cloud Volumes ONTAP on HCI

Cloud Compliance

Cloud Compliance is a file classification tool for industry compliance and privacy regulations across clouds. The license is free; you pay only for the VM running at your cloud provider (a toy pattern-matching sketch follows the list below).

  • File classification & privacy risk tool for Cloud Volumes
  • The license is free; you pay your cloud provider for running one additional VM
  • PCI, HIPAA, GDPR, CCPA, etc
  • Credit cards, IBAN, emails, SSN, TIN, EIN, etc
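
To illustrate the kind of pattern matching such a classifier performs, here is a toy Python sketch (an illustration only, not how Cloud Compliance is actually implemented):

    import re

    # Toy patterns for two of the categories listed above.
    PATTERNS = {
        "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
        "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def classify(text):
        """Return the personal-data categories detected in a piece of text."""
        return [name for name, rx in PATTERNS.items() if rx.search(text)]

    print(classify("card 4111 1111 1111 1111, SSN 078-05-1120"))
    # -> ['credit_card', 'us_ssn']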

Cloud Insights

It is a monitoring tool for cloud & on-prem infrastructure and containers across different vendors, and the Basic edition is free for all NetApp customers. Cloud Insights is available as a SaaS service, so no installation & configuration is needed. So here is what's new (a REST sketch follows the list):

  • REST APIs
  • Kubernetes Technology
  • Cloud Secure now is part of Cloud Insights
  • Active IQ integration
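
For example, the REST API is token-based. A hedged Python sketch; the tenant URL, the header name and the endpoint path are assumptions based on the Cloud Insights API documentation of the time, so verify them in your tenant's API reference:

    import requests

    TENANT = "https://demo.c01.cloudinsights.netapp.com"    # assumed tenant URL
    HEADERS = {"X-CloudInsights-ApiKey": "API-TOKEN-HERE"}  # header name assumed

    # List monitored storage systems (endpoint path is an assumption).
    resp = requests.get(f"{TENANT}/rest/v1/assets/storages",
                        headers=HEADERS, timeout=30)
    resp.raise_for_status()
    for storage in resp.json():
        print(storage.get("name"), storage.get("vendor"))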

Cloud Secure

Thanks to self-learning AI, it identifies a baseline of file access and then actively blocks & notifies on unusual activity with your files, protecting against insider threats and external breaches. Cloud Secure monitors, secures & optimizes across the hybrid multi-cloud.

  • It is a feature of Cloud Insights Premium edition
  • Monitors file activity on CVO & on-prem ONTAP for unusual behavior

NetApp Kubernetes Services (NKS)

At the start NKS was available in public clouds; then NetApp added NKS to on-prem NetApp HCI. And at VMworld 2019 in Las Vegas, shortly before Insight 2019, NetApp announced NKS on VMware (no NetApp HCI platform needed).

New Solutions

  • Healthcare and automotive AI solutions validated with ONTAP AI
  • Design and recipe using NKS for rapid deployment of a CI/CD pipeline use case across both public and private clouds
  • End-to-end DevOps CI/CD on NetApp HCI
  • FlexPod validated as a deployment target for NetApp Kubernetes Service
  • SAP HANA validated on FlexPod (Memory-Accelerated FlexPod), including Optane memory and SnapCenter data protection
  • Citrix XenDesktop 7 designs for knowledge workers and GPU-assisted VDI for power users
  • Entry-level FlexPod with NetApp C190 and Cisco UCS C220, bringing enterprise robustness to medium-size businesses
  • FlexPod with FabricPool
  • FlexPod ransomware prevention and mitigation
    • Pre-emptive hardening
    • Recovery and remediation
  • Data Protection and Security Assessment
    • Identify and mitigate risks from security gaps, ransomware
    • Leverages OCI, Cloud Secure/Cloud Insights
  • And many more

HCI

New in NetApp HCI:

Two performance tests for HCI:

  1. IOmark-VM-HC: 5 storage & 18 compute nodes using data stores & VVols
  2. IOmark-VDI-HC: 5 storage nodes & 12 compute nodes with only data stores

Total 1,440 VMs with 3,200 VDI desktops.

Notice how asymmetrical the number of storage nodes is compared to the number of compute nodes. In "real" HCI architectures with local disk drives you would have to buy more equipment, while with NetApp HCI you can choose how much storage and how much compute you need and scale them separately. Dedup & compression were enabled in the tests.

Containers

Kubernetes Volume Snapshots & FlexClones with Trident for Cloud & On-Prem.
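
With Trident as the CSI provisioner, a Kubernetes VolumeSnapshot of a PVC turns into a storage-side snapshot, and a clone created from it becomes a FlexClone on ONTAP. A sketch using the official Kubernetes Python client and the v1beta1 snapshot API of that era; the snapshot class and PVC names are assumptions:

    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() inside a pod

    # VolumeSnapshot objects are CRDs, so we go through the CustomObjectsApi.
    snapshot = {
        "apiVersion": "snapshot.storage.k8s.io/v1beta1",
        "kind": "VolumeSnapshot",
        "metadata": {"name": "pgdata-snap-1"},
        "spec": {
            "volumeSnapshotClassName": "csi-snapclass",         # assumed class
            "source": {"persistentVolumeClaimName": "pgdata"},  # assumed PVC
        },
    }

    client.CustomObjectsApi().create_namespaced_custom_object(
        group="snapshot.storage.k8s.io",
        version="v1beta1",
        namespace="default",
        plural="volumesnapshots",
        body=snapshot,
    )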

In August, NetApp took its managed Kubernetes service to the next level with a new partnership with VMware, allowing users to manage workloads and infrastructure across public and private environments through a single pane of glass.

NetApp Trident

Ansible

How to measure storage performance

A hardcore 4-hour session from SNIA (the Storage Networking Industry Association)!

Technical Support

  • Flat and predictable support pricing
  • OneCollect available in a container and now sends info to Active IQ
  • You can buy managed services, so firmware upgrades on your storage will be managed for you

How to collect logs before opening a support ticket

Gartner Magic Quadrant for Primary Array

NetApp is in the top right corner.

Will NetApp adopt QLC flash in 2020?

https://blocksandfiles.com/2019/10/30/netapp-will-adopt-qlc-flash-in-2020/

Am I missing something?

Please let me know in the comments below!

If you have spotted an error, please let me know personally 😉

Disclaimer

Opinions & observations are my own, and not official NetApp information. This post contains forward-looking statements and may contain errors. If you have spotted an error, please let me know.

What is NetApp ASA?

ASA stands for All-flash SAN Array. ASA is based on the low-end & high-end AFF systems running ONTAP.

The ONTAP architecture in ASA systems remains the same; the only change is in how the storage is accessed over SAN protocols.

In (non-ASA) unified ONTAP systems, SAN protocols like FC and iSCSI use ALUA, which stands for Asymmetric Logical Unit Access: the connection is called active/active, but it distinguishes "active optimized" and "active non-optimized" paths. ANA plays the same role for NVMe that ALUA plays for the SCSI-based protocols. With both ANA & ALUA, in case of a storage controller failure the host waits for a timeout before it switches to an active non-optimized path, which works perfectly fine. See the sections "Share-nothing architecture" and "Network access" in the series of articles "How ONTAP Cluster works".

But there are some customers who:

  1. Are used to the idea of symmetric active/active connectivity
  2. Are looking for a product that will generate fewer notifications to the host in the event of a path loss

NetApp listened to its customers, evaluated both requests, and delivered the ASA products that give them the solution they have been looking for.

Video with Skip Shapiro about ASA:

Zoning for cluster storage in pictures

NetApp AFF & FAS storage systems can be combined into a cluster of up to 12 nodes (6 HA pairs) for SAN protocols. Let's take a look at zoning and connections using an example with 4 nodes (2 HA pairs) in the image below.

For simplicity we will discuss the connection of a single host to the storage cluster. In this example we connect each node to each server, and each storage node is connected with double links for reliability reasons.

It is easy to notice that only two paths are going to be Preferred in this example (solid yellow arrows).

Since NetApp FAS/AFF systems implement a "share nothing" architecture, disk drives are assigned to a node; the disks on a node are grouped into RAID groups; one or a few RAID groups are combined into a plex; and usually one plex forms an aggregate (in some cases two plexes form an aggregate, in which case both plexes must have identical RAID configurations; think of it as an analogy to RAID 1). On aggregates you create FlexVol volumes. Each FlexVol volume is a separate WAFL file system and can serve files for NAS (SMB/NFS), LUNs for SAN (iSCSI, FC, FCoE), or namespaces (NVMe-oF). A FlexVol can have multiple qtrees, and each qtree can store a LUN or files. Read more in the series of articles "How ONTAP Memory works".

Each drive belongs to & is served by a node. A RAID group belongs to & is served by a node. All the objects on top of those also belong to & are served by a single node, including aggregates, FlexVols, qtrees, LUNs & namespaces.

At any given time a disk can belong to only a single node, and in case of a node failure the HA partner takes over the disks, aggregates, and all the objects on top of them. Note that a "disk" in ONTAP can be an entire physical disk as well as a partition on a disk. Read more about disks and ADP (disk partitioning) here.

Though a LUN or namespace belongs to a single node, it is possible to access it through the HA partner or even through the other nodes. The most optimal path is always through the node which owns the LUN or namespace. If that node has more than one port, all paths through that node are considered optimal paths (paths through the other nodes are the non-optimized, non-primary paths). Normally it is a good idea to have more than one optimal path to a LUN.

ALUA & ANA

ALUA (Asymmetric Logical Unit Access) is a protocol which helps hosts access LUNs through optimal paths; it also allows paths to a LUN to be changed automatically if the LUN moves to another controller. ALUA is used with both the FCP and iSCSI protocols. Similarly to ALUA, ANA (Asymmetric Namespace Access) is the protocol for NVMe over Fabrics protocols such as FC-NVMe.

A host can use one or a few paths to a LUN, depending on the host multipathing configuration and on the portset configuration on the ONTAP cluster.

Since a LUN belongs to a single storage node and ONTAP provides online migration capabilities between nodes, your network configuration must provide access to the LUN from all the nodes, just in case. Read more in the series of articles "How ONTAP Cluster works".
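
As a mental model of what the host multipathing stack does with these path states, here is a toy Python sketch (not a real MPIO implementation; ports and states are illustrative):

    # Toy model: prefer active/optimized paths (through the LUN's owning node)
    # and fall back to active/non-optimized paths (through other nodes).
    paths = [
        {"port": "Node1:0a", "state": "optimized", "up": True},
        {"port": "Node1:0b", "state": "optimized", "up": True},
        {"port": "Node2:0a", "state": "non-optimized", "up": True},
        {"port": "Node2:0b", "state": "non-optimized", "up": True},
    ]

    def select_paths():
        for state in ("optimized", "non-optimized"):
            usable = [p["port"] for p in paths if p["up"] and p["state"] == state]
            if usable:
                return usable
        raise RuntimeError("no paths left to the LUN")

    print(select_paths())                    # ['Node1:0a', 'Node1:0b']
    paths[0]["up"] = paths[1]["up"] = False  # the owning node's ports go away
    print(select_paths())                    # fails over to Node2's ports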

According to NetApp best practices, zoning is quite simple:

  • Create one zone for each initiator (host) port on each fabric
  • Each zone must have one initiator port and all the target (storage node) ports.

Keeping one initiator per zone reduces "cross-talk" between initiators to zero.

Example for Fabric A, Zone for “Host_1-A-Port_A”:

Node        | Port   | WWPN
Host 1      | Port A | Port A
ONTAP Node1 | Port A | LIF1-A (NAA-2)
ONTAP Node2 | Port A | LIF2-A (NAA-2)
ONTAP Node3 | Port A | LIF3-A (NAA-2)
ONTAP Node4 | Port A | LIF4-A (NAA-2)

Example for Fabric B, Zone for “Host_1-B-Port_B”:

Node        | Port   | WWPN
Host 1      | Port B | Port B
ONTAP Node1 | Port B | LIF1-B (NAA-2)
ONTAP Node2 | Port B | LIF2-B (NAA-2)
ONTAP Node3 | Port B | LIF3-B (NAA-2)
ONTAP Node4 | Port B | LIF4-B (NAA-2)

Here is how the zoning from the tables above looks:
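
The pattern in the tables is mechanical, so it is easy to script. A toy Python sketch that emits single-initiator zones, one per host port per fabric (all WWPNs are placeholders):

    # One zone per host initiator port per fabric; each zone contains that single
    # initiator plus every target LIF on the same fabric.
    fabrics = {
        "A": {
            "initiator": ("Host_1-Port_A", "21:00:00:24:ff:00:00:01"),
            "targets": {f"LIF{n}-A": f"20:0{n}:00:a0:98:00:00:0a" for n in range(1, 5)},
        },
        "B": {
            "initiator": ("Host_1-Port_B", "21:00:00:24:ff:00:00:02"),
            "targets": {f"LIF{n}-B": f"20:0{n}:00:a0:98:00:00:0b" for n in range(1, 5)},
        },
    }

    for fabric, cfg in fabrics.items():
        name, wwpn = cfg["initiator"]
        print(f"Fabric {fabric}, zone {name}:")
        print(f"  initiator {name}: {wwpn}")
        for lif, lif_wwpn in sorted(cfg["targets"].items()):
            print(f"  target {lif}: {lif_wwpn}")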

Vserver or SVM

An SVM in an ONTAP cluster lives on all the nodes in the cluster. SVMs are separated from one another and are used to create a multi-tenant environment: each SVM can be managed by a separate group of people or a separate company, and one will not interfere with another. In fact, they will not know about each other's existence at all; each SVM is like a separate physical storage system box. Read more about SVMs, multi-tenancy and non-disruptive operations here.

Logical Interface (LIF)

Each SVM has its own WWNN in the case of FCP, its own IQN in the case of iSCSI, or its own namespace in the case of NVMe-oF. SVMs can share a physical storage node port: each SVM assigns its own range of network addresses (WWPN, IP, or namespace ID) to a physical port, and normally each SVM assigns one network address per physical port. Therefore a single physical storage node port might carry a few WWPN network addresses, each assigned to a different SVM, if several SVMs exist. NPIV is a crucial piece of functionality which must be enabled on the FC switch for an ONTAP cluster to serve the FC protocol properly.

Unlike ordinary virtual machines (i.e., on ESXi or KVM), each SVM "exists" on all the nodes in the cluster, not just on a single node; the picture below shows two SVMs on a single node only for simplicity.

Make sure that each node has at least one LIF; that way, host multipathing will be able to find an optimal path and always access a LUN through the optimal route, even after the LUN migrates to another node. Each port has its own assigned "physical address", which you cannot change, plus network addresses. Here is an example of what the network & physical addresses look like in the case of the iSCSI protocol. Read more about SAN LIFs here, and about SAN protocols like FC, iSCSI and NVMe-oF here.

Zoning recommendations

For ONTAP 9, 8 & 7 NetApp recommends having a single initiator and multiple targets.

For example, in the case of FCP, each physical port has its own physical WWPN (WWPN 3 in the image above) which should not be used at all; instead, the WWPN addresses assigned to LIFs (WWPN 1 & 2 in the image above) must be used for zoning and host connections. Physical addresses look like 50:0A:09:8X:XX:XX:XX:XX; this type of address is numbered according to NAA-3 (IEEE Network Address Authority 3), is assigned to a physical port, and should not be used at all. Example: 50:0A:09:82:86:57:D5:58. You can see NAA-3 addresses listed on network switches, but they should not be used.

When you create zones on a fabric, you should use addresses like 2X:XX:00:A0:98:XX:XX:XX; this type of address is numbered according to NAA-2 (IEEE Network Address Authority 2) and is assigned to your LIFs. Thanks to NPIV technology, the physical N_Port can register additional WWPNs, which means NPIV must be enabled on your switch for ONTAP to serve data over the FCP protocol to your servers. Example: 20:00:00:A0:98:03:A4:6E (a small validation sketch follows the OUI list below).

  • Block 00:A0:98 is the original OUI block for ONTAP
  • Block D0:39:EA is the newly added OUI block for ONTAP
  • Block 00:A0:B8 is used on NetApp E-Series hardware
  • Block 00:80:E5 is reserved for future use.
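
Given the OUI blocks above, you can sanity-check which WWPNs in a zone actually belong to ONTAP LIFs. A small Python sketch; it handles NAA-2 style addresses, where the OUI sits in bytes 3-5, and the sample WWPNs come from the examples above:

    # NetApp OUI blocks for ONTAP, as listed above.
    ONTAP_OUIS = {"00:a0:98", "d0:39:ea"}

    def oui(wwpn):
        """Extract the OUI of an NAA-2 style WWPN (2X:XX:OUI:vendor-serial)."""
        return ":".join(wwpn.lower().split(":")[2:5])

    for wwpn in ("20:00:00:A0:98:03:A4:6E",   # LIF address (NAA-2): zone it
                 "50:0A:09:82:86:57:D5:58"):  # physical port (NAA-3): do not zone
        ok = oui(wwpn) in ONTAP_OUIS
        print(wwpn, "->", "ONTAP LIF, use for zoning" if ok
              else "not an NAA-2 LIF address, do not zone")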

Read more

Disclaimer

Please note that in this article I have described my own understanding of the internal organization of ONTAP systems. Therefore, this information might be outdated, or I simply might be wrong in some aspects and details. I will greatly appreciate any contribution that makes this article better, so please leave your ideas and suggestions about this topic in the comments below.

All product names, logos, and brands are property of their respective owners. All company, product and service names used in this website are for identification purposes only.