AFF & FAS
AFF A400, FAS8300 & FAS8700. All of these systems are basically the same platform, just with different amounts of memory & numbers of CPU cores. The FAS systems also have a FlashCache module onboard for read caching. All three require ONTAP 9.7 RC1 and carry the latest Intel Cascade Lake processors, which gives us hope we might see Optane as cache, as NetApp showed as part of its vision at Insight 2017:
AFF with hardware-assisted dedup & compression on a Pensando network card used for cluster interconnects
RoCE as a data protocol is available as a Proof of Concept lab for testing, but not in 9.7
AFF A400 now supports the NS224 disk shelf, alongside the A320
AFF & NVMe
ONTAP 9.6 is qualified with RedHat, SUSE and Oracle Linux for FC-NVMe. Storage path failover (ANA) is supported with SUSE Linux 15 and Oracle Linux 7.6 and 7.7, and there is (still) no RedHat Linux, due to RedHat's slowness.
ONTAP AI with containers
Execute AI workloads at scale with Trident and Kubernetes on the converged infrastructure solution "ONTAP AI", built from NVIDIA DGX servers, networking & NetApp AFF NAS storage.
ASA uses the same AFF hardware but provides only SAN protocols. ASA systems run ONTAP with the same architecture, but with symmetric active/active network access to the block devices. Watch a short video about ASA on Twitter. The hardware for ASA is identical to standard AFF A700 and AFF A220 systems, and the software bundle includes the Flash/Premium bundle licenses minus the NAS-only features
NFSv4, 4.1 & pNFS support with NFS security and locking
With pNFS, the data I/O path maintains locality within the cluster
NDMP with FlexGroups: an entire FG, a directory, or a qtree
Dump & restore between FGs, or between an FG & a FlexVol
FlexGroup is now supported as an origin volume for FlexCache
FlexClone support with FlexGroup
VAAI and copy offload support, though FlexGroups are still not recommended for VMs; but we can clearly see NetApp moving in that direction
SnapMirror Sync (SM-S)
SM-S now replicates application-created snapshots & FlexClones. Scheduled snapshots are not replicated, though
NVMe/FC protocol and namespace support added
NDAS
Since NDAS replicates data together with its metadata (unlike FabricPool), you can now access the data directly out of the bucket
StorageGRID added as object storage for NDAS, besides the existing AWS S3
NetApp is looking into the ability to connect the NetApp In-Place Analytics Module to an object storage bucket holding data backed up by NDAS from ONTAP NAS, so you can run analytics on your data and on how it evolved over time (i.e. snapshots)
The NetApp Interoperability Matrix Tool now lists supported configurations for NetApp Data Availability Services
AutoSupport is now enabled by default to send NDAS diagnostic records to NetApp Support
200 TB of managed capacity with 100 million files is now supported
ONTAP 9.6P1 & ONTAP 9.7 and later releases are now supported on the target cluster
Bring your own licenses (BYOL) are now available for new and upgraded NDAS instances
The NDAS PreChecker tool is now available to assess your environment’s readiness for deploying NDAS
New AWS regions are now supported for NDAS
MCC-IP
Open networking: a 4-node MCC-IP cluster can now use customer switches instead of dedicated NetApp-validated ones. The switches must comply with NetApp requirements and be supported by the customer or the switch vendor.
Supported platforms:
AFF A800, A700, A320, A300, and even the low-end A220
FAS 9000, 8200, 2750
MCC-FC
Supported platforms:
AFF A700, A400, A300
FAS 9000, 8700, 8300, 8200
ONTAP Mediator instead of tie breaker
The new Mediator (instead of the tie-breaker) for MCC & ONTAP SDS. The Mediator is a passive appliance managed from ONTAP. All the logic lives in ONTAP, whereas previously the switchover logic was on the tie-breaker's side.
Opinions & observations are my own, and not official NetApp information. This post contains future looking statements and may contain errors. If you have spotted an error, please let me know.
AI learning algorithms based on the wisdom of the customer base provide better insights about risks
APIs with Active IQ
Now you can not just monitor but also take actions and fix risks with Active IQ (which happens through Unified Manager)
It also decreases noise: Active IQ tries to find the root of a problem and solve first those problems whose fixes eliminate the majority of your risks. Say you have 100 risks; it may well be that a few fixes resolve 70% of them
OneCollect is available in a container and now sends info to Active IQ
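That prioritization idea can be sketched as a tiny greedy exercise. The risk-to-fix mapping below is invented for illustration; Active IQ's real logic is NetApp's own:

```python
# Sketch: pick the few fixes that resolve the most risks first.
# The risk-to-fix mapping is made up for illustration; Active IQ's
# real prioritization logic is NetApp's own.
from collections import Counter

def prioritize_fixes(risk_to_fix):
    """Return fixes ordered by how many open risks each one resolves."""
    return [fix for fix, _ in Counter(risk_to_fix.values()).most_common()]

# 100 hypothetical risks: 70 of them share just two root-cause fixes
risks = {f"risk-{i}": ("fix-A" if i < 40 else "fix-B" if i < 70 else f"fix-{i}")
         for i in range(100)}
order = prioritize_fixes(risks)
covered = sum(1 for f in risks.values() if f in order[:2])
print(order[:2], covered)  # ['fix-A', 'fix-B'] 70
```

Applying the top two fixes resolves 70 of the 100 hypothetical risks, which is exactly the "fix 70% with a few fixes" effect described above.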
Active IQ Unified Manager 9.7
The most important improvement: actions, with a simple "Fix it" button
New cluster security section in the dashboard
Integration with NetApp Service Level Manager, which is now free and included in ONTAP
NetApp & Rubrik
NetApp & Rubrik announced a collaboration. First, StorageGRID can be a target for Rubrik archives. Second, Rubrik now supports the NetApp SnapDiff API. SnapDiff is a technology in ONTAP that compares two snapshots and returns the list of changed files, so Rubrik can copy only those files. Rubrik is not the first to work with the SnapDiff API (Catalogic, Commvault, IBM TSM and Veritas NetBackup work with it as well), but Rubrik is the first to back the data up to a public cloud. It will be available in Rubrik Cloud Data Management (CDM) v5.2 in 2020.
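The SnapDiff concept is easy to model. Below is a toy sketch of the idea only (the real API runs inside ONTAP against WAFL metadata): compare two point-in-time file listings and report what a backup tool would need to copy.

```python
# Toy model of the SnapDiff idea: compare two snapshot file listings
# (path -> (size, mtime)) and report what a backup tool must copy.
# The real SnapDiff API works inside ONTAP on WAFL metadata; this is
# only an illustration of the concept.

def snap_diff(base_snap, incr_snap):
    changed = [p for p, meta in incr_snap.items()
               if base_snap.get(p) != meta]          # new or modified files
    deleted = [p for p in base_snap if p not in incr_snap]
    return changed, deleted

snap1 = {"/a.txt": (100, 1), "/b.txt": (200, 1)}
snap2 = {"/a.txt": (100, 1), "/b.txt": (250, 2), "/c.txt": (10, 2)}
changed, deleted = snap_diff(snap1, snap2)
print(changed)  # ['/b.txt', '/c.txt']
print(deleted)  # []
```

Only /b.txt (modified) and /c.txt (new) need copying; the unchanged /a.txt is skipped, which is the whole point of incremental-forever backups on top of snapshots.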
Consider the new H615C compute node with Cascade Lake CPUs, which, by the way, are required for Optane memory; it looks like NetApp is putting all the pieces together to make it happen.
An automated installation and deployment of Grafana, NetApp E-Series Web Services, and supporting software for performance monitoring of NetApp E-Series storage systems. NetApp intends this project to let you quickly and simply deploy an instance of its performance analyzer for monitoring your E-Series storage systems; it incorporates various open source components and tools to do so. While it is primarily intended to serve as a reference implementation for using Grafana to visualize the performance of your E-Series systems, it can also be customized and extended based on your individual needs.
Kubernetes was originally designed by Google, Google remains one of the main contributors to the project, and GKE is arguably the most advanced, mature & stable offering on the market. If you have tried GKE in GCP and other competitive solutions, you know what I'm talking about.
Containers on-premises are difficult when you want an enterprise solution for new containerized applications, for a number of reasons: installation, configuration, management and updates of your core infrastructure components; persistent & predictable storage performance; and the fact that DevOps teams do not want to deal with infrastructure, they just want to consume it. These are the key problems to solve, and NetApp aims to do it.
A few bullet points on why Google Anthos on NetApp HCI is an important announcement:
Hybrid cloud. NetApp, following its Data Fabric vision, continues to bring the hybrid cloud experience to its users. Now, with Anthos on HCI, your on-prem data center becomes just another cloud zone. Software updates for GKE & Anthos are on Google's shoulders; you just consume them. Not only NetApp HCI maintenance like software & firmware updates can be bought as a service, but capacity as well: you can pay as you go & consume infrastructure as a service, OPEX instead of CAPEX on request, with NetApp Keystone
NetApp Kubernetes Service (NKS). In addition to NKS, which allows the deployment & management of Kubernetes clusters on-premises & in the cloud, Anthos provides the ability to deploy clusters on-prem fully integrated with Google Cloud, including the ability to manage them from the GKE console. NKS is bundled with Istio, Helm & many other components for your microservices, which takes DevOps to the next level. Cloud infrastructure has reached your on-premises data center
Storage automation. NetApp Trident is arguably the most advanced storage driver for containers on the market so far, bringing automation, an API and persistent storage to the containerization world, and Trident with NKS & Anthos makes total sense. Speaking of automation, NetApp's Ansible playbooks are also among the most advanced on the market at the moment, with 106 published & supported modules, and SolidFire itself is known as fully API-driven storage, so you can work with it solely through its RESTful API
Simple, predictable and performant enterprise storage with QoS, whether on-prem or in the cloud: use Trident and Ansible with NetApp HCI on-prem, or with CVO or CVS in AWS, Azure or GCP; moreover, replicate your data to the cloud for DR or test/dev
NetApp HCI vs other HCI solutions. One of the most notable HCI competitors is Nutanix, so I want to use it as an example. Nutanix's storage architecture with local disk drives is certainly interesting, but not unique, and it has some architectural disadvantages; scalability is one issue to name. Local disk drives are a blessing for tiny solutions and not such a good idea when you need to scale up: the cheapness of a small solution with commodity hardware & local drives might turn into a curse at scale. That is why Nutanix eventually developed dedicated storage nodes connected over the network to overcome the issue, stepping into the very competitive land of network storage systems, where plenty of capable & scalable systems already exist. The most exciting part of Nutanix is therefore its ecosystem & simplicity, not its storage architecture. Now, thanks to Anthos, NetApp HCI gets into a unique position with scalability, ecosystem, simplicity, hybrid cloud & functionality for microservices that some other great competitors like Nutanix have not reached yet, and that gives NetApp momentum in the HCI market
Performance. Don't forget about NetApp's Max Data software, which already works with VMware & SolidFire; it will take NetApp only one last step to bring DCPMM like Intel Optane to NetApp HCI. Note that NetApp just announced at Insight 2019 a compute node with Intel Cascade Lake CPUs, which are required for Optane. Max Data is not available on NetApp HCI yet, but we can clearly see NetApp putting everything together to make it happen. Persistent memory in the form of a file system on a Linux host, with tiering of cold blocks to "slow" SSD storage, could put NetApp on top of all the competitors in terms of performance
HCI Performance
Speaking of which, take a look at these two performance tests:
Notice how asymmetrical the number of storage nodes is compared to compute nodes. In "real" HCI architectures with local drives you have to buy more equipment, while with NetApp HCI you can choose how much storage and how much compute you need and scale them separately. Dedup & compression were enabled in the tests.
Some competitors might say NetApp does not innovate anymore. Well, read this article and decide for yourself whether that is true, or just yet another piece of shameless marketing.
E-Series
Performance
End-to-end NVMe with the EF600: twice the I/O of the EF570, with lower latency:
It combines extreme IOPS (up to 2M), response times of less than 100 microseconds, and throughput of up to 44GBps
Cloud Storage Pools, combined with the StorageGRID industry-leading data placement policy engine (ILM policies), enable your business to automatically and intelligently move only the data that needs to move to one or more clouds. Cloud Storage Pools can be based on buckets, applications, or tenants, or work at the individual object level (based on metadata key and/or value pairs).
Azure VMware Solution with Azure NetApp Files (ANF)
VMware Cloud on AWS with CVS & CVO
Cloud Volumes On-Premises
Cloud Volumes Service On-Premises
Cloud Volumes ONTAP on HCI
Cloud Compliance
Cloud Compliance is a file classification tool for industry compliance and privacy regulations across clouds. The license is free; you pay only for the VM running at your cloud provider.
File classification & privacy risk tool for Cloud Volumes
The license is free; you pay your cloud provider only for running an additional VM
PCI, HIPAA, GDPR, CCPA, etc.
Credit cards, IBAN, emails, SSN, TIN, EIN, etc.
Cloud Insights
It is a monitoring tool for cloud & on-prem infrastructure, containers & different vendors, and the basic edition is free to all NetApp customers. Cloud Insights is available as a SaaS service; no installation & configuration needed. So here is what's new:
REST APIs
Kubernetes Technology
Cloud Secure now is part of Cloud Insights
Active IQ integration
Cloud Secure
Thanks to self-learning AI, it identifies baseline file access patterns and actively blocks & reports unusual activity with your files, protecting against insider threats and external breaches. Cloud Secure monitors, secures & optimizes across the hybrid multi-cloud
It is a feature of the Cloud Insights Premium edition
Monitors file activities on CVO & on-prem ONTAP for unusual activity
NetApp Kubernetes Services (NKS)
At the start, NKS was available in public clouds; then NetApp added NKS to on-prem NetApp HCI. And at VMworld 2019 in Las Vegas, before Insight 2019, NetApp announced NKS on VMware (no NetApp HCI platform needed).
Now you can manage your Kubernetes clusters & containers in the cloud & on-prem with the single pane of glass of the NKS service; each environment is now just yet another "zone" for you:
Containers
Kubernetes Volume Snapshots & FlexClones with Trident for Cloud & On-Prem.
In August, NetApp took its managed Kubernetes service to the next level with a new partnership with VMware, allowing users to manage workloads and infrastructure across public and private environments in a single pane of glass.
Support for Cloud Volumes Service for Google Cloud Platform
A SAN economy driver for ONTAP: it creates PVs as ONTAP LUNs within a pool of automatically managed FlexVols. The ontap-san-economy driver supports Kubernetes VolumeSnapshots and PVC clones from VolumeSnapshots
iSCSI volume resize works with Trident in CSI mode and Kubernetes 1.16 or greater. Trident resizes the volume on the storage, extends the filesystem, rescans the device and performs a reattach
Support for Kubernetes 1.16 and OpenShift 4.2
Clone PVCs with CSI Volume Cloning
Support for iSCSI raw block volumes in CSI mode, including multi-attach support
For the complete list of enhancements and bug fixes, check out the Release Info
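The "pool of automatically managed FlexVols" behavior of the ontap-san-economy driver can be sketched as simple first-fit packing. The sizes and the per-FlexVol limit below are invented for illustration; Trident's actual limits and placement policy differ in the details:

```python
# Sketch of an "economy" placement policy: put each new LUN into an
# existing FlexVol with enough free space, creating a new FlexVol when
# none fits. Sizes and the per-volume limit are illustrative only.

FLEXVOL_SIZE_GIB = 100

def place_lun(flexvols, lun_gib):
    """flexvols: list of used-GiB counters; returns index hosting the LUN."""
    for i, used in enumerate(flexvols):
        if used + lun_gib <= FLEXVOL_SIZE_GIB:
            flexvols[i] += lun_gib
            return i
    flexvols.append(lun_gib)        # grow the pool with a new FlexVol
    return len(flexvols) - 1

pool = []
for size in [40, 40, 30, 30, 50]:
    place_lun(pool, size)
print(pool)  # [80, 60, 50] -> five LUNs packed into three FlexVols
```

The point of the economy driver is exactly this many-LUNs-per-FlexVol packing, which keeps the number of FlexVols well below the number of PVs.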
ASA stands for All-flash SAN Array. ASA is based on low-end & high-end AFF systems running ONTAP.
ONTAP architecture in ASA systems remains the same, with no changes. The only change is in the access to the storage over SAN protocols.
In (non-ASA) unified ONTAP systems, SAN protocols like FC and iSCSI use ALUA, which stands for Asymmetrical Logical Unit Access: this type of connection is called active/active, but it uses "active optimized" and "active non-optimized" paths. ANA does for NVMe what ALUA does for SCSI-based protocols. With both ANA & ALUA, in case of a storage controller failure the host waits for a timeout before it switches to the active non-optimized path, which works perfectly fine. See more in the sections "Share-nothing architecture" and "Network access" in the series of articles "How ONTAP Cluster works".
But there were some customers who:
Were used to the idea of symmetric active/active connectivity
Were looking for a product that generates fewer notifications to the host in the event of a path loss
NetApp listened to its customers, evaluated both requests, and delivered the ASA products that give them the solution they were looking for.
NetApp AFF & FAS storage systems can be combined into a cluster of up to 12 nodes (6 HA pairs) for SAN protocols. Let's take a look at zoning and connections using an example with 4 nodes (2 HA pairs) in the image below.
For simplicity we will discuss the connection of a single host to the storage cluster. In this example we connect each node to each server, and each storage node is connected with double links for reliability reasons.
It is easy to notice that only two paths are going to be Preferred in this example (solid yellow arrows).
Since NetApp FAS/AFF systems implement a "share nothing" architecture, disk drives are assigned to a node; the disks on a node are grouped into RAID groups; one or a few RAID groups are combined into a plex; and usually one plex forms an aggregate (in some cases two plexes form an aggregate, in which case both plexes must have identical RAID configurations; think of it as an analogy to RAID 1). On aggregates you create FlexVol volumes. Each FlexVol volume is a separate WAFL file system and can serve NAS files (SMB/NFS), SAN LUNs (iSCSI, FC, FCoE) or namespaces (NVMeoF). A FlexVol can have multiple qtrees, and each qtree can store a LUN or files. Read more in the series of articles How ONTAP Memory works.
Each drive belongs to & is served by a node. A RAID group belongs to & is served by a node. All objects on top of those also belong to and are served by a single node, including aggregates, FlexVols, qtrees, LUNs & namespaces.
At any given time a disk can belong to a single node, and in case of a node failure the HA partner takes over its disks, aggregates, and all the objects on top of them. Note that a "disk" in ONTAP can be an entire physical disk as well as a partition on a disk. Read more about disks and ADP (disk partitioning) here.
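This ownership model can be sketched in a few lines. The node and disk names are hypothetical; the point is that everything a failed node owned moves to its HA partner as one unit:

```python
# Toy model of share-nothing ownership and HA takeover: every disk
# (or partition) is owned by exactly one node; on failure the HA
# partner takes over everything the failed node owned.

ha_partner = {"node1": "node2", "node2": "node1"}
disk_owner = {"d1": "node1", "d2": "node1", "d3": "node2"}

def takeover(failed_node):
    partner = ha_partner[failed_node]
    for disk, owner in disk_owner.items():
        if owner == failed_node:
            disk_owner[disk] = partner  # aggregates/FlexVols/LUNs follow
    return partner

takeover("node1")
print(disk_owner)  # {'d1': 'node2', 'd2': 'node2', 'd3': 'node2'}
```

After takeover, the partner serves all the failed node's aggregates and the LUNs on top of them, which is why hosts must already have paths to the partner configured.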
Though a LUN or namespace belongs to a single node, it is possible to access it through the HA partner or even from other nodes. The most optimal path is always through the node which owns the LUN or namespace. If that node has more than one port, all ports on that node are considered optimal paths (also known as Primary paths). Normally it is a good idea to have more than one optimal path to a LUN.
ALUA & ANA
ALUA (Asymmetric Logical Unit Access) is a protocol which helps hosts access LUNs through optimal paths; it also allows paths to a LUN to change automatically if the LUN moves to another controller. ALUA is used with both the FCP and iSCSI protocols. Similarly to ALUA, ANA (Asymmetric Namespace Access) is the equivalent protocol for NVMe over Fabrics transports such as FC-NVMe, etc.
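The host-side effect of ALUA/ANA can be sketched as a path-selection rule: use active/optimized paths while any exist, otherwise fall back to active/non-optimized ones. The path names and states below are illustrative:

```python
# Sketch of how a multipathing stack uses ALUA/ANA states: prefer
# active/optimized paths, fall back to active/non-optimized paths
# after the owning controller fails. The path list is hypothetical.

def usable_paths(paths):
    """paths: list of (path, state) tuples; returns the paths to use."""
    optimized = [p for p, s in paths if s == "active/optimized"]
    return optimized or [p for p, s in paths if s == "active/non-optimized"]

paths = [("node1:lif1", "active/optimized"),
         ("node2:lif2", "active/non-optimized")]
print(usable_paths(paths))  # ['node1:lif1']

# node1 fails: its path disappears, I/O continues via the HA partner
paths = [("node2:lif2", "active/non-optimized")]
print(usable_paths(paths))  # ['node2:lif2']
```

This is why the host sees a brief timeout on controller failure before moving to the non-optimized path, as described above.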
A host can use one or a few paths to a LUN, depending on the host's multipathing configuration and the portset configuration on the ONTAP cluster.
Since a LUN belongs to a single storage node, and ONTAP provides online migration capabilities between nodes, your network configuration must provide access to the LUN from all the nodes, just in case. Read more in the series of articles How ONTAP cluster works.
According to NetApp best practices, zoning is quite simple:
Create one zone for each initiator (host) port on each fabric
Each zone must have one initiator port and all the target (storage node) ports.
Keeping one initiator per zone eliminates "cross talk" between initiators.
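The single-initiator rule is easy to automate. Below is a small sketch of a zone generator in the spirit of the example tables; the WWPN-like names are placeholders:

```python
# Sketch: generate single-initiator zones per the best practice above.
# One zone per host port per fabric, containing that initiator plus all
# target LIF WWPNs on the same fabric. Names are placeholders.

def build_zones(host, initiators_by_fabric, target_lifs_by_fabric):
    zones = {}
    for fabric, init_port in initiators_by_fabric.items():
        name = f"{host}-{fabric}-{init_port}"
        zones[name] = [init_port] + target_lifs_by_fabric[fabric]
    return zones

zones = build_zones(
    "Host_1",
    {"A": "Port_A", "B": "Port_B"},
    {"A": ["LIF1-A", "LIF2-A", "LIF3-A", "LIF4-A"],
     "B": ["LIF1-B", "LIF2-B", "LIF3-B", "LIF4-B"]},
)
print(len(zones))                # 2 zones, one per fabric
print(zones["Host_1-A-Port_A"])  # ['Port_A', 'LIF1-A', 'LIF2-A', 'LIF3-A', 'LIF4-A']
```

Each generated zone holds exactly one initiator and all four target LIFs on its fabric, matching the zoning layout shown in the tables.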
Example for Fabric A, Zone for "Host_1-A-Port_A":

Node          Port     WWPN
Host 1        Port A   Port A
ONTAP Node1   Port A   LIF1-A (NAA-2)
ONTAP Node2   Port A   LIF2-A (NAA-2)
ONTAP Node3   Port A   LIF3-A (NAA-2)
ONTAP Node4   Port A   LIF4-A (NAA-2)
Example for Fabric B, Zone for "Host_1-B-Port_B":

Node          Port     WWPN
Host 1        Port B   Port B
ONTAP Node1   Port B   LIF1-B (NAA-2)
ONTAP Node2   Port B   LIF2-B (NAA-2)
ONTAP Node3   Port B   LIF3-B (NAA-2)
ONTAP Node4   Port B   LIF4-B (NAA-2)
Here is how the zoning from the tables above looks:
Vserver or SVM
An SVM in an ONTAP cluster lives on all the nodes in the cluster. SVMs are separated from one another and are used to create a multi-tenant environment. Each SVM can be managed by a separate group of people or a separate company, and one will not interfere with another; in fact, they will not know about each other's existence at all. Each SVM is like a separate physical storage system. Read more about SVMs, multi-tenancy and non-disruptive operations here.
Logical Interface (LIF)
Each SVM has its own WWNN in the case of FCP, its own IQN in the case of iSCSI, or its own namespace in the case of NVMeoF. SVMs can share a physical storage node port: each SVM assigns its own network addresses (WWPN, IP, or Namespace ID) to a physical port, normally one network address per port. Therefore, if several SVMs exist, one physical storage node port might carry a few WWPN network addresses, each assigned to a different SVM. NPIV is crucial functionality which must be enabled on the FC switch for an ONTAP cluster to serve the FC protocol properly.
Unlike ordinary virtual machines (i.e. on ESXi or KVM), each SVM "exists" on all the nodes in the cluster, not just on a single node; the picture below shows two SVMs on a single node just for simplicity.
Make sure that each node has at least one LIF; in this case host multipathing will be able to find an optimal path and always access a LUN through the optimal route, even if the LUN migrates to another node. Each port has its own assigned "physical address", which you cannot change, plus network addresses. Here is an example of what network & physical addresses look like in the case of the iSCSI protocol. Read more about SAN LIFs here and about SAN protocols like FC, iSCSI and NVMeoF here.
Zoning recommendations
For ONTAP 9, 8 & 7, NetApp recommends zones with a single initiator and multiple targets.
For example, in the case of FCP, each physical port has its own physical WWPN (WWPN 3 in the image above) which should not be used at all; instead, the WWPN addresses assigned to LIFs (WWPN 1 & 2 in the image above) must be used for zoning and host connections. Physical addresses look like 50:0A:09:8X:XX:XX:XX:XX; this type of address is numbered according to NAA-3 (IEEE Network Address Authority 3) and is assigned to a physical port. Example: 50:0A:09:82:86:57:D5:58. You can see NAA-3 addresses listed on network switches, but they should not be used.
When you create zones on a fabric, you should use addresses like 2X:XX:00:A0:98:XX:XX:XX; this type of address is numbered according to NAA-2 (IEEE Network Address Authority 2) and is assigned to your LIFs. Thanks to NPIV technology, the physical N_Port can register additional WWPNs, which means NPIV must be enabled on your switch in order for ONTAP to serve data over the FCP protocol to your servers. Example: 20:00:00:A0:98:03:A4:6E
Block 00:A0:98 is the original OUI block for ONTAP
Block D0:39:EA is the newly added OUI block for ONTAP
Block 00:A0:B8 is used on NetApp E-Series hardware
Please note that in this article I described my own understanding of the internal organization of ONTAP systems. Therefore, this information might be outdated, or I simply might be wrong in some aspects and details. I will greatly appreciate any contribution you make towards improving this article; please leave your ideas and suggestions about this topic in the comments below.
All product names, logos, and brands are property of their respective owners. All company, product and service names used in this website are for identification purposes only.