NetApp announced Cloud Volumes support for Google Cloud Platform this month, thus covering all three major public cloud providers in the market. Offering NetApp cloud volumes on Azure as Azure NetApp Files was a big highlight of 2017, back when I used to work at NetApp, so I can only imagine the excitement in the Cloud Data Services BU right now. In this blog post, I want to cover the basics of NetApp cloud volumes: what its core features are, how they can be useful to you, and so on.
NetApp cloud volumes provide cloud-native, enterprise-class file services. The service is built and maintained jointly by NetApp and the public cloud provider teams. It is a multi-tenant SaaS offering with guaranteed performance and availability, and being a cloud service, the cost model is usage-based, so you only pay for what you use! Any volume you create with this service can serve NFSv3, NFSv4, and CIFS/SMB clients simultaneously. Performance tiers are SLA-backed, starting with ~3,000 IOPS/TB for the premium tier, and it looks like NetApp is also working on lower performance tiers that will be less expensive to consume. Currently, the premium tier comes to ~$0.30/GB/month, which according to NetApp is cheaper than the comparable native file services the cloud providers offer.

One explanation for this cost differential is that NetApp can apply all of its proven storage efficiency features, like deduplication, compression, and compaction, to the service. Along with these efficiency features, you also get instant snapshots and clones of these volumes, which can be reused by other teams or other environments a customer might have.

To add to the benefits, there is no data ingress cost. So if your own datacenters produce a lot of data that you want to analyze in the cloud, you can simply SnapMirror the data from your on-prem cluster to cloud volumes in your preferred cloud provider and use the compute there to process it. Keep in mind that only data ingress is free: you will be charged if you want to bring the data back from the cloud to your on-prem cluster. Having cloud volumes in multiple clouds also means you can SnapMirror data between cloud providers (although you will pay data egress charges on one end), which is an easy way to have your data present where you need it.
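For readers who have not driven SnapMirror before, the replication flow described above can be sketched roughly as the following ONTAP CLI sequence. The SVM and volume names (onprem-svm, cloud-svm, proj_data) are hypothetical placeholders, and the wrapper only prints each command as a dry run; on a real cluster you would issue them from the ONTAP shell on the destination side.

```shell
# Dry-run wrapper: print each SnapMirror command instead of executing it.
sm() { echo "snapmirror $*"; }

# 1. On the destination, define the mirror relationship (XDP = extended data protection).
sm create -source-path onprem-svm:proj_data \
   -destination-path cloud-svm:proj_data_dst -type XDP

# 2. Kick off the baseline transfer.
sm initialize -destination-path cloud-svm:proj_data_dst

# 3. Send incremental updates on whatever schedule you need.
sm update -destination-path cloud-svm:proj_data_dst
```

The same create/initialize/update pattern applies whether the destination is a cloud volume or another on-prem cluster; only the peered paths change.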
To instantiate a new cloud volume, you basically navigate to the Cloud Data Services portal and click Create Volume. You will see the following wizard to create a new volume.
You enter a name for the volume, select the region you want it deployed in, and create a volume path. The path needs to be unique, as it becomes the junction path for the new volume being created on the underlying ONTAP cluster. You also have the option of creating the volume from a previous snapshot. Next, specify the size of the volume (currently there is a 10TB limit) and then modify the export policies. Export policies help you secure access to the volume: you can specify the subnets in your VPC that should have access to the new volume, and you can grant read/write or read-only access per export policy rule. Next, define the snapshot policy you would like for this volume. For people well versed in ONTAP, all of the above settings are similar to creating a new FlexVol on an ONTAP cluster. Once you hit Create Volume, a new volume is provisioned for you, and when it is ready, you get custom instructions on how to mount it on your instances.
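For the ONTAP-literate, the wizard's export policy section maps conceptually to an export-policy rule like the one below. The SVM, policy name, and subnet are hypothetical, and the managed service creates these rules for you behind the scenes; the wrapper just prints the command for illustration.

```shell
# Dry-run wrapper: print the export-policy command instead of executing it.
ep() { echo "vserver export-policy rule $*"; }

# Allow read/write NFS access from a single VPC subnet only.
ep create -vserver cv-svm -policyname proj-data-export \
   -clientmatch 10.0.1.0/24 -protocol nfs -rorule sys -rwrule sys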
Again, this is a File service, so you can mount the same volume on multiple instances and use it simultaneously.
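As a minimal sketch, mounting a cloud volume on a Linux instance over NFSv3 looks like the commands below. The export address and path (10.0.0.4:/proj-data) are placeholders; the portal's mount instructions give you the real ones. The wrapper prints the commands rather than executing them, since an actual mount needs root and a reachable export.

```shell
# Dry-run wrapper: print each command instead of executing it.
run() { echo "$*"; }

# Create a mount point and mount the volume over NFSv3.
run sudo mkdir -p /mnt/proj-data
run sudo mount -t nfs -o rw,hard,vers=3,tcp 10.0.0.4:/proj-data /mnt/proj-data
```

The same mount command works unchanged on every instance that needs the volume; NFS handles the simultaneous shared access.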
Now, let’s talk about how you can get data into cloud volumes. We already discussed SnapMirror and how you can leverage it to move data between on-prem and cloud volumes, as well as between cloud volumes on multiple cloud providers. Another way to move data from a non-NetApp source into cloud volumes is Cloud Sync, which lets you copy data from an NFS server, a CIFS server, AWS EFS, or an S3 bucket into your cloud volume. And according to the Cloud Field Day presentation, NetApp is also working on a service similar to AWS Snowball that will facilitate moving large amounts of data between your on-prem datacenters and NetApp cloud volumes. This, if done correctly, can save users the additional step of moving data into S3 buckets before moving it to cloud volumes. NetApp has also been working with the cloud providers to seamlessly integrate the creation and maintenance of cloud volumes into the native orchestration platforms, like Azure Resource Manager on Azure, so users don’t have to learn new tooling to start consuming cloud volumes.
I hope this blog post was helpful in introducing NetApp cloud volumes and giving you a basic understanding of its capabilities. It would be good to get more information on how it integrates with the other cloud provider services when the service goes GA. Until then, if you want to learn more, you can watch the Cloud Field Day presentation or sign up for the preview with your preferred cloud provider.