Nutanix .NEXT 2016 Conference Highlights

My take on the Nutanix .NEXT 2016 Conference comes a bit late, but I did not have the chance to review what Nutanix announced at the conference until now.

Let’s go through what I consider the most exciting announced features, which are coming in the next few months. You can see the full list of announcements at Nutanix .NEXT 2016 Announcements: Innovation is Just a Click Away.

A Single Platform for All Workloads

Source: Nutanix.com – Single Nutanix Fabric

Nutanix takes another step forward with the message it has carried since its beginnings: #NoSAN. Many workloads still run on physical servers because their resource requirements could jeopardize the performance of other virtual machines within the hyper-converged infrastructure. Those physical workloads require a SAN array, and until now Nutanix did not support block storage functionality out of the box. Even deploying VSA software on top of Nutanix and exposing iSCSI targets was not feasible, since it could cause performance degradation for the virtual workloads running on the platform.

Nowadays, with the price of flash storage coming down and emerging technologies like NVMe increasingly adopted by vendors, it starts to make sense to leverage the unused IOPS and available capacity of the hyper-converged infrastructure and expose them to physical workloads. For this, Nutanix has developed a feature called Acropolis Block Services (ABS). This capability is planned to be available in the 4.7 release.

Acropolis Block Services

Based on the iSCSI protocol, customers can use it much like Amazon Elastic Block Store (EBS). I believe customers will take a look at this feature when they need to replace their SAN arrays. In addition, the distributed storage architecture is a plus from a reliability and performance standpoint. I love how easy it is to scale a distributed storage solution and how quickly customers can get more storage and performance, within minutes.
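To make the EBS comparison concrete, here is a minimal sketch of how a physical Linux host could attach an ABS volume once it has been created and whitelisted on the cluster side. It simply wraps standard open-iscsi commands in Python; the data services IP and target IQN below are placeholder assumptions, not values from the announcement.

```python
import subprocess

# Placeholder: replace with your cluster's external data services IP address.
DATA_SERVICES_IP = "10.0.0.50"

def discover_targets(portal: str) -> str:
    """Ask the iSCSI portal which targets it exposes (open-iscsi sendtargets discovery)."""
    return subprocess.run(
        ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal],
        check=True, capture_output=True, text=True,
    ).stdout

def login_target(portal: str, target_iqn: str) -> None:
    """Log in to a discovered target so the volume appears as a local block device."""
    subprocess.run(
        ["iscsiadm", "-m", "node", "-T", target_iqn, "-p", portal, "--login"],
        check=True,
    )

if __name__ == "__main__":
    print(discover_targets(DATA_SERVICES_IP))
    # Then log in to the IQN of the volume group whitelisted for this host, e.g.:
    # login_target(DATA_SERVICES_IP, "iqn.2010-06.com.example:my-volume-group")  # hypothetical IQN
```

From the host's point of view the volume then behaves like any other iSCSI LUN, which is exactly why the EBS analogy works.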

Source: Nutanix.com – Acropolis Block Services

But this alone is not reason enough to replace a SAN array. Many SAN arrays are also NAS devices, providing file services such as NFS and CIFS/SMB. What does Nutanix have to say about this? Nutanix already announced Acropolis File Services (AFS) in March 2016.

Source: Nutanix.com – Acropolis File Services

With both features, the new Acropolis Block Services and the recent Acropolis File Services, partners are now in a position to discuss with customers whether the replacement of their SAN array should be another array, or whether they can instead extend their current hyper-converged platform by deploying Nutanix storage nodes and using both features, ABS + AFS.

In my opinion, Nutanix still has one more step to take to close the storage circle. I miss the capability to provide object storage; it’s funny, because the Nutanix Distributed File System (NDFS) is built on object storage, yet they don’t expose this as a feature. Developers could use the Nutanix platform the same way they use Amazon S3. That said, I don’t see many customers consuming object storage on premises yet.
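As an illustration of what an S3-compatible service on the platform could look like for developers, here is a minimal sketch using boto3 pointed at an on-premises endpoint. The endpoint URL, credentials, and bucket name are placeholders of my own, since Nutanix has not announced such a service.

```python
import boto3

# Hypothetical on-premises, S3-compatible endpoint; the URL, credentials,
# and bucket below are placeholders, not an announced Nutanix service.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.internal",
    aws_access_key_id="EXAMPLE_KEY",
    aws_secret_access_key="EXAMPLE_SECRET",
)

s3.create_bucket(Bucket="backups")
s3.put_object(Bucket="backups", Key="db/dump-2016-06-30.sql.gz", Body=b"...")

# The same application code would run unchanged against Amazon S3,
# which is the whole point of offering an S3-compatible API on premises.
for obj in s3.list_objects_v2(Bucket="backups").get("Contents", []):
    print(obj["Key"], obj["Size"])
```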

All Flash on All Platforms

As I mentioned above, the price of flash storage is coming down, and this is an opportunity to offer the technology across all platforms (we’re running all-flash home labs, why not customers?). Currently the only all-flash appliance is the NX-9000, but new all-flash configurations for all platforms will be available this month.

I wonder whether the all-flash option will also be available for the Nutanix Xpress platform.

Source: Nutanix.com – All-Flash Everywhere

Nutanix Self-Service

Many customers are looking to build their own private cloud using Cloud Management Platform (CMP) software, but for most of them, being able to provision virtual machines in an easy manner (IaaS) is foundation enough. If a customer uses the CMP just for virtual machine provisioning, they are wasting their investment, since the licensing model is usually CPU-based and the entire platform must be licensed.

Nutanix Self-Service will be a great feature and will help customers reduce their TCO, just as they are already doing with the adoption of the Acropolis Hypervisor (AHV).

Source: Nutanix.com – Nutanix Self Service

Operational Tools

Operations teams love Nutanix for its simplicity. In my opinion, it’s the Veeam or Rubrik of hyper-convergence. Nutanix is pushing its “Invisible Infrastructure” approach hard, and I must say they’re doing a great job. The “One Click Everything” functionalities are brilliant, making life easy for operators.

I’m stunned by how powerful and friendly the analytics module is. It’s pretty fast at returning results in a readable format. At the same time, you can trigger operations directly from your search, which means you can remediate undesirable situations in a quick and easy manner. Nutanix makes extensive use of machine learning to predict and anticipate operations.

The following functionalities around management and operations were announced:

  • The already mentioned Self-Service.
  • Capacity planning through scenario-based modeling (a toy sketch of the idea follows below).
  • Network visualization.
Source: Nutanix.com – Nutanix Network Visualization
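To give a feel for what scenario-based capacity modeling means, here is a toy sketch: it projects storage consumption linearly and estimates the capacity runway with and without a hypothetical new workload. This is my own simplification of the concept, not the model Nutanix uses, and all the numbers are placeholders.

```python
# Toy illustration of scenario-based capacity planning: project storage use
# linearly and see how adding a hypothetical workload shortens the runway.
CLUSTER_CAPACITY_TIB = 80.0      # usable capacity (placeholder)
CURRENT_USAGE_TIB = 52.0         # current consumption (placeholder)
GROWTH_TIB_PER_DAY = 0.15        # observed daily growth (placeholder)

def runway_days(extra_usage_tib: float = 0.0, extra_growth_tib_per_day: float = 0.0) -> float:
    """Days until the cluster fills up under a given what-if scenario."""
    free = CLUSTER_CAPACITY_TIB - (CURRENT_USAGE_TIB + extra_usage_tib)
    growth = GROWTH_TIB_PER_DAY + extra_growth_tib_per_day
    return float("inf") if growth <= 0 else max(free, 0.0) / growth

print(f"Baseline runway: {runway_days():.0f} days")
# Scenario: onboard a new workload that lands 10 TiB up front and grows 0.05 TiB/day.
print(f"With new workload: {runway_days(10.0, 0.05):.0f} days")
```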

Acropolis Container Services

What differentiates Nutanix’s container offering from its competitors is the support for stateful applications. The Acropolis Distributed Storage Fabric provides persistent storage for containers through the Docker volume extension. The way Nutanix manages containers as virtual machines is not new; VMware already showed the same functionality almost a year ago.
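To illustrate what consuming persistent storage through a Docker volume plugin looks like from the application side, here is a minimal sketch using the Docker SDK for Python; the driver name and its options are placeholder assumptions, since the announcement does not name the plugin.

```python
import docker

client = docker.from_env()

# Create a named volume backed by an external volume plugin. The driver name
# and options are placeholders; the announcement does not name the plugin.
volume = client.volumes.create(
    name="pgdata",
    driver="nutanix",                # assumed plugin name
    driver_opts={"sizeGiB": "100"},  # assumed option
)

# Run a stateful container whose data lands on the external volume, so the
# data survives the container being removed or recreated on another host.
container = client.containers.run(
    "postgres:9.5",
    detach=True,
    environment={"POSTGRES_PASSWORD": "example"},
    volumes={"pgdata": {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
)
print(container.short_id)
```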

Source: Nutanix.com – Acropolis Container Services

Conclusion of Nutanix .NEXT 2016 Conference

Exciting times lie ahead for Nutanix customers, with all the new functionality arriving and more on the roadmap. Nutanix still has plenty of room for improvement, and if they keep going the way they are now, I’m sure they will be in the market for a long time, providing solutions for customers who don’t want to move all their workloads to the public cloud.
