
Nutanix vs vSAN?!

Updated: May 6


A comparison I keep running into in the field is the many customers who try to draw parallels between Nutanix and VMware's vSAN.


First off, it's not an oranges-to-oranges comparison.


Nutanix is nothing like vSAN.


vSAN is a core component of some HCI solutions out there, but it is only a small component: specifically, just the storage component.


It's more like comparing an orchard growing everything (Nutanix) to a single tree of specially engineered pomegranates (vSAN), grown for specific tastes. vSAN is very niche.


vSAN is not a limitlessly scalable, cluster-based storage architecture, and it is not a total HCI solution like Nutanix. In fact, the storage in vSAN is very restrictive and limited, and it caters to only a small segment of the data center use cases out there.


Not only that, but the virtual machines that make up vSAN itself cannot be used for server virtualization; they are dedicated to serving up storage, forming part of the storage components of a vSAN that exists purely to present storage to the nodes running your actual virtual machines.


It is also a specific type of storage for virtual machines, namely object storage.


You will find that when VxRAIL has serious work to do, Dell-EMC gets rid of the vSAN components molto pronto.


What VxRAIL does a terrible job of is in fact: 1. very large databases, 2. large-scale file deployments, 3. high-performance compute doing massive calculations, and 4. true archiving tasks.


So to compare Nutanix to the VMware stack, you would have to have a silo of vSAN serving storage, plus ESXi hosts serving virtual machines, plus a vCenter for each cluster.


Security via NSX would then get woven into what becomes a very complicated VMware tapestry.


Once you add it all up, not only is it 3x more expensive, but it also requires many silos of specific skills to operate it all.


There is no single pane of glass to manage it all either. Sure, it's similar, but at the same time separate.


They are copying our simplicity approach though, so at least they have learnt from us.


However, we are not static, so while they make VxRAIL look like our GUI, we are streaking ahead, pioneering all sorts of firsts as we go.

Nutanix HCI replaces silos and is very simple.


vSAN replaces like for like with equivalents and is more complex than even the legacy vanilla infrastructure it is supposed to replace for your virtual machines. You are also seriously constrained in how each pool of VMware infrastructure scales alongside your other silos.


And vSAN just doesn't do that (scale).


Actually, VMware just does not scale either, which is why we built AHV in the first place.


vSAN is just a build-it-yourself storage array, serving object storage for virtual machines.


This is all it does. It's one-dimensional, not very flexible and very constrained.


Nutanix, on the other hand, is a webscale architecture that collapses many silos into one and is very flexible indeed.

In my opinion, vSAN is just as complicated to operate as legacy 3-tier infrastructure storage systems and equally difficult to maintain, with high OPEX $$ realities baked into the equation.


Note that I am a VMAX and PowerMax certified guy, with a decade of Hitachi VSP G experience before that, working with large enterprise customers such as City Bank, Kaiser Permanente and many other global banks that run VMAX for the replication capability of that platform.


In fact, I helped deploy over 200 VMAX and PowerMax platforms in Fibre Channel environments, so I think I can talk with a great deal of credibility when comparing legacy 3-tier storage, server and virtualization solutions to CI platforms like VCE Corp's VBLOCKs and HCI solutions like Nutanix or Dell-EMC VxRAIL.


Building a vSAN is also a very complex undertaking for a customer, even if you build one from the recommended hardware reference architectures that are available out of the box.


vSAN maintenance mode, by the way, is what we call a complete disaster; we have observed hosts taking over 10 hours just to enter maintenance mode with this beast.


If you have been using Nutanix, this Dell-EMC failing alone will rapidly bring you back into the Nutanix fold. I have over 20 examples of EMC marketing not meeting the rubber-meets-the-road reality in this situation alone.


Another area where some are holding their breath on VMware promises, and turning very blue waiting, is the vSAN Data Protection promised in 2017. We are still waiting... what they call data protection just isn't.


File services they botched, and object storage is allegedly on the roadmap, but I will not be holding my breath on either of those.


They still have a lot of work to do on the vSAN distributed file system and the unified-namespace-across-a-cluster shenanigans that allow for centrally managed functionality a la Prism.


With VxRAIL you just need a ton of front-end servers to make this happen, and that = more $$.


Oh, and all these features they keep building in make for a very complex mesh of software, hardware and various subscriptions to turn on. They are still 3 years behind the simplicity and ease-of-use bar that we set.


VMware alleges that in-kernel is superior to our CVM, but let's look at the latency of round-robin read requests for that in-kernel approach for a second, shall we?


Fact 1: with no data locality, roughly 50% (FTT=1) or 66% (FTT=2) of read requests have to go over the network, where the kernel then waits for the ack and for the data to come back. They must be high, or they failed the math class on latency calculations here. Fact 2: without data locality, in-kernel has ZERO benefit (and they have zero data locality capability).
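To put rough numbers on that argument, here is a minimal sketch in Python. The latency figures are assumptions I picked purely for illustration (a fast local read versus a read that has to cross the network), not lab measurements, but the shape of the math holds for whatever numbers you plug in:

```python
# Minimal sketch of the data locality argument. The latency figures are
# assumptions for illustration only, not measurements: 'local_us' stands in
# for a read served from the local node, 'remote_us' for one that has to
# cross the network and wait for the data to come back.

def avg_read_latency_us(remote_fraction, local_us=50.0, remote_us=250.0):
    """Expected read latency when remote_fraction of reads cross the network."""
    return (1.0 - remote_fraction) * local_us + remote_fraction * remote_us

# With no data locality, roughly half of all reads land on a remote copy at
# FTT=1 (two copies) and roughly two thirds at FTT=2 (three copies).
print(avg_read_latency_us(0.50))   # no locality, FTT=1
print(avg_read_latency_us(0.66))   # no locality, FTT=2
print(avg_read_latency_us(0.0))    # data locality: reads stay on the local node
```

Whatever real figures you substitute, the remote fraction dominates the average, which is the whole point of data locality.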


In terms of management views, VMware claims with Nutanix you have to learn a new management tool.


Well we have it all in one HTML 5 interface. One GUI, one view.


What do they have? Oh, 8 different management consoles??!! Yeah that's much easier to manage!! Silly me!!


Now, when I first evaluated Nutanix back in 2014, bear in mind that I had at that point been a VMAX guru for some 10 years, had sold a great many Hitachi and EMC VMAX/PMAX platforms with full architectures to go with them (I was an Enterprise Storage Solution Architect in the channel), and was fully into all-flash storage with Pure, XtremIO, Nimble and many others pushing the boundaries of all-flash platforms.


When I first looked at HCI, I must confess I just did not get it at all.


My Novell/Windows NT experience and the behavior of the EMC AEs on the XOOM account conspired to open my eyes very wide on this one, helped along by a POC at XOOM and another at a large Mario-esque gaming company in Seattle to back it up.


Being a former HP Labs design guy with x86 blades and HP 9000 series platforms, and having deployed all the exotic HPC iron known to man running Solaris, AIX, HP-UX, Cray UNIX and many variants of Linux to boot, I was certainly classed as a big-iron HPC bigot with a strong leaning toward FC storage goodies to complement all that big-iron compute.


It was only when EMC refused to register an XtremIO deal for me and my sales guy at one of my banking customers, and we heard direct from the customer that our deal registration had been spurned by EMC without so much as a discussion with us, that I threw a walking-out-the-door Nutanix Hail Mary. That Hail Mary led to me becoming a recognized HCI specialist and a big Nutanix fanboi while I was at it, and I converted that account and several others to Nutanix, just to add some flavor to the curry so to speak.


Working in the channel, I had the opportunity to see all of the HCI solutions in action first hand. I was involved in the various pre-sales POC campaigns with all of them, got to evaluate this new, emerging HCI technology for myself, and had the opportunity to align various customers with the various HCI solutions to boot.


I got so good at it that ePlus made me an Emerging Technology Specialist, and I had an entire lab in Milpitas at my disposal to put stuff through its paces.


I was also an Azure and Azure Stack SME (Dell and HPe) and one of the few ePlus architects (the sole one, actually) who actually got that stack running in the lab.


I had a lot of high-level enterprise accounts that came and played in our lab, and the lessons learnt from that got baked into the "value" part of the VAR proposition pie as well.


It was only when I ran the Nutanix POC for XOOM, and later for Aryzta bakeries in Hayward, that I started to see the CFO view and the value of HCI from an OPEX-savings POV for myself, and boy, it sure is a major game changer in that department.

Not only that, but as I deployed my first few live systems solo and watched Nutanix swiftly become the IT standard at all of them, it became apparent that the try-and-buy POC angle of attack was a winning strategy for the doubters out there.


Few customers who run a serious Nutanix POC will step back to anything else after looking AOS in the eye and experiencing this platform for themselves first hand.


It is that addictive. I always quip, "It's so slick, it's totally sick," and never was a truer phrase uttered!


The rapid time to deploy Nutanix clusters also has no equal. It's so rapid that one or two HyperFlex and VxRAIL POC guys asked me if I had failed and given up already.


The look on their faces when I informed them that I was actually done within 2 hours and had just come back to make the cables nice was classic!!


The OPEX guys are equally disbelieving when they do BIOS, software and firmware upgrades. Nothing equals Nutanix in that department either.


As an IT administrator it takes your life from impossible to impossibly easy.


Ask 16,000 ecstatically happy customers!!


Around the same time I was converting big-name enterprises (2015/16), Michael Dell and a few Dell storage guys went to evaluate Nutanix with a view to a special edition of Dell hardware for Nutanix as a Dell offering, and the Dell XC platform was born.


Michael Dell became a hard core Nutanix FanBoi himself as several people at EMC who were campaigning to kill XC found out to their detriment.


A short while after that, Tucci sold EMC to Michael Dell, with VMware and VCE Corp's VxRAIL included in the package, and suddenly the VxRAIL and XC guys were frenemies within Dell-EMC.


The Dell guys enthused pretty rabidly about Nutanix and sent several EMC guys to look at it (Sakac and co) and those guys took serious notes.


The next thing you know, VxRAIL, which up until that point sat in a distant third spot on the HCI ladder and was IMHO a very poor HCI example, obviously took the lessons learnt from these Nutanix exchanges on board, and when version 4.0 of VxRAIL came out, the impact of the Nutanix collusion was obvious.


VxRAIL was totally transformed.


I was shocked how they had gone from piss poor to pretty solid in such a short time actually.


Recall that SimpliVity was second to Nutanix once upon a time, that's how bad VxRAIL 3.x and prior actually was.


Several EMC folks were pressing for XC to get whacked in favor of a sole focus on VxRAIL, but that approach leaves less on the table, not more.


Michael Dell resisted this approach initially as more makes better business sense, not less.


Within 9 months VxRAIL was in a firm second place in the Gartner HCI ladder and was now a viable HCI alternative platform to Nutanix.


However, Nutanix is still a long way ahead of VxRAIL on hundreds of fronts, due (amongst many other things) to the fact that Nutanix is now hardware agnostic.


As in any Hardware, any Hypervisor and any Cloud style agnostic.


SimpliVity and VxRAIL cannot come near to Nutanix in this regard. They are constrained to HPe or Dell hardware and VMware solution sets and they just do not scale.


When you see banking organizations like Nedcor and Credit Suisse adopting Nutanix, you sit up and start taking serious notice, because these are also big EMC signature accounts.


These customers are the kings of OPEX cost control in the data center.


These are organizations who do not deploy anything unless the operations and cost savings factors are a proven fact of life.


Because IT skills are very hard to come by in some countries, the fact that you can do everything with just this platform plus networking is a game changer.


Contending with just an HCI silo and a networking silo is a very big reduction, and a big factor for these companies; plus, the security with Nutanix is au naturel, not bolt-on Frankenstein weaving and scripting a la VxRAIL.


In fact, you can use all the hypervisors and all the public cloud offerings concurrently; you just have to have each cluster within the domain run one of the hypervisors, and you can use all three private clouds concurrently if you like.


Not only that, but Nutanix also built their own hypervisor, called AHV, and their own public cloud offering, called Xi LEAP.


At first, being a 15 year VMware VCP and VCDX veteran and general VMware bigot, I was very skeptical of AHV.


Nowadays I prefer AHV as the hypervisor every single time, because it's just easier and simpler, and those are my new mantras.


In my humble opinion it has now surpassed ESXi, and AHV scales to thousands of nodes in a cluster, not the paltry 64 nodes that VMware and Hyper-V are constrained to.


To combat VMware NSX they also developed their own Flow platform, which takes security to a new level, because unlike VxRAIL, Nutanix AOS was designed for US military security environments from the get-go and has FIPS 140-2, Sweatpea and other STIG-compliant security built straight in from inception.


It was not bolted on after the fact.


There is nothing out there with the one click simplicity of Nutanix that offers the security levels we do.

It was only when I was halfway through my very first POC at XOOM that I realized I was 98% ahead of the time I had allotted for the setup and the actual POC part of the project. The hands-on experience of operating it myself also brought the realization that Nutanix is a major game changer and major technology disruptor, and like Coca-Cola, it is actually THE REAL THING.


In my career to date I have seen many game changer technologies come and go.


Novell NetWare was the first wave I was involved in, shortly followed by the networking wave that saw me on the CCIE path and specialization track, as NetWare needed Ethernet switching for its full value to be realized.


IMHO Nutanix is bigger than NetWare and Windows NT combined, networking and software-defined aspects included, because it is the anything-as-a-service platform for on-premises private cloud and a foundational stepping stone to full public cloud, with easier-than-cloud simplicity of operation coming along with it.


You need to eat some Nutanix pie to believe it my doubting friends!!


Once I was a well-oiled POC guy I would be in and out in 2 days, and most were done within 4 hours. I usually had to go back on day two to make my cables pretty, do labels and similar trivial cleanup.


Then there are the automation and orchestration aspects to bake into the ever-better-tasting Nutanix pie, and the DevOps advantages you derive from it to contemplate as well.


Before long you start feeling like a data center crack dealer, because win-win shines all around with this stuff, and you start feeling really sorry for the guys doing the Hyper-V, SimpliVity, vSAN and VxRAIL POCs.


And we have not even started talking about Nutanix Karbon and Kubernetes containerization with this platform yet!!


So anyway, back to the many customers who seem to relate Nutanix to vSAN.


vSAN only offers 5-7% of what Nutanix offers and delivers severe latency headaches and very complex stacks which = even more latency and even more headaches.


This makes vSAN a niche solution architecture component that requires the rest of the VMware stack, plus some serious VMware expertise and skills, to run in your average data center environment.


In fact vSAN is just ANOTHER silo of VMware complexity and that = more $$$.


By the way, I have built over 220 different vSAN systems using Dell, HPe, Cisco UCS, Fujitsu, SuperMicro, Inspur and Huawei server platforms, ranging from 2 nodes to many nodes in size.


I have also deployed VxRAIL and SimpliVity solutions to over 80 other customers, and I know what works, what does not, and the bottom line in the OPEX stakes when comparing them all.


I doubt there are many resources running around globally who can boast my campaign ribbons, with the same level of trench-battle experience across the entire HCI and CI spectrum.


Chad Sakac and maybe a half dozen others are out there, and most of us work for Nutanix, Dell-EMC or VMware. (Chad was at Pivotal, which just got swallowed by VMware.)


Anyways, let us dive deeper into what vSAN is or is not.


So the first thing you notice from online documents on the subject of vSAN is that everyone says you need a minimum of three server nodes for your vSAN.


This is plain wrong. If you want HA sure, more is better on that front.


You can build a vSAN from as little as 2 server nodes. No problem at all.


Many ROBO sites have 2 node vSAN setups and I know a lot of Taiwanese companies that do this for their core data center as well.


[Image: 2-node vSAN ROBO configuration]

This setup has also been popular in the DevOps arena of late, by the way. You do need to ensure that all disks that will be used by vSAN are configured in pass-through mode and have caching disabled on the storage controller.


vSAN is completely dependent on proper network configuration and functionality, so it is important to take extra care during the setup of the DVS and port groups that the vSAN cluster will be using.


On the hardware side, there are minimum requirements for the server nodes as well.


You will need a RAID controller card in each server node capable of RAID 0 or pass-through mode operation.


This is my first problem with vSAN.


RAID is yesterday's hero. It was designed for spinning disk in the mainframe era and served the open compute world with dedicated storage arrays for a long time.


RAID is a total disaster for FLASH storage systems.


Why offload RAID 0 pass-through to a hardware card? Surely in-kernel is better? I fell off the logic bus, obviously???



RAID robs flash of 70% of its performance and slows it down a further 4x, and RAID wears flash SSD components out 3x faster than a data protection schema designed for flash would, such as Nutanix RF2/RF3 or the EMC Isilon data protection schema.


Most operators of big iron like VMAX/PMAX or Hitachi VSP G series will struggle with this aspect.


For some reason EMC and NetApp believe customers are not happy to move away from RAID.


That's just stupid thinking.


What that really means is that Dell-EMC, Pure Storage et al. just want you all to carry on buying those big, expensive storage systems, because that's where they make the most money.


However, it is a fact that RAID was designed for big slow spinning disk and is possibly the very worst thing you can do to SSD flash.


Look at the pathetic performance Pure Storage arrays attain using their various RAID 0 and RAID 60 shenanigans... They have the worst performance among all-flash arrays; they are fortunate to have an awesome GUI and OS that hide the poor performance pretty effectively.


PURITY is the business in AFA OS platforms.


However, if I were in the AFA market, I would be buying NetApp A800 platforms, as they're much faster than Pure platforms and they don't start data-loss antics when they get over 80% full, because NetApp at least lets SSD garbage collection do its thang.


It puzzles me greatly that the Dell-EMC folks have not replaced RAID on PMAX and XtremIO yet, or that Pure Storage themselves use all sorts of exotic RAID 60 and RAID 0 schemas instead of a solid data protection schema designed for flash.


Plus, the FC stack adds latency.



Compared with the locality-of-reference latency Nutanix offers, which is in the 5µs range, arguments about low VMAX/PMAX FC latency have ZERO credibility.


The VxRAIL argument that in-kernel is better than locality of reference is also plain stupid. DRAM vs. network access?? Please!!


To prove it to myself (a big former FC fanboi, btw), I tested this in my Milpitas lab with Pure Storage and HPe Nimble all-flash arrays using Virtual Instruments test gear (Load DynamiX).


The test gear showed clearly what was what here.


When VxRAIL gets to 5µs latency across the network with their kernel, lemme know; I will host a party with free booze and chicken wings for everyone in attendance...


Back to the vSAN list of ingredients and licenses...


You will also need vCenter server software and licenses as well as ESXi software and licenses for a literal pile of stuff.


Each server node needs hard drives and/or SSDs, depending on what sort of storage system you are putting together (flash, spinning-disk archive storage, etc.).


You need Network cards (Fibre channel or 10GbE) and some 1 GbE ports per server node for vSAN management use.


These servers need dual processors with boatloads of RAM in them.


Now let me be clear: to build the minimum sane and rational system, with HA and data resiliency built in to cater for potential node failures, you do in fact need 3 server nodes, and best practice is 4.


Ideally you will also have a separate server to run vCenter Server software on, though you could virtualize that component if you like.


By the way, you do not strictly need vCenter Server running for ESXi HA to keep working once it is configured; vSAN uses vCenter for vMotion and other DRS services if they are in use.


However, let us take a look at what a vSAN itself is for a second.


vSAN is an object-based storage system for folks who are strong at VMware virtualization and want to build their own storage system on that software platform, using suitable servers of their choice, for VMware guest OS use.


Why would they do that?


Well, there could be many reasons, but the general idea is to be able to build your own easy-to-run system with a GUI so simple that even young teenagers could operate it, with the storage dedicated to virtual machines and nothing else.


Getting into the technical asides of the matter, many feel that the generic storage hardware on offer from the likes of EMC, Hitachi, NetApp, Pure Storage et al. is not quite to their liking, and mucho expensive to acquire and operate to boot.


With vSAN, the theory is that you can build custom server hardware with the components you desire, at the right mix of performance and cost, to construct your storage system. It's supposedly more flexible.



You can also build different types of vSAN: storage for archive, all-flash storage for high performance, and so on. You have Lego-like flexibility to build your storage with spinning disk, flash, or combinations thereof.


Ultimately, most folks I know who have done this went there for control over the storage components, to build a custom storage platform that fits their exact virtual machine storage requirements, Lego style.


Now is this cheaper than just buying an off the shelf storage array? It sure as hell is not!!


There are a lot of things advocates of this approach need to balance on the hardware side to come up with effective cost controls, and those controls never materialize because the people doing this do not add the cost of their own time into the overall equation.


However, if you do have VMware gurus in your employ who are idle, it could be a great use of their time for the organization; just realize that you are committing to some deep VMware skills, which are not exactly free, to keep it all humming right along.


In fact, you are building yet another silo of unique specialization, along with the team that will be required to run it.


Oh, and another area where people always get it wrong is on the test and measurement side of the equation.


I have a blog posting about that on this site as well that you can dive into should you need further help achieving a deep sleep state.


Now, some folks seem to think that Linux tools such as MPIO and FIO are all you need to determine the performance and related metrics of vSAN and other storage platforms.



They would be dead wrong.


VMware can be blind to SSD metrics and data if there is SSD in the equation. In fact, you are playing Russian roulette in terms of what you will see and get from VMware and SSD flash storage.


Aside from that, most published performance data on any storage system masks the fact that the parameters and test settings the storage vendor used to obtain the unbelievable performance stats they brag about are completely, 100% unobtainable in a real-world data center setting.


As in, there are no known applications that would work with the settings they use to get these allegedly fantastic results.


I once saw a storage system that bragged about 7 million IOPS of throughput, but when I went to look at the settings, I found they were set purely for performance benchmarking. Running a real-world SQL, Oracle or other business-critical application, with settings tuned for real-world operation as close to the high-performance end as possible, was a different kettle of fish altogether.


With real-world application settings, this same 7 million IOPS platform struggled to sustain 137,000 write IOPS and a fairly slow 250,000 read IOPS.


If I had bought one based on the marketing claims of 7 million IOPS I would have been classified as what they colorfully term "most irate".


The other issue is that these old Linux tools were almost all built for spinning disk and RAID systems (Yesterday's hero).


With VMware and some of these tools you are possibly completely blind to the stuff going on at that component level for Flash storage (SSD).


In fact, RAID on systems that use all-flash SSDs is a huge problem when it comes to performance metrics and their associated statistics.


RAID itself robs an all-flash array of 70-80% of its performance right out of the gate. It also makes the SSDs wear out very rapidly, because RAID makes many extra writes before the final write lands.
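To make the write amplification point concrete, here is a back-of-the-envelope sketch in Python, using the textbook small-write penalty for parity RAID versus a straight replication factor. The figures are illustrative assumptions, not measurements from any particular array:

```python
# Back-of-the-envelope device I/O generated by ONE small logical write.
# Figures follow the textbook small-write penalty for parity RAID versus a
# straight replication factor; they are illustrative assumptions, not
# measurements from any particular platform.

io_per_logical_write = {
    "RAID-5": {"reads": 2, "writes": 2},   # read old data + old parity, write new data + new parity
    "RAID-6": {"reads": 3, "writes": 3},   # two parity blocks to read and rewrite
    "RF2":    {"reads": 0, "writes": 2},   # two full copies, no read-modify-write cycle
    "RF3":    {"reads": 0, "writes": 3},   # three full copies
}

for scheme, io in io_per_logical_write.items():
    total = io["reads"] + io["writes"]
    print(f"{scheme}: {io['writes']} device writes, {total} device I/Os per logical write")
```

The extra reads in the parity schemes are pure added latency, and the extra parity writes are what chew through SSD endurance.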


It's also not an orderly, logical write pattern; the data ends up on the media shotgun style.


SSD components have a set amount of writes they can perform before they wear out and become useless.


The very fact that every node in a vSAN setup needs a supported RAID controller card is the very reason why performance on vSAN systems will always live in the category of "deep suck".


VMware should offer a different data protection schema, similar to what SolidFire, Isilon, DataCore or AccelStor's FlexiRemap have: data protection schemas designed for flash memory operations.


RAID is yesterday's hero when it comes to SSD or flash-memory-based storage systems; that's just a fact.


[Image: C-3PO contemplating RAID for FLASH and mulling the answer to the ultimate question]


Another snag for vSAN is around NVMe SSD and the limitations PCIe 3.0 brings to the table.


Three NVMe SSD's pegged at 100% will bring your average PCIe 3.0 based server to its knees.


There are no NVMe PCIe 4.0 SSD RAID controllers at this time.


Sure, plenty of people are developing them but it makes little sense to do so in my humble opinion.


If RAID is not a good engineering idea for FLASH why bother? Build something better, many such schemas exist already!!


Why buy a Bugatti Veyron and proceed to put 17th century wagon wheels on it???


Truly a crazy concept if ever I heard one.


The ideal server platform for vSAN would be three dual-processor AMD EPYC Rome nodes, each armed with PCIe 4.0 NVMe SSDs and a minimum of 1 TB RAM per server node, running software that did not require NVMe SSD RAID controllers (or at least had PCIe 4.0 RAID controllers with RAID 0 pass-through capability), plus PCIe 4.0 1/10/25/40GbE Ethernet NICs and even PCIe 4.0 FC (16/32Gb) cards.


You also need suitable 100GbE or FC switches.


But guess what? There are no financially viable NVMe PCIe 4.0 RAID controller cards available at this time, and I have yet to see the PCIe 4.0 NICs either.


All this will require VMware software updates and the development of new software features to cater for it fully.


You have to go AMD EPYC for PCIe 4.0 SSDs, because Intel has no PCIe 4.0-capable CPU at this time, and their own Optane lab is using AMD Threadripper and EPYC CPUs to test their own PCIe 4.0 Optane stuff!!


I kid you not!!


They won't have any PCIe 4.0-capable CPU fare until 2021 either, by the way.


In all, considering the antics involved here to make this vSAN stuff fly, you have to have very specialized people running around doing all of this.


I personally would really not bother.


Buying Nutanix or an already configured VxRAIL system is much easier and far simpler to cater for.


You could buy vSAN Ready Nodes, however; that is a far more sane and rational way to go about it if you really want a dose of the vSAN sado-masochistic pie.


VMware has an HCL of vSAN Ready Nodes you can buy from many manufacturers.


So, some things to bear in mind when you decide to go ahead and build your own vSAN:

  • Each object in a vSAN system is a block storage device. The limit is 9,000 objects per server node, so you have a scale issue right from the get-go...

  • vSAN uses a distributed RAID schema that spreads the data out over the member server nodes' storage resources. RAID is yesterday's hero and not latency friendly...

  • FTT settings (failures to tolerate) impact usable storage capacity as well, so your already limited storage gets even smaller with FTT baked in (see the capacity sketch after this list)...

  • Deduplication and compression are only available on all-flash vSAN configs. If you want to use HDDs, bye-bye dedupe and compression...

  • There are six levels of vSAN licensing you need to factor into the equation.

  • Dedupe and compression are NOT available with vSAN Standard licensing; you have to have the more expensive licensing...

  • Data-at-rest encryption is only available with vSAN Enterprise.

  • Server hosts must be dedicated to vSAN use only... Nutanix HCI hosts can be many things in the schema...

  • Configure the vSAN using the Quickstart wizard (good luck with that)...

  • Make sure your server node hardware is on the vSAN HCL!!

  • Be sure to validate that the NIC, firmware and driver versions are compatible with VMware vSAN 6.7.

  • All-flash nodes need 10GbE.

  • IPv4 and IPv6 are supported... beware the jumbo frame nightmare!!

  • Pay attention to vSwitch types, NIC teaming, jumbo frames and Network I/O Control.

  • You can use direct connect instead of switches for vSAN connectivity.

  • You will need a witness appliance or two...

  • In your ROBO site, when building the new cluster, do not enable DRS or HA while creating the vSAN cluster object.

  • For 2-node ROBO configs, a static route to the witness needs to be added to each vSAN host and the witness traffic interface tagged (e.g. esxcli vsan network ip add -i vmk0 -T=witness).
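As promised in the FTT bullet above, here is a quick capacity sketch in Python, assuming a RAID-1 style mirroring policy and a made-up 100 TB raw figure (an illustration, not a sizing recommendation):

```python
# Quick usable-capacity math for a mirrored (RAID-1 style) vSAN policy.
# The 100 TB raw figure is a made-up example, not a sizing recommendation.

raw_tb = 100.0

def usable_after_ftt(raw, ftt):
    """Mirroring keeps ftt + 1 copies of every object."""
    return raw / (ftt + 1)

print(usable_after_ftt(raw_tb, 1))   # FTT=1 -> 50.0 TB usable
print(usable_after_ftt(raw_tb, 2))   # FTT=2 -> ~33.3 TB usable
```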


vSAN licen$e requirements


VMware requires a license to be installed for the vSAN enabled cluster.


All in all, you can start to get the idea that this vSAN and VxRAIL stuff is pretty complex to run. Every addition of VMware technology to plug the various holes is another new stack to bolt on, Frankenstein style.


vSAN, Forrest Gump would say, is like a box of chocolates, you just don't know what you're getting until you got it!!


VMware claims vSAN has 10% CPU/RAM overhead. There are public examples of vSAN consuming over 40% system CPU. Also, this overhead is just for the lightweight kernel module.


If you include the overhead of the other components outside the kernel (vCenter, vRealize Log Insight, VxRAIL Manager, RPVM, VDP, ESRA, vRO, vRA, etc.), you get much closer to an apples-to-apples comparison, and the resources Nutanix requires are quite low by comparison.


vSAN also needs around 30% free storage to accommodate rebalancing events, and there can be a serious impact on the environment if there is a node failure and that space requirement isn't met.
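Stacking that slack-space guideline on top of the FTT overhead from the earlier sketch, the arithmetic looks roughly like this (same hypothetical 100 TB raw figure, illustrative only):

```python
# Effective capacity once FTT=1 mirroring AND the ~30% slack-space guideline
# are both taken into account. Same hypothetical 100 TB raw figure as before.

raw_tb = 100.0
after_ftt1 = raw_tb / 2.0          # two copies of everything
effective_tb = after_ftt1 * 0.70   # keep ~30% free for rebuilds/rebalancing
print(effective_tb)                # ~35 TB of data on 100 TB of raw capacity
```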


One thing is certain though, it ain't one click operation friendly like Nutanix is.


More Simple please!!!








chaanbeard.com, IT Tech-Talk Blog focusing on AMD and Nutanix with Cloudy things
