By Fred

Nutanix CE 2.0 Science Project S&M

Updated: May 11

Many college students and Computer Science gurus have been fretting of late about what to replace their free three-server vSphere-based home lab rigs with, Broadcom's total insanity antics having forced the issue somewhat.

I have delved into this myself recently, as I have had to return my NX-3460-G5 all-flash demo loaner for many reasons, one of which was the noise and weight of the thing, added to the impracticality of having such a large beast at home.

She who must be obeyed was not approving much either and gave me a great deal of angst on the subject, I can tell ya!

I would not ordinarily have considered Nutanix CE as a solution here, based on the last rendition of CE, as that effort, IMHO, was best described as a sad Science Experiment of the crudest kind.

Nutanix should actually take a leaf out of Sun Microsystems' SOP book and put more dev into this CE platform, as it has the potential to farm and seed a strong Nutanix fan base for future generations, which would reward them handsomely for placing more effort into this Community Edition thang.

As I had to do something for my own purposes, I was kind of forced to dive into this myself for a viability look-see, and this blog should serve as a very helpful guide for anybody attempting to go down the Nutanix CE 2.0 road.

One of my colleagues invested his time and money in an elaborate Nutanix home lab based on CE 1.0, and he recently upgraded the whole shebang to CE 2.0. I can share the findings and observations both he and I made on our respective journeys, highlight the myriad options this platform offers with some of my own hands-on feedback, and offer corrections to the deployment guide Nutanix provides for CE 2.0, dated April 1, 2024.

My first impression was that the April 1 date on the front of the document was rather apt, but then I hauled out the magnifying glass and delved deeper into the details, to unpeel the banana somewhat and test why I felt that way.

I changed my mind on CE. CE 2.0 still has a strong half-assed quality about it, but it is now actually viable if you are aware of the caveats and stick to the design rules I am about to suggest, whereas CE 1.0 was only for the deeply insane or the sadomasochists amongst us in need of a daily subtle whipping care of shit that seldom works.

Believe it or not, some folks actually get off on such endeavors from a thrill point of view... I am not one of them; I prefer playing golf for such pleasures.

As the options for most folks looking for a virtualization host are not exactly abundant, a more forced approach was taken after I examined my own personal choice list.

For me, it had to be a Nutanix-based solution of some kind; one of my options (the returned loaner) was gone, and the other Nutanix options here just do not work for me.

Most folks probably just need a free virtualization host platform of some sort, and Proxmox really is the only other pragmatically viable option here if Oracle VirtualBox will not get it done.

Obviously, neither of these would work for me, even though I did a thorough breakdown and review of Proxmox last year when one of my customers moved off that platform to Nutanix and I had to look into moving the VMs from Proxmox to AOS.

This is how the initially ruled out (by me) Nutanix CE 2.0 option became something to look at on a more serious basis.

The chief problem here is the hardware involved and the actual written instructions to get it going on AMD or Intel hardware.

I am an AMD EPYC CPU bigot and passionate fan of their processor fare by the way.

My home compute lab actually hosts two AMD Ryzen Threadripper rigs and five AMD Ryzen 9 workstations.

I used two of these Ryzen 9 workstations for Nutanix purposes, one of them as a Windows 11 host for Visio and my various other customer environment analysis tools.

This left me with one box I could use and some parts to build a potential second host if the first one turned out to be stable and viable.

These five Ryzen 9 systems of mine all use the Ryzen 9 5900X CPU: Zen 3, AM4, 12 cores, 7nm.

I did actually start moving to the Zen 4 7900X CPU last year, but AMD messed up the cooling of that generation, so you have to remove the metal shell from the CPU and build your own custom water cooling rig to get it working properly at reasonable temps.

Removing the CPU shell casing, the IHS (integrated heat spreader), from a 7900X/7950X is a very delicate and tricky operation.

I actually built one such rig but went through three CPUs getting there.

This was not a cheap project either.

Even if my German pals who make the EK cooling gear had not toasted two of the CPUs whilst playing this game, the other shit involved in water cooling such a rig is just not practical or pragmatic.

That is just another Science Project that sucks up your dollar.

A science project squared is not my sanity recommendation either... Keep It Simple, Stupid is the mantra to religiously follow when building computer systems.

One of my pals made me an offer I could not refuse for the final water-cooled 7950X product I crafted with my German pals. I went back to the 5900X and air-cooled them all with the Be Quiet! Dark Rock Pro 4 cooler and suitable Be Quiet! fans, at no financial loss and with invaluable experience to show for my efforts.

My two Threadripper rigs are also all air cooled with stock AMD Cooling fans and heat sinks by the way (in case you care).

With regards to Nutanix CE 2.0 ambitions, let's first discuss the options at hand and what it is you want to do with your Nutanix CE 2.0 Home lab environment.

If you just want to use a single Machine that can run a couple of Virtual Machines for home automation purposes or some light Raspberry Pi type VM action, you can in fact happily deploy one or more single node clusti with CE 2.0.

The downside is you cannot expand these single node clusti to three node clusters later and there are storage data protection caveats you may or may not be able to live with that you need to bake into your overall reckoning.

Some of you will want to build a three node cluster just like a normal AOS and AHV starter rig that has three Nutanix NX Nodes, so as to enjoy RF2 HA and you can also do that if you have the resources to do it (lots of Yankee Dollah and oodles of time).

Looking at the more pragmatic and lowest financial outlay options, for home project purposes to fill these needs, you can build two single node clusti on two personal computers and run one Prism Central on one of them.

You can even start with just one computer and use either Prism Element or Prism Central to manage it, though the RAM required by Prism Central dictates 96 or 128GB of RAM for the pleasure.

If you are pressed for cash and just want a great and stable virtualization host sans any frills for your VMs, you can deploy a single node Nutanix CE 2.0 instance with just 64GB of RAM and manage it with Prism Element with no problems at all.

I have one such host doing that right now with Prism Element, and I am running an entire Microsoft AD instance on it with 4 Windows Server VMs and a few Mint Linux desktops, no problem.

Let's discuss the realistic memory requirements you will need for a single node CE 2.0 Clusti, as it is in fact the single most expensive part of this Geek HCI science project based shenanigan.

You have to remember that AOS itself will need between 20 and 32GB of RAM, and Prism Central will also need another 32GB of RAM to operate comfortably and do its thang.

A few VMs should be OK with about 20GB of RAM for the CVM, but any heavy Windows Server instances are going to gobble your meager resources here and push the CVM and Prism Central towards using 32GB of RAM each.

That is where they are most comfortable operating.

However, you still need RAM for your VM's...

So the practical reality is that you will need MORE than 64GB of RAM to run a few small Linux VMs and, say, 4 beefy Windows Server VMs if you want to use Prism Central to replicate and fail over between two single node host clusti.
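To make the arithmetic above concrete, here is a quick back-of-envelope sketch using the overhead figures from this post; the 4GB allowance for AHV itself is my own rough assumption, not a Nutanix number.

```shell
# Rough RAM budget for a single-node CE 2.0 host running Prism Central.
# cvm and pc use the 32GB "comfortable" figures quoted above; ahv is an
# assumed allowance for the hypervisor itself.
cvm=32; pc=32; ahv=4
for total in 64 96 128; do
  echo "${total}GB host -> $((total - cvm - pc - ahv))GB free for guest VMs"
done
```

A 64GB box goes negative once Prism Central is in the mix, which is exactly why 96GB is the sweet spot.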

Considering how modern Intel or AMD motherboards work with their two or four DDR4 or DDR5 RAM slots, the practical reality is your RAM choices for DDR4 systems are therefore either 64GB or 128GB whereas the DDR5 systems can do 64/96/128GB DDR5 RAM.

Most of the Intel motherboard listings for an i5 LGA 1700 setup on the Amazon or Newegg sites state they only support 64GB of RAM, so be sure to go to the manufacturer's web site for the full spec and select one that supports 128GB of RAM.

X570 Zen 3, AM4 fare will support 128GB RAM no problem and X670 has options with DDR5 RAM just like the Intel stuff has with 64/96/128GB selections.

You will need either 2 x 32GB DDR4 RAM or 4 x 32GB DDR4 RAM on AM4 rigs.

24GB and 48GB DDR4 DIMMs do exist, but these are as rare as rocking horse shit and not cheap - I cannot even find such beasts on eBay.

If you can get these Unicorns, 96GB RAM on each Nutanix CE Host is technically perfect for most home lab purposes with two computers serving as single node clusti.

3200/3600 DDR4 RAM is actually quite affordable these days and I would not bother with anything faster if I was you as this raises the costs somewhat.

I have seen that some newer DDR4 RAM will run at 4800 in overclocked mode, but my observation is that this is not viable for AHV or AOS from a stability point of view, and stability is your number 1 cast-in-rock requirement for virtualization hosting purposes.

I just noticed that the motherboard I selected for the Intel build actually has 3 x M2 slots.

This is pretty much ideal and even better I saw that you can get 48GB DDR5 RAM DIMMs for it as well, so 96GB will fit in just two of the available DIMM slots!

On these motherboards, by the way, the fastest RAM operation comes from populating just two DDR sockets, and that applies to Intel and to AMD AM4 or AM5 based gear alike.

I also saw that the better 12th Gen i7 CPU was available with integrated graphics for $249.98, so I chose a 12-core 12th Gen LGA 1700 option for my build and abandoned the i5 Po Boy platform.

I am trying to get closer to the AMD 5900X 12 core CPU with the move to an i7 for a more Apples to Apples comparison - FYI.

So, if like me, you want a basic environment with two separate one-computer "clusti" that use all the Nutanix Prism Central features to replicate virtual machines from one cluster to the other and back, without HA disk protections, you can deploy your CE cluster using RF1. That is the same format a standard Linux or Windows desktop uses with its SSD drive (sans RAID).

Modern SSD are very reliable if you buy from the right companies who manufacture these things on a massive scale.

I have done both the single computer (node) RF1 Clusti and the three computer clusters using RF2 per the standard Nutanix AOS experience.

You will need 5 nodes for RF3 by the way and I do not even know if this is possible with CE 2.0.

There are however, other requirements and caveats to bear in mind here that I will lay out for you in typical DR EPYC blunt as a rock style.

For a single computer cluster instance you will need a 12th Gen Intel i5/i7 10, 12 or 14 core CPU, a motherboard that can accommodate 96/128GB RAM, a 10GbE Intel-based network card and three suitable storage devices.

The latest 14th Gen i5/i7 chippery from Intel are PCIe 5.0 capable by the way but these are also pretty darn expensive and they need the more expensive DDR5 RAM.

In a perfect world your Hypervisor (H) storage device for this purpose would be a 2242/2280 M2 NVMe Gen 4 240/256GB SSD.

However, as nobody is making M2 format NVMe Gen 4 SSDs as small as 256GB anymore, a SATA 3 2.5" option comes to mind here, and these things are pretty cheap.

This device will boot the CentOS Linux that Nutanix CE 2.0 currently uses for AHV & AOS.

Then you will need two further storage devices, one for Data (D) and one for Cache (C) Purposes (CVM controlled).
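Before you kick off the installer it is worth eyeballing that all three devices, (H), (C) and (D), actually show up. A minimal sketch from any Linux live environment; the column selection is my own choice and the device names will differ on your rig:

```shell
# List physical disks only (-d), with size, rotational flag (ROTA),
# transport (sata/nvme) and model, filtering out the loop devices a
# live USB environment typically creates.
lsblk -d -o NAME,SIZE,ROTA,TRAN,MODEL | grep -v '^loop'
```

If one of your SATA SSDs is missing here, fix the cabling or BIOS port settings before blaming Phoenix.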

I found through experimentation I conducted from April 4th 2024 to the present time with a few such AMD and Intel rigs that you have to pay attention to what PCIe devices you have plugged into the PCIe bus.

Desktop computing motherboards do not have the same PCIe lanes that a Threadripper system or a server platform armed with an AMD EPYC CPU carries.

Desktop computers use cheaper non ECC RAM while Threadripper and server systems use ECC enabled RAM which costs a lot more.

These server systems also have the full PCIe lanes available to the CPU, SSD and RAM.

This is what makes the Threadripper an ideal CE 2.0 host, IMHO because it has near the same bandwidth and PCIe lanes as a server.

My first CE 2.0 install attempt, on an ASUS TUF X570 Gaming WiFi motherboard shod with 64GB RAM, two NVMe Gen 4 drives, one WD Blue AHCI 6Gb/s 1TB SSD, a fat Radeon 6700 XT with 12GB GDDR6 RAM and one Chinese Intel 82599EN chipset based NIC, was not exactly stable.

The networking was so bad the CE 2.0 CentOS Linux kernel shut down the LSB completely.

I delved into the Linux logs and found that the PCIe 4.0 GPU and the two NVMe PCIe 4.0 SSDs, along with the PCIe 3.0 x4 NIC, were having PCIe lane contention issues, so I played with a good few combinations and made pertinent observations of each build's stability.

As such, I have to report that using NVMe SSDs for a CE 2.0 rig on a desktop motherboard brings unwelcome problems you do not really want to deal with when a beefy PCIe 4.0 GPU is plugged in.

Therefore do not use heavy PCIe 4.0 based GPUs with Gen 4 PCIe NVMe SSDs in these CE 2.0 rigs unless it is one of the smaller cards like an RX 5600 with 6GB GDDR6 RAM onboard.
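If you suspect lane contention on your own build, comparing what each device is capable of against what it actually negotiated is a quick tell. A rough sketch, run as root on the host; the grep pattern is just my selection of interesting lines:

```shell
# LnkCap shows a device's maximum PCIe speed/width; LnkSta shows what
# it actually negotiated. A downtrained LnkSta (e.g. a 16GT/s card
# stuck at 2.5GT/s, or x16 running at x4) hints at lane contention.
lspci -vv 2>/dev/null | grep -E 'Ethernet controller|VGA|Non-Volatile|LnkCap:|LnkSta:'
```
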

For AMD Ryzen rigs, even a light PCIe 3.0 GPU with a basic 2GB video RAM is fine for your needs here.

Even better is using a CPU chip that has GPU features already built into it.

As I had a bevy of 5900X 12-core Ryzen CPUs at hand, this was not an option for me, but AMD and Intel do have chippery with the GPU basics built in that are perfect for Nutanix CE 2.0 use cases if building a solid Very Poor Boy CE 2.0 home lab is on the cards for you.

For me, as an ex microprocessor fundi myself, I will never use Intel unless the price is really attractive, given I can get a 12-core Ryzen 9 5900X for the same price as a 10-core 12th Gen Intel i5/i7 that comes with a GPU built in.

On the Intel CPU side, you can select the 12th Gen i5 12600K 10-core CPU from $185, and that has UHD graphics built in, so you do not have to bother with PCIe bus lane contention from a GPU.

There are some 12th Gen i7 chips with integrated Intel UHD graphics for $249.98 that would be better for CE 2.0 purposes though.

This is good as the NVMe SSD will need as much of those PCIe lanes bandwidth as possible.

WD sell a 256GB M2 2280 boot disk, model designation SN740, for $25.30 on Amazon, which is perfect for the hypervisor boot purpose if you have 3 or more M2 ports.

For the (C) drive I suggest the 1TB WD SN770 PCIe 4.0 NVMe M2 2280 device for $84.99, if you have a second M2 slot on the motherboard.

For the (D) disk, buy as big as you can afford, but as your motherboard will likely only have two M2 slots, you will need one or two of the bigger 6Gb/s AHCI 2.5" 1/2/4 or 8TB SATA SSD drives for this purpose.

Lower cost with bigger density seems to be the sweet spot for SATA 3 SSD.

If you have a third M2 port that is not for WiFi, get a suitable 4TB 2280 SATA SSD stick and shove it right in.

Some of these motherboards sport 3 or 4 M2 ports.

I like Crucial SATA M2 2280 SSDs for this option, but they are more pricey, as you will see when you try to buy one.

With regards to PCIe 5 NVMe SSD, please stay away from these unless you are building a Threadripper 7000 series platform.

My own Ryzen 9 5900X rig has a 500GB WD Black SN850X NVMe Gen 4 M2 2280 boot disk for the Hypervisor, the same Gen 4 series 1TB WD Black SN850X NVMe M2 2280 SSD for Cache, and a WD Blue AHCI 6Gb/s SATA 3 2.5" 1TB SSD for the Data device, as I have just 2 x M2 slots on all my 5900X rigs' motherboards, which are from ASUS and Gigabyte.

I had to remove the PCIe 4.0 based Radeon 6700 XT 12GB GDDR6 GPU it had in it for this to work in a stable manner in my X570 ASUS rig, by the way.

If I was buying from scratch I would use the ASRock motherboard mentioned below and WD Blue for everything, be it Gen 4 PCIe based NVMe for the M2 Cache device (C) or SATA 3 6Gb/s AHCI SSD for the (H) & (D) devices, as this ASRock motherboard has just one M2 slot.

Here I would use a 256GB 2.5" SATA 3 drive for (H), the 2TB WD Blue NVMe Gen 4 SSD for (C) and a WD Blue 2.5" 4TB 6Gb/s AHCI SATA 3 SSD for the (D) drive.

On the Intel side of the equation, I myself would go for the i7 12th Gen 12700K 12-core CPU @ $249.98 and forego the cheap Chinese $71.99 Intel X540 chip based network card in favor of the on-motherboard 2.5GbE RealTek Ethernet port, and round it off with a 96GB DDR5 RAM kit @ $249.99 and a suitable computer case for $59.99.

The motherboard selection for Intel is a bit more tricky, but I did find one with three M2 slots from ASUS; I prefer their ASUS PRIME B760M-A, though, which has two M2 slots that can take 2242/2260/2280 devices plus 4 x SATA 3 ports, for an astounding $124.99.

I do not Like WiFi or Bluetooth radio technology on any computing device by the way.

RF is a dark art and I hate it on a computer motherboard.

I like this motherboard because it also has a 2.5GbE Ethernet port, which means you just need to get a 2.5GbE switch with four 2.5GbE ports and a 1GbE port.

MokerLink make a 2.5G Ethernet switch with 2 x 1/10G SFP+ slots and 4 x 2.5G Base-T ports for $48.99; you just need to buy a 1G Base-T SFP+ module to connect it to your WiFi router.

Meanwhile, back to the motherboard: you will need one with at least one 2242/2280 M2 interface and a couple of SATA ports, which the ASUS PRIME B760M-A has.

Because the hypervisor (H) disk is best suited at 256GB in size, you will notice a problem: the smallest the newer NVMe SSD devices come in these days is 500GB, although 256GB units are still out there.

I strongly suggest a Samsung SSD device for the Hypervisor.

Having the (H) disk on a SATA 3 device is therefore prudent and super affordable as well, and there are plenty of SATA 3 AHCI disks of that size still available.

On systems with two M2 ports, you should run the (C) device on one M2 2280 port and the (D) device on the other M2 port.

This would be pretty ideal and quick to setup on a CE 2.0 build.

If your selected motherboard only has a single M2 2242/2280 slot, I suggest reserving that for the (C) device (NVMe Gen 4 based) and shoving everything else, (H) & (D), onto 2.5" SATA 3 SSDs.

DDR5 RAM on these motherboards allows you to get two 48GB DIMMs to arrive at the ideal 96GB configuration that does not break the bank (much).

In my opinion this is the most stable setup you could possibly get on Intel in any event.

The Power supply choice for this Nutanix Host Cluster rig is also rather interesting.

As there is no need for a GPU beyond the UHD graphics already on the i5/i7 CPU, all you need for the PCIe slot is that Intel X540 based dual 10GbE NIC for $71.99 sans taxes, so you do not need much power beyond the three SSD devices you will be needing in that configuration.

I would still lean to the 750W ATX side of the equation for the power supply unit here, but technically you will not need much more than a 400W PSU.

My choice here is the 750W Corsair RM750e unit @$99.99.

The Intel i5/i7 12th Gen chips onward use the LGA 1700 motherboard socket, by the way, so do not buy i5/i7 chips that are not LGA 1700 fare.

You will need a fan with a solid heat-sink to cool said LGA 1700 based i7 CPU as well and for this CPU I suggest the DeepCool AK400 ZERO DARK PLUS cooling unit which sells for $44.99.

You will need to get decent thermal paste and apply it properly to these i5/i7/i9 chips by the way.

As an electronic engineer for the last 35 years, I have developed key skills from much thermal pasting experience with thousands of my own electronic projects, never mind the computer systems I have built for myself and my various businesses over the years.

Do not take the pea sized thermal paste drop in the center of the CPU approach on Intel chippery unless you want it to die STAT!

Intel fare runs very hot and requires the proper attention with thermal pasting.

There is a thermal paste spreader tool you can buy with thermal paste that makes this a doddle but a stiff business card also does the job pretty well in the event you do not have one.

You need to smear a thin coat of thermal paste on the CPU surface, which requires many brush passes from the paste drop in the center of the chip to spread it effectively across the metal lid (IHS) of the CPU.

Once this is done a further small drop in the center caps the effort for the cooler to squish when you tighten the screws.

I do this on my AMD stuff as well but have found a pea drop in the center does the job with a Dark Rock Pro 4 bolted to the 5900X's silicon ass.

If you go Intel, spend the extra time to get this right.

Feedback from i5 LGA 1700 Po Boy users for this purpose, by the way, reveals just over 14 months of use before the CPU fails, so look to the i7 LGA 1700 based chip for your build.

I have had reliable results with i7 chippery for various projects I built for my own chums here and there.

The i5 LGA 1700 14th Gen Po Boy stuff is worse than the 12th Gen Po Boy fare based on my feedback from various folks who have used it by the way.

I have AMD 5900X CPUs, so this is my platform, as all 12 cores are performance cores; the i7 only offers 8 performance cores plus 4 E-cores.

The only thing better on the i7 rig I suggested is the 96GB of DDR5 RAM, which is faster than DDR4 fare and cheaper than a 128GB DDR5 kit.

An AMD Ryzen 9 5900X CPU is currently selling for $261 on Amazon and Newegg right now, and the ASRock X570S PG Riptide motherboard has three PCIe x16 slots, two M2 2280 ports, four DDR4 DIMM sockets and 6 SATA 3 ports, all for $135.99!

For a cooler use the $44.99 DeepCool AK400 ZERO DARK PLUS unit with the Arctic MX-4 thermal paste for $5.99.

My Amazon Checkout with these items for one such rig lists it all for $1509.87 without taxes.

If you are building two, you will need $3,019.74.

This does not include a monitor by the way, I assume you all have monitors you can use.

It does not include a 10GbE Ethernet switch with the 10GbE DAC Twinax Cables with the SFP+ already attached but I have an affordable option here as well.

Amazon also offers the MikroTik CRS305 4-port 10GbE SFP+ switch for $159.99, and you can get DAC Twinax cables with GTEK SFP+ units on each end for $15.99 each in 2M length; the 3M length ones are $20.99 each.

These switches have a single 1GbE Ethernet PoE port you can use to connect to your WiFi router at home.

You will need one switch with two 10GbE Twinax DAC cables per personal computer, by the way (if you want them on separate networks, anyhew).

You can have both clusti on the same subnet if you like and this means you can get away with a single 4 port switch.

I bought the Chinese Intel X520 based NIC with just one 10GbE port, so I can connect 4 computers in this config on that MikroTik switch.

If you do want best practice network redundancy, just get four of these switches and four 10GbE Twinax DAC cables but I myself think this is kinda silly for home Lab use.

FYI, the CE 2.0 Phoenix installer picks up on the Intel X520 or X540 chipset and sorts all the networking out for you; just remember VLAN 0 is the default VLAN unless you are going to set up separate subnets for your HCI science experiment.

I go with KISS philosophy for this sort of stuff in my home lab.

If you have RealTek 1GbE and 2.5GbE ports built into your motherboard, you have to install the drivers manually from the Linux CLI for CE 2.0 to work with those RealTek networking ports.

Given you can get the single port 82599EN based Chinese NIC for $25.99 and get 10GbE, and I already have Mellanox 2010 and MikroTik switches, I just went this route, and I am very happy that I did.

Getting these Realtek Ethernet devices working in CE 2.0 can be a bit painful but it is not that difficult.
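For reference, the manual driver dance usually looks something like the sketch below. The tarball name and version are hypothetical placeholders (grab the current r8125 release from Realtek's download page), and the package names assume a CentOS-style host like the one CE 2.0 ships:

```shell
# Build and load Realtek's out-of-tree 2.5GbE driver (r8125) on a
# CentOS-style host. The version number below is a placeholder.
yum install -y gcc make kernel-devel-"$(uname -r)"
tar xjf r8125-9.x.y.tar.bz2      # hypothetical release tarball
cd r8125-9.x.y
./autorun.sh                     # Realtek's script: builds, installs, loads
lsmod | grep r8125               # confirm the module is present
```

Remember an out-of-tree module like this may need rebuilding after a kernel update.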

Another one of my geek pals over at PayPal has it all working at 2.5GbE on the ASRock PG Riptide motherboard I recommend, with a 2.5GbE Netgear switch he already had, and it actually works pretty darn sweet.

Two of these tower cases running on a shelf in my garage or under my desk in my office would not need permanent monitors once they are up and running, and I have a KVM device to switch between them all anyways.

This is my build suggestion for the hardware at any rate.

You can go from Po Boy 12th Gen LGA 1700 fare to as fancy as you like.

You can do this on both Intel and AMD but be aware that anything higher than a 5900X or an i7 is going to need some exotic cooling.

If you are interested, my 5900X rig that runs CE 2.0 is clocking in at a steady 4.1 GHz and the i7 I suggested on another rig we built for my pal Ben is steady at 3.4 GHz.

You can price out an i7 version on Amazon as well and save fractions on the GPU, CPU and motherboard costs but it is still gonna run you roughly $1387.20 with taxes per box.

Comparing the two, I seriously suggest you go for the AMD rig, as the i7 is inferior to the 5900X CPU by a very wide margin in my opinion.

I would not build an i9 rig due to cooling but the core count there is rather attractive.

Be aware that an Intel i7 12 core is not the same as an AMD 5900X 12 core or the 7900X 12 core as on AMD all cores are performance cores.

The i7 chippery offers 8 high performance cores and 4 E-cores, so bake that into your selection as well.

I have had zero problems with 5900X and 7900X AMD Ryzen chips for CE 2.0 use so far.

The extra $120 odd the AMD rig costs is well worth the extra performance it will bring though this Intel build with 96GB RAM actually seems pretty attractive to me.

I might even go this way for my Sion-i7-2 clusti and run Prism Central from that beast just to show AMD to Intel replication is no problem!

Anyhew, enough of the hardware talk already, let's proceed on to the Software install games!

Rule no 1 for this stuff is Brace yourself and arm your patience!

First off, you will need to go to the Nutanix Community Edition web portal to get your CE 2.0 download link.

You will need to register from there and a pop up appears to help you get this done.

Fill in all the blanks in the pop up form and submit your request for access to CE 2.0.

Wait for the email with the Community edition link before you continue.

Once you get the email, download CE 2.0 and copy the md5 string into a text file for later use.
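A quick way to actually check the download against that md5 string on Linux; the ISO filename below is the placeholder pattern from the download, so substitute your actual file and hash:

```shell
# md5sum -c reads "HASH  FILENAME" pairs from stdin and verifies them.
# Replace the hash placeholder with the string from the download page.
echo "<md5-from-download-page>  phoenix-ce2.0-XXX-x86_64.iso" | md5sum -c -
```

If it prints FAILED, re-download before wasting a USB stick on a corrupt image.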

Next, go grab Rufus 4.4 from the interwebs thang via DuckDuckGo.

Once Rufus 4.4 is on your computer and the CE 2.0 ISO image (Phoenix-ce2.0-XXX-x86_64.iso) has been downloaded, the next job is to use Rufus to make a USB boot disk from that ISO image.

The Nutanix instruction document claims an 8GB USB stick is sufficient for USB boot and install purposes, but I got a lot of errors about the USB disk when I used several different 8GB USB sticks. I switched to a 16GB USB 3.2 stick I had lying around and never saw the messages again, so just use a 16GB USB device.
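If your workstation happens to run Linux rather than Windows, GNU dd does the same job as Rufus. /dev/sdX is a placeholder; verify the target with lsblk first, because this overwrites the device:

```shell
# Write the ISO raw to the USB stick (destructive!). oflag=sync makes
# sure writes reach the device before dd reports completion.
sudo dd if=phoenix-ce2.0-XXX-x86_64.iso of=/dev/sdX bs=4M status=progress oflag=sync
sync
```
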

What they do not tell you is you also need a NEXT login for this to work once you do install it successfully, but we are getting ahead of ourselves here.

So, after you have your target system assembled, the SSDs plugged into the SATA or M2 2280 ports, and the BIOS can see the NIC, all the disks and all the RAM you purchased, you need to go to the advanced menu in the BIOS and turn on the virtual machine (VMM) support.

For some reason ASUS, ASRock, Gigabyte and MSI motherboards have the virtual shit turned off these days for reasons they cannot clearly articulate with any modicum of sanity.

If you forget to turn on virtualization support in your BIOS, you will see a message on your screen that the installer could not create the install VM.
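You can confirm from any Linux live stick whether the BIOS actually exposed the virtualization extensions before the installer trips over it; vmx is Intel VT-x and svm is AMD-V:

```shell
# Prints vmx or svm once if virtualization is enabled and exposed to
# the OS; prints the warning otherwise.
grep -m1 -oE 'vmx|svm' /proc/cpuinfo || echo "virtualization disabled in BIOS or unsupported"
```
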

This results in a Fatal termination of the installer process which leaves you in a state of consternation blinking in a perplexed manner at the prompt on the screen, wondering what the heck went wrong....

This is where you develop your Inspector Clouseau skills to a finer art...

The other thing I advise CE 2.0 installers to do is purchase a TPM module for your motherboard and plug it in, then make sure the BIOS vTPM option on your motherboard is turned off so it recognizes the physical TPM module plugged into your motherboard instead.

These TPM devices usually run $29 to $49.

The whole TPM thing as it stands today is just fucking stupid by the way, they should have built this feature into the motherboard electronics, it does not belong in the BIOS or in an add on plugged into the motherboard afterthought style format they all went for.

I have a motherboard where this TPM unit is very loose, and it wreaks havoc on the Windows 11 OS when the thing has a flaky electrical connection from the loose pin setup.

I have been designing various sorts of circuit boards for 30 years and it never fails to amaze me how shit this whole TPM thing is from a reliability point of view, it is totally freakin ridiculous.

Microsoft recently got pressured to abandon TPM requirements for Windows 11 and they need to proceed to ALL their fare while they are at it, the whole thing as it stands currently is as I said, just plain fucking stupid.

Do not get me wrong, the technical concept around TPM is sound, the way it has been implemented truly sucks ass.

As you can tell, I am not on the fence on this one!

Once your TPM module is firmly installed in your motherboard, never touch or fiddle with the bastard ever again!

If you need to break wind, sneeze or cough go do it as far from the motherboard as possible.

The one other thing you need to do in the BIOS while you are there is make sure your memory is running at the right clock speed.

For 3200 RAM you should see a setting in your BIOS that shows the RAM clock at 1600 (double data rate doubles it); if it shows 1200, the RAM is not running in XMP mode.

AMD motherboards used to show XMP for this RAM clock shindig but some have changed it to show DOCP.

DOCP and XMP are the same thing. Just in case it confuzzles you somewhat!
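Once an OS is up you can also double-check from Linux that the XMP/DOCP profile actually took; dmidecode needs root, and the exact field wording varies a little by BIOS:

```shell
# "Speed" is each DIMM's rated transfer rate; "Configured Memory Speed"
# is what it is actually running at. For 3200 XMP RAM both should read
# 3200 MT/s; a 2133/2400 MT/s configured value means XMP/DOCP is off.
sudo dmidecode --type 17 | grep -E 'Speed'
```
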

So you are now almost ready to actually boot the whole shebang from the USB drive.

Find the USB port your motherboard designates as the specific port for booting from USB by the way.

Technically, you should be able to use any USB 3.2 port, but ASUS designate a port specifically for USB boot, and I have found this works far better for the sort of OS boot-install you are about to do.

Plug in the Boot USB drive you built with RUFUS 4.4 to the designated USB OS boot port and turn on your machine, waiting at the keyboard like a hawk to press F8, F10 or F12 (Different BIOS schemas use different keys for this - consult your motherboard manual) for the boot menu option.

You do not want it to boot from the SSD drives, which should be blank but may not be.

Once you have told the FRED (the motherboard) to boot from USB, the CE 2.0 Phoenix installer will do its thang, evaluate all the hardware you have and check your network connections.

The Ethernet ports need to be plugged into the switch, and the link should be showing as active on your switch, before you run the USB installer, by the way (obviously the machine will be on at that point).

Now this is where your expectations need to be set, because if you are like me, you would expect an install experience similar to that of the Foundation process Nutanix uses to install their commercial fare.

CE 2.0 does not use Foundation even though Phoenix installs Foundation on your computer.

I would like Nutanix to change this process to a Foundation for CE as it is not a lot of work, it really is not.

I think they are worried nobody would buy the low end stuff if they did this but there are other ways to take care of this that they could deploy if they invested a bit to make a more polished platform here.

This is not just for customers; it is also, I suggest, for Nutanix SE and SA folks and those Partner architects who need an affordable platform for learning and doing demos on, who want to immerse themselves in the software functions and features without buying a full cluster.

A lot of value added resellers will not even buy an NFR rig for their Engineers to play with, which is a great shame IMHBO.

As Nutanix sales leadership folk do not like POCs and HPOCs in a deal (they see it slowing down the sales cycle somewhat), CE 2.0 is an invaluable tool for these folks who want to control this on solid hardware with solid networking environments for a whole host of purposes.

Networking is the foundation and lifeblood of any HCI platform and it needs to be robust and responsive, even in the CE 2.0 Home lab environment.

HPOCs in the various Nutanix lab environments have their place in the scheme of things, but this should only be for customer engineers to tinker with a full system for 2 weeks in a planned manner, with test plans pre-approved by both Nutanix and the customers who engage in this sort of evaluation process.

I myself have found over the years that doing this for free with no customer skin in the game is a recipe for disastrum of the sales kind.

Anyway, back to the CE 2.0 Phoenix install process comment I made earlier.

I was expecting the installer to install the whole cluster so that all I had to do was use the web browser to log into the cluster.

Silly me for having such logical expectations!

The first CE 2.0 Phoenix install screen you encounter starts with selecting AHV or ESXi; Nutanix needs to remove the ESXi option as it now runs counter to the Nutanix agenda.

There is no support for ESXi by the way, so do not even bother; this option is a serious attempt to enhance futility itself.

Proceed to the disk setup below the Hypervisor selection block once AHV has an X between the brackets.

FYI, here the (I) is for your boot USB, (H) is for the Hypervisor boot disk, (C) is for the cache disk and (D) is for the Data Disk.

I suggest a 256GB NVMe Gen 4 SSD for (H), a 1TB NVMe Gen 4 SSD for Cache and a 2.5" 4TB SATA 3 6Gb/s AHCI SSD for the (D) Data disk.

You can use a HDD for the DATA disk but I would not go that route unless you need a lot of space.

I would suggest two Data Disks if you want to go that route, one SSD and one HDD.

My tinkering with that config suggests a 2TB 2.5" AHCI SATA 3 6Gb/s SSD paired with a 4 or 8TB HDD is ideal.

Bear in mind this is just my own personal opinion.

You will note, after you type in the Gateway IP address, that you can tick the box to create a single node cluster.

However, this does in fact NOT create a single node cluster...

Also note that my pricing that I referred to in this blog uses the smaller sized SSD, so bear this in mind, you get what you pay for here.

The default suggestions the installer makes should not need changing, but do check them; I have actually seen it make bizarre selections on some rigs my colleagues built.

You then have to put in a few IP addresses that you should have ascertained were available on your network before you started this CE 2.0 shindig.

I used the default 192.168.1.x network on mine, which uses a class C subnet mask (255.255.255.0).

I assigned addresses for the CVM and for the virtual cluster IP, but you do not need the latter yet.

DNS and NTP are pretty important to AOS and AHV so make sure you have investigated local NTP pools near to your location, wherever you are based on the planet.

For DNS, use your WiFi router DNS, which in my case is the same as my gateway IP, as that WiFi device is the networking master in my home environment.
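To make the addressing concrete, here is a hypothetical plan on a 192.168.1.x class C network. Every address below is a placeholder I made up for illustration, so substitute whatever is actually free on your own network.

```shell
# Hypothetical CE 2.0 address plan on a 192.168.1.0/24 (class C) network.
# None of these are real lab addresses -- pick your own free IPs.
GATEWAY="192.168.1.1"       # WiFi router: the gateway, doubling as DNS
AHV_HOST_IP="192.168.1.50"  # first IP the installer asks for (the hypervisor)
CVM_IP="192.168.1.51"       # second IP the installer asks for (the CVM)
CLUSTER_VIP="192.168.1.52"  # virtual cluster IP, set later with ncli
NETMASK="255.255.255.0"     # class C subnet mask
echo "Host ${AHV_HOST_IP}, CVM ${CVM_IP}, VIP ${CLUSTER_VIP} via ${GATEWAY}"
```

Having all four reserved before you boot the installer saves a lot of backtracking later.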

One other thing you need to bear in mind, by the way: set your BIOS to UEFI and turn off CSM support.

Your storage devices will use UEFI GPT with ZERO CSM support when Phoenix wipes them and sets it all up so make sure your BIOS is doing the same thang.

The first IP address you type in will be for the Hypervisor and the second will be for the CVM.

Do not forget to check the USB drive has an I (for install media) next to it if installing from a boot USB by the way.

If the install goes swell it will tell you to remove the USB boot media before hitting the enter key.

There should be no major failed stuff on your screen and it should all be mostly green.

There are a few things that show as failed but this is only relevant if you have a FATAL install issue that terminated the Phoenix install process.

The CVM config process takes between 5 and 10 minutes, by the way.

After that is done you can log in to the Community Edition host and use ssh to log into the Controller VM IP address.

On the computer you ran Phoenix on, log in to AHV as root using the password nutanix/4u, all in lower case.

Go to another machine you use in your home environment and open a terminal session to the CVM by typing ssh nutanix@(whatever the CVM Host address is) also using nutanix/4u as the password.

I typed ssh nutanix@(my CVM IP) in a terminal session on my M3 Mac.

Once you have nutanix@cvm$ on your terminal screen, type in:

nutanix@cvm$ cluster -s --redundancy_factor=1 create

Note that the two dashes (--) before redundancy_factor sometimes look like a single long dash; there is no space between them.

This will create the cluster and it will go off and do a whole bunch of stuff to start all the cluster services and the prompt returns when it is done.

At the prompt nutanix@cvm$, type in cluster start: nutanix@cvm$ cluster start

This should start Zeus, Cassandra, Genesis and Zookeeper et al, with an up message on each line and a success! message at the end to show it went as planned.

We are not done with the command line yet!

The next task is setting the cluster parameters so at the command line type in:

nutanix@cvm$ ncli cluster edit-params new-name=(cluster_name)

I used SionRyzen9_1 for my cluster name

So I typed in: nutanix@cvm$ ncli cluster edit-params new-name=SionRyzen9_1

I then set the virtual IP address of the cluster to the address I had reserved:

nutanix@cvm$ ncli cluster set-external-ip-address external-ip-address=

Then I added my router's DNS address to the list of name servers by typing in:

nutanix@cvm$ ncli cluster add-to-name-servers servers=

Then I added the NTP server data with: nutanix@cvm$ ncli cluster add-to-ntp-servers servers=(my NTP pool close to my geo).

You will want to investigate which NTP servers are close to your geographic location and ping them.

NTP is critical for AOS to work well by the way.
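Pulling all of that together, the whole CVM command sequence looks like the sketch below. The IP addresses, cluster name and NTP pool are hypothetical placeholders, and I am assuming -s takes the CVM IP as the standard CE guides show; substitute your own reserved values.

```shell
# Run these from the CVM session (ssh nutanix@<CVM IP>, password nutanix/4u).
# All addresses and the pool name below are made-up placeholders.
cluster -s 192.168.1.51 --redundancy_factor=1 create    # build the one-node cluster
cluster start                                           # start Zeus, Cassandra, Genesis, Zookeeper et al
ncli cluster edit-params new-name=SionRyzen9_1          # name the cluster
ncli cluster set-external-ip-address external-ip-address=192.168.1.52
ncli cluster add-to-name-servers servers=192.168.1.1    # my router doubles as DNS
ncli cluster add-to-ntp-servers servers=0.pool.ntp.org  # pick a pool near your geo
```

These only run on a Nutanix CVM, obviously, so treat the block as a reference transcript rather than something to paste anywhere else.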

Give the DNS and NTP 10 minutes to figure shit out before logging in to the cluster from a computer with an HTML5 web browser on your local network.

In the browser address bar, type in your Cluster IP address, whatever it actually is.

Nutanix Prism Element uses port 9440 over a secure HTTPS link, so add that with a :9440 at the end of the address you type in.

The user name for the Prism GUI is admin and the password is Nutanix/4u with a capital N.

It will ask you to change your default password at the first login so write it down and save it in a word file in case you forget it.

For a first time login you will next be prompted for your Nutanix Next Community Account login.

This checks to see that your CE cluster role associated with your account is enabled.

If you have not done this yet you can register through this logon prompt by clicking the right selection on the screen.

A few words about this email address.

The email address you use here needs to be a legal domain or business email and not apple icloud, hotmail, gmail or yahoo etc personal emails.

I recently disabled my domains so I typed in my corporate email address and got the enablement link there.

This is IMHBO dumb, but whut do I know...?

My UCSB .edu email worked as well so if you are still associated with a University or college and still have those email addresses, go for it.

After that you just need the usual vanilla Prism element admin login with password.

Just enter the credentials you just changed for admin and log in.

You then get the basic Prism screen with the basic stats of your Clusti or cluster, depending on whether you did a one node or three node install.

Those of you that went with the single node cluster will always see a critical warning about redundancy.

Obviously with RF1 there is no redundancy. Just accept that fact and move swiftly on.

There are two other critical warnings that are security related which you can turn off in the Health and checks detail (shows as a failed entry with an x next to whatever the message is in the long list of NCC checks that AOS runs).

It is good practice to change CVM and other passwords but it is not critical for home lab use IMHBO (in my humble, biased opinion).

Acknowledge and resolve any critical messages you see just to check if it is acting on commands and go forth to explore the Prism GUI!

The first thing I like to do after about a day is go to LCM and run an inventory.

An hour or so after running Inventory, install AHV and AOS from the 4 available updates you see under Software.

After that is done (takes about 30 minutes) do FSM and Foundation separately.

That part is quick.

By the way, if you race ahead and install a VM and it is running, turn it off (if you have a one node cluster) before updating, or you will see a failed to update message in the alerts section, a red circle you click on to see whut failed...

After your Prism Element based cluster has been running OK for a week or so and if you have enough RAM overhead to accommodate Prism Central you can install it from the Prism Element GUI.

This takes literally forever by the way.

I have seen this take up to 3 hours on some rigs.

Do check it for failed messages, as with a mere 64GB of RAM it might not deploy.

I would also have all VMs turned off for this piece by the way.

Once PC is up, and once you have the remote computer that will act as your remote site Clusti, you can start playing with Prism Central.

Nutanix moved a lot of functionality into Prism Central and there are some features that only work from Prism Central GUI screens (Like RBAC).

I also advise you to upload VM OS images using PC and not PE, because all clusters can access the PC image storage pool.

That being said some of you will never venture past Prism Element and have just one Cluster or Clusti.

For those folks just dive into the VM tab in Prism Element and have a ball.

In Prism Element single Cluster/Clusti instances, click on the silver settings (gear) button on the top right of the screen to manage and upload images to your CE Cluster/Clusti.

Clicking the gear icon brings up the settings menu.

And here is also where you can change DNS and NTP and other stuff so have a good sniff around the GUI.

When you build Windows based VMs, by the way, you need to learn some pertinent stuff about how AOS is going to see your VM's disks.

When you build a Windows based VM, therefore, apart from the OS install image you will also need the Nutanix VirtIO ISO image, mounted in a virtual CD-ROM of the VM you are building.

You can add a second empty virtual CD-ROM in your VM Disk build - specifically for Nutanix Guest tools (NGT) to be loaded as well.

Linux has drivers and NGT built into the various Linux Distros and this happens automatically so you only need one virtual CD-ROM to boot the Linux OS from.

This is why for demo purposes I only load Mint Linux images.

If you are intent on building Windows VMs, just be prepared to add the VirtIO drivers or the installer will not see the disk.

Also remember to assign a network to your VM or it will not connect to anything.

Scroll down for the rest of the options.

You can create as many disks as you like for your VM, CD-ROMs as well...

For a Windows 11 VM, the suggested disk setup while creating the VM is therefore the OS install ISO in one virtual CD-ROM and the VirtIO ISO in the second, plus the system disk itself.

A Linux VM just needs the install ISO image in one of the virtual CD-ROM disks.

This is my personal best practice at any rate, we are all different, so whutever makes your boat float.....

Note that my Nutanix host uses UEFI, so my VMs have to use UEFI settings, and IDE is not an acceptable UEFI format.

I am not fond of legacy BIOS with MBR mode by the way.

Once your VM has been created with either Legacy BIOS or UEFI it cannot be undone, so make sure all the details iz all correct.

I had to change the first virtual CD-ROM to sata.1, as you cannot have two virtual CD-ROMs on the same sata.0.
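For the curious, the same Windows 11 VM build can be sketched from the CVM command line with acli instead of clicking through Prism. Every name here (VM, container, image and network labels) is a hypothetical placeholder, and the ISOs would need to be uploaded as images first; treat it as a sketch under those assumptions, not gospel.

```shell
# Hypothetical acli sketch of the Windows 11 VM build described above.
acli vm.create Win11Test num_vcpus=4 memory=8G uefi_boot=true   # UEFI, not legacy BIOS
acli vm.disk_create Win11Test create_size=128G container=default-container  # the OS disk
acli vm.disk_create Win11Test cdrom=true clone_from_image=Win11-ISO         # install media
acli vm.disk_create Win11Test cdrom=true clone_from_image=Nutanix-VirtIO    # VirtIO drivers
acli vm.nic_create Win11Test network=VM-Network   # do not forget the network!
acli vm.on Win11Test
```

The GUI is doing much the same thing under the covers, which is why the two-CD-ROM arrangement matters either way.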

Make sure your browser allows pop-ups, because when you start the VM and click on Launch Console, if you do not have this sorted, nuthin happens (at the speed of light).....

In Firefox, a subtle line appears under the address bar at the top left of the screen informing you of options to allow this pop-up connection. Some browsers don't do jack, and I see a lot of folks using Chrome to get this done.

I never install Chrome on any of my own computers because Chrome is an elaborate Malware attack plane (Also IMHBO).

You can then log into the VM from your browser and do your thing running it the way you like.

I let my cluster settle in for 4 days before I decided to install Prism Central on it, and I upgraded everything, AOS included, before I did PC.

Before I did the PC install, my Clusti was boasting 7% CPU use with 48-55% RAM utilization in the Prism Element screen and I dithered mightily over whether or not I should deploy PC on a Clusti with a mere 64GB RAM, as the consequences of doing so for the Utilization levels are rather shocking.

After I deployed PC, the stats leapt to 12% CPU and 98.2% RAM utilization and I dithered for a further 36 hours before I went to Amazon Prime and ordered me some 3600 Viper DDR4 RAM for some $271.99 to mitigate this.

I already have a buyer for the old 32GB DDR4 RAM it will set free so I will offset the costs there as always.

When the new DDR4 RAM comes tomorrow I will shut down that cluster and make sure the DOCP clock is right and see if the Rocky Linux AOS uses is happy with life the Universe and everything else and just expand the RAM without much bleating (I fervently hope).

My colleague Brian had told me bad but pragmatic things about PC on a 64GB RAM Clusti, and I found out for myself he was bang on the buck here.

You can do PC on a 96GB DDR5 RAM system; older DDR4 systems do not have the cheaper 96GB options.

Now one thing I will say about two DIMM slot use vs 4 DIMM slot use is that Linux handles this shit a lot better than Windoze does.

I have not seen any memory latency issues with SionRyzen9_1 with PC which is hogging 98% of the available 64GB RAM with just two VM's running.

As I mentioned before a lot of stuff moved from Prism Element to Prism Central over the past 2 years and it is just better to manage everything in Prism Central but if the overhead of PC is too much for you there is no shame in doing everything in Prism Element.

Prism Central is also new, and there are bugs that crop up to derail your virtual schemes here and there, so think long and hard before you deploy PC, as it is a memory hog.

The concept is good and it is the way to go, I am just not so sure it is worth the overhead it brings to the table.

For me it is the whole point of the exercise but for those who just want the basics, stick with Prism element (PE).

You still get to use Prism Element when you click on the Clusti/Cluster that PC is managing, by the way.

Prism Central is the gateway to more stuff, and features like RBAC are only in PC.

I have been growing to like it a lot the more I play with it but am also mindful of the overhead cost it brings with it.

Note the Hamburger on the top left I highlighted is where you click for the PC Dashboard to appear.

You should create all VMs from PC, as well as set up the new replication you want to run from PC.

Note Protection Domains are a Prism Element thang, not a Prism Central (PC) thang.

PC has its own way of doing replication, which saves the dual way setup of doing it with Protection Domains in Prism Element.

Failing back just works now if you set up replication in PC, which is tres cool.

PC also stores images so all your Clusti or clusters can access them.

Pretty handy for admin folk.

Now it is just a question of clicking around to discover stuff to rack up experience in terms of how to do stuff....

So it is now May 5th 2024 and I have installed CE 2.0 on some 21 different rigs, and I have to tell you it is a different experience every time, even on the same rig if I decide to blow it away and start from scratch again.

My standard install MO is to use a 500GB NVMe Gen 4 boot disk for (H), a 1TB NVMe Gen 4 SSD for Cache (C) and a 4TB WD Blue SA510 SATA 3 SSD for the Data drive (D) with a 16GB USB 3.2 device for (I).

What is interesting on these installs has been the networking piece and what CE 2 is doing with the networking.

I installed a system yesterday where, no matter what I did, the cluster VMs just would not connect to the internet.

Everything could ping the gateway and all my devices on my 192.168 network but after 5 hours I was stumped and did a fresh install.

I replaced some of my AOC cables with Twinax variants and it behaved very differently to AOC cabling.

But still no internet access for el VMs.

I started to use the sole surviving synapse to mull over the behaviors.

I went to look at the motherboard and lo and behold, the 1Gb Ethernet port on this motherboard was not a RealTek based device; it was in fact an Intel chip.

I looked at the networking in Prism Element and it had indeed configured the device, but it was reporting it as not "up". Hard to be up with no network cable plugged in.....

Nutanix just installs Intel network interfaces and it usually just works.

I played with the network settings, and before I went to bed last night I decided to let it do DHCP with a range running up to 195.

I built a new VM and I knew it was connecting to the Interwebs when it grabbed all the Linux updates I knew it should grab from the local mirror.

This must have built a bond in the virtual switch, because when I had used those same addresses manually it just would not work.

I obviously need some Nutanix internal networking guru to educate me as to how it builds bonds in the virtual switch 0 plane, cause my sole surviving synapse is plenty puzzled...

If you run into this problem try the DHCP service thing with a range of IP Addresses as it seemingly builds that bond automatically?
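If you hit the same wall, these are the commands I would poke at to see what the virtual switch actually built; I am assuming manage_ovs runs on the CVM and the ovs-* tools on the AHV host, as standard AHV setups do. A troubleshooting sketch, not a definitive recipe.

```shell
# From the CVM: what uplinks and bond mode did AOS configure on virtual switch 0?
manage_ovs show_uplinks
manage_ovs show_interfaces   # NIC names, link state and speeds
# From the AHV host (as root): the raw Open vSwitch view of br0
ovs-vsctl show
ovs-appctl bond/show         # bond internals, if a bond exists
```

Comparing that output before and after the DHCP experiment should show whether a bond really did appear behind the scenes.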
