Anyone have experience with the Supermicro H12SSL for EPYC CPUs?

So,

I am in the process of planning an upgrade to the old home VM server. I'm thinking I'm going to go EPYC this time around, with one of the many Supermicro H12SSL variants (I'm leaning towards the H12SSL-NT).

I decided to pop this in this subforum since it is for a VM server, and I'd likely not get much traction in the consumer AMD motherboard subforum.

I have a few questions regarding these, if anyone has any experience:

1.) If I order these new - like say on Amazon - should I expect there to be a new enough BIOS on there to support Milan chips?

2.) If the firmware isn't new enough, what are my options? Can the BMC flash a BIOS update without the help of the CPU? Or should I be buying the cheapest Naples chip I can find on eBay (that isn't vendor locked) just so I can have something to flash the BIOS with?

Appreciate any wisdom you folks might be able to share!
 
I would simply try it and see if it works.

If not, try a BIOS recovery/update from USB (see manual).
If that fails, buy a cheap used supported CPU.

btw
The board is fine for a VM server; see my tests, bare metal and with ESXi:
https://www.napp-it.org/doc/downloads/epyc_performance.pdf

Thanks for your comment.

That's certainly the way I usually do things.

I was hoping to avoid the "buy something, test, it didn't work, research, order something new, wait for it to arrive, try something" cycle this time around. :p

While this is a home server, it is kind of a "home production" server that supports many things in the house my "users" use. I was going to take advantage of having them out of the house for a week to replace/upgrade the server. Even one "I have to order something I hadn't planned for" moment could tank that plan. :p

And nice testing.

Mine is an all-in-one storage/VM server. The current iteration runs Proxmox (Debian based) with a mix of KVM and LXC guests, with bare-metal ZFS (ZFS on Linux) for the storage.

I switched away from FreeNAS with passed-through LSI SAS HBAs running on top of ESXi back in 2016, when KVM was getting really popular.

In the end, what did it for me was that the enterprise license for ESXi was out of my league, and the free version had serious bugs that went unpatched for months and months.
 
I don't have this board, but with the independent ARM-based ASPEED AST2500 BMC you should be able to flash the BIOS even with no CPU or memory installed. Supermicro doesn't explicitly state this, but on the flip side they don't say you need them installed either.

https://www.supermicro.com/en/products/motherboard/H12SSL-NT, then "Motherboard Manual", then "Firmware Update & Manual Recovery Instructions" (don't think I can direct-link that).
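If it comes to that, Supermicro's SUM utility can also push a BIOS through the BMC over the network. A rough sketch (the IP and credentials are placeholders, and I believe out-of-band BIOS flashing needs one of Supermicro's license keys, e.g. SFT-OOB-LIC or SFT-DCMS-SINGLE, activated on the BMC first):

./sum -i <bmc-ip> -u ADMIN -p <password> -c UpdateBios --file ./BIOS_H12SSL.rom
# runs from any machine that can reach the BMC; the board itself should not need a working CPU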

Here is a video of someone doing it on the H12DSi-N6 (a somewhat similar dual-socket board):

https://www.youtube.com/watch?v=Q2cRX_YlQj8
 
Thanks for the help folks.

I have a Supermicro H12SSL-NT, an EPYC 7543 (32C/64T), and 512GB of registered DDR4-3200 coming my way.

Likely won't have time to install until after Christmas, but I am going to be casually tinkering until then, making sure it all works: that I can update the BIOS, that the CPU isn't locked, etc.

Then between Christmas and New Year's I'll be replacing the motherboard and CPU in my current server, doing a fresh install of Proxmox, and trying to migrate all of my VMs and containers over. (Wish me luck!)
 
I finally have all the parts!

Going to do some benchtop testing to make sure everything is working right (and I didn't get any counterfeit or vendor locked parts) before actually upgrading the server between Christmas and New Years.

Any recommendations on how to validate this?

With desktop parts I usually stability test for an extended period with mprime/prime95, and do a Cinebench run and compare that to review numbers to make sure it is performing right.

With the Epyc 7543 I'm not having a lot of luck finding performance numbers online I can validate against.

Appreciate any suggestions! (Especially ones I can run from a Linux USB Live image)
 
Decided to do some basic assembly:

This is the part that no matter how many times I do it on a Threadripper (and now EPYC) always scares the shit out of me:

1702356357169.png


4094 hair-fine pins that, unlike some others, are irreparable if touched.

With my luck I'd drop the CPU on them, which is probably why they include that nifty secondary plastic cover.

1702356582335.png


And a few minutes later. I wanted to do some benchtop testing today, but apparently I no longer have a good PSU in my spare parts bin. I'm going to have to yank the one out of my testbench machine, but I just don't feel like doing it tonight.

I'm a little bit concerned about the clearance behind PCIe slot 3 with those weird Supermicro plastic M.2 clips sticking up, but I'm hoping it will be fine. I'm also considering putting a couple of slim heatsinks on those M.2 drives to ensure they don't get too toasty, but the Amazon store page says they only stick up by ~3mm, so hopefully I'll be OK. Those are going to be my mirrored boot drives (using ZFS).

I'm going to be using at least five, maybe six of those slots, so they can't be blocked. That depends on whether I can get the funky SlimSAS ports to run my Optane U.2 drives; if I can, then I don't need my U.2 riser card. Three of the slots will have x16 cards, the rest x8 cards.

It's a little weird to see such a petite CPU cooler on a 225W TDP CPU, but we are not overclocking here. As long as it stays under Tj max at full load I'll be happy. Even better if I can get the full advertised boost out of it.
 
Anyway, in order to validate that everything was working the way it should, I started googling around for known benchmark numbers I could replicate on this thing in its benchtop configuration (without installing Windows).

All I could find was a launch-time Geekbench 4 result (I thought Geekbench was for phones?) with a multicore score of ~116k. Well, I ran mine and got ~128k, so I am going to call that a success. I'm going to guess that the launch tests were run with slower memory, not the 3200MT/s stuff I have.

If anyone is curious:
https://browser.geekbench.com/v4/cpu/17037599
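(If anyone wants to replicate this from a Linux live image: Geekbench ships a command-line Linux build, no Windows needed. The exact version/URL is whatever is current on their site; this is roughly what I ran:)

wget https://cdn.geekbench.com/Geekbench-6.2.2-Linux.tar.gz
tar xzf Geekbench-6.2.2-Linux.tar.gz
./Geekbench-6.2.2-Linux/geekbench6   # prints a browser.geekbench.com link to the result when done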

Then, just for shits and giggles, I ran Geekbench 6, intending to compare it to other results for the same CPU in their browser. I landed on a multicore score of 17139, which seems about right, but it is tough to tell, because the benchmark browser has results all over the place in it.

Again, if anyone is curious:
https://browser.geekbench.com/v6/cpu/3965899

Mine lands among the higher results for a single Epyc 7543, so I'll take that as an indication that at least I wasn't scammed, and I did indeed get the RAM and CPU I was supposed to. There were some that were a couple of hundred points higher, but it looks like they were running some sort of Asus workstation board. I'm guessing they had a bigger cooler than this little 92mm thing, and benefited from better boost clocks.

Now I am going to run Memtest. That's probably going to take... ...a while.

I've never done a memtest with this much RAM before.
 
So, I got really confused about the two different memory test tools (Memtest86 and Memtest86+) and which one I should really be using, especially considering that I am testing ECC RAM, which may hide some problems by doing its job and correcting errors, if all the software does is write to RAM and read it back to confirm it is the same.

Since I have a thousand dollars of used RAM, a $1,400 used CPU containing the memory controller, and a $600+ motherboard tying them together on the line here, I wanted to make sure I am doing it right before I leave a review saying I got good stuff and give up my right to a dispute (or at least make it significantly harder).

So I did some research.

First I hit up Wikipedia.

Unsurprisingly, these two confusingly similarly named projects that do mostly the same thing are related, and unsurprisingly Memtest86 came before Memtest86+.

Allow me to summarize/paraphrase:

_______________________________________________________________________________________________

The original Memtest86 was released in 1994 as a one-man open source project, but after it hit version 3 in ~2002 the project kind of died, and the release sat unchanged for a few years. At that point, another person forked the code to create Memtest86+ in order to continue the work and add support for new chipsets and CPUs. It was maintained until version 5 in ~2013, then it also died and became stagnant.

At that point the company PassMark bought the rights to the original Memtest86 (the non-plus version) and brought it up to date, adding support for modern chipsets and RAM.

PassMark's version reportedly does not utilize the positron-independent code of the original, and thus cannot test ALL of the RAM like the original could. I can't seem to confirm whether or not Memtest86+ does this.

And that's the way it was until 2022, when someone out of the blue (or maybe the original maintainer, I'm not sure) revived Memtest86+ and released version 6.00, and more recently 6.20, bringing it up to date.

_______________________________________________________________________________________________

I'm not sure if that summary leaves me more or less nonplussed than I was before (pun definitely intended) regarding which version to use.

I did some more research regarding the ECC question though, and found this on the PassMark page for the commercial version of Memtest86.

Essentially, Memtest86 will poll the ECC controller for its log of ECC errors (both corrected and uncorrected) in order to catch issues that would otherwise be masked by the ECC doing its job. The free version does this.

The pro version adds a feature called "ECC Error Injection", which can challenge a memory subsystem with intentional errors, but this feature is apparently not necessary for end users (either consumer or enterprise). It is intended for those in the business of designing and selling memory subsystems (CPU memory controllers, motherboards, and RAM), not end users or even those working with production servers.

So, no need to buy Pro for this.

Notably, the software needs to be able to communicate with the chipset to offer this feature, so you probably want the newest version you can get your hands on if testing ECC RAM. (For non-ECC I think it might be less critical, but I am not sure.)


Judging by my screenshot above, both my Memory Type and Chipset are listed as "unknown", which means it is probably not reading the ECC log during the test. I am not sure if this is because I failed to hit F2 during launch; apparently the software defaults to a safe mode which runs without SMP. I'm not sure whether that safe mode also skips polling the chipset and RAM types, or whether it is just down to the version I am using (5.31b; I just used a Linux Mint 21.2 image I had already written to a USB stick, and that is what happened to be on it).


I almost feel like this info should be pinned somewhere (maybe the memory subforum?) to save someone a lot of time in the future. FrgMstr, what do you think? I'd be happy to reformat and post it with a screenshot or two as I do my testing, if you agree.

Download links / official sites:
Memtest86 (commercial, PassMark): https://www.memtest86.com/
Memtest86+ (open source): https://memtest.org/


As for my system, I think I am going to create a Memtest86+ 6.20 USB stick and test in SMP mode next. Hopefully it will identify the chipset and RAM. If it doesn't, I will try the closed-source version.
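(Creating the stick is just a raw write of the USB image from memtest.org; the .img file name below is whatever happens to be in the release zip:)

sudo dd if=mt86plus_6.20_usb.img of=/dev/sdX bs=4M status=progress conv=fsync   # sdX = the stick; double-check with lsblk first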

But being over 14 hours into a 20.5-hour test now, I am going to let this version finish first. I wish I had done my research before I started this...

It has been suggested to me that had I pressed F2 and entered SMP mode, the test would have been MUCH faster. I wonder how well that scales. Does it scale with memory channels (I have 8 on this system), CPU cores (32 in this case), or maybe even logical threads (64)? Going to find out when I run it :p
 
Running with Memtest86+ 6.20 in SMP mode.

The newer version does seem to recognize my hardware. SMP seems to be the default now, so pressing F2 on start disables rather than enables SMP.

SMP mode thus far just seems to mean it alternates cores for the test, presumably to test the reliability of the links between all DIMMs and all cores instead of just one. It does not appear to run them in parallel.

That said, at least initially the test seems to be running MUCH faster. In just a few minutes we are up to 11%. The address ranges are also zipping by much faster.

1702526258107.png


In addition to running faster, the CPU also seems to run hotter (the CPU cooler is noticeably louder), so I do think it is actually doing things faster, as opposed to just running a slower and somehow less stringent test, though I can't know for sure.
 
PassMark's version reportedly does not utilize the positron-independent code of the original, and thus cannot test ALL of the RAM like the original could

That's supposed to be position independent code. Unless you're running on a positronic matrix.

I think you definitely want to run the latest version of Memtest86, plus or not... I have seen some versions fail to run properly on some hardware, so it may be worthwhile to try both, and if you run into problems with the newest version, maybe get an older one... But especially on a system with ECC, I'd take a pass from any version as good enough. (Sometimes it's not... some unlucky people have had RAM errors that don't show up in a single pass of memtest but will show up over multiple passes, etc.)

If your board lets you tweak memory voltage, you can usually induce faults by running the voltage lower. It's worth trying after you get your OS installed and before you put the machine into service, to confirm that everything is set up properly and the OS actually gets the error reports. A single correctable error would be enough; then put the voltage back to normal and you're good to go.
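On Linux, the quick way to confirm the reports are flowing is to watch the EDAC counters (assuming the EDAC driver for your memory controller is loaded; rasdaemon reports the same thing if you prefer a daemon):

grep . /sys/devices/system/edac/mc/mc*/ce_count   # corrected errors, per memory controller
grep . /sys/devices/system/edac/mc/mc*/ue_count   # uncorrected errors
ras-mc-ctl --error-count                          # from the rasdaemon package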
 
I've been using PassMark's closed-source Memtest86 for my latest computers (I didn't know the open source version had been revived!). Anyway, Memtest86 seems to use all cores in parallel and zipped through four passes of 32 GiB of memory in just a few hours (on a Ryzen PRO 4650G system with ECC memory).

It should also detect correctable ECC errors, as you say. There's informative talk about this in several threads on the level1techs forum, e.g. here, including screenshots.
 
Memtest86+ 6.2 completed a pass in about 5 hours, compared to the 22 or so hours the old version took in fail-safe mode:

1702555352665.png


Going to try PassMark's version as well, just for shits and giggles.
 
Well, meh. The PassMark image does not appear to be bootable for me.

I've tried writing the image with both the "USB Image Writer" tool in Linux Mint and with dd, and neither produces a bootable USB stick.

No time to troubleshoot now. Maybe I'll have to use Windows for this? The readme has instructions for Linux, but they don't appear to work.

Edit:

Oh, never mind. I figured it out. It's an EFI boot, whereas I had to force the board into traditional (legacy) boot for Memtest86+.
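(For anyone else who hits this, the partition table tells you which way a stick will boot:)

sudo fdisk -l /dev/sdX   # an "EFI System" partition means UEFI; a DOS table with a boot flag means legacy/CSM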
 
And we are off.

Sure looks a little different:

1702557083488.png



It is also a much heavier image: 1.1GB vs the 6.2MB of Memtest86+.

It does appear to have more in the way of specific system information though.

I just ran it with defaults. Looks like it only uses 16 cores by default.

It also seems even faster than the latest Memtest86+.
 
Happened to glance at the BMC before leaving for work. Glad I did. RAM temps were getting shockingly high using PassMark's Memtest86. The BMC was alerting.

DIMMs A-D were hitting 72C.

I figured an open air test was fine just for a RAM test, but apparently not.

Got some airflow over the RAM now in a very crude manner, lol :p

PXL_20231214_125748276.jpg



With the lowest setting on the blower they are down to 40C again.

I guess that's my daily reminder that server parts are designed with constant airflow in mind.
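Handy thing though: you don't need the web UI to keep an eye on it. The BMC hands the same sensors to ipmitool over the network (IP and credentials are placeholders):

ipmitool -I lanplus -H <bmc-ip> -U ADMIN -P <password> sensor | grep -iE 'temp|fan'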
 
Also worth noting.

Per my Kill-A-Watt, power at the wall was ~150W with the old Memtest86+, about 178W with the version 6.2 Memtest86+, and it hovers between 200 and 250W with the PassMark version, so it is definitely hitting it harder, and it shows. It's tearing through those passes.

So, yeah, temps definitely went up with the PassMark version.
 
Also worth noting.

Per my Kill-A-Watt, power at the wall was ~150W with the old Memtest86+, about 178W with the version 6.2 Memtest86+, and it hovers between 200 and 250W with the PassMark version, so it is definitely hitting it harder, and it shows. It's tearing through those passes.

So, yeah, temps definitely went up with the PassMark version.

This points to the two of them doing things substantially differently, which makes it worth running both.
 
This points to the two of them doing things substantially differently, which makes it worth running both.

My thoughts exactly.

If nothing else, the latter one definitely put more thermal stress on the RAM.

I don't usually bother with $50 worth of desktop RAM, but with a grand worth of used server RAM, you bet I am going to test the living daylights out of it.
 
Happened to glance at the BMC before leaving for work. Glad I did. RAM temps were getting shockingly high using PassMark's Memtest86. The BMC was alerting.

DIMMs A-D were hitting 72C.

I figured an open air test was fine just for a RAM test, but apparently not.
I test my boards in a similar way, open on the desk. I just place a ~120 mm fan next to it hooked up to one of the chassis fan headers, and I also place the motherboard on a couple of strips of wood so there's airflow below it as well. (Wood is supposed to be ESD-okay-ish, and everything sits on an ESD mat as well.)
 
11 hours and 42 minutes later, I just got home from work. The test is still on pass 1; looks like it slowed down after its initial high-speed operation.

I'm at 91% completion though!

By default it looks like it does 4 passes though. I'm not sure I want to do 48 hours of testing, but if I do, I am going to have to find a quieter solution than the flood blower fan :p

I test my boards in a similar way, open on the desk. I just place a ~120 mm fan next to it hooked up to one of the chassis fan headers, and I also place the motherboard on a couple of strips of wood so there's airflow below it as well. (Wood is supposed to be ESD-okay-ish, and everything sits on an ESD mat as well.)

Yeah, I might just do that. I only grabbed the blower as I was in a hurry to leave for work, and needed a quick solution, and it was right there from a previous incident in the basement :p

The blower is very effective, but it isn't exactly silent.
 
no real datacenter would use this.

I'm not a real datacenter :p

(Though to be fair, once integrated into my Supermicro 4U case in my rack, it will look a little more professional. This is just pre-install testing that the parts I got from eBay are good and I wasn't scammed.)


Honestly, real Enterprise users are kind of lame. They never do anything themselves, and just buy preconfigured appliances and shitty Dell or HP servers.

I like my approach much better :p

Oh, and AMD reportedly holds about 20% of the server market now, so yeah, real datacenters would use this; they'd just buy it from Dell or HP in a noisy 2U case and use some shitty storage appliance over something like iSCSI instead of rolling their own.

How boring :p
 
I'm not a real datacenter :p

Honestly, real Enterprise users are kind of lame. They never do anything themselves, and just buy preconfigured appliances and shitty Dell or HP servers.

I like my approach much better :p
I get you :) I am just bullshitting. Just never go Dell VxRail.
 

That's a little bit easier on the ears:

PXL_20231215_032603987.jpg


Memtest86 RAM load temps went up by like 16C though :p
 
You might want to try Nutanix AHV just to learn more. That's what brought me to finally doing Linux, since the command line is way more powerful.

Looks like a cool solution. I went from ESXi to Proxmox back in ~2016 because I was getting tired of the unpatched bugs in the free version of ESXi and couldn't afford the paid version as a home user.

I'd probably consider it if I were looking for new solutions today, but right now I just don't have any reason to. Proxmox does everything I need, is affordable, and is Debian based, and I love Debian. It's the most natural feeling Linux base to me.

I've been using Linux as my primary desktop since ~2002 and running bare metal Linux servers since I started running Counter-Strike servers back in 2001.
 
Doing some mprime (Linux version of Prime95) stability testing before blessing it as stable, but I have a good feeling about this thing now.

It did give me a scare at first though. It would run for a few seconds and then kill all the threads.

I'm not quite sure what is going on, but I googled it and lots of people are having the same issue with mprime.

When run from the configuration menu, it gets killed after a short while of running. But if you unpack the download afresh and run it with "./mprime -t" to immediately start an all-core stress test, it works just fine.

Seems more like some kind of bug than a hardware issue, since the "./mprime -t" method is stable.
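For reference, the exact sequence that works (the tarball is whatever build is current on the mersenne.org download page):

mkdir mprime && cd mprime
tar xzf ~/Downloads/p95*.linux64.tar.gz   # fresh unpack, per the workaround above
./mprime -t   # no config menu, straight into the all-core torture test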

This is what 64 cores running at 100% looks like :p


1702785157368.png


Looks like the System Monitor GUI app in Linux Mint has some trouble with large amounts of memory. It is totally tallying that wrong.

The "free" command from the command line gets it right though.

Doing all of this from my desktop via the IPMI/BMC console passthrough. Pretty convenient. No need to hook up monitors and keyboards.

With this all-core load the cores seem to be clocking at 2771-2772 MHz, which is below the advertised base clock of 2.8 GHz, but not by much.

Still, that is a tiny bit disappointing, but probably not indicative of a problem.
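(For the record, I'm just polling /proc/cpuinfo from a second terminal to watch the clocks:)

watch -n1 'grep "cpu MHz" /proc/cpuinfo | sort -t: -k2 -rn | head'   # fastest cores first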

Core temp is about 63C, and the CPU fan is at about 67% speed.

Might just be Supermicro doing their normal hyper-conservative thing.

I wonder if it is just bouncing off the TDP limiter (I should probably check what it is set to in the BIOS). It is pulling about 295W from the wall with all cores at full load, according to my Kill-A-Watt.

Edit:

For shits and giggles I did a single-thread test. The core clocks up to 3676 MHz, so again the same few MHz short of the max boost clock. I'm guessing there are some conservative Supermicro clock settings preventing it from hitting max clocks.
 
Passed 48+ hours of mixed prime95 (well, actually mprime, but same thing) last night.

I'm ready to call this thing stable, and leave my positive reviews on eBay.

Next up, to do the actual drop in upgrade into my existing server.

I'll probably do that between Xmas and New Years when I have plenty of time to get it up and running when no one needs it.

Wish me luck!

And if you need server RAM or CPUs, I am happy to recommend atechcomponents (RAM) and tugm4470 (CPU) on eBay. Based on my n=1 experience, they are both stellar sellers. The servethehome forums also have lots of buyers who are very happy with tugm4470. If ordering there, don't forget to be a servethehome forum member and message the code to the seller for free expedited shipping.
 
It's crazy how small the H12SSL series motherboards are in the case compared to the monster X9DRI-F.

The end result of this is that one of the two 12V 8-pin EPS cables is like 2mm too short to reach the closest EPS connector.

PXL_20231227_060327767-sml.jpg


Luckily I live near a Microcenter. They have 8" extensions in stock. I don't trust extensions (I've literally had one catch fire in the past), but it looks like I don't have much of a choice.
 
Here's hoping this is a decent cable:

PXL_20231227_214147123.jpg


I would hate to have something like this happen again...

196494_IMG_20190223_165838.jpg


196496_IMG_20190223_165913.jpg
 
I'm having one of those lovely days.

I just spent two hours troubleshooting network configuration. I was poring over IP settings, switch settings, firewall settings, replacing cables, you name it.

Well. I fixed it. I was plugged into port 1 instead of port 0 on the back of the server.

Kill me now.
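Note to future me: blink the port LED first, so the OS's interface names map to physical jacks before burning two hours:

sudo ethtool -p eno1 30   # flashes the LED on interface eno1 (name varies by system) for 30 seconds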
 
This is super cool to follow. I know you mentioned you switched to Proxmox as you felt like you weren't getting timely ESXi patches, but we release all ESXi patches to everyone regardless of support level. Yes, you have to login to Customer Connect, but anyone can download patches as soon as they are released.

...and the port thing? Been there done that on a multi-night project. Glad you got it sorted. 👍
 
Ah,

I forgot to keep up with posting in this thread.

The server has been back in the rack and in place now for a few days. It took longer than I expected.
Here is the last pic I took before buttoning it up and sticking it back in the rack:

1704262165225.png


I'm loving the end result, but I ran into a number of bumps along the way, which are finally resolved.

So, rather than decommissioning the old server, I transplanted it into my "testbench" machine, which I keep around for imaging drives, flashing firmware to boards, etc. etc. Stuff I want to be able to do separately from either my desktop or server.

The many PCIe lanes and ECC RAM will come in handy in that role, especially since I often use ZFS on it for redundancy.

1704262197258.png


The Enthoo Pro case is awesome, and it fits this massive SSI EEB form factor board. I decided to swap out the noisy fans that came with the Supermicro 4U coolers (92mm Nidec UltraFlos) for a set of Noctuas that are friendlier to the ears in my office. Thus far they seem to keep the CPUs adequately cool. I mean, they are "only" 95W each, so that is pretty easy by modern standards.

Seen here in its natural habitat with three hot-swap 3.5" drives and six hot-swap 2.5" drives. It also fits an additional six 3.5" drives on the inside, where I have a small (relatively speaking) ZFS pool I use to temporarily store drive images and the like.

1704262221527.png
1704262233058.png



Two 12C/24T Ivy Bridge Xeon E5-2697 v2s with 256GB of ECC RAM. This would have been quite the workstation back when it was new :p

But that was a long time ago, as evident by this following screenshot:

1706559260853.png


Still pretty cool that this Ivy Bridge machine can almost hang with a first-gen 16-core Threadripper though :p Makes me wonder what it would have looked like before the Spectre/Meltdown mitigations.

Also, here's a reminder that if you install Windows (or simply move a Windows install from an older system) on a system with a large amount of RAM, unless you have a corresponding very large drive, Windows WILL take over your entire drive with hiberfil.sys :p

256GB of RAM = 256GB of hiberfil.sys plus a swap file on a 400GB (~380GB usable) Intel SSD 750 PCIe drive (the only NVMe drive I've ever found with an OPROM that loads during POST and lets you boot on non-NVMe-aware motherboards), which I partitioned as 100GB for Linux and 280GB for Windows. That doesn't leave much free :p

"Why is the drive full? I don't remember storing stuff on this drive or filling it with programs... ...oh"

And now we have disabled hibernation and swap. We won't get that fancy fast-booting hibernation stuff, but I don't care.
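(For reference, the hibernation half of the fix is a single elevated command on the Windows side; hiberfil.sys is deleted on the spot:)

powercfg /hibernate off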

The only question now is what to do with the old Sandy Bridge-E X79 workstation board I was using in the testbench. It has been with me since 2011, and I almost get a little misty-eyed at the thought of it no longer being in service.

1704262281606.png


It was my main desktop from 2011 to 2019; under water it would hit a heavy overclock of 4.8GHz. I bought it when Bulldozer sucked at launch, and used it until I upgraded to my Threadripper in 2019. Then it went into the testbench, where it has been enjoying a lighter retirement load since.
 
So,

I am in the process of planning an upgrade to the old home VM server. I'm thinking I'm going to go EPYC this time around, with one of the many Supermicro H12SSL variants (I'm leaning towards the H12SSL-NT).

I decided to pop this in this subforum since it is for a VM server, and I'd likely not get much traction in the consumer AMD motherboard subforum.

I have a few questions regarding these, if anyone has any experience:

1.) If I order these new - like say on Amazon - should I expect there to be a new enough BIOS on there to support Milan chips?

2.) If the firmware isn't new enough, what are my options? Can the BMC flash a BIOS update without the help of the CPU? Or should I be buying the cheapest Naples chip I can find on eBay (that isn't vendor locked) just so I can have something to flash the BIOS with?

Appreciate any wisdom you folks might be able to share!
From my memory, Supermicro boards can be flashed from a USB port to recover a bricked BIOS, but check the motherboard support site to be sure before you buy. I've had to do it a few times, but it's been a while.

Also want to say: when dealing with high-end hardware like this, it's important to use the administrator's manual to ensure you are buying the correct type of memory. Depending on how much memory you want (capacity, number of DIMMs, etc.), the type of memory you need may be different (number of ranks, for example). Just don't go and buy whatever and hope it works, like people do on consumer-grade PCs. The QVL is important.
 
Also want to say: when dealing with high-end hardware like this, it's important to use the administrator's manual to ensure you are buying the correct type of memory. Depending on how much memory you want (capacity, number of DIMMs, etc.), the type of memory you need may be different (number of ranks, for example). Just don't go and buy whatever and hope it works, like people do on consumer-grade PCs. The QVL is important.

I have many Supermicro boards and I stuff whatever memory I can snipe cheap on eBay into them. Works fine.
 
Also want to say: when dealing with high-end hardware like this, it's important to use the administrator's manual to ensure you are buying the correct type of memory. Depending on how much memory you want (capacity, number of DIMMs, etc.), the type of memory you need may be different (number of ranks, for example). Just don't go and buy whatever and hope it works, like people do on consumer-grade PCs. The QVL is important.

I checked the QVL on the Supermicro site, but it was surprisingly sparse, and it had a disclaimer essentially saying not to trust it too much.

I bought some pretty common Hynix server RAM (8x64GB registered DDR4-3200, Hynix HMAA8GR7CJR4N-XN) and tested the living daylights out of it with Memtest86, Memtest86+, and mprime Large FFTs, and luckily it was rock solid.
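Before the burn-in I also sanity-checked what the system itself reports against the eBay listing; a quick way to spot remarked or mislabeled parts:

sudo dmidecode --type memory | grep -E 'Size|Speed|Manufacturer|Part Number'   # one stanza per DIMM slot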
 