Intel officially announces 5th Gen Xeons

https://wccftech.com/intel-5th-gen-...ald-rapids-up-to-64-cores-320-mb-cache-prices

There's a lot to digest in there, and I've been downing cough syrup in place of my daily coffee, so I can't wrap my head around it all, but it seems way too good to be true. Like Intel has to prove itself or something.
64 cores?


 
Nah, 64/128 is fine; much past that and you have a hard time feeding the cores with RAM for most use cases. With "only" 8 memory channels you're looking at 8 sticks of 64GB ECC DDR5 RDIMMs at around $800 a stick, but if you go past 64 cores on 8 channels you need to bump up to 128GB sticks, and those are a good $3,500 each. So even with a $12K CPU you can buy two of them and fill them out with 16 sticks of 64GB for less than a single 96/192 or 128/256 built on 128GB sticks, unless you also go with 64GB sticks there, but then you'll find yourself RAM-starved in many use cases.
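To put rough numbers on that, here's a quick back-of-envelope sketch using the ballpark DIMM and CPU prices above; the 96-core CPU price is just an assumed placeholder, not a list price:

```python
# Ballpark prices from the post; the 96-core CPU price is a placeholder guess.
DIMM_64GB  = 800      # ~$ per 64 GB ECC DDR5 RDIMM
DIMM_128GB = 3500     # ~$ per 128 GB ECC DDR5 RDIMM
CPU_64C    = 12_000   # ~$ for a top 64-core SKU
CPU_96C    = 15_000   # assumed price for a 96-core-class part

# Option A: two 64-core sockets, 8 channels each, one 64 GB DIMM per channel
cores_a, ram_a = 2 * 64, 16 * 64                     # 128 cores, 1024 GB
cost_a = 2 * CPU_64C + 16 * DIMM_64GB

# Option B: one 96-core socket, 8 channels, 128 GB DIMMs to keep GB/core up
cores_b, ram_b = 96, 8 * 128                         # 96 cores, 1024 GB
cost_b = CPU_96C + 8 * DIMM_128GB

for name, cores, ram, cost in [("2x 64c + 16x 64GB", cores_a, ram_a, cost_a),
                               ("1x 96c +  8x 128GB", cores_b, ram_b, cost_b)]:
    print(f"{name}: ${cost:,}  ({ram} GB total, {ram / cores:.1f} GB/core)")
```

Same 1 TB of RAM either way, and the dual-socket box also gets twice the memory channels to feed its cores.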
You also have serious tradeoffs between core counts and clock speeds once you look at the products past 64 cores: 128 threads at 3.9 GHz can often be better than 256 at 3.1 GHz, especially in an environment where you want to ride the efficiency curve, which usually sits around the 75% utilization mark. Past that point you get diminishing returns, as fans of undervolting will be more than happy to tell you, and keeping huge datacenters inside those optimal power curves matters because it saves a lot of money every month.
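And to illustrate the clocks-vs-cores point, a toy comparison; it assumes perfectly parallel, clock-bound work and DDR5-4800 on 8 channels, so treat it as a sketch rather than a benchmark:

```python
# Naive comparison of the two configs above. Assumes perfectly parallel,
# clock-bound work and DDR5-4800 (38.4 GB/s per channel) on 8 channels.
def summarize(threads, clock_ghz, mem_channels=8, gbps_per_channel=38.4):
    aggregate = threads * clock_ghz                         # "total GHz"
    bw_per_thread = mem_channels * gbps_per_channel / threads
    print(f"{threads:>3} threads @ {clock_ghz} GHz: "
          f"aggregate {aggregate:6.1f} GHz, "
          f"{bw_per_thread:4.1f} GB/s of memory bandwidth per thread")

summarize(128, 3.9)   # fewer, faster threads: each one is better fed
summarize(256, 3.1)   # more, slower threads: half the bandwidth per thread
```

Raw aggregate clock still favours the bigger part, but every thread on it gets roughly half the memory bandwidth, which is exactly the feeding problem above.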

There is some seriously tricky math involved in building out systems now, especially when you get into HCI stacks where networking, storage, and PCIe lane availability come in as heavy decision factors. Servers are a very interesting space right now.
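For a flavour of that build-out math, here's a toy PCIe lane budget for a hypothetical 2-socket HCI node; the ~80 Gen5 lanes per socket and the device list are assumptions for illustration only:

```python
# Toy PCIe lane budget for a hypothetical 2-socket HCI node. The ~80 Gen5
# lanes per socket and the device list are assumptions for illustration.
LANES_PER_SOCKET = 80
total = 2 * LANES_PER_SOCKET

devices = {
    "2x 200GbE NICs (x16 each)":      2 * 16,
    "8x NVMe U.2/U.3 (x4 each)":      8 * 4,
    "2x accelerators (x16 each)":     2 * 16,
    "Boot mirror / misc (x8)":        8,
}

used = sum(devices.values())
for name, lanes in devices.items():
    print(f"{name:32s} {lanes:3d} lanes")
print(f"{'Total used':32s} {used:3d} / {total} ({total - used} left for expansion)")
```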
 
Nailed it, and yes sir, the server space is very interesting right now.

It's also always fun to point out that the HW cost is often a drop in the bucket compared to the software licensing you're going to incur running whatever program/suite you're after. No one is going to give a crap about HW costs when the SW costs easily dwarf them.

We play a bit in the HPC space at work (various Ansys simulations), and overall we'd rather see those existing cores performing slightly higher than shell out for more per-core licenses to accommodate additional nodes. We've got one cluster somewhere around 800 cores in total across ~25 nodes (not at work, so I don't recall the exact figures), but even a few hundred MHz across the board is huge when we keep these things cranking at 100% for days or weeks on end, as close to 24/7 as we can throughout their lifecycle. Like you mentioned, there's a sweet spot out there somewhere; you just have to find it for each individual application.
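A crude sketch of why the clock bump wins in a per-core-licensed shop; the license price and base clock here are made-up placeholders, not Ansys figures:

```python
# Made-up illustration of clock bump vs. extra licensed cores; the per-core
# license price and base clock are placeholders, not Ansys figures.
LICENSE_PER_CORE = 1_000          # placeholder $/core/year

cores, base_clock_ghz = 800, 2.6  # cluster size from the post, assumed clock

# Option 1: ~10% higher sustained clocks on the same cores -> no new licenses
opt1_throughput = cores * (base_clock_ghz * 1.10)
opt1_extra_license_cost = 0

# Option 2: 10% more cores at the same clock -> 80 more per-core licenses
extra_cores = int(cores * 0.10)
opt2_throughput = (cores + extra_cores) * base_clock_ghz
opt2_extra_license_cost = extra_cores * LICENSE_PER_CORE

print(f"Clock bump : {opt1_throughput:.0f} core-GHz, +${opt1_extra_license_cost:,}/yr")
print(f"More cores : {opt2_throughput:.0f} core-GHz, +${opt2_extra_license_cost:,}/yr")
```

Same nominal throughput gain either way, but one of them adds a recurring licensing bill every single year.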
 
VMware licensing, Exchange licensing, Server 2022 Datacenter, and let's not forget good old Red Hat...
 
Shoot, Server 2022 DC/Standard (for backup servers) for our new Nutanix cluster alone was pushing at least $60k... and that's just what we have on-prem! I don't want to think about our global data centers... *shudder*

To tie back to the OP: again, what HW costs? They're pretty much irrelevant in the grand scheme / at scale.
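For anyone curious how that adds up, here's a rough sketch of Datacenter-edition per-core licensing on a small cluster; the per-core price is a placeholder, and the rule being modelled is the assumed "per physical core, 16-core minimum per host":

```python
# Rough sketch of Datacenter-edition per-core licensing on a small cluster.
# Assumes "licensed per physical core, 16-core minimum per host"; the
# per-core price is a placeholder, not a quote.
PRICE_PER_CORE = 385              # placeholder $/core

def datacenter_license_cost(hosts, cores_per_host):
    billable_cores = max(cores_per_host, 16) * hosts
    return billable_cores * PRICE_PER_CORE

# e.g. a hypothetical 4-node cluster of dual 32-core sockets (64 cores/host)
print(f"${datacenter_license_cost(4, 64):,}")   # $98,560, and that's just the OS
```

Per-core licensing is exactly why those high-core-count SKUs get expensive fast, well before you ever pay for the silicon.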
 
Good, but how long until eBay is flooded with the older Xeons that are still newer than most of the E and W models you find on there? :D
 