So what do the guys who create distros use for creating a new release?

In my case it is compiling (open source) software. Compiling all of FreeBSD with itself, or compiling the Linux kernel, can really put a lot of cores to use.
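For a sense of why: a parallel build fans out one compile job per core, so a 64- or 96-core part stays busy the whole time. A minimal sketch (assuming a configured source tree in the current directory and make on the PATH; Python is just the driver here):

[CODE=python]
import os
import subprocess

# Minimal sketch: kick off a parallel kernel/FreeBSD-style build with one
# make job per logical core. Assumes a configured source tree in the
# current directory and make available on the PATH.
jobs = os.cpu_count() or 1
subprocess.run(["make", f"-j{jobs}"], check=True)
[/CODE]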
I need memory channels; 3, maybe 4, of the 7975WX units would cover my needs nicely.
I need to look more into switchless networking, though. I would need each of them directly connected, and the aggregator switches needed to cover the load of 4 of them would not be something I'd want to pay for.
Crapload of VMs.

What... do you do with them?
And why?
So what do the guys who create distros use for creating a new release?
Usually sponsored servers, cloud or otherwise. They actually don't care about single-core speed.

That right there is why ARM does as well as it does in the datacenter market. Want to host a billion websites on a single stack? Having a few thousand cores in that stack goes a long way to making it happen, especially if there are purchases being made on those sites. None of that is CPU intensive, especially with dedicated accelerators handling the encryption, but it is a lot of simultaneous work, and there, more is better.
Linus Torvalds is one of the original Threadripper users.
Crapload of VMs.
Exchange, Sharepoint, AD, DHCP, DNS, Network Monitors, Web Servers, Remote Workstations, Phone systems, Accounting systems, various management systems and data warehouses.
Currently, it is being done with 2 EPYC 7551Ps, but they are nearing the end of warranty support, and they are big 240V 3U beasts. They are configured in a soft redundancy, so if one dies I can limp along: the jobs are split between them, and if one fails I lose at worst about 10 minutes' worth of data, with 3-5 minutes of downtime, but it leaves the system in an over-provisioned state, which isn't ideal but is still totally functional for a short period.
I would prefer to move over to an HCI configuration where the work is spread evenly across the 3-4 servers, but the data is actually stored and replicated across the whole stack in real time, so if one of the servers fails there is no data loss, and downtime is measured in seconds.
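A toy sketch of why synchronous replication gets you there (all names illustrative, not any particular HCI product): a write is acknowledged only after every node holds it, so losing one node loses no acknowledged data.

[CODE=python]
# Toy model of synchronous replication in a 3-node HCI cluster.
# Illustrative only; real products (vSAN, StarWind, Ceph, ...) differ.

class Node:
    def __init__(self, name):
        self.name = name
        self.blocks = {}        # block_id -> data
        self.alive = True

    def store(self, block_id, data):
        if not self.alive:
            raise ConnectionError(f"{self.name} is down")
        self.blocks[block_id] = data

def replicated_write(nodes, block_id, data):
    """Acknowledge only once every replica has stored the block."""
    for node in nodes:
        node.store(block_id, data)   # synchronous: wait for each replica
    return "ack"                     # now any single node can fail safely

cluster = [Node("srv1"), Node("srv2"), Node("srv3")]
replicated_write(cluster, 42, b"vm disk block")
cluster[0].alive = False                            # lose a server...
assert cluster[1].blocks[42] == b"vm disk block"    # ...no data lost
[/CODE]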
If I go with 3, I would prefer that all of them use the 64-thread, 256GB configuration (in a 1U or 2U chassis), and I could then get away with a standard 120V supply. I can spread the workload across all 3, and if I lose one, I am still not in an over-provisioned state.
If I go with 4, I can cut down on the individual core counts and memory and spread the load across all 4, and if 1 fails I am still not over-provisioned across the remaining 3; so instead of needing 3 32/64 units, I can go with 4 24/48s instead.
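The arithmetic behind that trade-off, as a quick sketch (the 64-core total workload is an assumed figure, just to make the numbers concrete):

[CODE=python]
# N+1 sizing sketch: how many cores per node so that the cluster still
# covers the whole workload after losing one node? The 64-core total
# workload is an assumption for illustration.

def cores_per_node(total_cores, nodes):
    survivors = nodes - 1
    return -(-total_cores // survivors)   # ceiling division

workload = 64
for n in (3, 4):
    print(f"{n} nodes: {cores_per_node(workload, n)} cores per node")
# 3 nodes: 32 cores per node -> three 32/64 units
# 4 nodes: 22 cores per node -> four 24/48 units, with headroom
[/CODE]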
But if I go with 4, then my networking hardware won't have the needed throughput, and that's where the RDMA-capable cards come in: you direct-connect each server to every other server via a dedicated link, then use standard networking equipment to connect the VMs to the main network switch. With that configuration, I know there are some fun things you need to do with subnetting on the individual network cards, but I am not 100% sure what they are, and the info out there that I can find is sparse, to say the least. That is more money than I want to play fuck-around-and-find-out with, as I would be under the gun to get it up and running. So that means bringing in a canned solution from Dell or HP, which wasn't previously possible because AMD was basically Lenovo-only for the Threadrippers, and blah blah blah.
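For what it's worth, the subnetting "fun" usually amounts to giving every point-to-point link its own tiny subnet, so the OS routes traffic for a given peer out the matching dedicated port instead of the general LAN. A sketch of one way to lay that out (addresses are made up; /31 point-to-point subnets per RFC 3021):

[CODE=python]
# Sketch: carve a /31 per direct link for a 4-server switchless full mesh.
# 4 hosts -> 6 point-to-point links, each on its own subnet, so storage
# traffic between two hosts is forced onto their dedicated cable.
# The 192.168.100.0/24 supernet is an arbitrary illustrative choice.

from ipaddress import ip_network
from itertools import combinations

hosts = ["srv1", "srv2", "srv3", "srv4"]
links = list(combinations(hosts, 2))   # 6 pairs

subnets = ip_network("192.168.100.0/24").subnets(new_prefix=31)
for subnet, (a, b) in zip(subnets, links):
    ip_a, ip_b = subnet   # a /31 holds exactly two usable addresses
    print(f"{a} <-> {b}: {subnet}  {a}={ip_a}  {b}={ip_b}")
[/CODE]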
Ahh, my bad. I tend to think of that kind of stuff as EPYC/Xeon server workloads, and Threadrippers as local workstations.
Is the traffic really that heavy between them? 10gig switches are quite reasonable these days. 25gig ones aren't far behind. Stick a dual-port SFP28 NIC in each and that ought to do it, right?
If I want it that way then yes: each server would contain all the data, and it would be replicated across each of them in real time, so any migration of a VM between them is virtually instant. That's where the RDMA network cards come in; they like to run 25 or 50Gbps for the “cheap” ones and upwards of 100 for the more expensive options.
RDMA lets the network card access storage and RAM without really bothering the CPU (the NIC moves the data itself, so there is no trip through the target's kernel network stack), so it works mostly in real time.
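A very loose analogy in code (purely conceptual; real RDMA is done by the NIC through queue pairs and registered memory regions via the verbs API, not in Python): a one-sided write lands bytes directly in the target's pre-registered memory, with no receive call ever running on the target's CPU.

[CODE=python]
# Conceptual toy only: mimics the *shape* of a one-sided RDMA write.

registered_region = bytearray(4096)    # target pre-registers this buffer

def rdma_write(region, offset, payload):
    """'NIC' places bytes straight into target memory; no recv() runs."""
    region[offset:offset + len(payload)] = payload

rdma_write(registered_region, 0, b"block 42")
assert registered_region[:8] == b"block 42"   # data landed, CPU untouched
[/CODE]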
Switches with say 10-12 SFP+ ports that also support RDMA… $$$$.
Cheaper to direct connect.
That's where the Mellanox stuff comes into play.
Interesting. This RDMA thing is a whole new rabbit hole I have not played with yet. Very interesting that it requires specialized switching. Bypassing the CPU would seem to be a server-side thing. Switching is just packets, and doesn't seem to have much to do with what happens once the packets get to their destination.
Or is it one of those proprietary licensing things, where for no good technical reason, if you want the specialty RDMA NICs you also have to overpay for switching as part of a package?
I hate that stuff.
The network adapters aren't bad; the switches are where the pain happens. The ConnectX-4 runs around $450, so 2 in a server comes out to ~$900, and 4 ports of SFP+ 10GB isn't much cheaper than that, really.
But the fun part of RDMA is the direct memory and storage access: it can pull from RAM or place things in RAM, same with HDD access, and it's definitely needed if you have GPU resources in there, as it lets the GPU VRAM sync directly as well. I don't need VRAM, though, and that eats bandwidth for breakfast.
But yeah, the Nvidia network switches aren't exactly cheap:
https://store.nvidia.com/en-us/netw...pectrum-25gbe-100gbe-1u-open-ethernet-switch/
And that is one of the "budget" models.
Do you need specialty switches, or will any SFP28/QSFP28 switches do?
Just about any will do. HPE Aruba makes a switch I can use and have it tie into my existing environment, but that pricing gets weird, because nobody pays HPE sticker price: they will advertise the switch I need at $40K and turn around and sell it for $6K plus licensing for their GreenLake management console.
Just needs to support RDMA and RoCE v1 and v2 (in practice that means data-center-bridging features like PFC/ECN so the fabric can run lossless, which is why not every SFP28 switch qualifies).
Linked the Nvidia one because their site is actually easy to navigate; the HPE site is a mess.
I’ll have to double check later, but I swear my Arista 7050QX-32S supports all that. I definitely have RDMA between my Hyper-V hosts for the StarWind VSAN setup I run. But my setup is a home lab, and I can’t afford anything other than used/EoL/EoS enterprise gear.
https://www.servethehome.com/amd-ry...000wx-at-96-cores-and-threadripper-7000-hedt/

WOW, this is too sexy. Is it real or fabricated? Somebody got a hold of them before Nov 21st?
I see. So this specific Threadripper already exists in this Dell workstation? They got a testing sample or something.
https://www.dell.com/en-us/blog/meet-dells-newest-workstation-featuring-96-cores/
Already sent my rep the question of "Can I get this in a 2U?"
Here's hoping they come back with a yes.
Damnit, Jim!

They used to sell rackmount versions of the Precision workstations that had Intel i-series processors in them (like the R3930), but it looks to be all Xeon (and a 2P configuration to boot) now. The T5820 and T7820 had rackmount conversion kits, taking 4U.
That would explain why Xeon-W and even Gen 13/14 1U chassis are relatively easy to come by.
AMD Threadripper PRO 7995WX CPU breaks Cinebench world records with air cooler at 102°C and 980W
https://videocardz.com/newz/amd-thr...orld-records-with-air-cooler-at-102c-and-980w
Wouldn't a modern dual EPYC system be faster even without overclocking?

Well, you've got to figure the new chip is running all-core at 4.8+ GHz, whereas max boost on EPYC is 3.1, plus you've got generational IPC improvements and probably a better memory controller, so I guess it's eking out a win. Not sure what numbers the EPYCs scored, and I'm pretty sure that record is just for "on air cooling" performance. Maybe EPYC wins on sub-zero OC? I don't know.
The R23 record they beat was from a dual EPYC 9654 setup; the 3990X with PBO could beat 2x EPYC 7742 in the past...
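Napkin math with the figures quoted above (illustrative only; it ignores IPC, memory bandwidth, and dual-socket scaling):

[CODE=python]
# Rough core-GHz comparison using the clocks cited in this thread.
threadripper = 96 * 4.8       # 7995WX: 96 cores at ~4.8 GHz all-core
dual_epyc    = 2 * 96 * 3.1   # 2x EPYC 9654: 192 cores at ~3.1 GHz
print(threadripper, dual_epyc)   # 460.8 vs 595.2 "core-GHz"
# On these quoted figures, raw core-GHz still favours the dual-EPYC box,
# so any 7995WX win has to come from clocks sustained above these numbers
# (the ~980 W record run), memory behaviour, or dual-socket overhead.
[/CODE]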
So, my gut is telling me to go with the Supermicro H13SRA-TF. Supermicro boards are always dependable, at least the workstation and server ones. No idea if it has enough VRM capacity to support overclocking, or if that is even supported in the BIOS. Which probably isn't a problem: I never overclocked my 3960X, and I doubt I'll overclock whatever 7000 series Threadripper I get.
The Asus Pro WS TRX50-SAGE is also very tempting though. The PCIe slot configuration on it is damn near perfect:
- 16x Gen 5
- Blank
- 16x Gen 5
- Blank
- 8x Gen 5
- 4x Gen 4
- 16x Gen 4
On the flip side, as opposed to the Supermicro board, the Asus only has one 10gig Ethernet port (the other is 2.5gig), where the Supermicro board has dual 10gig Ethernet ports, and it also only has one M.2 slot, where the Supermicro has 2.
If I don't downsize my current M.2 drives, I'd need a 4-drive 16x expansion card on either one of these boards (one of the x16 slots running bifurcated x4/x4/x4/x4). I'd need my dual-port 10gig NIC on the Asus, but not on the Supermicro.
This kind of evens it up a little bit. The Asus is a little bit more compelling from a future-upgradeability perspective: if I ever upgrade to dual SFP28, I'll need a NIC on either one, and in that case the Asus will have more free slots.
In the end, it will probably come down to what I can find in stock and at what price, but these offerings are certainly interesting.
I presume all of the manufacturers of systems and boards must have had pre-release samples for validation purposes. I guess this one leaked a little early?

I imagine this was a promotion organized by Dell, not them hacking their system to remotely test a workstation.
Supermicro, if you want to run everything stock.
It's fun!
Aruba is getting me the numbers on an 8000 series switch that does what I need, that will let me manage the unit in Aruba Central, and that I can monitor and secure via ClearPass. But I have also put into my procurement requirements for the new servers that they have the capability of running direct-connect in a switchless configuration.
From your comments this sounds like yours is a corporate environment, but I'd recommend a 40Gb switch that supports that stuff and 50/100GbE NICs, as they should support 40Gb mode, and maybe you can swing a faster switch later.
If you’re not opposed to used, definitely trawl eBay for used Arista gear; if it’s not EoS they might let you buy support for it.
That MSN2010 you linked up thread runs around $4k on eBay in the US.