AMD Ryzen Threadripper 7000 Series Lineup Revealed

In my case it's compiling (open source) software. Building all of FreeBSD with itself, or compiling the Linux kernel, can really put the cores to use.
So what do the guys who create distros use for creating a new release?
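For a sense of how those builds eat cores: kernel and FreeBSD world builds are driven almost entirely by the parallel job count, so the win from a Threadripper is roughly linear until I/O or the link step gets in the way. A rough sketch of kicking one off from Python; the source path and targets are placeholders, not anything from this thread:

```python
# Minimal sketch: drive a parallel kernel build sized to the machine's core count.
# SRC_TREE is a hypothetical checkout location; "make -j" is the knob that lets
# a Threadripper-class part throw all of its logical CPUs at the compile.
import os
import subprocess

SRC_TREE = "/usr/src/linux"      # placeholder path
jobs = os.cpu_count() or 1       # e.g. 64 logical CPUs on a 32-core 7975WX

subprocess.run(["make", f"-j{jobs}", "defconfig"], cwd=SRC_TREE, check=True)
subprocess.run(["make", f"-j{jobs}"], cwd=SRC_TREE, check=True)
print(f"built with {jobs} parallel jobs")
```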
 
I need memory channels; 3, maybe 4, of the 7975WX units would cover my needs nicely.
I need to look more into switchless networking though. I would need each of them directly connected, and the aggregator switches needed to cover the load of 4 of them would not be something I'd want to pay for.
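The port math for going switchless is easy to sketch: a full mesh of N hosts needs N-1 dedicated ports per host and N(N-1)/2 cables total, which is why it only stays sane at 3-4 nodes. A quick back-of-envelope; the node counts are the 3-4 figure from above, everything else is illustrative:

```python
# Back-of-envelope for a switchless full mesh: every host gets a dedicated
# link to every other host, so no aggregation switch sits in the storage path.
def mesh_requirements(nodes: int) -> tuple[int, int]:
    ports_per_node = nodes - 1              # one dedicated port per peer
    total_links = nodes * (nodes - 1) // 2  # each cable is shared by two hosts
    return ports_per_node, total_links

for n in (3, 4):
    ports, links = mesh_requirements(n)
    nics = -(-ports // 2)  # ceiling division: dual-port NICs needed per host
    print(f"{n} nodes: {ports} ports/host ({nics} dual-port NICs), {links} cables")
# 3 nodes: 2 ports/host (1 dual-port NIC), 3 cables
# 4 nodes: 3 ports/host (2 dual-port NICs), 6 cables
```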

What... do you do with them?


And why?
 
Crapload of VMs.
Exchange, SharePoint, AD, DHCP, DNS, network monitors, web servers, remote workstations, phone systems, accounting systems, various management systems, and data warehouses.

Currently it is being done with two EPYC 7551Ps, but they are nearing the end of warranty support and they are big 240V 3U beasts. They are configured in a soft-redundancy setup so that if one dies I can limp along: the jobs are split between them, and if one fails I lose at worst about 10 minutes' worth of data with 3-5 minutes of downtime, but it leaves the system in an over-provisioned state, which isn't ideal but still totally functional for a short period.

I would prefer to move over to an HCI configuration where the work is spread evenly across the 3-4 servers and the data is stored and replicated across the whole stack in real time, so if one of the servers fails there is no data loss and downtime is measured in seconds.

If I go with 3, I would prefer that all of them use the 64-thread, 256GB configuration (in a 1U or 2U chassis), and I could then get away with standard 120V power. I can spread the workload across all 3, and if I lose one I am still not in an over-provisioned state.
If I go with 4, I can cut down on the individual core counts and memory, spread the load across all 4, and still not be over-provisioned across the remaining 3 if one fails; so instead of needing three 32-core/64-thread units, I can go with four 24-core/48-thread units instead.
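The 3-node vs. 4-node sizing is really just N+1 math: after a failure, the surviving nodes have to absorb the dead node's share. A small sketch using the core counts mentioned here; the ~60-core load figure is an illustrative assumption, not a number from this thread:

```python
# N+1 sizing sketch for the configurations discussed above: after a node dies,
# can the survivors absorb its share of the load?
def failover_headroom(nodes: int, cores_per_node: int, load_cores: int) -> None:
    survivors = nodes - 1
    per_survivor = load_cores / survivors
    verdict = "fits" if per_survivor <= cores_per_node else "over capacity"
    print(f"{nodes} x {cores_per_node}-core nodes, {load_cores} cores of load: "
          f"{per_survivor:.1f} cores per survivor -> {verdict}")

LOAD = 60  # illustrative steady-state load in cores, not a measured number
failover_headroom(nodes=2, cores_per_node=32, load_cores=LOAD)  # today's 7551P pair: over capacity
failover_headroom(nodes=3, cores_per_node=32, load_cores=LOAD)  # 30.0 per survivor: fits
failover_headroom(nodes=4, cores_per_node=24, load_cores=LOAD)  # 20.0 per survivor: fits
```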
But if I go with 4, my networking hardware won't have the needed throughput, and that's where the RDMA-capable cards come in: you direct-connect each server to every other server via a dedicated link, then use standard networking equipment to connect the VMs to the main network switch. With that configuration I know there are some fun things you need to do with subnetting on the individual network cards, but I'm not 100% sure what they are, and the info out there that I can find is sparse, to say the least. That is more money than I want to play fuck-around-and-find-out with, since I would be under the gun to get it up and running. So that means bringing in a canned solution from Dell or HP for this, which wasn't previously possible because AMD was basically Lenovo-only for the Threadrippers, and blah blah blah.
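On the subnetting piece: the usual trick for a switchless mesh (this is a general sketch, not a vendor recipe) is to give every point-to-point link its own tiny subnet, typically a /31 or /30, so the replication traffic is pinned to its dedicated cable and never tries to route out the front-end NICs. Roughly, using Python's ipaddress module with made-up host names and an arbitrary private range:

```python
# Hypothetical addressing plan for a 4-node switchless mesh: each direct link
# gets its own /31, so storage/replication traffic stays on its dedicated cable
# and never hairpins through the front-end switch.
from ipaddress import ip_network
from itertools import combinations

NODES = ["node1", "node2", "node3", "node4"]                   # made-up host names
POOL = ip_network("192.168.100.0/24").subnets(new_prefix=31)   # carve out /31s

for (a, b), link in zip(combinations(NODES, 2), POOL):
    ip_a, ip_b = link.hosts()  # a /31 has exactly two usable addresses (RFC 3021)
    print(f"{a} <-> {b}: {a}={ip_a}/31  {b}={ip_b}/31")
```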
 
Usually sponsored servers, cloud or otherwise. They actually don't care about single-core speed.

Linus Torvalds is one of the original Threadripper users.
That right there is why ARM does as well as it does in the datacenter market. Want to host a billion websites on a single stack? Having a few thousand cores in that stack goes a long way to making it happen, especially if there are purchases being made on those sites. None of that is CPU-intensive, especially with dedicated accelerators handling the encryption, but it is a lot of simultaneous work, and there, more is better.
 

Ahh, my bad. I tend to think of that kind of stuff as EPYC/Xeon server workloads, and Threadrippers as local workstations.

Is the traffic really that heavy between them? 10 gig switches are quite reasonable these days. 25 gig ones aren't far behind. Stick a dual-port SFP28 NIC in each and that ought to do it, right?
 
If I want it that way, then yes: each server would contain all the data, replicated across each of them in real time, so any migration of a VM between them is virtually instant. That's where the RDMA network cards come in; they like to run 25 or 50Gbps for the “cheap” ones and upwards of 100 for the more expensive options.

RDMA lets the network card access storage and RAM without really bothering the CPU, so it works mostly in real time.

Switches with say 10-12 SFP+ ports that also support RDMA… $$$$.
Cheaper to direct connect.

That's where the Mellanox stuff comes into play.

Like 2 of these for each.
https://www.nvidia.com/en-us/networking/ethernet/connectx-4-lx/
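To put those speeds in perspective, here is rough arithmetic for how long moving a VM's memory or resyncing a chunk of storage would take at each tier; the 80% efficiency factor and the payload sizes are illustrative assumptions, not measurements:

```python
# Rough arithmetic only: time to push a given amount of data over each link tier.
# Assumes ~80% of line rate is usable; real RDMA/RoCE numbers depend on MTU,
# congestion control, and how busy the storage is.
LINK_GBPS = [10, 25, 50, 100]
PAYLOADS_GB = {"64 GB VM": 64, "256 GB host resync": 256}

for label, gb in PAYLOADS_GB.items():
    for gbps in LINK_GBPS:
        usable = gbps * 0.8            # assumed usable fraction of line rate
        seconds = (gb * 8) / usable    # GB -> gigabits, then divide by rate
        print(f"{label} over {gbps:>3} Gbps: ~{seconds:6.1f} s")
```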
 

Interesting. This RDMA thing is a whole new rabbit hole I have not played with yet. Very interesting that it requires specialized switching. Bypassing the CPU would seem to be a server-side thing. Switching is just packets, and doesn't seem to have much to do with what happens once the packets get to their destination.

Or is it one of those proprietary licensing things where, for no good technical reason, if you want the specialty RDMA NICs you also have to overpay for switching as part of a package?

I hate that stuff.
 
The network adapters aren't bad; the switches are where the pain happens. The ConnectX-4 runs around $450, so 2 in a server comes out to ~$900, and 4 ports of SFP+ 10Gb isn't much cheaper than that, really.
But the fun part of RDMA is the direct memory and storage access: it can pull from RAM or place things in RAM, and the same goes for HDD access. It's definitely needed if you have GPU resources in there, as it lets the GPU VRAM sync directly as well. I don't need the VRAM piece though, and that eats bandwidth for breakfast.


But yeah, the Nvidia network switches aren't exactly cheap:
https://store.nvidia.com/en-us/netw...pectrum-25gbe-100gbe-1u-open-ethernet-switch/
And that is one of the "budget" models.
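Rough cost tally of the two approaches, using the ~$450 ConnectX-4 figure from this post; the switch price is a placeholder to plug your own quote into, not a real number:

```python
# Cost sketch for switchless mesh vs. switched RoCE, 4 nodes.
NIC_PRICE = 450               # ConnectX-4 Lx, per the figure above
NODES = 4
DUAL_PORT_NICS_PER_NODE = 2   # 4 ports/host: 3 mesh links plus a spare/uplink

mesh_nic_cost = NODES * DUAL_PORT_NICS_PER_NODE * NIC_PRICE
print(f"switchless mesh NICs: ${mesh_nic_cost:,}")        # $3,600

SWITCH_PRICE = 10_000  # assumption only: whatever an RoCE-capable switch quotes at
switched_cost = NODES * 1 * NIC_PRICE + SWITCH_PRICE      # one NIC per node + switch
print(f"switched alternative: ${switched_cost:,}")
```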

But yeah, the problem is that EPYC is the "correct" part for me to be using, but anything with the core count I need is 10x more powerful than it actually needs to be, as well as $20K more expensive than I could get sign-off on. EPYC just does too much heavy lifting, so the Threadripper Pros are the happy medium there.
 

Do you need specialty switches, or will any SFP28/QSFP28 switches do?
 
Just about any will do. HPE Aruba makes a switch I can use and have tie into my existing environment, but the pricing gets weird because nobody pays HPE sticker price: they will advertise the switch I need at $40K, then turn around and sell it for $6K plus licensing for their GreenLake management console.
It just needs to support RDMA and RoCE v1 and v2.

I linked the Nvidia one because their site is actually easy to navigate; the HPE site is a mess.
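As a side note, if you want a quick host-side sanity check on Linux (Windows/Hyper-V has its own PowerShell tooling for this), RDMA-capable adapters register themselves under /sys/class/infiniband once the driver is loaded. A minimal sketch:

```python
# Minimal Linux-side check: RDMA-capable NICs (mlx5, bnxt_re, etc.) register
# devices under /sys/class/infiniband once their kernel driver is loaded.
# This only confirms the host side; RoCE still needs the NIC/switch DCB config.
from pathlib import Path

ib_root = Path("/sys/class/infiniband")
if not ib_root.is_dir():
    print("no RDMA devices registered (or rdma drivers not loaded)")
else:
    for dev in sorted(ib_root.iterdir()):
        # each device exposes its ports; link_layer reads "Ethernet" for RoCE
        for port in sorted((dev / "ports").iterdir()):
            link_layer = (port / "link_layer").read_text().strip()
            print(f"{dev.name} port {port.name}: link_layer={link_layer}")
```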
 
Assuming that this would compete favorably with, or come out above, the 7950X3D in terms of single/few-thread clock speed when overclocked and overall performance, there's a part of me that is SORELY tempted. I'm a long-time HEDT guy, from X58 to X99 etc., so seeing HEDT coming back onto the scene conceptually is nice. Unfortunately, given the prices of the TR, much less the PRO chips, I'm not sure I can really justify it. I'd love the additional PCIe lanes, have enjoyed quad-channel RAM in the past, and really like how (unlike Intel) AMD seems to be opening things up to allow for registered memory and ECC as well as full capability (even allowing you to slot a PRO into a TRX50 board if you want); I'm also wondering if Threadripper will be faster to gain access to AMD's planned switch to OpenSIL (FOSS, coreboot-friendly init microcode and firmware). It would also get me away from any potential scheduler issues with the 7950X3D, given that these TR and TR PRO chips swing a lot of cache around and have full access to it!

Still, it means a minimum $1500, and probably $2500, CPU, or up to $5000 if I want even more cache, plus whatever the mobos will cost. I am wondering if Asus will roll out a parallel to their very well-received Rampage/Dominus or Zenith series HEDT boards, but I'm sure it won't be cheap either way, easily $1000+. It's the kind of upgrade I'd certainly LIKE to have if we finally get back to the days of HEDT being an all-around enthusiast performance increase over the mainstream, as opposed to purely highly-parallelized-workload-exclusive kind of stuff, but we've yet to see if that's the case. Either way, the cost will be significantly above what it used to be, which, even if one can afford it, does call for a harder look at "want vs need". Still, it's nice to even see these questions worth asking again, and it remains to be seen if I'm going to talk myself into it ^_^;;
 
Just about any will do. HPE Aruba makes a switch I can use and have tie into my existing environment, but the pricing gets weird because nobody pays HPE sticker price: they will advertise the switch I need at $40K, then turn around and sell it for $6K plus licensing for their GreenLake management console.
It just needs to support RDMA and RoCE v1 and v2.

I linked the Nvidia one because their site is actually easy to navigate; the HPE site is a mess.

Do you need layer 3 features in these switches?

Mikrotik has a few that are VERY reasonable by comparison. They can max out all ports in layer 2, but the CPUs aren't strong enough to handle anything but token routing.

They don't have a lot of SFP28 stuff yet (only a small number of routers) but I expect that to trickle into the switch products in the not too distant future.

I've been running a 16-port SFP+ switch of theirs for years, as well as three 24-port gigabit copper switches with dual SFP+ uplinks. They have some QSFP+ switches too.

That 16-port SFP+ switch cost me only ~$600, over 3 years ago.

So, if 10/40 can work, and you don't absolutely need 25/100 (and you don't need layer 3), they can do the job right now.
 
Not sure I would be up for Threadripper 7000; is there a guarantee that Threadripper 7000 is compatible with 8000 or, hopefully, 9000? I am feeling mighty burned by Threadripper 3000 not being upgradable to Threadripper 5000. I do like lots of cores for creative work and VMs, though. I am also still on Xeon v2s for my server because the upgrade cost was kinda high, but I'm looking to upgrade this Black Friday. I may get a new workstation in 2025 when we have to, because Windows 10 is being sunset.
 
Just about any will do. HPE Aruba makes a switch I can use and have tie into my existing environment, but the pricing gets weird because nobody pays HPE sticker price: they will advertise the switch I need at $40K, then turn around and sell it for $6K plus licensing for their GreenLake management console.
It just needs to support RDMA and RoCE v1 and v2.

I linked the Nvidia one because their site is actually easy to navigate; the HPE site is a mess.
I’ll have to double check later but I swear my Arista 7050QX-32S supports all that. I definitely have RDMA between my Hyper-V hosts for the Starwind VSAN setup I run. But my setup is a home lab and I can’t afford anything other than used/EoL/EoS enterprise gear.

From your comments it sounds like yours is a corporate environment, but I'd recommend a 40Gb switch that supports that stuff and 50/100GbE NICs, as they should support 40Gb mode and maybe you can swing a faster switch later.

If you’re not opposed to used, definitely trawl eBay for used Arista gear; if it’s not EoS they might let you buy support for it.

That MSN2010 you linked upthread runs around $4k on eBay in the US.
 
Oh, also, the Mellanox SN2700, 32x100GbE, can be had for sub-$2k on eBay US, though you may need to acquire software as a separate purchase.
 
For 25Gb I haven't had issues with the BCM957414A4142CC cards I've used, and they're cheap as dirt, but I'm not actually pushing them at all (running 10Gb at the moment). The same Broadcom chips are in my Dells too.
 
Already sent my rep the question of "Can I get this in a 2U?"
Here's hoping they come back with a yes.

I'm guessing no, on the basis of it being the wrong product line for a rackmount config. AMD doesn't include server OSes in their driver packages for TR chipsets, so big OEMs are going to stay within those bounds and would steer customers to EPYC for any server needs.

FWIW, this is one area where Intel is a bit more flexible: server OSes are supported in their chipset drivers for Xeon workstations.
 
Damnit Jim!
That would explain why Xeon-W and even Gen 13/14 1U chassis are relatively easy to come by.
 
They used to sell rackmount versions of the Precision workstations that had Intel i-series processors in them (like the R3930), but it looks to be all Xeon (and a 2P configuration to boot) now. The T5820 and T7820 had rackmount conversion kits, taking 4U.
 
Wouldn't a modern dual EPYC system be faster even without overclocking?
Well, you've got to figure the new chip is running all-core at 4.8+ GHz whereas max boost on EPYC is 3.1, plus you've got generational IPC improvements and probably a better memory controller, so I guess it's eking out a win. Not sure what numbers the EPYCs scored, and I'm pretty sure that record is just for "on air cooling" performance. Maybe EPYC wins with a sub-zero OC? Idk.
 
The R23 record they beat was from a dual EPYC 9654 setup; the 3990X with PBO could beat 2x EPYC 7742 in the past...

950W-980W was maybe more than, or similar to, the dual stock EPYC power usage, and Cinebench does not reward memory bandwidth much, basically not at all past a fairly low amount (it does not saturate a mid-range dual-channel DDR4 setup), which removes a lot of the EPYC system's advantage here.
 


So, my gut is telling me to go with the Supermicro H13SRA-TF. Supermicro boards are always dependable, at least the workstation and server ones. No idea if it has enough VRM capacity to support overclocking, or if that is even supported in the BIOS, though. Which probably isn't a problem: I never overclocked my 3960X, and I doubt I'll overclock whatever 7000-series Threadripper I get.

The Asus Pro WS TRX50-SAGE is also very tempting though. The PCIe slot configuration on it is damn near perfect:

- 16x Gen 5
-Blank
- 16x Gen5
-Blank
- 8x Gen5
- 4x Gen4
-16x Gen4

On the flip side, it only has one 10 gig Ethernet port (the other is 2.5 gig) where the Supermicro board has dual 10 gig Ethernet ports, and it also only has one M.2 slot where the Supermicro has 2.

If I don't downsize my current M.2 drives, I'd need a 4-drive x16 expansion card on either one of these boards. I'd need my dual-port 10 gig NIC on the Asus, but not on the Supermicro.

This kind of evens it up a little bit. The Asus is a little bit more compelling from a future upgradeability perspective. If I ever upgrade to dual SFP28, I'll need a NIC on either one, and in that case the Asus will have more free slots.

In the end, it will probably come down to what I can find in stock and at what price, but these offerings are certainly interesting.
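To make the slot-budget comparison concrete, here is a quick tally of the cards mentioned above against the ASUS layout; the electrical widths assumed for the add-in cards are guesses for illustration, so check the actual manuals:

```python
# Quick slot-budget tally for the ASUS layout listed above. The card list is
# from the post (quad-M.2 x16 carrier, dual-port 10G NIC, maybe a future SFP28
# NIC); the widths of the add-in cards are assumptions.
ASUS_SLOTS = [16, 16, 8, 4, 16]    # usable slots from the post, by lane count
CARDS = [                          # (name, lanes the card wants)
    ("quad-M.2 x16 carrier", 16),
    ("dual-port 10G NIC", 8),      # assumed x8 card
    ("future SFP28 NIC", 8),       # assumed x8 card
]

free = list(ASUS_SLOTS)
for name, want in CARDS:
    # best fit: smallest remaining slot that is still wide enough
    slot = min((s for s in free if s >= want), default=None)
    if slot is None:
        print(f"{name}: no slot wide enough left")
    else:
        free.remove(slot)
        print(f"{name}: fits in an x{slot} slot")
print("slots left over:", ", ".join(f"x{s}" for s in sorted(free, reverse=True)))
```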
 

I don't know if I missed this earlier, but it looks like the motherboard prices have started to pop up.

The Asus one has popped up on the Asus store for $1,299.99. Sheesh.

https://www.asus.com/us/motherboards-components/motherboards/workstation/pro-ws-w790e-sage-se/

Edit:

Never mind. I was on the international page and clicked the link for the U.S. page, and in the process it somehow took me from the Threadripper board to an Intel W790 board and I didn't notice. That said, they seem to be similar in feature set, so I wouldn't be surprised if the pricing is similar.
 
I presume all of the manufacturers of systems and boards must have had pre-release samples for validation purposes. I guess this one leaked a little early?
I imagine this was a promotion organized by Dell, not them hacking their system to remote-test a workstation.
 
Supermicro if you want to run everything stock.
ASUS if you want to OC and tinker.

Now get out of my office!
 
Interesting. This RDMA thing is a whole new rabbit hole I have not played with yet. Very interesting that it requires specialized switching. Bypassing the CPU would seem to be a server-side thing. Switching is just packets, and doesn't seem to have much to do with what happens once the packets get to their destination.

Or is it one of those proprietary licensing things where, for no good technical reason, if you want the specialty RDMA NICs you also have to overpay for switching as part of a package?

I hate that stuff.
It's fun!
And I'm looking hard at the SAGE. Love my Zenith II so far - wish they had more bling (yeah, I said it) to make it interesting to look at :p
 
Aruba is getting me the numbers on an 8000-series switch that does what I need and that will let me manage the unit in Aruba Central and monitor and secure it via ClearPass. But I have also put into my procurement requirements for the new servers that they have the capability of running direct-connect in a switchless configuration.
So I have a meeting in early Dec where they are going to run some metrics on my existing setup to verify IOPS and other measurables, to ensure their build is in the right ballpark and they aren't just throwing hardware at me, but they are recommending switchless configs for stacks of 3 or fewer. I look forward to seeing what HPE proposes for the servers; I haven't played with one of those for a long time.
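If you want a ballpark IOPS number of your own before that meeting, a throwaway random-read loop like the sketch below gives a rough ceiling, though the page cache will flatter it badly; proper sizing uses fio or diskspd against the real volumes. The file path is a placeholder:

```python
# Throwaway random-read IOPS estimate. The page cache will inflate the result,
# so treat it as a ceiling; proper measurement uses fio/diskspd with O_DIRECT
# against the real volumes. PATH is a placeholder for a large pre-existing file.
import os
import random
import time

PATH = "/data/testfile.bin"   # hypothetical; should be much larger than RAM
BLOCK = 4096
DURATION_S = 10

size = os.path.getsize(PATH)
fd = os.open(PATH, os.O_RDONLY)
ops = 0
deadline = time.monotonic() + DURATION_S
while time.monotonic() < deadline:
    os.pread(fd, BLOCK, random.randrange(0, size - BLOCK))
    ops += 1
os.close(fd)
print(f"~{ops / DURATION_S:,.0f} random 4K read IOPS (cached ceiling)")
```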
 