MooCow
[H]F Junkie
- Joined
- Apr 13, 2000
- Messages
- 8,252
Yeah I hate how high end boards have features I don't care about, like fuckin.. built in WiFi. Like bitch, it's a fucking desktop. Who does competitive multiplayer gaming over WiFi, god damnit..
Exactly. I thought I would be ok spending $279 on a board to satisfy my nerdy needs. Nope, I was wrong. I need to spend $500 to get a fully equipped board with all the latest features on it, and this isn't even considered high end, more like mid-to-high end. At least I know I can connect any M.2, any GPU, any USB, any speed internet, and overclock my 12th gen 12700KF to 5.2 all-core with ease, or grab the latest 13th gen CPU and clock it to the absolute maximum, pushing 6 GHz on a core or two with "instant 6GHz" mode in the BIOS. And that's not even talking about the coming 13900KS, which should clock higher than any other CPU, with higher clocks on more cores. The VRM and heatsink package on higher end boards like the Z790 Aorus Master can handle two or three times the maximum power a 13900K can push, so the board is essentially bomb proof.

The only problem with the above post is that prices of motherboards of a quality comparable to a $150 motherboard of a decade ago have crept significantly upwards. You see, today's mainstream CPUs are much, much harder on a motherboard's VRMs than a six-core HEDT CPU of a decade ago ever was. One would really need to spend $220 just for a motherboard that matches yesteryear's $120 motherboard in relative quality.
This is exactly why my most recent motherboards cost nearly $300. But while the motherboard in my previous AMD system had RGB LEDs on the I/O enclosure cover, my current system's Intel motherboard has no onboard RGB LEDs but will accommodate RGB memory DIMMs and RGB fans.
Overclocking is also increasing power limits and boost settings even if you don't increase the peak clock speed. That still provides a performance boost without requiring stability testing. You need a high-end board for that.
In answer to OP's question, I'd say it's worth buying the lowest-end board for the highest-end chipset. In my experience, the higher end you go, the more unnecessary features, blingy lights, racing stripes and cheap plastic shrouds there are, and the price climbs far faster than the feature set. Moreover, paying top dollar for better VRMs that will let you overclock 2-3% higher is completely not worth your money. That said, if you step down an entire chipset tier, you'll get a materially reduced feature set.
Get an x/z670 Asus Prime and be done with it.
That, and looking over the actual VRM setup (basically what Buildzoid does).

No you don't. You just have to do some research and see what boards support changing these (see the B660 Mortar board I was talking about earlier).
Some yes, some no. As for the pricing - not quite how my budgeting works, or my use cases work. I build a high end workstation and high end gaming system every 3-4 years, and then some lower-end "used" kit to fill in gaps. After that 3 year mark, high-end stuff drops down to the tier-2 uses, tier-2 to tier-3, and so on - till it dies, or has no easy "drop down" to fit into. Current workstation is a 3960X w. 128G on a Zenith II Extreme Alpha - it's showing no signs of slowing down and is approaching the 30 month old mark, so it'll keep going (unless Storm Peak is amazing) - same for the Gaming box, which is a 10700K w. 3090 on custom water (same age, plus a month).

Really cheap boards also work well a decade later too.
When it comes to counting on great longevity, the question easily becomes one of price, since prices have gone up.
Are you better off now (and on average over the span of use) with that $400 board bought in 2012, or with a $160 board in 2012 and a $260 board in 2017? (Or $130-$150-$170 boards in 2012, 2016, 2022.)
For situations where changing computers is a really big deal it can make a lot of sense, but I'm not sure longevity works out that well; you'd better have needed those features right from the start for them to make sense.
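The trade-off above can be sketched with some rough arithmetic. All figures are the hypothetical examples from this post, not real market data, and this ignores that the cheaper-replacement strategies also get newer chipsets and features mid-span:

```python
# Rough cost-per-year comparison between one expensive "longevity" board
# and cheaper boards replaced more often. All prices are the hypothetical
# examples from the post above.

def cost_per_year(purchases, span_years):
    """Total spend divided by the years the strategy covers."""
    return sum(purchases) / span_years

# Strategy A: one $400 board bought in 2012, kept for 10 years.
a = cost_per_year([400], 10)

# Strategy B: $160 board in 2012, replaced by a $260 board in 2017,
# covering the same 10-year window.
b = cost_per_year([160, 260], 10)

# Strategy C: $130 / $150 / $170 boards bought in 2012 / 2016 / 2022.
c = cost_per_year([130, 150, 170], 10)

print(f"A: ${a:.0f}/yr  B: ${b:.0f}/yr  C: ${c:.0f}/yr")
# → A: $40/yr  B: $42/yr  C: $45/yr
```

On raw dollars the strategies land within a few dollars a year of each other, which is the poster's point: the premium board only wins if you actually needed its features for the whole span.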
Buying 10G cards eats a slot, and that means one less slot for SAS controllers or Optane cards. Buying a DAC means one less USB port for other things, which are sometimes at a premium in my setups, and it's something I have to buy multiples of instead of just leaving it built in. Also I'm generally using 2x10G pretty fast, and that gets expensive if one isn't built in to start with (doesn't save you anything, especially since ESXi is picky about cards).

If you use 10 gig right away and there is no cheap board with it, the question becomes more whether it's better to re-buy 10 gig and good audio every generation, or to invest one time in a 10G card and a DAC.
Also, what does the B box do? For example, on a Plex media box you probably don't want to run a sustained OC, and there's a chance that with the money saved one could have built a better Plex machine (one that uses less electricity, has a nice iGPU with modern codec support, etc.). Lots of people run Plex on very cheap, very old motherboards (I do). Five years old is kind of "new".
There is comfort and fun in it, though (and it saves trouble).
You sound like you actually use the feature from the get go and would use them regardless of longevity. (And I would imagine once it become a B-C box all those issue dissapear anyway), it is almost a different conversation.Buying 10G cards eats a slot, that means one less slot for SAS controllers or Optane cards. Buying a DAC means one less USB port for other things, which are sometimes at a premium in my setups, and is something I have to buy multiple of instead of just leaving built-in.
B-C boxes sometimes do audio, sometimes not - but they definitely start using slots like mad (part of why I tend to buy HEDT - my old x99 box has a 10G card, basic GPU, two SAS controllers, and if it could take one more, I'd feed it an optane drive for cache ). I'm definitely an edge case - but having features gives me reason to find uses for those features, and once found, it's hard to let go of them (one of the reasons I'm praying for next-gen HEDT still). I'm honestly happy so many things get built into motherboards now - it keeps the external parts down to just what I need, and I can rely on quality parts on the board (if you buy them as part of it) that will last the life of the board. A good built-in DAC is a good built-in DAC, and they do exist, and that means no matter where that system goes, it has that capability. A cheap one means that it might suddenly need an external device (or card) to accomplish a task. All about flexibility. But again - I'm an edge case. Most people have one or two systems - there are 8 in my game room alone.
Agree and this is another reason I don't think high end consumer platform motherboards are a good deal. All these modern platforms lack slots. Gone are the days where I could buy a board with 8 slots to plug stuff into. Today, I get perhaps 4... and only 2 or 3 usable given how large GPUs have gotten and where toasty M.2 slots are located. A very few of them do embed exciting features (isolated sound circuitry or DAC; 10GbE controllers) but the price premium for what should be a lower cost, integrated solution is such that adding cards would be less expensive--and at least carry over to new platforms. In our recent upgrade of my wife's PC to a 13900K, it was nice being able to just move a dual-port 10GbE Intel card from her old PC to the new one, without worrying about having to restrict motherboard selection to that tiny, expensive minority that features onboard 10GbE.
I just got given a Nuc 12 Extreme to do some testing on, and I'm already wondering if the two thunderbolt ports are enough (drive enclosure, PCIE enclosure for compatible 10G dual port card, second drive enclosure). And I'm trying to get FreeBSD running on it right now which is HILARIOUS, but that's a separate story (might need a third thunderbolt drive just for the install).
I do see the point in "move over the card" - but a lot of the time that system still needs said card (since it's sticking around, just doing something ~new~), so I'd be constantly buying dual or single port 10G cards. Or DACs. Etc. If it's built in, it just goes where the box goes - and especially things like networking cards are ALWAYS needed.
I'd like to see 'high end' platforms have 'high end' I/O flexibility as was the case in the past but I suspect those days are gone. TBH even my TRX40 system has very little PCIE expansion card flexibility despite the enormous amount of PCIE lanes at its disposal. The physical boards simply lack the slots.
I miss having this kind of I/O:
https://www.bhphotovideo.com/c/product/1625507-REG/asus_pro_ws_wrx80e_sage_se.html
Oh, and I'd buy a Sage, but that was just Zen3 - I want Zen4/5 Threadripper, plz!
Hey lopoetve what do you use all the hardware for?

Do you want a simple answer, or a detailed one?
At this point, detailed since you have so much hardware lol.
By Tier / hostname / time of year:
hahahaha... upgrade your wife's computer, jeez!
Room 1:
T1 - Forge (3960X/128G/6800XT) - Nested virtualization + windows workstation (photos/video/office tasks) + secondary gaming / Offline in summer (650W under load makes a LOT o' heat, since it's at 4.4Ghz all core ). Nested host gets 8c/64G.
T1 - Soverign (10700K/32G/3090 all on custom loop) - Gaming box/entertainment Box, no work stuff allowed (loaner for friends when they come over).
T2 - Spartan (1950X/64G/2080TI) - Control Center + Plex + 4k gaming (couch and TV) + DC + vCenter + PFSense + Nested ESXi host (basically the top of rack stuff + bits and bobs) / Same thing in summer. Nested host gets 6c/32G.
T2 - Hoplite (3950X/64G/3070) - ESXi host + VR Gaming (reboot script) / Same thing in summer.
T2 - Cataphract (6900K/64G / 2xSAS / 8 1T SSDs) - Storage box and ESXi host year round.
T2 - Gladiator / Praetorian (1700X/10900K ITX boxes 64G) - Both straight ESXi hosts, compute only, run home stuff and lab management software year round.
T1 - Kali (Nuc 12 Extreme w. 12900/3060TI) - FreeBSD Dev box (working on getting it working) / summer occasional 1080P gaming box (doesn't put out much heat, unlike Forge/Sovereign), if I want to go hide in the man cave.
Room 2:
T2 - Legion (10980XE/128G/3080) - Winter ESXi host, Summer it's the Linux version of Forge (dual boots to windows for gaming or windows software). Also where I test Linux software, since in ESXi it boots the local NVMe as a VM so I always have a Linux box around.
T3 - Hun (7940X/128G/480) - Year round ESXi host running an S3 target for deep archive (bunch o' spinners) for the lab, plus passthrough of the 480 as an emulation / arcade cabinet (plan is for it to be all hidden inside!). Under construction once I decide on upgrading spartan or not.
T3 - Zizka (3400G/16G) - Small NAS w. Optane SLOG/Spinners for feeding Legion and Hun with storage. Also runs a PFSense box to link to the others.
Room 3:
Tmixed - Excalibur/Durandal/Mjolnir/Masamune/Rifle - ESXi hosts running production-esque workloads. Masamune is the equal to Spartan from above (controls the DC room).
T3 - Dreadnought - Older storage server feeding the 4 above (mjolnir also does, using mixed Optane/8T spinners).
Not-Named - Backup and security appliance from the company I work for currently.
Wife's:
T2 - Normandy (6700K/1080) - her VR and gaming box.
Explaining: I used to work for VMware back in the day - still have a lot of contacts and do a lot of tinkering and bleeding edge work for them, and after that I worked for Dell - I run what would be considered a MASSIVE home lab (this is one of multiple sites) - I'm the archive and prototype location before we push software out to the other sites. I'm also working on a bit of FreeBSD dev work (bored), VMware dev work (less bored, more work), and then work for ... work (they subsidize part of the power), which is in the security and backup space. I'm also tinkering with NVMeOF, RDMA, and some other advanced storage capabilities (where my career started) because I can. I move from a small game room / man cave in the winter/spring to an outer office in summer/early fall, because the sun blasts the game room during the summer and cooks me if I'm not careful (and there's no AC in the game room). Everything is scripted - during the week if I don't need it, we run in low-power mode with 1 intel and 1 AMD box in Room 1, and just Hun/Zizka in 2, and just Mjolnir/Dread in 3. If I need it, a script brings it all up to full power in about 30 minutes.
Best part - run a different script, and 8 of those are Lan-Party capable systems, so I can have people over to game without running hardware or cables or anything! Switch back with a single script again - with the exception of Cataphract, it all boots from WOL and flips right back. All running 10G with Wireless Mesh between rooms, BGP for routing and IPSec to link to the other sites. OpenVPN accessible over the internet.
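The WOL flip described above boils down to broadcasting a "magic packet": 6 bytes of 0xFF followed by the target MAC repeated 16 times, sent via UDP broadcast. A minimal sketch, with the MAC address and broadcast target as placeholders rather than anything from this lab:

```python
# Minimal Wake-on-LAN sketch. The MAC below is a placeholder, not a real host.
import socket

def build_magic_packet(mac: str) -> bytes:
    """Build the 102-byte WOL magic packet for a MAC like 'aa:bb:cc:dd:ee:ff'."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on the LAN (UDP port 9 by convention)."""
    packet = build_magic_packet(mac)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, (broadcast, port))

if __name__ == "__main__":
    wake("aa:bb:cc:dd:ee:ff")  # placeholder MAC
```

Loop over a list of MACs per tier and you have the "bring it all up" half of such a script; the NIC/BIOS just has to have WOL enabled.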
I also might have gone a little overkill...
I'm sure it plays candy crush just fine
She plays Obduction, some mouse game, Beat Saber, and Soundboxing. Does great for those.
A simple PCIe-to-SATA card could do it; otherwise PCPartPicker can show boards with 8 SATA ports or more - see the link below (though with the popularity of NAS and M.2 and the virtual disappearance of disk drives, they will indeed get rare).

No boards today seem to have more than 6 on Intel on the Z790 platform. I didn't realize the extreme models doubled in price.
Thanks. So basically on some boards a PCIe lane may deactivate with more M.2 drives installed if I start converting what I have. That makes the Asus AM5 extreme boards really interesting.
https://pcpartpicker.com/products/motherboard/#K=8,13&sort=price&page=1
Manual validation that they all stay active when all the M.2 slots you will need are populated could be required.
Sometimes you cannot use the x16 slot and additional M.2 in x4 mode at the same time; sometimes 2x SATA are shared with an M.2 and you can use just one or the other. If you get close to the limit, it can be wise to validate the actual model you have in mind before buying.
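That kind of lane-sharing check can be sketched as a small constraint table. The board layout below is entirely hypothetical, purely to illustrate the bookkeeping; real sharing rules come from each board's manual:

```python
# Toy model of chipset resource sharing: each rule says "if this M.2 slot
# is populated, these other connectors are disabled". The layout is made
# up for illustration; consult the actual board manual for real rules.

SHARING_RULES = {
    "M2_2": ["SATA_5", "SATA_6"],   # hypothetical: M.2 slot 2 steals two SATA ports
    "M2_3": ["PCIE_X4"],            # hypothetical: M.2 slot 3 steals the x4 slot
}

def disabled_connectors(populated_m2):
    """Return the set of connectors lost for a given set of populated M.2 slots."""
    lost = set()
    for slot in populated_m2:
        lost.update(SHARING_RULES.get(slot, []))
    return lost

print(disabled_connectors({"M2_1", "M2_2"}))  # M2_1 shares nothing in this model
```

Writing the manual's sharing table out like this before buying makes it obvious whether your planned drive count still leaves the SATA ports and slots you need.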
It looks like 4-5 modern boxes could replace all that.
Those... are mostly pretty modern? The oldest is Zen1, or the X99 Haswell (which is all storage passthrough - it's got 16 more SSD slots available for when I need more). Most are Zen2 or Skylake... Zen 3 doesn't really offer any change, and Zen4 is currently a step backwards for me (as is Alder Lake/Raptor Lake) except for the purely gaming box.
You have three separate storage systems that could be consolidated into one. Other than the two gaming systems, I don't know enough details about the VM systems to know how much they could be consolidated, but two to three 128 GB 7950X or so workstations look sufficient, or a dual CPU server board with >=384 GB or so RAM. Less power, less heat, less space.
Plus given the mixed uses, every time I hear someone say that, I scratch my head and go "how, precisely, would you do that?" Still can't stuff more than 128G of DDR4 in a box, or 64G of DDR5 (without massively compromising performance). Also not really higher core densities out there unless you go higher end on a couple of the systems - which wouldn't help, since I'm generally RAM limited on the big boys more than anything.
I see what you're thinking - trick is - upgrading to 7950s (even if I went with 128G, more on that in a bit) doesn't buy me anything anytime soon - and costs quite a bit. No ROI even over 3-5 years on that change, given that the rest is already in place (I wouldn't buy the current setup new, mind you - not with current options - but a lot of this was put together over the last 3 years). Plus, I wouldn't have enough slots to really make it work for long (remember, 2x10G and a SAS controller in a lot of those systems, at a minimum - they require x8 electric slots, so I'd be buying expensive boards too).
Fair, it's often best to use what one already has.
A single ZFS NAS with segmented datasets by speed/size needs would handle that nicely.

Storage:
They're in three different rooms, and actually it's more than that -
Room 1:
Yggdrasil (12T Synology NAS - personal stuff and media processing cache). This is an all-spinner now - 3T drives x5. About 80% full.
Spartan (Plex mass-store (25T currently)) About 90% full.
Cataphract (6T All flash VSA with replication/etc for VMware). About 25% full, but that's going to go up to 50% next week.
So one for performance, one for personal photos/video/media I'm working on/etc, and then the Plex server. Combining those doesn't seem to make a ton of sense, does it?
Room 2:
Mjolnir (35T Optane/Spinner for VMware)
Dreadnought (5T VSA with tiering - this one is probably going away soon - for VMware)
So a single storage system for the cluster there.
Room 3:
Hun (~50-60T S3 target for backups) - this is intentionally in a different room from 1/2, since it's the long-term target for backups for stuff in both of those rooms.
Zizka (3T for VMware).
Backups and one small, low-power storage system for here. I could put another Synology in - but I had the parts for Zizka lying around doing nothing, so... ~shrug~. I could also put the storage layer on Hun for performance, but then we hit issues with updates as I have to bring down everything in there to patch it.
I try to avoid combining uses on a NAS - speed is contrary to capacity, and personal stuff is contrary to professional stuff when it comes to managing backups (especially since we use that environment for demos!). I am probably ditching Dreadnought, as it's not worth running it for much longer - and the 5T of space on it isn't anything of note.
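For reference, the "one NAS, segmented datasets" idea suggested upthread would look something like the sketch below, which just renders the `zfs create` commands per usage profile. Pool name, dataset names, and property values are all hypothetical examples, not a recommendation for this lab:

```python
# Sketch of segmenting one ZFS pool into datasets tuned per workload.
# Names and property values are hypothetical examples.

PROFILES = {
    "tank/vms":    {"recordsize": "16K", "compression": "lz4"},   # VM images: small random I/O
    "tank/media":  {"recordsize": "1M",  "compression": "lz4"},   # Plex: big sequential files
    "tank/photos": {"recordsize": "1M",  "compression": "zstd"},  # archive: favor compression
}

def zfs_create_commands(profiles):
    """Render one 'zfs create' command per dataset profile."""
    cmds = []
    for name, props in profiles.items():
        opts = " ".join(f"-o {k}={v}" for k, v in sorted(props.items()))
        cmds.append(f"zfs create {opts} {name}")
    return cmds

for cmd in zfs_create_commands(PROFILES):
    print(cmd)
```

Whether consolidating is worth it is a separate question, as the posts above argue; this only shows that the per-workload tuning itself is cheap once everything sits in one pool.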
If you're truly utilizing those resources, wouldn't Epyc builds be more suitable? Ok, reusing old stuff and all, but at some point the sheer amount of boxes and power usage get too annoying.

As for the servers in room 1 - 7950X doesn't really work - putting 128G of DDR5 on a board is almost impossible with any kind of speed, since it REALLY hates more than 1DPC on consumer kit. So if I wanted to modernize Gladiator/Praetorian I could, but it wouldn't actually buy me anything - those aren't CPU bound, and they'd still be stuck at 64G of RAM unless I wanted to drop the speeds significantly - or when we finally get denser DDR5 sticks. Same for Hoplite - I can't go above 64G easily now (the 3950 is a bit picky), and jumping to DDR5 would have me limited the same way. In theory, if I go ahead and do the upgrade to Spartan, I could yank Gladiator out and modernize it with a 5950 and 128G later on if needed - but I'm slightly lazy, and just haven't gotten to it yet. Plus it's mostly a backup host for when I have to do maintenance.
Ok, that wasn't apparent from the previous post.

In terms of the servers in Room 3.... wellllll... Yeah, we need more than 384. Our standard build is:
[attachment 539711: standard build spec]
Thought about it. Issues I see:
Sure - but an Epyc build can just be a server. It's a really crappy workstation, VR system, etc. I MUST have 4 nodes of compute for management - two of our sites run those as dedicated servers, but instead, I can build a couple of multi-purpose boxes here and knock out management AND my plex server AND my workstation AND my VR box - without double buying hardware.
lopoetve: Rough tally of ~150TB? Glad I'm not the only one with excessive home storage (288TB in sig, full setup replicated at another site as well)

Haha! Yup! I have a lot of weird things I do. And access to a lot of hardware.