Worst CPUs of all time?

Exactly. A CPU being slow doesn't necessarily make it bad. What makes it bad is being touted as a better performer than it is, and/or having serious technical flaws that create actual issues in use. The Pentium FDIV bug is a good example: when your CPU can't do math properly, that is (and was) a problem. Bulldozer being power hungry, running hot, and performing like crap (worse than Phenom II clock for clock) when AMD made promises to the contrary is another. Yes, it worked, but the only things it really had going for it were that it worked in existing AM3+ motherboards and it was cheap.
I tend to drop them into one of five categories: Evolutionary, Revolutionary, Stop-Gap (or app-limited), Transitional, or Bad. Things like 11th Gen, the 6x86, Prescott, and Bulldozer were stop-gap releases (sometimes you have to get SOMETHING out the door; other times it's all you have, even if it's not the best, or there's a flaw that keeps them from being the best without making them BAD, like the FPU on the Cyrix). The Core 2, Athlon 64 X2, Zen 2, and Sandy Bridge were revolutionary. The 8th/9th/10th gen Core chips were evolutionary. Things like Ryzen 1000, Alder Lake (arguably), and the Socket 754 Athlon 64 were transitional (moving to long-term platforms, but short-term steps on the way).

Bad CPUs are the ones that don't fit into the above: things like the E1, or the P1 with FDIV. Truly BAD CPUs are rare.
 
Remember, Intel was cheating at the same time they were smoking Bulldozer and its derivatives.

The patched Intel CPU performance numbers I saw had them running pretty much the same clock-for-clock, IIRC.

They got away with it, I’ll give them that.
 
Remember, Intel was cheating at the same time they were smoking Bulldozer and its derivatives.

The patched Intel CPU performance numbers I saw had them running pretty much the same clock-for-clock, IIRC.

They got away with it, I’ll give them that.
I wouldn't call it cheating as much as making design choices that had a security implication later - AMD and ARM and others made the same choices, after all, in different ways. CPU optimization is HARD.
 
I’m pretty sure Intel knew, and I’m not really convinced it was an accident in the first place…
 
I’m pretty sure Intel knew, and I’m not really convinced it was an accident in the first place…
Speculative execution is how we got many of the performance improvements through the '90s and 2000s. It sucks, but... everyone was working on better ways of doing it. Executing code that you shouldn't execute and storing the results, though...

Assuming you're talking Spectre and Meltdown.
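For anyone curious what the fuss is about: the Spectre v1 "bounds check bypass" shape from the public write-ups really is this small. A minimal sketch only, following the published example's illustrative names; this is the vulnerable pattern, not a working exploit:

```c
#include <stdint.h>
#include <stddef.h>

/* Spectre v1 victim pattern (illustrative). If the branch predictor has
 * been trained to expect x < array1_size, the CPU may speculatively read
 * array1[x] for an out-of-bounds x and use the value to touch a cache
 * line in array2. Architectural state is rolled back, but the cache
 * footprint survives and can be recovered via a timing side channel. */
uint8_t array1[16];
size_t  array1_size = 16;
uint8_t array2[256 * 4096];
uint8_t temp = 0;   /* global sink so the read isn't optimized away */

void victim(size_t x) {
    if (x < array1_size) {            /* branch predicted taken */
        uint8_t v = array1[x];        /* speculative out-of-bounds read */
        temp &= array2[v * 4096];     /* leaves a cache trace */
    }
}
```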
 
Remember, Intel was cheating at the same time they were smoking Bulldozer and its derivatives.

The patched Intel CPU performance numbers I saw had them running pretty much the same clock-for-clock, IIRC.

They got away with it, I’ll give them that.
Bulldozer suffers from some of the same security issues. Also, the vulnerabilities were discovered YEARS after these CPUs were designed. The Spectre and Meltdown mitigations carried a pretty substantial performance hit that affected AMD systems too, albeit not to the same degree. However, those mitigations were improved upon and regained some of that performance. Plus, you could choose not to employ those mitigations on your personal systems and not lose anything (see the sketch just below for how Linux exposes that choice).
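A quick Linux-only sketch, assuming the standard sysfs interface: the kernel reports its view of each vulnerability under /sys/devices/system/cpu/vulnerabilities/, and booting with mitigations=off is the documented big hammer for opting out. This just dumps the status files:

```c
#include <stdio.h>
#include <dirent.h>

/* Dump the kernel's per-vulnerability status (Linux).
 * Example output line: "spectre_v2  Mitigation: Retpolines ..." */
int main(void) {
    const char *dir = "/sys/devices/system/cpu/vulnerabilities";
    DIR *d = opendir(dir);
    if (!d) { perror(dir); return 1; }
    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        if (e->d_name[0] == '.') continue;
        char path[512], line[256];
        snprintf(path, sizeof path, "%s/%s", dir, e->d_name);
        FILE *f = fopen(path, "r");
        if (!f) continue;
        if (fgets(line, sizeof line, f))
            printf("%-28s %s", e->d_name, line);
        fclose(f);
    }
    closedir(d);
    return 0;
}
```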
I wouldn't call it cheating as much as making design choices that had a security implication later - AMD and ARM and others made the same choices, after all, in different ways. CPU optimization is HARD.
Exactly.
I’m pretty sure Intel knew, and I’m not really convinced it was an accident in the first place…
No, it was a design choice, but Intel probably didn't foresee the discovery of those issues either. The vulnerabilities were also discovered before any exploits were known to exist in the wild.
Speculative execution is how we got many of the performance improvements through the '90s and 2000s. It sucks, but... everyone was working on better ways of doing it. Executing code that you shouldn't execute and storing the results, though...

Assuming you're talking Spectre and Meltdown.
I am assuming the same thing. Regardless, if you go back even to the Athlon 64 days, the processor errata documentation read like a script for a horror movie. They had all kinds of issues under very specific circumstances. AMD is not some altruistic champion of the people; they pull a lot of the same BS Intel does when they can get away with it. While AMD tried to capitalize on its CPUs being less vulnerable (or not vulnerable at all), it mattered very little, since AMD's footprint in datacenters is dwarfed by Intel's, and that didn't change much after Spectre & Meltdown. As I recall, some of the AMD CPUs not thought to be vulnerable were discovered to be vulnerable sometime later. Still, they weren't as bad off as Intel. My point is that AMD isn't more trustworthy than Intel.

Regardless, I think it's fair to call a processor that doesn't perform on par with its competition bad. Bulldozer may have "worked," but an eight-core CPU getting the shit kicked out of it in games by an i3 makes it a bad processor for the desktop. Period. I don't care if it's the most reliable CPU ever built; it was slow and only survived on the market through AMD's willingness to slash prices to the bone to move an inferior product. AMD bet on parallel processing way too early. AMD was absolutely right about where things were headed, but it took the gamble too soon. It was the wrong CPU at the wrong time, and compared to its Intel counterparts it was a bad performer saddled with an inferior platform. Not only did AMD let the platform languish well behind Intel's, but the quality of many of the boards that ran those CPUs was questionable at best. I dealt with a lot of them initially and it was a rough time.

Simply being defective or having a design flaw isn't the only criterion for a "bad CPU." This is especially true given that such things are extremely rare. Being able to "run stuff" passably is not good enough when you aren't even in the same ballpark as your competition in terms of performance. Pricing certainly factors in, as proper placement of a product at a certain price point alters our perception of its value, but when that product can't compete at any point in the stack, it's inferior, and I think it's absolutely justifiable to call something a bad product in light of its full context. Cyrix's MII was bad, not because it was a defective CPU, but because it was too late to market and couldn't compete once it arrived. AMD's K5 is pretty much in the same boat.
 
Whatever. Just going to agree to disagree on the Bulldozer vote. This isn't even a thread about "bad" CPUs; it's "worst of all time," and sorry, but I completely and totally disagree that they belong in that category.
 
Bulldozer suffers from some of the same security issues.

But even patched, they performed roughly identically to Intel's contemporary counterparts.

And I'm pretty sure Intel wasn't just leaving a security hole; I think they made it in cooperation with government entities. It wouldn't be the first time a company put in backdoors. Windows is basically free; MS is making money in other markets.
 
But even patched, they performed roughly identically to Intel's contemporary counterparts.

And I'm pretty sure Intel wasn't just leaving a security hole; I think they made it in cooperation with government entities. It wouldn't be the first time a company put in backdoors. Windows is basically free; MS is making money in other markets.
That’s… not how those exploits work… just… no.

As to the second point - Microsoft makes tons of money on Windows, just not from end users as much.
 
That’s… not how those exploits work… just… no.

Spectre and Meltdown were used to access encrypted memory and sensitive data, and they didn't require physical access.

Also the NSA has admitted to working with Intel and Microsoft on backdoors. And don't think for a second Linux is any different.

Sure, this gave Intel a performance advantage, but it also meant doing something AMD clearly saw as an obvious flaw, and AMD was a much smaller company at the time. There were ulterior motives in play, and AMD wasn't on the radar enough to be persuaded to leave the side gate open.
 
Spectre and Meltdown were used to access encrypted memory and sensitive data, and they didn't require physical access.

Also the NSA has admitted to working with Intel and Microsoft on backdoors. And don't think for a second Linux is any different.

Sure, this gave Intel a performance advantage, but it also meant doing something AMD clearly saw as an obvious flaw, and AMD was a much smaller company at the time. There were ulterior motives in play, and AMD wasn't on the radar enough to be persuaded to leave the side gate open.
You lack understanding of how the exploits actually worked and were utilized. They’re not suitable for a government agency to acquire data. It’s also, to put it simply, an insane conspiracy theory. There are a million easier ways to accomplish the task than what you describe.

And again, AMD did it too. And ARM, a British company, and similar things happened with POWER and other RISC CPUs. None of this fits your narrative. I'd suggest learning more about CPU design and what it actually took to exploit Rowhammer, Spectre, Meltdown, and the like.
 
You can have that position and still recognize that Intel cheated and used a set of exploits that left a giant hole in their security, and that's the only reason they outperformed AMD back in the day.
 
You can have that position and still recognize that Intel cheated and used a set of exploits that left a giant hole in their security, and that's the only reason they outperformed AMD back in the day.

Literally no one knew about those attack vectors until independent researchers discovered them and broadcast that information to the world. The first malware, exploits, etc. seen in the wild came well after that publication.

And to be clear, AMD has a lot of the same vulnerabilities in their chips too.

Intel has outperformed AMD for most of the last 25+ years. Since AMD chips fall prey to many of the same vulnerabilities, those vulnerabilities aren't the only reason Intel outperformed AMD back in the day.
 
You can have that position and still recognize that Intel cheated and used a set of exploits that left a giant hole in their security, and that's the only reason they outperformed AMD back in the day.
Branch prediction (the B4900 in 1982, the VAX 9000 in 1989) and speculative execution (mid '90s) were across EVERY vendor out there - Intel, AMD, DEC, Motorola, IBM, you name it. Everyone was working on optimizing that, because otherwise you're limited to effectively one operation at a time. It's how every pipelined processor ever created is built - this is arguing against the fundamental theorems of modern computing. There's no "cheating" there; it's how every processor design for the last 40 years has gone.

The exploits hit many vendors, and were completely unforeseen because the end-user method of exploiting them (multiple workloads from different end users running simultaneously on a single CPU - virtualization environments, cloud providers, and hosting companies) was unheard of until 30+ years after the design choices were made. Even today you can generally disable those mitigations unless you're a hosting provider (and I WORK in cyber security now) - if someone has access to install software on systems you own or are the sole user of, there are much more efficient and effective ways of exfiltrating data than forcing a rowhammer or transient-execution attack on a processor. Those only make sense if you're sharing resources and hardware with someone else, and you can't even control WHAT you get out of it - it's random data that may or may not be useful at any given point in time.

You're effectively saying that someone, somewhere, planned 40 years ago to use a core fundamental aspect of CPU design as a security hole to spy on people. That's... not sane.
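If anyone wants to feel why everyone chased branch prediction, the classic demo is summing only the large elements of an array: same code, same data, wildly different speed once the branch becomes predictable. A rough sketch only (compiler flags matter; an optimizer may emit branchless code and flatten the difference):

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N    (1 << 20)
#define REPS 100

/* Sum elements >= 128; the if() is the branch the predictor must guess. */
static long sum_big(const int *a, int n) {
    long s = 0;
    for (int i = 0; i < n; i++)
        if (a[i] >= 128)
            s += a[i];
    return s;
}

static int cmp(const void *x, const void *y) {
    return *(const int *)x - *(const int *)y;
}

int main(void) {
    static int a[N];
    for (int i = 0; i < N; i++) a[i] = rand() % 256;

    long s1 = 0, s2 = 0;

    clock_t t0 = clock();
    for (int r = 0; r < REPS; r++)
        s1 += sum_big(a, N);        /* random order: ~50% mispredicted */
    clock_t t1 = clock();

    qsort(a, N, sizeof a[0], cmp);  /* same data, now sorted */

    clock_t t2 = clock();
    for (int r = 0; r < REPS; r++)
        s2 += sum_big(a, N);        /* branch is now predictable */
    clock_t t3 = clock();

    printf("unsorted: %ld in %.2fs | sorted: %ld in %.2fs\n",
           s1, (double)(t1 - t0) / CLOCKS_PER_SEC,
           s2, (double)(t3 - t2) / CLOCKS_PER_SEC);
    return 0;
}
```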
Literally no one knew about those attack vectors until independent researchers discovered them and broadcast that information to the world. The first malware, exploits, etc. seen in the wild came well after that publication.
They were theorized at most, until researchers figured out how to exploit them - and even then it's limited circumstances and limited effectiveness.
And to be clear, AMD has a lot of the same vulnerabilities in their chips too.

Intel has outperformed AMD for most of the last 25+ years. Since AMD chips fall prey to many of the same vulnerabilities, those vulnerabilities aren't the only reason Intel outperformed AMD back in the day.
Yup. Intel had better engineers - and better marketing - and a better product. Same reason they beat Motorola, DEC, SPARC, POWER, PA-RISC, etc. You can argue about Intel's marketing shenanigans and exclusivity deals, but the CPU design is what they made of it. And there were PLENTY of missteps along the way (the 0KB-L2 Celeron, the 1.13GHz P3, NetBurst as a whole, 11th gen, the whole 10nm debacle).
 
There were other competing products (PowerLeap, IIRC) that allowed you to plug a Socket 7 CPU into a Socket 3 i486 board

Late and someone probably said it already, but no.

There never existed an adapter to plug a Socket 7 CPU into a 486 board. The only way to get a Pentium core onto a 486 board was the POD63/83, and those things were dreadful. They were extremely late to market, released in the 1995 time frame (the 63 in February and the 83 in September), and hideously expensive. The POD83 retailed at $299, more expensive than faster normal Pentiums on Socket 5 at the same time. Intel's planned market of business-machine upgrades didn't pan out, because most businesses at the time knew technology was moving at a blistering pace and it didn't make financial sense to pay more for a crippled CPU on an aging EOL platform.

The only tangible benefit the POD provided was access to the Pentium's superior pipelined FPU. If you didn't need the improved FPU (and the vast majority of software at the time didn't), better integer performance could be had at a fraction of the price with an Am486 or the Am5x86 released just a few months later. I have a POD83 and several Am5x86 CPUs. The only time I take the POD83 out is if I want to do silly things, like try and run Windows XP on a 486 era motherboard.

PowerLeap was a strange company that made strange CPU adapters, but no such adapter to go from a 486 to a Pentium. No amount of voodoo black magic and witchcraft would be able to make that work. They were mostly known for their Socket 5 to 7, 7 to Super 7, and bizarre 370 to 8 interposers.
 
Late and someone probably said it already, but no.

No amount of voodoo black magic and witchcraft would be able to make that work. They were mostly known for their Socket 5 to 7, 7 to Super 7, and bizarre 370 to 8 interposers.

You are correct about this. I did have the POD83, however, and that's where the confusion lies! :senior moment: ;-)
The best POD was the Socket 8 one, IMO; it brought 333MHz PII performance to aging Pentium Pro workstations, and they supported SMP as well.
 
K6-2 was actually pretty good!
The K5, OTOH, had a WORSE FPU than the Cyrix PR series. But for office-only stuff it was an OK beater chip. Combined with no-name EDO and an Amptron board, et al., you had a satisfactory office system. Just don't ask about those horrific JTS Champ hard drives! More like JTS Brown (in the name of the Cleveland Browns, haha)...
It actually wasn't. The K5 was extremely late to market, and by the time it arrived it was plagued by subpar performance and the usual compatibility issues that dogged non-Intel CPUs of that era. You are right that it had a worse FPU than Cyrix. To be fair, it wasn't so much that Cyrix's FPU was bad as that it worked differently and required software developers to implement things specifically for it.
Bulldozer, haha. I remember how the AMD fanboys were saying it was going to *destroy* Nehalem, Sandy Bridge, and Westmere. LOL
Never was a fan of the Duron either, but, kind of like the K5, beater status.
AMD's hype train was full steam ahead back then. AMD traditionally over-hyped its CPUs and made promises it rarely delivered on. That didn't change until Lisa Su took over AMD.
Socket 7 had a bunch of duds too. Remember the IDT WinChip? The Rise mP6?
Indeed I do. There were a lot of competing chips back then. Again, most of them suffered from subpar performance and compatibility issues.
Let's not forget the Pentium OverDrive! I swapped out over 300 486 DX/33s in Dell slim desktops for these. Ran at 83MHz. Quite a bump in performance, particularly in FP - about as fast as a native Pentium 66 machine.
That's the problem with overdrive-type CPUs. You may have updated the processor, but the older platform bus still constrained it.
What was surprising was the Pentium II overdrive for socket 8 Pentium Pro machines. Extended the life out of quite a few Tyan and Supermicro based workstations with those.
This was an exception to the rule. Effectively, the Socket 8 and Slot 1 platforms were the same initially, with the only difference being that the latter supported AGP. For servers, this didn't matter. The Pentium II was actually slower than the Pentium Pro due to the fact that it had off-die cache that ran at half the speed of the CPU. At 233MHz, this was often not enough to offset the cache difference, and the MMX instructions did little to help either. Overdrive CPUs were usually too expensive and delivered way too little performance to be worthwhile in most cases, in my opinion. That being said, the Pentium II Overdrives are the one exception to the rule, as both platforms were so similar that they were effectively equal outside of gaming.
There were other competing products (PowerLeap, IIRC) that allowed you to plug a Socket 7 CPU into a Socket 3 i486 board, and those were less than interesting, like most things designed to save money and keep hardware going that really should be sent to the scrapyard. ;-)
Essentially it was a way to use certain standard CPUs as overdrive CPUs. However, they introduced their own problems into the mix. I've installed a couple of them and they were trash. True overdrive CPUs from Intel, or even Cyrix's 5x86 PR120 and PR133 CPUs, were a better option. Truly, if you were on an i486, the best option was an AMD 486DX4 120MHz CPU or the Cyrix 5x86 133MHz. These performed about like a Pentium 90MHz chip.
 
The Pentium II was actually slower than the Pentium Pro due to the fact that it had off-die cache that ran at half the speed of the CPU

...You do know that the Pentium Pro didn't have on-die cache either? It did run at full CPU speed though.

It'd be interesting to compare a Pentium Pro, Pentium II and Pentium II Xeon (it had off die cache as well, but clocked at 100% CPU speed) all clocked at the same speed to see which one was faster. I've unfortunately never been lucky enough to find a Slot 2 motherboard for a reasonable cost. Got plenty of Slot 1 boards around though.
 
Truly, if you were on an i486, the best option was an AMD 486DX4 120MHz CPU or the Cyrix 5x86 133MHz. These performed about like a Pentium 90MHz chip.
I had an AMD 486DX4-120 and it was the first socket 3 chip that required a fan!
It did run well. It ran even better when I put it in a board that didn't have fake L2 cache DIPs! ;-)

Interestingly enough, the only chip I had running at stock speed all its life that suffered junction failure related to electromigration was the Cyrix 6x86 166+. It had a decent cooler with a whiny little Nidec 40mm fan on it, too.

The Pentium Pro had on-package, full-speed cache, which definitely had an advantage over the Pentium II. And if your application was sensitive to L2 performance, it was noticeably faster.

I have Slot 2 Xeons, but the boards died and were scrapped (Dell Precision workstation junk, mostly). But those CPU carts are HUGE!
 
...You do know that the Pentium Pro didn't have on-die cache either? It did run at full CPU speed though.

It'd be interesting to compare a Pentium Pro, Pentium II and Pentium II Xeon (it had off die cache as well, but clocked at 100% CPU speed) all clocked at the same speed to see which one was faster. I've unfortunately never been lucky enough to find a Slot 2 motherboard for a reasonable cost. Got plenty of Slot 1 boards around though.
I can't recall who actually did the testing, but this was done back in the day. I remember an article that did a lot of that work. They managed to find a Socket 8 board that could overclock the Pentium Pro to 233MHz. The reason for this was being unable to clock a Pentium II down to 200MHz: since no 200MHz Pentium II model existed, every motherboard only had jumper settings for 233MHz and beyond. I seem to recall the actual Pentium II being a 266MHz model, which was downclocked for the test. There wasn't a Xeon in the mix, but the result was that the Pentium Pro was actually faster than the Pentium II. The conclusion of the article was that the full-speed and larger L2 cache was the culprit. I do not recall whether the Pentium Pro was the 512K cache version or the 1MB version, but it was full-speed cache versus half-speed. We knew that Celerons without any onboard cache were basically trash, and the Celeron 300A at 450MHz was actually faster than the Pentium II at 450MHz because of its on-die cache, even though it only had 128K of it compared to 512K (if memory serves). This adds credence to the theory, as the Pentium Pro is basically a Pentium II without MMX but with full-speed L2 cache.

Obviously the article was academic. I don't think any Pentium Pros could clock beyond 233MHz, and anything over 266MHz would have been a pipe dream with the cooling hardware available at the time. Pentium IIs launched at speeds far greater than the Pentium Pro's from the start, so the Pentium Pro couldn't really keep up with them. That being said, a Pentium II Overdrive would have allowed for some parity, but by the time those came out, the Pentium II was beyond even those, clock-speed-wise. Of course, there is the matter of AGP graphics as well, which rendered the Pentium Pro and Socket 8 motherboards obsolete pretty quickly from a gaming perspective.
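That cache-speed effect is easy to reproduce even today with a pointer-chasing loop: the time per load jumps each time the working set spills out of a cache level, which is exactly the full-speed-vs-half-speed L2 difference that article was seeing. A rough, hypothetical microbenchmark sketch (clock()-based timing is coarse; treat the numbers as relative):

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static void *volatile sink;   /* keeps the chase loop from being optimized out */

/* Walk a shuffled pointer chain through a buffer of the given size and
 * return the average nanoseconds per dependent load. */
static double chase(size_t bytes, long hops) {
    size_t n = bytes / sizeof(void *);
    void **buf = malloc(n * sizeof(void *));
    size_t *idx = malloc(n * sizeof(size_t));
    for (size_t i = 0; i < n; i++) idx[i] = i;
    for (size_t i = n - 1; i > 0; i--) {       /* Fisher-Yates shuffle */
        size_t j = (size_t)rand() % (i + 1);
        size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
    }
    for (size_t i = 0; i < n; i++)             /* link the cycle */
        buf[idx[i]] = &buf[idx[(i + 1) % n]];

    void **p = &buf[idx[0]];
    clock_t t0 = clock();
    for (long h = 0; h < hops; h++)
        p = (void **)*p;                       /* serialized, latency-bound */
    double sec = (double)(clock() - t0) / CLOCKS_PER_SEC;

    sink = (void *)p;
    free(buf); free(idx);
    return sec * 1e9 / (double)hops;
}

int main(void) {
    for (size_t kb = 16; kb <= 8192; kb *= 2)
        printf("%5zu KB: %6.1f ns/load\n", kb, chase(kb * 1024, 20000000L));
    return 0;
}
```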
 
I owned an FX-8320 (Piledriver). It sucked. It ran super hot, and the IPC and single-threaded performance were worse than my previous Phenom II 965BE.
I still have a Piledriver system. FX CPUs require aftermarket cooling. Also, your motherboard's VRMs affect performance with the 8-core. Most boards could run the 6-core no problem, but many had issues with the 8.
Mine runs right alongside my Ryzen 1600
 
In the Power Mac systems that were in my lab at the time, they were sluggish, to say the least, even with 16 MB of memory installed. My 2-year-old Pentium 90 was already running circles around them, and it was no surprise that people wanted to use my system instead of those grinding Power Macs with their PPC 601s. If anything, even the Mac Quadra with its 33 MHz Motorola 68040 felt much snappier and more responsive than those Power Macs.

There are two reasons that PowerPC sucked on Apple machines.

1) During the transition from 680x0 to PowerPC, instead of a clean break between code bases, Apple opted to write a 680x0-to-PPC emulator and build it into the OS. This was great from a backwards-compatibility standpoint, but Apple used it as a lazy excuse not to rewrite the OS's legacy 680x0 code as faster native PPC code. Apple didn't rid the classic Mac OS of all of its 680x0 code until OS 9, more than half a decade after the last 68040 machine shipped. But since OS 9 was so bloated from Apple's third-party tech buying spree, it would either not run at all, or barely run, on early PowerPC machines.

2) Apple made several PowerPC machines with severely compromised designs, like the Performa 5200/6200 series. They basically took an older 68040 logic board and jury-rigged a PowerPC CPU onto it. This resulted in a 64-bit processor crippled by a 32-bit bus; you can imagine the performance implications. This wasn't Apple's first rodeo: they did the same thing with the LC and LC II, a 32-bit 68030 on a 16-bit bus. You can read more here: https://lowendmac.com/1997/performa-and-power-mac-x200-issues/

So a CPU being lower in a single metric for you means it's garbage? Alright. Maybe for you, but no, it does not put it in "worst CPUs of all time".

They make my top 10 list for worst CPUs. The whole Ryzen 1000 line, and the AM4 socket line in general was a landfill fire.

Ryzen 1000, 2000, and some 3000 series chips had the dreaded segfault bug, and I was bitten by it three different times on three different Ryzen CPUs across those lines. My CPUs were not supposed to be affected and weren't in known-defective mask revisions, but were defective all the same, and I had to wait weeks for RMAs directly with AMD. I also had a 3700X with defective SMT that would cause the system to crash randomly if SMT was enabled. I won't buy any old-generation Ryzen, because now there is no warranty and you're screwed if you end up with one of the affected mask revisions.

Then there was the whole "too many CPUs to be supported in a 16 MB ROM" problem. I was again bitten multiple times by motherboards that promised support for specific-generation Ryzen CPUs out of the box, even advertising it on the lid, only for the system not to POST because it had an out-of-date BIOS installed from the factory. These boards didn't have flashback, and you can't just use an EEPROM programmer to flash them either, because the boards have other data stored in the ROM, and blowing it away would permanently brick the board. By the 5th or 6th board, I stopped buying them online and paid more at local retail so I could drive back and have them reflash the board instead of waiting weeks for replacements.

I want to like AMD, but they're making it so hard to do with their perpetual landfill fire of problems. AM5 is shaping up to be an even worse disaster, with self-immolating CPUs if you have the wrong BIOS installed on the motherboard you put them in. AMD is having a throwback party to 40 years ago, with killer pokes and HCFs.
 
They make my top 10 list for worst CPUs. The whole Ryzen 1000 line, and the AM4 socket line in general was a landfill fire.

Ryzen 1000, 2000, and some 3000 series chips had the dreaded segfault bug, and I was bitten by it three different times on three different Ryzen CPUs across those lines. My CPUs were not supposed to be affected and weren't in known-defective mask revisions, but were defective all the same, and I had to wait weeks for RMAs directly with AMD. I also had a 3700X with defective SMT that would cause the system to crash randomly if SMT was enabled. I won't buy any old-generation Ryzen, because now there is no warranty and you're screwed if you end up with one of the affected mask revisions.

Then there was the whole "too many CPUs to be supported in a 16 MB ROM" problem. I was again bitten multiple times by motherboards that promised support for specific-generation Ryzen CPUs out of the box, even advertising it on the lid, only for the system not to POST because it had an out-of-date BIOS installed from the factory. These boards didn't have flashback, and you can't just use an EEPROM programmer to flash them either, because the boards have other data stored in the ROM, and blowing it away would permanently brick the board. By the 5th or 6th board, I stopped buying them online and paid more at local retail so I could drive back and have them reflash the board instead of waiting weeks for replacements.

I want to like AMD, but they're making it so hard to do with their perpetual landfill fire of problems. AM5 is shaping up to be an even worse disaster, with self-immolating CPUs if you have the wrong BIOS installed on the motherboard you put them in. AMD is having a throwback party to 40 years ago, with killer pokes and HCFs.
You have some pretty bad luck, idk. Saying Ryzen 3000/5000 was a "landfill fire" is unimaginable to me. My 3600 and 5600X worked great. The 1600 was a buggy experience at times, but everyone knew that about 1st-gen Ryzen. Then going on and calling Ryzen 7000 "self-immolating"... idk. I get it's your personal list based on personal experience, but it's still wild to me lol.
 
Then going on and calling Ryzen 7000 "self-immolating"... idk. I get it's your personal list based on personal experience, but it's still wild to me lol.

Do you live under a rock or something lol? Self-immolating 7000 series CPUs, especially the new X3D parts, are all over tech news right now.

Just search "Ryzen 7000 fire" on your favorite search engine. It's not my personal experience, it's fact. I don't own any 7000 series parts, for good reason.

Five years from now, nobody is going to want to touch 7000 series parts, just like the 1000/2000/3000 series parts with their defects; the 7000 series ones will have self-immolated or degraded to the point of failure and been dumped on eBay. The problem is so bad that if you don't update your BIOS, you're probably slowly cooking your CPU to death. AMD has released emergency AGESA updates, followed by board vendors pulling old BIOS revisions from their sites entirely and only offering the fixed release.

And to make that 10-foot pole even longer, some motherboards have design flaws pushing insane current into an otherwise idle/off CPU and causing it to fail as well. GN has a great in-depth video covering it.
 
Do you live under a rock or something lol? Self-immolating 7000 series CPUs, especially the new X3D parts, are all over tech news right now.
No? I know about the issue. Slap on custom voltage until mobo manufacturers push out good BIOS fixes. Done. Easy.

Are you seriously saying nobody wants Ryzen 3000 parts nowadays? Like...really? I can go right now in FS/FT and find someone wanting to buy older Ryzen chips for builds.

I get it, you had bad luck. It happens. I even acknowledged that's okay because hey, we all have personal experiences. Just wow, it's amazing to see someone have like a polar opposite experience than me.
 
I can get it - I had a TON of fits with my 1700X (mostly due to X370 and the first-gen memory controller), but Zen 2 was great.

Now, I ABSOLUTELY have a policy of "never buy an upgrade chip AND an older board at the same time" - e.g., no 5XXX and X570 unless you've already got the X570 with a 3XXX and can guarantee the BIOS; no buying Z690 and 13th gen at the same time unless you've got a 12th-gen chip and can guarantee it works. I buy into the ecosystem early if I'm going to - and then upgrade if it makes sense (rarely, to be honest - I've done one in the last 15 years, 1700X -> 3600X for a friend).

Second-gen CPUs on existing board designs must be a CPU upgrade - not a net-new purchase. I've avoided that mess that way since the early 2000s, when it bit me once. MAYBE way, way later on, but then you risk NOS on the shelves biting you in the ass.

The ONLY exception is if it's a brand-new board that came out alongside the new CPUs - a Dark Hero X570 I might buy with a 5XXX new at release, because it was designed for Zen 3 from the get-go rather than for the older chips. But that's an extreme exception.
 
No? I know about the issue. Slap on custom voltage until mobo manufacturers push out good BIOS fixes. Done. Easy.

Are you seriously saying nobody wants Ryzen 3000 parts nowadays? Like...really? I can go right now in FS/FT and find someone wanting to buy older Ryzen chips for builds.

I get it, you had bad luck. It happens. I even acknowledged that's okay because hey, we all have personal experiences. Just wow, it's amazing to see someone have like a polar opposite experience than me.
FWIW, you shouldn't HAVE to do that. I don't personally like fiddling with voltage anymore - I feed it the best damned cooling I can outside of the exotics and let the system manage itself. And also, FWIW, I haven't bothered with Zen+ or Zen 3 as upgrades, or touched Zen 4 yet (not enough improvement for my use cases over 10th-gen Intel or Zen 2) - and I also believe in "give it at least 3-6 months before you buy in, to know what the bugs are." X3D on Zen 4 has another 2-3 months before I'd touch it. Takes a bit to know the oddities.
 
FWIW, you shouldn't HAVE to do that. I don't personally like fiddling with voltage anymore
That's fair. And it is an issue. But I'm not gonna sit here and think every Ryzen 7000 part is a ticking time bomb waiting to burn down the house as some have put it. I'm sure they'll be selling just fine on the used market in the future, just like all previous ryzen chips.

No Ryzen belongs on a "worst CPUs of all time" list, unless it's a personal list for personal reasons.
 
No? I know about the issue. Slap on custom voltage until mobo manufacturers push out good BIOS fixes. Done. Easy.

Then you don't know about the issue, because setting a fixed voltage doesn't stop the problem from happening. Even if setting a fixed voltage did mitigate the issue, you cannot realistically expect end consumers to do that, nor can you expect system builders to do the same. It's asinine.

That's fair. And it is an issue. But I'm not gonna sit here and think every Ryzen 7000 part is a ticking time bomb waiting to burn down the house as some have put it.

Yes, THEY ARE ticking time bombs, especially with crap motherboards like the one used in the GN test. The motherboard is FIVE HUNDRED DOLLARS, and it has terrible VRM control to the point of spiking CPUs to death. Hell, the board can't even regulate normally; the voltage steadily creeps up during normal use to dangerous levels. It's nice that you can drop over a thousand dollars on just a CPU and board and have it go up in smoke simply by turning it off and on again.
 
They make my top 10 list for worst CPUs. The whole Ryzen 1000 line, and the AM4 socket line in general was a landfill fire.

I guess you got the bad ones and I got all the good ones. I ran a 1400, four 1600s, a 2300X, a 2600X, and a 2700X, all very hard, with zero issues (running Linux). Right now it's a 3100, five 3500Xs, a 3700X, and a 5800X. Still zero problems. The 2700X and 5800X didn't quite run 24/7, but they came close.
 
People are actually saying that Ryzen CPUs are bad? What? The Ryzen 1000 CPUs were AMD’s return to the market.

Ryzen 3000: AMD caught Intel
Ryzen 5000: AMD beat Intel (briefly)
Ryzen 7000: AMD is literally at parity while using significantly less power.

Ryzen is one of the BEST CPU lines. It’s right up there with Athlon 64 X2.
 
If it happens to me I'll eat my words. Got 2 7900Xs running right now.

Let's hope you don't have to. It'd suck to lose a high end CPU to design flaws.

People are actually saying that Ryzen CPUs are bad? What? The Ryzen 1000 CPUs were AMD’s return to the market.

Ryzen 3000: AMD caught Intel
Ryzen 5000: AMD beat Intel (briefly)
Ryzen 7000: AMD is literally at parity while using significantly less power.

Ryzen is one of the BEST CPU lines. It’s right up there with Athlon 64 X2.

I never said anything about their performance. Yes, they were good performers when they worked, but they have a crap reputation for asinine problems, the biggest of which is playing roulette on whether a motherboard will actually boot with your CPU.

GN brought it up in one of the videos linked above, where one *retail* board had shipped with a pre-production BIOS version that even predated the press release. This is NOT an isolated incident; it has happened on a wide scale, even back in the AM4 days. It's a problem on AM4 because, since they ran out of ROM space for microcode, it's anyone's guess whether your board will actually boot. What makes it even worse is that some vendors decided to drop specific CPUs in favor of supporting others, and often didn't document it well. So if you have one of those dropped CPUs, you're SOL if there's some specific fix that would have otherwise applied to you.

In 28+ years of system building, I've never had that problem before Ryzen. Sure, I've had boards that didn't have the correct microcode, but the board would still boot and give you a warning about an unsupported CPU until you flashed the correct BIOS version. AMD could have at bare minimum made a generic microcode that worked across all of their CPUs so you could at least go in and update the BIOS.

AMD has burned me one too many times to trust them anymore, and they're not making up for it with the latest AM5 troubles. Performance is completely irrelevant when the thing doesn't even boot or, in the case of AM5, self-immolates. I don't have the stomach to see $1000+ in parts go up in smoke just turning it off and on.
 
Ryzen is one of the BEST CPU lines. It’s right up there with Athlon 64 X2.
FYI, the Athlon X2 had compatibility problems with NT5, and I of course mean XP. Trying to use these CPUs on XP caused users headaches and created the perception (at the time) that dual-core systems were problematic, with workarounds needed for some programs and games.

This, coupled with the fact that Windows XP was the only viable system at the time, IMHO disqualifies these CPUs from being called the "BEST CPU line."
The Athlon X2 was slightly faster than the Pentium D and used less power, but what of it if you had issues or were forced to use Vista?

Ryzen 7000: AMD is literally at parity while using significantly less power.
This is true, but not by as much as people might be led to believe.
If you power limit, e.g., a Ryzen 7950X and a 13900K to, say, 70W, they have very similar performance.

All CPUs have a performance/power curve, and its shape dictates power efficiency at any given power level. Intel decided it is better to win benchmarks than to appear power efficient, so they clocked their CPUs to levels where the power curve wasn't in their favor. This is, of course, confirmation that yes, Zen 4 is more power efficient - but only at the level of performance we get in actual products.
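(The back-of-the-envelope math, assuming the usual first-order CMOS dynamic-power model, shows why the top of the curve is so expensive:

$$P_{\mathrm{dyn}} \approx \alpha\, C\, V^2 f, \qquad \mathrm{perf} \propto \mathrm{IPC} \cdot f$$

Since voltage has to rise with frequency to keep the part stable, power grows roughly with the cube of the clock while performance grows at best linearly, so the last few hundred MHz cost disproportionate watts on both vendors' parts.)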

Usually people do not power limit their CPUs, so being similar at a lower power level might not matter that much.
If, however, someone is actually building a system for a small case where it will be power limited, that doesn't mean their only choice is an AMD processor.
Unfortunately, not a lot of reviewers test at limited power, so it's not a well-known fact.
 
In 28+ years of system building, I've never had that problem before Ryzen. Sure, I've had boards that didn't have the correct microcode, but the board would still boot and give you a warning about an unsupported CPU until you flashed the correct BIOS version. AMD could have at bare minimum made a generic microcode that worked across all of their CPUs so you could at least go in and update the BIOS.
I have a feeling that AMD decided to cut corners in its CPU development process to get products to market faster and win the metrics that win market share and earn money.

To have an issue-free BIOS shipped on a motherboard, you have to resolve issues well before you release the processor, and this only proves AMD cuts corners...

AMD has burned me one too many times to trust them anymore, and they're not making up for it with the latest AM5 troubles. Performance is completely irrelevant when the thing doesn't even boot or, in the case of AM5, self-immolates. I don't have the stomach to see $1000+ in parts go up in smoke just turning it off and on.
Zen 4 vs. Raptor Lake, I would say there is very little reason to go AMD.
 
Ryzen was make-or-break for them; AMD was selling everything down to the walls to keep its x86 division alive at the end of the Bulldozer era. I'd be surprised if they didn't cut corners trying to get it to market as fast as possible, imploding as they were at the time.

Their other grievous error was promising AM4 socket support until a specific date without foreseeing that the firmware size limit would cap how much microcode for different CPUs could be stored. Part of that can be blamed on the motherboard manufacturers, though, who were more interested in making fancy GUI skins for their UEFI interfaces that gobbled up huge chunks of the ROM. I have no idea why they thought that was a great idea; most people only look at it a handful of times during the system-build process and never again. We got along fine with text for 30+ years. We don't need rotating fan icons, animated flames, and other nonsense in a firmware setup.
 
Bulldozer for me, because I was really looking forward to upgrading to it, but it ended up being very delayed and such a letdown at launch after being hyped up so much by AMD's marketing. I was hoping their confidence in giving it the FX branding meant a return to form from when they were top dog with the Athlon FX.

In the end, I went Sandy Bridge, which was my first Intel CPU in 10 years, and I have not looked back since.
 
Zen 4 vs. Raptor Lake, I would say there is very little reason to go AMD.
They're both good platforms. Depending on what you're looking to do, either one is fine - I'd probably still lean AMD for productivity (I like real cores, but my workloads can't tell what an E core is and wouldn't know what to do with it if they did), but either one is fine.
 
They're both good platforms. Depending on what you're looking to do, either one is fine - I'd probably still lean AMD for productivity (I like real cores, but my workloads can't tell what an E core is and wouldn't know what to do with it if they did), but either one is fine.
An E-core is a normal core, just slower than the P-cores. It's still not terribly slow, and it's actually comparable in performance to Skylake-era cores.
On my 13600KF with the E-cores overclocked from 3.9GHz to 4.3GHz, I got performance between a Core i7 9700 and a 9700K in Cinebench on the E-cores alone. Games also run fine on them.

Applications do not need special support for E-cores.
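(If you want to verify that yourself, pinning a process to just the E-cores is all it takes; the OS addresses them like any other logical CPU. A Linux-only sketch, and the CPU numbers are hypothetical, since which IDs map to E-cores varies by part, so check lscpu first:)

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

/* Pin the current process to two chosen logical CPUs (Linux).
 * Whatever runs afterwards executes only on those cores; no
 * application-side "E-core support" is involved. */
int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(8, &set);   /* hypothetical E-core IDs; verify with lscpu */
    CPU_SET(9, &set);
    if (sched_setaffinity(0, sizeof set, &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    puts("now restricted to CPUs 8 and 9");
    return 0;
}
```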
 