One of Nvidia's Most Controversial GPUs

erek

The Nvidia GeForce4 MX misled many gamers: it was actually based on GeForce2 technology and slower than the GeForce3. But what about retro gaming with Windows 98?

 
 
It's not like they are treating us differently now :)
Nvidia fucking up with hardware by selling us less for more, AMD fucking up with software by paying devs not to use DLSS.
 
I don't remember any controversy, but if that was the first time previous-gen silicon was used in a new product lineup with a new name, it wouldn't surprise me. Since then this has been done many times on both sides. Not a big deal to me; as long as you've read some reviews and benchmark/gameplay testing, you will know what you are getting.

Ahh the days of single slot video cards, no extra power needed.
 
I don't remember any controversy, but if that was the first time previous-gen silicon was used in a new product lineup with a new name, it wouldn't surprise me. Since then this has been done many times on both sides. Not a big deal to me; as long as you've read some reviews and benchmark/gameplay testing, you will know what you are getting.

Ahh the days of single slot video cards, no extra power needed.
Have to remember the internet was still... young back then. Folks bought from brick and mortar stores without looking things up online regularly, and there were a lot of folks that got surprised when it wasn't what was expected.
 
I used to go to voodooextreme.com for hardware news, reviews, etc. When that site closed down, I switched to [H].

In the late 90's I think that fits your description of the internet. People on dial-up. I read magazines for information back then. But in 2000 there were tons of review websites already. I don't recall anything about the controversy, but I would have read a review.

I think I went from a Sierra Screamin' 3D, to a TNT2 Ultra, to a GeForce2 MX 400, then won a GeForce3 Ti 500 in a CPL drawing that only 67 people entered. That was a great card. Then won an ATi 9700 Pro at QuakeCon in a drawing, but didn't hear my name in the live drawing. So they emailed me and shipped it! Woo, my friends were soo jealous. I was playing tons of Quake 3 at the time, and the drivers for the Radeon kept giving me problems. It was such a weird problem too; it was like the screen was bleeding/melting. The image would break into small squares that would slide down the screen. It's hard to describe. But when trying to play in a competitive match, it was a problem. About 5 times over the course of 6 months, I would have to shut down and swap back in the GeForce3, often just in time for the scheduled match. Ugh. When a new driver would come out, I would pop the 9700 Pro back in. But it just kept happening. After 6 months of that I gave up on the 9700 Pro.

After the GeForce3/9700 Pro, I went to the Harley Davidson of graphics cards, aka the GeForce FX 5700 Ultra (I think). Never owned the GeForce4 MX, so maybe I read a review that said it's slower than your GeForce3 :)
 
Have to remember the internet was still... young back then. Folks bought from brick and mortar stores without looking things up online regularly, and there were a lot of folks that got surprised when it wasn't what was expected.
And a lot of us gamers were younger then too! I think I started looking at benchmarks around the GeForce 4 era when I got my 4600Ti... until then, I bought from stores when I needed to upgrade. I was in high school when I got my Voodoo 3 3000 AGP at CompUSA to replace whatever "hand me down" my dad gave me. Once my voodoo no longer cut it in games, bought a GeForce 2 Ti without much thought either.
 
It's not like they are treating us differently now :)
Nvidia fucking up with hardware by selling us less for more, AMD fucking up with software by paying devs not to use DLSS.
Both NVIDIA and AMD have been rebranding their old chips for decades. The R7 250X was literally an HD 7770 XT GHz Edition. Same GPU, memory, clocks, and everything.
 
I had MX integrated graphics on my Athlon XP system. I was new to gaming and didn't realize it wasn't supposed to be a slideshow until I saved up for a Ti 4400.
 
And a lot of us gamers were younger then too! I think I started looking at benchmarks around the GeForce 4 era when I got my 4600Ti... until then, I bought from stores when I needed to upgrade. I was in high school when I got my Voodoo 3 3000 AGP at CompUSA to replace whatever "hand me down" my dad gave me. Once my voodoo no longer cut it in games, bought a GeForce 2 Ti without much thought either.
And the benchmarks were super simple too - limited data to work with even if you searched!
 
Both NVIDIA and AMD have been rebranding their old chips for decades. The R7 250X was literally an HD 7770 XT GHz Edition. Same GPU, memory, clocks, and everything.
The whole 200 series was a rebrand of Pitcairn and Tahiti except for the 285, 290, and the 290X, but in all fairness, AMD rebranded them with the 300 series later on. :ROFLMAO:

And let's not forget G92 aping G80...
 
The whole 200 series was a rebrand of Pitcairn and Tahiti except for the 285, 290, and the 290X, but in all fairness, AMD rebranded them with the 300 series later on. :ROFLMAO:

And let's not forget G92 aping G80...
To be fair, G92 was on a 65nm process node. G80 was 90nm. If you want to get technical, the 200-series was another node shrink of the same Tesla microarchitecture down to 55nm. Fermi was the first real new microarchitecture after NVIDIA was able to rest on their laurels from how good Tesla was for the time.
 
To be fair, G92 was on a 65nm process node. G80 was 90nm. If you want to get technical, the 200-series was another node shrink of the same Tesla microarchitecture down to 55nm. Fermi was the first real new microarchitecture after NVIDIA was able to rest on their laurels from how good Tesla was for the time.
Yeah, I was aware of this, which is why I "splurged" on the 9800GTX @ $265 (bought one more for $160 for some SLI six months later), which was an 8800GTS 512MB with better clocks.

Then they came out with the 55nm GTS 250, which was a 9800GTX+ with better clocks and 1GB of VRAM. SMH, fuckin' Nvidia.
 
Ah, the GeForce4 MX - the worst time to release a card with no programmable pixel/vertex shader capability whatsoever, considering that games were already starting to require that to run at all, especially Xbox ports. Deus Ex: Invisible War and Thief: Deadly Shadows immediately spring to mind.

Some higher-profile games like Doom 3 and Half-Life 2 had fallbacks specifically to support those cards, but you lost a lot of performance, and also a lot of visual quality in the latter case due to the missing shader effects.

I still maintain that the GF4MX was probably NVIDIA's lowest moment, moreso than the succeeding FX series that was late to market and got their ass decisively handed to them by R300/Radeon 9700/9800, moreso than the GTX 970's 3.5 GB of actually viable VRAM, and moreso than the RTX 40 Series/Ada Lovelace in general, which is more a case of overpricing than of a blatantly lacking architecture.

For Win9x retrogaming, you're not likely to run anything that uses shaders on that OS; that was well into the 2000/XP age when Direct3D 8 and later took off. Thus, something that is essentially a glorified GeForce 2 really isn't too much of a setback, since you still have HT&L and it's just a spruced-up GeForce 256 at heart, a card that made both 3dfx and SGI utterly irrelevant while being very performant in games at the time.

I had MX integrated graphics on my Athlon XP system. I was new to gaming and didn't realize it wasn't supposed to be a slideshow until I saved up for a Ti 4400.
Having nothing more than an Athlon XP 1800+ with integrated GeForce 2 GTS until it got upgraded to a Radeon 9600 XT, I know where you're coming from.

Look at it this way: you weren't stuck with Intel Extreme Graphics! Now that would've taught you the true meaning of slideshow, not even HT&L on that (something Intel didn't have even on the much later GMA950).
 
I don't even remember this card. Me and my other low-paid friends back in the day were BIG fans of the original GeForce2 MX. It was the first time most of us ever entered the era of true PC 3D graphics. The first GPU I ever OC'ed and modded with bigger HS/fans. Sigh... memories :love:
 
The GTX 480: most had to game in shorts and a tank top with this heat monster.
Custom air or WC was needed for them. Once you had that they were a pleasure to game on. Loved the first one so much, I went ahead and bought another one.
And while, yes, the GTX 480 was hot, I find it funny that now we're mostly OK with 350W++ monsters, be it on the CPU or GPU side :LOL:.
 
Custom air or WC was needed for them. Once you had that they were a pleasure to game on. Loved the first one so much, I went ahead and bought another one.
And while, yes, the GTX 480 was hot, I find it funny that now we're mostly OK with 350W++ monsters, be it on the CPU or GPU side :LOL:.
The difference today is we have much better cooling solutions. My 4090 may be using 400W, but it rarely goes above 60C while gaming.
 
The difference today is we have much better cooling solutions. My 4090 may be using 400W, but it rarely goes above 60C while gaming.
So true. I have a 6800GT in one of my retro rigs and it identifies as a hair dryer when gaming and barely stays at 70C.... lol.
 
The difference today is we have much better cooling solutions. My 4090 may be using 400W, but it rarely goes above 60C while gaming.
Indeed, however the amount of watts being dissipated into the room as heat is higher (the GTX 480 was ~250W). So if someone was sweating with a 480, they will definitely sweat more with a 4090, for instance, in the same room.
Also, cases have come a long way since then and are better ventilated.
 
I don't remember any controversy, but if that was the first time previous-gen silicon was used in a new product lineup with a new name, it wouldn't surprise me. Since then this has been done many times on both sides.
AMD's Tahiti core was rebranded like 3 times.
The RX 480/470 as well.

Both were great chips.
 
Yeah, I remember my GeForce4 MX very well. Can't believe it's been 20 years. I got mine in March/April '03. Didn't really want it but somehow wound up with it in an XP 1700+ 512MB PC. About 11 months later, in Feb '04, I got a GeForce4 Ti 4200 and I've been happy ever since.
 
To then be followed up by the next embarrassment, the entirety of the GeForce FX series.

Followed by two gens' worth of Tesla rebrands after the 8-series (9-series and 200-series), Fermi v1 (400-series), Kepler sandbagging, the GTX 970 memory subsystem, the 20-series' lack of performance gains and massive price hike, and the 40-series for largely the same thing?
 
To then be followed up by the next embarrassment, the entirety of the GeForce FX series.

Followed by two gens' worth of Tesla rebrands after the 8-series (9-series and 200-series), Fermi v1 (400-series), Kepler sandbagging, the GTX 970 memory subsystem, the 20-series' lack of performance gains and massive price hike, and the 40-series for largely the same thing?
I know a lot of people hated the FX series, but upgrading to a 5900 non-Ultra from my Ti 4400 was a nice leap in performance, and it added DX9 support. I buy most of my hardware used, so maybe I don't mind that gen as much since I didn't pay full retail, but that card served me well and it was a fun card to overclock.
 
To then be followed up by the next embarrassment, the entirety of the GeForce FX series.

Followed by two gens' worth of Tesla rebrands after the 8-series (9-series and 200-series), Fermi v1 (400-series), Kepler sandbagging, the GTX 970 memory subsystem, the 20-series' lack of performance gains and massive price hike, and the 40-series for largely the same thing?
40 series is a massive jump at 4K gaming. Night and day difference between my 4090 and 3090. So I'd disagree on that one specifically.
 
40 series is a massive jump at 4K gaming. Night and day difference between my 4090 and 3090. So I'd disagree on that one specifically.
Yep, a hundred bucks more MSRP for 75% more raster and even more in RT. Plus DLSS 3 and better power efficiency. All the 4xxx cards are good but the 4060, in my opinion.
 
I used to go to voodooextreme.com for hardware news, reviews, etc. When that site closed down, I switched to [H].

In the late 90's I think that fits your description of the internet. People on dial-up. I read magazines for information back then. But in 2000 there were tons of review websites already. I don't recall anything about the controversy, but I would have read a review.

I think I went from a Sierra Screamin' 3D, to a TNT2 Ultra, to a GeForce2 MX 400, then won a GeForce3 Ti 500 in a CPL drawing that only 67 people entered. That was a great card. Then won an ATi 9700 Pro at QuakeCon in a drawing, but didn't hear my name in the live drawing. So they emailed me and shipped it! Woo, my friends were soo jealous. I was playing tons of Quake 3 at the time, and the drivers for the Radeon kept giving me problems. It was such a weird problem too; it was like the screen was bleeding/melting. The image would break into small squares that would slide down the screen. It's hard to describe. But when trying to play in a competitive match, it was a problem. About 5 times over the course of 6 months, I would have to shut down and swap back in the GeForce3, often just in time for the scheduled match. Ugh. When a new driver would come out, I would pop the 9700 Pro back in. But it just kept happening. After 6 months of that I gave up on the 9700 Pro.

After the GeForce3/9700 Pro, I went to the Harley Davidson of graphics cards, aka the GeForce FX 5700 Ultra (I think). Never owned the GeForce4 MX, so maybe I read a review that said it's slower than your GeForce3 :)
Nobody could turn a phrase quite like Billy - RIP.
 
4090 is a massive jump for 4K gaming. The performance and value gets worse and worse and worse and worse moving down the stack...
Well, I am not a "value" gamer, so I can't comment much on those cards beyond what I hear from those who have them. The 4090 was worth every penny to me, as I finally got the gaming experience I craved ever since I went 4K back on my 2080 Ti, and it cost the same as my 3090 did. But a few people I game with who have 4070 Tis and 4080s still saw quite a jump as well from the 2xxx and 3xxx series, but they are 1440p gamers mostly. Although one is 4K on a 4070 Ti but uses DLSS when possible.
 
40 series is a massive jump at 4K gaming. Night and day difference between my 4090 and 3090. So I'd disagree on that one specifically.
Sure, for the 4090 and 4080. Mid-range doesn't see much uplift, if any, over the 30-series once you go below the 4070, and even the 4070 is lackluster as it doesn't match the previous-gen flagship card like 70-class cards usually do, yet has a price hike, so my point stands.

Similarly, the 2080 Ti was really the only massive performance boost over the previous-gen 10-series in its stack. So I think it's a decent analogy. Feel free to disagree.

Well, I am not a "value" gamer, so I can't comment much on those cards beyond what I hear from those who have them. The 4090 was worth every penny to me, as I finally got the gaming experience I craved ever since I went 4K back on my 2080 Ti, and it cost the same as my 3090 did. But a few people I game with who have 4070 Tis and 4080s still saw quite a jump as well from the 2xxx and 3xxx series, but they are 1440p gamers mostly. Although one is 4K on a 4070 Ti but uses DLSS when possible.
So your point is the most expensive card in the stack... cool. I think my point still stands when the entire stack is considered: the 40-series is quite lame on the cost/performance metric when compared to the previous gen, except for the 4090 oddly enough. I agree the 4090 is entirely a new standard of performance with a massive uplift. That drops off massively once you go below the 4070 Ti... which, let's be honest, should be a 4070 (non-Ti) at a cheaper price. Last I checked, the 3070 (non-Ti) matched the 2080 Ti, while the 4070 (non-Ti) barely matches a 3080, and it's the 4070 Ti with a much higher cost that matches the previous-gen 3090 Ti. So it's definitely a misbrand and a price hike in the 40-series for basically every product in the stack below the 4080.

Again, feel free to disagree. I am looking at the entire product stack when making my statement, you seem to only be looking at the 4090 and using that to say everything is fine. It's not and everyone knows it.
 
Sure, for the 4090 and 4080. Mid-range doesn't see much uplift, if any, over the 30-series once you go below the 4070, and even the 4070 is lackluster as it doesn't match the previous-gen flagship card like 70-class cards usually do, yet has a price hike, so my point stands.

Similarly, the 2080 Ti was really the only massive performance boost over the previous-gen 10-series in its stack. So I think it's a decent analogy. Feel free to disagree.
The difference is that the TSMC 12nm used in Turing didn't have any real improvement over the 16nm used in Pascal. The draw of the Turing generation was the first generation of ray tracing support. Samsung 8nm in Ampere similarly wasn't a big improvement over TSMC 12nm, but the difference was it being a lot cheaper, allowing more transistors to be packed into a chip at a lower cost compared to TSMC. Moving back to TSMC 5/4nm for Lovelace, there is a conundrum: the process is a huge improvement over both TSMC 16/12nm and Samsung 8nm, but it is a lot more expensive to make chips on it. The result is that in order to keep the product segmentation such that people can afford to buy into the midrange, compromises need to be made, which is why we see such diminishing returns moving down the product stack with Lovelace. AMD is dealing with the same reality, but they are trying to compete with NVIDIA by being a loss leader like they always have. It's not sustainable, which tracks with the news that AMD won't be trying to compete in the high end with RDNA 4.
 
The difference is that the TSMC 12nm used in Turing didn't have any real improvement over the 16nm used in Pascal. The draw of the Turing generation was the first generation of ray tracing support. Samsung 8nm in Ampere similarly wasn't a big improvement over TSMC 12nm, but the difference was it being a lot cheaper, allowing more transistors to be packed into a chip at a lower cost compared to TSMC. Moving back to TSMC 5/4nm for Lovelace, there is a conundrum: the process is a huge improvement over both TSMC 16/12nm and Samsung 8nm, but it is a lot more expensive to make chips on it. The result is that in order to keep the product segmentation such that people can afford to buy into the midrange, compromises need to be made, which is why we see such diminishing returns moving down the product stack with Lovelace. AMD is dealing with the same reality, but they are trying to compete with NVIDIA by being a loss leader like they always have. It's not sustainable, which tracks with the news that AMD won't be trying to compete in the high end with RDNA 4.
Oh in that sense you are totally right about process nodes. However it doesn't fully explain the price hikes which also seem to be motivated by getting used to higher margins from the pandemic / crypto mining era. 4090 only increased $100 over the 3090 MSRP, so I am not sure I believe that process node costs are entirely to blame here.

And again, I think my point about performance for most of the stack rings true still. 4090 is an island unto itself. Using it to say the whole stack of 40-series products are "fine" seems odd.
 