Far Cry 5 - 11 Older DX11 Video Cards Benchmarked

EMdubs

Hey people,

I did some Far Cry 5 benchmarking with some older DX11 cards and wanted to share my results. I tested at 1080p and 720p, using the Low, Normal, and High presets.

Here are the results for 1080p and 720p Low - the rest are in the video.

Test System:
5820K @ 4.0GHz
Asus X99-A
16GB DDR4 3000MHz
W10 Pro 64-bit
Latest NVIDIA + AMD Drivers
15.7.1 Drivers for the HD 5000 and 6000 cards.

Testing Methodology:
Captured 55 seconds of the built-in benchmark using FRAPS. The results shown are single runs rather than averages over multiple runs; only runs that produced consistent numbers were used.
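
As a side note for anyone wanting to reproduce this: FRAPS benchmark captures can write a frametimes CSV (a frame number plus a cumulative timestamp in milliseconds), and average FPS falls out of that directly. Below is a minimal C++ sketch of the idea; the file name is a placeholder, and this is just one plausible way to crunch the numbers, not necessarily the OP's exact process.

```cpp
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

int main()
{
    // "fraps_frametimes.csv" is a placeholder name; FRAPS frametime logs
    // have columns "Frame, Time (ms)" with cumulative millisecond timestamps.
    std::ifstream in("fraps_frametimes.csv");
    std::string line;
    std::getline(in, line); // skip the header row

    std::vector<double> timesMs; // one cumulative timestamp per frame
    while (std::getline(in, line)) {
        std::istringstream row(line);
        std::string frame, timeMs;
        if (std::getline(row, frame, ',') && std::getline(row, timeMs)) {
            timesMs.push_back(std::stod(timeMs));
        }
    }
    if (timesMs.size() < 2) {
        std::cerr << "Not enough frames in log\n";
        return 1;
    }

    // Average FPS over the capture window: frames rendered / elapsed seconds.
    double elapsedSec = (timesMs.back() - timesMs.front()) / 1000.0;
    std::cout << "Frames: " << timesMs.size()
              << ", Average FPS: " << (timesMs.size() - 1) / elapsedSec << "\n";
    return 0;
}
```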

Sorry if it's hard to read - took a screen capture of the video.



[720plow.PNG - 720p Low results chart]


[1080p low.PNG - 1080p Low results chart]



 
Hmm, wonder if I could run this on my old 3770 and GT 640 system...
 
Thanks for doing this. It's helpful for owners of old video cards to determine whether to upgrade, and also to decide what settings to use if they keep the old GPU.
 
Hmm, wonder if I could run this on my old 3770 and GT 640 system...
Sarcasm? If not, then it should be pretty clear from the benchmarks that you would stand no chance. A 6870 is literally twice as fast as a GT 640, and even it only gets 31 fps at 720p Low.
 
This shows how much Nvidia was gimping Ubisoft before AMD partnered with them. An HD 7970 destroying a GTX 780, which is a generation newer. Nvidia is still riding the DX11 train and ensuring developers use their proprietary simulated D3D Async Compute instruction, which does not benefit Radeon cards. True Asynchronous Compute is far superior, and we see that in DX12 titles like Forza 7 and Wolfenstein II: The New Colossus, where the Vega 64 destroys the 1080 Ti and the rest of the Radeon lineup greatly outperforms their Nvidia counterparts. Nvidia intentionally forces developers not to use True Asynchronous Compute, because if they did, AMD would show its true performance over Nvidia.
 
This shows how much Nvidia was gimping Ubisoft before AMD partnered with them. An HD 7970 destroying a GTX 780, which is a generation newer. Nvidia is still riding the DX11 train and ensuring developers use their proprietary simulated Asynch Compute, which does not benefit Radeon cards. True Asynchronous Compute is far superior, and we see that in DX12 titles like Forza 7 and Wolfenstein II: The New Colossus, where the Vega 64 destroys the 1080 Ti and the rest of the Radeon lineup greatly outperforms their Nvidia counterparts. Nvidia intentionally forces developers not to use True Asynchronous Compute, because if they did, AMD would show its true performance over Nvidia.

Citations needed...and a lot of them.
 
This shows how much Nvidia was gimping Ubisoft before AMD partnered with them. An HD 7970 destroying a GTX 780, which is a generation newer.
Ignoring the rest of your post and focusing on this, since the rest is pure conjecture.

When the GTX 780 was released, it was only around 10% faster than the 7970 GHz at STOCK. Since that time, AMD has gotten rid of a lot of the driver overhead that hampered their performance in DX11. Coupled with the continuing rehashes of the GCN architecture and the accompanying driver support, it's no surprise that the 7970 GHz is that much faster today. It was already a beast of a card back in the day.
 
This shows how much Nvidia was gimping Ubisoft before AMD partnered with them. An HD 7970 destroying a GTX 780, which is a generation newer.

Ignoring all your ignorance and trollposting: Far Cry titles have ALWAYS heavily favored AMD. Far Cry 3, Far Cry 4, Primal, and 5 all show this same behavior between Nvidia and AMD counterparts. In Far Cry 4 we saw things like the 7970 GHz/R9 280X performing at GTX Titan/780 Ti levels, and that was a heavily Nvidia-sponsored title.
 
This shows how much Nvidia was gimping Ubisoft before AMD partnered with them. An HD 7970 destroying a GTX 780, which is a generation newer. Nvidia is still riding the DX11 train and ensuring developers use their proprietary simulated Asynch Compute, which does not benefit Radeon cards. True Asynchronous Compute is far superior, and we see that in DX12 titles like Forza 7 and Wolfenstein II: The New Colossus, where the Vega 64 destroys the 1080 Ti and the rest of the Radeon lineup greatly outperforms their Nvidia counterparts. Nvidia intentionally forces developers not to use True Asynchronous Compute, because if they did, AMD would show its true performance over Nvidia.

True Asynchronous Compute?

Is this like a True Scotsman?
 
True Asynchronous Compute?

Is this like a True Scotsman?
The two are different. Microsoft's DX12 Asynchronous Compute is used by AMD; Nvidia's D3D Async Compute is only used by Nvidia and only benefits Nvidia cards. Asynchronous Compute works at the hardware level, while D3D Async Compute works at the software level. Nvidia's Pascal does not have Asynchronous Compute hardware; it only has firmware that they created their D3D Async Compute instruction for.
 
The two are different. Microsoft's DX12 Asynchronous Compute is used by AMD; Nvidia's Asynch Compute is only used by Nvidia and only benefits Nvidia cards. Asynchronous Compute works at the hardware level, while Asynch Compute works at the software level. Nvidia's Pascal does not have Asynchronous Compute hardware; it only has firmware that they created their Asynch Compute instruction for.

Documentation please.
 
"Microsoft's DX12 Asynchronous Compute" is just an optional extension to DX12 and not mandatory. It was, surprise, added to DX12 by AMD as per the rules of DirectX levels. Calling it Microsoft's DX12 is disingenuous, at best.
 
"Microsoft's DX12 Asynchronous Compute" is just an optional extension to DX12 and not mandatory. It was, surprise, added to DX12 by AMD as per the rules of DirectX levels. Calling it Microsoft's DX12 is disingenuous, at best.
It was developed by Microsoft, not AMD. Microsoft was supposed to implement it in DX11.2, which is why AMD released GCN with Asynchronous Compute hardware. Microsoft delayed the implementation until DirectX 12. This is also why AMD had to develop Mantle in 2013 to take advantage of Asynchronous Compute; they kind of got screwed on the deal. This is Microsoft's Asynchronous Compute page:
https://docs.microsoft.com/en-us/windows/desktop/direct3d12/user-mode-heap-synchronization
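
For what it's worth, at the API level "asynchronous compute" in D3D12 just means the application creates a compute command queue alongside its usual graphics queue and coordinates the two with fences, as that multi-engine page describes. Here's a minimal C++ sketch of the setup (error handling omitted; nothing here is quoted from the page). Whether work on the two queues actually overlaps is up to the GPU and driver, which is exactly the hardware-vs-software distinction being argued about here.

```cpp
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    // Create a device on the default adapter (error handling omitted).
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    // The usual graphics queue: accepts draw, compute, and copy work.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // A second, compute-only queue. Work submitted here MAY overlap with
    // graphics work on the DIRECT queue, but D3D12 only guarantees the
    // queues are independent; actual concurrency is up to hardware/driver.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    // Cross-queue ordering is expressed with fences, as described on the
    // multi-engine synchronization page linked above.
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
    return 0;
}
```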
 
Ignoring all your ignorance and trollposting: Far Cry titles have ALWAYS heavily favored AMD. Far Cry 3, Far Cry 4, Primal, and 5 all show this same behavior between Nvidia and AMD counterparts. In Far Cry 4 we saw things like the 7970 GHz/R9 280X performing at GTX Titan/780 Ti levels, and that was a heavily Nvidia-sponsored title.
You are correct. Far Cry is one of the Ubisoft series that performs well on both vendors, unlike Watch Dogs 1 and 2, The Division, all of the Assassin's Creeds, For Honor, and Ghost Recon.
 
Ignoring the rest of your post and focusing on this, since the rest is pure conjecture.

When the GTX 780 was released, it was only around 10% faster than the 7970 GHz at STOCK. Since that time, AMD has gotten rid of a lot of the driver overhead that hampered their performance in DX11. Coupled with the continuing rehashes of the GCN architecture and the accompanying driver support, it's no surprise that the 7970 GHz is that much faster today. It was already a beast of a card back in the day.
Yes, the HD 7970 was a beast. The GeForce GTX 680 was its direct competitor at the time of release. The GCN architecture proved to be superior 4-5 years later, but by then it was too late; many former Radeon users had jumped ship and gone to Nvidia. Nvidia performs better in DX11 because it uses Tier 3 DX11, while AMD still uses Tier 2 DX11. AMD did not think it was necessary to jump to Tier 3 DX11 when Nvidia did because AMD was working on Mantle.
 
Yes, the HD 7970 was a beast. The GeForce GTX 680 was its direct competitor at the time of release. The GCN architecture proved to be superior 4-5 years later, but by then it was too late; many former Radeon users had jumped ship and gone to Nvidia. Nvidia performs better in DX11 because it uses Tier 3 DX11, while AMD still uses Tier 2 DX11. AMD did not think it was necessary to jump to Tier 3 DX11 when Nvidia did because AMD was working on Mantle.

I'm still using a R9 280X, though I'm looking to upgrade in a couple months. I think I've gotten more life out of this card than any other card except maybe my old Radeon 9700 pro from wayyyyy back.
 
I'm also still using a 280x and planning on upgrading nearer the end of the year.

In my case, it replaced an HD5770 which also lasted me a long time.
 
Out of curiosity, what was the last card to come out with the 7970 GPU in it? I'm sure it got re-badged more than once.
 
Was it the 280X? And then the 285/380 or whatever had some minor tweaks. Not sure there were too many rebrands, haha.
 
I have a Tri-X R9 280 that, if I remember right, can do 1GHz. It can also drive my 4K Seiki at 60Hz using the Club 3D DP to HDMI 2.0 adapter, so if AMD keeps up driver support I see it having life a while longer; the 384-bit memory bus is why it was the monster it was.

But the OP should show a 2200G or 2400G vs. this older tech.
 
I am also running an R9 280X and wonder where it fits in this lineup of older cards?
 