AMD clobbers Nvidia in updated Ashes of the Singularity

Interesting. I'm curious to see if this advantage for AMD carries over to all DX 12 games. Only time will tell.
 
Haha, wow! First Hitman, then DOOM, and now this? I guess this is what you get once you stop having to deal with the black box of GameWorks.
 
Interesting, and I would love to see more development on this. I was always an Nvidia guy, but this can only be a good thing: back to competition, cheaper prices, more power, more choices, a victory for consumers! :D
 
At this point I am just glad that AMD has gotten some good publicity. Competition soon would be nice.

True, I've been with Nvidia for a while and I wouldn't mind giving AMD a go again. If the trend with DX12 games continues, I might switch to them next round.
 
Interesting, good to see this. I'm glad they invested in these features.
 
Well, I think the title is a bit inflammatory. I am an AMD supporter, all AMD hardware, but I prefer less hostility if and when possible. That being said, this is interesting and it does raise a few questions. But first I wanted to post this from the article, for posterity:
[benchmark chart from the article attached]


OK, now for the questions, mostly rhetorical:

1. Did AMD push for this release? I mean, it has been a major topic of late, async especially since it was noted that Nvidia had async disabled in AotS, so most of the benchmarks weren't proving that Nvidia could do async as most believed. Now we have a benchmark that can toggle async on and off for each vendor. It was only a week or two ago that AMD mentioned being the only vendor to run async (obviously a jab at Nvidia) when they announced their intent to actively create a DX12 path for the next Hitman game. So I guess it would be reasonable to think AMD was all for this AotS benchmark release, to hammer home their statement about Hitman.

2. Nvidia still has async disabled at the driver level. Sorry, gonna have to stop and post proof to deter the impending hate:
Update: (2/24/2016) Nvidia reached out to us this evening to confirm that while the GTX 9xx series does support asynchronous compute, it does not currently have the feature enabled in-driver. Given that Oxide has pledged to ship the game with defaults that maximize performance, Nvidia fans should treat the asynchronous compute-disabled benchmarks as representative at this time. We’ll revisit performance between Teams Red and Green if Nvidia releases new drivers that substantially change performance between now and launch day.
OK, now that that is out of the way. So... the question remains: can Nvidia's current cards run async with a POSITIVE effect on performance? Sure, they will likely enable a software solution, but that will only allow it to be switched on; it won't guarantee positive results, and certainly not to the degree AMD has shown. Of course, all of that hinges on whether there is actually a hardware solution on current Nvidia cards (most believe not).
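
For anyone wondering what "async compute" actually means at the API level, here is a rough D3D12 sketch (not Oxide's code; the pass names are made up). The point is simply that graphics and compute work get submitted on separate queues, and whether they actually execute concurrently is up to the driver and hardware scheduler, which is exactly what is in dispute with Maxwell:

```cpp
// Rough D3D12 sketch of "async compute": graphics work on a DIRECT queue,
// compute work on a separate COMPUTE queue, with a fence keeping the frame
// ordered. Pass names are invented for illustration.
#include <d3d12.h>

void SubmitFrame(ID3D12CommandQueue* gfxQueue,      // D3D12_COMMAND_LIST_TYPE_DIRECT
                 ID3D12CommandQueue* computeQueue,  // D3D12_COMMAND_LIST_TYPE_COMPUTE
                 ID3D12CommandList*  shadowPass,    // graphics work with no compute dependency
                 ID3D12CommandList*  particleSim,   // compute work (e.g. a simulation pass)
                 ID3D12CommandList*  mainPass,      // graphics work that reads the compute results
                 ID3D12Fence* fence, UINT64& fenceValue)
{
    // Independent graphics and compute submissions: the GPU may overlap these
    // if it can. A driver that "disables async compute" can simply serialize
    // the two queues instead -- same results, no concurrency benefit.
    gfxQueue->ExecuteCommandLists(1, &shadowPass);
    computeQueue->ExecuteCommandLists(1, &particleSim);
    computeQueue->Signal(fence, ++fenceValue);

    // The main pass consumes the compute output, so the graphics queue waits
    // on the fence before executing it.
    gfxQueue->Wait(fence, fenceValue);
    gfxQueue->ExecuteCommandLists(1, &mainPass);
}
```

Which is why "supports async compute" and "benefits from async compute" are not the same claim: the second queue can be exposed and still be serialized behind the scenes.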
 
It's definitely something to follow along with.

Will Pascal have an implementation like AMD or will it be more on the software side of things?

Too early to tell, but good to see AMD is still investing.
 
This, plus the fact that multi-core CPUs benefit from DX12, means my FX-8350 machine may keep up with my i7-4790K machine in some games in the future.
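
As a side note on the multi-core point: a big part of why DX12 scales with cores is that command lists can be recorded on worker threads and submitted in one batch, instead of the driver doing most of that work on one thread as under DX11. A minimal, hypothetical sketch (RecordDrawsForChunk is a made-up placeholder, not a real engine function):

```cpp
// Hand-wavy sketch of multithreaded command-list recording under D3D12.
// Each worker thread records against its own allocator; the main thread
// submits the whole batch in one call.
#include <windows.h>
#include <d3d12.h>
#include <thread>
#include <vector>

void RecordFrameInParallel(ID3D12Device* device, ID3D12CommandQueue* queue,
                           unsigned workerCount)
{
    std::vector<ID3D12CommandAllocator*>    allocators(workerCount);
    std::vector<ID3D12GraphicsCommandList*> lists(workerCount);
    std::vector<std::thread>                workers;

    for (unsigned i = 0; i < workerCount; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i], nullptr, IID_PPV_ARGS(&lists[i]));
        workers.emplace_back([&lists, i] {
            // RecordDrawsForChunk(lists[i], i);  // this thread's slice of the scene
            lists[i]->Close();                    // finish recording on the worker
        });
    }
    for (auto& t : workers) t.join();

    // One cheap submission from the main thread; the expensive per-draw CPU
    // work already happened in parallel above.
    std::vector<ID3D12CommandList*> batch(lists.begin(), lists.end());
    queue->ExecuteCommandLists(static_cast<UINT>(batch.size()), batch.data());
}
```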
 
The /r/AMD sub is having a field day with this story.
Oh, and as expected, there's yet another hate-thread on Kyle in that sub-reddit. The angry 13-year-old behavior of /r/AMD never ceases to amaze me, haha.

And in true testament to that sub-reddit's focus: the anti-HardOCP thread is at #2 while the actual benchmark threads are at #10 and #11.
 
Well, it seems there was an attempt to deflate AMD's party, but it didn't last long:

Instrument error: AMD, FCAT, and Ashes of the Singularity benchmarks | ExtremeTech
First, some basics. FCAT is a system NVIDIA pioneered that can be used to record, playback, and analyze the output that a game sends to the display. This captures a game at a different point than FRAPS does, and it offers fine-grained analysis of the entire captured session. Guru3D argues that FCAT’s results are intrinsically correct because “Where we measure with FCAT is definitive though, it’s what your eyes will see and observe.” Guru3D is wrong. FCAT records output data, but its analysis of that data is based on assumptions it makes about the output — assumptions that don’t reflect what users experience in this case.

AMD’s driver follows Microsoft’s recommendations for DX12 and composites using the Desktop Windows Manager to increase smoothness and reduce tearing. FCAT, in contrast, assumes that the GPU is using DirectFlip. According to Oxide, the problem is that FCAT assumes so-called intermediate frames make it into the data stream and depends on these frames for its data analysis. If V-Sync is implemented differently than FCAT expects, the FCAT tools cannot properly analyze the final output. The application’s accuracy is only as reliable as its assumptions, after all.

I found the article interesting, more so because of the DirectFlip and DWM part of the discussion. Learning new things.
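
In case it helps anyone else following the DirectFlip/DWM part: here is a speculative sketch of the presentation setup being argued about. D3D12 swap chains use the flip model, and whether a given frame is then flipped straight to the display (DirectFlip/independent flip) or composited by DWM is decided by the OS and driver at present time. The function name and parameter choices below are illustrative only, not anyone's shipping code:

```cpp
// Speculative sketch of a flip-model swap chain (the only kind D3D12 accepts).
// The swap effect enables DirectFlip/independent flip when conditions allow;
// otherwise the frame goes through desktop composition, which is apparently
// the case FCAT's analysis wasn't accounting for.
#include <windows.h>
#include <d3d12.h>
#include <dxgi1_4.h>

IDXGISwapChain1* CreateFlipModelSwapChain(IDXGIFactory2* factory,
                                          ID3D12CommandQueue* queue, HWND hwnd)
{
    DXGI_SWAP_CHAIN_DESC1 desc = {};
    desc.BufferCount      = 2;                               // double-buffered
    desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.BufferUsage      = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    desc.SampleDesc.Count = 1;
    // Flip-model present: whether the buffer is flipped directly to the screen
    // or composited by DWM is decided at present time by the OS and driver,
    // not by this flag alone.
    desc.SwapEffect       = DXGI_SWAP_EFFECT_FLIP_DISCARD;

    IDXGISwapChain1* swapChain = nullptr;
    // For D3D12 the swap chain is created against the command queue, not the device.
    factory->CreateSwapChainForHwnd(queue, hwnd, &desc, nullptr, nullptr, &swapChain);
    return swapChain;
}
```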
 
Well hang on, Guru3D updated their review with the following:
Update: hours before the release of this article we got word back from AMD. They have confirmed our findings. Radeon Software 16.1 / 16.2 does not support a DX12 feature called DirectFlip, which is mandatory and the solve to this specific situation. AMD intends to resolve this issue in a future driver update.

That paints an entirely different narrative than the one ExtremeTech is insinuating here :p
 
Well hang on, Guru3D updated their review with the following:


That paints an entirely different narrative than the one ExtremeTech is insinuating here :p
Actually, both seem to be playing with words. DirectFlip is part of DWM, but it's the compositing part that is interesting, or misleading. Before DWM there was DCE (the desktop compositing engine), so that seems misleading. But there are also the requirements of DX12/WDDM 2.0:

Direct3D 12 API, announced at Build 2014, requires WDDM 2.0. The new API will do away with automatic resource-management and pipeline-management tasks and allow developers to take full low-level control of adapter memory and rendering states.

Which seems to negate the need for DirectFlip to be mandatory. Honestly, I can't find anything that says it is, or what compositing means in relation to DirectFlip under DX12.
 
Nvidia performed wonderfully. Very few people buy cards based on future titles, especially during the two or so years Maxwell was/is king. This allowed Nvidia to focus on hardware features that mattered in the current ecosystem.

AMD misplayed it, focusing on features that had no bearing on the current marketplace, and therefore lost significant market share. By the time these titles launch we may very well see Pascal, or at worst the wait for Pascal will be relatively short. Pascal will likely have an updated implementation. If true, this was the smartest bet by Nvidia.
 
Nvidia performed wonderfully. Very few people buy cards based on future titles, especially during the two or so years Maxwell was/is king. This allowed Nvidia to focus on hardware features that mattered in the current ecosystem.

AMD misplayed it, focusing on features that had no bearing on the current marketplace, and therefore lost significant market share. By the time these titles launch we may very well see Pascal, or at worst the wait for Pascal will be relatively short. Pascal will likely have an updated implementation. If true, this was the smartest bet by Nvidia.

This is about how I feel, too. I've always bought my cards based on performance in current games which is why I've gone with Nvidia the last few rounds. Maybe this will change? Who knows.
 
The /r/AMD sub is having a field day with this story.
Oh, and as expected, there's yet another hate-thread on Kyle in that sub-reddit. The angry 13-year-old behavior of /r/AMD never ceases to amaze me, haha.

And in true testament to that sub-reddit's focus: the anti-HardOCP thread is at #2 while the actual benchmark threads are at #10 and #11.


Really, how are your antics better?

Really tired of the childish snideness of people who get joy out of calling other people out.

And between the two articles, the ExtremeTech one is at least written in legible English. From a journalism standpoint, yeah, that matters in terms of credibility.

To each their own. AMD is poised for the future, provided they can convince people to take advantage of the tech already baked into their cards. They already have a path back in with DX12 adopting Vulkan into itself. But we really won't know until more games come out, which cooler, smarter heads have already said.

As far as Kyle goes, or any site for that matter, it's my own personal belief that you don't know who is really on the "take" anymore, and even the way the gaming "journalism" industry operates makes me incredibly skeptical of any tertiary sites that could still tie into that industry. Then again, it's not like Kyle could come here and say "I'm clean!" and have a skeptic like me believe him, so I at least appreciate the no-win situation an honest tech journalist can often find themselves in.
 
Really, how are your antics better?

Really tired of the childish snideness of people who get joy out of calling other people out.

And between the two articles, the ExtremeTech one is at least written in legible English. From a journalism standpoint, yeah, that matters in terms of credibility.

To each their own. AMD is poised for the future, provided they can convince people to take advantage of the tech already baked into their cards. They already have a path back in with DX12 adopting Vulkan into itself. But we really won't know until more games come out, which cooler, smarter heads have already said.

As far as Kyle goes, or any site for that matter, it's my own personal belief that you don't know who is really on the "take" anymore, and even the way the gaming "journalism" industry operates makes me incredibly skeptical of any tertiary sites that could still tie into that industry. Then again, it's not like Kyle could come here and say "I'm clean!" and have a skeptic like me believe him, so I at least appreciate the no-win situation an honest tech journalist can often find themselves in.

It's funny, because reddit is having a field day but [H] gave the Nano a gold award.

Most people were pissed because what AMD tried to do was straight-up corrupt the entire review industry: give us good reviews or we'll cut you off.

There's always some kind of bias, but [H] seems to have always been as down the middle as possible and, to me, gives praise where it's deserved.
 
Anyone who takes anything posted on reddit as knowledge has issues. I stepped into and out of that cesspool in less than a week. Yes, I am aware that there are exceptions, but that does not excuse the festering vacuole that is Reddit.

On the actual news: that is great. I loved my old Radeon HD until its fans broke. I get bored knowing that my next GPU will be nVidia. I get excited that some competition might finally push single GPUs into 4K performance territory.

This is win/win, only an idiot stands religiously beside a brand.
 
Well hang on, Guru3D updated their review with the following:


That paints an entirely different narrative than the one ExtremeTech is insinuating here :p

DirectFlip is for Windows 8, not Windows 10, and is a WDDM 1.2 required feature, not a WDDM 2.0 one.
 
It's funny, because reddit is having a field day but [H] gave the Nano a gold award.

Most people were pissed because what AMD tried to do was straight-up corrupt the entire review industry: give us good reviews or we'll cut you off.

There's always some kind of bias, but [H] seems to have always been as down the middle as possible and, to me, gives praise where it's deserved.


In general, yes, I'd agree with that regarding [H].

However, one thing I've learned is nothing lasts forever. Just because they have been, doesn't mean they always will be.


Hell, for the longest time, Newegg was the shit for me in terms of taking care of its customers. Now, however, it's just plain shit. They got one last build out of me based on a no-longer-deserved reputation, and I won't patronize them any further because of the shit they pulled. Anyway, I digress. My point is that [H] has been mostly fair until now, although there have been times over the last couple of years when I've detected a bit of anti-AMD sentiment in Kyle's personal comments here. However, if AMD did indeed try to do what you said they tried, then to be honest, I wouldn't blame Kyle for that; that would piss me off too.

Anyway, I really want AMD to bounce back, compete, and yes, even kick the living shit out of Intel and Nvidia. But it has nothing to do with being an AMD fanboi. Rather, I am a fan of competition, and I remember fondly when AMD first came out of nowhere with the Athlon and had Intel, who at that point was pouring more $$ into marketing than engineering, on their heels for the better part of five years, because they were offering better performance at a cheaper cost, which made it cheaper and easier to build performance machines back in the day.

I long for another era like that, and that can only happen if someone comes along and shakes things up the way AMD did to Intel back then. So if they found a way to convince the industry to make better use of tech in their architecture that had otherwise been lying dormant, by convincing Microsoft to adopt Mantle's successor Vulkan into DX12, I am all for AMD doing so. I'm all for those open-source initiatives that are really designed to get the industry to make games that will more directly benefit their cards.

Because at the end of the day, it's no different from what Nvidia has been quietly doing with GameWorks and some of their shenanigans, or some of the back-room threats Intel would make to try to get manufacturers not to carry AMD back in the glory days. It's a position of fear, and that fear turns into action which benefits us, the consumers, when a previous monopoly is challenged by an up-and-comer or a comeback kid who can go toe to toe.
 
I upgraded from an i3 to an FX-8320E recently, and gaming performance went from barely being able to run a game at 1080p on minimum settings to being able to max games out across three screens while running multiple game servers at the same time. All parts other than the mobo and processor were held constant.
yeah, no.
 
yeah, no.

Yeah, yes. I am on a 6700K at home now, coming from an FX-8350, while doing 4K, and guess what? There is little to no difference in gaming. Mind you, other things feel a little quicker, but that could be down to the newer chipset, DDR4, and improved IMC more than anything else. I like the upgrade, but I have been regretting it a bit here and there because of the $800 I spent but did not need to spend yet.
 
Yeah, yes. I am on a 6700K at home now, coming from an FX-8350, while doing 4K, and guess what? There is little to no difference in gaming. Mind you, other things feel a little quicker, but that could be down to the newer chipset, DDR4, and improved IMC more than anything else. I like the upgrade, but I have been regretting it a bit here and there because of the $800 I spent but did not need to spend yet.
little to no difference between CPUs in a GPU constrained workload. wow, what an incredible observation you've made.
Believe it or not, it's true. Better yet, GPU-only tests ran exactly the same; only in games where the CPU was needed (aka all of them) did performance drop.
what i3?
 
Interesting, and I would love to see more development on this. I was always an Nvidia guy, but this can only be a good thing: back to competition, cheaper prices, more power, more choices, a victory for consumers! :D

Agreed. As a current Nvidia user, I welcome these developments. I'm not likely to switch to AMD anytime soon, for the simple fact that I've recently purchased a G-Sync monitor, but competition is only going to drive innovation and performance ahead faster, and typically at a lower cost.
 
Out of curiosity: would a company jump to hardware that has lower market saturation, in a market that's already beginning to stagnate?
 
Out of curiosity: would a company jump to hardware that has lower market saturation, in a market that's already beginning to stagnate?
This instance isn't typical, for sure, in that the developer is interested in making a cutting-edge game using new techniques. I don't think they are interested in the slightest in who sold what or what share of the market has which hardware. Sure, it must be considered, but it seems to me, given their forthright nature thus far and the fact that they have been the only ones to tout the new features and abilities now accessible with DX12, that market share has little to do with it.
 
Haswell. i3-4130
there are so many things wrong with what you've said i really don't even want to bother. in games where the 8320 can be used to its potential, which is a very limited set, yes it would be quite a bit faster. in the vast majority of games, which use 4 or fewer threads, i don't see in any way how there could be a substantial difference between the two, as the 4130 is 46% faster than the 8320 per thread.

i think these convenient videos illustrate my point pretty well: 4130 and 8320 with a 280X, 4130 and 8320 with a GTX 960. of course the 8320's performance will increase in the future, but you're talking about right now, as well as in a much more GPU-limited scenario; what you're saying is an absurd exaggeration.
 
I don't think nVidia gives a damn right now about spending resources to shove out a pissing contest driver for an in-development game in order to showcase a beta level test to the public on current hardware that will be "old news" when said game actually gets released because the next gen GPUs will be on the shelves...and I can't say that I blame them one bit. I'll wait for multiple DX12 titles to get tested on Arctic Islands and Pascal before I get all giddy over it.
 
there are so many things wrong with what you've said i really don't even want to bother. in games where the 8320 can be used to its potential, which is a very limited set, yes it would be quite a bit faster. in the vast majority of games, which use 4 or fewer threads, i don't see in any way how there could be a substantial difference between the two, as the 4130 is 46% faster than the 8320 per thread.

i think these convenient videos illustrate my point pretty well: 4130 and 8320 with a 280X, 4130 and 8320 with a GTX 960. of course the 8320's performance will increase in the future, but you're talking about right now, as well as in a much more GPU-limited scenario; what you're saying is an absurd exaggeration.
But you can OC the 8320 a great deal, getting to 4.4GHz (I see 4.6GHz quoted as average, but I'm not feeling that), which in most cases would overcome any real shortcomings. Not least, the i3 would likely be paired with a less-than-stellar mobo based on cost and the fact that it is an i3, whereas the 8320 would require better than a basement-variety mobo to operate with any amount of decency. Outright denying a claim based on just the processors is shortsighted, just as much as stating there is a huge difference when little is known of the whole platform. And sorry to say, but anyone recommending an i3 for gaming over even an 8320 is just :facepalm:.
 
I don't think nVidia gives a damn right now about spending resources to shove out a pissing contest driver for an in-development game in order to showcase a beta level test to the public on current hardware that will be "old news" when said game actually gets released because the next gen GPUs will be on the shelves...and I can't say that I blame them one bit. I'll wait for multiple DX12 titles to get tested on Arctic Islands and Pascal before I get all giddy over it.
But the game releases next month. And I seriously doubt it is because they don't care. It is far more likely that they really can't do async, and that DX12 will put AMD in a far better performance position. That's why they are saying nothing, and likely why they have the feature turned off at the driver level.
 
there are so many things wrong with what you've said i really don't even want to bother. in games where the 8320 can be used to its potential, which is a very limited set, yes it would be quite a bit faster. in the vast majority of games, which use 4 or fewer threads, i don't see in any way how there could be a substantial difference between the two, as the 4130 is 46% faster than the 8320 per thread.

i think these convenient videos illustrate my point pretty well: 4130 and 8320 with a 280X, 4130 and 8320 with a GTX 960. of course the 8320's performance will increase in the future, but you're talking about right now, as well as in a much more GPU-limited scenario; what you're saying is an absurd exaggeration.

I'm just reporting what I see showing up on the screens in front of me. With the i3, my Fury actually ran passively 90% of the time because it was constantly waiting on the CPU and spending a lot of time idling. It's not just one game, either; all of the games I've played in the couple of months since I upgraded have had a huge performance increase. My 4GHz overclock probably helps a bit.

But you can OC the 8320 a great deal, getting to 4.4GHz (I see 4.6GHz quoted as average, but I'm not feeling that), which in most cases would overcome any real shortcomings. Not least, the i3 would likely be paired with a less-than-stellar mobo based on cost and the fact that it is an i3, whereas the 8320 would require better than a basement-variety mobo to operate with any amount of decency. Outright denying a claim based on just the processors is shortsighted, just as much as stating there is a huge difference when little is known of the whole platform. And sorry to say, but anyone recommending an i3 for gaming over even an 8320 is just :facepalm:.

I can only get my processor to 4.0GHz, limited by the motherboard VRMs; darn mATX form factor not having any good AM3+ motherboards. I can actually get 4.7GHz out of the processor, but it overheats the balls out of the 5-phase VRMs.
 
But the game releases next month. And I seriously doubt it is because they don't care. It is far more likely that they really can't do async, and that DX12 will put AMD in a far better performance position. That's why they are saying nothing, and likely why they have the feature turned off at the driver level.

Is it confirmed to be releasing next month? All I can find is the statement "When it's done" (ala DNF) on the website.
 
Nvidia performed wonderfully. Very few people buy cards based on future titles, especially during the two or so years Maxwell was/is king. This allowed Nvidia to focus on hardware features that mattered in the current ecosystem.

AMD misplayed it, focusing on features that had no bearing on the current marketplace, and therefore lost significant market share. By the time these titles launch we may very well see Pascal, or at worst the wait for Pascal will be relatively short. Pascal will likely have an updated implementation. If true, this was the smartest bet by Nvidia.
Look at performance in recent titles. You'll see that the trend has already happened; no waiting for future games. And not everyone upgrades every generation. People who bought Kepler got a short-lived card, and it's looking like Maxwell is the same. I wouldn't be happy with that if I'd bought a 780, then upgraded to a 980, and realized I could have bought a 290X and been in the same (or now better) performance category all along without replacing it.
 
Look at performance in recent titles. You'll see that the trend has already happened; no waiting for future games.
wut

according to gamegpu, since they're the only source i can find that does reliable benchmarks for most new games:
at 1440p, a stock Fury X is faster than a stock 980 Ti in 3 out of 15 games (all the games where gamegpu tested the Fury X) up to the hitman beta, and one of those (doom) will almost assuredly be fixed by drivers because a 980 Ti being slower than a 290 and a 970 being slower than a 7950 are obviously not realistic results. hitman will probably improve as well. in a few of those the 980 Ti was almost 30% faster, too.
at 1440p, a stock 290X (290 in a couple of games) is faster than a stock GTX 970 in 5 out of 15 games; in the rest the 970 was either the same or faster.
gamegpu benchmarks imgur

then you look at custom cards which are basically all you can buy for nvidia and see that most 980 Tis can be overclocked to at least 20% faster than stock (some factory overclocked cards are 20% faster out of the box), up to ~27%. the Fury X gains about 5% performance from overclocking according to tpu.
290X has similar overclocking potential to the 970 according to tpu.

so, wut?
 