AMD APU Summit Mantle Keynote And Tech Presentations Today

jww20 (Weaksauce) · Joined Jan 9, 2003 · Messages: 72
AMD APU Summit Live Stream November 13, 2013 Wednesday

1:15 pm – 1:45 pm PST (4:15 pm - 4:45 pm EST)

Keynote, Johan Andersson,
Technical Director, Electronic Arts
Battlefield 4 Developer


Rendering Battlefield 4 with Mantle

In this keynote, Johan will discuss how the Frostbite 3 game engine is using the low-level graphics API Mantle to deliver excellent performance in Battlefield 4 on PC and in future games from Electronic Arts. He will go through the motivation for developing and using a lower-level API and concrete details on how it fits into the architecture and rendering systems of Frostbite. Advanced optimization techniques and topics such as parallel dispatch, GPU memory management, and async compute will be covered, along with experiences of working with Mantle in general.



Mantle Unleashed: How Mantle Changes What Is Possible On The PC

GS-4145 Dan Baker, Tim Kipp, Guennadi Riguer Oxide / AMD

Wed 5:00 pm – 5:45 pm PST (8:00 pm - 8:45 pm EST)

“Mantle solves a problem PC developers have struggled with for years,” said Oxide co-founder Tim Kipp. “We’re closer to the graphics hardware than ever before, which lets us see dramatic increases in performance on Mantle-enabled systems. At the same time, the low cost of including Mantle support doesn’t prevent us from developing Nitrous for all modern graphics hardware, which makes it attractive to us as businesspeople.”

What makes Nitrous unique is its Simultaneous Work and Rendering Model (SWARM). Specifically, the engine issues render calls automatically from whatever CPU core is most available. This allows a vastly larger number of high-fidelity 3D objects to be rendered to the screen at the same time.
 
I'm excited for this!




AMD APU And Graphics Rockets With Mantle Tech
 
http://www.youtube.com/watch?v=tDPgJB2x7dQ
Lots of explaining without any solid numbers; I skipped through it. If you have no programming background, you can probably skip it.

One of the most promising things I can see from this is that Mantle will allow developers to split up draw calls between the different cores on the CPU. He also talks about trying to get AMD's competitor(s) on board. Also, I didn't realize everyone at AMD was German.

Also, here is a video of some guy from DICE showing how much Kaveri struggles with BF4:
http://www.youtube.com/watch?v=mqxnQfDM8Io
 
It was interesting to see the changes, but I was expecting to see how Mantle performed. Nothing. I would've taken an accelerated anything with Mantle to show the difference in performance. Gonna have to wait till the BF4 patch, I guess...
The waiting... ugh!
 
I was expecting to see how Mantle performed.
Nothing.
What? They had some numbers; Tech Report had a good article on it. Either stop trolling or improve those reading skills.

Jurjen Katsman of Nixxes, the firm porting Thief to the PC, mentioned a reduction in API overhead from 40% with DirectX 11 to around 8% with Mantle. He added that it's "not unrealistic that you'd get 20% additional GPU performance" with Mantle.
 
What? They had some numbers; Tech Report had a good article on it. Either stop trolling or improve those reading skills.

Whoa, don't go pointing fingers! My man was right! Statements like "not unrealistic that you'd get 20% additional GPU performance" are vague and not what we're looking for. It's not unrealistic that you call three people a troll every 24-hour period. See, not very solid, is it?
 
Vague, general performance numbers are the best you're going to get, since the game-to-game benefit will vary greatly.

If you want a specific number that fits all situations and games, you're asking for something that is impossible to deliver. The information that was given was as solid as it could possibly be.

atom said:
It's not unrealistic that you call three people a troll every 24-hour period.
It's pretty easy to check my post history and see this is false, so I hope you're trying to use hyperbole or something.
 
Puts Nvidia in an interesting position.

If, per Andersson, Nvidia will be able to achieve most of Mantle's gains by adding their extensions to Mantle, and the initial full Mantle SDK will be released in 2014 and given to an open standards body, there is likely to be very wide developer support for Mantle.

AMD's inclusionary policy is very powerful. Developers and publishers will just want Nvidia to get with the Mantle program and stop bothering them with proprietary solutions. If Nvidia spurns Mantle, it will be seen industry-wide as an ego-driven dick move trying to block what developers have been wanting for years and now have in their grasp.

No really hard numbers yet, but a 20% or better performance gain is firming up. Given ever-widening Mantle support, a growing list of Mantle games, and AMD's very aggressive AIB pricing, that 20% performance differential will take a chainsaw to Nvidia's AIB profit margins, or its market share, or both.

Nvidia either spurns Mantle and faces increasingly hostile developers and publishers and a deteriorating market position, or gets on board to the developers' delight and avoids a market-position disaster.

A no-win vs. a win-win. What oh what is JHH going to do? Enquiring minds want to know.
 
 
 
 
Bottom right corner of the post, select Edit, and just replace the text with something like "deleted". The post remains, but at least the wall of text disappears.
 
What I meant is that I was expecting a tech demo! (My fault for reading "Rendering Battlefield 4" and expecting such.)
Yes, numbers are great, and backed with graphs and the inner workings, I can believe there's truth to an actual quantitative increase in performance or optimization.

Kaveri and unified memory and Mantle. Looking nice.
 
Puts Nvidia in an interesting position.

If, per Andersson, Nvidia will be able to achieve most of Mantle's gains by adding their extensions to Mantle, and the initial full Mantle SDK will be released in 2014 and given to an open standards body, there is likely to be very wide developer support for Mantle.

AMD's inclusionary policy is very powerful ... and very persuasive.

Developers and publishers will just want Nvidia to get with the Mantle program and stop bothering them with proprietary solutions. If Nvidia spurns Mantle, it will be seen industry-wide as an ego-driven dick move trying to block what developers have been wanting for years and now have in their grasp.

No really hard numbers yet, but a 20% or better performance gain is firming up. Given ever-widening Mantle support, a growing list of Mantle games, and AMD's very aggressive AIB pricing, that 20% performance differential will take a chainsaw to Nvidia's AIB profit margins, or its market share, or both.

Nvidia either spurns Mantle and faces increasingly hostile developers and publishers and a deteriorating market position, or gets on board to the developers' delight and avoids a market-position disaster.

A no-win vs. a win-win. What oh what is JHH going to do? Enquiring minds want to know.

NV can get that same boost or better with Maxwell.
As I thought, this is a Hail Mary for AMD; outside of one engine and an indie game it has no support.
It IS NOT the PS4 or XB1 low-level SDK, and if it COULD support non-GCN hardware, why not their own? And IF it can, then you're not going to see gains much bigger than 10% in most cases.
DX11.2 already does a lot of this.
NV's solution is to use an ARM CPU to offload the overhead, doing the SAME THING without needing software support and making it work in nearly all games and 3D apps.

GCN can't be extended much more, and it's locked in with Mantle, other than more shader cores/texture units and ROPs.

The lack of any real demos is also an issue. Why not show it off if it's so good? Even a tech demo.

Or the gains are so small that, other than the one engine they got on board, no one else wants to deal with it.
NV can't just change to GCN, nor should they want to; GCN is outdated compared to what they have planned in Maxwell.

Want an API that's not controlled by MS and works on ALL OSes? We have that; it's called OpenGL, and it can get as low-level as you want.
And almost no one uses that.
Why? It's a pain to deal with.
AMD has to come up with some killer dev tools to combat what MS has, and the fact that most games can be ported right over from the XB1 now with little extra needed.
 
Any API has to have dev tools developed for it at base.
Expecting Mantle not to need tools built for it is ridiculous, as is claiming that they are more complex or slow out of some magical bias.

Andersson made the Frostbite engine and brought it to DICE, and he's also the one who is pitching for Mantle. He's a developer at base and has been working for 10 years. To believe this is a fancy AMD BS marketing tactic is foolish.

If you saw yesterday's developer video, Andersson kept repeating that it wasn't locked into AMD GCN hardware (kinda odd, but he should know).

http://www.hardocp.com/news/2013/11/14/amd_mantle_technical_deep_dive_apu13_event
Video from PCPer from the event.

It shows exactly how the coding is optimized and parallelized, and how it differs from how DX and OpenGL work. It should be good for CrossFire, APUs, and TriFire (which has been troublesome).

Waiting is the hard part... BF4 is gonna be the tech demo. At least it will be a game, and the [H] testing methodology will be better for comparison's sake.
 
What? They had some numbers; Tech Report had a good article on it. Either stop trolling or improve those reading skills.
The comparison presented here is actually fairly meaningless. Are we dealing with an efficient, well-designed D3D11-based renderer and a relatively efficient implementation of Mantle? In other words, is this a worst-case for D3D11 versus a best-case for Mantle? Is it the other way around, or in between?

I can make a D3D11 renderer perform fantastically well for a given workload or use the API quite sloppily and with little regard for efficiency. One can't really get a sense of how the API itself allows for greater performance without additional detail about its actual implementations.

NV can get that same boost or better with Maxwell
How? Maxwell brings a unified memory architecture, but the API outlook for this technology is currently uncertain. Other details about Maxwell are sparse at best.
 
@Wonderfield. The really good thing is that DX11 will still be around so you can just continue using it and eschew the Mantle headaches.
 
NV can get that same boost or better with Maxwell.
As I thought, this is a Hail Mary for AMD; outside of one engine and an indie game it has no support.
It IS NOT the PS4 or XB1 low-level SDK, and if it COULD support non-GCN hardware, why not their own? And IF it can, then you're not going to see gains much bigger than 10% in most cases.
DX11.2 already does a lot of this.
NV's solution is to use an ARM CPU to offload the overhead, doing the SAME THING without needing software support and making it work in nearly all games and 3D apps.

GCN can't be extended much more, and it's locked in with Mantle, other than more shader cores/texture units and ROPs.

The lack of any real demos is also an issue. Why not show it off if it's so good? Even a tech demo.

Or the gains are so small that, other than the one engine they got on board, no one else wants to deal with it.
NV can't just change to GCN, nor should they want to; GCN is outdated compared to what they have planned in Maxwell.

Want an API that's not controlled by MS and works on ALL OSes? We have that; it's called OpenGL, and it can get as low-level as you want.
And almost no one uses that.
Why? It's a pain to deal with.
AMD has to come up with some killer dev tools to combat what MS has, and the fact that most games can be ported right over from the XB1 now with little extra needed.
You really don't know what you're talking about. Oxide Games had a demo, and they demonstrated that the touted 100,000+ draw call number was sane and worked fine in reality.
 
It's pretty easy to check my post history and see this is false, so I hope you're trying to use hyperbole or something.

No, not hyperbole; I was simply using a sentence in the same context as the article posted, and again you took it as if I were saying something of value.
 
You really don't know what you're talking about. Oxide Games had a demo, and they demonstrated that the touted 100,000+ draw call number was sane and worked fine in reality.
Useful, but I question to what extent being able to submit 100,000 draw calls without bogging down too badly is actually useful.

Mantle's in an interesting position here: those passionate about delivering on efficiency are focused on reducing draw call overhead, and they're delivering on that pretty well. Yet in such applications, draw call overhead is probably the least of one's concerns: you're more likely bound by something else. On the other end of the spectrum, you have developers who aren't heavily pursuing draw-call reductions through instancing, atlasing/aliasing texture resources, and so forth. But to achieve performance in those scenarios, you have to tell those guys to use a low-level rendering API rather than the one that, by comparison, holds your hand. That's... not going to happen.

There's a lot of emphasis on draw call overhead reduction, but I really don't think that's all that meaningful. It's the metric being emphasized because it's the only thing that's really relatable, not because it's going to be all that impactful.
 
The number is bonkers for sure, but the scalability is obviously there.

And there are a lot of other nice features, like multi-GPU becoming more explicit. Use cases like having one card burning away at heavyweight global illumination while the other handles everything else are very cool. Getting rid of semantics like vertex buffers and just saying "it's a block of memory" is welcome. Like, whoa, we can just malloc GPU memory.

Mantle looks pretty well thought out to me. Especially moving toward hUMA, that kind of thinking is probably the way to go.
 
Getting rid of semantics like vertex buffers and just saying "it's a block of memory" is welcome. Like whoa we can just malloc GPU memory.
It's that level of being able to fiddle more directly that's going to enable new sets of interesting optimizations. Not just in terms of more efficient resource streaming, but it's going to permit some interesting opportunities for hardware discovery. Being able to really see what GCN wants in terms of data layout and where it starts to suffer, and having the opportunity to actually do something about that as opposed to that being at the whims of the API and the driver.

I'm curious as to whether we're going to see any kind of middleware built atop Mantle that tries to ease some of the complexity without sacrificing too much performance, as that could end up being a fairly interesting middle ground.
 
lol, and 20% is exactly what I have been thinking.
20% from porting an engine from one API to another is pretty good for no hardware changes.

There are a lot of interesting features in Mantle that aren't available in D3D or OpenGL.

I think that since they've stated it's a thin hardware abstraction layer and other vendors can use it, it's definitely a good direction and I hope that NVIDIA gets on board.
 
Nvidia's free to use Mantle - in their future GPUs. Given the development timetable of engineering a GPU I'm going to have to call Nvidia+Mantle "not anytime soon".
 
NV can get that same boost or better with Maxwell
Probably not; supposedly you're only looking at a typical 5% performance improvement with the ARM core working to alleviate driver overhead, and at best 10%, if some of the comments on B3D are to be believed. Maxwell will be faster than Hawaii for the most part due to the usual reasons: it's a new GPU with more ALUs, cache, shaders, and other architectural improvements. AMD will have to release a new GPU to compete against that, the performance or bang-vs-buck lead will switch, and everything starts over again.

The real big long-term problem for nV is that they're going to see little to no benefit from all the optimizations that are going to start leaking over from the console side of things to PCs.

As I thought, this is a Hail Mary for AMD; outside of one engine and an indie game it has no support.
Middleware engine support from a top developer is huge, and that alone shows your "Hail Mary" idea to be incorrect. All they need do is get Mantle support into Unreal and they'll probably have more than half the games out there using Mantle. Unreal already supports a bunch of APIs (DX9/10/11, OGL, Stage3D, JavaScript, WebGL); one more at this point would be no big deal for them. We already have an idea of how much it'd cost, $8 million or so, which even for AMD isn't a lot of money at all.

It IS NOT the PS4 or XB1 low-level SDK, and if it COULD support non-GCN hardware, why not their own? And IF it can, then you're not going to see gains much bigger than 10% in most cases.
Quotes from developers contradict you here; until shown otherwise, I'll take their word over yours.

DX11.2 already does a lot of this.
NV's solution is to use an ARM CPU to offload the overhead, doing the SAME THING without needing software support and making it work in nearly all games and 3D apps.
In theory DX11.2 does a lot of what Mantle is supposed to; in practice it's entirely ineffective. The problem with nV's ARM CPU is that it is going to have to coordinate what it's doing with the CPU and main memory over the PCIe bus, which is relatively high-latency and slow for that task. Memory/cache coherency is an absolute bitch, and there is no workaround nV can use short of paying developers to write games that support their hardware. That is why they're not going to get the same level of performance improvement that AMD is going to get with Mantle.

Want an API that's not controlled by MS and works on ALL OSes? We have that; it's called OpenGL, and it can get as low-level as you want... AMD has to come up with some killer dev tools.
Almost no one uses OGL because it's actually starting to get worse than DirectX for API overhead, and driver support isn't what it used to be. Mantle already works with DirectX HLSL compilers; they don't need to develop any more tools than that.
 
20% porting an engine from one API to another is pretty good for no hardware changes.

There are a lot of interesting features in Mantle that aren't available in D3D or OpenGL.

I think that since they've stated it's a thin hardware abstraction layer and other vendors can use it, it's definitely a good direction and I hope that NVIDIA gets on board.
The "lol" was just because 20% was what I was thinking it would actually be, not because 20% is bad.
 
The comparison presented here is actually fairly meaningless.
Based on what? He gave peak numbers for API-overhead reduction and a "real world" performance number too. Why don't you tell us how you'd "benchmark" an API, or at least link to a study that shows a better way to provide performance numbers? Otherwise your comment here doesn't make any sense.

Are we dealing with an efficient, well-designed D3D11-based renderer and a relatively efficient implementation of Mantle? In other words, is this a worst-case for D3D11 versus a best-case for Mantle? Is it the other way around, or in between?
So, I already mentioned that the performance benefit will vary quite a bit from game to game. You're asking for something no one can deliver. That is true for any API or middleware. It's just a tool; it's up to the developer to use it, and some will use it better than others.

I can make a D3D11 renderer perform fantastically well for a given workload or use the API quite sloppily and with little regard for efficiency. One can't really get a sense of how the API itself allows for greater performance without additional detail about its actual implementations.
So you know that implementation matters, but you're still asking for impossible numbers? What is AMD supposed to do? Build a "perfect" implementation and do a tech demo à la "New Dawn" and the like? That doesn't tell you anything about what performance would be like in actual playable games. At best it'd just be some PR marketing material and no one would care after a month or two. The information we've already been given is far better than something like that:
Jurjen Katsman of Nixxes, the firm porting Thief to the PC, mentioned a reduction in API overhead from 40% with DirectX 11 to around 8% with Mantle. He added that it's "not unrealistic that you'd get 20% additional GPU performance" with Mantle.
 
Based on what?
Based on exactly what I said.

Why don't you tell us how you'd "benchmark" an API, or at least link to a study that shows a better way to provide performance numbers?
I'm not suggesting that this isn't a good way to relate the two APIs in terms of performance. In fact, I'd argue it's the most relevant way to do so. I am, however, suggesting the lack of good information as to how each renderer is implemented makes the number fairly meaningless. Was that unclear?

You're asking for something no one can deliver.
Implementation details are "impossible to deliver"? Because that's really all I'd ask for.

I'm suggesting to you that the comment is not particularly telling of anything other than the fact that one D3D11 implementation of unknown quality is apparently slower than one Mantle implementation of unknown quality. You don't disagree with this?

So you know that implementation matters but you're still asking for impossible numbers?
I'd like to see implementation details more than I would concern myself about numbers. The numbers only become of any significance when there are implementation details.
 
Based on exactly what I said. I'm not suggesting that this isn't a good way to relate the two APIs in terms of performance. In fact, I'd argue it's the most relevant way to do so.
Just because you want impossible implementation-specific numbers doesn't make the information given meaningless. To say that the information given is meaningless, and then say that making a comparison between the two APIs is the most relevant way to compare performance, are two very different and contradictory things.

Implementation details are "impossible to deliver"?
Without the actual source code of the renderers (the DirectX and Mantle versions for a given game) and an understanding of their details, implementation-specific performance numbers are useless. Even if you do have the source and an understanding of a given renderer, it won't give you much, if any, insight into how another renderer will perform doing the same task, because they're written differently. Since neither you nor I nor anyone but the developers will have the source to the renderers, the details you want are impossible to get. The best anyone could realistically do has already been done and released.

I'm suggesting to you that the comment is not particularly telling of anything other than the fact that one D3D11 implementation of unknown quality is apparently slower than one Mantle implementation of unknown quality.
Do you have any reason to believe the DX11 renderer was gimped on purpose or poorly written? The guys doing the work seem pretty experienced and good, if their previous work is anything to go by. They sure as heck don't work for AMD either.
 