Wafer on Wafer layering

Just read this today and instantly thought of all the talk about AMD leveraging their Infinity Fabric on GPUs; well, there goes that idea.

This will hopefully be a big boon for both GPU makers, but as things stand right now Nvidia will still rule the roost until AMD manages to come up with a high-end GPU that can compete with the top-end Tis and Titans.

https://pcgamesn.com/nvidia-amd-tsmc-3d-gpu?amp
 
Looks very interesting. I especially like the way it wouldn't necessarily be seen as multiple GPUs, but as one single chip. This could very well be the future moving forward.
 
The main roadblock to this is cooling. Thicker silicon is going to take a LOT more effort to cool down.
 
As per the article:

"massively more powerful graphics cards without increasing the size or density of their silicon"

Without increasing the size or density, I imagine that cooling such a chip wouldn't take much more effort, as long as the chip doesn't require twice the power to drive it. I am of course just speculating, since very little information has been released about this new technology so far.
 
They aren't increasing the 3D density of the chip, but they are increasing the number of transistors per unit of 2D area, meaning you'll have two chips' worth of power but the cooler can only contact a single chip's worth of area. Not to mention you'll have a chip full of hot transistors whose closest connection to a heatsink is another chip full of hot transistors. Rough numbers below.

I'm not saying it's a problem they haven't looked at or even solved, but I don't see them advertising the solution.
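
To put rough numbers on that, here's a quick back-of-envelope sketch in Python. Every figure in it is an assumption I picked for illustration (die size, power budget), not anything from the article; the point is just that if stacking doubles the transistor count over the same footprint and total power scales with it, the heat flux through the one surface the cooler touches doubles too.

# Back-of-envelope heat-flux comparison: one die vs. two stacked dies.
# All numbers are illustrative assumptions, not measured values.

die_area_mm2 = 500.0     # assumed footprint of a big GPU die
power_single_w = 250.0   # assumed power budget for one die, W

area_cm2 = die_area_mm2 / 100.0  # 100 mm^2 per cm^2

# Heat flux through the top surface the cooler actually touches.
flux_single = power_single_w / area_cm2

# A second stacked die doubles the transistors but not the contact
# area, so if total power doubles, so does the flux at that surface.
power_stacked_w = 2 * power_single_w
flux_stacked = power_stacked_w / area_cm2

print(f"single die:   {flux_single:.0f} W/cm^2")
print(f"stacked pair: {flux_stacked:.0f} W/cm^2")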
 
As Kazeohin says, the thermal conductivity of silicon isn't so flash. I don't see this being viable for the high end. Maybe for layering lower-power areas, freeing up more area for the main logic.
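
For what it's worth, bulk silicon actually conducts heat reasonably well (roughly 150 W/m·K, about a third of copper), so a minimal 1-D conduction sketch suggests the silicon layer itself isn't the worst of it. The thickness, area, and power below are assumptions I made up for illustration:

# 1-D conduction estimate for heat crossing one thinned die of silicon
# on its way to the heatsink. R = t / (k * A). Assumed numbers only.

k_si = 150.0          # bulk thermal conductivity of silicon, W/(m*K)
thickness_m = 100e-6  # assumed thinned-die thickness: 100 microns
area_m2 = 500e-6      # assumed 500 mm^2 footprint, in m^2

r_cond = thickness_m / (k_si * area_m2)  # K/W

power_w = 250.0       # assumed heat coming from the buried die, W
delta_t = power_w * r_cond

print(f"conduction resistance: {r_cond * 1000:.2f} mK/W")
print(f"temperature rise across the layer: {delta_t:.2f} K")

On those assumptions the silicon itself only adds a fraction of a degree; the nastier parts are more likely the low-conductivity bonding layer between the dies and the doubled heat flux from the earlier post.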
 
I believe that something along these lines is the way forward. It is quite obvious that game developers have no interest in mGPU support in games, and seeing as we are getting close to the limit of how small they can make these chips, some sort of multi-layered chip is the only way forward, short of developing a whole new process.
 
I wonder if we might see coolers on both sides of the card? A 1.5-2 slot cooler on the front like current cards, cooling the die and PCB components, and another 0.5-1 slot cooler on the back for the die (or heatpipes, a larger passive sink, etc.)?

edit: punctuation fail
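
A cooler on each face would effectively put two thermal paths in parallel, which is easy to sketch. Both resistance values and the power figure below are made-up assumptions; the back path gets a worse number since heat would have to escape through the package and PCB side:

# Two coolers on opposite faces act like parallel thermal resistances.
# All values are assumptions for illustration.

r_front = 0.12  # K/W, assumed die -> front heatsink path
r_back = 0.30   # K/W, assumed die -> backside sink path (worse, since
                # heat has to get out through the package/PCB side)

r_both = 1.0 / (1.0 / r_front + 1.0 / r_back)

power_w = 500.0  # assumed total power of a stacked pair, W
print(f"front cooler only:  {power_w * r_front:.0f} K above ambient")
print(f"coolers both sides: {power_w * r_both:.0f} K above ambient")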
 