Ryzen processors codenamed 'Strix Point' will be released in 2024 - integrating Zen 5, RDNA 3+, and XDNA 2 architecture

erek
[H]F Junkie
Is Strix Point already known to the collective here?

“AMD's SVP of GPU Technology and Engineering R&D, David Wang, was next on stage to dive a little deeper into the newly official RDNA 3+ and XDNA 2 architectures. It really was just a little deeper, as he mainly talked about the current-gen Hawk Point. For example, perhaps the biggest thing we learned about RDNA 3+ was its name... Previously we have reported on sightings of RDNA 3.5 drivers, but now it looks like the GPU series also sometimes identified as GFX115X will be dubbed RDNA 3+. Without a major version number update, we expect only slight tweaks to the now-familiar RDNA 3 in the upcoming Hawk Point APUs.”


Source: https://www.tomshardware.com/pc-com...egrating-zen-5-rdna-3-and-xdna-2-architecture
 
Is Strix Point already known to the collective here?

Strix Point will likely be launched at the end of this year but not released until next year.

AMD is set to launch the Strix Point and Strix Halo APUs with Zen 5 by the end of 2024.

Strix Point: monolithic 4 nm APU, up to 12 (4P + 8E) Zen 5 cores, up to 16 RDNA 3.5 CUs, no Infinity Cache
 
Real dumb questions. Does this thread include new CPUs? Or just GPUs?

What does an APU do that is different from a CPU or GPU?
 
The problem with these APUs is cost. A $330+ APU plus a $250-300 ITX motherboard makes these things more expensive than a console, or even regular desktop parts and a discrete GPU. This is a far cry from the early AM4 days, when you could spend $175 combined and get something like a 2200G/2400G and an ITX board. It just doesn't make any sense to buy these outside of premium builds. It's just not going to be remotely competitive with, say, the glut of used PCs with Ampere GPUs, or even the upcoming PS5 Pro, and it's going to cost way more.

It's great for lifestyle millionaires who want to buy a $1200 mini PC for their YouTube channel or for decor purposes. Price/performance is going to be atrocious with these products.
 
The problem with these APUs is cost. A $330+ APU plus a $250-300 ITX motherboard makes these things more expensive than a console, or even regular desktop parts and a discrete GPU. This is a far cry from the early AM4 days, when you could spend $175 combined and get something like a 2200G/2400G and an ITX board. It just doesn't make any sense to buy these outside of premium builds. It's just not going to be remotely competitive with, say, the glut of used PCs with Ampere GPUs, or even the upcoming PS5 Pro, and it's going to cost way more.

It's great for lifestyle millionaires who want to buy a $1200 mini PC for their YouTube channel or for decor purposes. Price/performance is going to be atrocious with these products.
There are still some fairly small cases that fit µATX boards, and both the cases and the boards can be had for $100 or less. Wish there were more, though. The good ones are quite expensive and aren't all laid out as I'd like.
 
It's great for lifestyle millionaires who want to buy a $1200 mini PC for their YouTube channel or for decor purposes. Price/performance is going to be atrocious with these products.
Or in a laptop with very fast LPDDR5X.
 
Apparently there is a leak

Strix Point will continue to use a monolithic design, but the CPU will move from today's maximum of 8 Zen 4 cores and 16 threads to 12 Zen 5 cores and 24 threads. At the same time, the GPU will grow from 12 RDNA 3.1 CUs to 16 RDNA 3.5 CUs. NPU compute will increase to 50 TOPS, and the TDP range is 45 W to 65 W.

The huge Strix Halo APU is even more extreme. It adopts a chiplet design with two 8-core Zen 5 CPU dies, raising the count to 16 cores and 32 threads, plus an SoC die with up to 40 RDNA 3.5 CUs.

It is worth noting that the Strix Halo APU will add a 32 MB MALL cache to the SoC die. Its function is similar to the current Infinity Cache: it reduces memory-bandwidth usage and improves GPU performance.

A Taiwanese manufacturer revealed that Strix Halo's GPU performance is even comparable to the RTX 4060 Laptop GPU, and that its NPU compute can reach up to 60 TOPS.

https://twitter.com/hkepcmedia/status/1783444777574518845

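The bandwidth angle of that MALL claim is easy to sanity-check with a bit of arithmetic. A rough Python sketch, where the bus width, bandwidth figure, and hit rates are assumptions for illustration rather than leaked specs:

```python
# Rough illustration of why a MALL cache matters for an APU on LPDDR5X.
# The 256-bit bus and the hit rates are assumptions, not Strix Halo specs.
dram_bw_gbs = 256.0  # 256-bit LPDDR5X-8000: 8000 MT/s * 32 bytes = 256 GB/s

for hit_rate in (0.0, 0.3, 0.5):  # fraction of GPU requests served by the MALL
    # Only misses reach DRAM, so the same bus sustains a higher total request rate.
    effective_bw = dram_bw_gbs / (1.0 - hit_rate)
    print(f"MALL hit rate {hit_rate:.0%}: effective bandwidth ~ {effective_bw:.0f} GB/s")
```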
 

According to HKEPC, a 144-page document featuring specifications of AMD Ryzen CPUs in the Strix Point and Strix Halo series has leaked.

AMD is currently scheduled to deliver a keynote at Computex, possibly with updates on Zen 5 for the desktop and mobile series. Hopefully this will also include Ryzen 9000, aka the Strix series.

https://videocardz.com/newz/amd-ryz...a3-5-cus-lp5x-8000-memory-and-32mb-mall-cache
 
Makes me wonder if maybe AMD is moving over to an SoC approach instead of the standard BGA and socketed one they currently have for mobile...
 
Also, valuable die space on the chip has been needlessly taken up by Microsoft's requirement to allocate AI capabilities in the hardware
More than the previous XDNA AI engines in Phoenix? AMD started announcing this around 2021 and has had it in its mobile products for a while, before any Microsoft requirement and before ChatGPT was known to the public at large:

June 2022:
https://www.amd.com/en/newsroom/press-releases/2022-6-9-amd-details-strategy-to-drive-next-phase-of-growth.html#:~:text=AMD XDNA, the foundational architecture,and AI Engine (AIE).

  • AMD CDNA™ 3 architecture, which combines 5nm chiplets, 3D die stacking, 4th generation Infinity Architecture, next-generation AMD Infinity Cache™ technology, and HBM memory in a single package with a unified memory programming model. The first AMD CDNA 3 architecture-based products are planned for 2023 and are expected to deliver more than 5X greater performance-per-watt compared to AMD CDNA 2 architecture on AI training workloads.
  • AMD XDNA, the foundational architecture IP from Xilinx that consists of key technologies including the FPGA fabric and AI Engine (AIE). The FPGA fabric combines an adaptive interconnect with FPGA logic and local memory, while the AIE provides a dataflow architecture optimized for high performance and energy efficient AI and signal processing applications. AMD plans to integrate AMD XDNA IP across multiple products in the future, starting with AMD Ryzen™ processors planned for 2023.
  • To unify AI programming tools, AMD also announced a multi-generation Unified AI Software roadmap that will allow AI developers to program across its CPU, GPU, and Adaptive SoC product portfolio from machine learning (ML) frameworks with the same set of tools and pre-optimized models.
Wasn't AMD a bit of a pioneer in that regard? I doubt their planned 2024-2025 products would have failed to reach Microsoft's low minimum requirement.
 
I think the die size of the AI component would have been increased based on Microsoft's requirements.
 
I think the die size of the AI component would have been increased based on Microsoft's requirements.
They say up to 77 TOPS. I'm not sure exactly how it's measured and counted, but that's nearly double Microsoft's 40 TOPS requirement. How can we conclude that a minimum requirement announced in March 2024 impacted a product designed years ago that seems to blow past it by a lot?
 
They say up to 77 TOPS. I'm not sure exactly how it's measured and counted, but that's nearly double Microsoft's 40 TOPS requirement. How can we conclude that a minimum requirement announced in March 2024 impacted a product designed years ago that seems to blow past it by a lot?
What use case does AMD have for selling a 77 TOPS APU?

Putting 3D V-Cache on it would be a better use case, I think.
 
What use case does AMD have for selling a 77 TOPS APU?
A bit of a chicken-and-egg problem: you need the performance in a large userbase for the apps to come, but Phoenix had all of this and it wasn't to get some Microsoft sticker, was it? And the product and the increase of the XDNA 2 NPU were announced in 2023.

Right now, if they are good enough out of the box, it will be things like image and audio correction for video conferencing with lower battery usage (on the first gen, the GPU on the laptop, if it had one, was much better at this, I think), text-to-speech and the other way around. By 2026 it will be the third-party app ecosystem, like your local coding and other assistants.
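
For that third-party ecosystem, most of the plumbing sits at the runtime level. Here is a hedged sketch of how an app could prefer the NPU and fall back to the GPU or CPU via ONNX Runtime; the provider names are assumptions based on AMD's Ryzen AI software stack and may differ by SDK version, and "model.onnx" is just a placeholder:

```python
# Hedged sketch: prefer the NPU when a suitable execution provider is present,
# otherwise degrade to DirectML (iGPU/dGPU) or the CPU. Provider names are
# assumptions and may differ by Ryzen AI SDK / driver version.
import onnxruntime as ort

preferred = [
    "VitisAIExecutionProvider",  # assumed name of the XDNA NPU backend
    "DmlExecutionProvider",      # DirectML on the iGPU/dGPU
    "CPUExecutionProvider",      # always-available fallback
]
available = ort.get_available_providers()
providers = [p for p in preferred if p in available]

# "model.onnx" is a placeholder for whatever model the app ships.
session = ort.InferenceSession("model.onnx", providers=providers)
print("Inference will run on:", session.get_providers()[0])
```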

One that could be popular to run on the CPU instead of the GPU would be physics that is inferred instead of calculated, for stuff like SolidWorks or video games:

https://seyedhn.medium.com/real-time-physics-simulations-and-machine-learning-90e6e1ef1b3d
The neural network could emulate the physics up to 5,000 times faster than the physics solver. (That was in 2021; it's the kind of magnitude step needed to have everyone walking around with realistic muscle deformation, hair and clothing physics, real-time custom collision deformation of characters and the environment, etc.)
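
To make "inferred instead of calculated" concrete, here is a toy sketch: a learned one-step surrogate standing in for a numerical solver on a damped spring. It only illustrates the general idea from the article, not its actual method (which trains neural networks on much harder systems), and every constant in it is made up:

```python
import numpy as np

k, c, dt = 4.0, 0.3, 0.01  # spring constant, damping, time step (made-up values)

def solver_step(state):
    """One explicit-Euler step of a damped spring: x'' = -k*x - c*x'."""
    x, v = state
    a = -k * x - c * v
    return np.array([x + dt * v, v + dt * a])

# Training data: random states paired with the solver's next states.
rng = np.random.default_rng(0)
states = rng.uniform(-1.0, 1.0, size=(1000, 2))
targets = np.array([solver_step(s) for s in states])

# "Train" the surrogate. A linear least-squares fit suffices because this toy
# system is linear; the article uses neural networks to play the same role.
A, *_ = np.linalg.lstsq(states, targets, rcond=None)

def surrogate_step(state):
    return state @ A

# Roll both forward from the same initial condition and compare.
s_ref = s_sur = np.array([1.0, 0.0])
for _ in range(500):
    s_ref = solver_step(s_ref)
    s_sur = surrogate_step(s_sur)
print("solver   :", s_ref)
print("surrogate:", s_sur)  # tracks the solver closely
```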

Where does the idea come from that this time it was to satisfy some Microsoft marketing demand, especially when the numbers don't seem to match and the timing is so close?

Putting 3D V-Cache on it would be a better use case, I think.
Does it need to be one or the other (as the name says, it is stacked over the cores, not laid out on the die)? And 3D V-Cache CPUs still have an iGPU on them; Strix will probably get it like the previous gen, which had almost the same NPU. Was it just speculation, or something that you know?
https://www.techpowerup.com/cpu-specs/ryzen-9-7945hx3d.c3263
 
Was it just speculation, or something that you know?

Speculation on another forum. Not sure if this was reported anywhere.

Worse bit about the massive AIE is that in order to make room for it, AMD chopped off the original plan of slapping on a SLC. Something that would have been incredible for both CPU and iGPU performance. But yaaaaaaaaaaaay AI bubble.

https://forums.anandtech.com/thread...dge-ryzen-9000.2607350/page-346#post-41186038

Edit:

Reported on WCCFTech:

AI Craze May Have Nerfed AMD's & Intel's Upcoming Chips: Strix APUs Originally Had Big Cache Which Boosted CPU & iGPU Performance


https://wccftech.com/ai-craze-may-h...big-cache-which-boosted-cpu-igpu-performance/
 
Maybe there was internal talk of the 40 TOPS target well before it became public, but considering how long ago XDNA 2 was announced and how far above Microsoft's requirement for the tag they made those products, the AI craze changing the budget in 2023, rather than Microsoft, seems more likely.

Look at the discrete GPU increases in that regard (or the rumored 300 TOPS PlayStation), which have nothing to do with Microsoft. Up to 100 TOPS, and so much variation between the offerings when you only need 40 for the sticker (none are particularly close to the limit)... it does not seem like a decision being forced by it.
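
Quick back-of-the-envelope on those margins, using only the TOPS numbers quoted in this thread (leaks and claims, not confirmed specs):

```python
# NPU figures quoted in this thread versus the 40 TOPS Microsoft requirement
# discussed above. All values are leaks/claims, not confirmed specs.
requirement_tops = 40
npus = {
    "Strix Point (leak)": 50,
    "Strix Halo (leak)": 60,
    "'up to' figure quoted above": 77,
}
for name, tops in npus.items():
    print(f"{name}: {tops} TOPS = {tops / requirement_tops:.2f}x the requirement")
```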

How much inference you actually get out of that silicon budget probably made these NPUs really cheap in terms of die area.
 
Wasn't AMD a bit of a pioneer in that regard? I doubt their planned 2024-2025 products would have failed to reach Microsoft's low minimum requirement.
To put it in perspective, their CPU lineup naming:
AMD Ryzen AI 9 HX 170

Does it look like AMD was forced by Microsoft's rules for putting a sticker on laptops to add AI capability to their chips, or are they the ones pushing it the most? They called them Ryzen AI.
 