AMD Ryzen CPUs See Up To 27% Performance Boost In Cyberpunk 2077 With Unofficial SMT Fix

kac77


"PCGameshardware did just that and the results showed improvements of up to 27% with the new unofficial patch applied. Using the AMD Ryzen 7 7800X3D CPU, an 8 Core 3D V-Cache chip, the CPU saw an average FPS gain from 108.3 FPS to 137.9 FPS. This shows that there is still a major problem with the game and its optimization around 8-core AMD Ryzen CPUs which has yet to be addressed by the developers."
 
Link to the mod project.
https://github.com/maximegmd/CyberEngineTweaks

From the 1.05 notes.
[AMD SMT] Optimized default core/thread utilization for 4-core and 6-core AMD Ryzen(tm) processors. 8-core, 12-core and 16-core processors remain unchanged and behaving as intended. This change was implemented in cooperation with AMD and based on tests on both sides indicating that performance improvement occurs only on CPUs with 6 cores and less.
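To make the note concrete, the decision it describes boils down to something like the sketch below, assuming a Windows build and the Win32 processor-information API. This is my own illustration, not CDPR's code; a real version would also confirm via CPUID that it is running on an AMD Ryzen part, which I've skipped.

```cpp
// Sketch: pick a worker-thread count the way the 1.05 note describes --
// use SMT siblings only on 6-core-and-below parts, one worker per physical
// core otherwise. Illustration only; a real check would also verify the CPU
// vendor (AMD) via CPUID, which is omitted here.
#include <windows.h>
#include <thread>
#include <vector>
#include <cstdio>

// Count physical cores by walking the processor-core relations Windows reports.
static unsigned PhysicalCoreCount()
{
    DWORD len = 0;
    GetLogicalProcessorInformation(nullptr, &len);   // query required buffer size
    std::vector<SYSTEM_LOGICAL_PROCESSOR_INFORMATION> info(
        len / sizeof(SYSTEM_LOGICAL_PROCESSOR_INFORMATION));
    if (!GetLogicalProcessorInformation(info.data(), &len))
        return 0;

    unsigned cores = 0;
    for (const auto& entry : info)
        if (entry.Relationship == RelationProcessorCore)
            ++cores;
    return cores;
}

int main()
{
    const unsigned physical = PhysicalCoreCount();
    const unsigned logical  = std::thread::hardware_concurrency();

    // 6 cores and below: spread work across the SMT siblings too.
    // 8+ cores: stick to one worker per physical core, per the note.
    const unsigned workers = (physical != 0 && physical <= 6) ? logical : physical;

    std::printf("physical=%u logical=%u -> worker threads=%u\n",
                physical, logical, workers);
}
```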
 
Weird to see this going around again. Cyberpunk version 1.05 was December 2020 and they're up to 1.63 now; the hex edit is well-known, and CyberEngineTweaks does this with a mod menu toggle.
 
Weird to see this going around again. Cyberpunk version 1.05 was December 2020 and they're up to 1.63 now; the hex edit is well-known, and CyberEngineTweaks does this with a mod menu toggle.
I thought it was old news as well, but you know how things get dug up, and suddenly something from 4 months ago or longer is at the top of your "new" news feed.
It happens, and at least it gives me a reason to post a link to the CyberEngine project.
 
Only problem with this is Cyberpunk isn't really a seriously FPS-sensitive game. I played through it at 50-60 and it was fine; it was either lower settings or putting up with a bit less FPS. In single-player games I tend to go for bling over FPS as long as I have enough FPS. Of course I was kind of doing the "can't do this in most other games" playthrough -- cyberdeck, quickhacks, stealth, etc. You'll want more FPS if you play it like an FPS.
 
Only problem with this is Cyberpunk isn't really a seriously FPS-sensitive game. I played through it at 50-60 and it was fine; it was either lower settings or putting up with a bit less FPS. In single-player games I tend to go for bling over FPS as long as I have enough FPS. Of course I was kind of doing the "can't do this in most other games" playthrough -- cyberdeck, quickhacks, stealth, etc. You'll want more FPS if you play it like an FPS.
Very true, Cyb is not exactly a "reflexes shooter". That said, I've still experienced a LOT of CPU bottlenecking even at lower framerates. The texture loading (Streaming System) and crowds/traffic (Community System) can be major CPU hogs, especially with mods. I created and maintain the "Realistic Traffic Density" mod (on Nexus) and lemme tell ya, boosting NPC density will murder a CPU. Same with using lots of high-rez textures; all the game textures are compressed into freaking oblivion for distribution, and decompression eats up precious CPU cycles during in-game asset streaming.
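For anyone wondering why streaming shows up as CPU load: every compressed blob that comes off disk has to be decoded on a worker thread before the GPU can touch it, and more NPCs or higher-rez textures just means more blobs in flight. Here's a toy C++ sketch of that pattern; DecompressTexture and the blob/texture types are placeholders I made up, not anything out of REDengine.

```cpp
// Toy illustration of CPU-side asset streaming: decode work fans out across
// worker threads, and the main thread stalls if decompression can't keep up.
#include <future>
#include <vector>
#include <cstdint>

struct CompressedBlob { std::vector<std::uint8_t> bytes; };
struct Texture        { std::vector<std::uint8_t> pixels; };

// Placeholder: a real engine would run an actual codec here (the part that
// eats CPU cycles during streaming); this just copies bytes through.
Texture DecompressTexture(const CompressedBlob& blob)
{
    Texture t;
    t.pixels.assign(blob.bytes.begin(), blob.bytes.end());
    return t;
}

std::vector<Texture> StreamIn(const std::vector<CompressedBlob>& blobs)
{
    std::vector<std::future<Texture>> jobs;
    jobs.reserve(blobs.size());
    for (const auto& blob : blobs)            // one async decode job per blob
        jobs.push_back(std::async(std::launch::async, DecompressTexture, std::cref(blob)));

    std::vector<Texture> ready;
    ready.reserve(jobs.size());
    for (auto& job : jobs)
        ready.push_back(job.get());           // waits here if decode falls behind
    return ready;
}

int main()
{
    std::vector<CompressedBlob> blobs(8, CompressedBlob{std::vector<std::uint8_t>(1024, 0xAB)});
    auto textures = StreamIn(blobs);
    return textures.size() == blobs.size() ? 0 : 1;
}
```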
 
This kind of shit pisses me off. I think it was Skyrim where, when someone forced the game to use SSE instructions, there was a noticeable performance boost. Why developers don't enable these seemingly easy performance tweaks is beyond me. Their priorities are messed up.
 
This kind of shit pisses me off. I think it was Skyrim where, when someone forced the game to use SSE instructions, there was a noticeable performance boost. Why developers don't enable these seemingly easy performance tweaks is beyond me. Their priorities are messed up.
There are downsides: it fixes things for more than 8 cores but breaks it for fewer.

Game engines are hard, and scaling them is tricky: design around too many cores and you make things bad for anybody with fewer. And there are a lot of 6/12 and 4/8 setups out there. Make something that works there and you are good for more, but design something that needs 8/16 and you don't necessarily scale down.

I mean, if they wanted to seriously boost performance they would all be using AVX512 for handling textures and physics, but what sort of systems would that limit them to?
 
This kind of shit pisses me off. I think it was Skyrim where, when someone forced the game to use SSE instructions, there was a noticeable performance boost. Why developers don't enable these seemingly easy performance tweaks is beyond me. Their priorities are messed up.
Some versions of Cyberpunk (mostly older ones) use AVX instructions and get slightly higher performance when CPU-bound at the cost of higher CPU power draw. They keep removing AVX though, I guess for compatibility with old ass CPUs.
 
There are downsides: it fixes things for more than 8 cores but breaks it for fewer.

Game engines are hard, and scaling them is tricky: design around too many cores and you make things bad for anybody with fewer. And there are a lot of 6/12 and 4/8 setups out there. Make something that works there and you are good for more, but design something that needs 8/16 and you don't necessarily scale down.

I mean, if they wanted to seriously boost performance they would all be using AVX512 for handling textures and physics, but what sort of systems would that limit them to?
I think the only consumer CPUs that have AVX512 are the 11th gen Intel and AMD Zen 4 chips. The P-cores in 12th & 13th gen can do it, but the E-cores can't, so they disabled it for the entire chip. There are hacks to enable it on earlier Alder Lake steppings if all the E-cores are disabled. Other than that, Xeons and the now-defunct X-series (HEDT) have supported it since Skylake. So I think it'll be a while, especially if Intel doesn't add support for it to E-cores. They might though, since Xeons have had it for a while and they have all-E-core CPUs in the pipeline.

AVX2 is a different story. Not using AVX2 in an AAA title would annoy me. Support for that goes all the way back to Haswell & Excavator. OTOH, people are still running Sandy Bridge and Ivy Bridge chips. I'm thinking we'll start seeing AVX2 required for games when Win10 goes EOL. I'm pretty sure all the chips that Win11 officially supports have it.
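If a studio did want to ship AVX2 or AVX512 paths without cutting off older chips, runtime dispatch is the usual answer: check the CPU once and pick the kernel then. A minimal sketch, assuming a GCC/Clang build (an MSVC build would use __cpuidex and test the feature bits by hand); the kernel functions are placeholders, not any real game's code.

```cpp
// Runtime SIMD dispatch: the binary carries all three paths and chooses one
// on the machine it actually runs on, so older CPUs still get the fallback.
#include <cstdio>

void SimdKernelAVX512() { std::puts("AVX-512 path");  } // placeholder kernels
void SimdKernelAVX2()   { std::puts("AVX2 path");     }
void SimdKernelSSE2()   { std::puts("SSE2 fallback"); }

int main()
{
    // __builtin_cpu_supports queries CPU features detected at startup (GCC/Clang).
    if (__builtin_cpu_supports("avx512f"))
        SimdKernelAVX512();
    else if (__builtin_cpu_supports("avx2"))
        SimdKernelAVX2();
    else
        SimdKernelSSE2();
}
```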
 
As for AVX2 support, many games don't use AVX2. Optimising with AVX2 is often unnecessary.

An easy way to tell if a game uses AVX is to set an AVX offset in the BIOS of an Intel machine and watch the clock speeds while playing.
 
There are downsides: it fixes things for more than 8 cores but breaks it for fewer.

Game engines are hard, and scaling them is tricky: design around too many cores and you make things bad for anybody with fewer. And there are a lot of 6/12 and 4/8 setups out there. Make something that works there and you are good for more, but design something that needs 8/16 and you don't necessarily scale down.

I mean, if they wanted to seriously boost performance they would all be using AVX512 for handling textures and physics, but what sort of systems would that limit them to?
I can understand that, but then why not add detection to the game so it knows how many cores you have and configures itself appropriately?
 
Who even plays Cybersuck 2077 anymore anyways

After wasting money and a hundred plus hours the game is an abysmal toilet flush gone wrong
 
I can understand that, but then why not add detection to the game so it knows how many cores you have and configures itself appropriately?
They would be the first if they did. Very few programs do that unless you are the one compiling it.
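The detection itself is a few lines; as noted, the tricky part is deciding what to do with the number. Here's a made-up illustration of what "configure appropriately" could look like at startup; the buckets and job counts are invented, not how Cyberpunk actually does it.

```cpp
// Pick job-system sizes from the logical thread count at startup.
// The thresholds and numbers below are illustrative only.
#include <thread>
#include <cstdio>

struct PerfConfig {
    unsigned workerThreads;   // general job-system workers
    unsigned streamingJobs;   // concurrent asset-decode jobs
};

static PerfConfig PickConfig()
{
    const unsigned logical = std::thread::hardware_concurrency(); // 0 if unknown
    if (logical >= 16) return {12, 4};   // 8c/16t and up: leave headroom for render/audio threads
    if (logical >= 12) return {8, 3};    // 6c/12t
    if (logical >= 8)  return {6, 2};    // 4c/8t
    return {4, 1};                       // smaller, or detection failed
}

int main()
{
    const PerfConfig cfg = PickConfig();
    std::printf("workers=%u streaming=%u\n", cfg.workerThreads, cfg.streamingJobs);
}
```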
 