Is the 7950X3D or 13900K better for Unreal Engine development/Blueprint?

Hello,

This might be a bit of an unusual question for this forum, but I thought someone here might be able to help me. I’m building a new high-end PC for productivity and some gaming, and I’m really struggling to pick between the 13900K and the 7950X3D. I’ve read tons of reviews from places like Tom’s Hardware, Puget Systems, and more, and after all that I’m still sitting firmly on the fence and just can’t decide. I thought the deciding factor that pushes me over the edge might come down to whichever one performs slightly better for Unreal Engine development.

From what I understand, the 13900K tends to be better in productivity-related applications, especially single-threaded ones, but there are a few situations where the 7950X3D beats it, such as compiling shaders in the case of UE.

So I guess I’m wondering whether the 13900K or the 7950X3D is the better fit for Unreal Engine development tasks. I’m primarily an artist, so I’m mostly interested in art-related tasks in UE. But also, what about Blueprint? From the little I understand about the inner workings of Unreal Engine, Blueprint is single-threaded. Would that mean the CPU that’s best at single-threaded workloads (the 13900K) wins out here? Does AMD’s 3D V-Cache benefit UE development (other than compiling shaders) or Blueprint scripts at all? For example, if I wanted to spawn a ton of complex actors, each with its own logic calculations, would the 13900K or the 7950X3D win out? I realize that’s super specific, but it’s just an example of something I might do. (And before anyone says it, yes, I understand that pushing a game to its limits on the best hardware will severely limit the audience that might play my games/experiences, but just indulge me for a moment here.)
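To be concrete, this is roughly the kind of test I have in mind (a made-up C++ sketch, not code from a real project; the class name and the numbers are just for illustration): spawn a few thousand actors that each do some per-frame work on the game thread, which as I understand it is also where Blueprint logic runs.

```cpp
// Hypothetical stress test (names and numbers invented for illustration):
// spawn a lot of actors that each do some per-frame work. Everything here
// runs on the single game thread, the same thread Blueprint executes on.

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "StressActor.generated.h"

UCLASS()
class AStressActor : public AActor
{
    GENERATED_BODY()

public:
    AStressActor()
    {
        // Tick every frame, so total cost scales linearly with actor count.
        PrimaryActorTick.bCanEverTick = true;
    }

    virtual void Tick(float DeltaSeconds) override
    {
        Super::Tick(DeltaSeconds);

        // Stand-in for "its own logic calculations": throwaway math plus a small move.
        float Accum = 0.f;
        for (int32 i = 0; i < 1000; ++i)
        {
            Accum += FMath::Sin(DeltaSeconds * i);
        }
        AddActorWorldOffset(FVector(0.f, 0.f, FMath::Sin(Accum) * 0.1f));
    }
};

// Somewhere with access to the world (e.g. a game mode's BeginPlay), spawn a few thousand:
void SpawnStressActors(UWorld* World)
{
    for (int32 i = 0; i < 5000; ++i)
    {
        const FVector Location(FMath::FRandRange(-5000.f, 5000.f),
                               FMath::FRandRange(-5000.f, 5000.f),
                               200.f);
        World->SpawnActor<AStressActor>(Location, FRotator::ZeroRotator);
    }
}
```

In practice I’d probably build something like this in Blueprint rather than C++, but I assume the CPU question is the same either way, since it all ends up on the game thread.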

Any suggestions?

You’ve done your research. AMD comes out ever so slightly on top, but then again, it really depends on the specific task you’re focusing on.

Any suggestions?

My take on this is that you cannot actually go wrong here. If you can wait a month or two for Intel’s refresh, the 14th gen is around the corner. Expect 10-15% extra performance from a very hot 14900K that could potentially turbo up to 6 GHz.

But also, what about Blueprint? From the little I understand about the inner workings of Unreal Engine, Blueprint is single-threaded. Would that mean the CPU that’s best at single-threaded workloads (the 13900K) wins out here? Does AMD’s 3D V-Cache benefit UE development (other than compiling shaders) or Blueprint scripts at all?

I can’t imagine a scenario where you’d find an observable or measurable difference that isn’t utterly negligible.


For Blueprint responsiveness/snappiness and compiling, nothing beats a top-of-the-line M.2 SSD in the 5,000-7,000 MB/s range. I used to have huge spaghetti Blueprints on a hard disk; even switching to a regular SATA SSD at ~500 MB/s was already a huge improvement, but putting Unreal Engine and the project you’re working on onto a modern M.2 NVMe drive connected directly to the motherboard and capable of 5,000-7,000 MB/s makes for a much better, more enjoyable experience. The same is true for asset loading and streaming in the editor. A 7800X3D is more than fine. More than anything else, you want read and write speed for comfort of use and for the editor’s loading and compiling speed. (I also recommend 64 GB of RAM.)

Because when you test, you do a lot of trial and error while editing Blueprints, and it’s extremely frustrating to have a sluggish editor with slow compiling and slow responsiveness. I’ve experienced a lot of that, and a lot of wasted time because of it, mostly down to good old hard drives. Even Samsung EVO SSDs at about 500 MB/s are slowly becoming “meh” tier for working with UE 5.0+ versions (they’re totally fine with UE4).
