UE4 use is sporadic at best: usually when it's required by a client, paid for, or on a legacy project.
We rely very heavily on C++; in fact, we only really run things from source in custom-built engines, so it's basically C++ only.
Blueprint has a few uses on smaller things and prototyping.
For instance, you wouldn't go and write a custom class just to get one actor to turn and face another.
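For comparison, here is roughly what that looks like in plain UE4 C++. This is a minimal sketch only; AMyActor and FaceTarget are hypothetical names, and it's exactly the kind of trivial function a couple of Blueprint nodes replace.

```cpp
// Minimal sketch: AMyActor is a hypothetical AActor subclass, FaceTarget a made-up helper.
#include "GameFramework/Actor.h"
#include "Kismet/KismetMathLibrary.h"

void AMyActor::FaceTarget(const AActor* Target)
{
	if (!Target)
	{
		return;
	}

	// Build a rotation that points from this actor toward the target.
	const FRotator LookAt = UKismetMathLibrary::FindLookAtRotation(
		GetActorLocation(), Target->GetActorLocation());

	// Keep the actor upright and only yaw toward the target.
	SetActorRotation(FRotator(0.f, LookAt.Yaw, 0.f));
}
```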
In general, the engine has really just gotten worse these past 3 or 4 years.
It started to dip around 4.22, with nonsensical updates that added features which sounded good in theory but were never fully realized (RTV, for one).
During COVID it seriously fizzled down into the most inconsistent, clunky mess of bloatware, with a side of real chaos thrown in by the nonsensical switch of physics engines under the same nomenclature.
By now, the latest engine can't even run a Vulkan project properly. There have been countless core issues and attempts at fixing them that I don't think went anywhere… at least judging by the current trend of forum posts hammering on those very same issues.
Nanite could have been a welcome addition, but most people turn it off because it makes things worse or non-functional.
Lumen seems much the same, I guess, since shadows get blurred and other issues crop up with it enabled.
Compared to direct competitors, the ray tracing is trash at best; CryEngine does it a heck of a lot faster by handling it volumetrically
(though Cry has other shortcomings, like needing to code a bunch of things yourself, so it's not for everyone).
On topic, though not strictly engine related…
They doubled down on MetaHumans, which now have a galaxy's worth of bones and weird control rig setups.
They created some sort of "animator" tool that apparently captures your face and puts the animation on a MetaHuman via an iPhone (I only really saw the demo, so I can't comment much beyond this: everything that was shown already existed and was possible with some modeling legwork).
They added more fluff to the engine and called it AI-powered, when in reality the only thing resembling AI in it is the blackboard/behavior tree you build yourself (see the sketch below)… not that those level replication things don't look cool on tape, btw; the fact is they'll never work on a real project, and they all know it.
(And sure, their idea is that it gets you close enough to start editing, which is somewhat fair. I'd rather just do the actual work, but that's just me.)
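For what it's worth, that hand-built "AI" boils down to things like this: a behavior tree task you write yourself that pulls a target off the blackboard. A minimal sketch under assumed names (UBTTask_MoveToTarget and the "TargetActor" key are mine, not the engine's):

```cpp
// Sketch of a hand-rolled behavior tree task. UBTTask_MoveToTarget and the
// "TargetActor" blackboard key are hypothetical; you set both up yourself.
#pragma once

#include "CoreMinimal.h"
#include "AIController.h"
#include "BehaviorTree/BTTaskNode.h"
#include "BehaviorTree/BehaviorTreeComponent.h"
#include "BehaviorTree/BlackboardComponent.h"
#include "GameFramework/Actor.h"
#include "BTTask_MoveToTarget.generated.h"

UCLASS()
class UBTTask_MoveToTarget : public UBTTaskNode
{
	GENERATED_BODY()

public:
	virtual EBTNodeResult::Type ExecuteTask(UBehaviorTreeComponent& OwnerComp,
	                                        uint8* NodeMemory) override
	{
		AAIController* Controller = OwnerComp.GetAIOwner();
		UBlackboardComponent* Blackboard = OwnerComp.GetBlackboardComponent();
		if (!Controller || !Blackboard)
		{
			return EBTNodeResult::Failed;
		}

		// Read the target that the rest of the tree (which you also built) stored earlier.
		AActor* Target = Cast<AActor>(Blackboard->GetValueAsObject(TEXT("TargetActor")));
		if (!Target)
		{
			return EBTNodeResult::Failed;
		}

		// Hand off to the navigation system.
		// (A real task would return InProgress and finish when the move completes;
		// kept simple here for the sketch.)
		Controller->MoveToActor(Target);
		return EBTNodeResult::Succeeded;
	}
};
```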
Fortunately (or unfortunately, if that's your viewpoint), I just never really shared any of your optimism about this engine getting better…
Most projects (all of which use 100% custom made assets anyway) were migrated out.
I don't really know of anyone who has published at all (indie, of course) on anything newer than 4.25 in the past three or four years.
I do know of several who cannot publish if they update their project(s), due to the endless slew of bugs it introduces…
One thing the engine is good for at the moment is piping motion capture into it and using it as a studio/recording spot.
This is only because the face capture layers on top of any Live Link input and is free to use, while also letting you view an actor's actual final facial performance. (It's only as good as the poses you rigged anyway, but it's a step up from most DCCs: they don't have the same low precision threshold the engine does, so they render the expression much better than the engine will, which leads you to believe you have something when in fact you really don't. Running it in engine, you are at least sure it's as good as it can get within it.)…