Why C++ for Unreal 4?

To worm:

[]
I’m not following the discussion closely but this only shows that performance is critical. It matters to an extent that Unity is going out of their way to get C# converted to C++.
[/]

Well, it’s interesting, but the IL2CPP initiative actually started as a way to make a native port of the Unity Player for the web, using WebGL.
So, the main target here, as you can see, is actually Javascript engines - and that is all they are currently willing to confirm.
The performance-benchmark was really out of any context - it was a disjointed implementation of the Mandelbrot set - so you can’t extrapolate anything practical out of it, except for the theoretical capability to convert C# code into C++ code that, in certain situations, would yield similar performance characteristics.
They are a VERY LONG way from the grand vision of a fully stand-alone implementation that can compile fully-working games to C++ for ANY platform.

Really, if you look up what Microsoft is doing with “.Net Native”, and especially WHY they are doing it, you would quickly discover that it has nothing to do with performance per se. IF they get a performance increase, it’s a nice side-effect, but it’s not the main goal.

What IS the main goal?

Well, there are many, and it depends.

When it comes to Microsoft, their main worries are:

  1. Launch-time : Some of their Windows Phone apps currently take “minutes” to load up…
  2. Portability : A lot of platforms don’t have a .Net Framework on them, and even those that do have different combinations of versions of it, so they want to be able to package their apps with the relevant framework code, in the relevant version, but ONLY that, so the app wouldn’t balloon in size.

When it comes to Unity, the main reasons are:

  1. Upgrading .Net-version support in a portable way : There is virtually infinite chatter around the Unity community about how outdated the version of .Net that they support is (I know it’s actually Mono - I’m talking feature-set version). Basically, it’s a “LEGAL” issue now… Mono used to be Open-Source, but now most of the development is done as a proprietary solution by a company named Xamarin. They essentially “hijacked” this Open-Source project and “monopolized” it by hiring all of its main contributors to work on their flavor of it… So Unity Technologies basically got royally screwed… The version that UT forked all those years ago was under a different license, and they couldn’t keep it up-to-date past a certain version when the license model changed - it’s a very long and complex/messy story, so I’m paraphrasing here. The bottom line is: the version of .Net that they are using (again, in terms of feature-set) is roughly 8 years old now (it’s mostly 2.0, with some specific 3.5 features like Generics), and their community has been raging and roaring about that for a few years now…
    So they need another route for cross-platform support going forward - one that doesn’t rely on the Mono project in its current evolution-route…
    THIS is the BIG ISSUE here - anybody claiming otherwise has a lot of catching-up to do…
  2. Packaging : The main problem they currently have is that they basically have to include all of their engine (and their Mono implementation with it) for every App that is built… What they hope to accomplish long-term is the ability to “tree-shake” away all of the code that is not being used - and this is something that most C++ compilers already do very well. So by converting the C# code into C++ in the build process, they get to benefit from this ability and shrink down their build artifacts dramatically - which is important for “mobile”, but actually CRITICAL for “web”…

By breaking the dependency between the Editor’s .Net version and the target platform’s support, they finally get to upgrade their supported .Net feature-set.
But as the dude said in the video, “…the .Net upgrade is contingent upon the success of IL2CPP for all supported platforms…”.
Meaning, if the IL2CPP initiative doesn’t pan-out, Unity users will not get their long-awaited upgrade…

Nowhere in this entire fiasco is “performance” ever mentioned as a major-factor…
So you can’t claim that they currently have performance-issues with the current C# solution - that is said nowhere(!).
It is a side-effect and a “potential” benefit, for “some” cases, and that’s how they are treating it - it’s all they’re willing to commit to, for now.

[]

I dislike that sentiment. It’s too often out of context.
It’s usually rehashed by people:

  1. who don’t care enough about performance
  2. who are in an environment where they don’t have to care
  3. who conflate [caring about optimizing, learning to optimize and profile] with the notion of “premature” (needless) optimization
[/]

I have seen many treatments on this phrase - yours is the first I encounter.
It is unanimously accepted and understood that people tend to “intuit” about performance factors in their code “before” they test it, and it is also very well-known and acknowledged that MOST intuitions in this area are evidently wrong most of the time - and in either direction(!)
So, really, I have no idea where you’re getting this notion from…
If someone uses this phrase to trick himself into justifying being lazy about profiling/optimizing, then that someone completely missed the point of the phrase - and in any case probably has poor personal integrity…
To take such an example, extrapolate from it, and generalize it to the point of diluting its message entirely - I don’t even know what to label such an attitude… It looks suspiciously reminiscent of somebody just wanting to argue for the sake of arguing…

[]
We live in a world where people use different GPUs and CPUs. Only a 10% difference in frames can make the difference between enjoyable vs. unplayable.
[/]

Quite right - and 9 times out of 10 this 10% difference would come from non-game-logic areas…
So, you see… It’s not about just “performance” in-the-abstract, it’s about the question “where are most of the bottlenecks most of the times”.
And when it relates to scripting, you would be very hard-pressed to demonstrate that “this is where most of the bottlenecks are most of the times”…
Which brings us back to the importance of profiling, and the dangers of premature worries - which most of the time have everything to do with our poor intuitions, and nothing to do with reality.

Ultimately, it’s all trade-offs either way - the smart thing to do is make the best ones.
In most cases, you’d get much more game improvement (and a better game by the deadline) by being able to make many more iterations on your ideas, and in most cases the price you pay in run-time performance is negligible and completely justified - most run-time costs would come from other areas most of the time. If you happen to be developing a game that is somehow outside of this curve, then you’re out of luck - no rational game-engine developer would optimize for your use-case. You’re what’s called “an outlier” in statistics, and you’ll just have to live with that…