To :
I have recently watched many lectures about C++ from many sources - specifically, talks by a guy named Bjarne Stroustrup, who happens to have invented C++…
Here is one of them
You know, it is interesting how little he shares your idealized view of C++. In fact, most of what he says points out C++’s weaknesses and dangers.
Here are some examples:
A very simple example is
vector<vector<double>> v;
In C++98, this is a syntax error because >> is a single lexical token, rather than two >s each closing a template argument list. A correct declaration of v would be:
vector< vector<double> > v;
I consider this an embarrassment. There are perfectly good reasons for the current rule and the evolution working group twice rejected my suggestions that this was a problem that was worth solving. However, those reasons are language technical and of no interest to novices (of all backgrounds – including experts in other languages). Not accepting the first (and most) obvious declaration of v wastes time for users and teachers.
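(Worth noting: the committee did eventually fix this. Since C++11, >> in that position is parsed as two closing angle brackets, so the obvious declaration compiles as-is:

vector<vector<double>> v; // OK in C++11 and later
)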
The error messages that come from slight errors in the use of a template, such as a standard library algorithm, can be spectacularly long and unhelpful. The problem is that the template code’s expectations of its template arguments are implicit. Consider find_if():
template<class In, class Pred>
In find_if(In first, In last, Pred pred)
{
    while (first != last && !pred(*first)) ++first;
    return first;
}
Here, we are making a lot of assumptions about the In and Pred types. From the code, we can see that In must somehow support !=, *, and ++ with suitable semantics and that we must be able to copy In objects as arguments and return values. Similarly, we can see that we can call a Pred with an argument of whichever type * returns from an In, and apply ! to the result to get something that can be treated as a Boolean. However, that’s all implicit in the code. The standard library carefully documents these requirements for forward iterators (our In) and predicates (our Pred), but **compilers don’t read manuals**.
Try this error and see what your compiler says:
find_if(1,5,3.14); // errors
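For contrast, here is a sketch of what the same algorithm looks like with C++20 concepts, where the requirements become part of the signature (this is my own illustration, not something from the talk):

#include <iterator>

// A sketch using C++20 concepts: the requirements on In and Pred are
// now stated in the signature, so a bad call fails at the call site
// with a message naming the violated constraint.
template<std::input_iterator In, std::indirect_unary_predicate<In> Pred>
In find_if(In first, In last, Pred pred)
{
    while (first != last && !pred(*first)) ++first;
    return first;
}

// find_if(1, 5, 3.14); // now rejected up front: int is not an input_iterator

With the constrained version, the erroneous call above produces a short message about the unsatisfied concept, instead of pages of instantiation noise.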
Another “embarrassment” is that it is legal to copy an object of a class with a user-defined destructor using a default copy operation (constructor or assignment). Requiring user-defined copy operations in that case would eliminate a lot of nasty errors related to resource management. For example, consider an oversimplified string class:
class String {
public:
    String(char* pp) : sz(strlen(pp)), p(new char[sz+1]) { strcpy(p, pp); }
    ~String() { delete[] p; }
    char& operator[](int i) { return p[i]; }
private:
    int sz;
    char* p;
};
void f(char* x)
{
    String s1(x);
    String s2 = s1;    // default copy: s2.p now points to the same buffer as s1.p
}
After the construction of s2, s1.p and s2.p point to the same memory; that memory will be deleted twice, probably with disastrous results. This problem is obvious to the experienced C++ programmer, who will provide proper copy operations or prohibit copying. However, it can seriously baffle a novice and undermine trust in the language.
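For completeness, here is a sketch of the conventional fix (my addition, not from his example) - follow the “rule of three” and give String deep-copy operations, so each object owns its own buffer:

#include <cstring>

class String {
public:
    String(const char* pp) : sz(strlen(pp)), p(new char[sz+1]) { strcpy(p, pp); }
    ~String() { delete[] p; }

    // Deep copy: allocate a fresh buffer instead of sharing the pointer,
    // so the double delete in f() can no longer happen.
    String(const String& other) : sz(other.sz), p(new char[sz+1]) { strcpy(p, other.p); }
    String& operator=(const String& other)
    {
        if (this != &other) {
            char* np = new char[other.sz + 1];  // allocate first, for exception safety
            strcpy(np, other.p);
            delete[] p;
            p = np;
            sz = other.sz;
        }
        return *this;
    }

    char& operator[](int i) { return p[i]; }
private:
    int sz;
    char* p;
};

Alternatively, you can simply prohibit copying (since C++11, by declaring both copy operations as = delete).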
He is very much against the kind of idealistic attitude you have shown here, and is very vocal about that view. I am also reading one of his books these days - quite an old book, his favorite; you might like it. It’s called “The Design and Evolution of C++” (a.k.a. “D&E” for short).
Here are some quotes:
[quote]
Programming Languages
Several reviewers asked me to compare C++ to other languages. This I have decided against doing. Thereby, I have reaffirmed a long-standing and strongly held view: Language comparisons are rarely meaningful and even less often fair. A good comparison of major programming languages requires more effort than most people are willing to spend, experience in a wide range of application areas, a rigid maintenance of a detached and impartial point of view, and a sense of fairness.
[/quote]
[quote]
Many C++ design decisions have their roots in my dislike for forcing people to do things in some particular way. In history, some of the worst disasters have been caused by idealists trying to force people into “doing what is good for them.” Such idealism not only leads to suffering among its innocent victims, but also to delusion and corruption of the idealists applying the force. I also find idealists prone to ignore experience and experiment that inconveniently clashes with dogma or theory.
[/quote]
In short, his attitude towards C++ is much more modest and pragmatic than the attitude you have demonstrated here.
As for the performance argument, it is probably the most irrelevant one in this context. Epic’s own people have repeatedly stated that Blueprints are around 10X slower than native C++, and yet they still recommend them for most scripting tasks - that is exactly what Blueprints were developed for (and it was a very large and substantial investment on Epic’s part). So Epic does not share your worries about performance. Nor does Unity Technologies, Crytek, or any game-engine development company in the history of gaming.

Virtually every AAA game that has been successful in the last decade has had some sort of scripting environment embedded in it - proprietary or commercial, home-grown or open-standard, every game has one. Period. Including the most performance-intensive ones. And there is a reason for that: it enables quick, iterative workflows, which are the foundation of any successful creative process. That alone rules out C++, if only for compile-time reasons. Some companies went so far as to build their own integration with languages like “D”, which compiles orders of magnitude faster with negligible performance cost (if any), and endured the pain of maintaining that integration solely for the benefit of fast iteration - it IS THAT important.

The quality of a game is measured on many more axes than raw performance. Since game development is a creative, collaborative process, developers don’t really know what works well until they try it out. It is more of a “discovery” process than an “engineering” one, and in that space, iteration time is KING. The more iterations a team can perform in the time they have before launch, the better the game will end up being - it’s that simple.
And as for your numbers about C++ vs C#, no one is contesting those - they were measured, and they are facts.
The question is NOT:
“Is this true?”
The question is rather:
“Is this relevant?”
I’ll give you a simple example that demonstrates how, in some cases, performance is irrelevant:
Say you want to play Quake3 Arena. But not just play it by yourself - you want to arrange a tournament of 100 people, and you are in charge of building the machines that will host the event. You are evaluating which graphics card to buy for these machines.
You have 2 choices, and both cost exactly the same:
- A card that takes 100W, and runs Quake3 at 300fps.
- A card that takes 200W, and runs Quake3 at 600fps.
Which would you choose?
If going by raw-performance alone, the answer is obvious - you pick option 2.
But wait - can you even see the difference between them?
Let’s say that you have super-fast monitors connected to these machines - they have a 200Hz refresh rate.
Is that enough?
Well, even with option 1, you still cannot see all the frames that card cranks out…
What do you gain by buying option 2?
Absolutely nothing.
What do you gain by buying option 1?
A much lower electricity bill…
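To put a number on it, here is a quick back-of-the-envelope calculation (the event length and electricity price are made-up assumptions, purely for illustration):

#include <iostream>

int main()
{
    // Assumed figures - illustrative only:
    const int machines = 100;          // tournament machines
    const double extraWatts = 100.0;   // 200W card minus 100W card
    const double hours = 48.0;         // assumed total running time of the event
    const double pricePerKWh = 0.15;   // assumed electricity price in $/kWh

    // Both cards already exceed the 200Hz the monitors can display,
    // so the extra watts of option 2 buy nothing visible.
    const double extraKWh = machines * extraWatts * hours / 1000.0;
    std::cout << "Wasted energy: " << extraKWh << " kWh, "
              << "wasted money: $" << extraKWh * pricePerKWh << "\n";
    // Output: Wasted energy: 480 kWh, wasted money: $72
}

Small per machine, but it adds up across 100 machines - and you got absolutely nothing in return.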
You could also take your car analogy and just put it in this kind of setting:
You get the picture?
As for C# performance in Unity’s case, you should watch this lecture:
https://www…com/
*** SPOILER: At minute 33 he shows “the numbers” - native C++ scored 31, “converted” C# scored 33 (lower is better).
Here is a time-coded link to the benchmark-charts:
https://www…com/#t=2005
And a note on that: the main reason they are doing this is NOT performance, but portability and launch time - similar to why Microsoft is doing something comparable with their “.NET Native” initiative.
For VegasRich:
[quote]
I HATE macros. I can’t even count the times that some jack *** used a macro and didn’t add a brace, or some other little bull **** thing and it lead to hours of debugging hell.
[/quote]
[quote]
I work in C#, C++, Lua, Java, or whatever gets the job done fastest. Any language is only as productive as the coder using it - if you’re unproductive then you’re unproductive. Understand: the problem is YOU.
[/quote]
Given that “debugging” is a major part of programmer productivity, this makes your two arguments logically inconsistent with each other…
Bottom line: performance is NOT free - it has “costs”.
In most cases, the costs are increased complexity and reduced productivity.
No serious systems engineer would claim otherwise.
This is why you MUST weigh the costs against the benefits, to find out whether it is justified.
In most cases it isn’t.
That’s why there is a famous phrase:
“Premature optimization is the root of all evil”
Look it up…