How do bigger projects deal with C++ compile times?

I use a continuous integration (CI) service (Jenkins and Vagrant) that is plugged into my GitLab server. As we push to the various branches, the CI server does the compile and runs some tests. We normally push our branches when we are satisfied we have a stable feature.

Most of the work is done inside Visual Studio and the editor, and the CI service handles everything for me. This includes doing Linux Client and Dedicated Server builds as well as the Windows Client and Dedicated Server builds. I haven't managed to get the Mac side of things working … but we are okay with doing manual builds of that.

Nice. So, how would separating modules work? What kinda modules would you have? One for each game mode, something like that? One for each level?

Regarding Blueprints, which don't have this compile-time problem, how do you guys go about it in mid- or big-size projects? What's the sweet-spot proportion between coding in BP and in C++? Do you still do most of your project in C++, then? Wouldn't it be possible to do a good chunk of it in BP and avoid the compile time?

That looks like a lot to deal with. What do you think could be done to make it better? Or is it simply not possible by the nature of C++? Did you see anything on the UE4 roadmap that could make this more practical?

Thanks!

The granularity of module separation depends on the project and team - for example, you could use a reward/progression module, a vehicle physics module, etc…
The cost of using more modules is that inter-dependent references become harder - that's the trade-off.
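
For a concrete picture, here is a minimal sketch of what one such module could look like on the C++ side. The name RewardSystem is purely illustrative, and the matching RewardSystem.Build.cs (not shown) would list only the dependencies this module actually needs:

```cpp
// RewardSystemModule.cpp - minimal sketch of a standalone gameplay module.
// "RewardSystem" is a hypothetical name for illustration only.
#include "Modules/ModuleManager.h"

class FRewardSystemModule : public IModuleInterface
{
public:
    virtual void StartupModule() override
    {
        // One-time setup when the module is loaded.
    }

    virtual void ShutdownModule() override
    {
        // Teardown when the module is unloaded.
    }
};

IMPLEMENT_MODULE(FRewardSystemModule, RewardSystem)
```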

The BP/C++ division is also project/team dependent. BP is great for artists, great for prototyping, and easy to change/enhance. There are some things that must be done in code - although these are getting fewer all the time. It seems to vary greatly between developers - some embrace BP, others seem averse.
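
One common pattern for that division: keep the logic in C++ and let designers react to it in Blueprint. A hedged sketch (UScoreComponent and its functions are hypothetical names, not anything from the engine):

```cpp
// ScoreComponent.h - illustrative example of a typical C++/BP split:
// the logic lives in C++, designers hook up the reaction in Blueprint.
#pragma once

#include "CoreMinimal.h"
#include "Components/ActorComponent.h"
#include "ScoreComponent.generated.h"

UCLASS(ClassGroup=(Custom), meta=(BlueprintSpawnableComponent))
class UScoreComponent : public UActorComponent
{
    GENERATED_BODY()

public:
    // Perf-sensitive logic implemented in C++, callable from BP graphs.
    UFUNCTION(BlueprintCallable, Category = "Score")
    void AddScore(int32 Amount);

    // No C++ body: designers implement the reaction (UI, VFX) in Blueprint.
    UFUNCTION(BlueprintImplementableEvent, Category = "Score")
    void OnScoreChanged(int32 NewScore);

private:
    int32 CurrentScore = 0;
};

// ScoreComponent.cpp
// #include "ScoreComponent.h"
//
// void UScoreComponent::AddScore(int32 Amount)
// {
//     CurrentScore += Amount;
//     OnScoreChanged(CurrentScore); // hand the event off to the BP side
// }
```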

One more thing - on a large team, you can include binaries in the repo so that artists and BP-only users do not need to compile at all.

Highly recommend Perforce over Git for a large team due to locking of binary assets.

Also, I haven't seen it mentioned: if I need to test some logic specifically, I put it separately (like in its own project), test only that function, and then compile just that - consider them like little tiny testbeds for certain functions.
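
One way to picture the testbed idea - here sketched with UE4's automation test framework, where ComputeFallDamage is a made-up stand-in for whatever logic you're isolating. The point is that only this small translation unit recompiles while you iterate:

```cpp
// FallDamageTest.cpp - a tiny testbed for one function.
// ComputeFallDamage is a hypothetical function, used only for illustration.
#include "CoreMinimal.h"
#include "Misc/AutomationTest.h"

static float ComputeFallDamage(float FallSpeed)
{
    const float SafeSpeed = 1000.f; // no damage below this speed
    return FMath::Max(0.f, FallSpeed - SafeSpeed) * 0.01f;
}

IMPLEMENT_SIMPLE_AUTOMATION_TEST(FFallDamageTest,
    "Game.Damage.FallDamage",
    EAutomationTestFlags::ApplicationContextMask | EAutomationTestFlags::SmokeFilter)

bool FFallDamageTest::RunTest(const FString& Parameters)
{
    TestEqual(TEXT("No damage below the safe speed"), ComputeFallDamage(500.f), 0.f);
    TestEqual(TEXT("Damage scales above the safe speed"), ComputeFallDamage(1500.f), 5.f);
    return true;
}
```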

If that works, then I move on to implementing it into the main branch; here you'll just have to wait, I'm afraid, and then see what you just broke :wink: Also, before I forget - there is RECODE; I am using that with CRY and it saves a lot of time. Google it.

Hmmmm, that's interesting. But how can you put it separately in its own 'project' if it needs to be together with at least some other parts of the gameplay in order for you to test it? You mean, if there's this jump function you are implementing, you put it separately together with all the rest of the code needed to support that feature?

Yeah, that's the gist of it. Yes, you can argue that it might slow the general workflow down, but at the end of the day, if I want to test AI behavior, do I really need all those textures and maps and god knows what else you might have as your map/game logic?

Look at how Star Citizen is doing it. Judging from their Around The Verse videos, when they show the teams working on different aspects of the game, all developers use the base code of the current dev build but run their own test maps to try out the specific things they are working on - in Git, essentially on their own branch. Then, once a feature is ready for implementation, it gets implemented into the main dev branch, handed to QA to test, sent back for bug fixing, then tested again until fully working as per scope, and finally merged with the main branch and deployed to all others as the base.

Remember, this is very simplified, but you get the idea. And YOU are the one who knows what is required or not by that specific module. That's why a good, well-commented design document is of utmost importance - even more so the bigger your project gets. So ultimately, you are the one deciding what can be developed outside of the main project and later merged into it.

Here is how I do things:
Prototype → test → fix → test → merge into main dev → test → fix → test → publish onto master — rinse and repeat until all features are implemented

Yes, this might take more time in the end, but at least you have a clean flow, plus testbeds for certain things later on which you can extend.

The network game I am working on at the moment is getting quite big, with all the subsystems and functions in place, and I am currently sitting at around 1-5 minute compile times on quite a beefy PC. So it is something you'll unfortunately have to endure, I'm afraid. Get side activities at the ready or go for some fresh air while compiling :wink: Nothing beats a fresh breeze.

And the above doesn't just apply to game programming - these are workflows you can utilise for all sorts of projects.

Hope this helps

Nice! In which Around the Verse video do they point this out specifically, can you remember? I'd like to check it out.

But your argument raises the same question for me again: maybe someone who already has experience working with C++ and compile times would rather just keep working the way he/she does instead of bothering to change anything, but for a newbie, why would this be a good tradeoff? If performance is the big plus - honestly, I don't know; most people complaining about performance in Unity/C# are people here on the Unreal forums hehe. I believe both engines/languages can run into performance issues if not correctly optimized, and with tech advancements this becomes less and less of an issue, especially if we're not talking about the biggest and most complex games - or am I mistaken?

What would be the other points to answer the question: Why would a newbie choose to deal with compile times when they could choose not to (by choosing a different language/engine)?

None in particular, really - just from looking at their setup, I can tell you with a high degree of certainty that this is how they work. Otherwise, how do you imagine they would be able to split all this work between the studios? I remember Chris mentioning as well (some time ago) that it takes them 3+ hours to compile everything. Then again, they have DevOps engineers etc. who take care of automation and build routines/cycles. So like I said, unless you have a big team and the monetary resources to support it, there is no way around wait times - just clever (#lazy) engineering to do only the necessary bits in their own environments.

To be honest, every time I compile CryEngine I cross my fingers haha :wink: And let's not get into the conversation around Unity or C#. While Unity might be good for learning initial concepts, many of the long-term programmers who have worked in C++ and written custom engines will tell you that while it is "cool", you are not as free as with Cry or UE. Also, if you want graphical fidelity with Unity, you simply can't get it (unless you're willing to spend hundreds on plugins and then hope that they all work together). Unity was my go-to engine for Android mobile games, but since Unreal and Vulkan are a thing, I am switching even that stuff over as well. Unity, in my humble opinion, still needs to grow up.

And to answer your last bit there… well, no one is forcing anyone to use Unreal. And if you are a "newbie" to C++, I do not believe the right point of entry should be the engine, but rather a basic course in C++ itself, to at least understand all the basics, including pointers, references, inheritance, and structures. Once you get the basics down, you can start looking into Unreal programming. Until then, Blueprints are a great way to learn :slight_smile: When I started back in 1993, I came from ANSI C and was using Borland C++ (you can google that one :smiley: ), writing console applications for fun and school projects, or just using asm {} to hack the school's printer. Name me an AAA engine that gives you full source access like Unreal or CryEngine, with the same fidelity and options… I looked into many - stuff like Irrlicht comes to mind - but they all tried to do something specific and lacked versatility; at the end of the day you could only make a certain type of game, while with Unreal and CryEngine you can go and be creative. OK, Unity as well :wink:

Anyways hope this helps.

Thank you very much! Your answer for sure did help me clarify some questions.

This one might be too obvious to mention, but think and plan a little before messing with header files. The more files that include your changed header, the longer the build will take. Try to keep the rapid iteration in the .cpp files.
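
A minimal sketch of the idea - forward-declare in the header, include in the .cpp. All class names here are illustrative:

```cpp
// MyActor.h - forward-declare instead of #include. Touching
// InventoryComponent.h now only recompiles MyActor.cpp, not every
// file that happens to include MyActor.h.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "MyActor.generated.h"

class UInventoryComponent; // forward declaration instead of an #include

UCLASS()
class AMyActor : public AActor
{
    GENERATED_BODY()

    UPROPERTY()
    UInventoryComponent* Inventory; // a pointer only needs the forward decl
};

// MyActor.cpp - the full definition is pulled in here, and only here:
// #include "MyActor.h"
// #include "InventoryComponent.h"
```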

In AAA, classes sometimes grow so fast that they reach enormous sizes. I discovered that some crucial gameplay classes like Pawn actually have 120,000 lines of code. They are divided into separate files like Pawn01.cpp, Pawn02.cpp … Pawn12.cpp, because there is a 10k-line limit per file. Not sure who started that, but it happens.

  • So just think ahead and divide code into smaller classes (managers, factories, libraries, components).
  • Try to reuse code as much as you can with static libraries/helpers.
  • Make new modules for independent code (Audio, Gameplay, Input, Interface, Session, Online, Types, UI, Utility, Event, Extension, LifeCycle, etc.). In the end, having more modules in your project helps.
  • Use interfaces to communicate without creating additional dependencies (see the sketch after this list).
  • IWYU - include only what you actually use, nothing more; even CoreMinimal pulls in a lot of libraries.
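
For the interface point, a hedged sketch of a small UE4 interface (the Interactable name and everything around it are illustrative): callers depend only on this little header, not on the concrete actors (doors, chests, NPCs) that implement it, so those can change freely without triggering rebuilds of the callers.

```cpp
// Interactable.h - hypothetical interface, for illustration only.
#pragma once

#include "CoreMinimal.h"
#include "UObject/Interface.h"
#include "Interactable.generated.h"

class AActor;

UINTERFACE(MinimalAPI, Blueprintable)
class UInteractable : public UInterface
{
    GENERATED_BODY()
};

class IInteractable
{
    GENERATED_BODY()

public:
    // Implementable in either C++ (Interact_Implementation) or Blueprint.
    UFUNCTION(BlueprintNativeEvent, Category = "Interaction")
    void Interact(AActor* Instigator);
};
```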

Incredibuild helps, but what if your project looks like the UE4 engine itself? :smiley:

I would say, like almost everyone here: advanced setups, code management, and a tool like Incredibuild.

@Tefel 120k lines of code in a single class? I mean, holy ■■■■, I know that AAA games can get complex, but to me this just sounds wrong. It's an example of how bad/hard management can get if 100 people have to work with the same class, I guess. I'm pretty sure everything is getting dumped into that base class for no good reason.

heh… I’ve seen over 20 lines of “if ( … || … )” just for a GTA character to enter a car without being thrown to the moon.

AAA code can become very dumb very often.

@strife I wonder how it all started. I think this happens when another project starts on top of an existing codebase. Before this, I didn't know it was possible to have so many lines in just one class. If you don't plan on splitting your code from the beginning, it's hard to do later on; even code reviews don't change much.

Holy thread necromancy.

Clearly things have improved quite a lot in the 4 years since this thread started – with live reloads, and the newer options to hot reload into a live session, obviously things have become better.

That said, C++, and just as much the structure of Unreal, don't really make it super easy to crunch code down into small units.

— sort of a tangent follows —
Gaming, as an industry, itself is lightyears behind the rest of the world in processes, quality, and quite frankly, pay scales. While there may be some (and likely are!) places where you’ll find modern processes that include things like code review, pair programming, continuous integration, automated testing, actually caring about not just “does the code work” but “is the code of high quality”, these things are not the norm. That relates back to the idea that places might potentially have code units that are thousands of lines long. Very few care about the cognitive overload involved with working with such insanely sized code units. Very few care about the amount of time spent to build it (JUST THROW MORE CPU AT IT). Very few care about the code being structured in a better way than just – it works. Because no one will ever have to build anything new with this code, it’s a game, and whatever the next game is, will have totally different code.
— end tangent —

The smaller the code units you write, the less time you're going to have to spend building them. Don't build a 10k-line anything, ever. I've just spent the last several years in the JavaScript world, and the usual advice these days is that if a code unit is larger than 2-3 screens full, it's too large.

There’s simply no “one size fits all.” The JavaScript community is known for being dumb about projects at scale, or even just general code engineering. They keep re-inventing wheels the industry already left behind in the '80s. (The leftpad debacle was a perfect illustration, and it really hasn’t gotten much better since then.)

Smaller units are often helpful, as long as they are not coupled. But when you have tons of coupling, a big unit that knows about all the coupling will sometimes serve better. Trying to fracture a large, closely-coupled system will often end up with runtime performance that's 10x slower because of the necessary indirection. When your application is intended to calculate quarterly reports for the back office of some enterprise, that's a TOTALLY FINE choice – it's more important that it's right, and that the next developer who comes along can easily read and understand it, than how fast it runs. For games, this is not so – gameplay code needs to deal with tons of special cases because of the inherent problem domain, and performance is absolutely crucial.

That doesn't mean that every solution requires big, dense classes and files. Sometimes, breaking parts apart actually reveals opportunities for optimization, and even parallelization. There are many cases where schedule, inertia, or inexperience means that people write bigger and denser code than they have to, and when you find those, taking a bit of time to make it better delivers value to all parts of the team. It's just not the case that EVERY large, dense piece of code can have this hammer applied to it. Sometimes, the problem is just complex, inherently.

The best solution, in my opinion, is when you end up building a better vocabulary, typically at a higher level. For example, we don't need to write a "rolling barrel simulation" and a "fixed rock simulation" and a "falling debris simulation" and then try to make them all talk together – we raise the level of abstraction to rigid bodies with algorithmically defined collision manifolds, and use a general solver for running "the simulation" where everything interacts. The drawback is that you now have to think in terms of forces, torques, and contacts, rather than whatever features were important to your particular barrel in the "rolling barrel simulation." A higher layer of abstraction, more powerful primitives, the ability to talk about higher-level concepts, the ability to parallelize and optimize in the higher-level domain – all good things, and the reason we use a physics engine in most 3D games today.
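
To make that concrete, a very rough sketch of the "general solver" vocabulary. All names and the toy integrator are purely illustrative - a real solver also handles contacts, torques, and collision manifolds:

```cpp
// One general primitive and one general solver step replace per-object
// simulations: barrels, rocks, and debris are all just rigid bodies
// with different parameters.
#include "CoreMinimal.h"

struct FRigidBody
{
    FVector Position;
    FVector Velocity;
    float   Mass = 1.f;
};

void Integrate(TArray<FRigidBody>& Bodies, const FVector& Gravity, float Dt)
{
    for (FRigidBody& Body : Bodies)
    {
        Body.Velocity += Gravity * Dt; // accumulate acceleration
        Body.Position += Body.Velocity * Dt; // advance the state
    }
}
```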

In fact, that’s what the Unreal Engine is – it defines a Gameplay Framework that lets you talk about Actors, and Pawns, and Components, and Assets, rather than having to talk about structs and pointers and blocks of texture data.

I think there may be a higher-layer set of primitives that can help with gameplay/entity code, but it’s not at all clear what that looks like, and nobody has managed to invent and popularize it yet. If you can define and build it, you may be setting the stage for the next big engine for the 2030s :slight_smile:

Epic uses meta-programming sporadically in some parts of the engine, but it's perfectly viable to write gameplay code in an almost completely abstract way:

Calling Functions: A Tutorial - Klaus Iglberger - CppCon 2020 - YouTube
Back to Basics: Templates (part 1 of 2) - Andreas Fertig - CppCon 2020 - YouTube
Back to Basics: Templates (part 2 of 2) - Andreas Fertig - CppCon 2020 - YouTube
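
As a hedged taste of the template style those talks cover - a generic helper that is resolved at compile time and knows nothing about concrete gameplay classes. All names here are made up for illustration:

```cpp
// Works with any container of targets and any callable that applies damage.
template <typename Container, typename DamageFn>
void ApplyDamageToAll(Container& Targets, float Amount, DamageFn Fn)
{
    for (auto& Target : Targets)
    {
        Fn(Target, Amount); // statically dispatched: lambdas, function
                            // pointers, and functors all work alike
    }
}

// Usage with a hypothetical AEnemy type:
//   TArray<AEnemy*> Enemies;
//   ApplyDamageToAll(Enemies, 25.f,
//       [](AEnemy* Enemy, float Dmg) { Enemy->Health -= Dmg; });
```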

The thing is… make sure your peers will be happy with that lol

The other thing is that after the code is thrown away, often so are all the people. :stuck_out_tongue:
Or pay is flat unless you’re the CEO / Mgt ofc. Game dev is merciless that way! :mad:
BTW: Is there any connection between @eblade and UE3-UDK-UT: Blade[UG]?

> Sometimes, the problem is just complex, inherently.

Right. I think we just said the same thing – it's generally better (almost always so) to have well-written, relatively small units. Surely in this environment you're not likely to get your class sizes down to 60-80 lines (especially as Epic's style standard is very heavy on line breaks :smiley: ), especially not for the classes that compose together a lot of other pieces, but I would say it is very, very likely that if someone – particularly a professional – has a 10K-line Pawn class, their development processes are probably broken.

(To be fair, I don't actually believe someone has a Pawn that size, but someone upthread claimed it – I think they said 120K, though.)

In Unreal, we compose things by breaking them up into multiple classes, UObjects, Actors, or Components – in a "normal" structure, you'll have a PlayerController and a Pawn, one of those may have a reference to an InventoryManager, and that in turn has references to your Inventory items. If you have 10K lines that cannot be separated on a basis like that, I'd really have to question it. FWIW, the composition of many things in Unreal could probably be improved by making more use of Components and less use of Actors (although I really haven't done any exploration in that vein, as I've been away for most of the last decade).
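
A hedged sketch of that chain - UInventoryManager is a hypothetical UActorComponent subclass; the point is that the Pawn only composes pieces, it doesn't own 10K lines itself:

```cpp
// MyPawn.h - the Pawn composes an inventory manager; the inventory logic
// lives in its own small class and header. Names are illustrative.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Pawn.h"
#include "MyPawn.generated.h"

class UInventoryManager; // its header is only needed by MyPawn.cpp

UCLASS()
class AMyPawn : public APawn
{
    GENERATED_BODY()

public:
    AMyPawn();

    UPROPERTY(VisibleAnywhere, Category = "Inventory")
    UInventoryManager* InventoryManager;
};

// MyPawn.cpp excerpt - wiring the component up in the constructor:
// #include "MyPawn.h"
// #include "InventoryManager.h"
//
// AMyPawn::AMyPawn()
// {
//     InventoryManager =
//         CreateDefaultSubobject<UInventoryManager>(TEXT("InventoryManager"));
// }
```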

In any case, if you take a basically empty game, and you make a negligible change to an otherwise empty code unit, and it takes longer than a few seconds to rebuild in live update mode, then you’ve got a problem somewhere that is outside of Unreal.

If your specific code takes a very long time to build, whenever you make a change, then maybe you have some poor structuring.

> BTW: Is there any connection between eblade and UE3-UDK-UT: Blade[UG]

It me!