The number of processes is hardware-determined: the IDE will only ever get access to fewer processes than the total number of threads available on the CPU. It is fewer because the IDE is using at least one thread to drive the build itself, and the OS will not hand over some of the threads it is using. I am not exactly sure what the policy is for E-core/P-core configurations.
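As a rough, purely illustrative sketch of that upper bound (this is not UBT's actual scheduling policy, and the two "reserved" threads are just an assumption for the example):

```cpp
#include <cstdio>
#include <thread>

int main()
{
    // Total hardware threads the OS reports (P-cores and E-cores combined).
    const unsigned TotalThreads = std::thread::hardware_concurrency();

    // Hypothetical reservation: one thread for the IDE driving the build and
    // one for the OS. The real policy is more involved; these numbers only
    // illustrate "fewer than the total".
    const unsigned Reserved = 2;
    const unsigned BuildProcesses =
        (TotalThreads > Reserved) ? TotalThreads - Reserved : 1;

    std::printf("hardware threads: %u, roughly usable for compile actions: %u\n",
                TotalThreads, BuildProcesses);
    return 0;
}
```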
The number of Actions is determined by the number of “files” and binaries that need to be built. That mostly comes from where the changed file sits in the #include structure (which covers inheritance, composition, and direct referencing for static function calls). If the change is only to .cpp files, most of the rebuilds will be due to the reflection system, and if you look, most of them will be “*.generated.cpp” files, which come out of the “Unreal Build Tool” pipeline (the code generation itself is done by the “Unreal Header Tool”).
If I have a class that only inherits from, and has components of, existing classes, then a change to its .cpp file will result in 2 Actions (1 for the “file.cpp” itself, and 1 for the “file.generated.cpp”).
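A minimal sketch of such a class (the name and members are made up for illustration; a real header would also carry the module’s *_API export macro):

```cpp
// MyDoor.h -- a hypothetical class that only inherits from and composes
// existing classes.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "MyDoor.generated.h"          // reflection code generated for this header

UCLASS()
class AMyDoor : public AActor          // inheritance from an existing class
{
    GENERATED_BODY()

    UPROPERTY()
    class UStaticMeshComponent* DoorMesh = nullptr;   // composition of an existing class
};

// MyDoor.cpp -- editing ONLY this file typically shows up as 2 Actions:
//   1) MyDoor.cpp itself
//   2) the matching *.generated.cpp emitted for AMyDoor
#include "MyDoor.h"
```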
If I have a class whose header is in the #include list of 10 files directly, and some of those headers are themselves in the #include list of other .cpp files, the resulting Actions would be 1 + 10 + the number of other #include instances, and almost all of those will be the “*.generated.cpp” files.
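Putting hypothetical numbers on that fan-out (the counts are invented just to show the arithmetic):

```cpp
// MyDoor.h is #included directly by 10 .cpp files, and suppose 3 other .cpp
// files pick it up transitively through one of those headers.
//
//   Actions ~= 1   (MyDoor.cpp itself)
//            + 10  (the direct includers)
//            + 3   (the transitive includers)
//            = 14, most of which are the matching *.generated.cpp files.
```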
If the change is made to a .h file, the number of resulting Actions will be nearly double what a change to a .cpp file produces, because the whole tool-chain gets involved (the C++ compiler your IDE uses, then the “Unreal Header Tool” and the “Unreal Build Tool”). The reason for many of these rebuilds is that the “Unreal Header Tool” and “Unreal Build Tool” are rather aggressive in marking files as “dirty” and needing to be recompiled, mostly because both the “*.generated.cpp” and “*.generated.h” explicitly call out line numbers of the actual files (this is why modifying a line above the “GENERATED_BODY()” macro makes VS think the macro no longer exists: the call-out for it is line-explicit).
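To make the “line-explicit” part concrete, here is a small, self-contained sketch of the token-pasting trick that the engine’s GENERATED_BODY() macro relies on; the identifier names are simplified stand-ins for what ObjectMacros.h and the per-header *.generated.h actually define:

```cpp
#include <cstdio>

#define STRINGIFY_INNER(X) #X
#define STRINGIFY(X) STRINGIFY_INNER(X)
#define COMBINE_INNER(A, B, C) A##B##C
#define COMBINE(A, B, C) COMBINE_INNER(A, B, C)

// Build the identifier the way GENERATED_BODY() does: a file id, the current
// line number, and a fixed suffix pasted into one macro name.
#define BODY_NAME COMBINE(FID_MyDoor_h_, __LINE__, _GENERATED_BODY)

int main()
{
    // Prints something like "FID_MyDoor_h_13_GENERATED_BODY". The matching
    // *.generated.h defines a macro with exactly that line-numbered name;
    // add or remove a line above this point and the requested name changes,
    // so the stale generated file no longer defines it and the IDE flags it
    // as missing until the Unreal Header Tool regenerates the file.
    std::puts(STRINGIFY(BODY_NAME));
    return 0;
}
```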
But to generate those numbers, the tool-chain needs to traverse the entire Solution to mark files as dirty, which is entirely processor-speed-dependent, real-time text parsing (VS tries to optimize this by only looking at files that have changed, but the Unreal Header Tool and Unreal Build Tool sometimes just traverse the entire Solution because “reasons”).
The reason it is sometimes faster to clean and rebuild is that the tool-chain doesn’t need to traverse the Solution to mark things as dirty before it builds; it throws everything away and starts again. It still needs to figure out the #include hierarchy, but that “should be fast”. In theory the build itself should be faster too, but in reality, for “complex” #include hierarchies, the IDE might have “a moment”, and then the Unreal Header Tool and Unreal Build Tool need to take their turn (I don’t think a Clean+Build resets the “*.generated.h” and “*.generated.cpp” files).
Thank you for explaining more than I’d expected. I just thought it was taking longer than it should. It looks abnormal. Maybe my 8700 is a little old for this.