Hello!
My team is looking at setting up a build farm job to run “engine automated tests” to catch regressions in both workflows and game runtime when we make changes to Unreal Engine code. We’re already using automated tests for our own code and content and are quite satisfied with them, but this post is specifically about Unreal Engine code. I’ve been reading through the documentation and forum posts, and I’m having a hard time figuring out what such a job should entail.
My 2 main questions are:
* What should be considered an “engine automated test”?
* Which of those should I expect to pass successfully on an official Unreal Engine release?
I’m aware of the Low Level tests ( Low-Level Tests in Unreal Engine | Unreal Engine 5.6 Documentation | Epic Developer Community ), the Automation Test Framework ( Automation Test Framework in Unreal Engine | Unreal Engine 5.6 Documentation | Epic Developer Community ), and the EngineTest project (I didn’t find any online documentation for this one, but there’s a readme.md file in the folder, and there are a few references to it in the forums). The documentation is pretty good at explaining how to use the frameworks and how to run them, but not so much at how they’re meant to be used for testing the engine itself. I’m also aware of the “Engine Tests” category in the Unreal Editor “Test Automation” window, but I’m not clear on what those categories mean.
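For context, here’s roughly how we’ve been launching the editor automation tests on the farm. The project path, group filter, and report path are placeholders for our setup; the flags are the ones described in the Automation Test Framework documentation:

```bat
REM Launch the editor headless and run the "Engine" automation group.
REM The project path, test filter, and report path are placeholders.
Engine\Binaries\Win64\UnrealEditor-Cmd.exe ^
    "D:\Projects\EngineTest\EngineTest.uproject" ^
    -ExecCmds="Automation RunTests Engine" ^
    -TestExit="Automation Test Queue Empty" ^
    -ReportExportPath="D:\BuildReports\Automation" ^
    -unattended -nopause -nosplash -log
```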
Regarding the expectations, this one matters a lot. We really don’t want to spend our time investigating Unreal Engine tests that are already known to fail. For example, using code and content synced directly from the Epic P4 server, in our release the EngineTest project fails to cook on its own due to a missing dependency on the “CommonUI” plugin. After fixing that manually, the tests themselves still fail with an error. Running the Low Level tests also results in a few failures. Running the “Engine Tests” from the Unreal Editor “Test Automation” window eventually aborts the run and kills the editor. I’m guessing that in each case we’re either running tests that aren’t supported or expected to pass, or we’re running them incorrectly, but I really don’t know which it is.
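In case it helps pinpoint whether we’re simply running things wrong, this is roughly what we ran for the cook and for the Low Level tests. Paths are placeholders for our environment, and FoundationTests is just the example test target from the Low-Level Tests documentation:

```bat
REM Cook the EngineTest project the way our farm does; this is where the
REM missing "CommonUI" dependency surfaced for us.
Engine\Build\BatchFiles\RunUAT.bat BuildCookRun ^
    -project="D:\Projects\EngineTest\EngineTest.uproject" ^
    -platform=Win64 -clientconfig=Development ^
    -build -cook -noP4

REM Build and run one of the low-level (Catch2-based) test targets.
REM FoundationTests is the example target from the docs; the executable's
REM output location may differ depending on the target's configuration.
Engine\Build\BatchFiles\Build.bat FoundationTests Win64 Development
Engine\Binaries\Win64\FoundationTests.exe
```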
Even if running the full battery of tests isn’t expected to work for game projects, is there a “safe subset” for us to rely on? Alternatively, which tests would you recommend we run before sending a pull request for an engine change on GitHub?
We’re currently using 5.6.1.
Thank you,
Dom