I have gone through the thread [Content removed] and the Automated Performance Testing talk by Matt Oztalay (https://www.youtube.com/watch?v=aqNL8tdfIHI), and I had the following questions.
Firstly, thanks for a great tool, the talk, and that other UDN thread. Together they explain why the plugin helps in comparison to the existing tools such as the CSV Profiler, Perf Report Tool, Insights, etc., which sorts one major question out of the way.
Is this plugin going to be supported long term?
I still don’t follow how this works for mobile devices (Android/iOS), especially when it comes to CI/CD pipelines. How do we pass the OS-level permissions? My current assumption is that these are passed with adb commands through a script in the CI/CD pipeline.
How is the gathered CSV data retrieved automatically from the mobile devices?
Does the mobile device need to be connected to the build machine via USB, or at least stay reachable on the same network, for the build machine to install the nightly build on the device directly?
On Android, the command line arguments need to be passed via the intent activity arguments, or via a UECommandline.txt that is pushed onto the device with adb commands or the AndroidFileServerTool (which uses adb under the hood anyway). How does the plugin handle this when setting a specific map or game mode?
What about UI flow? Can that be captured, or does this need to run in a map/mode that ensures UI interactions do not get in the way?
Lastly, a minor piece of feedback: the delays, warm-ups, soaks, and cooldowns are all specified in units of time, but my personal preference would have been frames, since two frames don’t necessarily take the same amount of time on all devices at all times, so being able to specify these in frames seems slightly more correct to me personally. Please let me know your thoughts on this as well.
Hey there! To clarify the previous EPS case: it’s not that the plugin is meant to replace the CSV Profiler, Perf Report Tool, Insights, etc. so much as it’s meant to glue them all together.
To answer your questions:
I’ve got some work on my plate on the framework for 5.8
Interaction with target platforms is handled through Gauntlet’s device manager
The CSVs generated by the tests are gathered by Gauntlet at the conclusion of the session and brought back to the host machine’s GauntletTemp folder for later processing
As long as the mobile device is accessible to the host machine through the platform’s systems, you should be fine. For example, if you can see the device in the list of adb devices, then Gauntlet will be able to pick it up and deploy to it
Again, because this is all running through Gauntlet, the underlying platform support is already taken care of
UI flow is still an open question, but it’s come up recently. Right now the tests are expected to be “Launch the game, open the map with the specified game mode, switch between static cameras/do a flythrough/run a replay, exit”. None of the current test controllers have any support for UI flow testing.
I like the idea of having the option to set the delays in terms of # frames, I’ll take that back to the group and see what we can come up with!
Hope that helps, let me know if you’ve got any other questions!
We are currently on 5.5.4, and I am not aware of any plans the team might have for engine upgrades as of yet, but I had a few more questions -
The video mentions a command that would generate and add the AutoPerfTests.xml file to the project (along with some other files, I assume), but this does not seem to work in 5.5.4, so I assume these were added later in 5.6?

RunUAT.bat AddAutomatedPerfTestToProject -Project=pathtoProject

So, is the plugin still functional in 5.5.4, or is a plugin-only upgrade path from the 5.7 release branch recommended for now (congrats on the release)?

The video also mentions getting the 5.6 version of certain sample projects like Lyra and CitySample, but how does one get specific versions of the sample projects? Do they also need to be built with the source code of those specific engine versions? My understanding is that one of the requirements for the plugin is to have the necessary code (engine code, I assume). Please correct me if that’s a misunderstanding on my part.
The last part that I don’t follow 100% is whether I still need to make the BuildGraph changes mentioned in the video or not, and similarly the UAT code below -
Hey there! So sorry I missed that y’all were on 5.5. I don’t think I spelled it out clearly in the video, but we made some changes to other systems in the engine for 5.6 to help us reduce the number of boilerplate files we needed to create and populate to get everything working right from the BuildGraph and Gauntlet stages. There’s not a direct upgrade path from 5.5 that’d be easy to backport. If you’re keen on leveraging the framework, but you’re not in a position to upgrade your project to 5.6, I can talk you through the broad strokes of the changes and highlight what we had to do to overcome those limitations in 5.5. Keep in mind, you’ll need to make your own BuildGraphs and UnrealTest nodes for this to work and I can’t really guarantee any of this will work.
Prior to 5.6, the engine didn’t support including .csproj files from the Plugins directory
So, you need to create an Automation project (MyProject.Automation.csproj) that explicitly includes the two .cs files from the plugin (see the sketch below).
I recommend duplicating one from an existing sample like CitySample (which actually has these included already), or MedievalGame
In that automation project, you’ll have to subclass AutomatedSequencePerfTestNode. You can find an example of this in CitySample’s CitySampleTest.AutomatedPerfTest.cs
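To illustrate the shape of what you’d add to that automation project, here’s a rough fragment. I’m going from memory, so treat the plugin script path, the $(EngineDir) property, and the project references as assumptions based on how the sample automation projects are usually wired up; copying CitySample’s .csproj and renaming it is still the path of least resistance.

<!-- Fragment of MyProject.Automation.csproj: explicitly compile the plugin's Gauntlet scripts,
     since a 5.5 engine won't discover a .csproj that lives under the plugin itself -->
<ItemGroup>
    <!-- Adjust this to wherever the AutomatedPerfTesting plugin actually keeps its automation .cs files -->
    <Compile Include="$(EngineDir)\Plugins\Performance\AutomatedPerfTesting\Build\Scripts\*.cs" />
</ItemGroup>

<ItemGroup>
    <!-- The plugin scripts build on top of Gauntlet and AutomationUtils, so reference those automation projects -->
    <ProjectReference Include="$(EngineDir)\Source\Programs\AutomationTool\AutomationUtils\AutomationUtils.Automation.csproj" />
    <ProjectReference Include="$(EngineDir)\Source\Programs\AutomationTool\Gauntlet\Gauntlet.Automation.csproj" />
</ItemGroup>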
Creating Test nodes using the same underlying UnrealTestNode with parameters wasn’t supported in BuildGraph, but we still needed a way to hook into the PerPlatform/PerConfiguration/PerTest loop in the BuildAndTestProject base build graph. Otherwise BuildGraph would have to create two separate agents, one for the default TargetTestList and one for the APT tests you’d want to run.
In 5.5 we added the AdditionalTestNodes macro which is expanded at the end of the Platform/Configuration/Test loop inside BuildAndTestProject.
You’ll need to create a BuildGraph like “MyProjectAutomatedSequencerTest.xml” that looks like this:
<?xml version="1.0" encoding="utf-8"?>
<BuildGraph xmlns="http://www.epicgames.com/BuildGraph" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.epicgames.com/BuildGraph ../../../../../../../Engine/Build/Graph/Schema.xsd">
    <Option Name="MapSequenceComboNames" DefaultValue="" Description="+ separated list of names, referencing the names of Map/Sequence combos defined in Automated Perf Test Project Settings" />

    <!-- Add more MapSequenceCombo names here as more biomes are added to the project -->
    <Do If="'$(MapSequenceComboNames)' == ''">
        <Property Name="MapSequenceComboNames" Value="PerfSequence" />
    </Do>

    <!-- Pull in the file that defines all the Automated Performance Common Arguments -->
    <Include Script="$(RootDir)/Engine/Plugins/Performance/AutomatedPerfTesting/Build/Inc/AutomatedPerfTestCommonSettings.xml" />

    <!-- Pull in the macro for defining automated sequence perf test nodes -->
    <Include Script="$(RootDir)/Engine/Plugins/Performance/AutomatedPerfTesting/Build/Inc/AutomatedSequencePerfTestMacro.xml" />

    <!-- Extend the Additional Test Nodes macro by expanding the Automated Sequence Perf Test macro here to add those tests to the overall test nodes -->
    <Extend Name="AdditionalTestNodes">
        <Expand Name="AutomatedSequencePerfTest" TestName="%YOUR_TEST_NODE_NAME_HERE%" MapSequenceComboNames="$(MapSequenceComboNames)" AdditionalArguments="$(AutomatedPerformanceCommonArgs)"/>
    </Extend>
</BuildGraph>
Then in your root BuildGraph, before you include BuildAndTestProject, you’ll set the AdditionalTestNodesExtendFile property so that BuildAndTestProject properly picks it up
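As a rough sketch of that wiring in the root graph (the path to the extend file here is just an illustrative placeholder for wherever you end up putting MyProjectAutomatedSequencerTest.xml):

<!-- Root BuildGraph: point BuildAndTestProject at the extension file before pulling it in -->
<Property Name="AdditionalTestNodesExtendFile" Value="$(RootDir)/MyProject/Build/MyProjectAutomatedSequencerTest.xml" />

<!-- ...then the existing <Include Script="...BuildAndTestProject.xml" /> and the rest of your root graph follow as before -->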
Back then we also didn’t have any of the other test types set up, so it was just “Perf”, and we hadn’t yet implemented the iteration counts. And I think for local reports we’re expecting that you’ve got a folder in Project\Build\Scripts called “PerfReport” with the XMLs you need for PerfReportTool to generate the reports. Again, you can copy that over from CitySample basically 1:1, but it may need some tuning. The other thing that was missing back in 5.5 was the CSV metadata to indicate to the perf report tool that there were CSVs generated from different MapSequenceComboNames in the same test run, which makes aggregating the reports a bit of a challenge.
Like I said, things were still very early days and kinda fragile back then. I’m hoping I’ve given you enough to try and piece together what you need to do to get up and running. Good luck!
Sorry for the delayed response. We decided to put in a temporary stopgap of our own for now, and might reach out again in the future if and when we decide to do an engine upgrade.
Appreciate the assist, and looking forward to testing it and providing more feedback in the future.
For our stopgap solution we built a UI with ImGUI (Android only) that utilizes the CSV Profiler, PerfReport Tool, and adb commands. You can check out the latest release from the link below if you want. Technically it only optimizes the workflow minimally, but it still fulfills our needs by enabling any end user to collect data and generate reports locally.
That’s great to hear! I’m glad you’ve got something set up. If you end up moving to 5.6 one thing I’ll highlight is SlateIM as a native option for immediate mode UI debug tools in Unreal.