AutomationTesting: Logging Errors in Succeeding Tests

The code under test logs Errors and Warnings, but the tests should still succeed.

I want a test to succeed while the tested code freely logs Errors and Warnings.

A specific use-case:
I am testing a process that reads a file and does some operations on the content. I write a test that feeds the process an invalid file, which would cause an Access Violation if the code were not written to handle such malicious files.
When the reader detects a problem, e.g. that it is about to read memory locations outside the buffer reserved for the file content, it logs an Error and returns an error code.
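
For illustration, a minimal sketch of what such a guarded read could look like; the file layout, log category and function name here are made-up assumptions, not the actual reader:

// Hypothetical sketch only: record layout, log category and names are assumptions.
bool ReadStuffFromFile(const FString& FilePath)
{
	TArray<uint8> Bytes;
	if (!FFileHelper::LoadFileToArray(Bytes, *FilePath))
	{
		UE_LOG(LogTemp, Error, TEXT("Could not open '%s'"), *FilePath);
		return false;
	}

	// Assume the file starts with an int32 record count; validate it against
	// the actual buffer size instead of trusting it blindly.
	if (Bytes.Num() < (int32)sizeof(int32))
	{
		UE_LOG(LogTemp, Error, TEXT("'%s' is too small to contain a header"), *FilePath);
		return false;
	}
	const int32 ClaimedRecords = *reinterpret_cast<const int32*>(Bytes.GetData());
	const int64 RequiredBytes = sizeof(int32) + (int64)ClaimedRecords * sizeof(int32);
	if (ClaimedRecords < 0 || RequiredBytes > Bytes.Num())
	{
		// Reading on regardless would be exactly the Access Violation the test guards against.
		UE_LOG(LogTemp, Error, TEXT("'%s' declares %d records but holds only %d bytes"),
			*FilePath, ClaimedRecords, Bytes.Num());
		return false;
	}

	// ... process the records ...
	return true;
}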

In short: I am testing code that logs an Error, but I want the test to succeed.

The only option I found is SetSuppressLogs(true), which is not a nice solution, because if the test does fail we no longer get any precise feedback about why.
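
One small mitigation, assuming SetSuppressLogs simply toggles a flag that can be switched back: narrow the suppression window to just the call that is expected to log, so the rest of the test still reports normally:

	SetSuppressLogs(true); // suppress only around the call that is expected to log
	const bool bResult = StuffFileReader::ReadStuffFromFile(FilePath);
	SetSuppressLogs(false); // everything after this reports normally again

	TestFalse(TEXT("Handle corrupt file"), bResult);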

I can see that a lot of the standard Engine automation tests will ‘succeed’ with warnings. So Epic seems to have the same problem?

Generally: what the tested code logs has nothing to do with how I evaluate its return values for the status of the test.
I would expect a test to be marked as a success as long as I don’t return false from RunTest.


/**
 * Try to import a file that has invalid content.
 * This should return false and, ideally, not crash the editor.
 */
IMPLEMENT_SIMPLE_AUTOMATION_TEST(FTestReadInvalidFile, "ReadInvalidFile", EAutomationTestFlags::ATF_Editor)
bool FTestReadInvalidFile::RunTest(const FString& Parameters)
{
	const FString FilePath(TestDataPath + TEXT("invalid_file.file"));
	SetSuppressLogs(true); // not nice!
	// The following call logs an Error describing what is wrong with the file.
	const bool bResult = StuffFileReader::ReadStuffFromFile(FilePath);
	//SetSuccessState(true); // doesn't matter
	TestFalse(TEXT("Handle corrupt file"), bResult);
	return true; // doesn't matter either: any logged Error fails the test
}


Does anyone have a better solution to this? How would you write such a test?
Could the system maybe be reworked so that it doesn’t completely suppress the logs, but merely ignores them when evaluating the test result, while still passing the complete logged text through so it is displayed in the frontend?

The problem arises when we call code that genuinely produces and logs an Error, one that says the underlying code is not working, yet the code we test still does the ‘right thing’ with ‘wrong input’.
Taking a look at pytest, which uses exceptions to validate tests, there is the with pytest.raises statement for exactly this case (see “How to write and report assertions in tests” in the pytest documentation): it accepts the specific exceptions you expect your code to raise, but not others coming from code it depends on.
UE automation testing doesn’t use exceptions but logging, so we would need something like ‘ignore a certain logging type / group’ to come close to that versatility. I didn’t look up how such a case would be solved with GoogleTest, though.
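
For completeness, newer engine versions do ship something in this direction: FAutomationTestBase::AddExpectedError registers a log pattern the test expects, consumes matching Errors instead of failing on them, and still fails on any other Error, which comes close to the pytest.raises semantics. A sketch of the test rewritten with it (check AutomationTest.h to see whether your engine version already has this):

bool FTestReadInvalidFile::RunTest(const FString& Parameters)
{
	const FString FilePath(TestDataPath + TEXT("invalid_file.file"));

	// Expect exactly one Error whose message contains this substring;
	// any other Error still fails the test.
	AddExpectedError(TEXT("invalid_file.file"), EAutomationExpectedErrorFlags::Contains, 1);

	const bool bResult = StuffFileReader::ReadStuffFromFile(FilePath);
	TestFalse(TEXT("Handle corrupt file"), bResult);
	return true;
}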
