Practically speaking, the answer is no. Screen-space means the effect works only with the data in the G-buffers, so it has no information about anything the camera can't see. The result is that nothing offscreen contributes light or shadow, and the same goes for occluded objects, though occlusion is less likely to be a common issue in a top-down game.
In theory, you could brute-force it by rendering at a larger field of view than you display and then cropping to the visible region, so the band with missing data is pushed offscreen. This is, of course, insanely expensive: a huge chunk of your render time is now spent on pixels the user never sees. On top of that it doesn't fully resolve the problem, it just mitigates it. Occluded meshes and large objects far offscreen still won't be correctly considered.
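To make the cost concrete, here's a rough back-of-the-envelope sketch (illustrative numbers, not Unreal-specific): because pixel count grows with the square of the linear overscan factor, even a modest margin wastes most of the frame.

```python
# Overscan cost sketch: render a larger region, crop to the visible area.
# `scale` is a hypothetical linear overscan factor (1.0 = no overscan).

def overscan_cost(scale: float) -> tuple[float, float]:
    """Return (pixel cost relative to normal, fraction of pixels shown)."""
    total = scale * scale           # pixel count grows quadratically
    visible_fraction = 1.0 / total  # only the central crop reaches the screen
    return total, visible_fraction

# Rendering 1.5x wider and taller means 2.25x the pixels,
# with only ~44% of them ever visible to the user.
total, visible = overscan_cost(1.5)
print(total, round(visible, 2))  # 2.25 0.44
```

So even a 50% margin on each axis more than doubles the shading work while throwing away over half of it, before accounting for the offscreen objects it still misses.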
I'm not sure Unreal even has an easy way of implementing this, short of a SceneCapture into a render target displayed on a fullscreen mesh plane. Either way, I would strongly recommend against it.
Your best option, in my opinion, is either baked lighting or DFAO (Distance Field Ambient Occlusion), since both work from scene data rather than what's on screen.