POM isn’t very suitable for large depth ranges: you end up with a lot of distortion and disocclusion artifacts, and the cost is very high. The approach used in Spaces looks like just a flat texture (or a stereoscopic texture, selecting which eye’s image to sample based on the viewport X coordinate) with a simple view-based offset at a constant depth to shift the perspective; the BumpOffset node will accomplish that. You could also sample a cube map using the camera vector to create the impression of looking into a large interior space.
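If you want the same effect in a Custom node instead of the BumpOffset node, the math is just a view-dependent UV shift. A minimal sketch, where the input names (UV, Height, HeightRatio, ReferencePlane, TangentViewDir) are assumptions you would wire up yourself:

```hlsl
// Bump offset (single-sample parallax) sketch for a Custom node.
// Assumed inputs:
//   UV             - float2, base texture coordinates
//   Height         - scalar height sample (0..1)
//   HeightRatio    - scalar depth scale (e.g. 0.05)
//   ReferencePlane - scalar pivot height (e.g. 0.5)
//   TangentViewDir - float3, camera vector transformed into tangent space
float2 Offset = (Height - ReferencePlane) * HeightRatio * TangentViewDir.xy;
return UV + Offset;
```

Feed the result into your texture sample’s UVs; because there is only one sample and a constant depth, it stays cheap but only fakes a single flat layer of parallax.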
A more complete (and more complex) solution would be to define clipping planes in your material and mask out pixels beyond those planes, either with alpha or with the clip(x) function in a Custom node. Technically you could use anything as a mask, inverted for objects on the other side of the portal; all that matters is that the mask is computed consistently in screen space inside each material. Making that work with lighting is very complicated, though.
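The clip(x) variant could look like the sketch below, where the plane inputs (PlanePoint, PlaneNormal) and the Side switch are assumptions, not a fixed API:

```hlsl
// Custom node sketch: kill pixels on the far side of a world-space plane.
// Assumed inputs:
//   WorldPos   - float3, Absolute World Position
//   PlanePoint - float3, any point on the clipping plane
//   PlaneNormal- float3, normalized plane normal
//   Side       - scalar, +1 or -1 to flip which side survives
//                (use the opposite sign for objects on the other side of the portal)
float SignedDist = dot(WorldPos - PlanePoint, PlaneNormal) * Side;
clip(SignedDist);   // discards the pixel when SignedDist < 0
return 1;           // pass-through value; wire into Opacity Mask or similar
```

With alpha masking instead of clip, you would output step(0, SignedDist) into Opacity Mask; either way, every material involved has to evaluate the same plane so the cut lines up in screen space.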