Just getting into this myself, but here's where I'm at; it's an amalgam of a few different resources I've found.
I'm taking my left and right eye images and packing them into a single image, with the left eye across the top half and the right eye across the bottom, like this: https://code.blender.org/wp-content/uploads/2015/03/gooseberry_benchmark_panorama.jpg
In a material blueprint, the ScreenPosition node gives you the normalized coordinates of the current pixel, running from (0,0) in one corner to (1,1) in the opposite corner. That means you can test whether a pixel is on the left or right side of the screen by checking whether its ScreenPosition x value is less than or greater than 0.5. In stereo rendering each eye's view fills one half of the screen, so this effectively tells you which eye is currently being drawn.
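If it helps to see it in code, here's roughly what that test boils down to, written as plain HLSL (the kind of thing you could paste into a Custom node; `screenUV` is my own name for a ScreenPosition node wired into the node's input):

```hlsl
// screenUV: assumed to come from a ScreenPosition node, (0,0) to (1,1) across the viewport.
// Each eye renders to its own half of the screen, so the x coordinate
// tells you which eye this pixel belongs to: 0 = left eye, 1 = right eye.
float eye = (screenUV.x < 0.5) ? 0.0 : 1.0;
```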
You can then do some tricks to the UVs of your sphere mesh. Assuming they're in a regular spherical 0-1 range, one branch takes the top half of the texture and scales it out to the full UV range, and another branch takes the bottom half of the texture and scales it out to the full UV range.
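The scaling itself is just a multiply and an add on the UVs. Something like this (again a sketch in HLSL, with `uv` standing in for the sphere's TexCoord; note that v runs from 0 at the top to 1 at the bottom in UE's convention):

```hlsl
// uv: the sphere's regular 0-1 spherical UVs.
// Squeezing v into 0-0.5 stretches the top half of the texture over the whole sphere:
float2 topHalfUV    = uv * float2(1.0, 0.5);
// Offsetting by 0.5 samples the bottom half instead:
float2 bottomHalfUV = uv * float2(1.0, 0.5) + float2(0.0, 0.5);
```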
Armed with those two things, you get the attached network. If the pixel is on the left side of the screen, take the top half of the image (the left eye) and expand it to the full size of the sphere; otherwise, if the pixel is on the right side of the screen, take the bottom half of the image (the right eye), expanded to the full size of the sphere.
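For reference, the whole network collapses to a few lines if you write it as a Custom node body instead. This is just a sketch under the same assumptions as above (`uv`, `screenUV`, `tex`, and `texSampler` are names I've made up for the node's inputs; `Texture2DSample` is UE's sampling helper for Custom node code):

```hlsl
// Left half of the screen = left eye -> top half of the image;
// right half of the screen = right eye -> bottom half of the image.
float2 halfUV = uv * float2(1.0, 0.5);    // squeeze v into the top half
float2 eyeUV  = (screenUV.x < 0.5)
    ? halfUV                              // left eye: v in 0-0.5
    : halfUV + float2(0.0, 0.5);          // right eye: v in 0.5-1
return Texture2DSample(tex, texSampler, eyeUV);
```

Whether you build it out of nodes like the attached network or inline it like this, the math is the same.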
Not sure if this is the most efficient way to do it, but it's working well for me so far.