That would result in reduced image quality, because the odd lines of a full-sized canvas would have to be predicted (procedurally reconstructed). In video technology this can work, since the video signal is slightly blurry, so the end result does not visibly suffer from artifacts and quality loss. In computer graphics, however, we always deal with discrete pixels, and there is no room for such quasi-pixels. Without seriously digging into the GPU BIOS and modifying it, I don't think you have any chance of producing interlaced output directly from your GPU, as it is designed to work with a full-sized canvas only. The engine implements this standard as well, by using the APIs the drivers provide.
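To illustrate the reconstruction problem: a rough sketch (my own illustrative approach, not anything a real GPU or driver does) of filling in the missing odd lines of a field by averaging the even lines above and below. The interpolated rows are guesses, which is exactly why this looks soft on a discrete-pixel display.

```python
# Sketch only: "bob"-style reconstruction of missing odd lines by
# averaging neighboring even lines. Values are made-up sample rows.
even_lines = [[0, 64], [128, 192], [255, 255]]  # one field (even rows only)

frame = []
for i, row in enumerate(even_lines):
    frame.append(row)                # keep the real scanline
    if i + 1 < len(even_lines):      # interpolate the missing odd line
        nxt = even_lines[i + 1]
        frame.append([(a + b) // 2 for a, b in zip(row, nxt)])

# frame now alternates real and interpolated lines; the interpolated
# ones are predictions, not real pixel data.
print(frame)
```

The interpolated rows sit between the real ones, so half of the final canvas is synthesized rather than rendered.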
Edit:
This actually brings us back to my previous post, where I mentioned that the vector and geometry information would have to be calculated twice per time frame. That means not only producing the pixel information of the geometries but actually processing all the triangles on the GPU twice. Doing this for a full-sized canvas would cause a performance loss equivalent to having twice as many triangles in the scene. As for the pixel output side of things, producing two half frames versus one full frame makes no performance difference, because the GPU is a massively parallel chip whose individual threads work in parallel, so it makes no difference when and how you generate the final digital images.
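A back-of-the-envelope cost model (my own illustrative numbers, not measured data) shows the asymmetry I mean: rendering two fields doubles the geometry work per display interval, while the total rasterized pixel count stays the same.

```python
# Illustrative cost model: progressive (one full frame) vs interlaced
# (two half-height fields) per display interval. All figures assumed.
WIDTH, HEIGHT = 1920, 1080
TRIANGLES = 1_000_000  # assumed scene complexity

# Progressive: one pass over the geometry, one full-resolution raster.
progressive_vertex_work = TRIANGLES
progressive_pixel_work = WIDTH * HEIGHT

# Interlaced: geometry is transformed once per field (twice per
# interval), but each field rasterizes only half the scanlines.
interlaced_vertex_work = 2 * TRIANGLES
interlaced_pixel_work = 2 * (WIDTH * (HEIGHT // 2))

print(interlaced_vertex_work / progressive_vertex_work)  # 2.0
print(interlaced_pixel_work / progressive_pixel_work)    # 1.0
```

So the penalty lives entirely on the vertex/geometry side, matching the "twice as many triangles" comparison above, while the pixel side breaks even.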