The difference between a TextureSample and a TextureObject is that a TextureSample has already been looked up, once for each pixel on the screen, at the position defined by the input UVs. At that point, when you read the output of the texture sample, it is no longer a texture at all, just a single color value per screen pixel. Each pixel can be thought of as its own little processor that runs independently of the pixels next to it (for the most part, anyway). All reference to the texture is lost after it has been sampled, so there is no way to blur it at that stage unless you manually place a bunch of sample nodes with offset coordinates, which plenty of people do. The material function “Blur Sample Offsets” provides the vectors to do so.
A TextureObject, on the other hand, is an actual reference to the texture itself, and it allows multiple samples to be taken from different UV coordinates, using either code or material functions.
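To make the distinction concrete, here is a rough CPU-side sketch of what the manual multi-sample approach does: it averages lookups at several offset UV coordinates. This is plain Python over a 2D grid, not material code; `sample()` is a stand-in for a TextureSample node, and all the names and the 3x3 offset pattern are illustrative only.

```python
# Toy "texture": a 4x4 grid with a bright 2x2 block in the middle.
texture = [
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 1.0, 0.0],
    [0.0, 1.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
]

def sample(tex, u, v):
    """Nearest-neighbour lookup with clamped UVs, like one TextureSample node."""
    h, w = len(tex), len(tex[0])
    x = min(max(int(u * w), 0), w - 1)
    y = min(max(int(v * h), 0), h - 1)
    return tex[y][x]

def blurred_sample(tex, u, v, radius_uv):
    """Average a 3x3 grid of offset lookups around (u, v) -- one blur 'tap' set."""
    offsets = [(du, dv) for du in (-1, 0, 1) for dv in (-1, 0, 1)]
    total = sum(sample(tex, u + du * radius_uv, v + dv * radius_uv)
                for du, dv in offsets)
    return total / len(offsets)

# A single sample at the centre reads the sharp value (1.0); the blurred
# version pulls in the surrounding zeros and lands at 4/9.
print(sample(texture, 0.4, 0.4))
print(blurred_sample(texture, 0.4, 0.4, 0.25))
```

In a material you would wire the same thing up with nine TextureSample nodes (or a loop over a TextureObject in a Custom node), with the offsets coming from something like “Blur Sample Offsets”.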
I think you mean that doing a simple downsample results in visible bilinear filtering artifacts rather than a smooth blur. That is true, but for extreme blurs, downsampling is still the first step of the process, depending on the level of blur desired. It takes fewer samples to remove the bilinear artifacts than it does to grow the blur radius at full resolution. Still, for many applications you can simply use Mip Bias to look up the mip maps, which would be along the lines of what BrUnO suggested.
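For reference, here is a small sketch of what a mip-biased lookup actually reads: each mip level is the previous one box-filtered 2x2, so biasing the sample up one mip is a one-tap “blur”, at the cost of the blocky bilinear artifacts mentioned above when the result is magnified. Plain Python, illustrative only; real mip generation may use better filters than a 2x2 box.

```python
def next_mip(tex):
    """Downsample one mip level with a 2x2 box filter (assumes even dimensions)."""
    h, w = len(tex), len(tex[0])
    return [[(tex[y][x] + tex[y][x + 1] +
              tex[y + 1][x] + tex[y + 1][x + 1]) / 4.0
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

mip0 = [
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
]
mip1 = next_mip(mip0)   # 2x2: each texel is the average of a 2x2 block
mip2 = next_mip(mip1)   # 1x1: the average of the whole texture
print(mip1)             # [[0.25, 0.0], [0.0, 0.25]]
print(mip2)             # [[0.125]]
```

The two bright texels get smeared into their 2x2 blocks at mip 1 and into a single flat value at mip 2, which is exactly why a pure mip-bias “blur” reads as blocky rather than smooth.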
If you want a higher-quality blur than the simple mip-bias downsample, you need to use multiple samples, either manually or by passing a TextureObject to a function that will do it for you. There are methods to cut down on the number of samples, such as sampling at UV coordinates with a half-texel offset to get one level of blur for free from the bilinear filter, or passing your blur iteratively through a few render target passes.
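The half-texel trick works because, with bilinear filtering, a sample placed exactly on the corner shared by four texels weights each of them 25%, so one hardware tap gives a free 2x2 box blur. A minimal sketch of the math, assuming the usual convention that texel centres sit at (i + 0.5) / size; `bilinear()` is an illustrative CPU stand-in for the GPU filter, not real sampler code.

```python
def bilinear(tex, u, v):
    """Bilinear filtering with texel centres at (i + 0.5) / size, clamped edges."""
    h, w = len(tex), len(tex[0])
    # Shift so texel centres land on integer coordinates, then clamp.
    x = min(max(u * w - 0.5, 0.0), w - 1.0)
    y = min(max(v * h - 0.5, 0.0), h - 1.0)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = tex[y0][x0] * (1 - fx) + tex[y0][x1] * fx
    bot = tex[y1][x0] * (1 - fx) + tex[y1][x1] * fx
    return top * (1 - fy) + bot * fy

tex = [
    [0.0, 4.0],
    [8.0, 12.0],
]
# UV (0.5, 0.5) is the corner shared by all four texels of this 2x2 texture,
# i.e. a half-texel offset from any texel centre:
print(bilinear(tex, 0.5, 0.5))   # 6.0 == (0.0 + 4.0 + 8.0 + 12.0) / 4
```

So four corner-offset taps behave like sixteen individual texel reads, which is where the “one level of blur for free” comes from.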
Using a SceneCapture just to get the post process is indeed a pretty complicated setup and probably carries a fairly bulky overhead cost, but rolling your own blur that is faster than the Gaussian post process in UE4 will require all of the above methods. At what point that tips in favor of the fixed overhead mentioned is a bit of an unknown, as it could vary widely on different hardware.
So this raises the question: why exactly are you so averse to texture objects?
Edit: a hacky method you could try is adding a small fraction of “Dither Temporal AA” to your texture UVs. By default that causes a bit of a directional streak, but if you rotate the values over time it can spread the blur out. It will look smeary and messy, though, and objects moving over it will leave trails. It only works well when Temporal AA is enabled.
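The idea behind that hack can be sketched on the CPU: each frame takes a single tap at a jittered UV, and the temporal accumulation averages the results, converging toward an area average over the jitter footprint instead of the sharp value. Here the accumulation is an explicit loop over frames and the jitter is uniform random noise; in the material it would be the dither pattern resolved by Temporal AA. All names and numbers are illustrative.

```python
import random

def sample(tex, u, v):
    """Nearest-neighbour lookup with clamped UVs -- one tap per frame."""
    h, w = len(tex), len(tex[0])
    x = min(max(int(u * w), 0), w - 1)
    y = min(max(int(v * h), 0), h - 1)
    return tex[y][x]

# A sharp bright centre texel surrounded by black.
texture = [
    [0.0, 0.0, 0.0],
    [0.0, 9.0, 0.0],
    [0.0, 0.0, 0.0],
]

random.seed(0)
frames = 100000
radius = 0.5   # jitter radius in UV space, wide enough to cover the grid
acc = 0.0
for _ in range(frames):
    du = random.uniform(-radius, radius)
    dv = random.uniform(-radius, radius)
    acc += sample(texture, 0.5 + du, 0.5 + dv)
avg = acc / frames
# Converges toward the average over the footprint (~1.0 here), not the
# sharp centre value of 9.0 -- the blur emerges only over time.
print(avg)
```

This also shows why the result smears: the average is only correct once many jittered frames have been accumulated, so anything moving across the blurred region drags stale samples along as trails.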