It looks like UE4’s handling of shader type precision is currently broken on several fronts:
In GlslBackend.cpp, when bDefaultPrecisionIsHalf is true, GetPrecisionModifier actually defaults to high precision. This looks like a straightforward logic bug, and it forces every variable to highp no matter how it is declared in the shader files.
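A minimal sketch of the kind of logic error described above. The enum and helper here are illustrative stand-ins, not the actual UE4 code; the real GetPrecisionModifier in GlslBackend.cpp is more involved:

```cpp
#include <cassert>
#include <string>

// Illustrative stand-in for the real precision enum.
enum class EPrecision { Default, Half, Fixed };

// Buggy shape: the default case ignores bDefaultPrecisionIsHalf
// entirely, so everything without an explicit modifier ends up highp.
std::string GetPrecisionModifier_Buggy(EPrecision P, bool bDefaultPrecisionIsHalf)
{
    switch (P)
    {
    case EPrecision::Half:  return "mediump";
    case EPrecision::Fixed: return "lowp";
    default:                return "highp"; // flag is never consulted
    }
}

// Corrected shape: the default precision actually honors the flag.
std::string GetPrecisionModifier_Fixed(EPrecision P, bool bDefaultPrecisionIsHalf)
{
    switch (P)
    {
    case EPrecision::Half:  return "mediump";
    case EPrecision::Fixed: return "lowp";
    default:                return bDefaultPrecisionIsHalf ? "mediump" : "highp";
    }
}
```

With the buggy shape, setting bDefaultPrecisionIsHalf has no observable effect, which matches the symptom above.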
HLSL low-precision types such as min10float are not parsed at all by the cross compiler, and simply cause mobile shaders to fail to compile.
If you declare a uniform member with EShaderPrecisionModifier::Fixed, it emits variables of type “fixed”, which is not even a valid HLSL keyword. It comes from Nvidia’s Cg language, which Unity uses as the basis for its ShaderLab language; ShaderLab mostly looks like HLSL, but isn’t.
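A hedged illustration of that emission path. EShaderPrecisionModifier is the real UE4 enum name, but the emitter functions below are made up to show the shape of the problem and a plausible fix:

```cpp
#include <cassert>
#include <string>

// The real enum lives in UE4's shader code; values here are illustrative.
enum class EShaderPrecisionModifier { Float, Half, Fixed };

// Buggy shape: Fixed passes the Cg keyword "fixed" straight through,
// which neither HLSL nor GLSL accepts, so downstream compiles fail.
std::string EmitScalarType_Buggy(EShaderPrecisionModifier M)
{
    switch (M)
    {
    case EShaderPrecisionModifier::Half:  return "half";
    case EShaderPrecisionModifier::Fixed: return "fixed"; // Cg-ism, invalid output
    default:                              return "float";
    }
}

// Plausible fix: emit a valid GLSL ES declaration for each modifier.
std::string EmitScalarType_Fixed(EShaderPrecisionModifier M)
{
    switch (M)
    {
    case EShaderPrecisionModifier::Half:  return "mediump float";
    case EShaderPrecisionModifier::Fixed: return "lowp float";
    default:                              return "highp float";
    }
}
```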
This is a huge problem for mobile: it makes shader math take two to four times as long as necessary. It also fails to make good use of current PC hardware, since the latest Nvidia cards also run 16-bit float math at twice the speed of 32-bit, but to take advantage of that you need to use min16float. The shader code currently uses the HLSL type “half”, but that type is essentially deprecated and just defaults to 32 bit on all current compilers.
I have been working to fix some of these issues, but they are pretty pervasive and it is slow going. Getting it to translate “half” to “mediump float” was fairly easy, and a big speedup, but adding any support for “lowp float” touches a lot of systems because the HLSL parser has no equivalent type. UE4 doesn’t get anywhere close to optimal shader performance on mobile right now, and that holds it back on mobile VR especially.
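The translations described above can be sketched as a simple type map. This is not the actual UE4 code path; the min10float and min16float rows are the assumed fixes being argued for, not what ships today:

```cpp
#include <cassert>
#include <string>

// Illustrative HLSL-scalar to GLSL ES translation table.
std::string TranslateHLSLScalar(const std::string& HlslType)
{
    if (HlslType == "float")      return "highp float";
    if (HlslType == "half")       return "mediump float"; // the easy win described above
    if (HlslType == "min16float") return "mediump float"; // assumed fix
    if (HlslType == "min10float") return "lowp float";    // needs new HLSL-parser support
    return HlslType; // pass anything else through unchanged
}
```

The hard part isn’t this table; it’s that “lowp float” has no HLSL-side type for the parser to carry through the pipeline.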