Texture read performance costs?

I am trying to understand the details of shader optimization, and I can't find a clear answer to some of my questions. I know that many of them are strongly hardware-dependent, but any hint would be helpful. Here are my questions:

1. Is the performance cost higher for a texture read from a bigger texture than from a smaller one?

2. If there is enough memory to store them, does texture size matter in any way other than game loading time?

3. Do different texture compression settings influence shader performance, and how?

4. How much longer does a texture read take, or cost, than simple math instructions?

What I found out so far:

  1. Yes, bigger textures are a bit slower - I saw a mention of that in the updated optimization guide for UE4. I don't know by how much, though; it's probably very hardware-dependent.
  2. As far as I could find out, only in the way mentioned above.
  3. Yes, but unfortunately I don't know exactly how in most cases. Unreal's normal map compression uses DeriveNormalZ whenever reading normals.
  4. A couple of times longer, as far as I gathered - probably an order of magnitude - although I couldn't find substantial information, most likely because it's very hardware-dependent.
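As an aside on the DeriveNormalZ trick mentioned in point 3: a tangent-space normal is unit length, so a two-channel compressed normal map (e.g. BC5) can store only X and Y and let the shader reconstruct Z as sqrt(1 - x² - y²). A minimal Python sketch of that math (the function name is my own, not an engine API):

```python
import math

def derive_normal_z(x, y):
    """Reconstruct the Z component of a unit-length tangent-space
    normal from its X and Y components, as a shader does when
    sampling a two-channel (e.g. BC5) compressed normal map."""
    # Clamp to guard against compression error pushing x*x + y*y above 1.
    return math.sqrt(max(0.0, 1.0 - x * x - y * y))

# A flat tangent-space normal (0, 0, 1) stores x = y = 0:
print(derive_normal_z(0.0, 0.0))  # 1.0
```

The clamp matters in practice: quantization in the compressed channels can make x² + y² slightly exceed 1, and sqrt of a negative would produce NaN.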
Answers I got:

  1. Yes.

  2. Yes, it will affect GPU performance adversely.

  3. Yes - the less memory the texture occupies, the faster the fetch will be.

  4. You can't compare them directly: texture fetches and math are executed on different parts of the GPU. Take 50-70 math instructions per fetch as a very rough estimate.
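To put rough numbers on answer 3: block compression shrinks the bytes the texture unit has to pull through the cache, which is where the fetch speedup comes from. A sketch using common per-pixel sizes (uncompressed RGBA8 is 4 bytes/pixel, BC1 is 0.5, BC5 is 1; a full mip chain adds roughly a third on top of the base level - the exact footprint on a given GPU may differ due to alignment and padding):

```python
def texture_bytes(width, height, bytes_per_pixel, mips=True):
    """Approximate GPU memory footprint of a 2D texture.
    A full mip chain adds about 1/3 (1 + 1/4 + 1/16 + ... = 4/3)."""
    base = width * height * bytes_per_pixel
    return int(base * 4 / 3) if mips else int(base)

# Bytes per pixel for a few common formats:
formats = {"RGBA8": 4.0, "BC1": 0.5, "BC5": 1.0}

for name, bpp in formats.items():
    mb = texture_bytes(2048, 2048, bpp) / (1024 * 1024)
    print(f"2048x2048 {name}: {mb:.1f} MB")
```

So a 2048x2048 texture drops from about 21 MB (RGBA8 with mips) to under 3 MB with BC1, which is a large difference in how much of it stays resident in cache.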