[General C++] Getting weird results when using ++i multiple times in one statement

I’m having to use ++i four times in one statement, trying to convert four sequential bytes into a 32-bit integer:


uint32 Value = (uint32)((Bytes[CurrentByte] << 24) | (Bytes[++CurrentByte] << 16) | (Bytes[++CurrentByte] << 8) | (Bytes[++CurrentByte]));

If CurrentByte starts at 0, I would expect this to be the same as


uint32 Value = (uint32)((Bytes[0] << 24) | (Bytes[1] << 16) | (Bytes[2] << 8) | (Bytes[3]));

However, it behaves as if only the final increment is applied, i.e.


uint32 Value = (uint32)((Bytes[3] << 24) | (Bytes[3] << 16) | (Bytes[3] << 8) | (Bytes[3]));

I’m curious as to why this is. Do increments only happen at the end of a statement? The following code works as I would expect:


Bytes[CurrentByte];   // is the same as Bytes[0]
Bytes[++CurrentByte]; // is the same as Bytes[1]
Bytes[++CurrentByte]; // is the same as Bytes[2]
Bytes[++CurrentByte]; // is the same as Bytes[3]

The C++ standard does not specify the order in which multiple "++" operators within a single expression are evaluated; their side effects are only guaranteed to have happened once a "sequence point" is reached. In fact, modifying the same variable more than once without an intervening sequence point is undefined behavior. The exact definition of a sequence point can be looked up in the standards document if you're really curious, but please don't write code that depends on it, because it's hard to read and confusing.
It seems your compiler, in the version you're using and with the compile options you're using, chooses to run all of the pre-increments before evaluating the array accesses, which it is entirely allowed to do: neither the array indexing nor the bitwise OR introduces any sequencing.
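
If you really did want the increments, the well-defined way is to split them into separate statements, so each modification of CurrentByte is sequenced before the next read (just a sketch of that approach):


uint32 Value = (uint32)Bytes[CurrentByte] << 24;   // reads Bytes[CurrentByte]
Value |= (uint32)Bytes[++CurrentByte] << 16;       // reads Bytes[CurrentByte + 1]
Value |= (uint32)Bytes[++CurrentByte] << 8;        // reads Bytes[CurrentByte + 2]
Value |= (uint32)Bytes[++CurrentByte];             // reads Bytes[CurrentByte + 3]
// Note: CurrentByte ends up pointing at the last byte read, matching the intent of the original.
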

It turns out that the code you’re trying to write is also less efficient. Adding 4 and adding 1 cost the same, so adding 1 four times is slower than adding 4 once, and harder to read.

So, the way you’d typically want to write this is:



uint32 value = ((uint32)bytes[0] << 24) | ((uint32)bytes[1] << 16) | ((uint32)bytes[2] << 8) | ((uint32)bytes[3]);
bytes += 4;


Darn - the index isn’t actually zero, it’s a variable. I was using the increments because they were neater than


uint32 Value = (uint32)((Bytes[CurrentByte] << 24) | (Bytes[CurrentByte + 1] << 16) | (Bytes[CurrentByte + 2] << 8) | (Bytes[CurrentByte + 3]));

Guess that’s the way it is though! Thanks for the explanation!
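
For what it’s worth, one way to keep the explicit offsets readable is to wrap the read in a small helper (just a sketch; ReadUInt32BE is an illustrative name, not an existing function, and uint8/int32 are assumed to follow the same typedef convention as uint32):


// Sketch: read 4 bytes big-endian starting at Index, then advance Index past them.
uint32 ReadUInt32BE(const uint8* Bytes, int32& Index)
{
    uint32 Value = ((uint32)Bytes[Index] << 24)
                 | ((uint32)Bytes[Index + 1] << 16)
                 | ((uint32)Bytes[Index + 2] << 8)
                 | ((uint32)Bytes[Index + 3]);
    Index += 4;
    return Value;
}

// Usage at the call site:
uint32 Value = ReadUInt32BE(Bytes, CurrentByte);
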