This came up when I was binary OR-ing something with (int256)1 << 32 and somehow getting the 0th bit set as well as the 32nd; it seems that TBigInt often produces incorrect results when you shift by a multiple of 32. For instance, if I have a TBigInt<256> with the bits {7, 2, 1, 0, 0, 0, 0, 0}, shifting it left 32 bits produces {7, 7, 3, 1, 0, 0, 0, 0}, rather than the desired {0, 7, 2, 1, 0, 0, 0, 0}. Looking at the source for TBigInt::ShiftLeftInternal(), it appears that this is because the code banks on an int32 shifted by 32 or more bits producing 0, when in actuality that's undefined behavior in C++; on x86 the shift count gets masked, so the value typically comes through unchanged (which would explain why I'd heard the result might be compiler-dependent).
At any rate, I just thought I'd post this here; it's not a big issue for what I was using int256 for (I basically wanted a huge bitfield enum, and losing just the multiples of 32 doesn't mean much). I have to think that shifting by a multiple of 32 could be optimized by simply rotating the underlying uint32 array, anyway. Thanks!