Severe loss of precision with random integers

I just want a sanity check here. I was trying to display a random integer seed (0 -> max int32) to players using a converted hex value and noticed that my least significant digits were always coming back as 0’s. Looking at UKismetMathLibrary::RandomInteger leads to FMath::RandHelper and this inline:

static FORCEINLINE int32 RandHelper(int32 A)
{
	// RAND_MAX+1 give interval [0..A) with even distribution.
	return A>0 ? TruncToInt(Rand()/(float)((uint32)RAND_MAX+1) * A) : 0;
}

Is it me, or is this dropping a lot of precision for large integers? Any particular reason for the float instead of double conversion here?

#include <stdint.h>
#include <limits.h>
#include <iostream>

template <typename T1, typename T2>
inline size_t find_numerical_errors(T2 const min, T2 const max)
{
	size_t error_count = 0;

	// Round-trip every value through T1 and count the ones that don't survive.
	for (volatile T2 i = min; i < max; ++i)
		if ((T2)((T1)i) != i)
			++error_count;

	return error_count;
}

int main()
{
	std::cout << "float/int32_t errors: " << find_numerical_errors<float, int32_t>(INT_MIN, INT_MAX) << std::endl;
	std::cout << "float/uint32_t errors: " << find_numerical_errors<float, uint32_t>(0, UINT_MAX) << std::endl;

	std::cout << "double/int32_t errors: " << find_numerical_errors<double, int32_t>(INT_MIN, INT_MAX) << std::endl;
	std::cout << "double/uint32_t errors: " << find_numerical_errors<double, uint32_t>(0, UINT_MAX) << std::endl;

	char wait;
	std::cin >> wait;

	return 0;
}

Output:

float/int32_t errors: 4143972351
float/uint32_t errors: 4211081215

double/int32_t errors: 0
double/uint32_t errors: 0