About a year ago I ended up with a floating-point value that was something like 1.0000000000078 when it should have been 1. Tore my hair out for hours trying to get the piece-of-crap embedded, vendor-locked device to just make it 1.
It's almost like some useless person stored a value drawn from a discrete set, unlikely to ever go above the hundreds, as a floating point - when it obviously should have been an int.
Nah, it makes sense to use a floating-point number here: unless the test is marked out of a divisor of 100, the final percentage will likely have a fractional part. The mistake was not rounding the value before displaying it.
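Something like this (a minimal sketch in C; the marks/total numbers are made up for illustration, not anything from the device in question):

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical example: a test marked out of 40. Since 40 is not a
     * divisor of 100, a whole-number percentage usually isn't possible:
     * 27/40 is exactly 67.5%, which is a correct result, not an artifact. */
    int marks = 27;
    int total = 40;
    double pct = 100.0 * marks / total;

    printf("stored value:    %.12f\n", pct);  /* keep full precision internally */
    printf("displayed value: %.1f%%\n", pct); /* round only at display time */
    return 0;
}
```

Keeping the full-precision value internally and rounding only in the formatting step is all the UI needs; it would also have hidden the 1.0000000000078-style noise the parent comment ran into.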