Whilst I have read some background on floating point numbers not being able to store values such as 0.1 exactly, I am not clear on why converting from single to double precision appears so inaccurate.
Converting a single 0.1 to a double 0.1 yields
0.10000000149011612
But 0.1 can be represented more accurately in a double, even if still not perfectly, as
0.10000000000000000
This stems from the single representation of 0.1 actually equating to exactly 0.100000001490116119384765625 in binary, a value the widening conversion preserves bit-for-bit.
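To illustrate what I mean, here is a minimal Python sketch; since Python floats are doubles, the `struct` round-trip stands in for an actual single-precision variable:

```python
import struct

# Round 0.1 to single precision by packing it as a 4-byte IEEE 754
# float, then unpack it back into a Python float (i.e. a double).
single_as_double = struct.unpack('<f', struct.pack('<f', 0.1))[0]

print(single_as_double)            # 0.10000000149011612
print(f"{single_as_double:.30f}")  # 0.100000001490116119384765625000
```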
However, we know that a single only carries about seven significant decimal digits of precision.
I would like a conversion function that converts single 0.10000000 to double 0.1000000000000000.
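One way to get that behaviour (a sketch of my own, not a standard library facility; the name `widen_via_shortest_decimal` is invented for illustration) is to find the shortest decimal string that round-trips to the same single, then parse that string as a double:

```python
import struct

def _to_single_bits(x: float) -> int:
    """Round x to single precision and return its raw bit pattern."""
    return struct.unpack('<I', struct.pack('<f', x))[0]

def widen_via_shortest_decimal(single_value: float) -> float:
    """Widen a single to a double by re-rounding through the shortest
    decimal string that still parses back to the same single.

    `single_value` is assumed to already hold a single-precision value
    (e.g. produced by a struct round-trip or read from a float32 source).
    """
    target_bits = _to_single_bits(single_value)
    # 9 significant digits always suffice to round-trip a single,
    # so searching 1..9 digits is guaranteed to terminate.
    for digits in range(1, 10):
        candidate = f"{single_value:.{digits}g}"
        if _to_single_bits(float(candidate)) == target_bits:
            return float(candidate)
    return single_value  # unreachable for finite inputs

single_as_double = struct.unpack('<f', struct.pack('<f', 0.1))[0]
print(single_as_double)                               # 0.10000000149011612
print(widen_via_shortest_decimal(single_as_double))   # 0.1
```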
I can appreciate that the standard conversion would still need to exist, as real measured numbers in engineering would suffer fewer overall inaccuracies with the existing exact widening behaviour than with any decimal re-rounding.
One other question:
Can the conversion be affected by the OS, framework, or hardware, or will it be consistent on different machines? Are there any settings that can change the conversion behaviour?