Old 28 December 2022, 11:33   #28
Thomas Richter
Registered User
 
Join Date: Jan 2019
Location: Germany
Posts: 3,233
Quote:
Originally Posted by Bruce Abbott
I'm not asking for infinite precision, just that when I type in two different numbers that should be represented as different binary numbers in the selected format, I don't get the same number out.
Sorry, I'm confused. That sounds like a requirement on the ASCII to IEEE conversion, which is something different. The topic here is the IEEE to ASCII conversion. If you perform this type of "optical rounding", the map from IEEE to ASCII is certainly not injective, meaning that two different inputs will map to the same output, precisely because you round. That is exactly the type of loss you would probably like to avoid.
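A small Python sketch (mine, not from the thread) of this non-injectivity: two distinct IEEE doubles that produce the same decimal string once you apply "optical rounding":

```python
# Two distinct IEEE-754 doubles: 100.0 and the nearest double
# below it (the literal below parses to 100.0 minus one ULP).
a = 100.0
b = 99.99999999999999

print(a == b)       # False: the binary values differ
print(f"{a:.10f}")  # 100.0000000000
print(f"{b:.10f}")  # 100.0000000000 - same string, mapping not injective
```

Once both values print as the same rounded string, the distinction between them is lost on output, which is the loss described above.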



Quote:
Originally Posted by Bruce Abbott
Obviously. But in the 'real' world we want those floating point numbers to represent the actual numbers we are trying to use. So if I type in a nice round decimal number I want to see it come out the same.
That's more of a "lying to the user" thing. If a particular number cannot be represented precisely as a floating point value, and the resulting IEEE number is closer to another number whose decimal representation "is not so nice", well, then that's it.
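To see what "cannot be represented precisely" means in practice, one can print the exact value stored for a decimal literal (a Python sketch, not part of the original discussion):

```python
from decimal import Decimal

# 0.1 has no finite binary expansion; the stored double is the
# nearest representable value, which is slightly larger than 0.1.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```

Printing "0.1" for this value is the "nice" rounding; the digits above are what the IEEE number actually holds.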



Quote:
Originally Posted by Bruce Abbott
In most real-world applications numbers are rounded for display anyway, because humans have trouble dealing with huge numbers of digits.
That depends on the application. For scientific applications, that may or may not be the right thing to do. If my input has uncertainty in it, because it was measured with limited precision, rounding does make sense because the trailing digits have no meaning anyhow. If the number is the output of a mathematical computation (and that was the case in my application - a Mandelbrot fractal renderer), rounding does not make sense because the numbers are exact.
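When the numbers are exact, printing enough digits matters: 17 significant decimal digits are guaranteed to round-trip an IEEE double, while fewer may not. A Python sketch of the difference:

```python
x = 1 / 3  # an exact double, even though 1/3 is not a "nice" decimal

s15 = f"{x:.15g}"  # 15 significant digits: information is lost
s17 = f"{x:.17g}"  # 17 significant digits: round-trips exactly

print(float(s15) == x)  # False
print(float(s17) == x)  # True
```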


Quote:
Originally Posted by Bruce Abbott
Why is because that is what the programmer probably entered. But hey, next time I write a program with floating point literals in it, I will deliberately type in 99.99999999999999876 instead of 100, just to mess with you.
You can try that with DMandel if you like. What will happen is that the ASCII to IEEE conversion will pick the IEEE number closest to your input, which will likely be 100.0, and the output conversion will then represent it as exactly 100.0. If that nearest number is not 100.0, the conversion from IEEE to ASCII will not print 100.0, and rightfully so. The problem with optical rounding is that it may print 100.0 even though the input number is *not* 100.0, despite 100.0 being exactly representable.
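This can be checked directly: the literal 99.99999999999999876 is within half a ULP of 100.0, so nearest-value parsing picks exactly 100.0, and the output stage then rightly prints 100.0 (Python sketch):

```python
x = float("99.99999999999999876")

# The ULP of 100.0 is 2**-46, roughly 1.42e-14, so any literal within
# about 7.1e-15 of 100.0 parses to exactly 100.0.
print(x == 100.0)  # True: the nearest double to the literal is 100.0
print(repr(x))     # '100.0' - printed exactly, no "optical rounding" needed
```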




Quote:
Originally Posted by Bruce Abbott
So you would prefer to present the user with 99.99999999999999876 instead of 100, even though the last 3 digits are essentially random?
Yes, I would. Simply on the basis that the IEEE binary number is not 100.0. Quite simple, actually. Print what you got, not "what looks nice".
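"Print what you got" does not require endless digits: a shortest-round-trip formatter (which is what Python's repr uses; this is an illustration, not the algorithm DMandel uses) prints the fewest digits that still read back to the identical double, so it never lies but also never pads with noise:

```python
for x in (100.0, 0.1, 1 / 3):
    s = repr(x)           # shortest decimal string that round-trips
    assert float(s) == x  # reading it back recovers the exact double
    print(s)
```

So 100.0 prints as "100.0" only if the stored double really is 100.0; otherwise the extra digits appear because they are genuinely needed to identify the value.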


Quote:
Originally Posted by Bruce Abbott
If you want to display fp numbers totally accurately then do it in hex (or binary for the true masochists). That way there is zero loss of precision. Might be a bit harder for a human to digest though...
That is exactly why it is not done, of course.
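As the quote suggests, hexadecimal float notation is exact: a double can be dumped and re-read with zero loss (Python sketch):

```python
x = 0.1
h = x.hex()                   # exact hexadecimal representation
print(h)                      # 0x1.999999999999ap-4
print(float.fromhex(h) == x)  # True: lossless round-trip
```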


That is, of course, the reason why certain applications use BCD arithmetic instead of binary arithmetic. It avoids losses when converting from and to human-readable formats. However, BCD arithmetic has other problems, namely that the average rounding loss when performing arithmetic operations (such as +, -, *, /) is higher than with binary arithmetic. So you win at one end, but lose at the other.
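Python's decimal module behaves like the decimal/BCD arithmetic described here and shows both sides of the trade-off (illustrative sketch):

```python
from decimal import Decimal, getcontext

# Decimal avoids the conversion loss: 0.1 + 0.2 is exactly 0.3 ...
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
print(0.1 + 0.2 == 0.3)                                   # False in binary

# ... but its arithmetic still rounds, e.g. division at 28 digits:
getcontext().prec = 28
print(Decimal(1) / Decimal(3))  # 0.3333333333333333333333333333
```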


In terms of "inner precision" (rounding loss within the number format), binary is the best.