Req: code to convert float to ASCII
Hi,
Floating-point stuff isn't my cup of tea at all, but sometimes I find FP values in existing code and can't do much with them. It appears the OS does not provide anything to show these. I know the C standard library can do this, but using C is not ideal for me. So if someone here has a short, easy-to-use, accurate float display routine with no external dependencies, you would make someone happy :great Note that if you wanted a new idea for a coding contest, this may be it too :D 32-bit floats should be enough for now, but other sizes are welcome too.
Quote:
But I'm curious about it. I guess only integer instructions are accepted...
Quote:
You mean displaying a float as text?
I would create a simple SAS/C program, disassemble it and pick out the conversion part.
If this is for debugging rather than presentation, I'd just show them as raw hexadecimal. Plenty of tools exist that can show the IEEE interpretation of a 32- or 64-bit hexadecimal value. Next up the complexity scale is showing it in decimal scientific notation but still with a power-of-two exponent. The mantissa will then always be a decimal between 0.5 and 1, with some power-of-two exponent. That's only about as complicated as converting an integer to ASCII.
If it's for general presentation in other contexts and you need to display it as a decimal with a power-of-ten exponent... well, that's going to be more complicated. I would probably reconsider my position on dependencies in this case.
Quote:
It's not as easy as it looks, because navigating disassembled C library code inside a program isn't exactly straightforward. It may also have lots of options I don't need, or depend on external libraries. Source for such C library code would help, though. Quote:
But hey, if it was easy, I'd just have done it and not opened a thread here ;)
Quote:
You can find suitable functions here: http://aminet.net/dev/lib/ThorLib.lha This is a link library containing a couple of support functions, in particular for converting between various floating-point types and ASCII. "Link library" means that only the small segments you actually use appear in your binary, not the whole thing. In particular, in thor/conversions.h, you find the following: Code:
Code:
void __regargs SToX(shortfloat *in,extendedfloat out);

With those two, you get the functionality you need. There are also some lower-level functions that give you a bit more freedom in formatting.
Quote:
Is source code available? Quote:
Quote:
If you're only looking for a function that does its job, then you don't need sources. Just link to it, and you'll be happy. It's written in assembler, by the way; there is no really good way to do this conversion from C. Quote:
32 bytes should be enough. The precision is 16 digits if I recall correctly, plus the exponent.
Quote:
Ideally the method must not use floats itself: just extract the exponent and mantissa (which is easy), then do its job on integers.
Ok then, I'll walk you through. It will be a series of posts, because the entire thing is a bit too long to fit into a single one (plus, it would not be very helpful in that form). Yes, it is all integer: it needs neither the math libraries nor the FPU. The math libraries do not provide the precision I wanted, and the FPU may not be available.
The general outline is the following: the main function AToX() is a small wrapper around a lower-level function which extracts the sign, the decimal exponent of the number in binary form, and the mantissa as a sequence of ASCII digits. From that, one can easily build the desired string by placing the decimal dot in the right place and adjusting the exponent, plus some "optical rounding" if desired. This function also covers the special cases 0, +INF, -INF and the NANs.

The lower levels are more interesting, and I will provide them as snippets one after another:

1. Extract from the floating-point number an approximation of the decadic logarithm of the number. This takes the binary exponent and the leading bits of the mantissa to compute an approximation of log10(input). This is where the "magic constants" are; the approximation is fixed up later.

2. Normalize the input such that the mantissa lies between 0.1 and 1. This takes the log10() from the first step, then runs a multiplication chain which multiplies the number by powers of 10 of the form 10^(2^n), following the binary expansion of the exponent from the first step. That is the long but algorithmically not so complex part.

3. Perform some rounding and normalization of the resulting mantissa, i.e. ensure that it really lies between 0.1 and 1, plus a fixup of the exponent from the first step.

4. Continuously multiply the extracted mantissa by 10, each time removing the next decimal digit from its integer part and storing it in the output buffer. That is fairly trivial.

The entire conversion goes over several subroutines, so it is a bit lengthy. I'll start with the first step in the next post.
Why not just convert to FFP and then use fpa to convert?
http://amigadev.elowar.com/read/ADCD.../node015B.html Edit: it's fpa not fta. |
Here is the first part of the job: creating a normalized representation.
Code:
;;; **************************************************************

The purpose of the function above is to normalize the input number to one that lies between 0.1 and 1; an approximation of the decimal exponent is returned in d0. More on TimesIntPowTen later, even though the function itself is rather trivial.
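The exponent-approximation idea can be sketched in C with one fixed-point constant. This is an assumed constant choice for illustration (19728/2^16 ≈ log10(2)), not the thread's actual magic number, and unlike the real routine it looks only at the binary exponent, not the leading mantissa bits — which is fine, since the estimate gets corrected later anyway.

```c
#include <stdint.h>

/* Integer estimate of floor(log10(2^e)), using 19728/65536 ~= log10(2).
 * Hypothetical sketch; the estimate may be off by one near decade
 * boundaries, which the later fixup step absorbs. */
static int32_t log10_of_pow2(int32_t e)
{
    int64_t t = (int64_t)e * 19728;
    if (t < 0) t -= 65535;         /* make the division floor, not trunc */
    return (int32_t)(t / 65536);
}
```

For example, e = 127 (the largest single-precision exponent) gives 38, matching 2^127 ≈ 1.7e38.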
Quote:
The main 32-bit routine is shown below. Complete source code for the single/double/extended formats is attached. Code:
FPDIGITS set 6 ; number of decimal digits to show

Getting floating point 'right' under all circumstances is surprisingly difficult. I recently discovered that the assembler I use doesn't produce correct floating-point numbers, so I can't even disassemble some code and then reassemble it to check for correct conversion. There are lots of online converters with inscrutable code that assume you have a particular environment (JavaScript, Intel FPU etc.) whose accuracy cannot be verified. I am not by any means an expert on floating-point formats, but when several online calculators can't even agree (let alone show proof that their algorithms are correct), it doesn't inspire confidence.
Good lord, what an absolute pain :scream Quite the opposite of converting integers to ASCII :rolleyes
Next in line is TimesIntPowTen, which multiplies a number by a power of 10. This is required to normalize the mantissa into the interval [0.1,1] (or mostly), given the log10() approximation from yesterday's post.
Code:
;;; **************************************************************

The interesting part here is that I do not use a continuous multiplication by 10 (or 1/10) for normalization, because that would just accumulate errors. Instead, an arbitrary power 10^n is split up into multiplications with powers of 10 to powers of 2, i.e. write Code:
10^n = 10^(\sum_i b_i 2^i) = \prod_i (10^(2^i))^(b_i)

The tables PowTenTable and InvPowTen contain numbers of the form 10^(2^i) with i from 1 to 12, and 10^(-2^i) also for i = 1 to 12, corresponding to the range of extended precision. It is probably no coincidence that the 68882 contains the same numbers in its constant ROM, probably for exactly the same purpose. To be continued...
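The decomposition itself is easy to demonstrate with plain integers. The toy below (hypothetical, written for this post — the real routine works on fixed-point mantissas and also has a reciprocal table for negative n) builds 10^n for n in 0..19 from factors 10^(2^i), walking the binary expansion of n:

```c
#include <stdint.h>

/* Build 10^n (n in 0..19 so the result fits in 64 bits) as a product
 * of 10^(2^i) factors, one per set bit of n.  Toy version of the
 * TimesIntPowTen idea; the thread's tables start at i = 1 and include
 * negative powers as well. */
static uint64_t pow10_u64(unsigned n)
{
    static const uint64_t tab[] = {      /* 10^(2^i), i = 0..4 */
        10ULL, 100ULL, 10000ULL, 100000000ULL, 10000000000000000ULL };
    uint64_t r = 1;
    for (unsigned i = 0; n != 0; n >>= 1, i++)
        if (n & 1) r *= tab[i];          /* bit i set -> multiply by 10^(2^i) */
    return r;
}
```

So 10^13 costs three multiplications (13 = 8+4+1) instead of thirteen, and — the point made above — the error of a fixed-point version grows with the handful of table factors rather than with n itself.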
Quote:
Yes, it is unfortunately quite involved, especially if you want to keep the precision high and waste as few bits as possible. For double-precision input (and this was the development target of my functions) you need extended precision to avoid "wasting precision". For extended precision, you would again need some extra bits, but even the 68882 does not do that and only has a precision of ~8 ulp for its extended-to-packed-decimal conversion. That conversion is almost the same as the conversion to ASCII. I still wonder why CBM did not include it in mathieeedoubtrans. The latter library is more or less a direct copy from the Greenhill C compiler CBM used back then (and for the compilation of intuition up to 3.1) until they switched to Lattice (which then became SAS/C). The Greenhill C math library surely included something similar, but the function is probably hard to isolate since it requires several subroutines (as mine does).

The amiga.lib fpa function is rather low quality (along with a lot of other stuff in that library) and not very precise. There is something similar in Aztec C, but their implementation is comparably poor and of low precision. SAS/C is ok mathematically; the only thing I did not like is that they perform some "optical rounding", i.e. their function aims at generating "nice" numbers, such that 0.9999999 is displayed as 1.0 even though it is not 1.0.
Quote:
Quote:
My goal here is to get something that does a 'good enough' job (i.e. is at worst slightly off but never completely wrong). Quote:
Sure, showing 0.9999999 as 1.0 is wrong, but then what about 0.1? Without a little rounding, it is never shown as 0.1, as this number cannot be represented exactly as a binary float.
So here comes the rest. The remaining code removes the remaining binary exponent, then adds 0.5 ulp of the output precision, and then uses a long shift register, multiplying the mantissa by 10 each time and copying the integer part to the target buffer.
It expects the floating point number in a0 and generates results in a1, the number of digits in d0. Code:
saveregs d2-d7/a2-a6

The higher-level function for that is again not particularly interesting; it is pretty much boilerplate code and does not require assembly.
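The "long shift register" of that final step can be sketched in a few lines of C (hypothetical sketch, not the 68k code): keep the normalized mantissa from [0.1,1) as a Q60 fixed-point integer, multiply by ten, and peel the integer part off the top each round.

```c
#include <stdint.h>

/* Digit loop from the description above: frac is a Q60 fixed-point
 * number in [0, 1); each round, the top of frac * 10 is the next
 * decimal digit.  Hypothetical sketch written for this post. */
static void emit_digits(uint64_t frac /* Q60, < 2^60 */,
                        char *out, int ndigits)
{
    const uint64_t one = (uint64_t)1 << 60;
    for (int i = 0; i < ndigits; i++) {
        frac *= 10;                      /* fits: 10 * 2^60 < 2^64 */
        out[i] = (char)('0' + (int)(frac >> 60));
        frac &= one - 1;                 /* keep only the fractional part */
    }
    out[ndigits] = '\0';
}
```

On the 68k this is where the multi-word shift-and-add chain lives, since the mantissa does not fit a single register; in C the uint64_t does the same job for single precision.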