Old 26 December 2022, 11:12   #21
Thomas Richter
Registered User
 
Join Date: Jan 2019
Location: Germany
Posts: 3,217
Quote:
Originally Posted by meynaf View Post
This is because it cannot really be 'right'. Unlike integers, floats do not provide exactness. This is the main reason why I don't use floats whenever I can do otherwise.
Well.... depends on what you call "exact". If properly implemented, floating point is exact in the sense that the result of "a op b", where "op" is an operation, is identical to carrying out "a op b" in infinite precision and then rounding to the target precision. Thus, floating point is "exact to the maximum degree possible", and the only place where you lose precision is in the final rounding, because the operations are implemented "as if" they were carried out with an infinite number of bits.


This is the best you can possibly get, given that you "squeeze" the uncountable set of real numbers into a finite set of 64-bit values.
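
To make this concrete, here is a minimal C sketch (an illustration assuming IEEE-754 doubles, not Amiga-specific code): each operation is rounded individually, so adding the two already-rounded inputs 0.1 and 0.2 gives the correctly rounded sum of those two doubles, which is not the same double as the one nearest to 0.3.
Code:
#include <stdio.h>

int main(void)
{
    double a = 0.1, b = 0.2;   /* each already rounded to the nearest double */
    double sum = a + b;        /* exact sum of a and b, then a single rounding */

    printf("a + b = %.17g\n", sum);     /* 0.30000000000000004 */
    printf("0.3   = %.17g\n", 0.3);     /* 0.29999999999999999 */
    printf("equal: %d\n", sum == 0.3);  /* 0: rounding happens per operation */
    return 0;
}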




Quote:
Originally Posted by meynaf View Post
My goal here is to get something that does a 'good enough' job (i.e. is at worst slightly off but never completely wrong).
The purpose of my function was to do "the best job possible" for double precision. If you would want to do the best job possible for extended precision, the multiplication chain from my second post would need to be implemented in somewhat higher precision so you don't lose bits there.


Quote:
Originally Posted by meynaf View Post

Perhaps this can be made an option.
You can make it an option with the above functions at your fingertips. Again, what the above code delivers is a decimal exponent as a binary integer, and an ASCII-formatted mantissa still lacking the decimal dot. Having this data, you can "assemble" the output in the way you like, including any optical adjustments ("prettifications") of the number. Thus, if the last digit is a '9', you can round that up if you like. I believe SAS/C does that.
I personally do not like that because there is no good mathematical justification for it. Rounding to the precision already happens within the function I presented, and the result is the "correct result" to the degree possible.


Quote:
Originally Posted by meynaf View Post


Sure, showing 0.9999999 as 1.0 is wrong, but then what about 0.1? Without a little rounding, it is never shown as 0.1, as this number cannot be represented exactly with a binary float.

That's right, but there is a double precision number that lies closest to 0.1, and that number will be converted back to 0.1 with the above function. This is due to the rounding step in the above function (assuming 16 digit rounding).
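
A quick C sketch of that round trip (assuming IEEE-754 doubles and a correctly rounding printf, such as glibc's):
Code:
#include <stdio.h>

int main(void)
{
    double x = 0.1;        /* the double that lies closest to decimal 0.1 */
    printf("%.16g\n", x);  /* 16-digit rounding: prints 0.1 */
    printf("%.17g\n", x);  /* 17 digits: prints 0.10000000000000001 */
    return 0;
}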
Thomas Richter is offline  
Old 26 December 2022, 23:56   #22
Bruce Abbott
Registered User
 
Join Date: Mar 2018
Location: Hastings, New Zealand
Posts: 2,549
Quote:
Originally Posted by meynaf View Post
This is because it cannot really be 'right'. Unlike integers, floats do not provide exactness.
Results of calculations may not be 'right', but displaying a floating point literal in decimal should be. The binary floating point format is not ambiguous so the display in decimal should not be either.

Ideally I want a decimal fp number that compiles to the same binary value as it was converted from. Unfortunately this relies not only on converting it to decimal 'correctly', but also on the compiler or assembler doing the 'correct' conversion the other way, which in my case (ProAsm) it doesn't (you can input several slightly but significantly different 80 bit fp numbers and they come out the same).

This problem is 'solved' by using hex fp literals. However, for most purposes I am happy enough getting back what was typed into the original source code, e.g. 1000.0 disassembles as 1.0E3, not 9.9999986E2. That's why I round. It's more likely that the original number was 1000 than 999.999??.
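
For comparison, C99 addresses the same problem with hex float literals and the "%a" conversion; a minimal sketch (assuming a C99-conforming compiler and library):
Code:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    double orig = 1000.0;
    char buf[64];

    snprintf(buf, sizeof buf, "%a", orig);  /* e.g. "0x1.f4p+9" */
    double back = strtod(buf, NULL);        /* hex literals parse exactly */
    printf("%s -> %.17g, bit-exact: %d\n", buf, back, back == orig);
    return 0;
}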
Bruce Abbott is offline  
Old 27 December 2022, 00:21   #23
paraj
Registered User
 
Join Date: Feb 2017
Location: Denmark
Posts: 1,099
I don't know if it's still the state of the art, but there has been progress on fast (and correct, i.e. round-trip-safe) float->string conversion in the last 5-10 years: https://github.com/ulfjack/ryu
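
If I read the repository's C header correctly, usage is a one-liner; a hedged sketch (d2s returns the shortest string that round-trips, in exponent notation, allocated with malloc):
Code:
#include <stdio.h>
#include <stdlib.h>
#include "ryu/ryu.h"

int main(void)
{
    char *s = d2s(0.1);  /* shortest round-trip-safe output, e.g. "1E-1" */
    printf("%s\n", s);
    free(s);             /* d2s allocates the result with malloc */
    return 0;
}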
paraj is offline  
Old 27 December 2022, 08:03   #24
meynaf
son of 68k
 
Join Date: Nov 2007
Location: Lyon / France
Age: 51
Posts: 5,323
Quote:
Originally Posted by Bruce Abbott View Post
Ideally I want a decimal fp number that compiles to the same binary value as it was converted from. Unfortunately this relies not only on converting it to decimal 'correctly', but also on the compiler or assembler doing the 'correct' conversion the other way, which in my case (ProAsm) it doesn't (you can input several slightly but significantly different 80 bit fp numbers and they come out the same).
Inputting floats is as complicated as outputting them, if not more.
Fortunately, I don't currently need to do that.


Quote:
Originally Posted by Bruce Abbott View Post
This problem is 'solved' by using hex fp literals. However, for most purposes I am happy enough getting back what was typed into the original source code, e.g. 1000.0 disassembles as 1.0E3, not 9.9999986E2. That's why I round. It's more likely that the original number was 1000 than 999.999??.
Small enough integers should be represented exactly, but I see what you mean. Sure, if I have $3fff999999999999999a I'll interpret it as 1.20 rather than 1.200000047.
But I prefer not to round it automatically, so I know in which way the original number was 'wrong'.
meynaf is offline  
Old 27 December 2022, 09:50   #25
Thomas Richter
Registered User
 
Join Date: Jan 2019
Location: Germany
Posts: 3,217
Quote:
Originally Posted by Bruce Abbott View Post
However, for most purposes I am happy enough getting back what was typed into the original source code, e.g. 1000.0 disassembles as 1.0E3, not 9.9999986E2. That's why I round. It's more likely that the original number was 1000 than 999.999??.
Both requirements cannot be satisfied - getting out exactly what you typed in is not possible, since there is limited precision, so precision has to go. Since decimal and binary have different bases, the loss is not expressible in an integer number of digits either.

And that "nice numbers are more likely", well.... Why? In the end it means that the float to ascii conversion cannot distinguish between some inputs and will represent them by the same output. It will print the "nice number" the same way as the "not so nice number" near to it. That does not look right to me. It means that you loose precision because you prefer some numbers to others, which is a bit bizarre.

Actually, this is exactly the problem I wanted to avoid.
Thomas Richter is offline  
Old 27 December 2022, 20:07   #26
Bruce Abbott
Registered User
 
Join Date: Mar 2018
Location: Hastings, New Zealand
Posts: 2,549
Quote:
Originally Posted by Thomas Richter View Post
Both requirements cannot be satisfied - getting out exactly what you typed in is not possible, since there is limited precision, so precision has to go.
I can understand why you might limit it for performance, but the compiler/assembler only has to do it once, and most programs don't have many fp literals in them.

I'm not asking for infinite precision, just that when I type in two different numbers that should be represented as different binary numbers in the selected format, I don't get the same number out. But I only need this for testing my disassembler. In a practical program near enough is good enough in floating point, so if the compiler only creates literals with 64 bit precision it's no big deal. 16 significant digits should be enough for anyone.

Quote:
Since decimal and binary have different bases, the loss is not expressible in an integer number of digits either.
Obviously. But in the 'real' world we want those floating point numbers to represent the actual numbers we are trying to use. So if I type in a nice round decimal number I want to see it come out the same. In most real-world applications numbers are rounded for display anyway, because humans have trouble dealing with huge numbers of digits.

Quote:
And that "nice numbers are more likely", well.... Why?
Why? Because that is what the programmer probably entered. But hey, next time I write a program with floating point literals in it, I will deliberately type in 99.99999999999999876 instead of 100, just to mess with you.

Quote:
In the end it means that the float-to-ASCII conversion cannot distinguish between some inputs and will represent them by the same output. It will print the "nice number" the same way as the "not so nice number" near it. That does not look right to me. It means that you lose precision because you prefer some numbers to others, which is a bit bizarre.
So you would prefer to present the user with 99.99999999999999876 instead of 100, even though the last 3 digits are essentially random?

Quote:
Actually, this is exactly the problem I wanted to avoid.
If you want to display fp numbers totally accurately then do it in hex (or binary for the true masochists). That way there is zero loss of precision. Might be a bit harder for a human to digest though...
Bruce Abbott is offline  
Old 27 December 2022, 20:35   #27
meynaf
son of 68k
 
Join Date: Nov 2007
Location: Lyon / France
Age: 51
Posts: 5,323
Quote:
Originally Posted by Bruce Abbott View Post
So you would prefer to present the user with 99.99999999999999876 instead of 100, even though the last 3 digits are essentially random?
Integer numbers such as 100 can be represented exactly...
So if you see 99.99999999999999876 it is clearly not 100.
meynaf is offline  
Old 28 December 2022, 11:33   #28
Thomas Richter
Registered User
 
Join Date: Jan 2019
Location: Germany
Posts: 3,217
Quote:
Originally Posted by Bruce Abbott View Post

I'm not asking for infinite precision, just that when I type in two different numbers that should be represented as different binary numbers in the selected format, I don't get the same number out.
Sorry, I'm confused. That looks like a requirement on an ASCII-to-IEEE conversion, which is something different. The topic here is IEEE-to-ASCII conversion. If you perform this type of "optical rounding", the map from IEEE to ASCII is certainly not injective, meaning that two different inputs will map to the same output, precisely because you round. That is the type of loss you would probably like to avoid.



Quote:
Originally Posted by Bruce Abbott View Post
Obviously. But in the 'real' world we want those floating point numbers to represent the actual numbers we are trying to use. So if I type in a nice round decimal number I want to see it come out the same.
That's more a "lying to the user" thing. If a particular number cannot be represented precisely as floating point, and the resulting IEEE number is closer to another number whose decimal representation "is not so nice", well, then that's it.



Quote:
Originally Posted by Bruce Abbott View Post

In most real-world applications numbers are rounded for display anyway, because humans have trouble dealing with huge numbers of digits.
That depends on the applications. For scientific applications, that may or may not be the right thing to do. If my input has uncertainty in it, because it was measured with limited precision, rounding does make sense because the fractional part has no meaning anyhow. If the number is the output of a mathematical computation (and that was the case in my application - a Mandelbrot fractal renderer) rounding does not make sense because the numbers are precise.


Quote:
Originally Posted by Bruce Abbott View Post


Why? Because that is what the programmer probably entered. But hey, next time I write a program with floating point literals in it, I will deliberately type in 99.99999999999999876 instead of 100, just to mess with you.
You can try that with DMandel if you like. What will happen is that the ASCII-to-IEEE conversion will pick the IEEE number closest to your input, which will likely be 100.0, and then the output conversion will represent it as 100.0 precisely. If that number is not 100.0, well, the conversion from IEEE to ASCII will not print 100.0, and rightfully so. The problem with optical rounding is that it may print an output as 100.0 even though the input number is *not* 100.0, although that number can be represented exactly.
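
A sketch of that behaviour using C's strtod as a stand-in for the ASCII-to-IEEE step (an analogy assuming IEEE-754 doubles; DMandel's own conversion differs in detail):
Code:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* the nearest double to this input is exactly 100.0 */
    double x = strtod("99.99999999999999876", NULL);
    printf("%.17g (equals 100.0: %d)\n", x, x == 100.0);
    return 0;
}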




Quote:
Originally Posted by Bruce Abbott View Post



So you would prefer to present the user with 99.99999999999999876 instead of 100, even though the last 3 digits are essentially random?
Yes, I would. Simply on the basis that the IEEE binary number is not 100.0. Quite simple, actually: print what you got, not "what looks nice".


Quote:
Originally Posted by Bruce Abbott View Post




If you want to display fp numbers totally accurately then do it in hex (or binary for the true masochists). That way there is zero loss of precision. Might be a bit harder for a human to digest though...
That's exactly why it is not done, of course.


That is, of course, the reason why in particular applications BCD arithmetic is used instead of binary arithmetic. It avoids losses when converting from and to human-readable formats. However, BCD arithmetic has other problems, namely that the average rounding loss when performing arithmetic operations (such as +, -, *, /) is higher than with binary arithmetic. So you win at one end, but lose at another.


In terms of "inner precision" (rounding loss within the number format), binary is the best.
Thomas Richter is offline  
Old 30 December 2022, 20:07   #29
Bruce Abbott
Registered User
 
Join Date: Mar 2018
Location: Hastings, New Zealand
Posts: 2,549
Quote:
Originally Posted by meynaf View Post
Integer numbers such as 100 can be represented exactly...
So if you see 99.99999999999999876 it is clearly not 100.
Good to know. However I have a problem. Some integers are not coming out exactly in my disassembler.

Here's an example,

Source code:-
Code:
  fmove.x  #10,fp2
  fmove.x  #1e10,fp3
  fmove.x  #1e100,fp4
  fmove.x  #1e200,fp5
  fmove.x  #1e300,fp6
Disassembly of code produced by Devpac or ProAsm:-
Code:
   fmove.x #1.0E+1,fp2
   fmove.x #1.0E+10,fp3
   fmove.x #1.0E+100,fp4
   fmove.x #9.9999999999999993E+199,fp5
   fmove.x #9.9999999999999991E+299,fp6
Which in hex is:-
Code:
   fmove.x #$4002a000000000000000,fp2 ; 1E1
   fmove.x #$40209502f90000000000,fp3 ; 1E10
   fmove.x #$414b924d692ca61be758,fp4 ; 1E100
   fmove.x #$4297a738c6bebb12d16d,fp5 ; 1E200
   fmove.x #$43e3bf21e44003acdd2d,fp6 ; 1E300
Both assemblers concur, so it looks like a bug in my conversion code.

But perhaps both assemblers have the same bias. So I tried Barfly assembler:-
Code:
   fmove.x #1.0E+1,fp2
   fmove.x #1.0,fp3
   fmove.x #1.0,fp4
   fmove.x #1.0,fp5
   fmove.x #1.0,fp6
Oops! Looks like Barfly is ignoring the exponent!

To prove that the assembled fp numbers are correct I need to convert them to decimal with an algorithm which is known to be accurate. But I don't have the patience to do it manually, and I couldn't find an online tool that converts 80 bit floating point numbers from hex to decimal. However after a lot of googling I eventually found a downloadable tool for Windows on Sourceforge, which tells me the numbers are correct. That means my disassembly code is wrong (or at least not as accurate as it should be).

Or is it? I put in hex 43e3bf21e44003acdd2d, and out popped the expected 1E300. However I could put any lower number down to 43e3bf21e44003acc240 and it also showed 1E300. At 43e3bf21e44003acc23f the output finally changed to 9.99999999999999E299. That's 6894 unique fp numbers that are being reported as the same in decimal (and that's just the lower bound. I expect a similar amount above 1E300).

There should be 17-18 significant digits, but this tool is only showing 15 digits. Is it rounding up the result? Whatever the cause, it's not good enough. More research needed!
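
For what it's worth, the experiment can be reproduced in C on a machine where long double is the 80-bit extended format (an assumption; true on x86 with glibc, compile with -lm):
Code:
#include <stdio.h>
#include <string.h>
#include <math.h>

int main(void)
{
    long double x = 1e300L;  /* nearest 80-bit extended value to 1E300 */
    char ref[64], buf[64];
    snprintf(ref, sizeof ref, "%.15Lg", x);

    long count = 0;
    for (;;) {
        snprintf(buf, sizeof buf, "%.15Lg", x);
        if (strcmp(buf, ref) != 0)
            break;                  /* first value that prints differently */
        count++;
        x = nextafterl(x, 0.0L);    /* step down one representable value */
    }
    printf("%ld values at or below 1E300 print as %s\n", count, ref);
    return 0;
}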
Bruce Abbott is offline  
Old 30 December 2022, 20:51   #30
phx
Natteravn
 
Join Date: Nov 2009
Location: Herford / Germany
Posts: 2,496
Quote:
Originally Posted by Bruce Abbott View Post
That's 6894 unique fp numbers that are being reported as the same in decimal (and that's just the lower bound. I expect a similar amount above 1E300).

There should be 17-18 significant digits, but this tool is only showing 15 digits.
But you have hundreds of digits in your last three example lines! There is simply not enough precision.
Note that the mantissa and exponent are binary, not decimal. Also note how many bits are already required in the mantissa for 1E10.

I also have a question: you wrote your extended precision hexadecimal constants with 80 bits, which might make sense as IEEE extended precision is defined as 80 bits. But I was under the impression that most Amiga assemblers need such a constant to be specified with 96 bits, with 16 zero bits between exponent and mantissa. I just checked the Devpac manual, which shows an example with such a 96-bit constant.
What is correct and would be expected from an 68k assembler?
phx is offline  
Old 30 December 2022, 22:57   #31
Thomas Richter
Registered User
 
Join Date: Jan 2019
Location: Germany
Posts: 3,217
Quote:
Originally Posted by Bruce Abbott View Post
There should be 17-18 significant digits, but this tool is only showing 15 digits. Is it rounding up the result? Whatever the cause, it's not good enough. More research needed!
It depends on how the assemblers implement the decimal to binary conversion, and whether you run it on a real machine or under emulation. From the error messages I get from DevPac, I would guess that DevPac uses the FPU, *probably* using the packed decimal data format as input and extended precision as output. That might be good enough for 16-digit precision, but not good enough for extended precision as the target precision. The packed decimal datatype of the Motorola FPUs only has a precision of about 8 ulp or so.

Then, if you run that on an emulated machine, you should know that the emulation itself may use the native FPU of the host system, which may only have double precision available. For converting from packed to double, you need more precision than double, namely extended. Even worse, my experiments with euae on Linux show that support for packed decimal is "flaky", so to say. Maybe Toni did a better job in WinUAE, but the original Linux floating point emulation I would describe as "lousy".

Thus, there are various sources of error here. Try a native system, and do not trust the floating point implementation of assemblers too much. Finally, if you want to verify numbers, I suggest trying COP as a debugger. It uses the "known good" conversion from the library I quoted above.
Thomas Richter is offline  
Old 31 December 2022, 09:31   #32
meynaf
son of 68k
 
Join Date: Nov 2007
Location: Lyon / France
Age: 51
Posts: 5,323
Quote:
Originally Posted by Bruce Abbott View Post
Good to know. However I have a problem. Some integers are not coming out exactly in my disassembler.
This comes from having a binary exponent. The numbers that cause trouble are the ones that have a large decimal exponent, i.e. which need to be multiplied by 10 a large number of times - and therefore overflow the mantissa. Only small enough integers can be represented exactly.
If you want to avoid these issues you can use decimal floats.
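
A small C illustration of where exactness ends for doubles (with 80-bit extended the analogous bound is 2^64, since the mantissa has 64 bits):
Code:
#include <stdio.h>

int main(void)
{
    double a = 9007199254740992.0;  /* 2^53: every integer up to here is exact */
    printf("%.17g\n", a);           /* 9007199254740992 */
    printf("%.17g\n", a + 1.0);     /* still 9007199254740992: the gap is now 2 */
    printf("%.17g\n", a + 2.0);     /* 9007199254740994 */
    return 0;
}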



Quote:
Originally Posted by phx View Post
What is correct and would be expected from an 68k assembler?
I do not know what is correct. I think 96 bits may be wrong, but there is perhaps a good reason for it.
When I checked 80-bit floats with PhxAss I found that the representation was different in comparison to the 80-bit floats I had in my HoMM2 game port (whose code comes from the Mac 68k version).
So the best way to handle it in an assembler is probably to provide both options.
meynaf is offline  
Old 31 December 2022, 10:17   #33
Bruce Abbott
Registered User
 
Join Date: Mar 2018
Location: Hastings, New Zealand
Posts: 2,549
Quote:
Originally Posted by phx View Post
But you have hundreds of digits in your last three example lines! There is simply not enough precision.
Note that the mantissa and exponent are binary, not decimal. Also note how many bits are already required in the mantissa for 1E10.
I see your point (the decimal number is too large to fit inside the mantissa, and therefore can only be approximated). Now where can I find a tool that will show me the 'correct' decimal value to check my code against?

And how should I deal with the fact that the original decimal number was an exact power of 10? Should I round up like the converter program did, or leave it in its ugly raw state? Remember that this only applies to constant values, not results of calculations. In the disassembly I am more interested in what the programmer typed in than in the 'actual' decimal value (which tends to come out as a different number again when reassembled).

BTW a similar problem occurs with fractions, and not just with 'extreme' numbers. For example 0.1 comes out as 0.0999999999999999999. Should I leave it as is, or round up to 0.1?

Quote:
I have also a question: You wrote your extended precision hexadecimal constants with 80 bits, which might make sense as IEEE extended precision is defined as 80 bits. But I was under the impression that most Amiga assemblers need such a constant to be specified with 96 bits, with 16 zero bits between exponent and mantissa. I just checked the Devpac manual, which shows an example with such a 96 bit constant.
What is correct and would be expected from an 68k assembler?
Thanks for that. 96 bits is the in-memory format of 68k FPUs. I looked it up in my Devpac manual and you are right, it expects the padding even in immediate values. The converter program I tried expects 80 bits, so this is the correct format for it.
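
A sketch of that in-memory layout in C (my own illustration of the 16 pad bits, not code from any assembler):
Code:
#include <stdint.h>
#include <stdio.h>

/* se: sign + 15-bit biased exponent; man: 64-bit mantissa with explicit '1' bit */
static void pack_x96(uint16_t se, uint64_t man, uint8_t out[12])
{
    out[0] = se >> 8;  out[1] = se & 0xff;  /* sign/exponent word */
    out[2] = 0;        out[3] = 0;          /* 16 zero pad bits */
    for (int i = 0; i < 8; i++)             /* mantissa, big-endian */
        out[4 + i] = (uint8_t)(man >> (56 - 8 * i));
}

int main(void)
{
    uint8_t x[12];
    /* $4002A000000000000000 from the example above, i.e. 1E1 */
    pack_x96(0x4002, 0xA000000000000000ULL, x);
    for (int i = 0; i < 12; i++)
        printf("%02X", x[i]);   /* prints 40020000A000000000000000 */
    printf("\n");
    return 0;
}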

ProAsm doesn't support hex floating point immediates anyway, so to guarantee accurate disassembly I have to show the entire instruction in hex (as 'dc.l ...'). This is the default in my disassembler because it works with all assemblers.
Bruce Abbott is offline  
Old 31 December 2022, 10:49   #34
Thorham
Computer Nerd
 
Join Date: Sep 2007
Location: Rotterdam/Netherlands
Age: 47
Posts: 3,754
It sure is a pity everyone uses base 10 instead of base 8 or base 16 by default
Thorham is online now  
Old 31 December 2022, 10:56   #35
Thomas Richter
Registered User
 
Join Date: Jan 2019
Location: Germany
Posts: 3,217
Quote:
Originally Posted by Bruce Abbott View Post
I see your point (the decimal number is too large to fit inside the mantissa, and therefore can only be approximated). Now where can I find a tool that will show me the 'correct' decimal value to check my code against?
You need an arbitrary precision calculator for that. I used the linux "bc" program for computing all the floating point constants in SoftIEEE.


Quote:
Originally Posted by Bruce Abbott View Post
And how should I deal with the fact that the original decimal number was an exact power of 10? Should I round up like the converter program did, or leave it in its ugly raw state?
Find the closest IEEE number that approximates your desired output. For that, print the number with "bc" for example, and round it correctly to the number of bits you need. That is the correct number.


Quote:
Originally Posted by Bruce Abbott View Post

BTW a similar problem occurs with fractions, and not just with 'extreme' numbers. For example 0.1 comes out as 0.0999999999999999999. Should I leave it as is, or round up to 0.1?
Again, round it to the precision the number is given in. The code snippets I posted do exactly that - they round the last digit in binary to the conversion precision they support. They *do not* attempt to generate "nice output". If the output of the rounding step is 0.9999999999999999, then that is the closest decimal number to the binary input, and this output should then be printed. However, typically, if you enter 0.1, convert to binary, and then convert back, it should be precise enough to print 0.1 again. Of course, I cannot guarantee that in every case.
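
For the curious, the exact decimal expansion of the rounded 0.1 is finite and can be printed directly; a sketch using doubles for brevity (assuming a libc, such as glibc, that produces exact expansions on request):
Code:
#include <stdio.h>

int main(void)
{
    /* exact decimal value of the double nearest to 0.1 (55 fractional digits) */
    printf("%.55f\n", 0.1);
    /* 0.1000000000000000055511151231257827021181583404541015625 */
    return 0;
}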



None of these problems are new; that's exactly why I put this into a library for everyone to use. It's part of many other tools of my "toolchain" (COP, the disassembler.library...).
Thomas Richter is offline  
Old 31 December 2022, 16:02   #36
Kalms
Registered User
 
Join Date: Nov 2006
Location: Stockholm, Sweden
Posts: 237
I'd like to chip in with some theory here. Consider it a complement to what Thomas Richter has been saying above - I'm largely in agreement with him, but providing a different explanatory model.

----------------

For this, let's use the decimal value 0.1, and look at 32-bit IEEE variations.

There are two IEEE numbers close to the decimal value 0.1:

32-bit IEEE value: 0x3dcccccc
decimal value: 0.0999999940395355224609

32-bit IEEE value: 0x3dcccccd
decimal value: 0.100000001490116119385

Later on, we will refer back to these.

----------------

Now, regarding decimal value <=> IEEE floating point conversion:
IEEE floats are not designed to represent the decimal value 0.1 exactly.
There is no automated, unopinionated method that allows you to translate between the decimal number 0.1 and an IEEE float number. You need to make certain assumptions.

----------------

The first place where we encounter such an assumption is when we convert decimal => IEEE.
Given the decimal value 0.1, which IEEE number should we choose? The IEEE standard does not say anything about this. The "typical" rules chosen by most ASCII-to-float converters are:
1. If there is an exact match in the IEEE representation, choose that.
2. Otherwise, choose the IEEE representation whose decimal value matches the desired number most closely.
Notice, though, that we can't be sure that all ASCII-to-float algorithms handle case 2 identically. It's possible that some converters don't use "which IEEE number's decimal representation is nearest" as their metric, particularly if the pair of relevant IEEE numbers are at different exponent levels; also, different algorithms may handle tie breaks differently.

We will make the reasonable assumption that "nearest" is a good decision rule. Given that assumption, decimal value 0.1 => IEEE float 0x3dcccccd.
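
This is easy to verify in C (assuming IEEE-754 single precision and a libc that can print exact expansions):
Code:
#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    float f = 0.1f;             /* the "nearest" rule picks 0x3dcccccd */
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    printf("0x%08x\n", bits);   /* 0x3dcccccd */
    printf("%.21f\n", f);       /* 0.100000001490116119385, as listed above */
    return 0;
}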

----------------

The second place where we need to make assumptions is when we convert IEEE => decimal value.

Some IEEE numbers have very long (though finite) exact decimal expansions. We will need to round/truncate these to be able to display them.

Here, we have two common choices when rounding / truncating:
1. We presume that the IEEE number originated from a decimal value, and "nearest" was being used for rounding during decimal => IEEE conversion; truncate the result such that we get the original number back (assuming that the original value did 'fit' into an IEEE number). This results in successful 0.1 => IEEE => 0.1 roundtrips, but a few IEEE values will be slightly modified during IEEE => decimal => IEEE roundtrips.
2. We don't make any assumptions regarding where the IEEE number originally came from, and preservation of the exact IEEE number is most important; truncate the result such that each possible IEEE number gets a unique decimal representation. This results in stable IEEE => decimal => IEEE roundtrips, but values such as 0.1 will display as the slightly ugly 0.100000001 after roundtripping.

You accomplish choice #1 by choosing a "low enough" number of digits in float-to-ASCII conversion.
You accomplish choice #2 by choosing a "high enough" number of digits in float-to-ASCII conversion.

The "Character representation" section of the IEEE 754 article in Wikipedia has some numbers for you:

Choice #1, you need 7 decimal digits for floats and 16 decimal digits for doubles.
Choice #2, you need 9 decimal digits for floats and 17 decimal digits for doubles.
(I can't find any reference for how many digits are needed for extended-precision numbers, sorry.)
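
A C sketch of both choices for doubles (16 digits happens to round-trip 0.1 but fails for some doubles; 17 digits round-trips every double):
Code:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    double x = 0.1;
    char buf[64];

    snprintf(buf, sizeof buf, "%.16g", x);  /* choice #1: "0.1" */
    printf("16 digits: %s, exact round trip: %d\n",
           buf, strtod(buf, NULL) == x);
    snprintf(buf, sizeof buf, "%.17g", x);  /* choice #2: "0.10000000000000001" */
    printf("17 digits: %s, exact round trip: %d\n",
           buf, strtod(buf, NULL) == x);
    return 0;
}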

----------------

Okay. Where does this put you (Bruce Abbott)? You have raw machine code as input, containing float & double & extended-precision literals in IEEE format. You want to convert these to ASCII for the disassembly, and you want another assembler to reproduce the original IEEE values after processing the generated source code.

That sounds like you don't know where the IEEE numbers originated from, and you want to preserve IEEE => decimal => IEEE roundtrip-ability. Thus, you want choice #2. You will get roundtrip-ability, but some numbers (like 0.1) will look ugly in the disassembly (you will see 0.100000001). That is the price you have to pay to get roundtrip-ability for all IEEE numbers.
Kalms is offline  
Old 31 December 2022, 16:24   #37
Karlos
Alien Bleed
 
Join Date: Aug 2022
Location: UK
Posts: 4,150
@bhabbot

For the purposes of disassembly and reassembly, what's wrong with using the base 16 representation of the floating point value? You could annotate it with a comment showing the nearest decimal value, or the nearest decimal strictly lower than or equal to the actual value, or whatever you think is most appropriate. Provided the assembler you aim to reassemble the code with supports the hex representation of the floating point value, you lose absolutely nothing in the round trip and the provided annotation at least indicates what the value probably represents.
Karlos is online now  
Old 31 December 2022, 19:05   #38
Thomas Richter
Registered User
 
Join Date: Jan 2019
Location: Germany
Posts: 3,217
The disassembler.library does both. On the left, a hex dump showing the values in memory, and on the right, the human-readable instruction with the number in decimal.

However, if your goal is to create a disassembly you can re-assemble to exactly the same binary, then it really depends on the assembler you plan to use for reassembling, and the quality of its floating point implementation. I wouldn't hold my breath there. You are on the safest side just putting the instruction in as "dc.w " data, with a comment saying what it should be.
Thomas Richter is offline  
Old 31 December 2022, 20:18   #39
Bruce Abbott
Registered User
 
Join Date: Mar 2018
Location: Hastings, New Zealand
Posts: 2,549
Quote:
Originally Posted by Thomas Richter View Post
I suggest trying COP as a debugger. It uses the "known good" conversion from the library I quoted above.
I had COP installed on my system but didn't have a use for it until now (some day I must hook an external terminal up to my A1200 and try doing some hardcore debugging!).

According to COP the binary values for 1E200 and 1E300 are correct. This validates the assemblers' output and indicates that my conversion code is not accurate.

Quote:
Originally Posted by Karlos View Post
Provided the assembler you aim to reassemble the code with supports the hex representation of the floating point value,
Devpac does. My disassembler now (as of this morning) produces the correct hex format for it.

I need to check the documentation of assemblers I am not familiar with to find out if they support hex fp and what format they need.

Quote:
Originally Posted by Kalms View Post
That sounds like you don't know where the IEEE numbers originated from, and you want to preserve IEEE => decimal => IEEE roundtrip-ability. Thus, you want choice #2. You will get roundtrip-ability, but some numbers (like 0.1) will look ugly in the disassembly (you will see 0.100000001). That is the price you have to pay to get roundtrip-ability for all IEEE numbers.
Unfortunately it doesn't, because the assembler may take that slightly different number and generate yet another different floating point number.

But that's OK. I provide the option of showing the number (or the entire instruction) in hex, then showing the decimal version in the comment field. The user can toggle the on-screen format at any time for better understanding of the code. To this end I am thinking of showing fewer decimal digits and rounding to that.

As Thomas says, this is just 'optics', but in this case I think it's appropriate. Most 'industry standard' code only uses double precision anyway, except perhaps for internal calculations (which is why it's so hard to find an accurate converter for extended precision).

The main purpose of my disassembler is to aid in patching 'legacy' programs for which there is no source code available. Accurate reassembly is important for this because you don't know what effect any differences might have. However there aren't many programs I am interested in that have a lot of inline floating point code. AIBB is one. It includes both 68881 and 040 optimized code. This makes it a good test subject.

Last edited by Bruce Abbott; 31 December 2022 at 20:27.
Bruce Abbott is offline  
Old 31 December 2022, 20:38   #40
phx
Natteravn
 
Join Date: Nov 2009
Location: Herford / Germany
Posts: 2,496
Quote:
Originally Posted by Bruce Abbott View Post
Devpac does. My disassembler now (as of this morning) produces the correct hex format for it.

I need to check the documentation of assemblers I am not familiar with to find out if they support hex fp and what format they need.
Please keep me informed. Currently I know about several assemblers supporting the 96-bit format (Devpac, vasm, PhxAss), but none for 80 bits.

Greetings from the old into the new year!
phx is offline  
 

