meynaf, 03 February 2017, 09:56 (#19)
Quote:
Originally Posted by matthey View Post
You have a disassembler project too?
I'd like to resource PC programs, edit them, and reassemble them, just like I can do on the Amiga.
If I could convert the code on the fly, perhaps a few game ports could even be done as well. However, there are no tools for that.

I've checked several disassemblers on the PC and read several docs. No two of them agree on all details...
There appears to be no full opcode map giving everything about x86, including the most recent additions, by the way.


Quote:
Originally Posted by matthey View Post
He lists the ARM Evaluation System on his web site.

http://litwr2.atspace.eu/pi/pi-spigot-benchmark.html

The code density was mediocre but the performance was good for 1986.
Yeah, but I would like to see how current ARM CPUs handle the case, especially with Thumb-2.


Quote:
Originally Posted by Thorham View Post
BCD makes no sense at all on 32-bit and 64-bit CPUs. On those you're much better off using base 10000 or base 1 billion, or something like that.
If you do that, you will end up with many MULs and DIVs in your code.
And if you attempt to do decimal floating-point this way, it's even worse.
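To make it concrete, here is a minimal C sketch of that base-1e9 approach (names and layout are mine, purely for illustration). The limb addition itself is cheap, but getting decimal digits back out of a limb already costs a DIV and a MOD per digit:

Code:
/* Minimal sketch of the base-1e9 idea. Illustration only. */
#include <stdint.h>
#include <stdio.h>

#define BASE 1000000000u   /* 1e9: nine decimal digits per 32-bit limb */

/* r = a + b over n limbs, least-significant limb first */
void add(uint32_t *r, const uint32_t *a, const uint32_t *b, int n)
{
    uint32_t carry = 0;
    for (int i = 0; i < n; i++) {
        uint32_t s = a[i] + b[i] + carry;   /* always fits in 32 bits */
        carry = (s >= BASE);
        r[i] = carry ? s - BASE : s;
    }
}

/* printing a single limb already needs nine DIV/MOD steps */
void print_limb(uint32_t v)
{
    char d[9];
    for (int i = 8; i >= 0; i--) { d[i] = (char)('0' + v % 10); v /= 10; }
    fwrite(d, 1, 9, stdout);
}

And that's just addition; multiplying or dividing such numbers is where the MULs and DIVs really pile up.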


Quote:
Originally Posted by Thorham View Post
Can someone explain why code density is important, because I don't get it
Because it is beautiful?

Well, ok. There are other reasons.

Pressure on the ICache is one thing, but we could just have more of it. Bandwidth, however, is more limited.
As an example, let's say your ICache can output 8 bytes per clock.
If you have 2-byte instructions, you can execute up to 4 of them per clock.
If they are 4-byte, you can only execute 2.
So it's more or less twice as fast when instructions are half the size (overly simplified, but you see the idea).
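If you want to play with the numbers, here is a toy C model of that argument (the 8 bytes per clock figure is made up, not taken from any real CPU):

Code:
/* Toy model: with a fixed ICache output per clock, the issue
   ceiling is simply bandwidth / instruction size. Illustrative
   numbers only. */
#include <stdio.h>

int main(void)
{
    const int fetch = 8;   /* bytes per clock from the ICache */
    for (int size = 2; size <= 8; size *= 2)
        printf("%d-byte instructions: at most %d per clock\n",
               size, fetch / size);
    return 0;
}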


Quote:
Originally Posted by matthey View Post
BCD is still useful and used but computations are trivial on modern processors without the need for hardware support.
Trivial?
The code needed to do what ABCD/SBCD do doesn't sound trivial to me.
I don't think you can "emulate" those with only a handful of instructions.
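To show what I mean, here is a rough C sketch of a software ABCD (add two packed-BCD bytes plus the X flag). It assumes valid BCD inputs, and it is already well past a handful of instructions once compiled:

Code:
/* Rough sketch of emulating 68k ABCD in C: packed-BCD byte add
   with extend flag in and carry out. Assumes valid BCD inputs. */
unsigned abcd(unsigned a, unsigned b, unsigned x, unsigned *carry)
{
    unsigned t = (a & 0x0F) + (b & 0x0F) + x;   /* low digits + X */
    if (t > 9)
        t += 6;                 /* decimal adjust; ripples into bit 4 */
    unsigned r = (a & 0xF0) + (b & 0xF0) + t;   /* add high digits */
    if (r > 0x99)
        r += 0x60;              /* decimal adjust high digit */
    *carry = (r > 0xFF);        /* on 68k this would set both X and C */
    return r & 0xFF;
}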

When it comes to BCD, the 6502 got it wrong. Code becomes unreadable because you never know whether your ADC/SBC is executed with the "D" bit set or not.
x86 also got it wrong. You need to perform the regular operation and then adjust; this needs a secondary carry (AF) used only for that purpose, where a few dedicated instructions could have done it right away.
Of course, if BCD wasn't there at all, any program using it would have to do things "by hand", leading to cumbersome code.
So perhaps the 68k got it right there too, in spite of what many people think about it.
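For comparison, here is my rough C model of the x86 way, a plain ADD followed by DAA, simplified from my reading of the Intel manual. Note AF, the half-carry that exists only to repair the low digit afterwards:

Code:
/* Rough model of x86 "ADD then DAA" on one packed-BCD byte.
   Simplified sketch, not a reference implementation. */
unsigned add_daa(unsigned a, unsigned b, unsigned *cf)
{
    unsigned al = (a + b) & 0xFF;                     /* plain binary ADD */
    unsigned af = ((a & 0x0F) + (b & 0x0F)) > 0x0F;   /* auxiliary carry */
    unsigned old_al = al, old_cf = (a + b) > 0xFF;

    if ((al & 0x0F) > 9 || af)     /* DAA: fix the low digit */
        al += 6;
    if (old_al > 0x99 || old_cf) { /* DAA: fix the high digit */
        al += 0x60;
        *cf = 1;
    } else
        *cf = 0;
    return al & 0xFF;
}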

More generally, people who claim that x86, ARM, or whatever is better than the 68k on some aspect like programming or code density always speak on theoretical grounds and never have any code to show.
I could write an example in 68k and someone else does x86 or ARM, to see if and how the 68k is superior to x86 and ARM (or, for that matter, to anything else). Any sample of significant size (20-40 instructions) doing some useful work is fine.
For example, I can do that pi-spigot main loop in just 9 instructions. I don't think x86 can do that, nor do I think ARM can. And this even though it's too short to show much anyway.
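For reference, the loop I mean is the inner one of the well-known C spigot (a readable rewrite of Dik T. Winter's program, which prints 800 digits); count the operations in the body of the for(;;) and compare:

Code:
/* Readable rewrite of the classic C pi spigot (after Dik T. Winter).
   Prints 800 digits of pi, four at a time. The "main loop" discussed
   above is the body of the inner for(;;). */
#include <stdio.h>

#define N 2800
static int r[N + 1];               /* r[N] starts at 0 (static storage) */

int main(void)
{
    int i, k, b, d, c = 0;
    for (i = 0; i < N; i++)
        r[i] = 2000;               /* 10000 / 5 */
    for (k = N; k > 0; k -= 14) {
        d = 0;
        i = k;
        for (;;) {                 /* the main loop */
            d += r[i] * 10000;
            b = 2 * i - 1;
            r[i] = d % b;
            d /= b;
            if (--i == 0)
                break;
            d *= i;
        }
        printf("%.4d", c + d / 10000);
        c = d % 10000;
    }
    return 0;
}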

Anyone is welcome to try to prove me wrong.
But I think this whole off-topic discussion should be stopped now, or at least moved to a dedicated thread.
