English Amiga Board


Old 20 October 2018, 20:12   #541
litwr
Registered User
 
Join Date: Mar 2016
Location: Ozherele
Posts: 229
Hi there
Thanks a lot for the comments. Meanwhile, I have published the 6502 part - https://litwr.livejournal.com/2773.html - I will be very happy if someone finds it interesting, but beware, it is emotional.
@meynaf I have just spent several minutes and made my 386 code 386 bytes in size (it is uploaded to The Zone!). So don't try to beat this with 68020 code; it may be bad for a programmer's health. I can make PR0000 much smaller and get maybe even 286 bytes. You cannot directly execute headerless code on the Amiga, so it is out of the contest.

Quote:
Originally Posted by grond View Post
And why could Intel "convert its huge and nonorthogonal ISA" fast enough?
Intel's ISA was not as huge and complex as Moto's. Indeed, the VAX had an even more complex ISA. You mentioned that RISC architectures are easier to develop, and x86 was closer to RISC than the 68k.

The IBM PC was not very good, but it had an open architecture which allowed cheap clones. Motorola, on the other hand, preferred to rely on military contracts and other closed architectures.


Quote:
Originally Posted by grond View Post
Those were happy times because there were more than two competing ISAs.
We have x86, x86-64, ARM, ARM-64, MIPS, Xilinx MicroBlaze, RISC-V, ...


Quote:
Originally Posted by grond View Post
The x86 survived because of MONEY, not because of any technical superiority.
x86 was the best at the right time. It was the best when IBM needed a CPU. It was the best when the time came to use fast 6502-like hardware. Indeed, today x86 is among the best because of Intel's huge resources, but it was a long struggle to reach this position. x86 has never been perfect, but it was well balanced between current demands and future requirements. The 68k sometimes held a temporary lead, but history remembers the words of Chuck Peddle, "And you know that the 6800 isn't the right one. It's too big," and the reply of Bill Mensch, "Right, Chuck, I know it's too big." IMHO it is true for the 68k as well.

Last edited by litwr; 20 October 2018 at 20:42.
litwr is offline  
Old 21 October 2018, 02:17   #542
Bruce Abbott
Registered User
 
Join Date: Mar 2018
Location: Hastings, New Zealand
Posts: 2,544
Quote:
Originally Posted by litwr View Post
You cannot directly execute headerless code on the Amiga, so it is out of the contest.
OK - let's crank up the PC, copy your file to it over the network, open a command prompt and type 'pi-ibmpc.com'...

Code:
'pi-ibmpc.com' is not recognized as an internal or external command,
operable program or batch file.
Hmm, maybe it would work if I rebooted into DOS. Oh wait, Windows XP doesn't have DOS. Apparently my 'modern' PC can't directly execute headerless code either!

Guess I will just have to boot with a DOS disk. Luckily I have an old MSDOS 6.2 boot disk (set up for PC-Task on the Amiga) and a bootable floppy drive in my PC. Oh damn, now it won't recognize my hard drive! Can't be bothered booting back into Windows just to get the file onto the floppy, so...

Plan B - put MSDOS disk in Amiga 1200, copy file onto it, run PC-Task to emulate X86, boot off floppy, type 'pi-ibmpc.com'. Success! 1000 digits of Pi calculated and displayed in only 4.65 seconds!

And how many bytes of X86 code did my 'PC' need to do this?

Code:
A:\>dir /a
 Volume in drive A is DOS622
 Volume Serial Number is 1164-14E8

 Directory of A:\

31/05/1994  06:22 a.m.            40,774 IO.SYS
31/05/1994  06:22 a.m.            38,138 MSDOS.SYS
31/05/1994  06:22 a.m.            54,645 COMMAND.COM
31/05/1994  06:22 a.m.            66,294 DRVSPACE.BIN
21/10/2018  06:34 a.m.               386 PI-IBMPC.COM
               5 File(s)        200,237 bytes
               0 Dir(s)         527,360 bytes free
Bruce Abbott is offline  
Old 21 October 2018, 03:45   #543
Bruce Abbott
Registered User
 
Join Date: Mar 2018
Location: Hastings, New Zealand
Posts: 2,544
Quote:
Originally Posted by litwr View Post
x86 was the best at the right time. It was the best when IBM needed a CPU. It was the best when the time came to use fast 6502-like hardware.
When 'best' is defined as 'first to market' and 'cheapest' rather than 'technically superior'.

But not for everyone.

I bought my first Amiga (an A1000) in 1987. A short time later I bought my first PC, an IBM JX. A nice looking machine, but technically inferior to the Amiga in every way. From it I learned about PC architecture and X86 machine code, neither of which compared well to the Amiga. On real-world tasks the 8088 was even slower than the Z80 in my Amstrad CPC664, and its quirky machine code was difficult to work with. But hey, it was made by IBM so it must be the Best!

If only IBM had waited a little longer until the 68000 was ready, the PC would have gotten a much better architecture (real 16/32 bit CPU, proper 16 bit bus, 16 Megabyte flat address range with no 640k limit, no need for EMM/XMS etc.) and Intel's kludgey little con job (yes, those are 8 bit opcodes and an 8 bit bus - but it really is a 16 bit CPU, honest!) would have remained a curiosity.
Bruce Abbott is offline  
Old 21 October 2018, 10:31   #544
meynaf
son of 68k
 
Join Date: Nov 2007
Location: Lyon / France
Age: 51
Posts: 5,323
Quote:
Originally Posted by litwr View Post
@meynaf I have just spent several minutes and made my 386 code 386 bytes in size (it is uploaded to The Zone!). So don't try to beat this with 68020 code; it may be bad for a programmer's health. I can make PR0000 much smaller and get maybe even 286 bytes. You cannot directly execute headerless code on the Amiga, so it is out of the contest.
It is quite obvious from looking at your 386 source that most of the texts are gone - which means most of the features of the program are gone too!
Remember, I got 408 bytes by doing the same. Remove the hunk data (36 bytes) and you get 372, smaller than your 386 bytes. And this is still counting the huge dos.library stuff!
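
For reference, those 36 bytes are the fixed wrapper of a minimal single-hunk AmigaDOS executable. Here is a rough sketch of that layout as C initialisers (the IDs are the standard hunk values; the single-code-hunk, no-relocation layout and the names are illustrative assumptions):

Code:
#include <stdint.h>

/* Minimal AmigaDOS hunk wrapper around one code hunk, no relocations.
   All values are big-endian longwords; sizes are counted in longwords. */
#define CODE_LONGWORDS 93              /* e.g. 372 bytes of code / 4 */

const uint32_t hunk_header[] = {
    0x000003F3,                        /* HUNK_HEADER                */
    0x00000000,                        /* empty resident name list   */
    0x00000001,                        /* hunk table size: one hunk  */
    0x00000000,                        /* first hunk number          */
    0x00000000,                        /* last hunk number           */
    CODE_LONGWORDS                     /* memory size of hunk 0      */
};                                     /* 6 longwords = 24 bytes     */

const uint32_t hunk_code_marker[] = {
    0x000003E9,                        /* HUNK_CODE                  */
    CODE_LONGWORDS                     /* code length; code follows  */
};                                     /* 2 longwords = 8 bytes      */

const uint32_t hunk_end = 0x000003F2;  /* HUNK_END                   */
/* Wrapper total: 24 + 8 + 4 = 36 bytes, the figure quoted above.    */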

For code density you have one single, biased example. It is biased because of OS differences. And on top of it, you use the "features" of MS-DOS (headerless code, direct text output) but you refuse to let me use those of AOS (vprintf).

I gave two examples. One is direct code that you were unable to rewrite shorter. What did you say about it? That it was a "special case"? Why is it not your code that's the special case instead?
The other is a whole, big program that you can attempt to disassemble if you want. And from Don Adan's experience it's just one among many.

Sorry, but you have to either accept reality or try to build another example...
meynaf is offline  
Old 21 October 2018, 12:50   #545
Bruce Abbott
Registered User
 
Join Date: Mar 2018
Location: Hastings, New Zealand
Posts: 2,544
Quote:
Originally Posted by meynaf View Post
It is quite obvious from looking at your 386 source that most of the texts are gone - which means most of the features of the program are gone too!
Remember, I got 408 bytes by doing the same. Remove the hunk data (36 bytes) and you get 372, smaller than your 386 bytes.
For a fair comparison we should take out all the OS specific stuff, leaving just the core algorithms - which must produce identical (accurate) results on both CPUs. I/O can be via ints/traps or call/bsr stubs. To keep it simple let's just have a fixed number of digits, basic character output and internal binary to decimal conversion as in litwr's latest code. Any extra OS-specific code or data required to execute the program and produce visible output will not be counted. And no cheating! (no self-modifying code, jumping into the middle of instructions etc.).

Can you manage 372 bytes (or less) under those conditions? Show us your code. Can litwr do better?
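
To make the target concrete, here is the kind of core being asked for: the classic Rabinowitz-Wagon spigot with a fixed digit count, internal binary-to-decimal conversion, and a single character-output stub standing in for the int/trap or call/bsr I/O. This is only an illustrative C sketch under those rules - the names and structure are mine, not either contestant's source:

Code:
#include <stdio.h>

#define DIGITS 1000
#define TERMS  (DIGITS / 4 * 14)        /* 3500 mixed-radix terms */

/* The lone I/O stub: on a real target this would be an int 21h or
   dos.library call, excluded from the byte count. */
static void put_char(int c) { putchar(c); }

int main(void) {
    static int r[TERMS + 1];            /* mixed-radix remainders */
    int b, c, d, e = 0, g;

    for (b = 0; b < TERMS; b++)
        r[b] = 2000;                    /* spigot seed value */

    for (c = TERMS; c > 0; c -= 14) {   /* 4 decimal digits per pass */
        d = 0;
        g = 2 * c;
        b = c;
        for (;;) {
            d += r[b] * 10000;
            r[b] = d % --g;
            d /= g--;
            if (--b == 0) break;
            d *= b;
        }
        /* internal binary-to-decimal: emit e + d/10000, zero-padded */
        int v = e + d / 10000, i;
        char buf[4];
        for (i = 3; i >= 0; i--) { buf[i] = (char)('0' + v % 10); v /= 10; }
        for (i = 0; i < 4; i++) put_char(buf[i]);
        e = d % 10000;
    }
    put_char('\n');
    return 0;
}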
Bruce Abbott is offline  
Old 21 October 2018, 15:38   #546
a/b
Registered User
 
Join Date: Jun 2016
Location: europe
Posts: 1,039
Bruce, STOP MAKING SENSE! Only Mickey Mouse logic is allowed in this benchmark.
a/b is offline  
Old 21 October 2018, 17:16   #547
meynaf
son of 68k
 
Join Date: Nov 2007
Location: Lyon / France
Age: 51
Posts: 5,323
Quote:
Originally Posted by Bruce Abbott View Post
For a fair comparison we should take out all the OS specific stuff, leaving just the core algorithms - which must produce identical (accurate) results on both CPUs. I/O can be via ints/traps or call/bsr stubs. To keep it simple let's just have a fixed number of digits, basic character output and internal binary to decimal conversion as in litwr's latest code. Any extra OS-specific code or data required to execute the program and produce visible output will not be counted. And no cheating! (no self-modifying code, jumping into the middle of instructions etc.).

Can you manage 372 bytes (or less) under those conditions? Show us your code. Can litwr do better?
I guess I could do it. But not before litwr accepts the challenge.
meynaf is offline  
Old 21 October 2018, 17:45   #548
litwr
Registered User
 
Join Date: Mar 2016
Location: Ozherele
Posts: 229
Quote:
Originally Posted by Bruce Abbott View Post
OK - let's crank up the PC, copy your file to it over the network, open a command prompt and type 'pi-ibmpc.com'...
Just use DOSBox - it is not as fast as VirtualBox, but it supports sound and is more accurate and handy. It is available for any Linux, any Microsoft Windows, ... - it even has a JS version - https://classicreload.com/

Quote:
Originally Posted by Bruce Abbott View Post
I bought my first Amiga (an A1000) in 1987. A short time later I bought my first PC, an IBM JX. A nice looking machine, but technically inferior to the Amiga in every way. From it I learned about PC architecture and X86 machine code, neither of which compared well to the Amiga. On real-world tasks the 8088 was even slower than the Z80 in my Amstrad CPC664, and its quirky machine code was difficult to work with. But hey, it was made by IBM so it must be the Best!

If only IBM had waited a little longer until the 68000 was ready, the PC would have gotten a much better architecture (real 16/32 bit CPU, proper 16 bit bus, 16 Megabyte flat address range with no 640k limit, no need for EMM/XMS etc.) and Intel's kludgey little con job (yes, those are 8 bit opcodes and an 8 bit bus - but it really is a 16 bit CPU, honest!) would have remained a curiosity.
Thank you for mentioning the PC JX - I had never heard of such a thing. However, excuse me, it is difficult to figure out why one would buy an A1000 in 1987 when it was possible to buy the better and cheaper A2000 or A500. And it is completely impossible for me to imagine why anyone would buy a PCjr clone in 1987 while having an Amiga. Indeed, the Amiga is much better in almost every way. Maybe it was only more difficult to find an HDD for the Amiga at the time. However, your claim that a Z80 at 3.2 MHz (the effective frequency of this CPU inside the Amstrad CPC) is faster than an 8088 at 4.77 MHz is completely absurd. I worked a lot with the CPC6128 and PCW9512 in the 80s; they are good computers but a level below the IBM PC. Do you remember the 180K FDD? The original IBM PC is about 2 times faster than the CPC or PCW. Have you tried programming the Z80? IMHO a PC AT or XT clone with an 8088 or 8086 at 8 MHz was the right PC for 1987...

IBM did things on time, Intel and Acorn too, but superior Moto liked to keep people waiting. However, time is the same for everybody. I can think (excuse me if I am wrong) that according to your logic IBM should have waited for the DEC V-11 chip, which had a very wide ISA, or maybe the NS 32016 - they are true 32-bit chips.

@meynaf Where is your code? I suspect that it is not fair again.
litwr is offline  
Old 21 October 2018, 20:57   #549
meynaf
son of 68k
 
Join Date: Nov 2007
Location: Lyon / France
Age: 51
Posts: 5,323
Quote:
Originally Posted by litwr View Post
@meynaf Where is your code? I suspect that it is not fair again.
Who's not fair here - and again? Not me. Why would I give you any code now? You cheated by every possible means, inventing rules that just suit you.

First accept Bruce Abbott's challenge. Then we can talk.
meynaf is offline  
Old 21 October 2018, 21:37   #550
litwr
Registered User
 
Join Date: Mar 2016
Location: Ozherele
Posts: 229
@Bruce Abbott I am very confused that PI-IBMPC.COM doesn't work on your system. I have just checked it with my 32-bit Microsoft Windows XP under VirtualBox under 64-bit Linux (I have kept this XP since 2003) - it works perfectly. Then I tried to do the same with my native 64-bit Microsoft Windows 10 - it doesn't work, because 64-bit Windows cannot run 16-bit DOS applications. So the COM format is still somewhat relevant but, of course, rather obsolete.

Quote:
Originally Posted by meynaf View Post
Who's not fair here - and again? Not me. Why would I give you any code now? You cheated by every possible means, inventing rules that just suit you.
Why not you? You haven't published your code and I have. I claimed a result and published the proof; you only made claims.
litwr is offline  
Old 21 October 2018, 22:50   #551
Kalms
Registered User
 
Join Date: Nov 2006
Location: Stockholm, Sweden
Posts: 237
My experience from developing for the 8088-80286 is that for smaller programs - that is, programs whose code fit into a single 64kB segment and whose data fit into a few 64kB segments, each segment holding data for a distinct purpose - the assembly was alright. You would be able to express what you wanted most of the time. Code density was reasonable.

Instructions would average somewhere around 2-2.5 bytes each.

I don't recall what the per-cycle performance of those processors ended up like, but based on some of the timings in litwr's example code it looks like the simplest operations started at 2 cycles, going upward from there. Simple instructions on x86 would take fewer cycles than simple instructions on the 68000. I don't remember how the memory subsystem performance of a typical XT PC would affect these numbers.

Where the 16-bit assembly model broke down was when general-purpose data didn't fit into a single 64kB block any more... or when the application needed more than 1MB working set.

Once the data didn't fit into 64kB blocks any more, compiled languages didn't handle it well at all, and this would usually result in many more instructions for any pointer handling. Hand-crafted assembly could avoid most of the code bloat for a while, but eventually you would end up doing the same thing: treating all pointers as pairs of WORDs and managing those everywhere. The increased work came both from the lack of 32-bit loads/stores and from the small number of segment registers and the inability to specify literal 32-bit addresses.
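
To illustrate that bookkeeping: a real-mode address is segment*16 + offset, so any pointer that roams over more than 64kB has to be carried as two 16-bit words and renormalised by hand after arithmetic. A rough C model of what the compiler (or the assembly programmer) had to do on every far-pointer operation (helper names are illustrative; the 1MB wrap-around is ignored):

Code:
#include <stdint.h>
#include <stdio.h>

/* A 'far' pointer is two 16-bit words: physical = seg * 16 + off. */
typedef struct { uint16_t seg, off; } far_ptr;

static uint32_t to_physical(far_ptr p) {
    return ((uint32_t)p.seg << 4) + p.off;     /* 20-bit address */
}

/* Advance by n bytes and renormalise so the offset stays small;
   without this, offset arithmetic silently wraps inside 64kB. */
static far_ptr far_add(far_ptr p, uint32_t n) {
    uint32_t phys = to_physical(p) + n;
    far_ptr q = { (uint16_t)(phys >> 4), (uint16_t)(phys & 0xF) };
    return q;
}

int main(void) {
    far_ptr p = { 0x1234, 0xFFF0 };
    far_ptr q = far_add(p, 0x40);              /* would overflow a 16-bit offset */
    printf("%04X:%04X -> %04X:%04X (phys %05X)\n",
           p.seg, p.off, q.seg, q.off, (unsigned)to_physical(q));
    return 0;
}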

Addressing more than 1MB of memory, at least on PCs, was done via bank switching (expanded memory), which was really cumbersome - I never tried writing anything serious with it and moved straight to 32-bit 386 code instead.

386 code, in 32-bit mode, removed all the previously mentioned addressing problems. 386 instructions were a bit larger than 8088-286 instructions, because the 16-bit/32-bit prefix byte system inflated instruction sizes a bit. You would want to store any values you could in 16-bit format instead of 32-bit back then (to save on data cache bandwidth and overall memory usage) but do as much computation as possible in 32-bit format. Given that 32-bit assembly introduced more flexible EA calculations, and there was a decent instruction cache, you stopped trying to use AL/AX/EAX as one of the primary operands; this resulted in overall less convoluted code, but less use of the short forms of instructions. If I were to make a wild guess, I would say 386 code ended up somewhere around 2.5-3 bytes per instruction. This would make simple programs larger in 32-bit format, but large programs would not necessarily be larger on the 386, because they would not need to do the "far pointer dance" all the time.
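
As a concrete illustration of that prefix cost (these are standard x86 encodings in a 32-bit code segment; the C arrays merely hold the raw bytes for comparison): the same register-to-register operation costs one extra byte in its 16-bit form.

Code:
/* Encodings in a 32-bit code segment:                               */
/*   add eax, ebx  ->  01 D8     (2 bytes, native operand size)      */
/*   add  ax,  bx  ->  66 01 D8  (3 bytes, 66h operand-size prefix)  */
/*   inc eax       ->  40        (1 byte)                            */
/*   inc  ax       ->  66 40     (2 bytes)                           */
const unsigned char add_eax_ebx[] = { 0x01, 0xD8 };
const unsigned char add_ax_bx[]   = { 0x66, 0x01, 0xD8 };
const unsigned char inc_eax[]     = { 0x40 };
const unsigned char inc_ax[]      = { 0x66, 0x40 };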

386 assembly code feels like it has roughly the same expressiveness as 68020 assembly code, just a couple fewer registers (which is not a huge problem in practice).

I'm not sure what the PI calculation program is supposed to prove. It's a very small program so suitable both for 68000 and 8088. The calculation itself will be roughly the same set of operations for both processors. The executable header format, and how to invoke OS routines for I/O, has very little to do with the processors and more to do with the OSes. ...?

Last edited by Kalms; 21 October 2018 at 23:00.
Kalms is offline  
Old 22 October 2018, 00:22   #552
roondar
Registered User
 
Join Date: Jul 2015
Location: The Netherlands
Posts: 3,408
Quote:
Originally Posted by Kalms View Post
I'm not sure what the PI calculation program is supposed to prove. It's a very small program so suitable both for 68000 and 8088. The calculation itself will be roughly the same set of operations for both processors. The executable header format, and how to invoke OS routines for I/O, has very little to do with the processors and more to do with the OSes. ...?
As far as I understand it, the comparison is mostly litwr's attempt at proving the x86 has better code density than 68k. To do so, he has rewritten his example so that it uses a specific OS that happens to have a feature that allows him to get a much smaller program than otherwise would've been possible.

Then he claims that the difference, which he basically only achieved through OS-specific features, somehow shows that the ISA (not the OS and environment) is better for code density - because the 68k-based example uses a different OS which conveniently doesn't support the features he used to get his program so small.

So yeah, it's a pretty meaningless comparison all things considered.
roondar is offline  
Old 22 October 2018, 01:45   #553
alpine9000
Registered User
 
Join Date: Mar 2016
Location: Australia
Posts: 881
Quote:
Originally Posted by roondar View Post
So yeah, it's a pretty meaningless comparison all things considered.
It's completely meaningless. Even if you "fixed" the comparison and compared the binary output of each sample excluding OS interaction, it would still be meaningless, as it's an isolated example.

In order to make a real comparison we would need a large number of examples of real-world (not trivial) applications, each hand-optimised (otherwise we could be comparing compiler optimisers rather than ISA efficiency).

And if we do declare a winner, what then? Nothing changes at all.
alpine9000 is offline  
Old 22 October 2018, 02:01   #554
Bruce Abbott
Registered User
 
Join Date: Mar 2018
Location: Hastings, New Zealand
Posts: 2,544
Quote:
Originally Posted by litwr View Post
Just use DOSBox - it is not as fast as VirtualBox, but it supports sound and is more accurate and handy.
DOSBox is an emulator. I already have an IBM emulator on my Amiga, so it was quicker to just use that.

Quote:
it is difficult to figure out why one would buy an A1000 in 1987 when it was possible to buy the better and cheaper A2000 or A500.
I had played with an A1000 before and preferred its styling over the more toy-like A500. The A2000 was quite a bit more expensive, and IIRC wasn't available in NZ at the time.

Quote:
And it is completely impossible for me to imagine why anyone would buy a PCjr clone in 1987 while having an Amiga. Indeed, the Amiga is much better in almost every way.
My reason was to learn about PC architecture, which I had avoided until then. The JX was going cheap and had an excellent technical reference manual. Turned out to be a good decision because some incompatibilities between the JX and PC-XT forced me to learn about device drivers etc., and the 16 color 320x200 graphics mode was more interesting than 4 color CGA (I wrote a paint program for it using a mixture of BASIC and machine code).

Quote:
Maybe it was only more difficult to find an HDD for the Amiga at the time.
The JX had no hard drive either, but it came with two 720k 3.5" floppy drives which were more appealing than the 5.25" drives most PCs had at that time.

BTW I did have a hard drive on the A1000. It was a 20MB Miniscribe 3.5" MFM drive with an ST506 controller. I made an ISA bus adapter and wrote a device driver to make it work on the Amiga. No Autoconfig, but I patched the Kickstart disk so it could boot from a reset-proof RAM disk (it only had to boot from floppy at power-on).

Quote:
However, your claim that a Z80 at 3.2 MHz (the effective frequency of this CPU inside the Amstrad CPC) is faster than an 8088 at 4.77 MHz is completely absurd.
The JX was based on the PCjr. RAM was shared between the CPU and the video controller, which reduced the effective clock frequency to ~3.5MHz. The disk drive controller was (like the Amstrad's) non-DMA - quite annoying, because the infrared keyboard had no flow control, so it lost any characters typed while the disk drive was being accessed! Its ROM BASIC was also significantly slower than the CPC's, despite the CPU running at full speed when executing from ROM.

Quote:
Have you tried programming the Z80?
Yep. Started on a ZX81, then ZX Spectrum, and finally the CPC664. I ported an assembler/debugger package from the Spectrum to the Amstrad and put it in ROM, expanded the RAM to 256k internally (my own design), added an NMI bank switch for debugging and 32k of battery-backed SRAM for ROM emulation. I used this machine to cross compile code for other Z80 based platforms, including games for the Sega SC3000 (MSX predecessor) and a commercial electronic gambling machine.

More recently I designed a multi-function expansion cart for the Mattel Aquarius. Code was cross-compiled on the PC and debugged in the Aquarius using the debugger ported from the Amstrad. For this project I wrote ~12,000 lines of Z80 assembly code.

Quote:
IMHO a PC AT or XT clone with an 8088 or 8086 at 8 MHz was the right PC for 1987...
It was 'right' in that it gave clone manufacturers a relatively easy transition from Apple II to IBM. But the PC's architecture was a mess. IBM tried again with the PCjr and failed miserably, then finally did a proper job with the Personal System/2 - and failed miserably again.

From that time on the industry struggled with maintaining compatibility with the original PC while trying to work around all of its mistakes and limitations, because the one thing that guaranteed failure was not being 100% 'IBM' compatible. Was this because the PC architecture was better? No, just that once it gained a foothold in the marketplace nothing else could compete. The combination of 'nobody ever got fired for buying IBM' and a design that anyone could reproduce was unbeatable.

Quote:
IBM did things on time, Intel and Acorn too, but superior Moto liked to keep people waiting. However, time is the same for everybody. I can think (excuse me if I am wrong) that according to your logic IBM should have waited for the DEC V-11 chip, which had a very wide ISA, or maybe the NS 32016 - they are true 32-bit chips.
Yes, being a bit behind was often a problem for Motorola. However when IBM made their decision the 68000 was imminent. They didn't wait because they didn't want a powerful desktop machine, they just wanted something cheap to stop their customers from buying Apple II's. Intel had that cheap thing, with the promise of better to come once they got it figured out.

In truth the original PC was a dog - over-priced, under-powered, and lacking the features required for business applications (16-64k RAM, a cassette interface, BASIC in ROM, color video output to a TV - what were they thinking?). But IBM virtually open-sourced the design (again, what were they thinking?) and you could put anything in those slots.
Bruce Abbott is offline  
Old 22 October 2018, 02:48   #555
Bruce Abbott
Registered User
 
Join Date: Mar 2018
Location: Hastings, New Zealand
Posts: 2,544
Quote:
Originally Posted by alpine9000 View Post
It's completely meaningless. Even if you "fixed" the comparison and compared the binary output of each sample excluding OS interaction, it would still be meaningless, as it's an isolated example.
Only an isolated example, but it does have a reasonable mix of instructions so I don't think it's completely meaningless. If the difference was a factor of 2 or more it would suggest one was clearly superior, and if a few more reasonable examples gave similar results...

Quote:
In order to make a real comparison we would need a large number of examples of real-world (not trivial) applications, each hand-optimised (otherwise we could be comparing compiler optimisers rather than ISA efficiency).
A fair comparison is harder to achieve with real-world applications because they are often dependent on other features of the platform, which may be difficult to isolate (even the trivial example we have now suffered from that).

Agreed on the compiler. 8 bit CPUs like the Z80 suffer badly, while PCs have the advantage of more mature compilers. Comparing hand-optimized assembly takes away that level of uncertainty.

Quote:
And if we do declare a winner, what then? Nothing changes at all.
It won't change what happened and we all know who 'won', but I still think it's important to separate fact from fiction. With older X86 PCs now becoming vintage like the Amiga, reawakening the debate just adds to the 'retro' experience!
Bruce Abbott is offline  
Old 22 October 2018, 03:31   #556
alpine9000
Registered User
 
Join Date: Mar 2016
Location: Australia
Posts: 881
Quote:
Originally Posted by Bruce Abbott View Post
Only an isolated example, but it does have a reasonable mix of instructions so I don't think it's completely meaningless. If the difference was a factor of 2 or more it would suggest one was clearly superior, and if a few more reasonable examples gave similar results...
I think unless the example is of a "reasonable" size (or the tests are a mix of different sizes, from tiny to huge), it is still fairly meaningless.

Sure there are cases where you can have a useful tiny program with limited data, but that alone cannot be used as a test case to declare either ISA as having the overall best density.
alpine9000 is offline  
Old 22 October 2018, 09:42   #557
meynaf
son of 68k
 
Join Date: Nov 2007
Location: Lyon / France
Age: 51
Posts: 5,323
Quote:
Originally Posted by litwr View Post
Why not you? You haven't published your code and I have. I claimed a result and published the proof; you only made claims.
Why not? Because I didn't keep that old version. You told me extra features were needed and then I put them in. As easy as that.

But OK, if you insist, I will redo it the old way. Perhaps better, so it will be more humiliating for your beloved 386.
Note that you may even take the 68k code yourself and do to it the same things you did to your version...

Anyway, what you claimed is wrong.
You did not in any manner document what features you removed from the program - actually, you did not even say you removed anything!
Thus your claims suggest everything is there...
So what did you do? Remove the time measurement code, remove the variable number of digits, and pretend it's the same now? Isn't that outright cheating?

However, why not instead do something better and remove all OS-dependent code from the equation, as was suggested?


EDIT: done - spigot version in The Zone, 236 bytes - it outputs 1000 digits and does not use OS formatting routines.
As you can see, I can play your feature-reduction game too.

Last edited by meynaf; 22 October 2018 at 11:17. Reason: new spigot version done
meynaf is offline  
Old 22 October 2018, 09:58   #558
meynaf
son of 68k
 
Join Date: Nov 2007
Location: Lyon / France
Age: 51
Posts: 5,323
Quote:
Originally Posted by alpine9000 View Post
In order to make a real comparison we would need a large number of examples of real-world (not trivial) applications, each hand-optimised (otherwise we could be comparing compiler optimisers rather than ISA efficiency).
Nobody is going to write whole hand-optimised, non-trivial programs for x86 (at least, not for the purpose of that test).
But compiler output isn't as meaningless as it looks. Knowing that x86 compilers are usually somewhat better than 68k compilers, and that many x86 games have on average 1.5 times the code size (it's Don Adan's claim, not mine, but I have given a clear example previously), I think we already know the winner.


Quote:
Originally Posted by alpine9000 View Post
And if we do declare a winner, what then? Nothing changes at all.
Something might change: litwr will keep his mouth shut


Quote:
Originally Posted by Bruce Abbott View Post
In truth the original PC was a dog - over-priced, under-powered, and lacking the features required for business applications (16-64k RAM, a cassette interface, BASIC in ROM, color video output to a TV - what were they thinking?). But IBM virtually open-sourced the design (again, what were they thinking?) and you could put anything in those slots.
What were they thinking, you ask? They thought the machine had no future, and therefore it was not worth spending effort on the design, nor even worth protecting with a decent copyright!


Quote:
Originally Posted by alpine9000 View Post
I think unless the example is of a "reasonable" size (or the tests are a mix of different sizes, from tiny to huge), it is still fairly meaningless.

Sure there are cases where you can have a useful tiny program with limited data, but that alone cannot be used as a test case to declare either ISA as having the overall best density.
For the huge programs we already know the answer, I think.
For the tiny ones, at least there is some competition, and that alone makes it interesting.
meynaf is offline  
Old 22 October 2018, 14:15   #559
grond
Registered User
 
Join Date: Jun 2015
Location: Germany
Posts: 1,918
Quote:
Originally Posted by litwr View Post
Intel's ISA was not as huge and complex as Moto's. Indeed, the VAX had an even more complex ISA. You mentioned that RISC architectures are easier to develop, and x86 was closer to RISC than the 68k.
I don't think that early x86 was closer to RISC than the 68k, because early x86 was much less orthogonal than the 68k, and orthogonality is one important factor that makes designing RISCs easier than CISCs. Kudos to Intel for being ahead of most of the superior competing ISAs in actual implementations when it comes to speed.


Quote:
The IBM PC was not very good, but it had an open architecture which allowed cheap clones. Motorola, on the other hand, preferred to rely on military contracts and other closed architectures.
Motorola probably earned more than IBM did with the PC, so their business model made some sense. Intel and Microsoft, on the other hand, earned from each and every one of those shitty PCs sold. The rest is history.


Quote:
We have x86, x86-64, ARM, ARM-64, MIPS, Xilinx MicroBlaze, RISC-V, ...
I wouldn't count x86 vs x86-64 and ARM vs ARM-64 as competing ISAs. Hence, with the exception of RISC-V, you only get a few remaining ISAs from the much larger ISA zoo we had twenty years ago.


Quote:
x86 was the best at the right time. It was the best when IBM needed a CPU. It was the best when the time came to use fast 6502-like hardware.
If you replace "best" with "most suitable for the very low original specifications", I can agree.
grond is offline  
Old 22 October 2018, 14:28   #560
plasmab
Banned
 
Join Date: Sep 2016
Location: UK
Posts: 2,917
x86 is not remotely close to RISC. Nor is 68K. Both are prime examples of CISC.

Why don’t you two just give up. This whole thread is pointless.
plasmab is offline  
 

