Thread: Next gen Amiga
23 May 2018, 21:21   #526
meynaf
son of 68k
 
 
Join Date: Nov 2007
Location: Lyon / France
Age: 51
Posts: 5,350
Quote:
Originally Posted by Megol View Post
Smaller: yes. A few percent at most.
A few percent, when just about everything becomes twice the size?


Quote:
Originally Posted by Megol View Post
Not easier to support in software.
I wrote "easier to support in hardware".
But anyway, of course it is. Software is always easier to write when it has fewer things to handle.


Quote:
Originally Posted by Megol View Post
Faster: in a pure speed-demon design, yes. A few percent lower latency in adder path and about one clock lower latency for multiplication (unless a 32x32->64 instruction is supported). In practice no.
You forget quite a lot of things that actually make it slower.
Remember you have to move twice the amount of data for every pointer you touch (and for some of the data too). That means more pressure on the caches. That means slower.
And I'm not even speaking about routing latencies, which won't help (even though this matters more in an FPGA than in an ASIC).

Actually, I wouldn't be surprised if some x86 programs run slower in 64-bit than in 32-bit, in spite of that mode having twice the registers.


Quote:
Originally Posted by Megol View Post
And the downside: whenever you want to do additions on 64 bits you need two instructions with an additional clock latency.
This is easy to fix, and I explained how.
And anyway the case is too rare to amount to anything significant.


Quote:
Originally Posted by Megol View Post
Or at least 6 instructions to do a 64x64->128 multiplication, this is being overly generous by assuming ternary addition instructions with carry support.
You speak about a nonexistent downside. When did I last need a 64x64->128 multiplication in code? Oh yes: never!


Quote:
Originally Posted by Megol View Post
Why would 64 bit have anything to do with being orthogonal?
Because to be 64-bit you have to remove things.
There are quite a few operations available in 32-bit x86 that 64-bit x86 does not support. Try to guess why.


Quote:
Originally Posted by Megol View Post
The idea that a 32 bit processor is easier to use than a 64 bit processor is just ludicrous.
That brings nothing to the discussion. You're verging on name-calling.


Quote:
Originally Posted by Megol View Post
And you, as usual, don't give any reason why that would be.
Nope.
First, you're at that limit again, like above.
Second, I already told you. Did you read and understand what I wrote?
But OK, here it is again.
Try to encode the 68020 with 64-bit added. You will quickly see that you have to either lose orthogonality or remove some features, simply because there is not enough encoding space to fit this extravagant "upgrade".


Quote:
Originally Posted by Megol View Post
You want to waste more instruction bytes to do something,
Since when are you concerned about wasting instruction bytes? Your proposal is much worse than this!
Again, be realistic. The 64-bit datatype is rare even in 64-bit programs, so those few bytes are more than compensated for by having a better instruction set.
You want proof? You encode your thing, I'll encode mine. We both write a routine of significant size and compare the sizes... Oh, well. Something tells me you're not gonna do this.


Quote:
Originally Posted by Megol View Post
you want to complicate hardware making it slower
Nope.


Quote:
Originally Posted by Megol View Post
and you don't see the problem.
If you see a problem, explain it.


Quote:
Originally Posted by Megol View Post
The no-go place that the most popular processor architecture is in?
First, being popular has never meant being technically good.
Second, no popular processor architecture is really in that place anymore.


Quote:
Originally Posted by Megol View Post
I wonder what construct would need 4-5 instructions when the 68k only need one? MOVEM possibly. However I can see many common operations where a 68k would need two+ instructions that map to one C/RISC instruction.
4-5, yes, that does exist, though it's maybe rare enough; but for common instructions, 3 is easy to find.
Something like ADD.W D0,(A0)+.
Or BSET #0,(A1).

Of course instructions exist that may need 8 or more. Think about something like BFINS.

But even if it's just 2, that means twice the number of instructions to decode and execute. By definition it cannot be faster, because RISC or CISC, both can reach the same IPC (actually perhaps more for CISC, since the code is smaller and the icache can therefore provide more instructions per clock).


Quote:
Originally Posted by Megol View Post
And you again doesn't give any example of _why_ it would be a pain in the ass.
I have, but why would I do it again? It seems you just don't read what I write, and you repeat the same wrong things over and over.


Quote:
Originally Posted by Megol View Post
Everything but 68k is apparently horrible.
That's true. But you need to write enough code to know it. And I'm still waiting for the code comparison we could have made.

I know from experience that only the 68k makes a few things possible, like disassembling megabytes of code and altering it to run on another platform, or writing entire apps (of significant size) in asm.

Work is always easier when you have the right tool for the job.
The 68000 was designed after statistical studies of existing programs.
I would do the same if asked to design my own; I even already have such statistics.
Did you do that? If not, you cannot speak about what's useful in programs and what's not, period.