English Amiga Board


turrican3 07 October 2019 05:36

Error from Motorola? Switching 68k to PPC
 
Hi guys,
what do you think about this: Motorola continuing the 68k CPUs into the 90s instead of switching to PPC?
I think the switch hurt the Amiga and the Mac... both went bankrupt or nearly bankrupt (they were saved by Bill Gates).
Would it have been possible to carry on with 68000-compatible chips, the way the PC did with x86, instead of switching to PPC?
From a technical point of view, was it possible to keep evolving 68k compatibles the way x86 did?

Jope 07 October 2019 12:35

Sure, it would have been possible. My guess is the volume just wasn't there to make it economically feasible, though.

The desktop world went x86, the embedded world went ARM.

Locutus 07 October 2019 12:52

Something Something Hindsight.

Hewitson 07 October 2019 13:44

I'd have preferred to see the 68k series of CPUs achieve faster speeds and more capabilities, but this had absolutely NOTHING to do with Commodore going bankrupt.

Saved by Bill Gates? More like killed by Bill Gates.

Aladin 07 October 2019 15:49

@Hewitson

https://youtu.be/WxOp5mBY9IY

Vypr 07 October 2019 16:42

I think the Amiga was more hurt by Commodore's frankly ludicrous business decisions than Motorola's lack of movement from 68K to PPC.
Plus the PowerPC processor wasn't introduced until 1992, which is only two years before Commodore finally folded.

turrican3 08 October 2019 00:33

I don't think it was the only reason,
but it didn't help.
And it didn't help after the Amiga was bought either, because the new owners had to design new hardware and port AmigaOS to PPC, which added costs.
We won't remake history today, but you know, if... if... then perhaps... who knows what could have happened?




PS: But I would be happy to see a new platform with new hardware, something fresh with different games, something like the Amiga even under a different name... I have the impression that every year is the same, no more surprises. Our times lack the kind of surprise the Amiga and the Atari ST were back then. For me, we have been following a boring road for some years now. It's just my feeling; I suppose others feel differently.

jotd 08 October 2019 21:40

They could have reduced the instruction set (like with Coldfire) if that was an issue, instead of making up this PPC RISC nonsense that no one except a compiler can program properly, and breaking binary compatibility at a time when emulating a 68040 on a PPC was still slow. A stupid move when everyone loved the 68k series...

The x86 instruction set has stayed compatible since 1980 or earlier, and both compilers AND humans can generate good, fast code for it. That's probably where its strength comes from.

Bruce Abbott 11 October 2019 23:02

Quote:

Originally Posted by turrican3 (Post 1349748)
Hi guys,
what do you think about this: Motorola continuing the 68k CPUs into the 90s instead of switching to PPC?

They did continue 68k, with Coldfire. The problem was they couldn't make the 68k powerful enough to compete with Intel. Going RISC seemed like a good idea at the time... (and let's not forget that Intel did that too, but quickly realized the error of their ways).

But by that time it wouldn't have mattered what Motorola did - bar making Pentium clones. The PC was it, and anything not 100% compatible was bound to lose in the marketplace. So the real reason Motorola failed was that they didn't have the 68000 ready in time for the PC in 1980. After that mistake nothing could save them.

However, if it wasn't for that the Amiga might never have happened! Or if they had managed to compete (and Commodore survived) the Amiga would have become a virtual PC clone, with all the attendant downsides. So in the end Motorola switching to PPC and Commodore going bankrupt was a good thing for retro Amiga fans, because it froze the design when it was at its best.

AmigaHope 13 October 2019 22:02

The great irony of RISC was that the very thing which made it easy to implement in limited silicon (simplicity) became a drawback in an era of plentiful transistors, long pipelines and stalled clock increases. In the end, CISC acts as a sort of real-time code compression. Sure, you could do a better compression scheme data-wise, but CISC has the benefit of keeping it syntactically logical.

68k is by far the most sensible instruction set I've ever seen for a processor. Imagine what could have been done to make it faster with the might of a big company like Intel behind it, the way they did with the far-less-elegant x86 instruction set.
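
To make the "real-time code compression" point concrete, here is a tiny C function with hand-written, illustrative encodings in the comments (byte counts assume a 68k register-indirect increment and a MIPS-style load/store machine; rough figures, not compiler output):

Code:

/* Incrementing a long in memory: a classic code-density comparison. */
void bump(long *p)
{
    *p += 1;    /* 68k:   addq.l #1,(a0)            -- one 2-byte instruction            */
                /* MIPS-style RISC (illustrative):
                 *        lw    $t0, 0($a0)
                 *        addiu $t0, $t0, 1
                 *        sw    $t0, 0($a0)          -- three 4-byte instructions, 12 bytes */
}

The CISC version folds the load, the add and the store into one short opcode, which is exactly the "compression" effect described above.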

NorthWay 13 October 2019 23:32

Strictly speaking, it was the 88K that was the next step for Motorola. And I don't blame them for pursuing either that or PPC - as long as it looked like clock speeds were going up and up, you were ahead with the 'R'.
What I'm not so impressed by is dropping the 68K line, or the Coldfire replacements they did.

I've been trying to keep up with modern CPU design wisdom, and the contortions going on behind the scenes in a modern one are frightening. So much so that I think the Itanium/VLIW approach was right in principle; it was more the particular implementation and the failure of compilers to back it up that made everyone shun it.
Which is why I am so keenly keeping an eye on what the people behind the "Mill" are doing. It feels like a CPU designed by compiler writers - when something is difficult they get an architecture feature to help them succeed. They really believe you can do massive amounts of operations at once (unlike conventional wisdom which says it maxes out around 5), and have done lots of work to shrink opcode size and to expose inner parts of the architecture to the sw side.

turrican3 14 October 2019 01:06

Quote:

Originally Posted by Bruce Abbott (Post 1350634)
They did continue 68k, with Coldfire. The problem was they couldn't make the 68k powerful enough to compete with Intel.

Could you explain why?
Or maybe someone else knows why?
What did the x86 have that the 68k didn't, to keep going down that road?

meynaf 14 October 2019 08:54

Quote:

Originally Posted by Bruce Abbott (Post 1350634)
So in the end Motorola switching to PPC and Commodore going bankrupt was a good thing for retro Amiga fans, because it froze the design when it was at its best.

An often overlooked point.


Quote:

Originally Posted by AmigaHope (Post 1351101)
Imagine what could have been done to make it faster with the might of a big company like Intel behind it, the way they did with the far-less-elegant x86 instruction set.

Horrors, maybe? x86 wasn't nice to start with, but today it is a complete mess. Fortunately the 68k never got all the garbage they added over the years (something like >1300 instructions overall?).


Quote:

Originally Posted by NorthWay (Post 1351144)
I've been trying to keep up with modern CPU design wisdom, and the contortions going on behind the scenes in a modern one are frightening.

It may be frightening, but at least it works. Intel CPUs may be horrors internally, but their performance is awesome.


Quote:

Originally Posted by NorthWay (Post 1351144)
So much so that I think the Itanium/VLIW approach was right in principle; it was more the particular implementation and the failure of compilers to back it up that made everyone shun it.

But it wasn't right. And the failure to have compilers back it up was just a natural consequence.


Quote:

Originally Posted by NorthWay (Post 1351144)
Which is why I am so keenly keeping an eye on what the people behind the "Mill" are doing. It feels like a CPU designed by compiler writers - when something is difficult they get an architecture feature to help them succeed.

Splitting instruction streams isn't exactly a way to make code more readable, and readable code is the key to efficient programs. For me it doesn't feel at all like a CPU designed by compiler writers (which could indeed have been a good idea).


Quote:

Originally Posted by NorthWay (Post 1351144)
They really believe you can do massive amounts of operations at once (unlike conventional wisdom which says it maxes out around 5),

Doing massive amounts of operations at once is already possible. You can do that with a GPU or with many cores.
Yes, for a single general-purpose core it maxes out around 5 - I would have said 4 - and that number is a maximum that isn't always reached because of dependencies, so going wider wouldn't make any sense.
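
To make the dependency argument concrete, here is a small illustration in plain C (assuming a simple core where each add takes one cycle; the 4-way split mirrors the "around 4 or 5" figure above):

Code:

#include <stddef.h>

/* Serial dependency chain: every add needs the previous sum, so even a
 * very wide core retires roughly one add per cycle here. */
long sum_serial(const long *a, size_t n)
{
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Four independent accumulators: up to four adds can be in flight at
 * once, which is roughly where typical integer code stops gaining.
 * Any leftover tail elements (n not a multiple of 4) are ignored
 * here for brevity. */
long sum_split(const long *a, size_t n)
{
    long s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    for (size_t i = 0; i + 3 < n; i += 4) {
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    return s0 + s1 + s2 + s3;
}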


Quote:

Originally Posted by NorthWay (Post 1351144)
and have done lots of work to shrink opcode size

Are you sure they are really doing that ? It's the opposite of VLIW.


Quote:

Originally Posted by NorthWay (Post 1351144)
and to expose inner parts of the architecture to the sw side.

That's THE mistake CPU designers shouldn't make. Exposing the implementation to the sw side is bound to fail because implementations change all the time.


Quote:

Originally Posted by turrican3 (Post 1351162)
What did the x86 have that the 68k didn't, to keep going down that road?

Massive amounts of money.

NorthWay 15 October 2019 23:27

Quote:

Originally Posted by meynaf (Post 1351211)
It may be frightening, but at least it works. Intel CPUs may be horrors internally, but their performance is awesome.

I was thinking of modern out-of-order CPUs in general. The additional logic needed to keep hundreds of instructions in flight at any one time costs so many transistors, so much heat and complexity, that it dwarfs much of the rest of the logic (cache etc. notwithstanding).

Quote:

Originally Posted by meynaf (Post 1351211)
But it wasn't right. And the failure to have compilers back it up was just a natural consequence.

Well, the failure might just as well have been because the hw designers made their new toy and kicked it over to the sw side with a "make this work", and as such it was a natural consequence. There have been many opinions on the Itanium - of which a number point to the 'design by committee' nature of it - but I'm not sure it is _fundamentally_ flawed and can't be done right. But I also don't expect to find out, with the possible exception of Mill.


Quote:

Originally Posted by meynaf (Post 1351211)
Splitting instruction streams isn't exactly a way to make code more readable, and readable code is the key to efficient programs. For me it doesn't feel at all like a CPU designed by compiler writers (which could indeed have been a good idea).

Compiler writers don't care much about any "beauty" of the instruction set, they want anything to make their life easier.

Quote:

Originally Posted by meynaf (Post 1351211)
Doing massive amounts of operations at once is already possible. You can do that with a GPU or with many cores.
Yes, for a single general-purpose core it maxes out around 5 - I would have said 4 - and that number is a maximum that isn't always reached because of dependencies, so going wider wouldn't make any sense.

Well, as I said, the Mill designers heavily oppose this thinking and aim to up the parallelism a lot on regular code.

Quote:

Originally Posted by meynaf (Post 1351211)
Are you sure they are really doing that ? It's the opposite of VLIW.

Yes. The size of the individual opcodes themselves, but not the parallel "packet" they are delivered in. Their worry is that to be able to do so many opcodes in parallel they need smaller opcodes, so they can feed and decode more of them and not blow cache/line sizes.
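
A rough back-of-the-envelope on why opcode size matters for wide issue (the 64-byte fetch width is an assumed figure for illustration, not a Mill specification):

Code:

#include <stdio.h>

/* Toy fetch-bandwidth calculation: how many instructions fit into one
 * fetch of an assumed 64-byte block, for various average opcode sizes. */
int main(void)
{
    const int fetch_bytes = 64;                /* assumed fetch width per cycle */
    const int opcode_sizes[] = { 4, 3, 2 };    /* average bytes per instruction */

    for (int i = 0; i < 3; i++)
        printf("%d-byte opcodes: at most %d instructions per fetch\n",
               opcode_sizes[i], fetch_bytes / opcode_sizes[i]);
    return 0;
}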

Quote:

Originally Posted by meynaf (Post 1351211)
That's THE mistake CPU designers shouldn't make. Exposing the implementation to the sw side is bound to fail because implementations change all the time.

Well, they force you to use an intermediate Mill instruction set that is then finalized for the Mill type you are running it on. That is IMO the weakest link in their plans - not necessarily for technical reasons but for getting mental buy-in from their target audience.
But I do agree with their thinking; if you want to replace the extra complexity of modern cpus with logic that does more, then you need to expose internals to the sw side and you need to come up with new ideas to help this along. But I'll say Mill is brave to think they can do this.


(Yes, I like the Mill architecture a lot. It isn't any single (or few) thing(s), and therefore not something you can gift to whatever other cpu family, but a long long line of interconnected and coherent ideas that feel fresh and exciting. I _do_ worry about what clock speed they can achieve though.)

AmigaHope 16 October 2019 02:47

Eh I still find 68k the best in terms of code density *AND* readability, while still being compiler-friendly. Since it's cleaner than x86, the CISC -> microop pipeline in a theoretical modern 68k CPU could be a lot cleaner and smaller, yet still provide a lot of the advantages that x86 code has today.

Of course now that we're reaching the true hard limits of silicon, we might see a future where a new process (either electronic with a new chemistry, or maybe photonic?) provides 10X-100X the clock rate but with fewer transistors, and RISC might become the future again.

meynaf 16 October 2019 10:17

Quote:

Originally Posted by NorthWay (Post 1351697)
I was thinking of modern out-of-order CPUs in general. The additional logic needed to keep hundreds of instructions in flight at any one time costs so many transistors, so much heat and complexity, that it dwarfs much of the rest of the logic (cache etc. notwithstanding).

Yes, but does the complexity of the OoO part go down significantly when you trim the instruction set? If I remember correctly, it does not depend that much on the ISA.


Quote:

Originally Posted by NorthWay (Post 1351697)
Well, the failure might just as well have been because the hw designers made their new toy and kicked it over to the sw side with a "make this work", and as such it was a natural consequence. There have been many opinions on the Itanium - of which a number point to the 'design by committee' nature of it - but I'm not sure it is _fundamentally_ flawed and can't be done right. But I also don't expect to find out, with the possible exception of Mill.

I think that if it is developer hostile, it is fundamentally flawed.


Quote:

Originally Posted by NorthWay (Post 1351697)
Compiler writers don't care much about any "beauty" of the instruction set, they want anything to make their life easier.

But it comes down to the same thing in the end; what makes the beauty here is the efficiency...


Quote:

Originally Posted by NorthWay (Post 1351697)
Well, as I said, the Mill designers heavily oppose this thinking and aim to up the parallelism a lot on regular code.

Then they haven't studied regular code...


Quote:

Originally Posted by NorthWay (Post 1351697)
Yes. The size of the individual opcodes themselves, but not the parallel "packet" they are delivered in. Their worry is that to be able to do so many opcodes in parallel they need smaller opcodes, so they can feed and decode more of them and not blow cache/line sizes.

I still think they would be better off by using the same opcode on different data, rather than many opcodes at once.
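
That is essentially the SIMD/vector argument. A plain-C illustration (it relies on an auto-vectorising compiler, so the lane width is whatever the target happens to provide):

Code:

#include <stddef.h>

/* "Same opcode, different data": one add applied across a whole array.
 * Built with optimisation enabled, most compilers turn this loop into
 * SIMD instructions that add 4, 8 or more elements per instruction,
 * instead of issuing many distinct scalar opcodes side by side. */
void add_arrays(float *dst, const float *a, const float *b, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = a[i] + b[i];
}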


Quote:

Originally Posted by NorthWay (Post 1351697)
Well, they force you to use an intermediate Mill instruction set that is then finalized for the Mill type you are running it on. That is IMO the weakest link in their plans - not necessarily for technical reasons but for getting mental buy-in from their target audience.
But I do agree with their thinking; if you want to replace the extra complexity of modern cpus with logic that does more, then you need to expose internals to the sw side and you need to come up with new ideas to help this along. But I'll say Mill is brave to think they can do this.

(Yes, I like the Mill architecture a lot. It isn't any single (or few) thing(s), and therefore not something you can gift to whatever other cpu family, but a long long line of interconnected and coherent ideas that feel fresh and exciting. I _do_ worry about what clock speed they can achieve though.)

Well, I admit my POV is exactly the opposite. As an asm programmer, I would *not* want to code on that...

I fear this is once again theoretical power on paper.

They want to execute many things in parallel, but how many tasks are CPU-bound today?
They may just implement it, and then discover that the memory interface can't keep up and their performance is not significantly better than everyone else's.

Or find out that most of the instructions in their blocks are NOPs due to inter-instruction dependencies.
If the Mill is an in-order CPU, it will be beaten by OoO on regular code regardless of how many things it can do at once, as it WILL have to wait for intermediate results to arrive.


Quote:

Originally Posted by AmigaHope (Post 1351718)
Eh I still find 68k the best in terms of code density *AND* readability, while still being compiler-friendly. Since it's cleaner than x86, the CISC -> microop pipeline in a theoretical modern 68k CPU could be a lot cleaner and smaller, yet still provide a lot of the advantages that x86 code has today.

It would actually beat the crap out of x86 in many places, but it's not going to happen.


Quote:

Originally Posted by AmigaHope (Post 1351718)
Of course now that we're reaching the true hard limits of silicon, we might see a future where a new process (either electronic with a new chemistry, or maybe photonic?) provides 10X-100X the clock rate but with fewer transistors, and RISC might become the future again.

The current limit on clock rate is more the generated heat (which doesn't scale linearly with speed but actually worse, IIRC) than the switching speed of individual transistors.
I don't see how this could become different in the future.
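
For reference, the usual first-order CMOS model behind that (a textbook approximation, not a figure for any particular chip):

Code:

P_{dyn} \approx \alpha \, C \, V^2 f, \qquad f \propto V \;\Rightarrow\; P_{dyn} \propto f^3

Raising the clock normally means raising the supply voltage too, which is why dissipated power grows roughly with the cube of frequency rather than linearly.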

AmigaHope 16 October 2019 17:27

Quote:

Originally Posted by meynaf (Post 1351756)
The current limit on clock rate is more the generated heat (which doesn't scale linearly with speed but actually worse, IIRC) than the switching speed of individual transistors.
I don't see how this could become different in the future.

Hence any advancement would need a different chemistry -- it could potentially generate less heat when moving electrons around, or even be a photonic rather than electronic system.

Let's say we found a workable tech that could clock to 100 GHz with minimal heat generation, but with the caveat that gate density was nowhere near as good as our current silicon designs. It might require a much simpler architecture, losing a lot of the advancements we've made in instruction decoding. Code might have to go to a fully orthogonal RISC instruction set with a very limited set of instructions. Removing other features might cause IPC to go down considerably, but since the damn thing is running at 100 GHz it's still many times faster than what we have today.

meynaf 16 October 2019 17:34

Quote:

Originally Posted by AmigaHope (Post 1351863)
Let's say we found a workable tech that could clock to 100 GHz with minimal heat generation, but with the caveat that gate density was nowhere near as good as our current silicon designs. It might require a much simpler architecture, losing a lot of the advancements we've made in instruction decoding. Code might have to go to a fully orthogonal RISC instruction set with a very limited set of instructions. Removing other features might cause IPC to go down considerably, but since the damn thing is running at 100 GHz it's still many times faster than what we have today.

I don't see the point in having a design that will spend all its time waiting on the memory interface. Because if you don't have space for a good enough decoder, you also don't have space for a decent on-chip cache.
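
To put a number on that (the 100 ns DRAM latency is a ballpark assumption, and the core is the hypothetical 100 GHz part from the previous post):

Code:

#include <stdio.h>

/* Toy illustration of the memory wall at very high clock rates:
 * cycles lost on every access that misses the (small) on-chip cache. */
int main(void)
{
    const double clock_hz     = 100e9;     /* hypothetical 100 GHz core  */
    const double dram_latency = 100e-9;    /* ~100 ns main-memory access */

    printf("Cycles stalled per cache miss: %.0f\n", clock_hz * dram_latency);
    return 0;
}

Ten thousand idle cycles per miss is why the raw clock rate alone would buy very little.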

turrican3 17 October 2019 00:58

Quote:

Originally Posted by meynaf (Post 1351211)

Massive amounts of money.

If you had $1,000,000,000, would it be a good goal in your view to make a new 68k-compatible Amiga, but with the power of a new Intel i7?
Do you have an idea of which would be better, a powerful 68k or a Core i7?
Could the present have been better, in your POV?
All of this is theoretical, of course.

Hewitson 17 October 2019 09:04

Quote:

Originally Posted by turrican3 (Post 1351946)
If you had $1,000,000,000, would it be a good goal in your view to make a new 68k-compatible Amiga, but with the power of a new Intel i7?
Do you have an idea of which would be better, a powerful 68k or a Core i7?
Could the present have been better, in your POV?
All of this is theoretical, of course.

The only way that's happening is if AmigaOS is ported to x86/x64 with a 68k emulation layer. Something that should have been done years ago, really. But no, a bunch of complete morons decided PPC was the way to go.
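
For what a "68k emulation layer" means in practice, a very stripped-down fetch/decode/execute loop might look like the sketch below. Only MOVEQ and NOP are decoded and the register model is simplified; a real interpreter or JIT (like the ones in UAE) handles the full instruction set, condition codes, exceptions and memory mapping:

Code:

#include <stdint.h>

typedef struct {
    uint32_t d[8], a[8];   /* emulated data and address registers */
    uint32_t pc;           /* emulated program counter            */
} M68kState;

/* 68k code is big-endian, so assemble the opcode word byte by byte. */
static uint16_t fetch16(const uint8_t *mem, uint32_t addr)
{
    return (uint16_t)((mem[addr] << 8) | mem[addr + 1]);
}

/* Fetch/decode/execute loop: runs until an unhandled opcode appears
 * or the step budget is spent. */
void run(M68kState *cpu, const uint8_t *mem, int max_steps)
{
    for (int i = 0; i < max_steps; i++) {
        uint16_t op = fetch16(mem, cpu->pc);
        cpu->pc += 2;

        if ((op & 0xF100) == 0x7000) {      /* moveq #imm,Dn: sign-extend 8-bit immediate */
            cpu->d[(op >> 9) & 7] = (uint32_t)(int32_t)(int8_t)(op & 0xFF);
        } else if (op == 0x4E71) {          /* nop */
            /* nothing to do */
        } else {
            break;                          /* opcode not handled in this sketch */
        }
    }
}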

