English Amiga Board


Old 07 October 2019, 05:36   #1
turrican3
Moon 1969 = amiga 1985
 
turrican3's Avatar
 
Join Date: Apr 2007
Location: belgium
Age: 48
Posts: 3,913
Error from Motorola? Switching 68k to PPC

Hi guys,
what do you think about this:
Motorola continuing the 68k CPUs into the 90's instead of switching to PPC.
I think the switch hurt the Amiga and the Mac... the two went bankrupt or nearly bankrupt (they were saved by Bill Gates).
Would it have been possible to continue with compatible 68000 successors, like the PC did, instead of switching to PPC?
From a technical point of view, was it possible to continue with 68k compatibles the way x86 did?
turrican3 is offline  
Old 07 October 2019, 12:35   #2
Jope
-
 
Jope's Avatar
 
Join Date: Jul 2003
Location: Helsinki / Finland
Age: 43
Posts: 9,863
Sure, it would have been possible. My guess is the volume just wasn't there to make it economically feasible, though.

The desktop world went x86, the embedded world went ARM.
Jope is offline  
Old 07 October 2019, 12:52   #3
Locutus
Registered User
 
Join Date: Jul 2014
Location: Finland
Posts: 1,178
Something Something Hindsight.
Locutus is online now  
Old 07 October 2019, 13:44   #4
Hewitson
Registered User
 
Hewitson's Avatar
 
Join Date: Feb 2007
Location: Melbourne, Australia
Age: 41
Posts: 3,773
I'd have preferred to see the 68k series of CPUs achieve faster speeds and more capabilities, but this had absolutely NOTHING to do with Commodore going bankrupt.

Saved by Bill Gates? More like killed by Bill Gates.
Hewitson is offline  
Old 07 October 2019, 15:49   #5
Aladin
Registered User
 
Join Date: Nov 2016
Location: France
Posts: 854
@Hewitson

[ Show youtube player ]
Aladin is offline  
Old 07 October 2019, 16:42   #6
Vypr
Registered User
 
Vypr's Avatar
 
Join Date: Dec 2016
Location: East Kilbride, Scotland
Posts: 451
I think the Amiga was hurt more by Commodore's frankly ludicrous business decisions than by Motorola's lack of movement from 68K to PPC.
Plus the PowerPC processor wasn't introduced until 1992, which is only two years before Commodore finally folded.
Vypr is offline  
Old 08 October 2019, 00:33   #7
turrican3
Moon 1969 = amiga 1985
 
turrican3's Avatar
 
Join Date: Apr 2007
Location: belgium
Age: 48
Posts: 3,913
I don't think it was the only reason, but it didn't help.
And it didn't help when the Amiga was bought, because the new owners had to build new hardware and port AmigaOS to PPC, which added costs.
We won't rewrite history today, but you know, if... if... then perhaps... who knows what could have happened?



PS: I would be happy to see a new platform with new hardware, something fresh with different games, something like the Amiga even under a different name... I have the impression that every year is the same, no more surprises. Our times lack the kind of surprises the Amiga and the Atari ST were back then. For me, we have been following a boring road for some years now. It's just my feeling; I suppose others feel differently.
turrican3 is offline  
Old 08 October 2019, 21:40   #8
jotd
This cat is no more
 
jotd's Avatar
 
Join Date: Dec 2004
Location: FRANCE
Age: 52
Posts: 8,201
They could have reduced the instruction set (like ColdFire did) if that was the issue, instead of coming up with this PPC RISC nonsense that no one but a compiler can program properly, and breaking binary compatibility at a time when emulating a 68040 on a PPC was still slow. A stupid move when everyone loved the 68k series...

The x86 instruction set has been backward compatible since 1980 or earlier, and both compilers AND humans can generate good, fast code for it. That's probably what makes its strength.
jotd is offline  
Old 11 October 2019, 23:02   #9
Bruce Abbott
Registered User
 
Bruce Abbott's Avatar
 
Join Date: Mar 2018
Location: Hastings, New Zealand
Posts: 2,585
Quote:
Originally Posted by turrican3 View Post
Hi guys,
what do you think about this:
Motorola continuing the 68k CPUs into the 90's instead of switching to PPC.
They did continue 68k, with Coldfire. The problem was they couldn't make the 68k powerful enough to compete with Intel. Going RISC seemed like a good idea at the time... (and let's not forget that Intel did that too, but quickly realized the error of their ways).

But by that time it wouldn't have mattered what Motorola did - bar making Pentium clones. The PC was it, and anything not 100% compatible was bound to lose in the marketplace. So the real reason Motorola failed was that they didn't have the 68000 ready in time for the PC in 1980. After that mistake nothing could save them.

However, if it wasn't for that the Amiga might never have happened! Or if they had managed to compete (and Commodore survived) the Amiga would have become a virtual PC clone, with all the attendant downsides. So in the end Motorola switching to PPC and Commodore going bankrupt was a good thing for retro Amiga fans, because it froze the design when it was at its best.
Bruce Abbott is offline  
Old 13 October 2019, 22:02   #10
AmigaHope
Registered User
 
Join Date: Sep 2006
Location: New Sandusky
Posts: 942
The great irony of RISC was that what made it easy to implement in limited silicon (simplicity) became a drawback in an era of plentiful transistors, long pipelines and stalled clock increases. In the end, CISC acts as a sort of realtime code compression. Sure, you could do a better compression scheme data-wise, but CISC has the benefit of keeping it syntactically logical.

68k is by far the most sensible instruction set I've ever seen for a processor. Imagine what could have been done to make it faster with the might of a big company like Intel behind it, the way they did with the far-less-elegant x86 instruction set.
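To put rough numbers on the density point, here is a minimal sketch (a hypothetical illustration, not real compiler output; the byte counts in the comments are approximate):

Code:
#include <stddef.h>

/* Copy n bytes from src to dst.
 *
 * On a 68k the loop body can be a single 2-byte instruction,
 *     move.b (a0)+,(a1)+   ; copy one byte, post-increment both pointers
 * plus a 4-byte dbra for the loop, roughly 6 bytes of code.
 *
 * A classic fixed-width load/store RISC needs separate load, store,
 * pointer increments and branch: around 5 instructions of 4 bytes
 * each, i.e. roughly 20 bytes for the same loop.
 */
void copy_bytes(unsigned char *dst, const unsigned char *src, size_t n)
{
    while (n--)
        *dst++ = *src++;
}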
AmigaHope is offline  
Old 13 October 2019, 23:32   #11
NorthWay
Registered User
 
Join Date: May 2013
Location: Grimstad / Norway
Posts: 840
Strictly speaking, it was the 88K that was the next step for Motorola. And I don't blame them for doing neither that nor PPC - as long as it looked like clockspeeds were going up and up you were ahead with the 'R'.
What I'm not so impressed by is to drop the 68K line, or the Coldfire replacements they did.

I've been trying to keep up with modern CPU design wisdom, and the contortions going on behind the scenes in a modern one are frightening. So much so that I think Itanium/VLIW was right in principle; it was more the particular implementation and the failure to have compilers back it up that made everyone shun them.
Which is why I am keeping such a keen eye on what the people behind the "Mill" are doing. It feels like a CPU designed by compiler writers - when something is difficult, they add an architecture feature to help them succeed. They really believe you can do massive amounts of operations at once (unlike conventional wisdom, which says it maxes out around 5), and they have done lots of work to shrink opcode size and to expose inner parts of the architecture to the software side.
NorthWay is offline  
Old 14 October 2019, 01:06   #12
turrican3
Moon 1969 = amiga 1985
 
turrican3's Avatar
 
Join Date: Apr 2007
Location: belgium
Age: 48
Posts: 3,913
Quote:
Originally Posted by Bruce Abbott View Post
They did continue 68k, with Coldfire. The problem was they couldn't make the 68k powerful enough to compete with Intel.
Could you explain why?
Or maybe someone else knows why?
What did x86 have that the 68k didn't, that let it continue down the road?
turrican3 is offline  
Old 14 October 2019, 08:54   #13
meynaf
son of 68k
 
meynaf's Avatar
 
Join Date: Nov 2007
Location: Lyon / France
Age: 51
Posts: 5,323
Quote:
Originally Posted by Bruce Abbott View Post
So in the end Motorola switching to PPC and Commodore going bankrupt was a good thing for retro Amiga fans, because it froze the design when it was at its best.
An often overlooked point.


Quote:
Originally Posted by AmigaHope View Post
Imagine what could have been done to make it faster with the might of a big company like Intel behind it, the way they did with the far-less-elegant x86 instruction set.
Horrors, maybe? x86 wasn't nice to start with, but today it is a complete mess. Fortunately the 68k didn't get all the garbage they added over the years (something like >1300 instructions overall?).


Quote:
Originally Posted by NorthWay View Post
I've been trying to keep up with modern CPU design wisdom, and the contortions going on behind the scenes in a modern one are frightening.
It may be frightening, but at least it works. Intel CPUs may be internal horrors, but in performance they are awesome.


Quote:
Originally Posted by NorthWay View Post
So much so that I think Itanium/VLIW was right in principle; it was more the particular implementation and the failure to have compilers back it up that made everyone shun them.
But it wasn't right. And the failure to have compilers back it up was just a natural consequence.


Quote:
Originally Posted by NorthWay View Post
Which is why I am keeping such a keen eye on what the people behind the "Mill" are doing. It feels like a CPU designed by compiler writers - when something is difficult, they add an architecture feature to help them succeed.
Splitting instruction streams isn't exactly a way to make code more readable, and readable code is the key to efficient programs. To me it doesn't feel at all like a CPU designed by compiler writers (which could have been a good idea indeed).


Quote:
Originally Posted by NorthWay View Post
They really believe you can do massive amounts of operations at once (unlike conventional wisdom, which says it maxes out around 5),
Doing massive amounts of operations at once is already possible. You can do that with a GPU or with many cores.
Yes, for a single-core general-purpose CPU it maxes out around 5 (I would have said 4), and that number is a maximum not always reached because of dependencies - so more wouldn't make any sense.
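To illustrate the dependency limit, a minimal C sketch (purely hypothetical, simplified): the first loop is one long dependency chain, so issue width barely matters, while the second keeps four independent chains in flight.

Code:
#include <stddef.h>

/* One serial chain: every add needs the previous sum, so even a very
 * wide core can only complete about one add per add-latency. */
double sum_serial(const double *a, size_t n)
{
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Four independent chains: a wide out-of-order core can keep four adds
 * in flight, because the partial sums only meet at the very end.
 * (Reassociating floating-point sums can change rounding slightly.) */
double sum_unrolled(const double *a, size_t n)
{
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    for (; i < n; i++)   /* leftover elements */
        s0 += a[i];
    return (s0 + s1) + (s2 + s3);
}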


Quote:
Originally Posted by NorthWay View Post
and they have done lots of work to shrink opcode size
Are you sure they are really doing that? It's the opposite of VLIW.


Quote:
Originally Posted by NorthWay View Post
and to expose inner parts of the architecture to the software side.
That's THE mistake CPU designers shouldn't make. Exposing the implementation to the software side is bound to fail because implementations change all the time.


Quote:
Originally Posted by turrican3 View Post
What did x86 have that the 68k didn't, that let it continue down the road?
Massive amounts of money.
meynaf is offline  
Old 15 October 2019, 23:27   #14
NorthWay
Registered User
 
Join Date: May 2013
Location: Grimstad / Norway
Posts: 840
Quote:
Originally Posted by meynaf View Post
It may be frightening, but at least it works. Intel CPUs may be internal horrors, but in performance they are awesome.
I was thinking of modern out-of-order CPUs in general. The additional logic to keep hundreds of instructions in flight at any one time takes so many transistors, so much heat, and so much complexity that it dwarfs much of the rest of the logic (cache etc. notwithstanding).

Quote:
Originally Posted by meynaf View Post
But it wasn't right. And the failure to have compilers back it up was just a natural consequence.
Well, the failure might just as well have been because the hw designers made their new toy and kicked it over to the sw side with a "make this work", and as such it was a natural consequence. There have been many opinions on the Itanium - of which a number point to the 'design by committee' nature of it - but I'm not sure it is _fundamentally_ flawed and can't be done right. But I also don't expect to find out, with the possible exception of the Mill.


Quote:
Originally Posted by meynaf View Post
Splitting instruction streams isn't exactly a way to make code more readable, and readable code is the key to efficient programs. To me it doesn't feel at all like a CPU designed by compiler writers (which could have been a good idea indeed).
Compiler writers don't care much about any "beauty" of the instruction set; they want anything that makes their life easier.

Quote:
Originally Posted by meynaf View Post
Doing massive amounts of operations at once is already possible. You can do that with a GPU or with many cores.
Yes, for a single-core general-purpose CPU it maxes out around 5 (I would have said 4), and that number is a maximum not always reached because of dependencies - so more wouldn't make any sense.
Well, as I said, the Mill designers heavily oppose this thinking and aim to up the parallelism a lot on regular code.

Quote:
Originally Posted by meynaf View Post
Are you sure they are really doing that? It's the opposite of VLIW.
Yes. The size of the individual opcodes themselves, but not the parallel "packet" they are delivered in. Their worry is that to be able to do so many opcodes in parallel, they need smaller opcodes so they can feed and decode more of them without blowing cache/line sizes.

Quote:
Originally Posted by meynaf View Post
That's THE mistake CPU designers shouldn't make. Exposing the implementation to the software side is bound to fail because implementations change all the time.
Well, they force you to use an intermediate Mill instruction set that is then finalized for the Mill type you are running it on. That is IMO the weakest link in their plans - not necessarily for technical reasons but for getting mental buy-in from their target audience.
But I do agree with their thinking; if you want to replace the extra complexity of modern cpus with logic that does more, then you need to expose internals to the sw side and you need to come up with new ideas to help this along. But I'll say Mill is brave to think they can do this.


(Yes, I like the Mill architecture a lot. It isn't any single (or few) thing(s), and therefore not something you can gift to whatever other cpu family, but a long long line of interconnected and coherent ideas that feel fresh and exciting. I _do_ worry about what clock speed they can achieve though.)
NorthWay is offline  
Old 16 October 2019, 02:47   #15
AmigaHope
Registered User
 
Join Date: Sep 2006
Location: New Sandusky
Posts: 942
Eh I still find 68k the best in terms of code density *AND* readability, while still being compiler-friendly. Since it's cleaner than x86, the CISC -> microop pipeline in a theoretical modern 68k CPU could be a lot cleaner and smaller, yet still provide a lot of the advantages that x86 code has today.

Of course now that we're reaching the true hard limits of silicon, we might see a future where a new process (either electronic with a new chemistry, or maybe photonic?) provides 10X-100X the clock rate but with fewer transistors, and RISC might become the future again.
AmigaHope is offline  
Old 16 October 2019, 10:17   #16
meynaf
son of 68k
 
meynaf's Avatar
 
Join Date: Nov 2007
Location: Lyon / France
Age: 51
Posts: 5,323
Quote:
Originally Posted by NorthWay View Post
I was thinking of modern out-of-order CPUs in general. The additional logic to keep hundreds of instructions in flight at any one time takes so many transistors, so much heat, and so much complexity that it dwarfs much of the rest of the logic (cache etc. notwithstanding).
Yes, but does the complexity of the OoO part drop significantly when you trim the instruction set? If I remember correctly, it does not depend that much on the ISA.


Quote:
Originally Posted by NorthWay View Post
Well, the failure might just as well have been because the hw designers made their new toy and kicked it over to the sw side with a "make this work", and as such it was a natural consequence. There have been many opinions on the Itanium - of which a number point to the 'design by committee' nature of it - but I'm not sure it is _fundamentally_ flawed and can't be done right. But I also don't expect to find out, with the possible exception of the Mill.
I think that if it is developer-hostile, it is fundamentally flawed.


Quote:
Originally Posted by NorthWay View Post
Compiler writers don't care much about any "beauty" of the instruction set; they want anything that makes their life easier.
But it comes to the same thing in the end; what makes the beauty here is the efficiency...


Quote:
Originally Posted by NorthWay View Post
Well, as I said, the Mill designers heavily oppose this thinking and aim to up the parallelism a lot on regular code.
Then they haven't studied regular code...


Quote:
Originally Posted by NorthWay View Post
Yes. The size of the individual opcodes themselves, but not the parallel "packet" they are delivered in. Their worry is that to be able to do so many opcodes in parallel, they need smaller opcodes so they can feed and decode more of them without blowing cache/line sizes.
I still think they would be better off using the same opcode on different data, rather than many opcodes at once.


Quote:
Originally Posted by NorthWay View Post
Well, they force you to use an intermediate Mill instruction set that is then finalized for the Mill type you are running it on. That is IMO the weakest link in their plans - not necessarily for technical reasons but for getting mental buy-in from their target audience.
But I do agree with their thinking; if you want to replace the extra complexity of modern cpus with logic that does more, then you need to expose internals to the sw side and you need to come up with new ideas to help this along. But I'll say Mill is brave to think they can do this.

(Yes, I like the Mill architecture a lot. It isn't any single (or few) thing(s), and therefore not something you can gift to whatever other cpu family, but a long long line of interconnected and coherent ideas that feel fresh and exciting. I _do_ worry about what clock speed they can achieve though.)
Well, I admit my point of view is exactly the opposite. As an asm programmer, I would *not* want to code on that...

I fear this is again theoretical power on paper.

They want to execute many things in parallel, but how many tasks are CPU-bound today?
They may just implement it, and then discover the memory interface can't keep up and their performance level is not significantly better than the others'.

Or find out most of the instructions they have in their blocks are NOPs due to inter-instruction dependencies.
If the Mill is an in-order CPU, it will be beaten by OoO on regular code regardless of how many things it can do at once, as it WILL have to wait for intermediate results to arrive.


Quote:
Originally Posted by AmigaHope View Post
Eh I still find 68k the best in terms of code density *AND* readability, while still being compiler-friendly. Since it's cleaner than x86, the CISC -> microop pipeline in a theoretical modern 68k CPU could be a lot cleaner and smaller, yet still provide a lot of the advantages that x86 code has today.
It would actually beat the crap out of x86 in many places, but it's not going to happen.


Quote:
Originally Posted by AmigaHope View Post
Of course now that we're reaching the true hard limits of silicon, we might see a future where a new process (either electronic with a new chemistry, or maybe photonic?) provides 10X-100X the clock rate but with fewer transistors, and RISC might become the future again.
The current limit on clock rate is more the generated heat (which does not scale linearly with speed but actually worse, IIRC) than the speed of individual transistors.
I don't see how this could become different in the future.
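For reference, the usual first-order model behind that claim (a textbook approximation, not something specific to any particular chip): dynamic power scales with voltage squared times frequency, and since supply voltage has to rise roughly with frequency, power grows far faster than clock speed.

\[
P_{\mathrm{dyn}} \approx \alpha \, C \, V^{2} f,
\qquad V \propto f \;\Rightarrow\; P_{\mathrm{dyn}} \sim f^{3}
\]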
meynaf is offline  
Old 16 October 2019, 17:27   #17
AmigaHope
Registered User
 
Join Date: Sep 2006
Location: New Sandusky
Posts: 942
Quote:
Originally Posted by meynaf View Post
The current limit on clock rate is more the generated heat (which does not scale linearly with speed but actually worse, IIRC) than the speed of individual transistors.
I don't see how this could become different in the future.
Hence any advancement needing a different chemistry -- it could potentially generate less heat when moving electrons, or even be a photonic rather than an electronic system.

Let's say we found a workable tech that could clock to 100 GHz with minimal heat generation, but with the caveat that gate density was nowhere near as good as our current silicon designs. It might require a much simpler architecture, losing a lot of the advancements we've made in instruction decoding. Code might have to go to a fully orthogonal RISC instruction set with a very limited set of instructions. Removing other features might cause IPC to go down considerably, but since the damn thing is running at 100 GHz it's still many times faster than what we have today.
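As a rough back-of-envelope check on that (the numbers are purely illustrative, not measurements): single-thread throughput is roughly instructions per cycle times clock, so

\[
4\ \text{IPC} \times 5\ \text{GHz} \approx 20\ \text{G instr/s}
\quad\text{vs}\quad
0.5\ \text{IPC} \times 100\ \text{GHz} \approx 50\ \text{G instr/s},
\]

so even a large IPC loss could still come out ahead at such clock speeds.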
AmigaHope is offline  
Old 16 October 2019, 17:34   #18
meynaf
son of 68k
 
meynaf's Avatar
 
Join Date: Nov 2007
Location: Lyon / France
Age: 51
Posts: 5,323
Quote:
Originally Posted by AmigaHope View Post
Let's say we found a workable tech that could clock to 100 GHz with minimal heat generation, but with the caveat that gate density was nowhere near as good as our current silicon designs. It might require a much simpler architecture, losing a lot of the advancements we've made in instruction decoding. Code might have to go to a fully orthogonal RISC instruction set with a very limited set of instructions. Removing other features might cause IPC to go down considerably, but since the damn thing is running at 100 GHz it's still many times faster than what we have today.
I don't see the point in having a design that will spend all its time waiting on the memory interface. Because if you don't have space for a good enough decoder, you also don't have space for a decent on-chip cache.
meynaf is offline  
Old 17 October 2019, 00:58   #19
turrican3
Moon 1969 = amiga 1985
 
turrican3's Avatar
 
Join Date: Apr 2007
Location: belgium
Age: 48
Posts: 3,913
Quote:
Originally Posted by meynaf View Post

Massive amounts of money.
If you had $1,000,000,000, would making a new 68k-compatible Amiga with the power of a new Intel i7 be a good goal for you?
Do you have an idea of which would be better, a powerful 68k or a Core i7?
Could the present have been better, in your point of view?
All of this is theoretical, of course.
turrican3 is offline  
Old 17 October 2019, 09:04   #20
Hewitson
Registered User
 
Hewitson's Avatar
 
Join Date: Feb 2007
Location: Melbourne, Australia
Age: 41
Posts: 3,773
Quote:
Originally Posted by turrican3 View Post
If you had $1,000,000,000, would making a new 68k-compatible Amiga with the power of a new Intel i7 be a good goal for you?
Do you have an idea of which would be better, a powerful 68k or a Core i7?
Could the present have been better, in your point of view?
All of this is theoretical, of course.
The only way that's happening is if AmigaOS is ported to x86/x64 with a 68k emulation layer. Something that should have been done years ago, really. But no, a bunch of complete morons decided PPC was the way to go.
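For a sense of what the very simplest 68k emulation layer looks like, here is a toy fetch-decode-execute loop in C (entirely hypothetical and nowhere near a real emulator, which needs the full instruction set, condition codes, exceptions, a memory map and usually JIT translation; the three opcodes shown are real 68000 encodings):

Code:
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t d[8];   /* data registers D0-D7 */
    uint32_t pc;     /* program counter      */
} Cpu68k;

/* Fetch one big-endian 16-bit word and advance the PC. */
static uint16_t fetch16(const uint8_t *mem, Cpu68k *cpu)
{
    uint16_t w = (uint16_t)((mem[cpu->pc] << 8) | mem[cpu->pc + 1]);
    cpu->pc += 2;
    return w;
}

static void run(const uint8_t *mem, Cpu68k *cpu)
{
    for (;;) {
        uint16_t op = fetch16(mem, cpu);

        if (op == 0x4E71) {                      /* NOP             */
            continue;
        } else if (op == 0x4E75) {               /* RTS - stop here */
            return;
        } else if ((op & 0xF100) == 0x7000) {    /* MOVEQ #imm8,Dn  */
            int reg = (op >> 9) & 7;
            cpu->d[reg] = (uint32_t)(int32_t)(int8_t)(op & 0xFF);
        } else {
            printf("unimplemented opcode %04X\n", op);
            return;
        }
    }
}

int main(void)
{
    /* NOP; MOVEQ #42,D0; RTS */
    uint8_t prog[] = { 0x4E, 0x71, 0x70, 0x2A, 0x4E, 0x75 };
    Cpu68k cpu = { {0}, 0 };

    run(prog, &cpu);
    printf("D0 = %u\n", (unsigned)cpu.d[0]);   /* prints 42 */
    return 0;
}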
Hewitson is offline  
 

