Error from Motorola? Switching 68k to PPC
Hi guys,
what do you think about Motorola continuing the 68k CPUs into the 90's instead of switching to PPC? I think the switch hurt the Amiga and the Mac... both went bankrupt or near bankrupt (Apple was saved by Bill Gates). Would it have been possible to continue with compatible 68000 chips, like the PC did with x86, instead of switching to PPC? From a technical point of view, was it possible to keep extending 68k compatibles the way x86 was? |
Sure it would have been possible. My guess is, the volume just wasn't there to make it economically feasible though.
The desktop world went x86, the embedded world went ARM... |
Something Something Hindsight.
|
I'd have preferred to see the 68k series of CPUs achieve faster speeds and more capabilities, but this had absolutely NOTHING to do with Commodore going bankrupt.
Saved by Bill Gates? More like killed by Bill Gates. |
|
I think the Amiga was more hurt by Commodore's frankly ludicrous business decisions than Motorola's lack of movement from 68K to PPC.
Plus the PowerPC processor wasn't introduced till 1992, which is only two years before Commodore finally folded. |
I don't think it was the only reason, but it didn't help. And it didn't help when the Amiga was bought, because the new owners had to design new hardware and port AmigaOS to PPC, which added costs. We can't remake history today, but you know, if... if... then perhaps... who knows what could have happened?

PS: I would be happy to see a new platform with new hardware, something fresh with different games, something like the Amiga even under a different name... I have the impression that every year is the same, no more surprises. Our times lack the kind of surprises the Amiga and the Atari ST were back in the day. For me, we've been following a boring road for some years now. It's just my feeling; I suppose others feel differently. |
They could have reduced the instruction set (like the ColdFire) if that was an issue, instead of making up this PPC RISC nonsense that no one except a compiler is able to program properly, and breaking binary compatibility while it was still slow to emulate a 68040 on a PPC. A stupid move when everyone loved the 68k series...
The x86 instruction set has been compatible since 1980 or earlier, and compilers AND humans can generate good & fast code for it. That's probably what makes its strength. |
Quote:
But by that time it wouldn't have mattered what Motorola did - bar making Pentium clones. The PC was it, and anything not 100% compatible was bound to lose in the marketplace. So the real reason Motorola failed was that they didn't have the 68000 ready in time for the PC in 1980. After that mistake nothing could save them. However, if it wasn't for that the Amiga might never have happened! Or if they had managed to compete (and Commodore survived) the Amiga would have become a virtual PC clone, with all the attendant downsides. So in the end Motorola switching to PPC and Commodore going bankrupt was a good thing for retro Amiga fans, because it froze the design when it was at its best. |
The great irony of RISC was that what made it easy to implement in limited silicon (simplicity) became a drawback in an era of plentiful transistors but long pipelines and stalled clock increases. In the end, CISC acts as a form of real-time code compression. Sure, you could do a better compression scheme data-wise, but CISC has the benefit of keeping it syntactically logical.
68k is by far the most sensible instruction set I've ever seen for a processor. Imagine what could have been done to make it faster with the might of a big company like Intel, like they did with the way-less-elegant x86 instruction set. |
Strictly speaking, it was the 88K that was the next step for Motorola. And I don't blame them for doing neither that nor PPC - as long as it looked like clock speeds were going up and up, you were ahead with the 'R'.
What I'm not so impressed by is dropping the 68K line, or the ColdFire replacements they did. I've been trying to keep up with modern CPU design wisdom, and the contortions going on behind the scenes in a modern one are frightening. So much so that I think Itanium/VLIW was right in principle; it was more the particular implementation, and the failure to have compilers back it up, that has made everyone shun them. Which is why I am so keenly keeping an eye on what the people behind the "Mill" are doing. It feels like a CPU designed by compiler writers - when something is difficult, they get an architecture feature to help them succeed. They really believe you can do massive amounts of operations at once (unlike conventional wisdom, which says it maxes out around 5), and have done lots of work to shrink opcode size and to expose inner parts of the architecture to the software side. |
Quote:
Or does someone else know why? What is specific to x86 that the 68k didn't have, to continue down the road? |
Yes, for a single-core general-purpose CPU it maxes out around 5 - I would have said 4 - and this number is a maximum that isn't always reached because of dependencies, so more wouldn't make any sense.
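That dependency argument can be made concrete: the ILP you can extract from a block of code is bounded by the instruction count divided by the length of the longest dependency chain through it. A minimal sketch (the pseudo-ops and register names here are made up for illustration, not any real ISA):

```python
# Upper bound on ILP = number of ops / length of the longest dependency chain.
# Each op is a tuple (dest, src, src, ...); an op depends on any earlier op
# that produced one of its source values.

def ilp_bound(ops):
    depth = {}       # result name -> depth of its dependency chain
    longest = 0
    for dest, *srcs in ops:
        d = 1 + max((depth.get(s, 0) for s in srcs), default=0)
        depth[dest] = d
        longest = max(longest, d)
    return len(ops) / longest

# A serial chain: every op consumes the previous result -> ILP bound of 1.0
chain = [("a", "x"), ("b", "a"), ("c", "b"), ("d", "c")]

# Four fully independent ops -> ILP bound equals the op count, 4.0
wide = [("a", "x"), ("b", "y"), ("c", "z"), ("d", "w")]

print(ilp_bound(chain))  # 1.0
print(ilp_bound(wide))   # 4.0
```

Real code sits between these extremes, which is why the practical average lands around 4-5 no matter how wide you build the machine.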
|
But I do agree with their thinking: if you want to replace the extra complexity of modern CPUs with logic that does more, then you need to expose internals to the software side, and you need to come up with new ideas to help this along. But I'll say the Mill team is brave to think they can do this. (Yes, I like the Mill architecture a lot. It isn't any single (or few) thing(s), and therefore not something you can gift to some other CPU family, but a long, long line of interconnected and coherent ideas that feel fresh and exciting. I _do_ worry about what clock speed they can achieve, though.) |
Eh, I still find 68k the best in terms of code density *AND* readability, while still being compiler-friendly. Since it's cleaner than x86, the CISC -> micro-op pipeline in a theoretical modern 68k CPU could be a lot cleaner and smaller, yet still provide a lot of the advantages x86 code has today.
Of course, now that we're reaching the true hard limits of silicon, we might see a future where a new process (either electronic with a new chemistry, or maybe photonic?) provides 10X-100X the clock rate but with fewer transistors, and RISC might become the future again. |
I fear this is again theoretical power-on-paper. They want to execute many things in parallel, but how many tasks are CPU-bound today? They may just implement it and then discover the memory interface can't keep up, and their performance is not significantly better than the others'. Or find out that most of the instructions in their blocks are NOPs due to inter-instruction dependencies. If the Mill is an in-order CPU, it will be beaten by OoO on regular code regardless of how many things it can do at once, as it WILL have to wait for intermediate results to come.
I don't see how this could become different in the future. |
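The in-order-vs-OoO point can be shown with a toy cycle count. In the sketch below (all numbers and op names are made up for illustration), a single-issue in-order core stalls every later instruction behind a slow load, while an idealized out-of-order core (unbounded issue width, only true data dependencies respected) lets the independent ops run during the load's latency:

```python
# Toy latency model. Ops are (dest, latency, src, src, ...).

def inorder_cycles(ops):
    """Single-issue, in-order: an op cannot issue until its sources are
    ready, and no later op can issue before it."""
    ready, t = {}, 0
    for dest, lat, *srcs in ops:
        start = max([t] + [ready.get(s, 0) for s in srcs])  # stall here
        ready[dest] = start + lat
        t = start + 1
    return max(ready.values())

def ooo_cycles(ops):
    """Idealized dataflow/OoO: an op starts as soon as its sources are
    ready, regardless of program order (issue width ignored)."""
    ready = {}
    for dest, lat, *srcs in ops:
        start = max([ready.get(s, 0) for s in srcs] + [0])
        ready[dest] = start + lat
    return max(ready.values())

# A 10-cycle load, one dependent op, and three independent single-cycle ops.
prog = [("x", 10), ("y", 1, "x"), ("a", 1), ("b", 1), ("c", 1)]
print(inorder_cycles(prog))  # 14: a, b, c queue up behind the load's user
print(ooo_cycles(prog))      # 11: a, b, c overlap with the load
```

It's a deliberately crude model (no issue-width limit on the OoO side, no caches), but it captures why waiting in program order for intermediate results costs cycles that OoO recovers.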
Let's say we found a workable tech that could clock to 100 GHz with minimal heat generation, but with the caveat that gate density is nowhere near as good as our current silicon designs. It might require a much simpler architecture, losing a lot of the advances we've made in instruction decoding. Code might have to go to a fully orthogonal RISC instruction set with a very limited set of instructions. Removing other features might cause IPC to go down considerably, but since the damn thing is running at 100 GHz, it's still many times faster than what we have today. |
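The arithmetic behind that claim is simple: sustained throughput is roughly IPC times clock. A back-of-envelope check with purely illustrative numbers (the IPC and clock figures below are assumptions, not measurements of any real chip):

```python
# Instructions per second = IPC * clock frequency.
# All figures below are illustrative assumptions for the thought experiment.

def throughput(ipc, clock_hz):
    return ipc * clock_hz

modern = throughput(ipc=4.0, clock_hz=5e9)    # wide OoO core at 5 GHz
simple = throughput(ipc=0.5, clock_hz=100e9)  # stripped-down core at 100 GHz

print(simple / modern)  # 2.5 -> faster despite an 8x worse IPC
```

So even an 8x drop in IPC still comes out ahead at 100 GHz, though in practice a leaner RISC encoding would also inflate the instruction count, eating into that margin.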
Do you have an idea of what would be better between a powerful 68k and a Core i7? Could the present have been better, in your point of view? All of this is theoretical. |