English Amiga Board

Old 10 June 2017, 18:42   #41
babsimov
Registered User
 
Join Date: Jun 2017
Location: France
Posts: 40
Quote:
Originally Posted by Gorf View Post
@babsimov

Don't worry: your post is still kind of short compared to mine (*sorry*)

It is nice to see all the similarities! OK - the differences are interesting too.
So I was the third person to suggest the name A700 in an alternative timeline. :-)
I think the A700 is the best way to avoid confusion with the 600 (and by the way, I don't like the 600; I think it's one of Commodore's big mistakes).

About the similarities, I agree. I think it shows there were common things all Amiga users wanted at the time, and if Commodore had delivered them to the community, the Amiga would still be here today.

Quote:
all in all your timeline is more "ambitious" than mine. I would have liked to bring new chipsets and other innovations earlier, but I fear this might be too unrealistic.
So I tried to use only technology that was already there at this point of time and not too expensive to produce...
I like your timeline. I don't know if mine is too "ambitious", but it's a dream timeline, so maybe I made it too "unrealistic" (and I do think it's unrealistic; that's why I'd like a tycoon game to try it out in "reality").

Quote:
I also like these low price tags on your products - but there I also fear it would be impossible to produce them cheaply enough - probably even my prices are too low...
Yes, I think my price tags are too dreamy to happen. But even if the "real hardware" had been more expensive, I think no other computer of the time could really have competed with it. If Commodore had been seriously managed by a competent administration, I think the Amiga would have become the main personal computer (especially for its early multimedia capabilities, which no other computer could match).


By the way, have you read this thread and the documents about Hombre? If I had had them when I wrote my alternate timeline, I might have changed some things here and there. Maybe it will be helpful to you.

http://eab.abime.net/showthread.php?t=87342

Last edited by babsimov; 10 June 2017 at 20:00.
babsimov is offline  
Old 10 June 2017, 20:43   #42
Gorf
Registered User
 
Gorf's Avatar
 
Join Date: May 2017
Location: Munich/Bavaria
Posts: 2,294
Quote:
By the way, have you read this thread and the documents about Hombre? If I had had them when I wrote my alternate timeline, I might have changed some things here and there. Maybe it will be helpful to you.

Yes I did read it.

The dual-line-buffer feature with zooming is inspired by this. It was done in other projects before Hombre as well, so it felt ok to include this feature in my AAA+ chipset.

Some of Hombre's ideas will also end up in my 3D module, but even there it is not really Hombre-specific ... only the shader and mapper.

In my timeline there will be no Hombre, since it is not really an Amiga for me.
Gorf is offline  
Old 10 June 2017, 21:03   #43
babsimov
Registered User
 
Join Date: Jun 2017
Location: France
Posts: 40
Quote:
Originally Posted by Gorf View Post
Yes I did read it.

The dual-line-buffer feature with zooming is inspired by this. It was done in other projects before Hombre as well, so it felt ok to include this feature in my AAA+ chipset.

Some of Hombre's ideas will also end up in my 3D module, but even there it is not really Hombre-specific ... only the shader and mapper.

In my timeline there will be no Hombre, since it is not really an Amiga for me.
I'm not a technician, so I miss some of the technical things in your AAA+ specs.

About Hombre: at the time (May 1994), when I first read the specs in a magazine (the same month another magazine gave us the specs for AAA), I was very happy with Hombre.

But when they talked about Windows NT as the operating system... what can I say... it's not an Amiga.

Later that year, Commodore UK tried to buy Commodore. They said Hombre would be the next-generation Amiga, with an official AmigaOS port (with improvements). To be honest, the choice of PA-RISC instead of PPC made me doubt: Apple had followed Motorola, and I thought PA-RISC was perhaps not the best choice. But why not? Commodore's engineers knew what they were doing.

Now, with hindsight, I think Hombre would have been the best thing that could have happened to the Amiga, moving it into the 21st century ahead of time. The engineers talked about affordable dual-processor machines (or dual as standard on the high end), years before dual-core processors. At the time the specification of the BeBox impressed me; Jean-Louis Gassée said it was what the Amiga would have become if Commodore had not disappeared. With Hombre, it seems he was telling the truth. So for me it's really AAA that should never have been started: just do AA+ (an improved AGA) and release Hombre in 1992.

For me: OCS in 1985, ECS in 1986, AGA in 1987 (high end only, a full 32-bit Amiga, just before Apple released the Mac II), AA+ in 1989/90, and Hombre in 1992.
babsimov is offline  
Old 10 June 2017, 21:58   #44
Gorf
Registered User
 
Gorf's Avatar
 
Join Date: May 2017
Location: Munich/Bavaria
Posts: 2,294
Quote:
Originally Posted by babsimov View Post
I'm not a technician, so I miss some of the technical things in your AAA+ specs.
Well I am of course still very vague in the description, but I think I have a pretty good plan and more detailed specifications in mind. Just ask me!

Quote:
For me: OCS in 1985, ECS in 1986, AGA in 1987 (high end only, a full 32-bit Amiga, just before Apple released the Mac II), AA+ in 1989/90, and Hombre in 1992.
AA+, something more than AGA, would have been too expensive in 1989. You always have to consider the transistor-count and the technology that is available at this specific time.
Jay Miner even developed the "Ranger" prototype for Commodore - a new chipset taking advantage of VRAM ... but it would have been incompatible and high end.

Last edited by Gorf; 10 June 2017 at 22:11.
Gorf is offline  
Old 11 June 2017, 08:55   #45
michaelz
Registered User
 
Join Date: Jan 2017
Location: Den Haag / Netherlands
Posts: 193
What if the A2000 had been introduced as an A1100 (a step up from the A1000) and the A2000 had been a Ranger-based high-end model? CBM would then have had the low-end A500, a mid-range A1100 and the high-end A2000. Ranger could later have dropped down into a mid-range A1200 (not the actual one, but a tower) and a low-end A600 (a model like the actual A500+, with an HD and PCMCIA).
michaelz is offline  
Old 11 June 2017, 11:11   #46
babsimov
Registered User
 
Join Date: Jun 2017
Location: France
Posts: 40
Quote:
Originally Posted by Gorf View Post
Well I am of course still very vague in the description, but I think I have a pretty good plan and more detailed specifications in mind. Just ask me!
I'm not a technician, so I don't know if I'd fully understand the specs if they are too technical. But if you have something more detailed, I'd like to read it; if you have time, send me a mail here.

Quote:
AA+, something more than AGA, would have been too expensive in 1989. You always have to consider the transistor-count and the technology that is available at this specific time.
Jay Miner even developed the "Ranger" prototype for Commodore - a new chipset taking advantage of VRAM ... but it would have been incompatible and high end.
But it is well known that AAA was very expensive to produce for a 1990 release. And AA+ has half as many transistors as AAA, so I think it's a better option: high end for 1989, mid-range for 1990, entry level for 1991.

By the way, the AGA machine for 1987 has a price tag a little under the Mac II; it's a high-end computer for the time, fully 32-bit, and with an RTG subsystem available at release for the professional market.

The mid-range is ECS and the low end is OCS; they remain until 1990, when Commodore switches to AA+.

In hindsight, I more and more think that AAA, as great as it was for the time, is more an engineer's dream than a financially competitive chipset for every market. Maybe I'm wrong.

About Ranger, I'm not sure it's really a chipset; it's not clear from the latest news I've read about it. I think it's more a code name for the "A2000"-like computer from the original team, but that computer, as I understand it, is based only on OCS, maybe a slightly revised OCS.

But at the time, in a Jay Miner interview, he said he had designed and fully tested the next chipset, with high resolution and VRAM. So I don't know what to think.

The specs I have found for the Ranger chipset seem too little to me for a "next generation" chipset. VRAM, OK, but very expensive. Only 128 colors, at a time when 256 colors were starting to be the must-have. Only a 4096-color palette, in the era of 262,144-color palettes. VGA is better in these respects, for me. For sound it stays the same. It's OK for a 1988/89 release, but as I see it, the whole chipset needs a really significant upgrade to be up to date for the '90s.

For me, a chipset needs to be good for 4 to 5 years at release. AAA would be OK for 5 years, but too expensive, as I said. AA+ can be OK for 3 to 4 years.

A nice feature carried over from AAA to Hombre 2 (if I remember correctly) is the dual-chipset configuration. AAA had SLI far earlier than Nvidia (or 3dfx, not sure).
babsimov is offline  
Old 11 June 2017, 14:04   #47
michaelz
Registered User
 
Join Date: Jan 2017
Location: Den Haag / Netherlands
Posts: 193
Quote:
Originally Posted by babsimov View Post
The specs I have found for the Ranger chipset seem too little to me for a "next generation" chipset. VRAM, OK, but very expensive. Only 128 colors, at a time when 256 colors were starting to be the must-have. Only a 4096-color palette, in the era of 262,144-color palettes. VGA is better in these respects, for me. [...]

Don't forget: VGA was only 640x480 with 256 colours at the start, and most PCs had CGA or EGA at the time of its introduction, with VGA only really getting traction in the early 1990s.

In 1995 I bought a VGA computer with 800x600 in 16-bit colour; that was only just becoming standard then. Ranger could do 1024x1024 pixels in 1987, and 128 colours from a 4096-colour palette (HAM, and thus more simultaneous colours on screen?) would have been great at that time.

I think Ranger would really have given Commodore supremacy in desktop publishing, and Apple might have been obliterated at that time. Maybe Commodore could have been today's Apple in that scenario.
michaelz is offline  
Old 11 June 2017, 15:58   #48
babsimov
Registered User
 
Join Date: Jun 2017
Location: France
Posts: 40
Quote:
Originally Posted by michaelz View Post
Don't forget: VGA was only 640x480 with 256 colours at the start, and most PCs had CGA or EGA at the time of its introduction, with VGA only really getting traction in the early 1990s.

In 1995 I bought a VGA computer with 800x600 in 16-bit colour; that was only just becoming standard then. Ranger could do 1024x1024 pixels in 1987, and 128 colours from a 4096-colour palette (HAM, and thus more simultaneous colours on screen?) would have been great at that time.

I think Ranger would really have given Commodore supremacy in desktop publishing, and Apple might have been obliterated at that time. Maybe Commodore could have been today's Apple in that scenario.
I know 1024x1024 with 128 colors would have been great for 1987/88. But Ranger (if that chipset really existed as Jay Miner said - I'm still not sure of it, even today) is, as I understand it, limited to 128 colors maximum even at low resolution. Remember that 320x200 with 256 colors became the must-have for every game at the end of the '80s. I remember when PC game packaging (and that of every other platform) used the Amiga screenshots instead of the real PC screenshots, because they were the best screenshots of the time. When Amiga packaging started to be illustrated with PC VGA screenshots, I understood that Commodore needed to do something quickly if the Amiga wanted to stay on top.

So 128 colors at the standard game resolution is not enough. Sorry, I really like Jay Miner's work on the Amiga, but I don't see Ranger pushing it far enough. And it seems expensive for only 128 colors at low resolution.

I really like the AAA specs; that is what I really consider a next-generation Amiga, and it really pushes the Amiga concept in every area. But it seems expensive even for a high-end machine.

This is why I think an AGA high-end computer released in 1987 is enough to compete with the Mac/PC of the time, and an AA+ high end in 1989 can keep the Amiga competitive until 1992/93, when Hombre arrives as the next generation with an early 3D "GPU". These chipsets allow 256 colors at low resolution with only DRAM, so it is possible without using expensive VRAM.

Ranger doesn't have chunky pixels; Jay Miner himself said one of his regrets was not choosing chunky from the start for the Amiga. Wolfenstein 3D and Wing Commander are two games that pushed the PC into the game market and made people buy a PC instead of an Amiga. AA+ has chunky pixels for, as I understand it, a reasonable price. I really think an entry-level computer with AA+ in the early/mid '90s would have saved the Amiga.

By the way, about VGA: I think at the start it was only 320x200 in 256 colors and 640x400 in 16 colors. If I remember correctly, 640x400 in 256 colors came at the end of the '80s or early '90s. But games used 320x200 with 256 colors on VGA from the start, and VGA is from 1987.


The Ranger as I would have liked it to be:

1024x1024 2 to 128 colors
800x600 2 to 256 colors
640x400 2 to 512 colors
320x200 2 to 1024 colors

EDIT (after rereading my post): why not up to 4096 colors for 320x200, a "true color" resolution? HAM would still exist for compatibility and give 4096 colors at higher resolutions, or 32,000 colors at low resolution if the palette were 32,000 colors instead of the planned 4096.


And dual playfield (or more) for all these resolutions. Maybe for 320x200 we could have multiple playfields, from 16 colors up to 512 colors (2 playfields in that case), and more playfields with fewer than 512 colors each (I don't know if I'm explaining this correctly, sorry for my English).



Sound:
2 Paulas (8 channels of 8-bit audio)

Or one Paula, but instead of the 68000, use a 68456 (68000 + DSP56000 integrated):
https://en.wikipedia.org/wiki/Motorola_56000


EDIT: In my vision the Amiga was the color computer par excellence (or better, the graphics computer) and always had to be ahead of all the others on this point, not a follower as it became.

EDIT 2: I reread the AA+ spec on a French Amiga site:
http://obligement.free.fr/articles/aa+.php

About the cost of AA+ versus AAA: it seems AA+ would have been much, much cheaper to produce.

Transistor counts: a single AAA chipset is 750,000, and 1 million for a dual configuration.

AA+ is a chipset of two chips with 100,000 transistors each (200,000 for the complete chipset), less than a third of AAA's count.

Alice from AGA is 80,000 transistors on its own, so AA+ is very cheap, I think, and should have been what Commodore released in '89/'90. No need for more than what AA+ offers; AAA is just an expensive luxury, I'm more and more convinced of that.

The complete ECS chipset is 60,000 transistors.
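
Just to make those ratios explicit, here is a little toy calculation of my own (a hypothetical C snippet, using only the transistor counts quoted above):

Code:
/* A toy calculation (mine, nothing from the article) to make the
   ratios of the transistor counts quoted above explicit. */
#include <stdio.h>

int main(void)
{
    const double aaa    = 750000.0; /* single AAA chipset              */
    const double aaplus = 200000.0; /* AA+: two chips of ~100,000 each */
    const double alice  = 80000.0;  /* Alice (AGA) alone               */
    const double ecs    = 60000.0;  /* complete ECS chipset            */

    printf("AAA vs AA+  : %.2fx\n", aaa / aaplus);   /* about 3.75x */
    printf("AA+ vs Alice: %.2fx\n", aaplus / alice); /* about 2.5x  */
    printf("AA+ vs ECS  : %.2fx\n", aaplus / ecs);   /* about 3.3x  */
    return 0;
}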

@Gorf: Sorry to digress and take over your thread a little. It's just that I would really have liked the Amiga to still be here today, and I frequently dream about how to achieve that if I could "rewrite" Amiga history.

Last edited by babsimov; 11 June 2017 at 18:22.
babsimov is offline  
Old 11 June 2017, 21:31   #49
Gorf
Registered User
 
Gorf's Avatar
 
Join Date: May 2017
Location: Munich/Bavaria
Posts: 2,294
Quote:
Originally Posted by michaelz View Post
What if the A2000 had been introduced as an A1100 (a step up from the A1000) and the A2000 had been a Ranger-based high-end model? CBM would then have had the low-end A500, a mid-range A1100 and the high-end A2000. Ranger could later have dropped down into a mid-range A1200 (not the actual one, but a tower) and a low-end A600 (a model like the actual A500+, with an HD and PCMCIA).
The Ranger chipset was not compatible and was far from ready when the A2000 came out.
It also would not have made much sense to diversify the chipset at this early stage - programmers had just begun to find out what was possible on the OCS.
And there were not enough units sold yet - computers were still a rare thing.
(the best year (sold units) for the A500 was as late as 1990)
We need to establish a common ground first.

I also did not want to split up the development team and spread resources over too many projects. That's why I concentrated all efforts on the development of the AAA+ chipset, after a quick and compatible update like the ECS.

Only a more or less one-man project for the Amber chip would run in parallel:

With a few tweaks, Amber would be able to deinterlace an 800x600 SVGA screen mode (productivity Super72) or even 1024x768 XGA. How?

Amber stores the odd field (or the even one... it does not matter here) in special dual-ported RAM. This RAM was expensive, so they limited it. It is arranged as three 256Kx4 chips (3072 kbits in total - kbits, not kBytes), storing the whole 12-bit value of each pixel in one field.
But 800x300x12 is 2880 kbits - so it would still fit!
Amber was just not fast enough and ignored all screen modes over 15kHz.
We fixed that.

For higher resolutions we run out of dual-ported RAM :-/
But wait!
Storing the whole 12 bits for each pixel is only necessary for things like HAM or Copper magic. So you need this for games, of course, but not for a productivity mode:

If Amber can "remember" just four 12-bit values, we need only 2 bits per pixel.
That is a very minimalistic color-lookup-table (CLUT) - very tiny.
We do it like the Graffiti or DCTV and "teach" Amber these 4 values in the first line of our screen (thereby losing one line...). A special Morse code of alternating values makes Amber listen for the four 12-bit values we transmit. This way we also restore the whole 12-bit palette for super-hires modes!

Theoretically we can now store even QXGA (1600x1200) at 4 colors - but ECS cannot feed Amber that fast...
well, only with another trick - the same one the A2024 monitor used: stitching more than one screen together.
The total screen would then be updated at only 15Hz or even lower, but the picture would be flicker-free, with a vertical frequency of 60Hz or more.
Bad for scrolling or other fast movement, but nice for DTP, CAD and Amix.

With our slightly tweaked Amber we can now deliver great deinterlaced 800x600 @ 70 Hz and 1024x768 @ 60 Hz, and with a faster RAMDAC we even beat the NeXTstation, which had 1120x832 with only 4 shades of grey!
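
A quick sanity check of that field-buffer arithmetic (my own back-of-the-envelope sketch, assuming the 3x256Kx4 dual-ported RAMs above and that Amber only ever stores one field of the interlaced frame):

Code:
/* Hypothetical sketch: does one field of a given mode fit in Amber's
   dual-ported RAM? Numbers as assumed above, nothing official. */
#include <stdio.h>

#define FIELD_RAM_BITS (3UL * 256 * 1024 * 4)   /* 3,145,728 bits */

static void check(const char *mode, unsigned long width,
                  unsigned long field_lines, unsigned long bpp)
{
    unsigned long need = width * field_lines * bpp;
    printf("%-22s %9lu bits -> %s\n", mode, need,
           need <= FIELD_RAM_BITS ? "fits" : "does NOT fit");
}

int main(void)
{
    check("800x600  field, 12bpp", 800, 300, 12);   /* 2,880,000 - fits    */
    check("1024x768 field, 12bpp", 1024, 384, 12);  /* 4,718,592 - too big */
    check("1024x768 field, 2bpp",  1024, 384, 2);   /* 786,432   - fits    */
    check("1600x1200 field, 2bpp", 1600, 600, 2);   /* 1,920,000 - fits    */
    return 0;
}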

Last edited by Gorf; 12 June 2017 at 23:58.
Gorf is offline  
Old 11 June 2017, 22:04   #50
idrougge
Registered User
 
Join Date: Sep 2007
Location: Stockholm
Posts: 4,332
Quote:
Originally Posted by Gorf View Post
Maybe it would have made sense for Commodore to buy or join with Atari?
What are the pros and cons?
What point of time would be best?
The pro would be gaining the userbase and developer base of the ST. If you let Atari perish, you basically give away the MIDI and hard disc recording market to Apple.

Exactly how you would take over Atari's assets is another story. You could keep Atari going quite easily since you already have a factory and their R&D costs are low, especially if you refocus Atari's engineers on porting whatever can be ported to your own hardware. You have the CPU in common, so they will feel right at home and a compatibility layer for TOS may also be made, at least given some simple hardware support such as a card with MIDI ports where Cubase expects them to be. Or you just convince Steinberg to make the necessary alterations to the next release of Cubase.

It's a terrific market to be in, and if you're not in it, someone else will be.

The best point in time would probably be before the Falcon is released, to make sure you don't have to support a stillborn platform. As far as I know, the Falcon came very close to never being released, since Atari had already decided to leave the computer business in favour of consoles. Unless the bad blood between Tramiel and Commodore got in the way, an offer for Atari's computer operations could very well be accepted, since it would relieve them of that headache while giving them the necessary cash infusion for the Jaguar. Which will fail in any case.
idrougge is offline  
Old 11 June 2017, 22:25   #51
Gorf
Registered User
 
Gorf's Avatar
 
Join Date: May 2017
Location: Munich/Bavaria
Posts: 2,294
Quote:
Originally Posted by idrougge View Post
The pro would be gaining the userbase and developer base of the ST. If you let Atari perish, you basically give away the MIDI and hard disc recording market to Apple.
A fair point.
The user base is not huge, but it is considerable. And MIDI/music was an Atari stronghold back then.

Quote:
Or you just convince Steinberg to make the necessary alterations to the next release of Cubase.
Of course a native version would make sense after a while. We would certainly work together with software houses.

Quote:
The best point in time would probably be before the Falcon is released, to make sure you don't have to support a stillborn platform. As far as I know, the Falcon came very close to never being released, since Atari had already decided to leave the computer business in favour of consoles. Unless the bad blood between Tramiel and Commodore got in the way, an offer for Atari's computer operations could very well be accepted, since it would relieve them of that headache while giving them the necessary cash infusion for the Jaguar. Which will fail in any case.

We sent Irving Gould into early retirement anyway. In this whole scenario I don't see any way to rescue CBM or the Amiga with Gould still on board.
Buying out only the ST line of products sounds very reasonable under these circumstances.
But on the other hand, that very cash infusion might lead to a far better Jaguar competing with our own AmigaPS and other consoles.

Playing the "console card" in my scenario was just a good way to earn money, attract developers and spread the user base. I am not a console fan at all and don't like that idea very much, but the CD32 showed the huge potential in those years...

But maybe it would work out: the Jaguar would still be late, probably even later, because they would build more features into the Jag to rival us... so it would ship at the same time as the Sony PreyStation and the SEGA Saturn.
Both are competitors we cannot avoid anyway.
Gorf is offline  
Old 12 June 2017, 19:53   #52
babsimov
Registered User
 
Join Date: Jun 2017
Location: France
Posts: 40
Quote:
Originally Posted by Gorf View Post
Only a more or less one-man project for the Amber chip would run in parallel: with a few tweaks, Amber would be able to deinterlace an 800x600 SVGA screen mode (productivity Super72) or even 1024x768 XGA. [...] With our slightly tweaked Amber we can now deliver great deinterlaced 800x600 @ 70 Hz and 1024x768 @ 60 Hz, and with a faster RAMDAC we even beat the NeXTstation, which had 1120x832 with only 4 shades of grey!
I like your Amber strategy. If I had had enough technical knowledge, like you, I would probably have integrated something similar into my alternative story.

About buying Atari: I don't think you need that. Just add a MIDI interface as standard on every Amiga model (and of course the first one) and you get Atari's only strong market. No need to spend money on Atari.

Commodore has a sufficiently good brand image not to need Atari's.

Remember, Atari at the time was famous in the game market, but one thing the Amiga doesn't need is the label of "game machine".

That's what happened in reality: a lot of people thought the Amiga was only a game machine, and Commodore did nothing to change that at the time. So buying Atari is not a good thing, for me.
babsimov is offline  
Old 12 June 2017, 23:31   #53
Gorf
Registered User
 
Gorf's Avatar
 
Join Date: May 2017
Location: Munich/Bavaria
Posts: 2,294
In the Year 1994

The year Amiga did not go bankrupt

This will be the last year I am describing in detail.
Until now I did not see too many options for what we could have done differently. We were very constrained by the course of events, the available technology and the market. That is why I could describe what we did, and why, in as much detail as I did. In the following years there will be too many options for which direction to go, and I will only lay out one possible path. Also, ripples in time are catching up, so in the following years we can no longer take the market situation and available technology for granted.

But 1994 is of course a year of destiny, so it deserves a longer post.

By the way:
4MB of DRAM still costs $150
a single 1GB hard disk: $700
a desktop PC from Dell with a Pentium 60: at least $3000



What have we achieved so far?

We got rid of the PC business, while the name CBM was still worth something in that market.
We renamed ourselves to C.A.T (Commodore Amiga Technologie) and licensed CBM to other PC vendors.
With the money we bought InMOS and Epyx (Lynx).
We bundled all our efforts to develop and build our AAA+ NG line.
InMOS gave us crucial technology, developers and patents.
Lynx made us a game publisher.
Both were necessary for a successful launch of our new product line.

In 1994 we produce the best-selling game and multimedia console: the AmigaPS.
The worldwide market share is over 60%. Due to the low price it was not really profitable for the first 2 years, but this year we reached break-even and now money is coming in - big time! But we also know that 1995 will be much harder...

Our AAA+ Amigas reach a market share of up to 25% in some European countries, including the UK. The share is much bigger when we look at home users alone.
In the US we reach a little over 10%.

Introducing CD-ROM to all our NG models reduced piracy considerably - for developers Amiga is the most profitable platform.

The CPU: risking the RISC?

Last year the Pentium came out and it is now finally hitting the market. As usual Motorola is late, and the 68060 was not available until this year. Motorola wants to go PPC-only and the 68060 is the last of its kind...
Apple decided to jump on this bandwagon and ships the first PPC Macs.
Apple needed something new and shiny for marketing reasons, as they aren't doing so well these years.
We let them have this "win" and stick with the 68060 + DSP for now.
Why?
The PPC 601 is nowhere near as fast as marketing says. Even worse if you have to emulate the 68K for legacy software (and even parts of the OS, as Apple does).
The 603 and 604 are very expensive and not available in volume.
But we did build prototype cards, started to port AmigaOS, and will even bring out a 604 ZorroIII card next year.

Other processor options?
  • PA-RISC - nice, but it will go nowhere in terms of clock speed.
  • SPARC - was going 64-bit only; we don't need that yet, it just makes everything more expensive.
  • MIPS - also a dead end.
We still have our transputer CPU design and kept on developing it. We also hired more experts in this field and are developing our own 64-bit CPU, with some very new ideas and features. But it will probably not be ready or make sense until '98, when SDRAM becomes available.

Until then we provide multiprocessing - first with 68K and later with PPC.

New Products in 1994

AAA++:
This is not revolutionary but more an evolutionary update to our AAA+ design.
The AAA+ is now 2.5 years old, and this gave us the chance to jump to the next higher level of integration and to smaller structures, providing lower energy consumption. This gives us the opportunity to finally release an AmigaNG laptop.

We went from 14 to 28 MHz for the bus. Well ... RAM is still horribly slow!
Internally the Blitter is now 50% faster than before.
Mary can now listen: audio in.
Buster speaks PCI: actually both ZorroIII and PCI, and your Amiga will provide both.
Ramsey is now an interleaved RAM controller to speed up fastram.
Hydra now provides serial links at 112 MBit.
A5000NG:
  • AAA++ chipset
  • SCSI-2
  • Busboard with ZorroIII and PCI
  • 68060 + DSP (dual configuration also available)
  • 4MB chipram (VRAM) and 8 MB fastram
  • hdd
  • CD-ROM
  • floptical
As usual you have all these options as well for the Amiga Tower.
CPU-Boards:

To upgrade your Amiga we provide new CPU boards:
  • 68040 + DSP @ 40Mhz
  • dual 040 + dual DSP with 2nd-level-cache
  • 68060 + DSP @ 50Mhz
  • dual 060 + dual DSP with 2nd-level-cache

The AmigaNG laptop:
With the new energy-efficient AAA++ chipset, we can finally replace the old A3000 (ECS) laptop.
Sure, color LCD, battery lifetime, weight and so on are far from what we are used to now, but the market for laptops is already growing faster than the rest.
We are part of this.

The Quake module and the Quake card:
Developed together with id Software and other game developers, this module provides 3D acceleration for games and applications.
At introduction it will be bundled with id's Quake.
Like Voodoo would do two years later, our 3D chip only handles the last stages of the rendering pipeline: shading, mapping and some antialiasing.

The module for the AmigaPS is quite limited, as all textures need to be loaded via the InMOS link: no direct access to chipram or fastram.
It comes with its own 1.5 MB of RAM.
It is fast enough to render 320x256 in 24-bit YUV (where the shader only changes the 8-bit luminance) or 640x512 in 8-bit.
Linda/Monica take care of the output, so you can zoom the image and mix it with the 2D output of the Blitter.
Sprites are shown on top.

This module offers about the same 3D power the Sony PreyStation will offer later this year, but our concept is more complicated and the combination of AmigaPS and Quake module is more expensive. :-/

But since we have already sold millions of AmigaPS units, we believe owners would rather upgrade their system than switch to a new console.


The Quake ZorroIII card can be equipped with more RAM and allows direct access for the CPU/DSP.
Next year we will also provide a more powerful PCI version, also for the PC market!
Amiga VideoStation:
NewTek and MacroSystem are now subsidiaries of C.A.T.
We provide digital video processing equipment, from hobbyist gear up to full editing stations for pros.
Only Amiga!
The C64 laptop sold reasonably well ... but it's over now - we are stopping production.
Linux and BSD are now destroying the UNIX market - there will be no AmigaUX line anymore. But we actively support a BSD port.
The cooperation with Sun already ended last year.
We are stopping production of the Lynx - it served its purpose.

For some reason the A500 classic mini is still selling ...
AmigaOS:
As promised, we keep updating and improving our OS constantly.
Always with elegance, small size and speed in mind.
Our filesystem is now rock-stable and fast and supports very large disks.
A transparent compression layer is available, as well as an overlay system that allows you to "write" to your CD and comes in handy for our virtual subsystem.

Speaking of the virtual subsystem: since multiprocessor cards are now available not only as ZorroIII cards but also as main processor modules, we refined our (asymmetric) multiprocessing and provide an easy-to-use GUI to start programs in a container, move them around, start/stop containers and so on. All this with very little overhead.
(For small single-CPU systems there is no overhead at all, since we don't use it there.)

Last edited by Gorf; 19 June 2017 at 02:32.
Gorf is offline  
Old 13 June 2017, 02:18   #54
matthey
Banned
 
Join Date: Jan 2010
Location: Kansas
Posts: 1,284
Quote:
Originally Posted by Gorf View Post
In the Year 1994

[...]

The CPU: risking the RISC?

[...] We let them have this "win" and stick with the 68060 + DSP for now. [...] We still have our transputer CPU design and kept on developing it. We also hired more experts in this field and are developing our own 64-bit CPU, with some very new ideas and features. But it will probably not be ready or make sense until '98, when SDRAM becomes available.

Until then we provide multiprocessing - first with 68K and later with PPC.
You blew it by choosing the same PPC path which has been tried and failed. Hindsight is supposed to be 20/20. Surely we can now see the RISC hype for what it is. The PPC may be the best choice of these three, but is it a coincidence that these three RISC processors, with among the worst code density, are all but dead today? Code density was even more important in 1994, when transistor counts for caches, memory and HD space mattered more than they do for today's desktop CPUs (it is still very important for embedded CPUs today). Apple had trouble competing with the early PPC chips against the 68060, which ran at a lower clock speed and was more competitive than even Motorola would admit. The Pentium was able to ramp up clock speeds faster, but was it really ahead even then?

Pentium@75MHz 80502, 3.3V, 0.6um, 3.2 million transistors, 9.5W max
68060@75MHz 3.3V, 0.6um, 2.5 million transistors, ~5.5W max*
PPC 601@75MHz 3.3V, 0.6um, 2.8 million transistors, ?W max

* estimate based on 68060@50MHz 3.9W max, 68060@66MHz 4.9W max

The 68060 is 42% more energy efficient and uses 21% fewer transistors than the most comparable Pentium, while giving similar performance. The 68060 also has considerably better code density than the x86 - probably around 10% better. The 68k ISA has encoding room to add enhancements which improve code density, while adding enhancements to the x86 increases code size. The Pentium was clocked up faster due to economies of scale from 3D FPS games, but that is no technical advantage.
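
For anyone checking my arithmetic, the ~5.5W figure is just a linear extrapolation of Motorola's published 50MHz and 66MHz numbers, and the percentages come straight from the table above (a throwaway C snippet of mine, nothing more):

Code:
/* Linear extrapolation of 68060 max power to 75MHz and the resulting
   savings vs the Pentium 75 figures listed above. */
#include <stdio.h>

int main(void)
{
    /* 68060 max power: 3.9W @ 50MHz, 4.9W @ 66MHz */
    double slope   = (4.9 - 3.9) / (66.0 - 50.0);   /* watts per MHz   */
    double p060_75 = 3.9 + slope * (75.0 - 50.0);   /* ~5.46W at 75MHz */

    double p_p75 = 9.5;     /* Pentium 75 max power, watts */
    double t_p75 = 3.2e6;   /* Pentium 75 transistors      */
    double t_060 = 2.5e6;   /* 68060 transistors           */

    printf("68060 @ 75MHz (extrapolated): %.2f W\n", p060_75);
    printf("power saving vs Pentium 75:   %.1f%%\n",
           100.0 * (1.0 - p060_75 / p_p75));        /* about 42.5 */
    printf("transistor saving:            %.1f%%\n",
           100.0 * (1.0 - t_060 / t_p75));          /* about 21.9 */
    return 0;
}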

PPC was not as efficient as hoped either. The 68060 8kB ICache gave nearly the same performance as a PPC 32kB ICache. The 601 transistor count has a lot to do with the extra caches needed by the PPC to match performance with the 68060. No problem as the plan was to clock up the PPC CPUs but that didn't go as intended either. Moving CPU complexity into the compiler had limited success also.

So which CPU would you choose again? Motorola management failed to look closely at the technical differences and made a huge mistake also. You would think they would have looked at the technical data and seen the 68060's possibilities for embedded at least, since that was their bread and butter. They did end up marketing the 68060 for embedded only, where it was high end at that time, but then gutted it into the lower-end ColdFire instead of continuing it. The energy efficiency made it a good choice for a laptop at that time, even though one was never created. The 68060 was just the start of a CPU line, like the Pentium, and could have been similarly improved. I doubt it could have clocked as high as the x86, but I expect it could have had better performance per clock. Of course SMP would not be a problem (code density advantages reduce cache requirements per core, and maximum clock speeds become less of an advantage). It would also be possible to create a 64-bit 68k ISA (68k_64, like x86_64) in a separate mode. Then you could have 68k_32 with considerably better code density than Thumb 2, and 68k_64 for 64-bit addressing needs on high-end CPUs. The x86_64 ISA needed to add 8 GP registers for decent performance, but the 68k already has 16 GP registers, making a 68k_64 ISA potentially easier to design and better. I have recently been thinking about creating just such a 68k_64 ISA.

You are still looking at the small picture if you are just considering desktops, laptops and consoles. You would want to avoid the eventual decline of the desktop market using hindsight and would want to start early. You would want to allocate development resources toward embedded markets (including small electronic devices where the 68k and AmigaOS small footprint is already an advantage). You can share development costs between classic uses and embedded uses. It may have been possible to cheaply buy the design and rights to the 68060 from Motorola/Freescale/NXP as they have grossly mis-valued it. The 68060 could have been further developed in FPGA and a 68k+Amiga SoC made for further cost reductions. This brings us forward a bit but brings up another good question. Is it still possible to save the Amiga today? Let's say you started with $10 million U.S. dollars. What decisions would you make today?

Last edited by matthey; 13 June 2017 at 02:28.
matthey is offline  
Old 13 June 2017, 03:02   #55
Gorf
Registered User
 
Gorf's Avatar
 
Join Date: May 2017
Location: Munich/Bavaria
Posts: 2,294
@ matthey

Quote:
You blew it by choosing the same PPC path which has been tried and failed. Hindsight is supposed to be 20/20.
I know
That is the reason why my A5000 still has a 68060. I have my doubts that we could have licensed the 68K from Motorola and developed a 68080 fast enough, so for a short while we would take the PPC road - at a time when the 604 and the G3 were still competitive with x86.

We are developing our own CPU, but the bottleneck until 97/98 is simply the RAM. We are still talking about fast page DRAM or EDO RAM at 70ns. CPU clocks went up, but the front-side bus stayed horribly slow.

Until our baby is ready we will stick to the 2x68060 setup and for about 2 years PPC - at least the MHz numbers look nice for marketing.

The OS needs to become more hardware independent and portable anyway. So our efforts are not in vain.

Our CPU will be a totally different animal, which I will describe in one of my next posts. One feature among many will be that it provides security and protection for a single-address-space OS.

Quote:
I have recently been thinking about creating just such a 68k_64 ISA.
I would very much like to hear more about your ideas for this!

Quote:
Is it still possible to save the Amiga today? Let's say you started with $10 million U.S. dollars. What decisions would you make today?
For the hardware:
This would definitely be the Apollo-core/SAGA approach in my eyes, as a dual- or quad-core ASIC. Maybe with some ideas from my C.A.T CPU to make the OS secure without changing it into a Unix clone ... well, those are not my own ideas, but they are based on today's research.

For the OS:
Buy out AOS4 from Hyperion as well as MorphOS and merge them with AROS. And buy the rights to the old OS back from Cloanto.

Do we have enough money for all this?

Last edited by Gorf; 13 June 2017 at 21:19.
Gorf is offline  
Old 13 June 2017, 09:37   #56
matthey
Banned
 
Join Date: Jan 2010
Location: Kansas
Posts: 1,284
Quote:
Originally Posted by Gorf View Post
I know
That is the reason why my A5000 still has a 68060. I have my doubts that we could have licensed the 68K from Motorola and developed a 68080 fast enough, so for a short while we would take the PPC road - at a time when the 604 and the G3 were still competitive with x86.
From recent documentation which was released, it looks like C= was trying to license the 68k to make a single chip (SoC) Amiga. Knowing C=, they would have licensed a 68EC020 or similarly low spec 68k instead of the 68060 which had a good foundation for upgrading. It wasn't necessary to compete with x86 in performance as the 68k can go smaller and have better energy efficiency but a 68020 is just too slow in comparison to a Pentium for the desktop. The 68060 could be clocked up (rev 6 usually run 100MHz) but Motorola did not want more competition for PPC where it was having trouble increasing clock speeds. The PPC was really only successful with the efficient G3/G4 shallow pipeline design (most modern embedded PPC designs are small improvements on this) but this was not conducive to higher clock speeds. I don't know why you would want to go to PPC for a short time considering the cost of adapting the AmigaOS and compatibility lost.

Commodore did own MOS Technologies (although they did *not* keep their fabs modern) and was one of the early adopters of expensive at the time FPGA technology to design their own custom chips. They were working on adding custom features (probably custom SIMD features) to the PA-RISC which they had licensed from HP for the Hombre. They were already vertically integrated and should have had some of the expertise necessary to develop the 68k and may have known who they needed to hire. C= did a poor job of R&D to product, marketing/licensing and managing in general which may have sabotaged any such attempt. A CISC design is more complex than RISC although the 68k should be cleaner and easier to develop than the x86. C= could have done an acquisition of perhaps an AMD or Cyrix if they couldn't hire enough CPU expertise.

Quote:
Originally Posted by Gorf View Post
We are developing our own CPU, but the bottleneck until 97/98 is simply the RAM. We are still talking about DRAM or EDO RAM at 70ns. CPU clocks went up, but the front-side bus stayed horribly slow.
Good code density is more important with slow memory! First, the caches hold more code, so there are fewer cache misses where it is necessary to access memory. When you do access memory, the 68k gets about 40% more code than the PPC from the same-sized access, so memory access throughput for code is increased by that much as well. Think of your internet connection, where you could choose to compress the data coming through by 40%, with hardware that decompresses with no noticeable latency. The 68k has room to improve code density further: I believe at least 50% better code density than the PPC is possible with ISA/ABI changes, CPU design changes and improved compiler support.
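
As a rough illustration of what that means per cache line (the ~2.9-byte average 68k instruction length is only my working assumption, not a measurement):

Code:
/* How many instructions a 32-byte cache line holds, assuming an average
   68k instruction length of 2.9 bytes vs the PPC's fixed 4 bytes. */
#include <stdio.h>

int main(void)
{
    const double line_bytes = 32.0;
    const double ppc_insn   = 4.0;   /* fixed-length RISC encoding     */
    const double m68k_insn  = 2.9;   /* assumed average, programs vary */

    double ppc_per_line  = line_bytes / ppc_insn;    /* 8.0   */
    double m68k_per_line = line_bytes / m68k_insn;   /* ~11.0 */

    printf("PPC instructions per line: %.1f\n", ppc_per_line);
    printf("68k instructions per line: %.1f\n", m68k_per_line);
    printf("68k gets %.0f%% more instructions per fetch\n",
           100.0 * (m68k_per_line / ppc_per_line - 1.0));   /* ~38 */
    return 0;
}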

Quote:
Originally Posted by Gorf View Post
Our CPU will be a totally different animal, which I will describe in one of my next posts. One feature among many will be that it provides security and protection for a single-address-space OS.
The 68k could add a new hypervisor mode above supervisor mode for virtualization. A virtual-address-capable MMU would probably be necessary as well. You could offer partial task isolation (even as 32-bit) with the MMU, write-protect the AmigaOS in memory, and offer a new API for write-protecting code and other memory, but passing messages by pointer is not going to work with full memory protection, and copying messages is a big change for the AmigaOS, and slower. All the virtualization machinery has significant overhead too.
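
To illustrate the pointer-passing point for anyone who hasn't written Exec code, here is a bare sketch (standard exec.library calls; the message type is hypothetical, and error handling and port setup are omitted):

Code:
#include <exec/ports.h>
#include <proto/exec.h>

struct MyMsg {                 /* hypothetical message type             */
    struct Message msg;        /* standard Exec message header          */
    char          *payload;    /* pointer into the sender's memory      */
};

void send_to(struct MsgPort *port, struct MyMsg *m, char *data)
{
    m->payload = data;         /* no copy is made ...                   */
    PutMsg(port, &m->msg);     /* ... only the pointer crosses over     */
}

void receive_on(struct MsgPort *port)
{
    struct MyMsg *m;

    WaitPort(port);
    m = (struct MyMsg *)GetMsg(port);
    m->payload[0] = 'X';       /* the receiver writes straight into the
                                  sender's address space - exactly what
                                  strict per-task protection would forbid */
    ReplyMsg(&m->msg);
}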

Quote:
Originally Posted by Gorf View Post
I would very much like to hear more about your ideas for this!
I documented some ideas when I was part of the Apollo Team before Gunnar decided to go a different route. See the 68kF_PRMv7f.pdf on the first page of the following thread for an idea of user mode enhancements which are possible for 32 bit.

http://eab.abime.net/showthread.php?t=83642

I will probably change the ISA names to 68k_32 and add 68k_64. SIMD, MMU and supervisor or hypervisor mode instructions would likely be added last as they are more hardware design specific and don't scale well. A very knowledgeable team would really be needed to develop and fine tune these.

Quote:
Originally Posted by Gorf View Post
For the hardware:
This would definitely be the Apollo-core/SAGA approach in my eyes, as a dual- or quad-core ASIC. Maybe with some ideas from my C.A.T CPU to make the OS secure without changing it into a Unix clone ... well, those are not my own ideas, but they are based on today's research.
Vertical integration with in-house CPU core development and customization using FPGAs is what I was thinking also. Unfortunately, Gunnar has not been very fast to move to SMP (limited by FPGA size currently?), rejected virtual-address MMUs (too much performance overhead in FPGA), and joined the SIMD unit to the integer unit, which permanently restricts the SIMD register size to 64 bits with no floating point. An ASIC is the path forward for reducing cost, but Gunnar's FPGA hyper-optimization and ISA make it less likely IMO.

Quote:
Originally Posted by Gorf View Post
For the OS:
Buy out AOS4 from Hyperion as well as MorphOS and merge them with AROS. And buy the rights to the old OS back from Cloanto.

Do we have enough money for all this?
You wouldn't think it would take much to obtain the rights to most of the AmigaOS-like OSs, but that assumes the claimed owners are rational and sane about the actual income generated by their products. It might be cheaper to develop a custom 68k core and create an ASIC, though.
matthey is offline  
Old 13 June 2017, 19:21   #57
Gorf
Registered User
 
Gorf's Avatar
 
Join Date: May 2017
Location: Munich/Bavaria
Posts: 2,294
Quote:
It wasn't necessary to compete with x86 in performance as the 68k can go smaller and have better energy efficiency but a 68020 is just too slow in comparison to a Pentium for the desktop. The 68060 could be clocked up (rev 6 usually run 100MHz) but Motorola did not want more competition for PPC where it was having trouble increasing clock speeds.
We are still talking about the years 1994-1998.
Energy efficiency is nice, but that was not the most urgent thing back then.
In these years the MHz race really started, and people would run home from the computer shop knowing that, by the time they arrived home, their new computer would already be outdated.

The 68060 at 50MHz was late, and the Pentium had already been available for a whole year.
In March 1994 Intel released the P54C at 75MHz, and in June 1995 at 133MHz.
Even the 486DX4 reached 100MHz in 1994, was very cheap, and you could easily upgrade your old board.
The 060 was still at only 50MHz...

When Motorola finally reached 66MHz, the Pentium was already at 200 and the Pentium with MMX at 150.

To stay in the race we would have needed at least a 68060 at 233MHz by the end of 1997...
The PPC604ev reached 340MHz in '97.

Quote:
Commodore did own MOS Technologies (although they did *not* keep their fabs modern)
That factory was total crap. After CBM sold it (in actual reality), it was used for some power semiconductors (much larger structures), but it had to close for environmental reasons... the clean-up must have been very expensive.

Either go fabless or build a fab as a joint venture with other companies.

Quote:
A CISC design is more complex than RISC although the 68k should be cleaner and easier to develop than the x86. C= could have done an acquisition of perhaps an AMD or Cyrix if they couldn't hire enough CPU expertise.
To come up within 2 years with something better than Motorola and produce a faster chip at higher clock rates?

In my scenario we secretly kept on developing the transputer core with a very new design and new ideas - constantly since 1989, as an almost 10-year plan.

Quote:
Good code density is more important with slow memory! First off, the caches hold more code so there are fewer cache misses where it is necessary to access memory. When you do access memory, the 68k is getting about 40% more code than the PPC from the same sized access. Memory access throughput for code is increased by this much also.

I hear you. I really do.
And we will try to convince Motorola to speed up the 060 as best they can.

Quote:
vertical integration with in house CPU core development and customization using FPGA is what I was thinking also. Unfortunately, Gunnar has not been very fast to move to SMP (limited by FPGA size currently?), rejected virtual address MMUs (too much performance overhead in FPGA), joined the SIMD unit to the integer unit which permanently restricts the SIMD unit register size to 64 bits with no floating point.
Yes.
Maybe you have read my thread about SMP on the Apollo forum.
Basically Gunnar is saying:
Yes, the core is ready for SMP, but we are not doing it - and this is not something we should want for the Amiga.

Well, I am pretty sure I do want 68K multiprocessing for the Amiga. I can't see why not.

Also, the FPU story is quite confusing. They claim to have it, but for some strange reason the "laziness" of users and customers prevents the team from releasing it...

Why do companies around the Amiga always have trouble communicating with customers?

Last edited by Gorf; 13 June 2017 at 20:53.
Gorf is offline  
Old 13 June 2017, 19:30   #58
idrougge
Registered User
 
Join Date: Sep 2007
Location: Stockholm
Posts: 4,332
Quote:
Originally Posted by matthey View Post
You blew it by choosing the same PPC path which has been tried and failed.
The 68000 series is a dead end while the PowerPC has a long and successful life ahead of itself. QED.
idrougge is offline  
Old 13 June 2017, 21:13   #59
Gorf
Registered User
 
Gorf's Avatar
 
Join Date: May 2017
Location: Munich/Bavaria
Posts: 2,294
Back in real time

(not in my alternative timeline, but back in actual reality)

The MHz race back in the mid-90s was kind of crazy.
Computers finally went mainstream and marketing had found its magic trick:
megahertz equals horsepower.

This was stupid and wrong on so many levels ... but hey, people liked it:
the more MHz, the more powerful your PC is - just like horsepower in a car.

OK, you could try to explain it and set it in the right context... the answer would be: OK, so let's use MIPS instead.

And just at the time these figures would have become relevant, Motorola failed to deliver...
Gorf is offline  
Old 13 June 2017, 23:01   #60
matthey
Banned
 
Join Date: Jan 2010
Location: Kansas
Posts: 1,284
Quote:
Originally Posted by Gorf View Post
We are still talking about the years 1994-1998.
Energy efficiency is nice, but that was not the most urgent thing back then.
In these years the MHz race really started, and people would run home from the computer shop knowing that, by the time they arrived home, their new computer would already be outdated.
I can't deny that the MHz race was good for marketing (Intel took it to the extreme with the awful P4), and the combination of economies of scale and what looked like Moore's law holding true with die shrinks was difficult to deal with once the x86 became popular for FPS gaming, while the Amiga had no answers for 3D - chunky modes and faster 68k CPUs early on, custom 3D hardware later (you started your C= alternate timeline early enough to try and fix that, though). Today these challenges have been greatly reduced: Moore's law has become invalid with die shrinks approaching their molecular limits (cost-wise at least), energy efficiency and small footprint have become much more important, and the embedded market has grown several times over. There are tens of thousands of potential Amiga unit sales, millions of potential retro computer unit sales and billions of potential embedded unit sales today.

Quote:
Originally Posted by Gorf View Post
The 68060 at 50MHz was late, and the Pentium had already been available for a whole year.
In March 1994 Intel released the P54C at 75MHz, and in June 1995 at 133MHz.
Even the 486DX4 reached 100MHz in 1994, was very cheap, and you could easily upgrade your old board.
The 060 was still at only 50MHz...
The 68060 was late but good, and could have been clocked up more aggressively. Motorola was also more conservative with its clock ratings. Without the economies of scale, they certainly could not have offered as many clock-rated variations. For embedded use, high clock speeds are not necessary and are often undesirable: consistent performance (lack of jitter) matters more, and high clock rates increase board costs as well. The average clock speed for embedded has been coming down (from a 485MHz average in 2013 to 397MHz in 2015, according to one survey). Back then an OS was used less in embedded than it is today, but it was a fast-growing market which the AmigaOS could have captured. The 32-bit CPU is currently the fastest-growing CPU type as well.

http://www.embedded.com/electronics-...-for-engineers

Quote:
Originally Posted by Gorf View Post
When Motorola finally reached 66MHz, the Pentium was already at 200 and the Pentium with MMX at 150.

To stay in the race we would have needed at least a 68060 at 233MHz by the end of 1997...
The PPC604ev reached 340MHz in '97.
I don't think it would have been a problem to clock the 68060 up, although it probably would not quite have reached x86 clock speeds with everything else equal. It does have a very efficient pipeline length, especially for embedded, power-efficient applications and SMP. Performance could easily have been improved by doubling the DCache size, doubling the instruction fetch, adding a link stack, bringing back the 64-bit-result MUL and DIV, making a few more instructions like SWAP work in both integer pipes, adding a SIMD unit, creating a new ISA/ABI etc., and later adding an L2 cache and multiple cores. Panicking, dumping a large existing 68k customer base and jumping the fence to greener pastures like Motorola did with the PPC was not smart, but maybe that was the time to buy the rights or at least a license to the 68k. Intel didn't panic when the 68000 came out and "it was terrifying" for them.

http://eab.abime.net/showthread.php?t=83699

Quote:
Originally Posted by Gorf View Post
That factory was total crap. After CBM sold it (in actual reality), it was used for some power semiconductors (much larger structures), but it had to close for environmental reasons... the clean-up must have been very expensive.

Either go fabless or build a fab as a joint venture with other companies.
I absolutely agree that fabless semiconductor design is the way to go today with foundry services having become commodities. This was not so when C= bought MOS (every semiconductor designer had their own foundry then). It is interesting how C= fell so far behind here considering Chuck Peddle had created the best CPU foundry in the world and joined the C= management when MOS was acquired. Perhaps this shows the level of incompetence of even higher level management.

Quote:
Originally Posted by Gorf View Post
In my scenario we secretly kept on developing the transputer core with a very new design and new ideas - constantly since 1989, as an almost 10-year plan.
A competitive massively parallel general purpose CPU is probably less realistic than acquiring and clocking up the 68060. The Cell (PPC) CPU probably came closest but it was just too difficult to program and many tasks can not be done in parallel. Some of your hardware ideas would likely have coherency issues also. SMP and SIMD units are a good route for some CPU parallelization and modern GPUs can handle some more limited but massively parallelized tasks.


Quote:
Originally Posted by Gorf View Post
Yes.
Maybe you have read my thread about SMP on the Apollo forum.
Basically Gunnar is saying: yes, the core is ready for SMP, but we are not doing it - and this is not something we should want for the Amiga.

Well, I am pretty sure I do want 68K multiprocessing for the Amiga. I can't see why not.
The Apollo Team discussed SMP/AMP and multi-threading when AROS was first adding SMP support. I don't think Gunnar is opposed to SMP or multi-threading but I think he would rather use a more affordable FPGA which would limit the chances of being able to add another core. It would be different if he was designing the CPU core for an ASIC once again. Adding more cores at its simplest is just a copy and paste but needs debugged support. I would want to have at least 2 cores in the first ASIC design for SMP hardware debugging/testing and to increase yields by deactivating one core if it is bad.

Quote:
Originally Posted by Gorf View Post
Also, the FPU story is quite confusing. They claim to have it, but for some strange reason the "laziness" of users and customers prevents the team from releasing it...
Gunnar may want to replace the 68k FPU with a vector FPU (SIMD FPU) since his SIMD unit can't add floating point support (this would give floating point in the integer register file, which would be horrible for performance). He asked me to add vector extensions to the 68k FPU once, but the design is not a good candidate. It makes so much sense to keep the current 68k FPU for compatibility and compiler support and add a SIMD unit which supports both integer and floating point (SSE instead of MMX). His SIMD-integer-with-the-integer-units and SIMD-floating-point-with-the-FPU split is not a bad idea for a new design, but it is a horrible fit for the 68k. It is just like adding more registers, which the 68k has no encoding room for since they can't be added in an orthogonal way, but he forced it anyway.

Quote:
Originally Posted by Gorf View Post
Why do companies around the Amiga always have trouble communicating with customers?
Gunnar communicates fine when he wants to. I guess customers are on a need-to-know basis, like team members.

Quote:
Originally Posted by idrougge View Post
The 68000 series is a dead end while the PowerPC has a long and successful life ahead of itself. QED.
The PPC is only barely alive in embedded and that is life support from Freescale/NXP which is being acquired by Qualcomm who is a big ARMv8 AArch64 supporter (the AArch64 ISA is similar to PPC but more modern and gives a little better code density). IBM will make custom PPC designs if you pay them enough but there aren't many takers these days.

The 68k is a much more loved processor than the PPC. It was used in many computers, consoles and arcades which have become more popular with the current retro trend. It is very easy to program and debug at a low level. It has a better code density than any other semi-modern general purpose 32 bit CPU giving a tiny footprint with relatively large address space. There is practically no competition in the higher performance 68k CPU market.
matthey is offline  
 

