English Amiga Board Amiga Lore


Old 13 June 2017, 23:32   #61
Gorf
Registered User

 
Join Date: May 2017
Location: Munich/Bavaria
Posts: 156
Quote:
Originally Posted by matthey View Post
A competitive massively parallel general purpose CPU is probably less realistic than acquiring and clocking up the 68060. The Cell (PPC) CPU probably came closest but it was just too difficult to program and many tasks can not be done in parallel. Some of your hardware ideas would likely have coherency issues also. SMP and SIMD units are a good route for some CPU parallelization and modern GPUs can handle some more limited but massively parallelized tasks.
No, no, no, no!
Noooo!

Please don't jump to conclusions!
I am not talking about anything massively parallel! Not at all!

I was talking about one/single/solo transputer core! The T800 series was quite a good single-core processor back in 1987, delivering over 15 MIPS at 20 MHz:
32-bit, with a 64-bit FPU.

This will be one foundation stone of our new CPU concept. Some other influence may come from Wang's mainframe CPUs... and some radical new ones from the future...

Quote:
Gunnar may want to replace the 68k FPU with a vector FPU (SIMD FPU) since his SIMD unit can't add floating point support (this would give floating point in the integer file which would be horrible for performance). He asked me to add vector extensions to the 68k FPU once but the design is not a good candidate. It makes so much sense to keep the current 68k FPU for compatibility and compiler support and add a SIMD which supports both integer and floating point (SSE instead of MMX). His SIMD integer with the integer units and SIMD floating point with the FPU is not a bad idea for a new design but it is a horrible fit for the 68k. It is just like adding more registers which is not there on the 68k as they can't be added in an orthogonal way but he forced it anyway.
To go Gunnar's way would be a sad and wrong decision.
He is a big Amiga fan, as we all are, but I think he is more biased towards the hardware, 68k asm, and the custom chips...
While I and some others (probably most of the AROS and modern Amiga crowd) are more biased towards AmigaOS and the feeling.

So people like me, who do not just want to play old games on an FPGA but actually *use* the system and work/play with programs, do need a fast FPU...

Last edited by Gorf; 14 June 2017 at 01:12.
Old 14 June 2017, 00:44   #62
idrougge
Registered User
 
Join Date: Sep 2007
Location: Stockholm
Posts: 2,902
Quote:
Originally Posted by matthey View Post
The PPC is only barely alive in embedded and that is life support from Freescale/NXP which is being acquired by Qualcomm who is a big ARMv8 AArch64 supporter (the AArch64 ISA is similar to PPC but more modern and gives a little better code density). IBM will make custom PPC designs if you pay them enough but there aren't many takers these days.
You seem to think it's 2017, but it's 1995 right now. For the next fifteen years, the PPC will be the only processor approaching x86 in performance and cost.
Old 14 June 2017, 03:18   #63
matthey
Registered User
 
Join Date: Jan 2010
Location: Kansas
Posts: 1,271
Quote:
Originally Posted by Gorf View Post
No, no, no, no!
Noooo!

Please don't jump to conclusions!
I am not talking about anything massively parallel! Not at all!

I was talking about one/single/solo transputer core! The T800 series was quite a good single core processor back in 1987, delivering over 15 mips at 20Mhz.
32bit with a 64bit fpu.
Surely you are still talking about the benefits from parallelism? This is where the performance comes from and I don't see any other reason to like the Transputer for a modern CPU. Adapteva made a similar modern version of the Transputer which they used Kickstarter to fund as the Parallella project.

https://en.wikipedia.org/wiki/Adapteva

The big advantage is the parallelism and Transputer-like communication, which is cluster computing like big supercomputers but with fewer resources. Cache coherency problems are solved by using a partitioned global address space, but this also limits general purpose use and requires special programming. It takes a lot of parallelism to outperform a general purpose Intel i7 in floating point performance.

Quote:
Originally Posted by Gorf View Post
To go Gunners way would be a sad an wrong decision.
He is a big Amiga-fan, as we all are, but I think he is more biased to the hardware, 68k asm, and the custom chips ..
While I and some others (probably most of the Aros and modern Amiga crowd) are more biased towards the AmigaOS and the feeling.

So people like me do not just want to play old games on an fpga, but actually *use* the system and work/play with programs, do need a fast FPU...
I tried to get Gunnar to improve compiler support. It was one of the main focuses of the ISA we worked on as you can see (one of my highest priorities). Most new FPU instructions are simple and directly map to C functions. Several integer instructions are from the ColdFire and only need to be turned on to simplify compiler code generation and gain in code density. Compilers and assemblers already have ColdFire peephole optimizations ready to turn on also. Gunnar is not looking for a few percent of performance in many places though. He tries to hit home runs every time but that also means he strikes out a lot. His control needs to be bought out and more pragmatic chip designers brought in to form a real "team".

Quote:
Originally Posted by idrougge View Post
You seem to think it's 2017, but it's 1995 right now. For the next fifteen years, the PPC will be the only processor approaching x86 in performance and cost.
Gorf altered the space-time continuum earlier and this may have an effect. C= may have a stronger position and be more easily able to influence Motorola as a top customer. If not, obtaining and developing the 68060 is less radical than the PA-RISC Hombre project which nearly happened. C= would have needed some leadership with the foresight to steer them away from big RISCy decisions, including PPC. Many C= developers fell for the RISC hype, and who can argue with AIM (Apple, IBM, Motorola) support, even though that fell apart. There was more conservative development support in C= which valued compatibility and pushed for a 68k-based Amiga SoC. Was PPC migration or the 68k SoC development path the better choice? Hindsight is not always 20/20 after we alter the timeline. I can tell you which looks better today though.
Old 14 June 2017, 04:30   #64
Gorf
Quote:
Originally Posted by matthey View Post
Surely you are still talking about the benefits from parallelism?
No I am NOT! That is what I am trying to tell you all the time!

Quote:
This is where the performance comes from
Not in this case...
We are talking about a "normal" setup: single or dual core. Nothing fancy.

Quote:
I don't see any other reason to like the Transputer for a modern CPU.
Don't let the name fool you.
As I said, the T800 had pretty good single-core performance.
The T9000 was in development but never came to life:
  • 32-bit CPU, 64-bit FPU
  • 64-bit wide memory interface
  • superscalar
  • 5-stage pipeline
  • "instruction grouper" (out-of-order execution)
  • 32 cached positions (instead of registers - it's a stack machine)
  • 16 kB L1 cache
  • MMU
  • 50 MHz in 1991

Quote:
Adapteva made a similar modern version of the Transputer
I know that project...
No, it is not that similar at all: the weak cores can only use their tiny 16 KB of RAM, and you cannot attach a RAM controller (because they are all on the same die).
Useless!

Please get that image out of your head: that is not what I am talking about!


The link or interconnect technology we have already reused for networking and our USB clone. There it is put to best use - not in the CPU.

Last edited by Gorf; 14 June 2017 at 04:42.
Old 15 June 2017, 02:56   #65
matthey
Quote:
Originally Posted by Gorf View Post
No I am NOT! That is what I am trying to tell you all the time!

Not in this case...
We are talking about a "normal" setup: single or dual core. nothing fancy
No need to get excited. You are still talking about parallelism, just single-core parallelism with superscalar operation and OoO execution. These are more common parallelism paths, which is why I thought you were referring to the unusual cluster features of the Transputer.

Quote:
Originally Posted by Gorf View Post
Don't let the name fool you.
As I said, the T800 had pretty good single core performance
the T9000 was in development but never came to live:
  • 32bit CPU 64bit FPU
  • 64-bit wide memory interface
  • superscalar
  • 5 stage pipeline
  • "instruction grouper" (out of order execution)
  • 32 cashed positions (instead of registers - it's a stack-machine)
  • 16 kB L1 cache
  • MMU
  • 50 MHz in 1991
I looked at the T9000 Transputer Hardware Reference Manual. The CPU has some interesting features, but the stack machine is not so impressive for the early '90s. The manual claims a continuous 200 MIPS at 50 MHz, but Wiki says they only achieved 36 MIPS. Stack machines have very simple instructions and good code density, but they also execute a huge number of instructions with many cache/memory accesses. The T9000 would need a very good "grouper" to keep up with CPUs executing half as many instructions. RISC CPUs added more complex CISC-like addressing modes to reduce the instruction fetch bottlenecks and instruction scheduling complexity caused by the high instruction counts of pure RISC. Some stack machines added registers to reduce the number of instructions. Perhaps more of a hybrid approach would have been better for the Transputer as well. The Mill architecture is an interesting variation of a stack machine (called a belt machine) which tries to use VLIW to encode all those simple instructions.

https://en.wikipedia.org/wiki/Mill_architecture

VLIW has been a failure so far for general purpose computing and I doubt this will be a success because of the complexity.
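The instruction-count argument against stack machines is easy to see with a toy sketch (this is illustrative Python, not Transputer code): evaluating a*b + c*d takes 7 instructions on a pure stack machine versus 3 on a 3-operand register machine, which is exactly the fetch and "grouping" pressure described above.

```python
def run_stack(program, env):
    """Minimal stack-machine interpreter; returns (result, instruction_count)."""
    stack, count = [], 0
    for op, *arg in program:
        count += 1
        if op == "push":                  # load a named value onto the stack
            stack.append(env[arg[0]])
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack.pop(), count

# a*b + c*d as stack code: 7 instructions
stack_prog = [("push", "a"), ("push", "b"), ("mul",),
              ("push", "c"), ("push", "d"), ("mul",), ("add",)]

env = {"a": 2, "b": 3, "c": 4, "d": 5}
result, n_stack = run_stack(stack_prog, env)
print(result, n_stack)   # 26 in 7 instructions

# The same expression on a 3-operand register machine is 3 instructions:
#   mul r0, a, b ; mul r1, c, d ; add r0, r0, r1
```

This is why a stack machine needs a very good instruction grouper just to break even with a register machine on the same expression.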

CISC can still be pretty good.

+ powerful instructions (few instructions needed)
+ efficient cache/memory accesses
+ good code density (for fetch and cache efficiency)
+ good performance with few registers
+ easy to use and debug at a low level
- some complexity in the decoder (for decompression)
- complex to create
Old 15 June 2017, 03:11   #66
Gorf
Damn - now you spoiled the surprise!

Yes, the radical idea I brought back from the future is indeed the Mill.

Ivan Godard from Mill about the Amiga:
"Yep. The Amiga got a lot of things right; amazingly so considering the vintage. AmigaOS was actually one of the (mental) use-cases during Mill development."

Last edited by Gorf; 15 June 2017 at 03:44.
Old 15 June 2017, 03:47   #67
matthey
Quote:
Originally Posted by Gorf View Post
Damn - now you spoiled the surprise!

Yes, the radical idea I brought back from the future is indeed the Mill.
You might want to take back something that works. Another VLIW attempt would have been a good way to put C= out of business, if you were wanting to take it over though. Transmeta had almost $1 billion in investment and couldn't come up with a decent general purpose VLIW CPU. Intel's Itanium was a more EPIC VLIW fail, with some of the best chip designers in the world working on it. The Mill machine is an even more radical design with immense compiler complexity (likely a debugging nightmare). They can probably find plenty of investor suckers as history repeats itself though. Unfortunately, there aren't very many investors who want to invest in making proven designs better.
Old 15 June 2017, 04:25   #68
Gorf
Quote:
Originally Posted by matthey View Post
Transmeta had almost $1 billion in investment and couldn't come up with a descent general purpose VLIW CPU.
Really? The Crusoe was a decent general purpose VLIW CPU - it had fewer transistors, and was therefore cheaper and more efficient than Intel's counterparts.
It even reached reasonable clock speed and performance.

The mistake was to attack Intel on x86.

Back at the time I was even in email contact with the guys and tried to convince them to adapt their "code morphing software" to the 68K ISA.
It was considered but dismissed :-(

Quote:
Intel's Itanium was a more EPIC VLIW fail with some of the best chip designers in the world working on it.
The Itanium was equally as good as its x86 (or x64) counterparts. It just was not much better. And you cannot sell an equally fast processor with no software support for a higher price... that is the only reason why it "failed".


Quote:
The Mill machine is an even more radical design with immense compiler complexity (likely a debugging nightmare). They can probably find plenty of investor suckers as history repeats itself though. Unfortunately, there aren't very many investors who want to invest in making proven designs better.
In the real world I would definitely like to see both coming to life:
the Mill and a modern 136K.
(Would a 64-bit 68K not be a 136K?)

Last edited by Gorf; 15 June 2017 at 07:58.
Old 15 June 2017, 06:39   #69
Gorf
@ matthey


If one were to follow your arguments with ice-cold logic, going x86 (and later x64) would be the most reasonable thing to do:

Even if we take over 68K development and manage to develop chips that are as fast as late Pentium or Core chips... it would be very expensive to do so. And even if we could bring Apple back to using the 68k again, we would end up producing and selling less than Intel: equal costs, but smaller revenue.

And we would always be one step behind Intel in terms of the manufacturing process. Intel was and is always the first to ship a smaller structure size...
Therefore we would almost certainly be more power-hungry or slower.

We would face the same problems AMD did and does - in some years we might have a win, but most of the time we would lag behind.

So it would make more sense to invest our precious money in the development of good gfx chips, good software and nice computers, and leave the CPU to Intel and AMD. Let them fight the fight and enjoy the result: cheap and fast CPUs.

The only reason that would justify taking another route is a design that is fundamentally better in all areas: speed, efficiency, price, and support for AmigaOS as well as Linux/BSD.
And it needs to be so good that others switch over from x86 to our platform.
68K CISC is just not different enough to provide any real benefit over x86. Having a nicer ISA and nicer asm is not going to do the trick.
Old 15 June 2017, 18:19   #70
Gorf
in the year 1995

in the year 1995

Happy 10th anniversary Amiga!

Of course there is a big party!
Where?
In our brand-new headquarters!
Where?
In Los Gatos, California!

We are finally back where everything began.
Since the development of the CDTV we have had a group stationed in this area, and over the years it became more and more our creative center. Now we are officially back. (And no more shady Bahamas, of course!)

Here is a short list of important events and our new products:

Microsoft, Win95, OLE:

This is not only a big year for us but also for MS: they are shipping Windows 95.
Finally more than 8 characters in a filename! No joke, it really took them that long.
And finally 32-bit. Well, at least mostly... of course there is still DOS deep inside, and win16, and old driver models and ...

We spent some money to analyze the market and customer expectations, but still we have not the slightest clue why MS is able to sell Windows.


But at least we got the patents. Since we took over, we have been filing patents for every stupid idea - no matter how trivial. The patent system in the US is terribly broken, but we cannot change that, so we play along.
So we now have a silly patent on pressing a "start button" when you want to shut down your computer...
A more substantial patent covers the OLE framework MS is using in Office. We got this by merging with Wang. In real-reality MS would settle in 1995 for about $90 million. We do not want the money... Wang needed it back then, but we don't.

MS wants to ship its new line of Office for its new Win95, and in the worst case we could find a judge who would forbid these new products in the USA. That is a risk MS is not really willing to take. So we settle under the following conditions - they can use OLE as long as:
  • MS ports Office to AmigaOS within a year (they can use the Mac port) - feature complete. New versions as soon as Windows gets a new version.
  • Office uses our open IFF office format to save files. As the standard option!
MS Office will no longer be an excuse for not buying an Amiga!

AmigaPS, Sony PreyStation, SEGA Saturn:

This year all the other 5th Generation Consoles are hitting the market.
Sony has the most aggressive strategy - they are offering their PreyStation for an incredible $299. We have only just reached that price with our outdated model from 1992.
With the mpeg and/or 3D module we are more expensive and still not better...

We integrate, size down and optimize... we put the mpeg chip as well as the Quake chip on the same board - and we use an old trick: bundles!
With 3 or 5 games the AmigaPS+video-3D is available for $400 - the same as the Sega Saturn.

Sega is out of the race quite soon. The battle is between Sony and C.A.T.
We are going to lose our no. 1 position this year, but we still have over 25 million AmigaPS units out there, so games are still coming.

We do not have as elaborate a copy protection as the Sony PreyStation has... so some developers prefer the new platform.
CD burners are now available for around $1000, and they cannot yet burn our special CD-ROM format... but this is only a question of time.

In the end, the ability to copy games will be a factor that prolongs the lifecycle of the AmigaPS until the year 2000.

BeOS and BeBox:

Both come out this year. That don't impress me much...
We added a full lock-free SMP solution to our AmigaOS, thanks to Alexia Massalin - in addition to our VM/container-based MP solution.
(To be honest, I don't think the BeBox would ship in our timeline, since this product no longer makes sense...)

Motorola and the CPU

Long negotiations, hard bargaining ... in the end a compromise:
Motorola helped us integrate the AAA+ chipset for the AmigaPS into one chip.
A second chip contains the 68030 and the licensed DSP, a third chip the MPEG decoder and 3D.
They also brought the 68060, including the DSP, to their newest 0.42 µm line: 3.3V and 66 MHz. In large numbers.

But in return we had to promise to join the PPC club.


New Products:
  • AmigaPS in 3.3V technology, including mpeg and 3D
  • End of the Amiga700NG
  • All other desktop Amigas now include the AAA++ chipset and 68060+DSP.
  • Amiga Cube as the new entry model
  • Amiga Server with PPC604 as the first PowerPC model.
  • PCI cards with AAA++ (2D) or Quake (3D) for Amigas and PCs*

*) We are going to establish our 3D standard before someone else does - it also makes ports easier.

Last edited by Gorf; 15 June 2017 at 21:41.
Old 15 June 2017, 23:52   #71
matthey
Quote:
Originally Posted by Gorf View Post
Really? The Crusoe was a decent general purpose VLIW CPU - it had fewer transistors, and was therefore cheaper and more efficient than Intel's counterparts.
It even reached reasonable clock speed and performance.
The Crusoe was a decent processor. It was probably *not* a decent general purpose processor. If the Crusoe and Efficeon could have filled all their VLIW execution slots (NOPs don't count), then they would have crushed the x86 competition. The GP CPU test is the conditional branch instruction, and I expect they performed poorly here. They probably did perform well with multimedia processing and blew away the Pentium 4 in energy efficiency, but Intel was learning from their mistakes. The "fewer transistors" claim doesn't count the fact that half the x86 CPU was in software, which was 32MB (which shows how complex this virtual machine was). Considerable performance was likely lost in this complex virtual machine, where they expected to gain performance, or at least offset performance losses, by optimizing; but I expect cache issues and latencies were a problem as they piled on the layers of abstraction.

Quote:
Originally Posted by Gorf View Post
The mistake was to attack Intel on x86.
That was one of their biggest mistakes. They might have had something if they made the code morphing software easier to adapt to many architectures. It is a nice feature to be able to execute code from different architectures on the same CPU even if a less efficient virtual machine is needed for translation.

Quote:
Originally Posted by Gorf View Post
Back at the time I even was in email contact with the guys and tried to convince them to adapt their "code morphing software" to the 68K ISA.
It was considered but dismissed :-(
We might be using Transmeta CPUs with 68k support today if they had focused on morphing for many architectures and looked for embedded, small electronics and niche markets where VLIW is an advantage or less of a disadvantage while they refined their complex product. They spent big and went after big contracts instead of building up their product and generating cash flow from smaller markets where they had an advantage.

IBM and Motorola both worked on CPUs which supported multiple or customizable ISAs. IBM made the PPC 615 prototypes which supported PPC and x86. Some of the Engineers supposedly joined the Transmeta team. Jim Drew (Fusion Macintosh emulator author) says Motorola was working on a CPU which could execute 68k code but wasn't a 68k CPU. I'm not sure what CPU this would have been but the major chip manufacturers were experimenting with technologies.

Quote:
Originally Posted by Gorf View Post
The Itanium was equally good as its x86 (or x64) counterparts. It just was not much better. And you can not sell a equally fast processor with no software-support for a higher price... that is the only reason why it "failed".
The cost and complexity were through the roof. Nobody wants to design CISC processors because of the complexity, which is manageable and produces better than expected results, but billions of U.S. dollars have been spent trying to design general purpose VLIW processors, and nobody has been able to manage the complexity and produce something successful.

Quote:
Originally Posted by Gorf View Post
In real world I would definitely like to see both coming to life:
The Mill and a modern 136K.
(would a 64bit 68K not be a 136K?)
The Mill CPU design could be successful if they were able to achieve even a fraction of its potential. I doubt they will even get it working reliably, but I hope they do.

As far as a 64-bit 68k: it is not difficult, but is it worthwhile? The Apollo Core is 64 bit (integer/SIMD units) and the integer registers are 64 bits wide, even though the address lines are not currently connected. I have looked at benchmarks and test data, and my conclusion is that on average a general purpose CPU is slower with 64-bit ALU processing. There are a few workloads which benefit greatly, but most are streaming data which can be done more efficiently in the SIMD unit. Shift, multiplication and division of 64-bit data slow down the whole pipeline, code density deteriorates, requiring a larger ICache, and data alignment needs increase, requiring more DCache.

The only overall reason to have 64-bit integer pipes in a general purpose CPU is to add more physical address space. Here you are using just a few more bits, but now all your pointers take 64 bits, clogging the DCache as only half as many fit. There are some applications which need 64-bit addressing, but most general purpose applications would be fine with 4GB of address space per task. Caches become slower as they become larger (electricity takes time to propagate across more transistors), and large L1 caches for multiple cores multiply the transistor cost. We can no longer shrink the dies indefinitely as Moore's law expires, so 64 bit will remain slower at some point. What percentage of target customers need more than 32 bits of physical addressing per task, especially since we have excellent code density?
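The DCache argument is simple arithmetic; here is a back-of-envelope sketch (the 64 KB cache size is a hypothetical figure for illustration, not a specific 68k or Apollo part):

```python
# How many pointers fit in the data cache at each pointer width?
DCACHE_BYTES = 64 * 1024          # assume a hypothetical 64 KB DCache

ptrs_32 = DCACHE_BYTES // 4       # 32-bit pointers: 4 bytes each
ptrs_64 = DCACHE_BYTES // 8       # 64-bit pointers: 8 bytes each

print(ptrs_32, ptrs_64)           # 16384 vs 8192: half as many fit
```

Pointer-heavy structures (linked lists, trees, OS task structures) roughly double their cache footprint when pointers go from 32 to 64 bits, which is the "clogging" referred to above.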

Quote:
Originally Posted by Gorf View Post
If one were to follow your arguments with ice-cold logic, going x86 (and later x64) would be the most reasonable thing to do:
Changing architectures is expensive, especially when the endianness changes. Apple did manage it, but they already had experience from the 68k to PPC change, which did not go so smoothly (the 68060 Amigas running Mac emulation were the fastest Macs for quite some time).

Quote:
Originally Posted by Gorf View Post
Even if we take over 68K development and manage to develop chips, that are as fast as late Pentium or Core chips... it would be very expensive to do so. And even if we could bring back Apple to use 68k again, we would end up producing and selling less than Intel: equal costs, but smaller revenue.
What about embedded sales? Does Intel or ARM sell more CPUs today?

Quote:
Originally Posted by Gorf View Post
And we would always be one step behind Intel in terms of the manufacturing process. Intel was and is always the first to ship a smaller structure size...
Therefore we would almost certainly be more power-hungry or slower.
We have better energy efficiency being one die shrink behind. Also, the 68060 was 3.3V at the same time as the Pentium, even though it came out later. We can easily pass them up when they build the Pentium 4 space heater.

Quote:
Originally Posted by Gorf View Post
We would face the same problems AMD did and does - in some years we might have a win, but most of the time we would lag behind.
AMD is competing with the same thing. We are competing with a different and better thing.

Quote:
Originally Posted by Gorf View Post
So it would make more sense to invest our precious money in the development of good gfx-chips, good software and nice computers and leave the CPU to Intel and AMD. Let them fight the fight and enjoy the result: cheap and fast CPUs
Intel and AMD make commodity gfx chips too. I hope we have some pretty good gfx expertise from your improved R&D budget.

Quote:
Originally Posted by Gorf View Post
The only reason that would justify taking another route is a design that is fundamentally better in all areas: speed, efficiency, price, and support for AmigaOS as well as Linux/BSD.
And it needs to be so good that others switch over from x86 to our platform.
68K CISC is just not different enough to provide any real benefit over x86. Having a nicer ISA and nicer asm is not going to do the trick.
Keeping the 68k is not another route. It is the path we are currently on and provides a big benefit in compatibility. There is a substantial benefit for low end and embedded systems in performance (better code density with room to improve it, 16 GP registers instead of 8 and much better energy efficiency).

Quote:
Originally Posted by Gorf View Post
Motorola and the CPU

Long negotiations, hard bargaining ... in the end a compromise:
Motorola helped us integrate the AAA+ chipset for the AmigaPS into one chip.
A second chip contains the 68030 and the licensed DSP, a third chip the Mpeg-decoder and 3D.
They also brought the 68060 including the DSP to their newest 0.42mu line. 3.3V and 66Mhz. In large numbers.

But for that we had to promise to join the PPC-Club.
You are not a very good negotiator then. I would ask them to show me a low end PPC which beats the 68060 at the same clock speed. When they bring out the PPC 603 with its 8kB ICache, I would laugh at them as we run benchmarks. When they bring out the PPC 603e, I would laugh at them some more and point out how their low end CPUs are destroying the PPC Mac market with their pathetic performance. I would require them to give us a discount on the cost of any PPC CPU equal to the cost of the increased RAM needed by their PPC chips if we switch. Ha, ha. No. Also, a SIMD unit could replace the DSP and MPEG decoder.

Last edited by matthey; 16 June 2017 at 00:27.
Old 16 June 2017, 00:19   #72
Gorf
Quote:
What about embedded sales? Does Intel or ARM sell more CPUs today?
Yes: today.
But how are we going to survive on the desktop-market for 20 years, to get to now? Certainly not with CPUs for embedded.

Quote:
There is a substantial benefit for low end and embedded systems in performance
From 1994 to at least 2004, that is not something that helps us stay alive.

Quote:
I hope we have some pretty good gfx expertise from your improved R&D budget
Oh yes, we do!

Quote:
What percentage of target customers need more than 64 bits of physical addressing per task, especially since we have excellent code density?
you mean more than 32bit?
We need 64 bits for just two things: tagged pointers and a security feature.

Quote:
You are not a very good negotiator then. I would ask them to show me a low end PPC which beats the 68060 at the same clock speed....
Well - it seems that nobody was. Not Apple the first time, nor Steve Jobs, the man with the reality distortion field. Even he could not force Motorola to make decent chips in the end.

Quote:
Also, a SIMD unit could replace the DSP and MPEG decoder.
And it will in the future. But right now, nobody has SIMD.
And the integration of CPU+DSP is just a drop-in replacement.
Also, the MPEG chip needs to stay separate for now in the AmigaPS:
It is a console, which means every game expects the same setup - in this case, the MPEG decoder has to work independently of the CPU...

We will not bring out a 2nd generation of the AmigaPS. We are done with consoles.

Last edited by Gorf; 16 June 2017 at 05:59.
Old 16 June 2017, 01:20   #73
matthey
Quote:
Originally Posted by Gorf View Post
Yes: today.
But how are we going to survive on the desktop-market for 20 years, to get to now? Certainly not with CPUs for embedded.

from 1994 to at least 2004 that is not something that helps us to stay alive.
Motorola was doing just fine from embedded CPU sales even before the 68000 came out. That is how they survived and probably why they had the funding to develop the 68000 in the first place.

Quote:
Originally Posted by Murray Goldman
Motorola microprocessors went from sales of zero to sales of about 250 million dollars almost overnight ... and it was good margin.
http://eab.abime.net/showthread.php?t=83699

Quote:
Originally Posted by Gorf View Post
you mean more than 32bit?
We need 64 bits for just two things: tagged pointers and a security feature.
I thought you liked portable code? It is not portable to assume any value other than NULL=0 for a pointer, and even that is not true on all hardware (but close enough). The security advantage of 64-bit pointers is a weak argument, as it is just one way of providing security and protects against only one kind of attack.
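The tagged-pointer trick under discussion can be sketched like this (an illustrative Python model with made-up addresses, not a real implementation): with 8-byte-aligned allocations the low 3 bits of every valid pointer are zero, so a small type tag can be smuggled into them. The alignment and address-width assumptions it relies on are exactly the kind of hardware assumptions portable code cannot make.

```python
TAG_BITS = 3
TAG_MASK = (1 << TAG_BITS) - 1    # 0b111: the bits freed up by alignment

def tag(addr, t):
    """Pack a small tag into the low bits of an aligned address."""
    assert addr & TAG_MASK == 0, "address must be 8-byte aligned"
    assert 0 <= t <= TAG_MASK
    return addr | t

def untag(p):
    """Recover (address, tag) from a tagged pointer."""
    return p & ~TAG_MASK, p & TAG_MASK

p = tag(0x10000, 5)               # hypothetical aligned address, tag value 5
addr, t = untag(p)
print(hex(addr), t)               # 0x10000 5
```

The 64-bit variant discussed in the thread does the same thing with unused *high* address bits instead, which is why it wants pointers wider than the physical address space.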

Quote:
Originally Posted by Gorf View Post
Well - it seems that nobody was. Not Apple the first time, nor Steve Jobs, the man with the reality distortion field. Even he could not force Motorola to make decent chips in the end.
Steve Jobs was happy with the Macintosh vs Amiga matchup in 1985 too.

Macintosh
black and white
no custom chips
beeps
single tasking

Amiga
4096 colors
custom chips for video acceleration
4 voice stereo sound
preemptive multitasking

I wish C= had had his marketing ability though. I never saw an Amiga in a school or business like the Macintosh. Yeah, I guess the CPU doesn't matter if these other features aren't important either.

Quote:
Originally Posted by Gorf View Post
And it will in the future. But right now, nobody has SIMD.
And the integration of CPU+DSP is just a drop-in replacement.
Also the MPEG chip needs to stay separate for now in the AmigaPS:
It is a console, which means every game expects the same setup - in this case, the MPEG decoder has to work independently of the CPU...
My 68060@75MHz can decode 320x240 MPEG at about 26 fps with the new version of RIVA and there is room to optimize more.

http://eab.abime.net/showpost.php?p=...8&postcount=22

The 68060 gives impressive performance for having no SIMD unit. It may be better to upgrade people to a 68060 than to sell them MPEG hardware, as the extra processing power is general purpose and easy to program.
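A quick back-of-envelope check on those figures (my own arithmetic, not a measured profile of RIVA) shows how many CPU cycles per pixel a software decoder at that clock and frame rate actually gets to spend:

```python
# Back-of-envelope: cycles per pixel available to a software MPEG
# decoder on a 68060@75MHz hitting 26 fps at 320x240.
# Illustrative arithmetic only; real decode cost varies per frame.

clock_hz = 75_000_000
width, height, fps = 320, 240, 26

pixels_per_second = width * height * fps
cycles_per_pixel = clock_hz / pixels_per_second

print(f"{pixels_per_second:,} pixels/s")            # 1,996,800 pixels/s
print(f"~{cycles_per_pixel:.0f} cycles per pixel")  # ~38 cycles per pixel
```

Roughly 38 cycles per output pixel is a tight but plausible budget for an optimized integer IDCT and motion compensation, which is why there is still headroom to optimize.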

Quote:
Originally Posted by Gorf View Post
We will not bring a 2nd generation of the AmigaPS. We are done with consoles.
I think I would let sales numbers determine this. Personally, I like an upgradeable console which then allows an endless upgrade path (and revenue stream) like with computers. Sharing the same platform and OS also reduces costs (as it does with embedded). Why end the party early if we have a good thing going?
matthey is offline  
Old 16 June 2017, 03:05   #74
Gorf
Registered User

 
Join Date: May 2017
Location: Munich/Bavaria
Posts: 156
Quote:
Motorola was doing just fine from embedded CPU sales even before the 68000 came out. That is how they survived and probably why they had the funding to develop the 68000 in the first place.
Still not helping us in 1995.
Not at all.

Quote:
I thought you like portable code? It is not portable to assume any other value than NULL=0 for a pointer
Null will still be null - at least in the lower 48 bits (or whatever we use as address space). And that is nothing a programmer has to worry about. The system does the work for you.

Quote:
The security advantage of 64 bit pointers is a weak argument as it is just one way of providing security which protects against one kind of attack
Not the way the Mill handles this. In our case it is a very strong argument.
More in 1998 (or at the mill-computing homepage...)

Quote:
My 68060@75MHz can decode 320x240 MPEG at about 26 fps with the new version of RIVA and there is room to optimize more.
Irrelevant:
The MPEG chip can do 640x240 (double VideoCD).
The AmigaPS still has a 68030; everything else would be too expensive - we are not upgrading the CPU of our console, because it is a console:
new games must also work on the first unit we sold 3 years ago.
We must sell it for max $399; we cannot do that with a 68060 that costs over $200 on its own!

Quote:
I think I would let sales numbers determine this. Personally, I like an upgradeable console which then allows an endless upgrade path (and revenue stream) like with computers. Sharing the same platform and OS also reduces costs (as it does with embedded). Why end the party early if we have a good thing going?
  1. Sony is a strong, even unfair, competitor; they are willing to lose lots of money to drive others out of business.
  2. It's time to cash in. Instead of investing our earned money in the next generation, we spend it on our new CPU, new gfx and new Amigas.
  3. I have another strategy in mind for gaming.

Last edited by Gorf; 16 June 2017 at 05:46.
Gorf is offline  
Old 19 June 2017, 08:49   #75
Gorf
Registered User

 
Join Date: May 2017
Location: Munich/Bavaria
Posts: 156
In The Year 1996

The year we introduce the Ωmiga Chip Set: ΩCS

Before we talk more about our mystery CPU, I have to share some news about the developments in the year '96: a new chipset and new models.

Ωmiga Chip Set:

The AAA+ chipset was the foundation of our turnaround. It was everything that AGA was not and what AAA should have been. It was the basis for our successful next-generation Amigas, including our console AmigaPS. AAA++ was the logical follow-up to this: faster and more efficient, but with only a few new features. MPEG and 3D were powerful add-ons, but not an intrinsic part of it. That is going to change now. And it will bring some other revolutionary new features as well.

Ω is the last letter in the Greek alphabet, and it was chosen for a reason: the Ωmiga Chip Set will be the last generation that has full hardware support for all legacy modes. In generations after that it will be necessary to emulate the old planar modes and other features in software.

In good tradition we will give our functional units female names, but they are no longer separate chips; they are placed on two (and later one) dies.

Everything is now in 3D (and 128bit):
As we are used to in 2017, there is no difference between pure 2D and 3D output. Even if your desktop environment looks flat, it is rendered as a 3D scene.
That concept is of course revolutionary in 1996. But because 3D hardware is not powerful enough back then (even ΩCS is not), we use some tricks and shortcuts to enable this feature on high-resolution true-color screens:
Flat rectangular elements like screens, windows and widgets bypass parts of the render pipeline. The 3D chip (Kate) knows the size of these elements and what layer (z-buffer) they are in, but skips transforming, mapping and shading; it just determines what is visible at the moment, fetches the corresponding data and feeds the view to the Rainbow.
If we want to tilt or deform such a window, we can make an exception, but the results may be slow at higher resolutions. At lower resolutions we can happily mix 2D and 3D elements.
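A rough sketch of how that flat-element fast path could work, written as Python pseudologic; Kate has no published interface, so every name and structure below is invented purely for illustration:

```python
# Hypothetical sketch of Kate's flat-element fast path: rectangular
# screen/window elements skip transform, mapping and shading; only
# z-order visibility is resolved before their pixels are fetched.
# All names here are invented for illustration, not a real API.

def composite_pixel(x, y, elements):
    """Return what Kate would do for the pixel at (x, y).

    elements: front-to-back list of dicts, each with a bounding
    'rect' (x, y, w, h) and a 'flat' flag.
    """
    for el in elements:                      # front-to-back, first hit wins
        rx, ry, rw, rh = el["rect"]
        if rx <= x < rx + rw and ry <= y < ry + rh:
            if el["flat"]:
                return ("fetch", el["name"])     # bypass the 3D pipeline
            return ("render3d", el["name"])      # full transform/map/shade
    return ("background", None)

elements = [
    {"name": "window", "rect": (10, 10, 100, 80), "flat": True},
    {"name": "scene", "rect": (0, 0, 640, 480), "flat": False},
]
print(composite_pixel(50, 50, elements))    # ('fetch', 'window')
print(composite_pixel(300, 300, elements))  # ('render3d', 'scene')
```

The point of the shortcut is that the common case (flat windows) never touches the expensive stages, so a 1996-class chip can still present everything as one 3D scene.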

We drop VRAM in favor of BEDO-RAM and SGRAM (support for both) with a 128bit wide bus.

the Andrea-Twins:
The long-missing twin sister has returned, and both have schizophrenia.
Two Blitters and two Coppers take care of our 2D rendering by manipulating a texture in Kate's buffer. They are addressed by virtual-physical registers. They can work together on the same texture to speed things up, or on different ones.
The second feature enables direct hardware blitting for sandboxed applications. You can now run a game in full isolation from the rest of the system.
The beam position for the Copper is simulated.
Schizophrenia?
Yes! Each one of the Andreas can act as up to 4 chips in a hyper-threading-like manner... or more like the xCORE does. When acting as two cores, every virtual core gets only half of the cycles. Since the clock speed is much higher now, this is still faster than the original Andrea from AAA+.
So in total the system can provide 8 "Andreas".
Mary:
Quadruples the available voices the same way.
Julia:
decodes MPEG, JPEG (also useful for textures), MP3
Kate:
Our 3D chip and "blender". It does some 3D calculations, controls the z-buffer and the texture buffer, does mapping and shading, takes care of planar conversion and color lookup, sprites (now just small texture elements), and old Linda's zooming feature (= mapping).
And it has one more thing to offer: voxel sprites.
These are small (up to 32*32*32 "pixels") 3D objects that are atom/voxel-based rather than polygon-based. They live in a fast 128kB cache that can hold up to 8 objects of 32^3 or 64 elements of 16^3, and many more at smaller sizes. They can be arranged in groups, e.g. to form an enemy in a shooter game, or used as a swarm in a particle system. A special Copper can take care of the movement.
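Those cache figures are self-consistent if each voxel takes 4 bits; a quick check (my arithmetic, not an official spec):

```python
# Checking the voxel-cache capacity figures: a 128 kB cache holding
# eight 32^3 objects (or sixty-four 16^3 objects) works out to
# exactly 4 bits per voxel. My arithmetic, not a documented spec.

CACHE_BYTES = 128 * 1024

def objects_that_fit(edge, bits_per_voxel=4):
    """How many edge^3 voxel objects fit in the 128 kB cache."""
    bytes_per_object = edge**3 * bits_per_voxel // 8
    return CACHE_BYTES // bytes_per_object

print(objects_that_fit(32))  # 8
print(objects_that_fit(16))  # 64
```

Four bits per voxel would mean 16 values per voxel, e.g. a 15-color palette plus transparent, which fits the sprite-like role described above.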

Every texture can now have 8bit transparency and Kate takes care of the blending.
the Rainbow:
In 1985 it was crazy to bring 4096 colors. We go crazy again and bring 36-bit color depth in 1996.
We are still living in the glorious era of CRTs. If you buy a new monitor it will almost certainly be a tube. LCDs are still expensive, slow and have terrible colors.
But a good CRT can easily show more than 16 million colors! We take advantage of this. (We also support this in our new line of wide-gamut monitors.)

The "Rainbow" is at the end go our graphics-pipeline. It takes the output of Kate and manipulates it:
For usual 16bit or 24bit screens it can spread the values over the whole 36bit range (3*12bit), but that would look too colorful on a wide-gamut monitor. So it can also map the output to a smaller range and the colors will look right again. It also helps to to calibrate the monitor e.g. for digital photography. There is no real 36bit mode in the ΩCS - 32bit will do for now and one or two of the lowest bits of each color channel will be set to 0 in that case.
Hydra and ASB:
Get another speed-up. The network via link-portal now reaches over 200 Mbit. ASB supports 28 Mbit, e.g. for external drives.
Our Product-Line:

Amiga-Cube:
Introduced last year with the AAA++ chipset and a 68060+DSP, it will stay the same as our entry-level Amiga. Available with 66MHz or, new, with 75MHz.
It offers good performance for all kinds of office work and all 2D games. A 3D card can be added.
Amiga-Server:
Also from last year. Gets some faster PPC processor(s).
Amiga 2000NG:
Our new A1000, since it has nothing in common with the original 2000. It is an ultra-slim desktop case with a keyboard garage. To achieve this form factor we use laptop drives. The body is just 2.5cm (1 inch) thick, but it is stable enough to carry your CRT. And it is wide and deep enough for a full-length ZorroIII or PCI board... but just one.
  • ΩCS
  • processor-module with 68060@66 and PPC 604e@200*
*) Each processor has its own memory controller, so they can really work in parallel.
Architects, designers, DTP-folk will just love it.
Amiga Tower rev 3:
A new look for our tower. As usual you can configure it to your wishes:
AAA++ or ΩCS, all available processor modules, drives etc.
Amiga AIO:
We are proud of our new line of monitors we created together with Iiyama.
They feature a 3:2 aspect ratio and a wide-gamut color space. So we decided to put a new Amiga right inside such a 16-inch monitor: the Amiga all-in-one.
Yes, I know I'm copying the iMac here, but let's be honest: the original iMac was a good idea and it does not exist yet. And ours looks even better than the original! ;-)
It comes with ΩCS and a single or dual 68060.
This is ideal for schools, universities or libraries, and of course for internet cafes.
For all of the above we offer special deals - because it is the best advertising we can get!
It is also available without a hard drive, booting from the network.
Amiga Laptop:
Still equipped with AAA++ and a 68060. ΩCS is of course far too power-hungry for a mobile device. Better displays and a suspend-to-RAM mode.
AmigaPS and Amiga Genie:
Both are still selling. We are now number two behind the Sony PreyStation, but that's all right.
What else?
MS decided to ship Windows with Internet Explorer and is attacking our browser monopoly. We have done the same with AWB on Amigas since '93, but until now it was not free on Windows or MacOS except for private use. It is now. We are also adding new features like tabbed browsing and a JIT for AScript.
We have the best search engine out there and offer services like free email, and we place a download button for AWB there. So we hope we can do better than Netscape did in the real world and keep up our market share.

Sun is making much noise with Java. We are not so impressed. It has no JIT yet and is awfully slow and memory hungry. We think ENIM is the better language.

Our PCI gfx cards for PCs and Macs, based on AAA++ without Mary (optionally plus 3D), are selling extremely well. A ΩCS-based card has to wait until next year.

Last edited by Gorf; 20 June 2017 at 02:14.
Gorf is offline  
Old 19 June 2017, 20:48   #76
babsimov
Registered User

 
Join Date: Jun 2017
Location: France
Posts: 24
Gorf, your Ωmiga Chip Set looks great, but don't you think it would have been very expensive to produce at the time?

Why go 36 bits for the graphics? As I understand it, human eyes can only see 16 million colors.

Doesn't going to something that is not an "industry standard" make standard monitors incompatible? So Amiga monitors would be much more expensive. Maybe I don't understand your move?

I like the fully configurable Amiga tower, by the way; I think I missed it in your previous post.

Your alternate reality is far more ambitious than the ones I have written. Many, many new technologies and chipsets that I couldn't dream of.

Continue your timeline to today; I'm impatient to see what is to come.
babsimov is offline  
Old 20 June 2017, 02:12   #77
Gorf
Registered User

 
Join Date: May 2017
Location: Munich/Bavaria
Posts: 156
Quote:
Originally Posted by babsimov View Post
Gorf, your Ωmiga Chip Set look great, but don't you think it would have been very expensive to produce at the time ?
Let's compare it to reality:
1996: 3dfx Voodoo (only 3D), ATI Rage (including 2D, 3D, MPEG)
1997: Nvidia Riva 128 with over 3.5 million transistors

So the ΩCS would have more transistors than the Voodoo and the Rage, and about as many as the Riva... maybe about 4 million. So it is high-class in 1996, but not impossible.

Quote:
Why going 36 bits for the graphics ? As i understand, human eyes can only see 16 millions colors.
If you are not color blind, you can distinguish far more than 16 million colors.
Even the small camera in your smartphone has at least 10 bits per color!
Most people are not aware that 24 bit has long been surpassed in many fields.
E.g. Blu-ray supports "deep color" up to 16 bits per channel = 48 bit!
HDMI has supported more than 8 bits per channel since version 1.3.

Quote:
Going to something not "industry standard" make standard monitor not compatible ? So Amiga monitor would be much expensive. Maybe i don't understand your move ?
You can still perfectly well use a standard monitor - it will just not show you many more visible colors.

There are some funny things to know about CRTs.
Most of them had much better color capabilities than all LCDs until very recently.
The first TV sets that followed the 1953 NTSC definition to the letter had an enormous color range! But the phosphors they used turned out to degrade quickly, so manufacturers switched to less colorful coatings.
So if you buy an "sRGB" monitor today, it will give you only about 72% of the NTSC color space...

To answer your question: it is much cheaper to produce a more colorful CRT in the 90s than it is now. And it is much easier than producing a wide-gamut LCD.

We are now selling millions of Amigas per month - so we can place large orders, and our monitor would probably be just $50 over others.

Quote:
I like the fully configurable Amiga tower, by the way, i think i have missed it in your previous post.
We have had it this way since 1993. :-)
Yes, I think that is what many people, especially more experienced users, wanted back then. Of course you will find pre-configured models in computer shops as well.

Quote:
Your alternate reality in far more ambitious than the ones i have wrote. Many many new technologies, chipset, that i can't dream of.
Mine started slower than yours, but now this timeline has gained speed! :-) Chip technology grows exponentially, and so do we.

Last edited by Gorf; 20 June 2017 at 14:23.
Gorf is offline  
Old 20 June 2017, 13:18   #78
kovacm
Registered User
 
Join Date: Jan 2012
Location: Serbia
Posts: 268
Apropos of this thread, I just read a great article (as always from David L. Farquhar):
What happened to Digital Equipment Corporation?
http://dfarq.homeip.net/what-happene...t-corporation/

where he claims that Commodore had plans to use the DEC Alpha CPU in the Amiga...
kovacm is offline  
Old 20 June 2017, 14:00   #79
Gorf
Registered User

 
Join Date: May 2017
Location: Munich/Bavaria
Posts: 156
@kovacm

Indeed: it fits well.

I was thinking of buying Alpha from DEC in 1996 - that was the year DEC sold its patents to Intel, and from then on Intel was manufacturing the Alpha... a big mistake by DEC, but they needed the money. And this transaction was the result of a lawsuit: DEC was suing Intel for using their patents to speed up the Pentium. This fact was missing from the article. Two years later Compaq bought the rest of DEC.

The Alpha was a very well designed RISC CPU ... but we will have something even better.
Gorf is offline  
Old 20 June 2017, 14:11   #80
kovacm
Registered User
 
Join Date: Jan 2012
Location: Serbia
Posts: 268
Quote:
Originally Posted by Gorf View Post
... but we will have something even better.
where?
kovacm is offline  