English Amiga Board > Main > Nostalgia & memories
Old 22 November 2021, 21:05   #801
redblade
Zone Friend
 
redblade's Avatar
 
Join Date: Mar 2004
Location: Middle Earth
Age: 40
Posts: 2,127
Quote:
Originally Posted by Gorf View Post
It’s not so much about PCs (office) but about the academic fields, about engineering, about rendering, simulations and so on - low end simply was an economical death trap.
Not only for Commodore, but for all other low end systems as well.

Apple got lucky the Mac was adopted by the DTP crowd… but it is a shame the Amiga did not became the natural platform for image manipulation and processing, for rendering and even for CAD and of course for digital audio workstations.
These are the fields it initially showed great prospect and started a lot of developments … but these soon moved on, because Commodore did not provide suitable powerful hardware.
The demand was there, the market was there - CBM wasn’t.
Yes, you are right, it's about the software. I think by 1985 AutoCAD on DOS was already in use in many offices, as Autodesk got in early; and as the pictures on winworldpc.com for version 2 show, it could easily have been done on the Amiga, since it was still using a 640x200 resolution in 8 colours (well, 16 if you used the ANSI bold command). Aegis Draw did the good shareware thing and offered a demo version early on in the Fred Fish collection.





I also remember the A3000 manual advertised the machine's suitability for CAD by showing a CAD drawing on it.



I just wonder whether, if Commodore had offered an Amiga package with Digipaint and Photon Paint early on, it would have attracted more users.

I guess the advantage Apple had with DTP was that the LaserWriter was network-ready, so any number of users could all plug into the one printer, unlike on the Amiga.
Attached Thumbnails: Autocad 2.15 - Drawing.png · Autocad 2.15 - Columbia.png · Introducing_the_Commodore_3000_1991_Commodore_0000-big.jpg

Last edited by redblade; 22 November 2021 at 21:13.
redblade is offline  
Old 23 February 2022, 14:59   #802
axilmar
Registered User
 
Join Date: Feb 2022
Location: Athens, Greece
Posts: 41
While I love the Amiga (it brought so many joys to me with its software, both games and productivity; Deluxe Paint, I am looking at you!), I can't stop thinking how a few small changes could have vastly improved the system. Many of these changes have already been mentioned in this thread, so I won't repeat them here.

One thing I would particularly want is for the blitter to have its own programs. I.e. just like the copper has copper lists, the blitter should have had blitter lists. The CPU would then prepare the next frame's blitter list as the current frame was rendered by the blitter, without tying up the copper. Or have two coppers actually, one driving the blitter and the other driving the color/palette changes!
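To make that concrete, here's a rough C model of what a blitter list might look like, mirroring how copper lists work. All names and encodings here are invented for illustration; real OCS hardware has nothing like this.

```c
#include <stdint.h>

/* Hypothetical terminator word ending a blitter list. */
#define BLT_LIST_END 0xFFFF

/* One entry: a (register, value) pair the blitter's own DMA
   would walk through, just like a copper MOVE. */
typedef struct {
    uint16_t reg;    /* which blitter register to write */
    uint16_t value;  /* the value to write into it      */
} BltListEntry;

/* Software stand-in for the imagined hardware: applies each entry
   to a register file until the terminator, i.e. one frame's worth
   of queued blits. Returns the number of register writes done. */
int run_blitter_list(const BltListEntry *list, uint16_t regs[256])
{
    int n = 0;
    while (list[n].reg != BLT_LIST_END) {
        regs[list[n].reg & 0xFF] = list[n].value;
        n++;
    }
    return n;
}
```

The CPU would build one of these lists for the next frame while the current one executes, exactly the way copper lists are double-buffered today.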

Another thing that I would love is for the blitter to do some ...sprite scaling. I am in love with sprite scaling, since the moment I played Outrun in the arcades. I believe that the blitter could incorporate some logic to 'scale' individual bits, i.e. have a few more registers and logic gates that affect the source bits before they reach the part that applies the selected minterm and writes the result back to memory. After all, scaling up bits of a bitplane is nothing more than repeating those same bits at particular intervals, and very few registers could be needed for that.

If I designed a computer in the 80s, would I design it like the Amiga? Most probably I wouldn't. I would have dedicated custom cpu/ram for the sound, dedicated custom cpu/ram for the graphics, and the main cpu would have its own dedicated ram (still a 68000 though), with very fast buses from/to each cpu, with the largest bandwidth possible at the time (32 bits), and bus mastering when data needed to be transferred from one cpu's ram to another's. And I would possibly do banked ram access: while, for example, the graphics cpu rendered the display from one memory bank, the main cpu could work in the other memory banks of the display without wait states; the main cpu would then switch banks on the co-processors, so that they work on the new data while it prepares the data in the other banks. The end result would be better games, better arcade conversions (my favorites), easier programmability and better machine longevity (as this kind of design probably scales better than multiple devices sharing the same memory cycle by cycle)...
axilmar is offline  
Old 23 February 2022, 16:44   #803
sandruzzo
Registered User
 
Join Date: Feb 2011
Location: Italy/Rome
Posts: 2,281
The Amiga lacked some HW optimizations. In my opinion she seems unfinished.
sandruzzo is offline  
Old 23 February 2022, 16:56   #804
dreadnought
Registered User
 
Join Date: Dec 2019
Location: Ur, Atlantis
Posts: 1,902
Quote:
Originally Posted by axilmar View Post
If I designed a computer in the 80s, would I design it like the Amiga? Most probably I wouldn't.

Problem is, you are speaking with nearly 4 decades of hindsight. So it's really hard to say what one would or wouldn't do being there and then.



It's a bit like people in AD 2060, who'll be looking at modern tech. "You mean, they made it so you can't actually open & fix the phone or change the battery? Deary me."


(Nah, who am I kiddin. They won't give a toss.)
dreadnought is offline  
Old 23 February 2022, 20:49   #805
TEG
Registered User
 
TEG's Avatar
 
Join Date: Apr 2017
Location: France
Posts: 567
Quote:
Originally Posted by axilmar View Post
While I love the Amiga (it brought so many joys to me with its software, both games and productivity; Deluxe Paint, I am looking at you!), I can't stop thinking how a few small changes could have vastly improved the system. Many of these changes have already been mentioned in this thread, so I won't repeat them here.

One thing I would particularly want is for the blitter to have its own programs. I.e. just like the copper has copper lists, the blitter should have had blitter lists. The CPU would then prepare the next frame's blitter list as the current frame was rendered by the blitter, without tying up the copper. Or have two coppers actually, one driving the blitter and the other driving the color/palette changes!

Another thing that I would love is for the blitter to do some ...sprite scaling. I am in love with sprite scaling, since the moment I played Outrun in the arcades. I believe that the blitter could incorporate some logic to 'scale' individual bits, i.e. have a few more registers and logic gates that affect the source bits before they reach the part that applies the selected minterm and writes the result back to memory. After all, scaling up bits of a bitplane is nothing more than repeating those same bits at particular intervals, and very few registers could be needed for that.

If I designed a computer in the 80s, would I design it like the Amiga? Most probably I wouldn't. I would have dedicated custom cpu/ram for the sound, dedicated custom cpu/ram for the graphics, and the main cpu would have its own dedicated ram (still a 68000 though), with very fast buses from/to each cpu, with the largest bandwidth possible at the time (32 bits), and bus mastering when data needed to be transferred from one cpu's ram to another's. And I would possibly do banked ram access: while, for example, the graphics cpu rendered the display from one memory bank, the main cpu could work in the other memory banks of the display without wait states; the main cpu would then switch banks on the co-processors, so that they work on the new data while it prepares the data in the other banks. The end result would be better games, better arcade conversions (my favorites), easier programmability and better machine longevity (as this kind of design probably scales better than multiple devices sharing the same memory cycle by cycle)...

The problem is that, in the real world, you're limited by money, time and motivation in the long run.

So the more complexity you add, the greater the chance you never deliver a final product. I think Jay Miner went to the extreme of architectural complexity that was achievable with the resources he had.

However, your idea of a second copper to drive the blitter is interesting.
TEG is offline  
Old 24 February 2022, 00:31   #806
Bruce Abbott
Registered User
 
Bruce Abbott's Avatar
 
Join Date: Mar 2018
Location: Hastings, New Zealand
Posts: 2,547
Quote:
Originally Posted by axilmar View Post
I believe that the blitter could incorporate some logic to 'scale' individual bits, i.e. have a few more registers and logic gates that affect the source bits before they reach the part that applies the selected minterm and writes the result back to memory. After all, scaling up bits of a bitplane is nothing more than repeating those same bits at particular intervals, and very few registers could be needed for that.
Interesting. Can you show us in detail how it could be done with 'very few registers'?


Quote:
If I designed a computer in the 80s, would I design it like the Amiga? Most probably I wouldn't. I would have dedicated custom cpu/ram for the sound, dedicated custom cpu/ram for the graphics, the main cpu would have its own dedicated ram (still a 68000 though), very fast buses from/to each cpu, with the largest bandwidth possible at the time (32 bits),
Sounds expensive, and an inefficient use of RAM.

But in fact the Amiga does have dedicated RAM for graphics and sound when you add FastRAM, and it even has a dedicated CPU for graphics (the Copper).

32 bits wasn't the largest bus possible at the time, but the 68020 wasn't available until mid 1985 (well after the Amiga's architecture was designed) so it didn't make sense to have a 32 bit main CPU bus. Graphics and sound chips could have used any bus width, but cost is directly proportional to bus width so 32 bits would have cost twice as much as 16 bits.

Of course in later years they did go to 32 bits, which is how the A1200 can do so much more than the A500.


Quote:
And I would possibly do banked ram access
Why?

I think you may be missing something very important here. You can design a machine with any performance you like (even with 1982 tech) but it won't go anywhere if you can't build it for a good price. The Amiga was originally designed as a gaming / home computer, so it had to be cheap. That's why it used dedicated custom chips, and shared RAM, and no fancy CPUs controlling graphics and sound, and everything squeezed onto a single board with the minimum number of chips to do the job.
Bruce Abbott is offline  
Old 24 February 2022, 00:52   #807
axilmar
Registered User
 
Join Date: Feb 2022
Location: Athens, Greece
Posts: 41
Quote:
Originally Posted by dreadnought View Post
Problem is, you are speaking with nearly 4 decades of hindsight. So it's really hard to say what one would or wouldn't do being there and then.
Back in 1987, when I was in high school, I regularly 'designed' computer architectures on paper. I put 'designed' into quotes, because I didn't really do any design, I simply put down specs, but not overly unrealistic ones.

I was looking mainly at the X68000, from information in the magazines Edge and ACE, and I also got a few hints from arcade machines when these magazines showed technical details of those machines, and from workstations as presented in local Greek magazines.

So, if I belonged to a team that would design a home computer for the 80s/90s, the way I'd want it to be done is roughly how I describe it above.
axilmar is offline  
Old 24 February 2022, 00:53   #808
axilmar
Registered User
 
Join Date: Feb 2022
Location: Athens, Greece
Posts: 41
Quote:
Originally Posted by TEG View Post
The problem is that, in the real world, you're limited by money, time and motivation in the long run.

So the more complexity you add, the greater the chance you never deliver a final product. I think Jay Miner went to the extreme of architectural complexity that was achievable with the resources he had.

However, your idea of a second copper to drive the blitter is interesting.
I actually think that dedicated cpus + dedicated buses + dedicated memory, all communicating via a common standard, is less complex than the Amiga architecture, but then I may be wrong... ;-).
axilmar is offline  
Old 24 February 2022, 01:16   #809
axilmar
Registered User
 
Join Date: Feb 2022
Location: Athens, Greece
Posts: 41
Quote:
Originally Posted by Bruce Abbott View Post
Interesting. Can you show us in detail how it could be done with 'very few registers'?


Sounds expensive, and an inefficient use of RAM.

But in fact the Amiga does have dedicated RAM for graphics and sound when you add FastRAM, and it even has a dedicated CPU for graphics (the Copper).

32 bits wasn't the largest bus possible at the time, but the 68020 wasn't available until mid 1985 (well after the Amiga's architecture was designed) so it didn't make sense to have a 32 bit main CPU bus. Graphics and sound chips could have used any bus width, but cost is directly proportional to bus width so 32 bits would have cost twice as much as 16 bits.

Of course in later years they did go to 32 bits, which is how the A1200 can do so much more than the A500.


Why?

I think you may be missing something very important here. You can design a machine with any performance you like (even with 1982 tech) but it won't go anywhere if you can't build it for a good price. The Amiga was originally designed as a gaming / home computer, so it had to be cheap. That's why it used dedicated custom chips, and shared RAM, and no fancy CPUs controlling graphics and sound, and everything squeezed onto a single board with the minimum number of chips to do the job.
In order to find which source pixel to reproduce when scaling, you simply need fixed-point arithmetic: an integer addition, a rounding, and a shift. First you calculate the step, which is source width / destination width; then you keep a pixel index which you increase by that step at every output pixel, round the result, and select the source pixel at that index.

I.e. in pseudocode:

Code:
let source_pixel_step = source_width / destination_width
let source_pixel_index = 0
for dst_pixel_index = 0 to destination_width
    let actual_source_pixel_index = round(source_pixel_index)
    destination[dst_pixel_index] = source[actual_source_pixel_index]
    source_pixel_index += source_pixel_step
end for
Once the pixels are produced, they can be fed to the regular blitter functions for actual blitting.

The above is for scaling across the X axis; for the Y axis, the same algorithm computes the source image row.

When the video ram uses bitplanes, the scaling can be done in parallel for each bitplane.

So the registers needed would be source width, destination width, source height, destination height.

Dedicated hardware and a bit of internal RAM could then offer a fixed scaling function, from X1 to X2 and for a specific row size (say, 16 pixels across). One could then scale larger images by precomputing scaled column widths. Or the hardware could be parallelized by splitting the image into tiles and doing the above scaling in parallel (first scale the tile widths, then scale the tiles).

There is a lot to be done in this area and there is quite a lot of room for optimizations.

Now, regarding bus widths, since the main CPU would be 16-bit, that would have been the main CPU bus. The coprocessors would have 32-bit buses though.

I don't agree that the cost is linearly proportional to the bus width. I.e. I don't agree that a 32-bit bus would have twice the cost as a 16-bit bus. Maybe initially, but then the cost would go down according to how the production would be scaled.

I believe that with clever design, and for a few extra bucks more than the Amiga, a machine like the one I described could have been created. After all, the Amiga's custom chips were special, weren't they? They weren't off-the-shelf chips, so the cost was mostly the design and the special manufacturing needed.

Well, maybe...;-).
axilmar is offline  
Old 24 February 2022, 07:29   #810
Bruce Abbott
Registered User
 
Bruce Abbott's Avatar
 
Join Date: Mar 2018
Location: Hastings, New Zealand
Posts: 2,547
Quote:
Originally Posted by axilmar View Post
In order to find which source pixel to reproduce when scaling, you simply need fixed-point arithmetic: an integer addition, a rounding, and a shift. First you calculate the step, which is source width / destination width; then you keep a pixel index which you increase by that step at every output pixel, round the result, and select the source pixel at that index...

So the registers needed would be source width, destination width, source height, destination height.
Sounds complicated.

Quote:
I don't agree that the cost is linearly proportional to the bus width. I.e. I don't agree that a 32-bit bus would have twice the cost as a 16-bit bus. Maybe initially, but then the cost would go down according to how the production would be scaled.
Actually it could have been more than double. In 1985 only DIP packages were available. Going from 16 bits to 32 bits would have required at least 16 more pins. If you are lucky it might fit in a 64 pin package, but could Commodore produce such chips in 1985? If not they would have to get someone else to make them, at an inflated price. Then the A1000 would be too expensive and everybody loses interest. Result - Amiga is a dead duck.

That's obviously not all though, because to make full use of those 32 bits a lot of internal registers would need to be double the width. That means a lot more transistors and a much larger die. Could Commodore make a chip of that size with good yield? I doubt it. So they would probably have to split it into several chips, costing more and using up more board space (which also costs more).

In 1992 (7 years later) Commodore produced the AGA chipset with basically the same Agnus (still only 16 bit blitter etc.) and a 32 bit equivalent of Denise (Lisa) that was not made by Commodore - because 7 years later they still couldn't make it in-house. And this was only an 'incremental' update.

But let's imagine a fantasy world where Commodore had the ability to make such hardware at minimal cost increase. Imagine trying to develop that 32 bit chipset (with all the extras people would expect from it today) from scratch in 1984! The original Agnus prototype alone was 8 large boards full of wire-wrapped TTL chips.

Quote:
I believe that with clever design, and a few extra bucks more expensive than the Amiga, a machine like the one I described could be created.
Jay Miner, what did he know? 50+ years old when he designed the Amiga - probably still thinking of vacuum tubes, eh? I bet a clever young lad like you would have run rings around him.

Quote:
After all, the Amiga's custom chips were special, weren't they? they weren't off-the-shelve chips, so the cost was mostly the design and the special manufacturing needed.
Yes obviously, except the initial design involved building and debugging a prototype made with off-the-shelf chips, which would have cost a fortune if it went into production. So the circuit had to be redesigned as a custom chip, using all the tricks they could think of to reduce the number of transistors needed. And if something didn't work right they couldn't just grab a wirewrap tool and change the wiring, they had to create a new set of masks and make another batch of custom chips.

But after getting the design right they could churn out custom chips much cheaper than using off-the-shelf parts, as well as save on PCB area and complexity. Since Commodore owned MOS they got the chips dirt cheap (provided the yield was good). So it was mostly the design that cost them. If the design had been more complex though, the chips probably would have been significantly harder to make (perhaps not with a newer process, but that would have required serious investment in a new foundry).

If you look at what other cutting-edge manufacturers were making at the time, their more complex chips weren't cheap. For example Intel's 82258 Advanced DMA controller (a less sophisticated chip than your 32 bit Agnus) cost US$170 in 100-up quantities.

For a parallel in the PC world, consider the Intel 80386DX. An expensive chip with a lot of pins, and a 32 bit local bus which made early 386DX PCs very expensive. Then Intel brought out the 80386SX specifically to get the cost down - not just for the CPU but also the motherboard chipsets, which with the narrower bus could easily be condensed into a single chip. 386SXs quickly took over the market (despite lower performance) because they were much cheaper. This was a winning strategy for Intel, as was sticking to 16 bits for Commodore.

The A1000 was expensive compared to other popular home computers like the C64, but performance was so incredible in comparison that it was (just) worth the money. A 'full 32 bit' A1000 would have priced itself off the market. The A500 hit the sweet spot of performance vs price. Anybody who was familiar with the limitations of 8 bit home computers was gobsmacked by the A500, and making it full 32 bit wouldn't have changed that much - but a 50% price increase sure would have put a damper on it.
Attached Thumbnails: agnus1.JPG · agnus2.JPG
Bruce Abbott is offline  
Old 24 February 2022, 10:45   #811
robinsonb5
Registered User
 
Join Date: Mar 2012
Location: Norfolk, UK
Posts: 1,153
Quote:
Originally Posted by axilmar View Post
Code:
let source_pixel_step = source_width / destination_width
let source_pixel_index = 0
for dst_pixel_index = 0 to destination_width
    let actual_source_pixel_index = round(source_pixel_index)
    destination[dst_pixel_index] = source[actual_source_pixel_index]
    source_pixel_index += source_pixel_step
end for
Once the pixels are produced, they can be fed to the regular blitter functions for actual blitting.

It can actually be much simpler than that - at least for scaling up (scaling down is trickier, but also arguably less useful).

Internally almost everything is already built in terms of shift registers - so for example a 16-bit wide word of sprite data arrives from DMA, is latched into a shift register and shifted out one bit at a time.
To make the sprite wider, maintain an internal counter and make the shifting happen only when that counter overflows, instead of on every pixel.
The value added to that counter per pixel would be software-controlled. If that value's at its maximum, the counter overflows every pixel and the sprite is its usual size. If that value is at 50%, the sprite ends up twice the size, and so on.
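Here's a quick software model of that counter trick. The 8.8 fixed-point step width is my own invention, purely to make the behaviour concrete: 0x100 means "overflow every pixel" (normal size), 0x80 means "overflow every other pixel" (2x stretch).

```c
#include <stdint.h>

/* Model of the counter-overflow stretch: a 16-bit sprite word sits in a
   shift register, MSB first; the shifter only advances when the 8.8
   fixed-point accumulator overflows, so each bit may repeat as a pixel.
   Writes one pixel value (0/1) per output position into 'out'.
   Returns the number of output pixels produced. */
int stretch_word(uint16_t word, uint16_t step, uint8_t *out, int max_out)
{
    uint32_t acc = 0;   /* 8.8 fixed-point accumulator           */
    int bit = 15;       /* MSB shifts out first, like the hardware */
    int n = 0;
    while (bit >= 0 && n < max_out) {
        out[n++] = (word >> bit) & 1;  /* emit current bit as a pixel */
        acc += step;
        if (acc >= 0x100) {            /* overflow: advance the shifter */
            acc -= 0x100;
            bit--;
        }
    }
    return n;
}
```

With step 0x100 a word comes out as 16 pixels; with step 0x80 it comes out as 32, each bit doubled, which matches the "50% counter value = twice the size" description.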
robinsonb5 is offline  
Old 24 February 2022, 12:11   #812
grond
Registered User
 
Join Date: Jun 2015
Location: Germany
Posts: 1,918
The simple scaling algos are iterative in nature and thus slow. Even OCS DMA works 16 bits at a time, so you would have to iteratively process individual bits to find the ones you have to double. Vertical scaling, however, would be pretty much free, but that can be done with a copper list as well.

More importantly, the output of those simple scaling algos looks horrible. Admittedly we do have a lot of Amiga games with simple scaling of 2D graphics and they always looked horrible; I agree that hardware-accelerated horrible looks may be better than non-hardware-accelerated horrible looks, but only in that they would eat less CPU time to achieve the horrible look.
grond is offline  
Old 24 February 2022, 13:09   #813
axilmar
Registered User
 
Join Date: Feb 2022
Location: Athens, Greece
Posts: 41
Quote:
Originally Posted by robinsonb5 View Post
It can actually be much simpler than that - at least for scaling up (scaling down is trickier, but also arguably less useful)

Internally almost everything is already built in terms of shift registers - so for example a 16-bit wide word of sprite data arrives from DMA, is latched into a shift register and shifted out one bit at a time.
To make the sprite wider, maintain an internal counter and make the shifting happen only when that counter overflows, instead of on every pixel.
The value added to that counter per pixel would be software-controlled. If that value's at its maximum, the counter overflows every pixel and the sprite is its usual size. If that value is at 50%, the sprite ends up twice the size, and so on.

Yeah, good catch. As I said, there is plenty of room for optimization.
axilmar is offline  
Old 24 February 2022, 13:11   #814
axilmar
Registered User
 
Join Date: Feb 2022
Location: Athens, Greece
Posts: 41
Quote:
Originally Posted by grond View Post
The simple scaling algos are iterative in nature and thus slow. Even OCS DMA works 16 bits at a time, so you would have to iteratively process individual bits to find the ones you have to double. Vertical scaling, however, would be pretty much free, but that can be done with a copper list as well.

More importantly the output of those simple scaling algos looks horrible. Admittedly we do have a lot of Amiga games with simple scaling of 2D graphics and they always looked horrible; I agree that hardware accelerated horrible looks may be better than non-hardware accelerated horrible looks but only in that they would eat less CPU-time to achieve the horrible look.

It would be a lot faster than software scaling though.


Personally I like very much the scaled graphics of Outrun/Afterburner/Powerdrift/Galaxy Force.

Anyway, it would be optional. If a developer wouldn't want to use it, so be it.
axilmar is offline  
Old 24 February 2022, 13:15   #815
axilmar
Registered User
 
Join Date: Feb 2022
Location: Athens, Greece
Posts: 41
Quote:
Originally Posted by Bruce Abbott View Post
Sounds complicated.

Actually it could have been more than double. In 1985 only DIP packages were available. Going from 16 bits to 32 bits would have required at least 16 more pins. If you are lucky it might fit in a 64 pin package, but could Commodore produce such chips in 1985? If not they would have to get someone else to make them, at an inflated price. Then the A1000 would be too expensive and everybody loses interest. Result - Amiga is a dead duck.

That's obviously not all though, because to make full use of those 32 bits a lot of internal registers would need to be double the width. That means a lot more transistors and a much larger die. Could Commodore make a chip of that size with good yield? I doubt it. So they would probably have to split it into several chips, costing more and using up more board space (which also costs more).

In 1992 (7 years later) Commodore produced the AGA chipset with basically the same Agnus (still only 16 bit blitter etc.) and a 32 bit equivalent of Denise (Lisa) that was not made by Commodore - because 7 years later they still couldn't make it in-house. And this was only an 'incremental' update.

But let's imagine a fantasy world where Commodore had the ability to make such hardware at minimal cost increase. Imagine trying to develop that 32 bit chipset (with all the extras people would expect from it today) from scratch in 1984! The original Agnus prototype alone was 8 large boards full of wire-wrapped TTL chips.

Jay Miner, what did he know? 50+ years old when he designed the Amiga - probably still thinking of vacuum tubes, eh? I bet a clever young lad like you would have run rings around him.

Yes obviously, except the initial design involved building and debugging a prototype made with off-the-shelf chips, which would have cost a fortune if it went into production. So the circuit had to be redesigned as a custom chip, using all the tricks they could think of to reduce the number of transistors needed. And if something didn't work right they couldn't just grab a wirewrap tool and change the wiring, they had to create a new set of masks and make another batch of custom chips.

But after getting the design right they could churn out custom chips much cheaper than using off-the-shelf parts, as well as save on PCB area and complexity. Since Commodore owned MOS they got the chips dirt cheap (provided the yield was good). So it was mostly the design that cost them. If the design had been more complex though, the chips probably would have been significantly harder to make (perhaps not with a newer process, but that would have required serious investment in a new foundry).

If you look at what other cutting-edge manufacturers were making at the time, their more complex chips weren't cheap. For example Intel's 82258 Advanced DMA controller (a less sophisticated chip than your 32 bit Agnus) cost US$170 in 100-up quantities.

For a parallel in the PC world, consider the Intel 80386DX. An expensive chip with a lot of pins, and a 32 bit local bus which made early 386DX PCs very expensive. Then Intel brought out the 80386SX specifically to get the cost down - not just for the CPU but also the motherboard chipsets, which with the narrower bus could easily be condensed into a single chip. 386SXs quickly took over the market (despite lower performance) because they were much cheaper. This was a winning strategy for Intel, as was sticking to 16 bits for Commodore.

The A1000 was expensive compared to other popular home computers like the C64, but performance was so incredible in comparison that it was (just) worth the money. A 'full 32 bit' A1000 would have priced itself off the market. The A500 hit the sweet spot of performance vs price. Anybody who was familiar with the limitations of 8 bit home computers was gobsmacked by the A500, and making it full 32 bit wouldn't have changed that much - but a 50% price increase sure would have put a damper on it.
Too bad Jay is not around any more to help us with this. I am not disputing his work, of course, and I am not even talking about Commodore.

Regarding the cost, maybe you are right; but perhaps by 1987/88/89, if not in 1985, 32-bit buses for custom chips would have been more feasible.

Anyway, if such a machine was available, even if it cost twice the A500, I would still save to buy it. Playing Outrun at home, with a quality close to the arcade, would be a dream come true.

I would have bought an X68000 if it had been available here in Greece. But it wasn't. I contacted the local Sharp dealers and none of them had even heard of it.
axilmar is offline  
Old 24 February 2022, 15:41   #816
roondar
Registered User
 
Join Date: Jul 2015
Location: The Netherlands
Posts: 3,411
Quote:
Originally Posted by axilmar View Post
Anyway, if such a machine was available, even if it cost twice the A500, I would still save to buy it. Playing Outrun at home, with a quality close to the arcade, would be a dream come true.

I would have bought an X68000 if it had been available here in Greece. But it wasn't. I contacted the local Sharp dealers and none of them had even heard of it.
It should be pointed out that the X68000, while very powerful, was also anything but cheap. It cost around $4000-5000 (converted from Yen) when first launched. All that power came at a cost.
roondar is offline  
Old 09 March 2022, 13:08   #817
axilmar
Registered User
 
Join Date: Feb 2022
Location: Athens, Greece
Posts: 41
Quote:
Originally Posted by roondar View Post
It should be pointed out that the X68000, while very powerful, was also anything but cheap. It cost around $4000-5000 (converted from Yen) when first launched. All that power came at a cost.
Cost is in the eye of the beholder.

It could have cost less had Sharp wanted it to.
axilmar is offline  
Old 09 March 2022, 13:32   #818
axilmar
Registered User
 
Join Date: Feb 2022
Location: Athens, Greece
Posts: 41
One thing I would improve is the blitter registers.

I would do it like this:

BLTxPTR = 32-bit address of data.
BLTxROWSIZE = 16-bit row size of data, in pixels, per plane.
BLTxXPOS = 16-bit x position into data, in pixels.
BLTxYPOS = 16-bit y position into data, in pixels.

The above are per channel.

BLTSRCWIDTH = 16-bit blit width, in pixels.
BLTSRCHEIGHT = 16-bit blit height, in pixels.
BLTCON = 16-bit minterm selector/channel selector/bitplane count; writing this would start the blit.

The blitter would then calculate modulos, mask first and last words, words per row to blit based on bitplane count and sizes, etc.

Having the 68000 do all the shifts and multiplications creates a heavy burden for both the programmer and the CPU: shifts and multiplications are expensive on the 68000 and eat a lot of cycles.
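To make the burden concrete, here is a rough C sketch (hypothetical names, not real Amiga API) of the setup arithmetic a programmer has to do today to turn pixel coordinates into blitter-ready values - exactly the work the pixel-based registers proposed above would push into the chip:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helper: the setup work needed today to blit a region of
 * w pixels starting at pixel (x,y) in a plane whose rows are rowpix
 * pixels wide. The proposed BLTxPTR/BLTxROWSIZE/BLTxXPOS/BLTxYPOS
 * registers would fold all of this into the hardware. */
typedef struct {
    uint32_t byte_offset;  /* added to the plane base for the pointer  */
    uint16_t shift;        /* barrel-shift value (sub-word position)   */
    uint16_t words;        /* 16-bit words per blit row                */
    int16_t  modulo;       /* bytes to skip at the end of each row     */
    uint16_t fwm, lwm;     /* first/last word masks                    */
} BlitSetup;

static BlitSetup calc_blit(uint16_t x, uint16_t y,
                           uint16_t w, uint16_t rowpix)
{
    BlitSetup s;
    uint16_t first_word = x >> 4;            /* word column of leftmost pixel  */
    uint16_t last_word  = (x + w - 1) >> 4;  /* word column of rightmost pixel */
    s.words       = last_word - first_word + 1;
    s.byte_offset = (uint32_t)y * (rowpix >> 3) + first_word * 2;
    s.shift       = x & 15;                  /* pixel position inside the word */
    s.modulo      = (rowpix >> 3) - s.words * 2;
    s.fwm         = 0xFFFF >> (x & 15);                   /* kill leading pixels  */
    s.lwm         = 0xFFFF << (15 - ((x + w - 1) & 15));  /* kill trailing pixels */
    return s;
}
```

Note that the multiply for `byte_offset` and the shifts for the masks all land on the 68000 today; in the proposed scheme the program would just write pixel coordinates.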

Plus, in this way the blitter is abstracted to such a degree that the transition from a 16-bit to a 32-bit blitter would be (almost) automatic: the blitter itself would know the size of each word, so the program wouldn't have to care how many bits the DMA fetches from memory in each cycle.

Also, this would help the transition to chunky pixels: the BLTCON register could have a field named 'bits per pixel', initially 1; later, when the Amiga was expanded to support other depths, this field would select the appropriate format. From the programmer's perspective the blitting algorithms wouldn't change, because they would be specified in pixels.

Furthermore, if the blitter knows the bitplane count, the problem of repeating the mask data for interleaved image blits goes away: the blitter would know how to return to the start of the mask row for each bitplane row, so the mask wouldn't have to be specified once per bitplane when the planes are interleaved.

And if scaling was added to the hardware, I'd add two more registers:

BLTDSTWIDTH = 16-bit destination width, in pixels.
BLTDSTHEIGHT = 16-bit destination height, in pixels.

The blitter would then have to calculate scaling ratios, in the same manner the system function ScaleDiv() does.
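As an illustration of the ratio arithmetic such a scaling blitter would need, here is a minimal nearest-neighbour sketch in C using 16.16 fixed point; whether its rounding matches ScaleDiv() exactly is an assumption, and the function name is made up:

```c
#include <assert.h>
#include <stdint.h>

/* Scale one row of pixels from src_w to dst_w pixels: map each
 * destination pixel back to a source pixel with a fixed-point step.
 * This is the kind of per-pixel ratio walk a BLTDSTWIDTH/BLTSRCWIDTH
 * pair would imply in hardware. */
static void scale_row(const uint8_t *src, uint16_t src_w,
                      uint8_t *dst, uint16_t dst_w)
{
    /* 16.16 fixed-point step: source pixels advanced per destination pixel */
    uint32_t step = ((uint32_t)src_w << 16) / dst_w;
    uint32_t pos = 0;
    for (uint16_t x = 0; x < dst_w; x++) {
        dst[x] = src[pos >> 16];  /* integer part selects the source pixel */
        pos += step;
    }
}
```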
axilmar is offline  
Old 09 March 2022, 15:33   #819
grond
Registered User
 
Join Date: Jun 2015
Location: Germany
Posts: 1,918
Quote:
Originally Posted by axilmar View Post
Also, this would help towards the transition to chunky pixels
The blitter doesn't need to know about planar or chunky data formats. It could just as well blit chunky data as it blitted planar data. The mask data would just be a bit bloated but that wouldn't be much of a problem.

That, in fact, is what irks me the most about AGA's lack of an 8-bit chunky mode: they didn't even have to change much; basically it would just have required a few rerouted wires before the palette look-up and a switch to activate the mode.
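The point that the blit logic itself is format-agnostic can be sketched in C: the classic cookie-cut minterm D = AB + /AC applied byte-wise to chunky pixels, with the mask simply one byte per pixel - the "bloat" mentioned above:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Cookie-cut blit on chunky data: where the mask (A) is set, take the
 * sprite pixel (B); elsewhere keep the background (C, in place as D).
 * Same logic as the planar blitter, just on bytes instead of bits. */
static void cookie_cut_chunky(const uint8_t *mask, /* A: 0x00 or 0xFF per pixel */
                              const uint8_t *src,  /* B: sprite pixels          */
                              uint8_t *dst,        /* C and D: background       */
                              size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = (mask[i] & src[i]) | (~mask[i] & dst[i]);
}
```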
grond is offline  
Old 09 March 2022, 16:07   #820
Thomas Richter
Registered User
 
Join Date: Jan 2019
Location: Germany
Posts: 3,216
Quote:
Originally Posted by grond View Post
The blitter doesn't need to know about planar or chunky data formats. It could just as well blit chunky data as it blitted planar data. The mask data would just be a bit bloated but that wouldn't be much of a problem.

That's only true for regular blits. The line drawer has to be different in chunky mode, and so does the fill mode.


There is something else annoying about the blitter, namely that the FWM and LWM registers are too short, or are applied at the wrong point relative to shifting. The consequence is that for a generic blit, the blitter's A source must point to an entire line of pre-calculated mask data; the FWM and LWM registers alone are in general not sufficient to run a blit from an arbitrary position in the source to an arbitrary position in the destination.


The blitters you find in PC "super VGA" chipsets all take dimensions and pixel offsets, not masks, but they operate in "chunky" only. Some have line-drawing logic as well (and sometimes it is buggy...). Otherwise, the construction is not too different: there are two sources (called "pattern" and "source" there, comparable to the blitter's A and B inputs) and a third source that is always identical to the destination (C on the Amiga), but instead of FWMs and LWMs, the blitter takes pixel coordinates, which is much more practical.


Sometimes additional logic is present that allows bit expansion, or planar-to-chunky expansion - and that's something a "chunky" blitter would also need, namely to implement a BltTemplate() (aka Text()) function smoothly: it takes a bitmask and creates from it a pixel pattern in the target format (chunky, hi-colour or true-colour).
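A minimal C sketch of that bit-expansion step (a hypothetical helper, not the actual BltTemplate() implementation): one 16-bit template word expanded to chunky bytes, choosing a foreground or background pen per bit:

```c
#include <assert.h>
#include <stdint.h>

/* Expand a 1-bit-per-pixel template word to 8-bit chunky pixels.
 * A hi-colour or true-colour variant would be the same loop with
 * wider output elements. */
static void expand_template(uint16_t bits, uint8_t fg, uint8_t bg,
                            uint8_t out[16])
{
    for (int i = 0; i < 16; i++)
        out[i] = (bits & (0x8000 >> i)) ? fg : bg;  /* MSB = leftmost pixel */
}
```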



What we call a "minterm" in Amiga land is a "ROP" in PC land; otherwise it is constructed exactly the same way, just written down in a completely convoluted fashion - the logic is identical to the Amiga blitter's.
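The shared construction is easy to demonstrate in C: bit number (A<<2 | B<<1 | C) of the 8-bit minterm/ROP code gives the output for that input combination, so evaluating it bitwise over whole words reproduces the blitter's logic stage (a software sketch only; the real hardware does this combinationally):

```c
#include <assert.h>
#include <stdint.h>

/* Apply an 8-bit minterm/ROP code to three 16-bit source words.
 * E.g. 0xCA is the classic cookie-cut D = AB + /AC, and 0xF0 is a
 * plain copy of A. */
static uint16_t apply_minterm(uint8_t minterm,
                              uint16_t a, uint16_t b, uint16_t c)
{
    uint16_t d = 0;
    for (int bit = 0; bit < 16; bit++) {
        int idx = (((a >> bit) & 1) << 2)   /* A selects bit 2 of the index */
                | (((b >> bit) & 1) << 1)   /* B selects bit 1              */
                |  ((c >> bit) & 1);        /* C selects bit 0              */
        if ((minterm >> idx) & 1)
            d |= (uint16_t)1 << bit;
    }
    return d;
}
```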


Thus, a lot of things that are being asked for here are part of the standard PC 2D acceleration logic.
Thomas Richter is offline  
 

