English Amiga Board
Old 24 August 2018, 22:35   #201
Bruce Abbott
Registered User
 
 
Join Date: Mar 2018
Location: Hastings, New Zealand
Posts: 2,546
Quote:
Originally Posted by plasmab View Post
It doesn't limit it on the Amiga accessing chip RAM. But it does accessing fast memory.
True, and that means a 68000 running at 7MHz doesn't make the best use of the available Fast RAM bandwidth. Which was a good thing. Why? Because having no great performance improvement with Fast RAM meant that developers didn't have a great incentive to use it, and users weren't forced to buy more expensive machines to run the latest software.

Contrast this with what happened in the PC world. The original PC ran an 8 bit bus at 4.77MHz, and software was designed to match that speed. The next development was XTs running at up to 10MHz, which needed a 'turbo' button to reduce speed to 4.77MHz for compatibility. Then came 16 bit 286s, which often also had a button to slow them down to 6 or 8 MHz. This trend continued with the 386DX/SX and 486, so developers had a plethora of machines with different processing speeds to cater to. With a choice of 'dumbing down' to the XT, producing multiple versions optimized for different machines, or just developing for the latest hardware, many developers took the latter route - or worse, produced sluggish bloatware that would only run at an acceptable speed on the next generation hardware. You could say this was good because it drove the development of ever more powerful hardware, but that came at high cost to manufacturers and users, who were constantly trying to keep up.

Commodore could have taken the same route, with each new machine incorporating the latest 68K CPU and fastest RAM running at max bus speed. But that would have continued to make each new Amiga too expensive and incompatible with a large proportion of the existing software base (which was the biggest problem the Amiga faced from day 1). At the same time that some Amiga users were complaining about the A500+ not running all games or the A600 missing a numeric keypad, trying to get a game to run properly on a PC was often a nightmare. This was a very common complaint from PC users and a major selling point for the Amiga.

From the developers' and users' point of view, compatibility is more important than wringing the last drop of performance out of the CPU. The Amiga's custom chips were its real advantage, so breaking compatibility to get an extra 20% CPU speed would have been counterproductive. IMO Commodore made the right choice here. Other machines that did attempt to max out CPU performance suffered as a result. The C16/Plus 4 was a good example - nobody cared that the CPU was faster, only that C64 compatibility was compromised. The C128 had a fallback mode that was almost 100% compatible, but the higher price killed it.

Quote:
you're assuming the CPU only uses 16-bit wide instructions.. that's not always the case. Often it has to wait for the operand to even start the execution (e.g. a full address load is potentially 3 x 16 bits wide... = (4+4+4) 12 clock cycles?) whereas with a faster bus you'd get that in 2+2+2+(2 for execution) = 8.
Yes, it's true that the 68000 is not as efficient as it could be. So what? Neither was the 6502 or the Z80, but both were very popular CPUs that did a good job. Bus efficiency was improved in later versions of all these processor lines, yet the original chips still do a good job in retro and DIY machines. The key is optimizing the entire system, not getting hung up on minor inefficiencies in one area that don't make a big difference overall.
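The bus arithmetic in the quote above can be checked with a few lines of Python (a minimal sketch using the thread's own figures; the helper name is my own):

```python
def fetch_clocks(words, clocks_per_bus_cycle):
    """CPU clock cycles needed to fetch `words` 16-bit words over the bus."""
    return words * clocks_per_bus_cycle

# Stock 68000: every bus cycle takes 4 CPU clocks, so a 3-word
# fetch (e.g. opcode plus a 32-bit absolute address) costs 12 clocks.
print(fetch_clocks(3, 4))       # 12
# With a hypothetical 2-clock bus cycle, plus the 2 execution
# clocks mentioned in the quote, the same fetch would take 8.
print(fetch_clocks(3, 2) + 2)   # 8
```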
Bruce Abbott is offline  
Old 24 August 2018, 22:37   #202
plasmab
Banned
 
 
Join Date: Sep 2016
Location: UK
Posts: 2,917
Quote:
Originally Posted by Bruce Abbott View Post
Yes, it's true that the 68000 is not as efficient as it could be. So what? Neither was the 6502 or the Z80, but both were very popular CPUs that did a good job. Bus efficiency was improved in later versions of all these processor lines, yet the original chips still do a good job in retro and DIY machines. The key is optimizing the entire system, not getting hung up on minor inefficiencies in one area that don't make a big difference overall.
I'm not trying to make a point beyond this. Well, except that my OCD goes into overdrive watching the thing on the scope...
plasmab is offline  
Old 24 August 2018, 22:42   #203
roondar
Registered User
 
Join Date: Jul 2015
Location: The Netherlands
Posts: 3,410
Quote:
Originally Posted by plasmab View Post
At the risk of getting shot down ... they are used when the video, audio or blitter needs them. Other times they aren’t used. E.g with sound off and no blitting going on they are wasted in the vertical blanking period.

Hope my ignorant answer isn’t too rude!

EDIT: add floppy to that list
No one is going to shoot you down for saying something that is widely known.

The above is exactly why a lot of Amiga games try to concentrate blitting in the vertical blank periods, it's most efficient to blit during periods of low/no DMA activity.
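To put rough numbers on why concentrating blits in the blank pays off: each PAL scanline offers 227 chip-bus slots of roughly 280ns, and the lines outside a full-height 256-line display carry no bitplane DMA. A back-of-envelope sketch (the line and slot counts are my approximations, not exact hardware figures):

```python
SLOTS_PER_LINE = 227    # chip-bus access slots per PAL scanline (~280ns each)
PAL_LINES = 312         # total lines per PAL frame
DISPLAY_LINES = 256     # a typical full-height PAL display

blank_lines = PAL_LINES - DISPLAY_LINES
free_slots = blank_lines * SLOTS_PER_LINE
print(blank_lines, free_slots)   # 56 lines, 12712 mostly-free slots per frame
```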
roondar is offline  
Old 24 August 2018, 23:06   #204
Bruce Abbott
Registered User
 
 
Join Date: Mar 2018
Location: Hastings, New Zealand
Posts: 2,546
Quote:
Originally Posted by plasmab View Post
people like you said that they couldn’t go to the moon in the 1960s.. yet it was done.
And what did they do it with?

microprocessors used in spacecraft

The Galileo used an RCA CDP1802, the same CPU that my first computer came with. That thing was dog slow, yet still managed to run interpreted games at an acceptable speed. How did it do it? By matching the CPU to a 64x48 pixel graphics chip with DMA RAM access, and having an efficient language that could be interpreted with little overhead.

The next computer I bought was a ZX81, which had disappointing performance despite its much more powerful Z80 processor. Why? A shockingly inefficient video display subsystem that wasted 75% of the CPU cycles, and a bloated BASIC interpreter. The CPU wasn't the problem.
Bruce Abbott is offline  
Old 25 August 2018, 00:15   #205
plasmab
Banned
 
 
Join Date: Sep 2016
Location: UK
Posts: 2,917
Quote:
Originally Posted by Bruce Abbott View Post
And what did they do it with?

microprocessors used in spacecrafts

The Galileo used an RCA CDP1802, the same CPU that my first computer came with. That thing was dog slow, yet still managed to run interpreted games at an acceptable speed. How did it do it? By matching the CPU to a 64x48 pixel graphics chip with DMA RAM access, and having an efficient language that could be interpreted with little overhead.

The next computer I bought was a ZX81, which had disappointing performance despite its much more powerful Z80 processor. Why? A shockingly inefficient video display subsystem that wasted 75% of the CPU cycles, and a bloated BASIC interpreter. The CPU wasn't the problem.
Nice links. My post was designed to illustrate that when the tools aren't good enough you build new tools. Comes at a price. But I spent 11 years at university in an electronics dept and that was the mantra when your kit didn't do what you wanted: you just built better kit. Most of your time was spent doing that, actually.
plasmab is offline  
Old 25 August 2018, 14:20   #206
idrougge
Registered User
 
Join Date: Sep 2007
Location: Stockholm
Posts: 4,332
Quote:
Originally Posted by plasmab View Post
When you do this kind of stunt on an 8 bit machine you use the stack pointer for either the Source or destination and push/pop as appropriate. So it’s like having one Amiga address register that has the (Ax)- syntax only.
Except the 6502 can't move its stack pointer.

It works for the Z80, though.

Quote:
Originally Posted by plasmab View Post
But as already stated the RAM of the era could do 70ns... that’s one 14mhz clock cycle. This is why I complain... these machine waste so much of their available RAM bandwidth
The A1000 and A500 (at least rev 5 and earlier) used 150 ns memory. Perhaps even slower in the A1000.
70 ns memory, if it even existed in 1985, was too expensive for personal computers.

Quote:
Originally Posted by plasmab View Post
Yes. Actually I get and know all of the reasons. If only they'd make the chips in their catalog without the silly designs. i.e. the 030. They dont make it anymore
I think you could ask Rochester Electronics to make them for you. However, costs might be prohibitive.
https://www.rocelec.com/part/BTOFREREIMC68020CRC33E

Quote:
Originally Posted by plasmab View Post
Well it was a bit of a troll on my part. I've seen a lot of premature optimisation in my career that actually led to unmaintainable software that didn't actually perform that well.
You can also regard the 68000 as an expression of this. The 68000's transistor count was quite huge for a CPU in 1979, and much of it was "wasted" on microcode. But the use of microcode allowed Motorola to get their CPU out much earlier than the competitors (Z8000, NS32016 etc.) and with far fewer bugs, thanks to the use of higher-level design methods than placing gates manually.

Last edited by idrougge; 25 August 2018 at 15:09.
idrougge is offline  
Old 25 August 2018, 16:18   #207
plasmab
Banned
 
 
Join Date: Sep 2016
Location: UK
Posts: 2,917
I learned that stunt on the Z80.

I’ve possibly just always had posh Amigas then. Even then.. with 280ns per memory slot the Amiga isn’t exactly pushing the envelope.

I guess it would be a trade-off between 4 x 150ns slots = 600ns (6.666MHz) and getting more from RAM, or keeping the clock period divisible/multiplicable by the pixel clock... I guess what we got is better in that respect.
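The divisibility point is easy to see numerically: the ~280ns chip-bus slot is exactly two periods of the video-locked CPU clock, while a 150ns slot would not divide evenly into it. A quick sanity check (PAL clock figure from memory, treat as approximate):

```python
PAL_CPU_HZ = 7_093_790               # PAL Amiga 68000 clock (approximate)
cpu_period_ns = 1e9 / PAL_CPU_HZ     # ~141ns
slot_ns = 2 * cpu_period_ns          # one chip-bus slot = 2 CPU clocks

print(round(slot_ns, 1))             # ~282 - the "280ns" slot quoted here
# 150ns is not an integer multiple of the CPU period:
print(150 / cpu_period_ns)           # ~1.06 - awkwardly between 1 and 2
```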

I contacted Rochester but it was more of a colonoscopy than a quote.
plasmab is offline  
Old 25 August 2018, 16:26   #208
roondar
Registered User
 
Join Date: Jul 2015
Location: The Netherlands
Posts: 3,410
Quote:
Originally Posted by plasmab View Post
I’ve possibly just always had posh Amigas then. Even then.. with 280ns per memory slot the Amiga isn’t exactly pushing the envelope.
Without trying to restart the semantic discussion of what envelope is being pushed exactly, I'd say it did all right for a 1984/1985 consumer oriented design.

We got roughly 7MB/sec to play with (for the whole system) and that was, as far as I can find, actually pretty good for a consumer system at that point. More so when you consider the price point.
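That bandwidth figure follows directly from the slot timing discussed in this thread: one 16-bit access every 280ns. A minimal check:

```python
SLOT_NS = 280           # one chip-bus access slot
BYTES_PER_ACCESS = 2    # 16-bit data bus

mb_per_sec = (1e9 / SLOT_NS) * BYTES_PER_ACCESS / 1e6
print(round(mb_per_sec, 2))   # 7.14 - the "roughly 7MB/sec" above
```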
roondar is offline  
Old 25 August 2018, 16:30   #209
plasmab
Banned
 
 
Join Date: Sep 2016
Location: UK
Posts: 2,917
Quote:
Originally Posted by roondar View Post
Without trying to restart the semantic discussion of what envelope is being pushed exactly, I'd say it did all right for a 1984/1985 consumer oriented design.

We got roughly 7mb/sec to play with (for the whole system) and that was, as far as I can find, actually pretty good for a consumer system at that point. More so when you consider the price point.
If my RAM is 150ns I basically try to design the circuits around it to be as close to that as possible. Remembering that RAM was the most expensive thing about the machine at the time (? at least that's my recollection... correct me if I'm wrong. I'm not trying to assert anything beyond that I once paid > £100 for a megabyte of RAM upgrade)

I read a lot of rants by Sophie Wilson on the subject of early 80s machines and their failure to max out their RAM, and that mantra has stuck with me I guess.
plasmab is offline  
Old 25 August 2018, 16:38   #210
roondar
Registered User
 
Join Date: Jul 2015
Location: The Netherlands
Posts: 3,410
Quote:
Originally Posted by plasmab View Post
If my RAM is 150ns I basically try to design the circuits around it to be as close to that as possible. Remembering that RAM was the most expensive thing about the machine at the time (? at least that's my recollection... correct me if I'm wrong. I'm not trying to assert anything beyond that I once paid > £100 for a megabyte of RAM upgrade)

I read a lot of rants by Sophie Wilson on the subject of early 80s machines and their failure to max out their RAM, and that mantra has stuck with me I guess.
Without knowing how the Amiga chip ram interface was implemented and why they needed 150ns chips I really can't tell.

I do recall reading an article about the Atari ST (which is in some ways quite similar in how it also uses 150ns RAM and a 68000) where the designer pointed out they actually needed RAM at that speed - and that machine has nearly the exact same bandwidth figures as the Amiga (as in, roughly 8MB/sec if you include the shifter, which is also on the bus).

No idea then, sorry.
roondar is offline  
Old 26 August 2018, 06:50   #211
Bruce Abbott
Registered User
 
 
Join Date: Mar 2018
Location: Hastings, New Zealand
Posts: 2,546
Quote:
Originally Posted by plasmab View Post
Remembering that RAM was the most expensive thing about the machine at the time
Yes, RAM was generally the most expensive part. Commodore originally wanted the A1000 to have only 128k on the motherboard, to keep costs down.

Quote:
If my RAM is 150ns i basically try and design the circuits around it to be as close to that as possible.
But support chips also have propagation delays which need to be minimized. In 1983 the cheap high speed low power options we have today were not available, so faster chips were more expensive and drew a lot more power - requiring a more expensive power supply, better heat-sinking or fan cooling etc., all of which could raise the cost significantly.

And as timing approaches the minimum the design gets more critical, less tolerant of things like PCB layout and component variations, and generally harder to get right. Development costs go up, reliability goes down. Running at the ragged edge to get a little better performance might sound like a good idea, but would be a disaster if 1 in 100 machines randomly crashed or had visible glitches.
Bruce Abbott is offline  
Old 26 August 2018, 10:03   #212
plasmab
Banned
 
 
Join Date: Sep 2016
Location: UK
Posts: 2,917
Quote:
Originally Posted by Bruce Abbott View Post
Yes, RAM was generally the most expensive part. Commodore originally wanted the A1000 to have only 128k on the motherboard, to keep costs down.

But support chips also have propagation delays which need to be minimized. In 1983 the cheap high speed low power options we have today were not available, so faster chips were more expensive and drew a lot more power - requiring a more expensive power supply, better heat-sinking or fan cooling etc., all of which could raise the cost significantly.

And as timing approaches the minimum the design gets more critical, less tolerant of things like PCB layout and component variations, and generally harder to get right. Development costs go up, reliability goes down. Running at the ragged edge to get a little better performance might sound like a good idea, but would be a disaster if 1 in 100 machines randomly crashed or had visible glitches.
Agree in general, but the 74LS series was available and used in the BBC B. I guess the ASIC tech may have been more basic. Thinking about it.. the most difficult thing for a memory controller to do at the time would be address decoding, to make sure that the address in question was actually for RAM.

PCB routing isn't much of an issue in terms of delays. A signal can travel 1 meter in 3.3ns in copper and around 5-7ns/m in "clean" semiconductors (this is an intentional simplification and I know ALL the details about calculating this for variously doped semiconductors of all varieties). Gate-to-gate delays are the biggest factor I can think of.
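The scale difference between trace delay and gate delay is easy to illustrate (the 10ns gate figure below is a typical order-of-magnitude LS-TTL propagation delay I've assumed, not taken from a specific datasheet):

```python
COPPER_NS_PER_M = 3.3     # signal speed in copper, as above
GATE_DELAY_NS = 10        # typical 74LS gate delay (rough assumed figure)

trace_delay_ns = 0.15 * COPPER_NS_PER_M   # a generous 15cm PCB trace
print(trace_delay_ns)                     # ~0.5ns - gates dominate by ~20x
```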

So I would have designed the 68000 bus interface to place the address on the bus in S0, assert AS on the falling edge of the clock and then latch the data (for a read) on the next positive edge of the clock unless an "I'm not ready" signal is asserted.

Why would I have done this? Because if I was sitting in a room designing a chip in 1978 I would have been able to see that component speeds were improving all the time, and I would have been thinking about how my design could take advantage of that.

But I wasn't in the room. None of us can say what the outcomes of the meetings they had on it were. What tools they could have used if they'd been willing to pay. What quotes they had for things. For all I know the chief designer wanted to do it my way and was shot down by his boss/manager. That has happened to me more than once in my career. So I am not blaming anyone or saying the chip is crap. I am just dreaming about what might have been.
plasmab is offline  
Old 26 August 2018, 12:24   #213
Megol
Registered User
 
 
Join Date: May 2014
Location: inside the emulator
Posts: 377
Spent some time reading old magazines and other sources of information from the great dark ages of computing (;p).

The short of it: there was no 70ns DRAM in 1979 when the 68000 was released, and there was no 70ns DRAM in 1984 (that I found). There was 80ns DRAM however - specified as suitable for mainframes(!!) and optimized for low access times, not exactly fitting the market the 68k was in; it seems to be the equivalent of RLDRAM.

Up to at least 1984 there was no fast page mode, only page mode. Fast memories had a minimum cycle time of about 230ns with full RAS/CAS, down to about 120ns in page mode (there was, however, at least one chip with a 100ns page mode timing in 1984).
Most of the DRAM was much slower.
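Those cycle times are easier to compare as bus frequencies. A quick conversion using the figures above:

```python
def cycle_mhz(cycle_ns):
    """Maximum access rate, in MHz, for a given DRAM cycle time."""
    return 1000.0 / cycle_ns

print(round(cycle_mhz(230), 2))   # 4.35 - full RAS/CAS random access
print(round(cycle_mhz(120), 2))   # 8.33 - page mode
print(cycle_mhz(100))             # 10.0 - the best 1984 page-mode part
```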

The page size varied of course with density with 64 to 128 entries relatively common with 256 entries on the higher scale.

So there's that. A reasonable 68000 designer wouldn't design for a memory standard over 6 years in the future; they'd design for something that could be built using available and expected components. Combine that with the constraints and design goals they had, and making a memory-optimized design isn't realistic IMO.

The 68000 cost about $500 when released, the 8086 cost about $87.
In 1984 the cheapest 1MiB card I could find in the limited time spent was about $2000 BTW.

Data sources: Byte magazine, Mostek databooks, Texas Instruments databooks, misc. searches. Bitsavers and archive.org for the win.
Megol is offline  
Old 26 August 2018, 12:51   #214
plasmab
Banned
 
 
Join Date: Sep 2016
Location: UK
Posts: 2,917
Quote:
Originally Posted by Megol View Post
Spent some time reading old magazines and other sources of information of the great dark ages of computing (;p).

The 68000 cost about $500 when released, the 8086 cost about $87.
That's a bit of a price difference. Nice work.
plasmab is offline  
Old 26 August 2018, 13:26   #215
roondar
Registered User
 
Join Date: Jul 2015
Location: The Netherlands
Posts: 3,410
Quote:
Originally Posted by Megol View Post
Up to at least 1984 there were no fast page mode, only page mode. Fast memories had a minimum cycle time of about 230ns with full RAS/CAS, down to about 120ns in page mode (there were however at least one chip with an 100ns page mode timing in 1984).
Most of the DRAM was much slower.
I think I finally get it. With the small disclaimer that I am not a RAM chip expert.

See, something had been bothering me about the BBC Micro vs Amiga example - the RAM used in the BBC Micro was 100ns, but the system only accessed it at 250ns (4MHz clock, 1 access per cycle = 250ns - or if you prefer, 2MHz clock, 2 accesses per cycle = 250ns). That didn't make any sense to me: why access memory at only 40% of its access speed? Same with the Amiga's 280ns maximum access speed on 150ns RAM (only 53% of access speed).

But now I think I do. What's happening is that these systems don't use page mode. They use random-access mode. And memory used for random access is much slower than its rated (page mode) speed.

Just check the datasheets: the KM4164B-10 used in the BBC Micro has a random access cycle of 190ns. That's a lot closer to the actual memory access speed of the system than 100ns is. Similarly, for the 41256-15 commonly used in the early Amigas the random access cycle is 260ns. That is really close to the 280ns the system accesses memory at.

Given this data, it makes a great deal of sense these systems were the way they are: these were CPUs without a cache. Both the CPU and DMA can (and do) request any memory address at any time in systems like this. It'd be a nightmare to try and make a memory controller that deals with that and still tries to keep the RAM running in page mode most of the time.
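Tabulating the datasheet figures from this post makes the pattern clear: in both machines it is the full random-access cycle, not the headline access time, that has to fit inside the system's memory slot. A small sketch:

```python
# part: (rated access ns, random-access cycle ns, system slot ns)
parts = {
    "KM4164B-10 (BBC Micro)": (100, 190, 250),
    "41256-15 (early Amiga)": (150, 260, 280),
}
for name, (access_ns, cycle_ns, slot_ns) in parts.items():
    assert access_ns < cycle_ns <= slot_ns   # the cycle is what must fit
    print(f"{name}: {cycle_ns}ns cycle in a {slot_ns}ns slot")
```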

Last edited by roondar; 26 August 2018 at 13:44.
roondar is offline  
Old 26 August 2018, 13:30   #216
plasmab
Banned
 
 
Join Date: Sep 2016
Location: UK
Posts: 2,917
Quote:
Up to at least 1984 there was no fast page mode, only page mode. Fast memories had a minimum cycle time of about 230ns with full RAS/CAS, down to about 120ns in page mode (there was, however, at least one chip with a 100ns page mode timing in 1984).
Most of the DRAM was much slower.
This is wrong (EDIT: Or at least partially wrong.. see below). RAM timings aren't quoted in FPM times. How could they be before FPM was invented? It's part of the random access time. The BBC used 100ns RAM before 1984.

Quote:
Originally Posted by roondar View Post
But now I think I do. What's happening is that these systems don't use page mode. They use random-access mode. And memory used for random access is much slower than it's rated (page mode) speed.
I thought that was well known. AFAIK only Jens' cards use Fast Page Mode (on the Amiga... maybe he's using the EDO variety). The TF328 doesn't even use it. Fast Page Mode is when you read a burst out, and you can do that without starting the whole cycle again.

The headline RAM numbers are for random access. Not FPM or EDO.

Quote:
Just check the datasheets: the KM4164B-10 used in the BBC Micro has a random access cycle of 190ns. That's a lot closer to the actual memory access speed of the system than 100ns is. Similarly, for the 41256-15 commonly used in the early Amigas the random access cycle is 260ns. That is really close to the 280ns the system accesses memory at.
Maybe then the RAM speeds are quoted in RAS-to-CAS times...

EDIT: Haynie wanted to use it on something A3000 related but he got told not to for reliability or something. I forget the exact reason.

Last edited by plasmab; 26 August 2018 at 13:38.
plasmab is offline  
Old 26 August 2018, 13:49   #217
plasmab
Banned
 
 
Join Date: Sep 2016
Location: UK
Posts: 2,917
Quote:
Originally Posted by roondar View Post
the KM4164B-10 used in the BBC Micro has a random access cycle of 190ns. That's a lot closer to the actual memory access speed of the system than 100ns is.
Incorrect. The 4816AP was used in the BBC B..

Page 229

http://www.bitsavers.org/components/..._Data_Book.pdf
plasmab is offline  
Old 26 August 2018, 13:57   #218
plasmab
Banned
 
 
Join Date: Sep 2016
Location: UK
Posts: 2,917
The BBC B specifically used the 4816AP-3

http://mdfs.net/Info/Comp/BBC/Circuits/BBC/bbc.gif

The manual says..

Access Time From RAS: 100ns
Access Time From CAS: 55ns

So my assertion stands.

I accept that you would need time either side for setup etc.

Another way to put it would be that you would have been stupid to use that RAM with the 68000 as Fast RAM.. because you'd never max it out.

EDIT: There is another time quoted below for random access.. 235ns.. that's with a refresh, I believe.

Last edited by plasmab; 26 August 2018 at 14:11.
plasmab is offline  
Old 26 August 2018, 14:35   #219
roondar
Registered User
 
Join Date: Jul 2015
Location: The Netherlands
Posts: 3,410
Quote:
Originally Posted by plasmab
Incorrect. The 4816AP was used in the BBC B..

Page 229

http://www.bitsavers.org/components/..._Data_Book.pdf
I did get the part number I used from your own quote on page 6.
Anyway, it's not that important - if you check the datasheets they're pretty similar, except the 4816AP-3 has slower random access cycles posted.



Quote:
Originally Posted by plasmab View Post
The BBC B specifically used the 4816AP-3

http://mdfs.net/Info/Comp/BBC/Circuits/BBC/bbc.gif

The manual says..

Access Time From RAS: 100ns
Access Time From CAS: 55ns

So my assertion stands.

I accept that you would need time either side for setup etc.

Another way to put it would be that you would have been stupid to use that RAM with the 68000 as fast ram.. because you'd never max it out.

EDIT: There is another time quoted below for random access.. 235ns.. thats with a refresh i believe.
Well, I found that 235ns figure as well (and a 260ns one for the RAM in the Amiga) and my reasoning is as follows (I admit, it could be wrong):

The BBC Micro runs at 2MHz, but accesses memory on both the rising and falling edges of the clock due to the video chip. That is basically equivalent to a 4MHz clock. Such a clock does 1 memory cycle every 250ns (1/4,000,000). A clock that did 1 memory cycle every 100ns would run at 10MHz.

This fits with the wikipedia article linked to earlier:
Quote:
Originally Posted by BBC Micro wiki
This gave the BBC Micro a fully unified memory address structure without speed penalties. To use the CPU at full speed (2 MHz) required the memory system to be capable of performing four million access cycles per second. Hitachi was the only company, at the time, that made a DRAM that went that fast
Note that the claim is 4 million accesses per second (of which 2 million go to the CPU, as the 6502 does one access per cycle). That's still equivalent to 1/4,000,000 = 250ns.


So my question becomes: why do you need 100ns RAM to service a system that only accesses memory once every 250ns? The same goes for the Amiga. The system required 150ns RAM. So why is that? Why not just run it with 280ns RAM (which would be much cheaper) and be done with it?

Because to me that just does not add up. Either the 235ns figure is correct for memory accesses and the remaining 15ns are either used by other stuff on the board or as 'headroom', or the 100ns figure is correct and then what the hell does the system do for the remaining 150ns?!


I hope you see why I don't get this. I'm not trying to troll or be rude, but a simple division of 1/clock speed shows that the numbers you quote can't really work without something else taking up a significant portion of time. And if there's a reason for this, I'm more than happy to admit I'm wrong.
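The division in question is straightforward to verify:

```python
accesses_per_sec = 4_000_000   # BBC Micro: 4M accesses/sec (CPU + video)
print(1e9 / accesses_per_sec)  # 250.0 - one access every 250ns
# The headline 100ns access time alone would suggest a 10MHz access rate:
print(1e9 / 100)               # 10000000.0
```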
roondar is offline  
Old 26 August 2018, 15:15   #220
plasmab
Banned
 
 
Join Date: Sep 2016
Location: UK
Posts: 2,917
Quote:
Originally Posted by roondar View Post

So my question becomes: why do you need 100ns RAM to service a system that only accesses memory once every 250ns? The same goes for the Amiga. The system required 150ns RAM. So why is that? Why not just run it with 280ns RAM (which would be much cheaper) and be done with it?

Because to me that just does not add up. Either the 235ns figure is correct for memory accesses and the remaining 15ns are either used by other stuff on the board or as 'headroom', or the 100ns figure is correct and then what the hell does the system do for the remaining 150ns?!


I hope you see why I don't get this. I'm not trying to troll or be rude, but a simple division of 1/clock speed shows that the numbers you quote can't really work without something else taking up a significant portion of time. And if there's a reason for this, I'm more than happy to admit I'm wrong.
OK, I think I understand the RAM now. It needs a RAS precharge of another 100ns before you can access it again. Hence you can randomly access it every now and again at 100ns, but you need to give it 100ns to recover.. a bit of a misleading number IMHO.
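In other words, the full random-access cycle is roughly the RAS access time plus the RAS precharge plus some setup, which is why the datasheet's cycle figure is so much larger than the headline access time. A sketch with the figures from the posts above (the 35ns setup figure is my rough filler to reach the quoted 235ns cycle, not a datasheet value):

```python
T_RAC_NS = 100    # access time from RAS - the headline "100ns"
T_RP_NS = 100     # roughly: RAS precharge before the next random access
SETUP_NS = 35     # assumed transition/setup overhead (my filler figure)

full_cycle_ns = T_RAC_NS + T_RP_NS + SETUP_NS
print(full_cycle_ns)   # 235 - the random-access cycle quoted earlier
```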

I don't want the Amiga to be faster. I wanted the 68000 bus to run faster. My point has never really been anything to do with the Amiga. You could use the 68K in places where you have no video or any need for DMA etc.
plasmab is offline