English Amiga Board


Old 05 April 2016, 23:05   #1
nocash
Registered User
 
Join Date: Feb 2016
Location: Homeless
Posts: 64
Floppy Drive access via Parallel Port

I am about to make a utility for writing double density discs using a floppy drive connected to a PC via parallel port, as a cheap/simple way to boot homebrew code on the Amiga. The ADTWin utility (http://m1web.de/ADTWin/) already does exactly that, but it requires WinXP or higher, and apart from that requirement, it might be just fun to figure out how to write a similar utility.
Below are some work-in-progress notes (as of now, mainly a bunch of ideas on how to solve certain problems, links to documents with useful information, and some notes/questions about problems that I haven't solved yet). Any help & suggestions would be welcome!

For the parallel port cable, I am using the same wiring as on the ADTWin page, except that I've wired it as an IDC-to-Centronics-adaptor instead of IDC-to-DB25-cable (so I can use the adaptor with a regular printer cable).

Here are a bunch of datasheets for PC floppy drives (all 3.5" HD):
http://matthieu.benoit.free.fr/pdf/F...0(Rev%20B).pdf - Teac FD-235HF datasheet (very detailed)
http://www.techtravels.org/wp-conten...21B-070103.pdf - Samsung SFD-321B datasheet (quite detailed)
http://www.silmoh.com/pezzi2/DS/98-DS.PDF - Sony MPF920-Z datasheet (pinout only)
The Teac datasheet seems to be the most detailed; the signal descriptions in the Samsung datasheet look a bit more brief (eg. nothing about the extra timing needed upon changing step direction, and nothing about the INDEX signal being suppressed during spin-up; either the Samsung drive doesn't need/have those features, or the datasheet lacks those descriptions).

The Write Data signal is well within the scope of parallel ports: For double density MFM bits, one must issue LOW pulses once every 4us, 6us, or 8us. Parallel port I/O opcodes take about 1.3us (+/- around 0.2us for computers with slightly different waitstates and different protected mode operating system overhead). For a complete LOW-then-HIGH signal one would need about 2.6us (and then use some delay to produce a full 4us/6us/8us cycle).
The datasheets specify a max 1.1us LOW duration, which isn't possible with parallel ports... but I guess/hope that that limit is meant to refer to high density discs (which would require 2us/3us/4us per LOW-HIGH signal, and any LOW longer than 1.1us would mean that the following HIGH pulse could get pretty short).
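To make that concrete, here is a minimal sketch of the per-pulse write sequence I have in mind (assuming a Borland-style outportb() for port I/O; LPT_DATA, the WDATA bit mask, the signal polarity, and the busy_wait_until() helper are all placeholders/assumptions, not taken from ADTWin):
Code:
#include <stdint.h>

#define LPT_DATA    0x378     /* placeholder parallel port data register */
#define WDATA_MASK  0x01      /* placeholder: data bit wired to WDATA    */

extern void outportb(unsigned port, unsigned char val);  /* Borland-style    */
extern void busy_wait_until(uint64_t deadline);          /* see sketch below */

/* Write one flux transition: wait for the start of its 4/6/8us interval,
   pull WDATA low for one OUT (approx 1.3us), then drive it high again.
   'rest' is the idle state of the parallel port data lines. */
static void write_pulse(unsigned char rest, uint64_t deadline)
{
    busy_wait_until(deadline);
    outportb(LPT_DATA, (unsigned char)(rest & ~WDATA_MASK));   /* WDATA low  */
    outportb(LPT_DATA, (unsigned char)(rest |  WDATA_MASK));   /* WDATA high */
}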

For computing the required per-bit delay timings, one could use some of the PC's standard timers: There should be a system timer (traditionally 18.2Hz) on all PCs, almost all 32bit PCs should also feature a 1024Hz timer, and of course, the RTC should be able to produce a 1Hz signal. Windows rounds the 18.2Hz and 1024Hz timings to "milliseconds", which could produce nasty rounding errors on shorter periods, and there are probably some more obstacles where different Windows versions might behave differently, or where other tasks/IRQs mess up the return values of the get-timer functions.
Another clock source would be the INDEX signal from the floppy drive. After spin-up it should occur constantly at 5Hz (ie. 300 rotations/minute). I haven't tested if it's reliable yet, but at the moment I am tempted to use that signal instead of the PC's own timers. Using INDEX would eliminate some OS related problems, and, if the disc spins at, say, 302 rpm instead of 300 rpm, then it would in fact be better to use that "incorrect" timing for getting the most exact number of bits per rotation (unless of course the drive randomly changes the rotation speed between 298 rpm and 302 rpm).

For the actual delays (in relation to the above timing source), easiest might be using the PC's RDTSC opcode, which should work on most or all Pentium or later processors.
For pre-Pentium compatibility one could measure the time per I/O waitstate, and the timing for "wait-by-loop" delays, and then combine those timings to produce the desired timing for "OUT-opcode-plus-wait-by-loop", but that would be twice as difficult as RDTSC, and I'll probably be too lazy to get through that solution.
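As a sketch of how the RDTSC-based delay could look (assuming a compiler that provides the __rdtsc() intrinsic, and a tsc_per_us value measured once at startup against one of the timers mentioned above - both are my assumptions, nothing official):
Code:
#include <stdint.h>
#ifdef _MSC_VER
#include <intrin.h>       /* __rdtsc() on MSVC      */
#else
#include <x86intrin.h>    /* __rdtsc() on GCC/Clang */
#endif

static uint64_t tsc_per_us;   /* TSC ticks per microsecond, measured once at startup */

/* Busy-wait until the TSC reaches 'deadline' (given in TSC ticks). */
static void busy_wait_until(uint64_t deadline)
{
    while (__rdtsc() < deadline)
        ;                     /* spin */
}

/* Deadline for the next pulse, 'us' microseconds (4, 6 or 8) later. */
static uint64_t advance_deadline(uint64_t deadline, unsigned us)
{
    return deadline + (uint64_t)us * tsc_per_us;
}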

The LOW pulses for MFM bits must be issued either at the beginning of the bit, or in the middle of the bit. From what I've learned about the Amiga hardware, that appears to be a well-known thing in the Amiga world: translating a databit into a 2bit MFM value like so:
Code:
1 	-> 	01 	
0 	-> 	10 	if following a 0 
0 	-> 	00 	if following a 1
For the actual transfer via the WDATA pin, those 2bit values are apparently inverted, and they are transferred bit0 first, and bit1 last.
Ie. the "01" (for a "1" bit) would cause a LOW pulse at the beginning of the bit, and stay HIGH in the middle of the bit. The "10" would do it the other way around. The "00" would just stay HIGH without issuing any LOW pulse.

What I haven't figured out yet is whether BYTES are transferred MSB (bit7) first, or LSB (bit0) first (?)

And, I haven't yet fully understood the Amiga's sector encoding. The FAQ from Laurent Clévy (http://lclevy.free.fr/adflib/adf_info.html) seems to describe almost everything about it. But some of the descriptions look rather confusing or incorrect. Especially the part about "odd bits" and "even bits" in the sector description looks completely wrong to me. I could only imagine that the Amiga OS might be storing the bits that way in some temporary variables before constructing the actual data block (?)
The http://wiki.amigaos.net/wiki/Amiga_F...hysical_Layout page does also have some info on sector encoding. It looks less confusing than the FAQ, but it's also less complete. The two things that I couldn't grasp yet are:

How are the checksums calculated? The FAQ has some info on it, but it's related to the oddly arranged odd/even bits. It sounds as if the checksum is computed on the "encoded" MFM data, and the resulting checksum would then itself be encoded into MFM data format?
Well, but I couldn't guess which even/odd bits are located where :-/ A better description or checksum source code would be great. Some binary example of a sector in unencoded & MFM-encoded form would also be nice!

And what are those "OS recovery info, reserved for future use, sector label" bytes? Are they just zero (in unencoded form) or don't care or do they contain anything useful?

Oh, and does anybody know a tiny .ADF image for testing the parallel port write function? Best would be something that requires writing only a single track (with bootsectors, plus maybe a few more sectors on the same track).

And, one last thing: I've also thought about READING floppies via parallel port. The datasheets specify very short LOW pulses on the RDATA pin (0.15us - 0.8us); reading those pulses is probably impossible with a parallel port reading rate of 1.3us (unless one would use a capacitor to stretch the pulses, but I guess that would require manual calibration for different FDDs and different parallel ports).
The other idea would be to toggle a flipflop upon LOW pulses, then the resulting signals would become 4us/6us/8us in length (ie. the same format as how they are physically stored on the disc surface). And that kind of signal could be read via parallel ports. At least, I think, that could work with some timing/logic...
When reading data at 1.3us rate, the leading and trailing edges of a pulse could be off by about 1.2999us each, which could produce an error of about 2.5999us. That error would make it difficult to distinguish between 4us/6us/8us.
But, the signals are known to change on a "2us grid", so, for example, if the parallel port sees a transition at time=125.1, then one could assume that the transition must have actually occurred at time=124.0; that should allow reconstructing exact signal timings.
Hardware-wise, a 74LS74 flipflop with CLK=RDATA and D=/Q might work, or a 74LS93 counter for producing CLK/2 should also work. Or maybe there are even smaller/cheaper chips. A circuit with transistor/resistor would also work, but it would probably become "bigger" than a 74LS74.
At the software side, finding the "origin" for the 2us grid would probably be easy, but one would probably need to readjust the grid during reading, which might get a bit more difficult.
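A rough sketch of that grid-snapping idea (measured transition times in microseconds; the grid-origin handling and the 0.1 drift factor are pure guesses):
Code:
/* Snap a measured transition time to the nearest point on a 2us grid, and
   drag the grid origin slowly toward the measurements so it follows the
   drive's actual rotation speed. */
static double snap_to_grid(double measured_us, double *grid_origin_us)
{
    double cells   = (measured_us - *grid_origin_us) / 2.0;
    double snapped = *grid_origin_us + 2.0 * (double)(long)(cells + 0.5);

    *grid_origin_us += (measured_us - snapped) * 0.1;  /* crude readjustment */
    return snapped;
}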

Last edited by nocash; 06 April 2016 at 01:21.
Old 05 April 2016, 23:11   #2
desiv
Registered User
 
 
Join Date: Oct 2009
Location: Salem, OR
Posts: 1,770
Nice, love tech projects like this...
But just FYI:
Quote:
Originally Posted by nocash View Post
... for getting some cheap/simply way to boot homebrew code on Amiga.
Actually, a cheap way of doing this would probably be a Gotek with the Cortex firmware in the Amiga and just using a USB stick to get the disk image from the PC to the Amiga.

That said, "just because" is a valid enough reason for me.. ;-)

desiv
Old 05 April 2016, 23:17   #3
ReadOnlyCat
Code Kitten
 
Join Date: Aug 2015
Location: Montreal/Canadia
Age: 52
Posts: 1,178
Quote:
Originally Posted by desiv View Post
Nice, love tech projects like this...
Yup. Godspeed on this task!

Quote:
Originally Posted by desiv View Post
Actually, a cheap way of doing this would probably be a Gotek with the Cortex firmware in the Amiga and just using a USB stick to get the disk image from the PC to the Amiga.
[...]
desiv
The absolute cheapest would be to boot on a floppy, load a parallel port driver and continue the boot from a parallel port mounted device located on the PC.

Old 05 April 2016, 23:26   #4
desiv
Registered User
 
 
Join Date: Oct 2009
Location: Salem, OR
Posts: 1,770
Quote:
Originally Posted by ReadOnlyCat View Post
The absolute cheapest would be to boot on a floppy, load a parallel port driver and continue the boot from a parallel port mounted device located on the PC.
Assuming your PC has a parallel port...
And a floppy drive to attach to it...

Both of which are getting less and less likely.. ;-)

desiv
Old 05 April 2016, 23:38   #5
nocash
Registered User
 
Join Date: Feb 2016
Location: Homeless
Posts: 64
Yes, I know. Once I have made a boot disc, I'll continue using direct PC-to-Amiga transfers via parallel port cable, and won't ever need to write any discs. In the long term, I'll probably also patch the BIOS ROM to get the PC-to-Amiga transfers working without reading from discs.
At the moment, I only have a bare Amiga 500 with five games, a broken keyboard, and no bootable utility discs. In that situation, the FDD adaptor seems to be the cheapest and easiest solution to run any code on the Amiga (apart from blowing 4-5 days on getting it to work).
Old 06 April 2016, 14:56   #6
mark_k
Registered User
 
Join Date: Aug 2004
Location:
Posts: 3,347
Take a look at Krishnasoft's MPDOS Professional.

There are cables that plug into the PC parallel port. One has Amiga keyboard and mouse port connectors (so you can control the Amiga from the PC). The other cable connects to the floppy drive connector on the Amiga main board which allows you to emulate Amiga floppy drives. The software apparently works with Windows 3.1 and later.

Some of the blurb from the above page:
Code:
Simulates up to 4 Atari disk drives (D1:, D2:, D3:, D4:)
Simulates up to 4 Amiga disk drives (DF0:, DF1:, etc.)
Simulates Atari cassette player (C:) and printer (P1:..P9:)
KDOS4-- a fast binary file uploader
Built-in editor for creating Atari ASM and Atari BASIC source files
6502 Assembler (compile and upload directly to Atari)
Simulates Amiga and Atari disk drives along with Amiga joystick*, mouse*, and keyboard*
Multimedia CDROM included (runs on PC, Amiga, and Atari using distributive programming)
680x0 Assembler (compile and upload directly to Amiga)
Sample source code for compiling to an amiga image disk and amiga executable and Atari image disk and Atari executables.
DOS-based utilities including 6502 disassembler and ROM checksum
Works with easy to use parallel port cable provided with MPDOS Standard
Hardware level simulation (no drivers required on target machine)
Simple GUI interface for simulating peripherals, compiling, and uploading
On-line 100+ page manual with technical and general information
(I actually bought one a few years ago, but never got around to trying it out.)
Old 07 April 2016, 13:18   #7
nocash
Registered User
 
Join Date: Feb 2016
Location: Homeless
Posts: 64
The floppy simulation cable is also an interesting idea. It would be a bit easier to handle if it connected to the external DB23 connector on the Amiga (instead of the internal 34pin connector on the Amiga's mainboard), but I don't know if the Amiga could boot from the external connector.
Technically, it's the same as accessing an FDD via parallel port, but with the data transferred in the opposite direction, and the timings would be a bit tighter (for simulating reads it would need to squeeze two OUTs (for RDATA low-high pulses) and one IN (for STEP/WGATE sensing) into 4us, which would fit nicely if the PC's I/O waitstates are 1.3us, but might get problematic if they are much slower than 1.3us).
But I guess that the timing could work with most or all PCs, and with using just a raw cable. Though it's also possible that the cable contains extra components hiding in the case of the DB25 connector; did you ever look in there to see how it's wired up?

I've thought more about the sector encoding described in the FAQ, and I suspect that I've just misunderstood the description (I was thinking that odd/even bits would refer to the halves of the MFM encoded 2bit values). But now I guess that odd/even refers to bits 1,3,5,7 and bits 0,2,4,6 of the actual (non-MFM) data values. Then, for example, if the 512-byte sector body were filled with AAh bytes, the Amiga would first write 2048 bits with value "1" (for the odd bits), followed by 2048 bits with value "0" (for the even bits) (and of course, with the "1" and "0" encoded to "2bit" MFM values). Could that be true?
If that's right, then the only question would be the bit order. Intuitively, bit1 or bit7 would appear logical as the first odd bit of the first byte. But with the 16bit big-endian databus in mind, it might also start with bit1 of the second byte (aka bit1 of the first 16bit word).
Either way, the 2bit MFM bits seem to be output LSB first, so I would assume that the odd/even bits are also LSB first (ie. probably starting with bit1 of the first byte-or-word, rather than bit7 or bit15). So my probability ranking for "which bit is transferred first" would be:
Code:
30% bit1 of second byte
20% bit1 of first byte
10% bit7 of first byte
40% neither of above
That's for the 512-byte main data chunk; the header should consist of four smaller chunks (4-byte, 16-byte, 4-byte, 4-byte), each one with its bits disordered in a similar fashion.
For the leading gap/sync bytes at the beginning of the header, I would (almost) assume that they are encoded more straightforwardly on all 8 bits per byte (instead of splitting those bytes into 4bit/odd and 4bit/even). The known stuff about the sync byte/word is:
Code:
  sync byte = A1h = 10100001
  ...encoded to MFM format...
  sync word 4489h = 01.00.01.00.10.00.10.01
And that should be so... because the encoding is said to work like so:
Code:
  1  ->  01 	
  0  ->  10  if following a 0 
  0  ->  00  if following a 1
Which, uh. The subtle differences between who-or-what is following or is being followed by whatever don't make the description absolutely foolproof. And, for the A1h=4489h translation, it would only make sense if "following a X" actually meant "following a XY bitpair". Aka, for a "0" bit, output a "10" MFM bitpair if the preceding MFM bitpair was "01" or "00". Hmmmm, yes, that sounds about right, doesn't it?
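To make my current guess a bit more concrete, here is a sketch of the odd/even split for a single 32bit word; the bit order and word order are exactly the open questions above, so treat this as one possible interpretation rather than the real thing:
Code:
#include <stdint.h>

/* One possible interpretation of the odd/even split: collect bits
   1,3,5,... of a 32bit data word into an "odd" word and bits 0,2,4,...
   into an "even" word. All odd words of a block would then be written
   first, followed by all even words (each expanded to 2bit MFM values). */
static void split_odd_even(uint32_t data, uint32_t *odd, uint32_t *even)
{
    *odd  = (data >> 1) & 0x55555555UL;   /* bits 1,3,5,... */
    *even =  data       & 0x55555555UL;   /* bits 0,2,4,... */
}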

For getting the timing implemented in Windows, I've tried using SetPriorityClass=REALTIME and SetThreadPriority=CRITICAL, which should gain the highest possible priority, and it should even make the mouse unresponsive. Unfortunately, the latter isn't really true: It seems to halt other tasks, and prevents actions like moving other windows via mouse dragging, but the actual mouse arrow can still be moved around. Ctrl+Alt+Del is also still handled. So interrupts are still enabled, and even mouse arrow drawing is still done (though that could maybe be disabled by capturing/disabling the mouse arrow). But the interrupt problem could totally smash the timings if the IRQ handler takes 1us or longer. I am afraid that Windows .exe files simply cannot disable interrupts. Maybe drivers could do such a thing, but I've never programmed anything but plain .exe's under Windows yet.
The ADTWin tool does seem to include some driver (maybe solely for unlocking the parallel port under winXP and up), but the driver doesn't seem to be able to fix timing issues either (the ADTWin webpage recommends not to touch USB mice during transfer, or to try using PS/2 mice, and not to use any serial [mice] hardware in general).
So, timings will probably be a bit fragile. One nice thing is that one could sense any such problems via RDTSC:
Code:
wait for desired RDTSC value
issue two OUT opcodes to produce low-high pulses
verify expected RDTSC value (should be around 2.6us higher than previous value)
so one could automatically retry writing the whole track upon errors (and hope that there will be no new timing problem arising during next 200ms), and if the retries don't work out then one could at least throw an error message saying that writing wasn't successful. If necessary, one could port the utility to DOS, which should provide 100% perfect timing control.

Now what's up with a single-track ADF for testing? Like, on cartridge-based systems, there are lots of 4Kbyte-ROM competitions. And I would have hoped that people in the Amiga scene would have made a bunch of equivalent single-track games/demos?
I can probably make a "blink-the-screen" demo myself, but with all the bitorder, checksum, MFM, and timing issues, there are so many things that could go wrong, and it would be nice to have some small ADF test file that is known to work on real hardware (without risking getting that part wrong, too).

Precomp is also still unclear to me. The Teac FDD datasheet merely says that the drive can/may/might/needs to support it, or something like that, but without details about when/how to implement it. I assume that precomp is meant to issue an extra delay before positive or negative magnetic transitions on higher track numbers - but the WDATA pin merely allows signalling transitions (without knowing/saying whether it shall be a positive or negative transition). Is that stuff described somewhere? Otherwise one could only try to write discs with different guessed precomp timings, and then use an oscilloscope to see if the resulting disc outputs clean signals upon reading back the written data.
EDIT: Other idea: Or does precomp relate to long/short pulses rather than positive/negative pulses? Something like short pulses being outweighed by long pulses, so one would need to stretch short pulses and shrink long pulses on certain tracks?

Oh, and the http://lclevy.free.fr/adflib/adf_info.html page says that a sector should start with "MFM value 0xAAAA AAAA (when decoded : two bytes of 00 data)".
But, theoretically, a "0" bit should encode as "10" or "00" in MFM format. So writing AAAAAAAAh could violate the MFM encoding (depending on the last bit of the previous sector). Is that some known problem/feature of the Amiga disc encoding?

Last edited by nocash; 07 April 2016 at 21:08.
Old 08 April 2016, 02:17   #8
nocash
Registered User
 
Join Date: Feb 2016
Location: Homeless
Posts: 64
I take everything back. Just checked wikipedia on MFM: https://en.wikipedia.org/wiki/Modifi...ncy_Modulation
Turns out that the A1h sync byte is a normal A1h byte with a "missing clock bit". Normally, A1h would encode as 44A9h (and that's meant to be transferred MSB first), and for the special sync value, it's instead encoded as 4489h. So that explains the odd 4489h value in the docs, and also explains how one could distinguish between A1h as sync byte and A1h as data byte.

The MFM notation can be a bit confusing: A "1" always translates to a "1" (=transition) with a preceding "0" and a following "0" (=no transitions), ie. a "1" translates to "010". But as there are only "2 bits per bit", one would usually truncate one of the zeroes (and treat it as part of the next/previous bit), so the "010" would become "01" (without the trailing 0, as described on wikipedia and in the Amiga docs), or somebody else might prefer "10" (without the leading 0, as I had originally (mis-)interpreted the example in the Teac datasheet).
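As a small sketch of that rule (the clock bit is "1" only between two "0" data bits): applied to A1h it yields 44A9h, and the 4489h sync word is then just the same value with one clock bit cleared on purpose. The function below is only my illustration, not how the Amiga hardware does it:
Code:
#include <stdint.h>

/* Expand one data byte into 16 MFM bits, MSB first: each data bit gets a
   clock bit in front of it, and the clock bit is "1" only if both the
   previous and the current data bit are "0". 'prev' is the last data bit
   of the preceding byte. mfm_encode_byte(0xA1, 0) returns 0x44A9; the
   sync word 0x4489 is that value with one clock bit removed by hand. */
static uint16_t mfm_encode_byte(uint8_t data, int prev)
{
    uint16_t out = 0;
    int i, bit;
    for (i = 7; i >= 0; i--) {
        bit  = (data >> i) & 1;
        out  = (uint16_t)(out << 2);
        out |= (uint16_t)(((prev == 0 && bit == 0) ? 2 : 0) | bit);
        prev = bit;
    }
    return out;
}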

Also tried to re-read the FAQ's checksum function:
Code:
#define MASK 0x55555555	/* 01010101 ... 01010101 */
unsigned long *input;	/* MFM coded data buffer (size == 2*data_size) */
unsigned long *output;	/* decoded data buffer (size == data_size) */
unsigned long odd_bits, even_bits;
unsigned long chksum;
int data_size;		/* size in long, 1 for header's info, 4 for header's sector label */
int count;

chksum=0L;
/* the decoding is made here long by long : with data_size/4 iterations */
for (count=0; count<data_size/4; count++) {
	odd_bits = *input;                /* longs with odd bits */
	even_bits = *(input+data_size);    /* longs with even bits : located 'data_size' bytes farther */
	chksum^=odd_bits;              /* eor */
	chksum^=even_bits;
        /*
         * MFM decoding, explained on one byte here (o and e will produce t) :
         * the MFM bytes 'abcdefgh' == o and 'ijklmnop' == e will become
         * e & 0x55U = '0j0l0n0p'
         * ( o & 0x55U) << 1 = 'b0d0f0h0'
         * '0j0l0n0p' | 'b0d0f0h0' = 'bjdlfnhp' == t
         */ 
	/* on one long here : */
	*output = ( even_bits & MASK ) | ( ( odd_bits & MASK ) << 1 );
	input++;    /* next 'odd' long and 'even bits' long  */
	output++;     /* next location of the future decoded long */
	}
chksum&=MASK;	/* must be 0 after decoding */
What is clear is that it's XORing 32bit units.
Confusingly, it's using separate XORs for the "odd" and "even" 32bit values (before merging them into a normal 32bit value).
The same result should occur when XORing the merged 32bit values (and doing a final "sum = sum XOR (sum/2)" after the loop).
And then, after the loop, every second bit is masked off, so only 16 bits of the 32bit checksum are actually used?
And then comes the final comment saying that the checksum "must be 0 after decoding". I would have assumed that it would have to be equal to (or the inverse of) the checksum entry in the header. But if it "must be 0" then it seems to be more or less the inverse of how I was thinking it might work ;-(

No, wait! The comment means that the masked-off bits must be zero. And then, that crippled value must be equal to the header entry, right? Please, yes, please, let it be so!
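Assuming that interpretation is right, the whole check would boil down to something like this sketch (XOR all longwords of the MFM-encoded block, keep only the data-bit positions, compare with the equally masked header value):
Code:
#include <stdint.h>

#define MFM_MASK 0x55555555UL

/* Checksum over an MFM-encoded block (all the odd longs followed by all
   the even longs): XOR the longwords together, mask off the clock-bit
   positions, and compare against the (equally masked) stored checksum. */
static int mfm_checksum_ok(const uint32_t *mfm, int nlongs, uint32_t stored)
{
    uint32_t sum = 0;
    int i;
    for (i = 0; i < nlongs; i++)
        sum ^= mfm[i];
    return (sum & MFM_MASK) == (stored & MFM_MASK);
}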

Last edited by nocash; 08 April 2016 at 02:24.
Old 08 April 2016, 16:16   #9
phx
Natteravn
 
 
Join Date: Nov 2009
Location: Herford / Germany
Posts: 2,521
Are you sure the checksum example is complete?

It is correct so far that all 128 odd and even longwords of the data block are xored into the checksum, which was initialized with zero. But in the end you have to compare this calculated checksum with the checksum stored in the header, while disregarding all clock-bits (& 0x55555555) in both values.
Old 08 April 2016, 20:14   #10
pandy71
Registered User
 
Join Date: Jun 2010
Location: PL?
Posts: 2,856
The Amiga uses this frequency/period for the FDD:

PAL 506699.285714286Hz 1973.5571535103ns
NTSC 511363.636363636Hz 1955.5555555556ns

I believe that even a simple uC with some parallel-to-serial and serial-to-parallel registers would be beneficial (and it would still be low tech) - modern PCs are not good at real-time tasks (there are no consumer operating systems capable of doing this).
Old 09 April 2016, 13:42   #11
nocash
Registered User
 
Join Date: Feb 2016
Location: Homeless
Posts: 64
It works! And it even worked on the very first write attempt: removed the write-protect tab from a disc, ran the parallel-port writing tool on it, put the disc into the Amiga, and voila: it booted my test program.
Kinda nice surprise. After all my guessing and speculation and misunderstanding of the MFM encoding, I would have assumed that various bugs would still be hiding in the different stages of the data encoding and writing functions. But I seem to have somehow managed to produce completely working, fully perfect bits & pulses :-)

Quote:
Originally Posted by phx View Post
Are you sure the checksum example is complete?
Looks sort of complete. As far as I understand it, it would produce the correct checksum for the main data block. And for the header, one would need to run the function on two smaller blocks (and XOR the results to get the final header checksum; or one could omit the second step if the header's "label/recovery" area is zero-filled).
I've looked at your thread about saving highscores on an MFM track a bunch of times; the thread made me hope that it's possible to write MFM data, and also helped me understand the MFM info in the FAQ a bit better.

Quote:
Originally Posted by pandy71 View Post
Amiga use this frequency/period for FDD:
PAL 506699.285714286Hz 1973.5571535103ns
NTSC 511363.636363636Hz 1955.5555555556ns
Good point, then something like 1.964us should be best for writing PAL/NTSC Amiga discs. But it should be close enough to the official 2us MFM timing, so I could probably safely stick with 2us.
The other important value would be the rotation speed. My Amiga has a Matsushita JU-253 floppy drive, but I haven't found a datasheet for it (is there one?). Anyways, the drive must contain its own oscillator, producing a constant rotation speed regardless of the Amiga hardware, supposedly the usual 300rpm (aka five rotations per second, aka 200ms per rotation). Then the nominal capacity would be (+/- some tolerance):
Normal MFM = 200ms/(2*2us) = 50000 bits/rotation
Amiga PAL = 200ms/(2*1.9735us) = 50671 bits/rotation
Amiga NTSC = 200ms/(2*1.9555us) = 51137 bits/rotation

Yes, an external microprocessor would make the timing easier and more stable. But a DIY circuit that consists of only two plugs and a handful of short wires is cooler, and easier to implement without needing special components & chip programming utilities (apart from most people probably getting a heart attack when hearing that the thing requires a "parallel port").

The Windows interrupt handler seems to be frequently stealing about 30us..100us, which is kinda fatal when needing 2us accuracy. But then I inserted a CLI opcode (interrupt disable), mostly expecting Windows to ignore it or to throw a privileged opcode exception... but surprisingly, it worked just fine: the mouse arrow can no longer be moved, Ctrl+Alt+Del no longer works, and my .exe seems to get the whole CPU, without even needing to deal with SetPriorityClass or SetThreadPriority. Out of curiosity, I've also tested CLI in a DOS program running in a DOS prompt window, and that worked, too. At least, CLI works well with my Win98 computer; I don't know how it would behave with WinXP and up, or on dual core processors.

For some reason, I am still getting timing errors ranging somewhere between 0.5us and 1.0us. That effect seems to be related to OUT opcodes (which are privileged instructions, so they would trigger some exception handler before being passed to the real hardware, and as it seems, the execution time of that exception handler isn't constant).
When using RDTSC to measure the time per two OUT opcodes, I am getting randomly changing results ranging from 3631 to 4244 clock cycles on a 1GHz computer. I've repeated the measuring function a bunch of times (to eliminate cache problems, or possible problems related to some prescaler bound to the actual I/O waitstate counter), but that didn't help. On the other hand, using RDTSC to measure the time of a LOOP opcode produces perfectly constant results (somewhat confirming that there isn't any background DMA responsible for the timing errors). So, whatever is happening there, it must be bound to Windows' I/O exception handler doing weird stuff.
Anyways, the disc writing seems to be reliable enough if the error is less than 1.0us; maybe the data won't last too many years, but it appears to be good enough for testing/booting some code on the Amiga, at least when using a fast PC (on an older 100MHz computer, the same inaccuracy of 0..600 clock cycles would probably result in a 10-times higher timing error).

Oh, and here's my minimal .ADF image that I've used for testing. I haven't written a 68K assembler yet, so I've manually encoded the opcodes and manually computed the boot sector checksum:
Code:
;adf image:
 db 044h,04fh,053h,000h           ;000h..003h ;hdr: ID "DOS",flags
 db 057h,0b6h,0f5h,075h           ;004h..007h ;hdr: checksum
 db 000h,000h,003h,070h           ;008h..00Bh ;hdr: value 880 decimal
 db 041h,0f9h,000h,0dfh,0f1h,080h ;00Ch..011h ;mov  a0,0dff180h  ;color00
 db 052h,040h                     ;012h..013h ;addw d0,1
 db 030h,080h                     ;014h..015h ;movw [a0],d0
 db 060h,0fah                     ;016h..017h ;jmp  012h
 db (200h*11*160-18h) dup (0)     ;018h..     ;zeropad remaining disc space

;checksum calculation:
;      444F5300h
;  +   00000370h
;  +   41F900DFh
;  +   F1805240h
;  +   308060FAh
;  -------------
;     1A8490A89h
;  +          1h
;  -------------
;      A8490A8Ah
;  xor FFFFFFFFh
;  -------------
;      57B6F575h
The idea would have been that it would produce some sort of snow or flickering, possibly altering only certain colors of the hand/floppy bootscreen, and hoping that the screen uses color00 at all.
The actual effect has been that the Amiga displays horizontal lines, drawn across the whole screen, with some wandering effect due to lack of vertical synchronization. Not exactly what I expected it to look like, but it looks reasonable, and makes me think that my disc has booted up correctly and that the test program is running well (unless somebody tells me that the Amiga always draws horizontal lines in case of boot errors).
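For reference, the same boot sector checksum as a small C sketch (sum of all 256 big-endian longwords of the 1024-byte boot block with the carries added back in, the checksum field itself counted as zero, then inverted; it reproduces the 57B6F575h from the manual calculation above):
Code:
#include <stdint.h>

/* Boot block checksum: sum the 256 big-endian longwords of the first
   1024 bytes (with the checksum field at offset 004h treated as zero),
   wrap any carry back into the sum, and invert the result. */
static uint32_t bootblock_checksum(const uint8_t *blk)   /* 1024 bytes */
{
    uint32_t sum = 0, word;
    int i;
    for (i = 0; i < 1024; i += 4) {
        word = ((uint32_t)blk[i] << 24) | ((uint32_t)blk[i + 1] << 16) |
               ((uint32_t)blk[i + 2] << 8) | (uint32_t)blk[i + 3];
        if (i == 4)
            word = 0;                 /* checksum field counts as zero */
        sum += word;
        if (sum < word)
            sum++;                    /* add the carry back in         */
    }
    return ~sum;
}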

Last edited by nocash; 09 April 2016 at 14:38.
Old 09 April 2016, 16:25   #12
pandy71
Registered User
 
Join Date: Jun 2010
Location: PL?
Posts: 2,856
Quote:
Originally Posted by nocash View Post
It works! And it did even work on the very first write attempt: Removed the write-protect tab from a disc, did run the parallel-port writing tool on it, and put the disc into the amiga, and voila: it booted my test program.
Kinda nice surprise.

It may need a very fast CPU or a special OS configuration to work reliably... NT is not a real-time kernel.

Quote:
Originally Posted by nocash View Post
Yes, an external microprocessor would make the timing easier and more stable. But a DIY circuit that consists of only two plugs and a handful of short wires is cooler, and easier to implement without needing special components & chip programming utilities (apart from most people probably getting a heart attack when hearing that the thing requires a "parallel port").

The windows interrupt handler seems to be frequently stealing about 30us..100us, which is kinda fatal when needing 2us accuracy. But then I've inserted a CLI opcode (interrupt disable), mostly expecting windows to ignore it or to throw a privileged opcode exception... but surprisingly, it worked just fine: the mouse arrow can be no longer moved, and Ctrl+Alt+Del is no longer working, and my .exe seems to gain the whole cpu load, without even needing to deal with SetPriorityClass or SetThreadPriority. For curiosity, I've also tested CLI in a DOS program running in a DOS prompt window, and that worked, too. At least, CLI works well with my win98 computer, don't know how it would behave with WinXP and up, or on dual core processors.
Win98 is DOS-based and as such it may be real-time capable - modern multitasking OSes like Windows NT (ie. all common MS OSes) are not real-time; they use a Hardware Abstraction Layer to prevent direct bit banging.
Direct access to hardware (such as the printer port) requires violating security mechanisms in Windows (perhaps needing to install unsigned drivers etc etc etc).

Quote:
Originally Posted by nocash View Post
For some reason, I am still getting timing errors ranging somewhere between 0.5us and 1.0us. That effect seems to be related to OUT opcodes (which are privileged instructions, so they would trigger some exception handler before passing them to real hardware, and as it seems, the execution time of the exception handler isn't constant).
When using RDTSC to measure the time per two OUT opcodes, I am getting randomly changing results ranging from 3631 to 4244 clock cycles on a 1GHz computer. I've repeated the measuring function a bunch of times (to eliminate cache problems, or possible problems related to some prescaler bound to the actual I/O waitstate counter), but that didn't help. On the other hand, using RDTSC to measure the time of a LOOP opcode is producing perfectly constant results (somewhat confirming that there isn't any background DMA responsible for the timing errors). So, whatever is happening there, it must be bound to window's I/O exception handler doing weird stuff.
Anyways, the disc writing seems to be reliable enough if the error is less than 1.0us, maybe the data won't last too many years, but it appears to be good enough for testing/booting some code on the amiga, at least when using a fast PC (on an older 100MHz computer, the same inaccuracy of 0..600 clock cycles would probably result in a 10-times higher timing error).
And this is my point... it's highly dependent on the user's configuration... even HW parallel-to-serial conversion would reduce the error by a factor of at least 8 - a 74HCT597 would be perfect for solving most of your problems.
Old 09 April 2016, 20:28   #13
nocash
Registered User
 
Join Date: Feb 2016
Location: Homeless
Posts: 64
Uh, which problems? Rhetorical question: As of now, it's working fine. And I don't think it would be a good idea to install extra components before even testing whether the current circuit works on other computers.
A shift register wouldn't really help; it would be a bit tricky to build a clock generator for supplying the shift clock, and in the end it would still need the exact same precise per-bit timing for knowing when to load new data into the shift register.
For other OSes, my program includes an unlock-the-parallel-port driver for WinXP which is also used in my other emus, but I don't know if anybody has ever tested that driver under NT or Vista or newer. The latter/newer ones probably need some signed driver, or the "test signing" mode mentioned on the ADTWin page - which seems to be good enough to write discs on different OS versions, and it appears to have been tested on hardware running at 266MHz or faster.
Concerning my own tool, I'll probably polish it a little in the next few days, but I very much doubt that anybody else would ever consider using it, even if it worked perfectly on all kinds of hardware/software configurations (or is there anybody reading this with a soldering iron in one hand, and a parallel port in the other?).
Anyways, my main goal has been getting the Amiga to boot up, and after having solved that, the next step would be making some small PC-to-Amiga transfer utility, writing it to my bootsector, and then I'll never need to use the PC-to-FDD adaptor anymore.
Old 10 April 2016, 15:19   #14
pandy71
Registered User
 
Join Date: Jun 2010
Location: PL?
Posts: 2,856
Quote:
Originally Posted by nocash View Post
Uh, which problems? Rhetorical question: As by now, it's working fine. And I don't think that it would be too good to install extra components before even testing if the current circuit is working on other computers.
It works for you, it works for some people, but it may sometimes not work at all.

Quote:
Originally Posted by nocash View Post
A shift register wouldn't really help, it would be a bit tricky to build a clock generator for supplying the shift-clock, and in the end it should still need the exact same precise per-bit timing for knowing when to load new data into the shift-register.
The 597 has an internal buffer so it can store 1 byte internally; of course some small FIFO would be much better, but for a cheap solution a small AVR with internal SPI may be a better substitute.

Quote:
Originally Posted by nocash View Post
For other OSes, my program includes some unlock-the-parallel-port driver for winXP which is also used in my other emus, but I don't know if anybody has ever tested that driver under NT or Vista or newer. The latter/newer ones do probably need some signed driver, or the "test signing" mode mentioned on the ADTWin page - which seems to be good enough to write discs on different OS versions and it appears to have been tested on hardware running at 266MHz or faster.
Concerning my own tool, I'll probably polish it a little in next some days, but I would very much doubt that anybody else would ever consider using it, even if it worked perfectly on all kinds of hardware/software configurations (or is there anybody reading this with a soldering iron in one hand, and a parallel port in the other?).
Anyways, my main goal has been getting the amiga to boot up, and after having solved that, the next step would be making some small PC-to-Amiga transfer utillity, writing it to my bootsector, and then I'll never need to use the PC-to-FDD adaptor anymore.
Hook the PC directly to the Amiga - there's no sense recording to an FDD when you can emulate the FDD directly.
Old 11 April 2016, 00:13   #15
nocash
Registered User
 
Join Date: Feb 2016
Location: Homeless
Posts: 64
I've tried writing some 880K game discs; the results are a bit mixed: Around 3 games are working, but showing error/retry screens now and then (and do work after doing the retry). And some other 3 games are just hanging, maybe because they are using custom loaders without proper error/retry handling (one of the 3 games doesn't seem to have any bootsector at all, which might also explain problems on that game). The retries must occur either because of the timing errors caused by Windows, or due to missing write precompensation...

So, it's time to look into write precompensation. There's surprisingly little info about it on the internet, only some very basic descriptions, but no timing diagrams, nor details about how/when/where to apply precompensation on which pulses. One quite interesting floppy page is this: http://info-coach.fr/atari/hardware/FD-Hard.php (but without details on precomp), then there is a bunch of old emails about precompensation: http://marc.info/?t=137607091900004&r=1&w=2 - the useful part in those emails says to look at three patents:
http://www.google.ch/patents/US4173027 - 1977 write precompensation system
http://www.google.ch/patents/US4334250 - 1978 MFM data encoder with precomp
http://www.google.ch/patents/US5187614 - 1989 some newer stuff
The relevant one seems to be the patent from 1978; the above link contains some terribly distorted OCRed tables, but there's also a pdf version with more-or-less legible tables:
https://patentimages.storage.googlea.../US4334250.pdf
The first table shows six bit patterns that require precompensation:
Code:
  BIT PATTERN AMOUNT OF
              COMPENSATION
  D C B A
  0 0 0 1     -250 nanoseconds   ;\
  0 1 1 0     -250 nanoseconds   ; precompensated
  1 1 1 0     -250 nanoseconds   ;/
  0 0 1 1     +250 nanoseconds   ;\
  1 0 1 1     +250 nanoseconds   ; delayed
  1 0 0 0     +250 nanoseconds   ;/
  | | | |
  | | | '----- next bit
  | | '------- current bit
  | '--------- previous bit
  '----------- pre-previous bit
The four bits are normal "data bits". To understand the table, it's helpful to insert the MFM "clock bits" between the "data bits", so that the clock+data bits look like so:
Code:
  D   C   B   A
  0 1 0 1 0 0 1     -250 nanoseconds  ;-applied between "C" and "B"
  0 0 1 0 1 0 0     -250 nanoseconds  ;\
  1 0 1 0 1 0 0     -250 nanoseconds  ; applied at "B"
  0 1 0 0 1 0 1     +250 nanoseconds  ;
  1 0 0 0 1 0 1     +250 nanoseconds  ;/
  1 0 0 1 0 1 0     +250 nanoseconds  ;-applied between "C" and "B"
  |   |   |   |
  |   |   |   '----- next bit
  |   |   '--------- current bit
  |   '------------- previous bit
  '----------------- pre-previous bit
As one can see, SHORT pulses (2 bits apart) are made even SHORTER if they are followed or preceded by LONGER pulses (3 or 4 bits apart).

To ease encoding, the patent contains a second table that is intended to translate the current bit (B), with respect to the previous/next bits (D, C and A), into the final MFM signal with 250ns resolution:
Code:
        INPUT WORD        OUTPUT WORD
           D  C  B  A
  ADDR  A4 A3 A2 A1 A0    B7 B6 B5 B4 B3 B2 B1 B0
   0    0  0  0  0  0     0  0  1  0  0  0  0  0  ;\
   1    0  0  0  0  1     0  1<-0  0  0  0  0  0  ;
   2    0  0  0  1  0     0  0  0  0  0  0  1  0  ;
   3    0  0  0  1  1     0  0  0  0  0  0  0->1  ;
   4    0  0  1  0  0     0  0  0  0  0  0  0  0  ;
   5    0  0  1  0  1     0  0  0  0  0  0  0  0  ; with
   6    0  0  1  1  0     0  0  0  0  0  1<-0  0  ; precompensation
   7    0  0  1  1  1     0  0  0  0  0  0  1  0  ;
   8    0  1  0  0  0     0  0  0->1  0  0  0  0  ;
   9    0  1  0  0  1     0  0  1  0  0  0  0  0  ;
  10    0  1  0  1  0     0  0  0  0  0  0  1  0  ;
  11    0  1  0  1  1     0  0  0  0  0  0  0->1  ;
  12    0  1  1  0  0     0  0  0  0  0  0  0  0  ;
  13    0  1  1  0  1     0  0  0  0  0  0  0  0  ;
  14    0  1  1  1  0     0  0  0  0  0  1<-0  0  ;
  15    0  1  1  1  1     0  0  0  0  0  0  1  0  ;/
  16    1  0  0  0  0     0  0  1  0  0  0  0  0  ;\
  17    1  0  0  0  1     0  0  1  0  0  0  0  0  ;
  18    1  0  0  1  0     0  0  0  0  0  0  1  0  ;
  19    1  0  0  1  1     0  0  0  0  0  0  1  0  ;
  20    1  0  1  0  0     0  0  0  0  0  0  0  0  ;
  21    1  0  1  0  1     0  0  0  0  0  0  0  0  ; without
  22    1  0  1  1  0     0  0  0  0  0  0  1  0  ; precompensation
  23    1  0  1  1  1     0  0  0  0  0  0  1  0  ;
  24    1  1  0  0  0     0  0  1  0  0  0  0  0  ;
  25    1  1  0  0  1     0  0  1  0  0  0  0  0  ;
  26    1  1  0  1  0     0  0  0  0  0  0  1  0  ;
  27    1  1  0  1  1     0  0  0  0  0  0  1  0  ;
  28    1  1  1  0  0     0  0  0  0  0  0  0  0  ;
  29    1  1  1  0  1     0  0  0  0  0  0  0  0  ;
  30    1  1  1  1  0     0  0  0  0  0  0  1  0  ;
  31    1  1  1  1  1     0  0  0  0  0  0  1  0  ;/
      mode prev DTA nxt      - CLK +     - DTA +
The lower half of the table just generates the normal MFM signal (transitions are always located in B5 or B1, ie. the normal "2bit MFM" values are equal to B5 and B1).
In the upper half of the table, some transitions are shifted 250ns earlier/later (as indicated by the arrows, which show their normal location and the actual shifted location).

The Amiga/double density precompensation supposedly works similarly. But CAUTION about the timings: First off, the Amiga is said to use 140ns precomp (though more common would be 125ns according to FDD datasheets), but, anyways, it shouldn't be 250ns. And, second, the above eight 250ns values sum up to 2us per bit - which would be the transfer rate used on high density discs, whilst double density bits should be 4us wide.
So, to use a similar table for Amiga discs, one would need to expand the above 8bit entries to 32bit (for getting 125ns resolution per 4us).

With modern hardware, it would probably be better not to use the above kind of bit patterns with their fixed resolution of 125ns or 250ns, and instead to use precompensation values that are linearly (or, probably even better, somehow nonlinearly) interpolated, ranging from 0ns on track 0 to something like 125ns (or even 250ns or whatever) on track 79. No idea about the optimal values, but almost anything should be better than the Amiga's 0ns on tracks 0..39 and 140ns on tracks 40..79.
The easiest way to generate such variable precomp values would be to stick with the above 8bit table, and to treat bits B6/B4 and B2/B0 as "N nanoseconds" earlier/later than B5 and B1.
Then one could calculate the pulse lengths on the fly during writing, or, when having enough memory, one could allocate a huge array of around 64Kbytes or 64Kwords and store the pulse lengths for all transitions on the track in that array (a normal DD track should have max 50000 transitions, or less since some data bits have no transitions). But the array should have at least, say, 55000 entries, since one should write "more data than normally possible", to ensure that the whole track gets overwritten even in cases where the disc is spinning slightly slower than 300rpm.
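Here is a sketch of that idea, assuming a simple linear ramp of the precomp amount over the track number and applying it only where a 4us interval borders a longer one (the 140ns maximum and the ramp are guesses, not measured values):
Code:
/* in_ns:  nominal 4000/6000/8000ns intervals between transitions
   out_ns: the same intervals with precompensation applied
   The transition between a short (4000ns) interval and a longer neighbour
   is shifted by 'p' ns into the longer interval, so the short interval
   gets shorter and the long one correspondingly longer. 'p' ramps
   linearly from 0ns on track 0 to about 140ns on track 79. */
static void apply_precomp(const unsigned short *in_ns, unsigned short *out_ns,
                          long count, int track)
{
    long i;
    unsigned short p = (unsigned short)((long)track * 140 / 79);

    for (i = 0; i < count; i++)
        out_ns[i] = in_ns[i];
    for (i = 0; i + 1 < count; i++) {
        if (in_ns[i] == 4000 && in_ns[i + 1] > 4000) {
            out_ns[i]     -= p;       /* short interval ends earlier      */
            out_ns[i + 1] += p;       /* following long one gets longer   */
        } else if (in_ns[i] > 4000 && in_ns[i + 1] == 4000) {
            out_ns[i]     += p;       /* long interval ends later         */
            out_ns[i + 1] -= p;       /* following short one gets shorter */
        }
    }
}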

Last edited by nocash; 13 April 2016 at 14:35.
Old 11 April 2016, 10:07   #16
idrougge
Registered User
 
Join Date: Sep 2007
Location: Stockholm
Posts: 4,344
Quote:
Originally Posted by nocash View Post
My Amiga has a Matsushita JU-253 floppy drive. But I haven't found a datasheet for it (is there one?)
I think there is one in a supplement to the A2000 technical reference manual.
Old 11 April 2016, 15:03   #17
pandy71
Registered User
 
Join Date: Jun 2010
Location: PL?
Posts: 2,856
Quote:
Originally Posted by nocash View Post
The amiga/double density precompensation is supposedly working similar. But CAUTION about the timings: First of, Amiga is said to use 140ns precomp (though more common would be 125ns according to FDD datasheets), but, anyways, it shouldn't be 250ns. And, second, the above eight 250ns values do sum up to 2us per bit - which would be the transfer rate used on high density discs, whilst double density bits should be 4us wide.
So, to use a similar table for amiga discs, one would need to expand the above 8bit entries to 32bit (for getting 125ns resolution per 4us).
I assume this may vary between drives and with the disc itself (magnetic properties + combined response from head and electronics) - to really capture such a dependency you probably need to use some KryoFlux hw/sw and do some tests (or a decent digital oscilloscope with some offline analysis) - this may also be useful: http://www.techtravels.org/?cat=2 .
Timing on the Amiga is very simple - there is only one clock source, it is well known, and all frequencies are in strict relation to it - that is, you can only have multiples of 35.242092027ns (PAL) or 34.9206349206ns (NTSC).
Old 13 April 2016, 14:27   #18
nocash
Registered User
 
Join Date: Feb 2016
Location: Homeless
Posts: 64
One more caution on using the precomp tables: They can be used only for encoding "normal" bits - not for the special sync bytes. So, it might be a better approach to encode everything as plain 2bit MFM values without precomp, and then do the precomp adjustments on the boundaries between short and long pulses. I've tried implementing precomp in my tool, but it didn't seem to improve accuracy significantly.

Then I've ported the tool to DOS, and tested it in a DOS window, in Win98 rebooted in DOS mode, and in Win98 rebooted in command prompt safe mode (with config.sys skipped, ie. without any protected mode memory managers) - and still got timing errors.
So, the problem seems to be a hardware issue, unrelated to the Win98 exception handler. I am not absolutely sure, but I think that it might be caused by DRAM refresh colliding with OUT opcodes, with both needing to use the address/databus (unlike the RDTSC+LOOP test, which works flawlessly because it can execute the looped code from cache).
If it's really caused by DRAM refresh then one probably couldn't do much about it (reprogramming the refresh logic would be a bit risky); all one could do would be to hope that the disc is reliable enough to work despite occasional read errors. Computers with shorter refresh durations might also improve reliability.

Oh, and I've tested the tool on a laptop with Win7 on it. The computer doesn't have a parallel port, but writing to the parallel port address seems to work okay (no driver signing needed; I think signed drivers are needed only for 64bit Windows versions). The timing error seems to be a bit smaller, but there's one huge problem: CLI and STI opcodes produce some surreal error message (telling me that I'll get notified when the problem is fixed; giving me nightmares about the Microsoft service staff giving me obscure signs next time I am at the supermarket). One could probably use CLI/STI within the driver (the current driver just unlocks I/O for the main program, without doing the actual transfer at driver level).

Quote:
Originally Posted by idrougge View Post
I think there is one in a supplement to the A2000 technical reference manual.
Are you sure? Where exactly? I have an A500/A2000 technical reference manual... but it doesn't contain anything about the internal disc drive (apart from the connector pinout shown in the mainboard schematic).

Last edited by nocash; 13 April 2016 at 14:39.
Old 13 April 2016, 19:33   #19
pandy71
Registered User
 
Join Date: Jun 2010
Location: PL?
Posts: 2,856
Quote:
Originally Posted by nocash View Post
Then I've ported the tool to DOS, and tested it in a DOS window, and in win98 rebooted in DOS mode, and in win98 rebooted in command prompt safe mode (with config.sys skipped, ie. without any protected mode memory managers) - and still got timing errors.
So, the problem seems to be a hardware issue, unrelated to the win98 execption handler. I am not absolutely sure, but I think that it might be caused by DRAM Refresh colliding with OUT opcodes, and both needing to use the address/databus (unlike the RDTSC+LOOP test, which works flawless because it can execute the looped code via cache).
If it's really caused by DRAM refresh then one probably couldn't do much about it (reprogramming the refresh logic would be a bit risky), all one could do would be hoping that the disc gets reliable enough to work with occassional read errors. Computers with shorter refresh durations might also improve reliability.
Not sure which hardware you are referring to - if this is an old AT then you can control the timer and DMA for refresh, but if this is a newer system then memory refresh is performed in a different way - synchronous RAM uses internal logic and it is beyond normal CPU control; perhaps you can reprogram the north bridge to control this, but I doubt this is the source of your problem.
Perhaps the PCI latency timer is an issue; try to modify the PCI timing or use a non-PCI motherboard - with a non-PCI mobo you have a plain AT system and virtually no latency issues (in PCI systems the ISA bus is connected through the PCI channel and has very limited bandwidth).
Old 14 April 2016, 12:57   #20
nocash
Registered User
 
Join Date: Feb 2016
Location: Homeless
Posts: 64
The mainboard that I am using is this: http://www.commell.com.tw/product/SBC/LS-563.HTM fitted with some 1.13GHz CPU (Pentium 3 or so). Oh, and it's using shared video memory, so video output might also collide with OUT opcodes in case it's using the same bus for RAM and I/O.

Quote:
Originally Posted by pandy71 View Post
Perhaps PCI latency timer is an issue
I didn't even know that there's such a thing in newer PCs. Yes, that sounds like a possible source of the timing errors, maybe more likely than my DRAM refresh idea. For both PCI latency and DRAM refresh, I couldn't find any neat info about how long they would be blocking the bus (in cycles/nanoseconds). So it's hard to tell if they could explain the effects that I am getting. Which are as follows: When executing 256 OUT opcodes, and measuring the cycles for each OUT with RDTSC, I am getting these statistics:
Code:
  I/O waitstate: Min:1800, Max:2680, Avg:1866, Err:880
MIN is the minimum (=should be) number of cycles per OUT, MAX is the maximum (=worst case) cycles, AVG is the average (all 256 measurements summed up and divided by 256), and ERR is just the difference between MIN and MAX, ie. something is occasionally inserting 880-cycle (around 778ns on the 1.13GHz computer) delays (or possibly, even worse timings might occur now and then, when measuring more than 256 opcodes).
Could that be explained by PCI latency or DRAM refresh or anything else?
Oh, and timings seem to get much worse when an external USB hub is attached to the computer.
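The measurement itself looks roughly like this sketch (__rdtsc() as in the earlier RDTSC sketch; outportb() and LPT_DATA are the same placeholders as before):
Code:
#include <stdio.h>
#include <stdint.h>
#ifdef _MSC_VER
#include <intrin.h>       /* __rdtsc() */
#else
#include <x86intrin.h>
#endif

extern void outportb(unsigned port, unsigned char val);   /* Borland-style placeholder */
#define LPT_DATA 0x378                                     /* placeholder port address  */

/* Measure the TSC cycles taken by each of 256 OUT opcodes and report the
   min/max/average; "Err" (max minus min) is the jitter discussed above. */
static void measure_out_jitter(void)
{
    uint64_t t0, t1, total = 0;
    unsigned long cycles, min = ~0UL, max = 0;
    int i;
    for (i = 0; i < 256; i++) {
        t0 = __rdtsc();
        outportb(LPT_DATA, 0x00);          /* dummy OUT to the parallel port */
        t1 = __rdtsc();
        cycles = (unsigned long)(t1 - t0);
        if (cycles < min) min = cycles;
        if (cycles > max) max = cycles;
        total += cycles;
    }
    printf("I/O waitstate: Min:%lu, Max:%lu, Avg:%lu, Err:%lu\n",
           min, max, (unsigned long)(total / 256), max - min);
}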

I've also experimented with "ignoring" or "accepting" the timing errors. My old code did just "ignore" them (and kept generating the following pulses at their normal/wanted locations), the new code "accepts" them (and moves all following pulses accordingly):
Code:
                 ___   ______   ______   ______   ______
  wanted wdata:     |_|      |_|      |_|      |_|
                 ___   __________   __   ______   ______
  error ignored:    |_|      --->|_|  |_|      |_|
                 ___   __________   ______   ______   ______
  error accepted:   |_|      --->|_|  --->|_|  --->|_|
Both are working, but both still occasionally prompt "Retry" messages during reading; the error-accepted method appears to work a little more reliably (hard to tell for sure). The advantages of accepting the error are avoiding the super-short pulse after the error, and that disc reading should self-synchronize to the last 1-3 received pulses, so it may expect the following pulses to be shifted that way (or maybe some "medium" solution would be best: if the bad pulse was shifted by 880 cycles, apply only 440 cycles of shifting to the following pulses). The disadvantage is that the shifting causes less than 50000 bits to be stored on a track (but that seems to be no real problem since it just shortens the length of the blank gap).
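In code, the difference between the two variants is just whether the deadline grid is kept or re-based after a late pulse (a sketch; tsc_per_us would come from the earlier RDTSC sketch, and the half-error "medium" variant is left out for brevity):
Code:
#include <stdint.h>

/* Compute the deadline for the next pulse. 'deadline' is where the current
   pulse should have gone out, 'actual' is where it really went out (both in
   TSC ticks). With accept=0 the original grid is kept (errors ignored);
   with accept=1 all following pulses are shifted by the error (accepted). */
static uint64_t next_deadline(uint64_t deadline, uint64_t actual,
                              unsigned interval_us, uint64_t tsc_per_us,
                              int accept)
{
    uint64_t next = deadline + (uint64_t)interval_us * tsc_per_us;
    if (accept && actual > deadline)
        next += (actual - deadline);       /* shift following pulses */
    return next;
}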