16 February 2023, 20:38 | #21 |
Registered User
Join Date: Feb 2017
Location: Denmark
Posts: 1,098
|
Just for future readers: I managed to find a ~30-year-old unserviced Chinon FZ-354 drive (A1200 original) that still sort of works, and one (1!) disk (a no-name, originally HD disk of unknown provenance) to test it myself (the other 5 I found were unusable).
Both encodings are fine. RLL(1,4) syncword(s) [$5084] is sort-of OK. Most of the time it's recognized on the first scan, but sometimes (for bad tracks?) it takes multiple rotations (up to 10) before sync is found. The unreliability of getting a proper sync makes having multiple sectors (with such a sync word) a challenge. Seems like it would be best to use index sync and not go out of spec, but maybe there are better ways. |
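For anyone wanting to sanity-check candidate sync words against a run-length constraint, here is a small Python sketch (a convenience tool added for illustration, not something from the posts above). It only checks the interior zero runs of a 16-bit word, since the leading and trailing runs depend on the neighbouring bits on disk:

```python
def interior_zero_runs(word: int, width: int = 16):
    """Zero-run lengths between consecutive 1 bits (MSB first)."""
    bits = format(word, f"0{width}b")
    ones = [i for i, b in enumerate(bits) if b == "1"]
    return [b - a - 1 for a, b in zip(ones, ones[1:])]

def obeys_rll(word: int, dmin: int, dmax: int, width: int = 16) -> bool:
    """True if every interior zero run is within [dmin, dmax]."""
    return all(dmin <= r <= dmax for r in interior_zero_runs(word, width))
```

For example, $5084 = 0101000010000100 has interior zero runs of 1, 4, 4, so it is a legal RLL(1,4) pattern, while the classic MFM sync $4489 passes the RLL(1,3) run-length check (its special property is elsewhere, in the deliberately missing clock bit).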
21 March 2023, 14:36 | #22 |
Registered User
Join Date: May 2013
Location: Grimstad / Norway
Posts: 839
|
So I was reading through the recent AAA document dump by (AFAIK) Dave Haynie and came across the disk controller chapter.
I must be missing something, because the description of RLL(2,7) shows the bit count of the encoded stream doubling for all 7 listed combinations. Contrary to this, the tables indicate you could store 20M on media that is 30M in raw size. I expect this to be a me problem. |
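For what it's worth, the doubling itself isn't the contradiction: MFM also doubles the bit count (one clock cell per data cell), so all rate-1/2 codes start from the same handicap. A quick Python sketch of the RLL(2,7) table (assuming the standard IBM-style table here; the AAA document's table may differ) showing the 2x expansion:

```python
TABLE = {  # assumed standard (IBM-style) RLL(2,7) code table
    "10":   "0100",
    "11":   "1000",
    "000":  "000100",
    "010":  "100100",
    "011":  "001000",
    "0010": "00100100",
    "0011": "00001000",
}

def rll27_encode(bits: str) -> str:
    """Greedy prefix match; every codeword is exactly 2x its input."""
    out, i = [], 0
    while i < len(bits):
        for k in (2, 3, 4):
            if bits[i:i + k] in TABLE:
                out.append(TABLE[bits[i:i + k]])
                i += k
                break
        else:
            raise ValueError("trailing bits do not form a codeword; pad the input")
    return "".join(out)
```

The output is always exactly twice the input length, but because every 1 is followed by at least two 0s, the physical bitcell can be shrunk to 2/3 of the MFM cell at the same minimum flux-transition spacing, for a net 1.5x gain in user data over MFM. That is my reading of where the capacity figure comes from, not something the document necessarily spells out.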
21 March 2023, 17:15 | #23 | |
Defendit numerus
Join Date: Mar 2017
Location: Crossing the Rubicon
Age: 53
Posts: 4,468
|
Quote:
Assume (for convenience) that you have 2us per cell in RLL(1,3), i.e. you can have a polarity change every 4us minimum. This means 1010... is 4/2us. For RLL(2,7) you must necessarily have 100100 in the same 'space': the clock cell should be 4/3us -> 4/3 + 50% = 2us. Paula can't do it. |
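Restating the arithmetic above as a tiny Python sketch (values as Ross gives them, just spelled out):

```python
# Back-of-the-envelope restatement of the timing argument above.
# All times in microseconds; values are the ones quoted in the post.

MIN_SPACING = 4.0             # minimum flux-transition spacing to preserve

# RLL(1,3)/MFM: at least one 0 between 1s -> spacing spans 2 cells.
mfm_cell = MIN_SPACING / 2    # 2.0 us: Paula's native DD bitcell

# RLL(2,7): at least two 0s between 1s -> spacing spans 3 cells.
rll27_cell = MIN_SPACING / 3  # ~1.33 us

# Stretch the RLL(2,7) cell by the 50% margin mentioned and you are
# right back at 2 us - there is no in-between rate Paula can clock at.
stretched = rll27_cell * 1.5  # 2.0 us
```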
|
21 March 2023, 19:24 | #24 |
Registered User
Join Date: Feb 2017
Location: Denmark
Posts: 1,098
|
Experimented quite a bit (with great input from Ross) since the last update, and at least on my drive, using GCR mode and doing RLL(2,7) does not work reliably. It misbehaves in the same way as RLL(1,4), so you're worse off.
But using RLL(1,4) syncword with otherwise RLL(1,3) encoded data works very well (I've gotten some fresh disks). There is the very occasional hiccup, but it's rare enough to be manageable. Encoding 6792 input bytes per track (in 4 sectors) works without pushing any limits. Using 82 cylinders that yields 1113888 bytes / disk. Pushing the limits, I've successfully tested 84 cylinders with 6824 bytes/track -> 1146432 bytes/disk, but that is probably going too far. |
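The capacity figures above check out; as a quick Python sanity check (assuming 2 heads, which the numbers imply):

```python
def disk_bytes(bytes_per_track: int, cylinders: int, heads: int = 2) -> int:
    """Total usable bytes for a given per-track payload."""
    return bytes_per_track * cylinders * heads
```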
25 March 2023, 03:21 | #25 |
Registered User
Join Date: May 2013
Location: Grimstad / Norway
Posts: 839
|
So I spent some time staring at the numbers/patterns, and finally saw the point: different source data will generate different-sized encoded output depending on how you shuffle the pattern generation.
It seems to me that you can optimize for sequential 1 bits, 0 bits, or changes. Has anyone made a tool to gather statistics on the occurrences of these? You can of course just make your encoding tool try out different pattern combos and report on the size... The idea of using a function to XOR with the data before encoding to make it encode more efficiently is intriguing. |
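In case it's useful to anyone, here is a minimal Python sketch of such a statistics tool (a quick illustration, not an existing tool from this thread): it histograms the run lengths of 0s and 1s in the raw input, which is what a run-length-limited encoder ultimately cares about:

```python
from collections import Counter

def run_stats(data: bytes) -> Counter:
    """Histogram of bit runs, keyed by (bit, run_length), MSB-first."""
    bits = "".join(format(b, "08b") for b in data)
    stats = Counter()
    if not bits:
        return stats
    cur, length = bits[0], 1
    for b in bits[1:]:
        if b == cur:
            length += 1
        else:
            stats[(cur, length)] += 1
            cur, length = b, 1
    stats[(cur, length)] += 1  # flush the final run
    return stats
```

Feeding it a real track image should show immediately whether the data leans toward long zero runs, long one runs, or frequent changes.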
25 March 2023, 07:52 | #26 | |
Defendit numerus
Join Date: Mar 2017
Location: Crossing the Rubicon
Age: 53
Posts: 4,468
|
Quote:
The 'modifier' is basically added to the plain header: to reach a local minimum it is sufficient to break the track into sectors (which of course also has advantages regarding the loading speed). |
|
25 March 2023, 10:12 | #27 |
Registered User
Join Date: Feb 2017
Location: Denmark
Posts: 1,098
|
Ross summarized it right, just a few notes. The way I think about it is that a reversible filter is used on the input. In principle there are no limits to the complexity / number of parameters the filter can have, but in practice there are some constraints:
- It shouldn't need too many extra resources on the decoding side (CPU/RAM).
- You need to account for the cost of also storing sector-specific parameters: if you've gained 10 bytes but need 10 bytes to describe the filter parameters, you've gotten nowhere (except slower).
- The parameter search space needs to have a size that can be explored in reasonable time (however you define that). If it becomes so large that you need to use heuristics, it's likely that a simpler filter that could be brute-forced would perform better with the same running time.
- The incremental gain from some of the more complex filters I tried is tiny: a handful or maybe a dozen extra bytes per track (of course you may really need them!).
- Depending on what you're trying to cram into the disk, it might be better to have a filter that performs slightly worse but is more predictable.
- For exploration I just measured the size of encoding a bunch of random bytes with the proposed filter applied, then tried it on a full disk. Some of the ones that performed well on random data didn't necessarily pan out when confronted with real data.

Attached is the latest version. |
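As an illustration of the "simpler filter you can brute-force" point, a minimal Python sketch: exhaustively try all 256 single-byte XOR masks and keep the cheapest. The cost function here is just a popcount stand-in; in practice you would plug in your real "encoded track size" measurement:

```python
def popcount_cost(data: bytes) -> int:
    # Stand-in cost: total set bits. Swap in the real measurement,
    # i.e. the encoded track size after running the RLL encoder.
    return sum(bin(b).count("1") for b in data)

def best_xor_mask(data: bytes, cost=popcount_cost):
    """Exhaustively try all 256 single-byte XOR masks, return the cheapest."""
    best = min(range(256), key=lambda m: cost(bytes(b ^ m for b in data)))
    return best, cost(bytes(b ^ best for b in data))
```

With only one parameter byte to store per sector, this kind of filter sits comfortably on the right side of the "gained bytes vs. parameter bytes" trade-off described above.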
06 May 2023, 22:23 | #28 |
Registered User
Join Date: Jun 2010
Location: PL?
Posts: 2,741
|
Just to point out that in the recently available "Advanced AMIGA Architecture (June 18, 1992)" document there is some information about the differences between Paula and Mary, some additional information about Paula's inner structure, and coverage of RLL 2,7 coding - perhaps it can be interesting.
|