English Amiga Board


Old 16 February 2023, 20:38   #21
paraj
Registered User
 
paraj's Avatar
 
Join Date: Feb 2017
Location: Denmark
Posts: 1,098
Just for future readers: managed to find a ~30 year old unserviced Chinon FZ-354 drive (A1200 original) that still sort of works and one (1!) disk (noname originally HD disk of unknown provenance) to test it myself (the other 5 I found were unusable).

Both encodings are fine. RLL(1,4) syncword(s) [$5084] is sort-of OK. Most of the time it's recognized on the first scan, but sometimes (for bad tracks?) it takes multiple rotations (up to 10) before sync is found. The unreliability of getting a proper sync makes having multiple sectors (with such a sync word) a challenge.

Seems like it would be best to use index sync and not go out of spec, but maybe there are better ways.
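The bit-serial compare that recognizes such a sync word can be sketched in C roughly like this (a hypothetical software model in the spirit of Paula's WORDSYNC compare, not its actual logic; find_sync and the buffer layout are made up for illustration):

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical model of a sync-word scan over a raw bit stream: shift
 * bits in MSB-first and compare a 16-bit window against the sync word
 * (e.g. $5084). Returns the bit offset where the sync word starts, or
 * -1 if it is not found in the buffer. */
static long find_sync(const uint8_t *bits, size_t nbytes, uint16_t sync)
{
    uint16_t shift = 0;
    for (size_t i = 0; i < nbytes * 8; ++i) {
        int bit = (bits[i / 8] >> (7 - (i % 8))) & 1;
        shift = (uint16_t)((shift << 1) | bit);
        if (i >= 15 && shift == sync)
            return (long)(i - 15);  /* offset of the first sync bit */
    }
    return -1;
}
```

A real reader would restart this scan on each rotation (or wait for the index pulse, as suggested above) until the sync is seen.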
paraj is offline  
Old 21 March 2023, 14:36   #22
NorthWay
Registered User
 
Join Date: May 2013
Location: Grimstad / Norway
Posts: 839
So I was reading through the recent AAA document dump by (AFAIK) Dave Haynie and came across the disk controller chapter.
I must be missing something, because the description of RLL(2,7) showed the bit count in the encoded stream doubling for all 7 listed combinations. Contrary to that, the tables indicated you could store 20M on media that is 30M in raw size. I expect this to be a me problem.
NorthWay is offline  
Old 21 March 2023, 17:15   #23
ross
Defendit numerus
 
ross's Avatar
 
Join Date: Mar 2017
Location: Crossing the Rubicon
Age: 53
Posts: 4,468
Quote:
Originally Posted by NorthWay View Post
So I was reading through the recent AAA document dump by (AFAIK) Dave Haynie and came across the disk controller chapter.
I must be missing something, because the description of RLL(2,7) showed the bit count in the encoded stream doubling for all 7 listed combinations. Contrary to that, the tables indicated you could store 20M on media that is 30M in raw size. I expect this to be a me problem.
Beware: RLL(2,7) requires a clock frequency at least 50% higher than RLL(1,3).
Assume (for convenience) that you have 2us per cell in RLL(1,3), i.e. you can have a polarity change every 4us at minimum.
This means 1010... is 4/2us.
For RLL(2,7) you must necessarily fit 100100 in the same 'space': the clock cell would have to be 4/3us -> 4/3us + 50% = 2us.

Paula can't do it.
ross is offline  
Old 21 March 2023, 19:24   #24
paraj
Registered User
 
paraj's Avatar
 
Join Date: Feb 2017
Location: Denmark
Posts: 1,098
Experimented quite a bit (with great input from Ross) since the last update, and at least on my drive using GCR mode and doing RLL(2,7) does not work reliably. It misbehaves in the same way as RLL(1,4), so you're worse off.

But using RLL(1,4) syncword with otherwise RLL(1,3) encoded data works very well (I've gotten some fresh disks). There is the very occasional hiccup, but it's rare enough to be manageable.
Encoding 6792 input bytes per track (in 4 sectors) works without pushing any limits. Using 82 cylinders that yields 1113888 bytes / disk.
Pushing the limits, I've successfully tested 84 cylinders with 6824 bytes/track -> 1146432 bytes/disk, but that is probably going too far.
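The arithmetic behind those totals is just bytes/track x cylinders x 2 heads; a trivial check:

```c
/* Capacity check for the figures above: double-sided media, so one
 * track per head gives cylinders * 2 tracks in total. */
static long disk_bytes(long bytes_per_track, int cylinders)
{
    return bytes_per_track * cylinders * 2;  /* 2 heads */
}
```

disk_bytes(6792, 82) gives 1113888 and disk_bytes(6824, 84) gives 1146432, matching the totals above.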
paraj is offline  
Old 25 March 2023, 03:21   #25
NorthWay
Registered User
 
Join Date: May 2013
Location: Grimstad / Norway
Posts: 839
So I spent some time staring at the numbers/patterns, and I finally saw the point: different source data will generate different-sized encoded data depending on how you shuffle the pattern generation.
It seems to me that you can optimize for sequential 1 bits, 0 bits, or changes. Have any of you written a tool to gather statistics on occurrences for this? You can of course just make your encoding tool try out different pattern combos and report on the size... The idea of using a function to XOR with the data before encoding to make it more optimal is intriguing.
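A brute-force version of the XOR idea might look like this in C (a sketch only; encoded_len stands in for a real variable-length RLL encoder such as the one in make_eadf.c, and all names here are made up):

```c
#include <stdint.h>
#include <stddef.h>

/* Try every 1-byte XOR 'modifier' on a sector, measure the encoded
 * size of each filtered copy, and keep the modifier that gives the
 * shortest output. encoded_len is a placeholder for the real encoder;
 * tmp must be at least len bytes. */
typedef size_t (*encode_fn)(const uint8_t *data, size_t len);

static unsigned best_xor_modifier(const uint8_t *sector, size_t len,
                                  encode_fn encoded_len, uint8_t *tmp)
{
    unsigned best = 0;
    size_t best_len = (size_t)-1;
    for (unsigned m = 0; m < 256; ++m) {
        for (size_t i = 0; i < len; ++i)
            tmp[i] = sector[i] ^ (uint8_t)m;
        size_t n = encoded_len(tmp, len);
        if (n < best_len) {
            best_len = n;
            best = m;
        }
    }
    return best;  /* store this byte in the sector header for decoding */
}
```

The winning modifier byte has to travel with the sector (in its header) so the decoder can undo the XOR, which is the cost/benefit trade-off discussed in the following posts.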
NorthWay is offline  
Old 25 March 2023, 07:52   #26
ross
Defendit numerus
 
ross's Avatar
 
Join Date: Mar 2017
Location: Crossing the Rubicon
Age: 53
Posts: 4,468
Quote:
Originally Posted by NorthWay View Post
So I spent some time staring at the numbers/patterns, and I finally saw the point: different source data will generate different-sized encoded data depending on how you shuffle the pattern generation.
It seems to me that you can optimize for sequential 1 bits, 0 bits, or changes. Have any of you written a tool to gather statistics on occurrences for this? You can of course just make your encoding tool try out different pattern combos and report on the size... The idea of using a function to XOR with the data before encoding to make it more optimal is intriguing.
Yes, this idea has already been explored with a brute-force approach.

The 'modifier' is basically added to the plain header: to reach a local minimum it is sufficient to break the track into sectors
(which of course also has advantages regarding the loading speed).
ross is offline  
Old 25 March 2023, 10:12   #27
paraj
Registered User
 
paraj's Avatar
 
Join Date: Feb 2017
Location: Denmark
Posts: 1,098
Ross summarized it right, just a few notes. The way I think about it is that a reversible filter is used on the input. In principle there are no limits to the complexity / number of parameters the filter can have, but in practice there are some constraints:
- It shouldn't need too many extra resources on the decoding side (CPU/RAM)
- You need to account for the cost of also storing sector specific parameters: If you've gained 10 bytes, but need 10 bytes to describe the filter parameters you've gotten nowhere (except slower)
- The parameter search space needs to have a size that can be explored in reasonable time (however you define that). If it becomes so large that you need to use heuristics it's likely that a simpler filter that could be bruteforced would perform better with the same running time.
- The incremental gain from some of the more complex filters I tried is tiny: a handful or maybe a dozen extra bytes per track (of course you may really need them!).
- Depending on what you're trying to cram into the disk it might be better to have a filter that performs slightly worse, but is more predictable.
- For exploration I just measured the size of encoding a bunch of random bytes with the proposed filter applied, then tried it on a full disk. Some filters that performed well on random data didn't necessarily pan out when confronted with real data.
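The random-data measurement in the last point could be sketched like this (filter and encoded_len are placeholder types, not functions from make_eadf.c):

```c
#include <stdint.h>
#include <stddef.h>
#include <stdlib.h>

/* Apply a candidate filter to many random sectors and average the
 * resulting encoded size. Both callbacks are placeholders for the real
 * filter and RLL encoder. Returns -1.0 on allocation failure. */
typedef void   (*filter_fn)(uint8_t *data, size_t len);
typedef size_t (*size_fn)(const uint8_t *data, size_t len);

static double avg_encoded_size(filter_fn filter, size_fn encoded_len,
                               size_t sector_len, int trials, unsigned seed)
{
    uint8_t *buf = malloc(sector_len);
    double total = 0.0;
    if (!buf)
        return -1.0;
    srand(seed);  /* fixed seed -> repeatable comparisons between filters */
    for (int t = 0; t < trials; ++t) {
        for (size_t i = 0; i < sector_len; ++i)
            buf[i] = (uint8_t)rand();
        filter(buf, sector_len);
        total += (double)encoded_len(buf, sector_len);
    }
    free(buf);
    return total / trials;
}
```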

Attached is latest version.
Attached Files
File Type: c make_eadf.c (15.1 KB, 26 views)
paraj is offline  
Old 06 May 2023, 22:23   #28
pandy71
Registered User
 
Join Date: Jun 2010
Location: PL?
Posts: 2,741
Just to point out that the recently available "Advanced AMIGA Architecture (June 18, 1992)" document contains some information about the differences between Paula and Mary, some additional information about Paula's internal structure, and coverage of RLL(2,7) coding - perhaps it can be interesting.
pandy71 is offline  
 

