Nice thread, getting the itch as always, a few thoughts.
Leonard: Floppy speed is a 'decent' reference, but it's also relative; I've apparently used 28936 bytes/s as the definition (I don't remember the calculation, but it includes MFM decoding). However, if performance is desired, it's not good to settle for floppy speed, because then you have little time left for 'action' (in the Tai-Pan/Phalanx sense). For such needs I would put 'good enough' at twice floppy speed at least, or roughly 58K/s.
Sometimes performance isn't a big deal (example: a onefiler on floppy), and then any decompression speed is fine; slower than floppy speed could even save buffers, if you risk it. So this is why I think floppy speed is a decent reference, but not necessarily the goal of a competitive decruncher.
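To make the 'twice floppy speed' budget concrete, here's a quick back-of-the-envelope sketch (the 28936 bytes/s figure is from above; the payload size is just a made-up example, not from any real release):

```python
# Assumed effective floppy transfer rate, bytes/s (incl. MFM decoding),
# per the figure quoted above.
FLOPPY_BPS = 28936
# 'Good enough' decruncher target: twice floppy speed (~58K/s).
TARGET_BPS = 2 * FLOPPY_BPS

# Hypothetical 48 KB part streaming in from floppy.
payload = 48 * 1024

t_floppy = payload / FLOPPY_BPS  # time the drive needs to deliver it
t_target = payload / TARGET_BPS  # CPU time a 2x decruncher spends on it
slack = t_floppy - t_target      # CPU time left over for 'action'

print(f"load: {t_floppy:.2f}s, decrunch: {t_target:.2f}s, slack: {slack:.2f}s")
```

The point being: at exactly floppy speed the decruncher eats the whole loading window, while at twice floppy speed roughly half of it is free for effects, music, and game logic.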
This was the reasoning behind creating Nibbler (new algorithm).
a/b: I like the initiative, but Shrinkler is at 0 bytes/s? If you could check the axis, feel free to place Nibbler somewhere. My chart has rather few data points and was measured before all these legacy algorithms were ported and explored. (Though old, they can reach great ratios if run exhaustively, and the same is true if some features are removed to improve decompression speed; so the fastest versions of them shouldn't be discounted, but run a million times to make the most of them with modern tech 35 years later.)
It would be better with more data points, categorized by type of content (sorry, I was too lazy to add them all at the time), because algorithm and settings can affect the ratio depending on it. I often see this 'ratio!' focus with no concern for the type of content. Nothing says you shouldn't use multiple crunchers in a single release, or across separate releases, but there's a certain desire to just 'make it smaller and never change my tools'. That hasn't been possible yet, and maybe there's a lesson there to keep exploring.
I have a burning desire to finish my improvements to Nibbler, but the stats-running and analysis is very time-consuming, and I must finish previous obligations first.