English Amiga Board


Coders > Coders. Asm / Hardware
Old 14 April 2019, 14:41   #21
ross
Sum, ergo Cogito

Join Date: Mar 2017
Location: Crossing the Rubicon
Age: 48
Posts: 1,582
I just checked what RNC_1.S does.

Before unpacking, it relocates the data.
A slow but working route.

Cheers!
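The "relocate, then unpack" idea can be sketched roughly like this (Python pseudocode of the concept, not RNC_1.S itself; `unpack` is a hypothetical callback standing in for the real depack loop):

```python
def relocate_then_unpack(buf, packed_len, unpacked_len, unpack):
    """Sketch of a depacker that relocates its input before unpacking.

    The packed data sits at the start of `buf`; it is first moved to
    the end of the buffer, so the real unpacker can then write forward
    from offset 0 without trampling unread input. The extra copy is
    what makes this route slow.
    """
    src = unpacked_len - packed_len                       # new home for the packed data
    buf[src:src + packed_len] = bytes(buf[:packed_len])   # relocate (the costly copy)
    unpack(buf, src, 0)                                   # hypothetical unpack(buf, src_off, dst_off)
```
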
Old 15 April 2019, 07:06   #22
modrobert
old bearded fool

Join Date: Jan 2010
Location: Bangkok
Age: 51
Posts: 510
In my humble opinion, the only positive part of Photon's comment was Jackie Chan; here's a reminder of the double standards of the 90s (sorry for sidetracking the thread slightly, maybe worth it, not sure).


Nice, fellow pirate using a SNES copier (first reaction).

Then this:
[ Show youtube player ]
Old 15 April 2019, 19:30   #23
Photon
Moderator

Join Date: Nov 2004
Location: Eksjö / Sweden
Posts: 4,655
Quote:
Originally Posted by mcgeezer View Post
There's no need for this response, you're a moderator but more importantly have some respect for a fellow coder, I suspect you should owe me an apology.
I'm sorry if I offended you. But how do I explain how wrong I think this is? This (or needing it) goes so strongly against best practice! (For example, if you can't fit a small offset, what do you do when you have to add 4 sprites to your game?)

With this cruncher you can do it, but now it consumes more memory for the stack, plus a few more bytes of code. A small offset means you could have saved 159K instead of (160K - stack - extra code*).

If space is extremely tight (e.g. supporting an A500 without fastram), this is a thing to consider, as it can bite you in the butt as you get close to release and disk and RAM are filling up.

You don't share why you need this, and I guess it would be rude to ask why or suggest alternatives.

But one case I can see is when loading many small files. If this is the case, almost all crunchers will pack better if several small files of the same data type (code/graphics/audio) can be crunched as one, simply because they have more data to work with and can find more patterns to compress.

* a few dozen bytes for the in-place handling
Old 15 April 2019, 20:12   #24
mcgeezer
Registered User

 
Join Date: Oct 2017
Location: Sunderland, England
Posts: 1,030
Quote:
Originally Posted by Photon View Post
I'm sorry if I offended you. [...]
I appreciate your apology, it goes a long way.

From my point of view, I don't know the intricacies of packers on the Amiga; I know very little about them. What triggered my message was the need to save memory space, not to increase speed.

All the packers I had looked at for in-place depacking stated that the packed file needed to be placed at the end of the memory allocation. In my original code I was having to use a 160KB memory allocation with the dual purpose of holding the background scenery and holding the largest compressed asset so that it could be unpacked from that location (my largest compressed scenery asset is 90KB, but my largest asset in the game is the 160KB music module).

So the loader was going like this...

Code:
Allocate ram for load buffer (holds the packed file) = 160KB

Allocate ram for tiles asset = 258KB 
Load packed Tiles asset (120KB) into the load buffer 
Unpack from load buffer to tiles allocation

Allocate ram for sprites asset = 200KB 
Load packed Sprites asset (100KB) into the load buffer 
Unpack from load buffer to Sprites allocation

Allocate ram for music module = 200KB 
Load packed module asset (160KB) into the load buffer 
Unpack from load buffer to module allocation

...

(Final asset to load which is background scenery)
Load packed scenery asset (90KB) into the screen buffer 
Unpack from screen buffer to load buffer (background scenery).
I'm sure you'll agree, it's a mess.


The reason I needed to unpack in place should now be clear; the loader can be made much simpler with the following process for every asset:

Code:
Allocate ram of unpacked asset size..
Load packed asset to start of allocation
Unpack in place..

....

(this for every asset).
So my loader is now much less complex and more streamlined.

Now, from what I can gather, your point is this:

If I have a memory allocation at $12340 for my asset, I can set a0 to $12340 and a1 to $12344 - and I will get a speed increase because the unpacker does not have to shift bytes around in place?

If that's the case then I can live with that level of complexity, as it is much easier to contain in the loader. If I have to manage packed and unpacked file sizes with additional small offsets, that becomes an overhead and adds complexity.

Hope this makes sense... if not I can explain more.

Geezer
Old 15 April 2019, 21:09   #25
ross
Sum, ergo Cogito

Join Date: Mar 2017
Location: Crossing the Rubicon
Age: 48
Posts: 1,582
Hi Graeme,
suppose you have an LZ compressor with forward bitcoding (the majority, because it is faster).
Suppose your unpacked data is $10000 bytes and the packed data is $6000.
Suppose the start address for the data is start; then you need to allocate $10000+safety at start.
Safety depends on the compressor, the compressed data and the bitcoding (but take $20 as a valid value).

Then you should load your packed data at (a1)=start+$10000+safety-$6000.
Set your unpack address to (a0)=start, unpack the data and voilà, done.
Fast, and no relocation to be made by the unpacker.

You can reverse the thinking for a backward unpacker.
If you build the decompressor to avoid the safety margin completely (special bitcoding, stack, witchcraft or whatever you like), then you understand that you don't need anything but to load at start and unpack from there.
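In concrete numbers, the arithmetic works out like this (a quick Python sketch; the function and parameter names are mine, purely for illustration):

```python
def inplace_load_plan(start, unpacked_size, packed_size, safety=0x20):
    """Compute where to load packed data so a forward unpacker can
    depack in place: output grows forward from `start`, input sits at
    the very end of the allocation, `safety` bytes past the output end.
    """
    alloc_size = unpacked_size + safety
    load_addr = start + alloc_size - packed_size   # (a1), packed data
    unpack_addr = start                            # (a0), unpacked data
    return alloc_size, load_addr, unpack_addr

# the example above: $10000 unpacked, $6000 packed, $20 safety
alloc, a1, a0 = inplace_load_plan(start=0x40000,
                                  unpacked_size=0x10000,
                                  packed_size=0x6000)
# alloc = 0x10020, a1 = 0x4A020, a0 = 0x40000
```
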

Cheers!
Old 15 April 2019, 21:21   #26
mcgeezer
Registered User

 
Join Date: Oct 2017
Location: Sunderland, England
Posts: 1,030
Quote:
Originally Posted by ross View Post
Hi Graeme,
suppose you get an LZ compressor with forward bitcoding [...]
Thanks for the explanation, Ross. I can't remember - does your loader return the number of bytes read?

This still becomes a key point though, as I will need to manage the packed sizes of each of the assets. Sure, it's still very straightforward to do, but if I pack something and forget to change the packed size then ughhh... time lost.

However, this is an optimisation I can make right at the end of the project without any issues.
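One way to avoid hand-maintaining packed sizes is to read them from the file itself. If memory serves, ProPack RNC files carry an 18-byte header with both lengths stored big-endian - treat the exact layout in this sketch as an assumption to verify against your own files:

```python
import struct

def rnc_lengths(data):
    """Parse the (assumed) RNC header: 'RNC' magic plus a method byte,
    followed by 32-bit big-endian unpacked and packed lengths."""
    if data[:3] != b"RNC":
        raise ValueError("not an RNC file")
    method = data[3]
    unpacked_len, packed_len = struct.unpack(">II", data[4:12])
    return method, unpacked_len, packed_len

# synthetic header, for illustration only
hdr = b"RNC\x01" + struct.pack(">II", 0x10000, 0x6000) + bytes(6)
print(rnc_lengths(hdr))  # (1, 65536, 24576)
```
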

Graeme
Old 15 April 2019, 21:26   #27
ross
Sum, ergo Cogito

Join Date: Mar 2017
Location: Crossing the Rubicon
Age: 48
Posts: 1,582
Quote:
Originally Posted by mcgeezer View Post
Thanks for the explanation Ross, I can't remember but does your loader return the number of bytes read?

Code:
;output:	Z=0->success, d1=file length (can be 0, file used as a flag)
;		Z=1->error, static.retry=0->read error, else file not found
Old 15 April 2019, 21:31   #28
Photon
Moderator

Join Date: Nov 2004
Location: Eksjö / Sweden
Posts: 4,655
Quote:
Originally Posted by mcgeezer
Now from what I can gather your point is..

If I have a memory allocation of $12340 for my asset, I can set a0 to $12340 and a1 to $12344 - and I will get a speed increase because the unpacker is not having to shift in place bytes around?
Yep - absolutely you don't have to have an entire separate memory area (!!) for the compressed data, or most games just couldn't load!

(Side note for completeness: only file archivers along the lines of .zip would need that - and that's for install, not load/run. Still, it could prevent install on setups that can actually run the installed game, so most/all of those check RAM and, if it's low, use a smaller buffer. Anyway, it's a completely different use case.)

So, offset.

(Just 4 bytes? - Depends. (Still possible!) Loading so the data starts a few bytes below bottomOfDest, or ends a few bytes above topOfDest, will prevent the dreaded extra copy loop of most crunchers (autodetect/don't care), plus avoid the use of a stack buffer, which is only needed for in-place.)

Propack (all versions) should certainly benefit from loading to an offset like most/all crunchers - or they couldn't really be used at all, as per the above.

So I mean, you don't have to switch packers; it's just that they all work like this, with no need for a completely separate memory area for the compressed data (and they should give you the offset, a.k.a. overhead or buffer size, if necessary).

I've used Propack, but only for testing - Galahad knows all there is to know about Propack, and I think he provided this in-place version only because he's used to fitting two games onto one disk and that kind of thing - another totally different use case (I think?).

As ross mentioned, there's so much to say about this topic. I think if you're coding a game (without the requirement of one-disking multiple disks, fitting in intros, etc.), then just keeping a rough RAM map of where things are, with some margin for changes, is simple and will work out fine for almost every use case, I think.
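To make the offset point concrete, here is a toy forward unpacker (an invented two-token format, nothing like Propack's real bitstream) showing that a single buffer suffices as long as the packed data is loaded far enough ahead of the write pointer:

```python
def unpack_in_place(buf, src, dst, out_len):
    """Toy forward LZ decoder working in a single buffer.
    Tokens: 0x00, n, <n bytes>  = copy n literal bytes
            0x01, off, n        = copy n bytes from dst-off (back-reference)
    Safe while src stays ahead of dst - that gap is the 'offset'/safety."""
    end = dst + out_len
    while dst < end:
        assert src > dst, "output caught up with unread input!"
        tok = buf[src]; src += 1
        if tok == 0x00:                           # literal run
            n = buf[src]; src += 1
            buf[dst:dst + n] = bytes(buf[src:src + n])
            src += n; dst += n
        else:                                     # back-reference
            off, n = buf[src], buf[src + 1]; src += 2
            for _ in range(n):                    # byte-by-byte so overlapping copies work
                buf[dst] = buf[dst - off]; dst += 1

# 'ABABABAB' packed as: literals 'AB', then back-ref off=2, n=6 (7 bytes)
packed = bytes([0x00, 2]) + b"AB" + bytes([0x01, 2, 6])
buf = bytearray(16)
buf[16 - len(packed):] = packed                   # load packed data at the end
unpack_in_place(buf, 16 - len(packed), 0, 8)
print(bytes(buf[:8]))  # b'ABABABAB'
```

The assert is exactly the condition the safety margin guarantees: if the packed data were loaded at the start of the buffer instead, the write pointer would overrun unread input.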
 

