English Amiga Board


meynaf 31 August 2018 10:38

area fill max mem used
 
Hi,

This time it's a question about how much memory I need to allocate.
The program is actually just a small function that can be included anywhere; it does flood filling of shapes exactly like paint programs do.

So it maintains a list of current pixels to examine, like a path finder would do with cells. And now the problem is: how big does that list have to be?
I thought this was widely known and documented, but googling returned nothing useful.

That's not exactly a programming question; it's rather about the maths behind it.

I could use a safe bet and have an entry per pixel, but that's a lot of waste.
I want to know how many entries will be in the list at the maximum, for the most pathological case of flood fill.
Normally this can be computed from screen size...
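
(For illustration, here is a minimal C sketch of the kind of queue-based fill being described; the chunky bitmap layout, the names and the one-entry-per-pixel queue size are all assumptions made for the example, not meynaf's actual code.)

Code:

#include <stdlib.h>

/* Hypothetical 8-bit chunky bitmap: w*h pixels, one byte per pixel. */
typedef struct { unsigned char *pix; int w, h; } Bitmap;

/* FIFO flood fill. The question in this thread is how large 'queue'
 * really needs to be; this sketch just takes the safe (but wasteful)
 * one-entry-per-pixel upper bound. */
static void flood_fill(Bitmap *bm, int sx, int sy, unsigned char newc)
{
    unsigned char old = bm->pix[(long)sy * bm->w + sx];
    long cap, *queue, head = 0, tail = 0;

    if (old == newc) return;
    cap = (long)bm->w * bm->h;               /* safe upper bound */
    queue = malloc(cap * sizeof(long));
    if (!queue) return;

    bm->pix[(long)sy * bm->w + sx] = newc;
    queue[tail++] = (long)sy * bm->w + sx;

    while (head < tail) {
        long p = queue[head++];
        int x = (int)(p % bm->w), y = (int)(p / bm->w);
        static const int dx[4] = { 1, -1, 0, 0 };
        static const int dy[4] = { 0, 0, 1, -1 };
        int i;
        for (i = 0; i < 4; i++) {
            int nx = x + dx[i], ny = y + dy[i];
            long np;
            if (nx < 0 || nx >= bm->w || ny < 0 || ny >= bm->h) continue;
            np = (long)ny * bm->w + nx;
            if (bm->pix[np] != old) continue;
            bm->pix[np] = newc;      /* mark before queuing, so each  */
            queue[tail++] = np;      /* pixel is added at most once   */
        }
    }
    free(queue);
}

The open question of the thread is how far below 'cap' the high-water mark of tail - head actually stays in practice.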

Samurai_Crow 31 August 2018 11:39

An alternative approach would be to do an iterative job list. In the DazzleDraw paint program for the Apple II, it would trace left to right and keep track of the potential regions above and below the current one. Those regions would get pushed into a queue and processed sequentially, so that not all regions would need to be stored at once but could be processed just-in-time. Some languages with strong parallelism support, like Rust, allow all cores in a multicore setup to each process a region.

meynaf 31 August 2018 11:45

But there you still need to store something and you don't know its size.

Besides, just because I don't know some buffer size I have to use, I'll need to change the whole algorithm?

Samurai_Crow 31 August 2018 11:49

The advantage in my case is that an allocation pool will do most of the allocation work, so you don't need to know the size.

meynaf 31 August 2018 11:57

An allocation pool? Do I have an allocation pool in my program?

Anyway it's some kind of push_back, right? I could do the same, and reallocate if I discover it's too small...

All this time I've known there is some maths giving that value, but I just can't remember what it was.
Tests suggest it could be min(x,y)*2.
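
(A hedged C sketch of that reallocate-when-too-small idea; the Worklist type, the names and the starting size are invented for the example. Recording the peak fill level is also a cheap way of testing the min(x,y)*2 guess empirically.)

Code:

#include <stdlib.h>

/* Hypothetical grow-on-demand worklist: start from a small guess
 * (e.g. min(w,h)*2 entries) and double with realloc() whenever it
 * turns out to be too small. 'peak' records the high-water mark. */
typedef struct {
    long *entry;
    long  count, cap, peak;
} Worklist;

static int wl_push(Worklist *wl, long p)
{
    if (wl->count == wl->cap) {
        long *bigger = realloc(wl->entry, wl->cap * 2 * sizeof(long));
        if (!bigger) return 0;           /* caller must handle failure */
        wl->entry = bigger;
        wl->cap *= 2;
    }
    wl->entry[wl->count++] = p;
    if (wl->count > wl->peak) wl->peak = wl->count;
    return 1;
}

The consuming (FIFO removal) side is left out here; with a circular buffer like the one described later in the thread, growing in place is less trivial because the live entries may wrap around the end of the block.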

Samurai_Crow 31 August 2018 12:03

I was thinking of Kickstart 3.1 and its pool functions in Exec.library for a singly-linked-list queue implementation, but push_back() on a C++ vector would be a slower implementation.

meynaf 31 August 2018 12:10

That looks a bit like overkill to me. Implementing a whole memory allocator just because of an unknown - but existing - value...

roondar 31 August 2018 12:36

I thought I remembered something about this, so I googled around a bit and the consensus appears to be that flood fill algorithms are a variant of a depth-first search algorithm.

If this is indeed true (I didn't do exhaustive research here), then we can simply look at the time and space complexity of a depth-first search to find the worst case:

Quote:

Originally Posted by depth first WikiPedia
The time and space analysis of DFS differs according to its application area. In theoretical computer science, DFS is typically used to traverse an entire graph, and takes time Θ(|V| + |E|), linear in the size of the graph. In these applications it also uses space O(|V|) in the worst case to store the stack of vertices on the current search path as well as the set of already-visited vertices.

Which, if I'm reading it correctly, suggests the worst case scenario is indeed one entry per pixel.
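
(For a concrete sense of scale: on a 320 x 256 screen that theoretical bound is 320 * 256 = 81,920 entries; at, say, one 32-bit offset per entry that is already 320 KB for the worklist alone, which is presumably the kind of waste meynaf wants to avoid.)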

meynaf 31 August 2018 13:12

Quote:

Originally Posted by roondar (Post 1265620)
Which, if I'm reading it correctly, suggests the worst case scenario is indeed one entry per pixel.

In total there cannot be more entries added than one per pixel (unless we can return to old terrain, which may happen in a path finder in some cases, but not here).
However, while entries are created, others are consumed. So the real number is a lot less than one per pixel.

Note that maintaining an ordered list is easy in our case: a first-in/first-out queue is enough.

roondar 31 August 2018 14:02

That is a fair point; I was indeed only really considering a rather theoretical worst case here. As to how much can be cut, I don't rightly know. I do know there are fixed-memory approaches to flood filling, but they're quite a bit slower.

I do wonder why you want to know how much memory is needed in advance.
Is this to avoid dynamic allocation/freeing memory as the algorithm goes?

meynaf 31 August 2018 14:36

Quote:

Originally Posted by roondar (Post 1265643)
I do wonder why you want to know how much memory is needed in advance.
Is this to avoid dynamic allocation/freeing memory as the algorithm goes?

It's a lot simpler for me to allocate a block at startup and free it at cleanup, because it is used as a circular buffer (meaning the free area can be anywhere in the middle).
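
(As an illustration of that layout, here is a minimal fixed-size ring-buffer FIFO in C; the names and types are made up for the example, not taken from the actual program.)

Code:

/* One block of 'cap' slots allocated at startup and freed at cleanup.
 * head/tail wrap around, so the free space can sit anywhere in the
 * middle; one slot stays unused to tell "full" apart from "empty". */
typedef struct {
    long *entry;
    long  cap;          /* number of slots, fixed at startup  */
    long  head, tail;   /* remove at head, insert at tail     */
} Ring;

static int ring_put(Ring *r, long p)
{
    long next = (r->tail + 1) % r->cap;
    if (next == r->head) return 0;       /* full: cap was too small   */
    r->entry[r->tail] = p;
    r->tail = next;
    return 1;
}

static int ring_get(Ring *r, long *p)
{
    if (r->head == r->tail) return 0;    /* empty: the fill is done   */
    *p = r->entry[r->head];
    r->head = (r->head + 1) % r->cap;
    return 1;
}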

roondar 31 August 2018 17:28

Fair enough, which means I can't really help you.

I did find some Google links that suggest using a FIFO queue (as you do) is the most memory-efficient way that is still fast, but I didn't find anything that suggests an easy way to calculate the number of entries you need.

Leffmann 31 August 2018 18:30

I don't think there's any sure answer to this; it will vary not just from one algorithm to another, but with the exact implementation of it.

What I do is use a FILO queue to keep track of horizontal spans. I remove one item from the queue and fill in the whole span, then search across it to find all individual spans above and below, and add them to the queue. When the queue is empty, the whole area is filled in.

For most test images the queue rarely grows larger than a dozen items, even if I draw a lot of "jaggies", but I'm not sure how to stress-test it properly.
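
(A rough C sketch of the span approach described above, using a plain array as the FILO queue, i.e. a stack; the Bitmap layout, STACK_MAX and every name here are assumptions for the example, not Leffmann's implementation.)

Code:

typedef struct { unsigned char *pix; int w, h; } Bitmap;
typedef struct { int x, y; } Seed;     /* one seed pixel per span */

#define STACK_MAX 4096                 /* illustrative; sizing it is the open question */

static void span_fill(Bitmap *bm, int sx, int sy, unsigned char newc)
{
    unsigned char old = bm->pix[(long)sy * bm->w + sx];
    Seed stack[STACK_MAX];
    int top = 0;

    if (old == newc) return;
    stack[top].x = sx; stack[top].y = sy; top++;

    while (top > 0) {
        Seed s = stack[--top];
        unsigned char *row = bm->pix + (long)s.y * bm->w;
        int left, right, x, dy;

        if (row[s.x] != old) continue;           /* filled in the meantime */

        /* Extend the seed left and right into the full span, then fill it. */
        left = right = s.x;
        while (left > 0 && row[left - 1] == old) left--;
        while (right < bm->w - 1 && row[right + 1] == old) right++;
        for (x = left; x <= right; x++) row[x] = newc;

        /* Search across the span, one row above and one below, and push
         * one seed per run of old-coloured pixels found there. */
        for (dy = -1; dy <= 1; dy += 2) {
            int y = s.y + dy;
            unsigned char *adj;
            if (y < 0 || y >= bm->h) continue;
            adj = bm->pix + (long)y * bm->w;
            x = left;
            while (x <= right) {
                if (adj[x] == old) {
                    if (top == STACK_MAX) return;   /* out of room: a real
                                                       version must grow or
                                                       report failure */
                    stack[top].x = x; stack[top].y = y; top++;
                    while (x <= right && adj[x] == old) x++;  /* skip rest of run */
                } else {
                    x++;
                }
            }
        }
    }
}

One candidate stress case for this kind of fill is a comb-like shape filled from along its base: scanning across the base span finds every tooth as a separate run in the row above, so the stack briefly holds one entry per tooth.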

