26 February 2021, 00:53 | #1 |
Registered User
Join Date: Apr 2012
Location: Cyberspace
Posts: 57
|
Freeing large linked lists
Greetings!
Complete newbie question, this one. I'm a rookie at C programming, and certainly at C programming on the Amiga. I've written a program that constructs an arbitrarily-sized linked list of char pointers, using malloc() for each item. free()ing the items gets rather slow even for smallish lists (say, 500 items or so). If I don't free the list, the OS reclaims the memory anyway on exit - and it's blazingly fast! Should I even bother with free()? Is there another way to free the memory? Given the nature of my program I could pick a different approach than a linked list, but I'm curious about this particular issue. Any advice would be highly appreciated! |
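A minimal sketch of the pattern being described - one malloc per node plus one per line, and a walk-and-free at the end. The names (node, push, free_list) are illustrative, not from the poster's actual program:

```c
#include <stdlib.h>
#include <string.h>

/* One list node per input line: a malloc'd copy of the line plus a link. */
typedef struct node {
    char *conts;            /* malloc'd copy of the line text */
    struct node *next;
} node;

/* Prepend a copy of line to the list; returns the new head. */
static node *push(node *head, const char *line)
{
    node *n = malloc(sizeof *n);
    if (!n) return head;
    n->conts = malloc(strlen(line) + 1);
    if (!n->conts) { free(n); return head; }
    strcpy(n->conts, line);
    n->next = head;
    return n;
}

/* Walk the list, making two free() calls per item - this per-item
 * traversal is what gets slow as the list grows. */
static void free_list(node *head)
{
    while (head) {
        node *next = head->next;
        free(head->conts);
        free(head);
        head = next;
    }
}

static size_t count(const node *head)
{
    size_t n = 0;
    for (; head; head = head->next) n++;
    return n;
}
```

With N items this makes 2N calls into the allocator on teardown, which is where the slowdown discussed below comes from.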
26 February 2021, 01:28 | #2 | |
Coder/webmaster/gamer
Join Date: Oct 2001
Location: Canberra/Australia
Posts: 2,630
|
26 February 2021, 01:34 | #3 |
Registered User
Join Date: Apr 2012
Location: Cyberspace
Posts: 57
|
|
26 February 2021, 01:45 | #4 |
Registered User
Join Date: Dec 2019
Location: Newcastle
Posts: 67
|
If all the objects are the same size, you could use an arena memory pool, and then you only have to free the pool at program close. If you are creating and freeing objects during runtime, you can keep a free list, and pull a slot from there before taking more from the memory pool.
This also uses less memory, as each malloc carries a header block of 8 bytes, which soon adds up. Heavy malloc use is not recommended on any platform. Last edited by DMWCashy; 26 February 2021 at 01:53. |
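A rough sketch of the pool-plus-free-list idea for fixed-size objects. The slot size and capacity are arbitrary example values, and the names are hypothetical:

```c
#include <stdlib.h>
#include <stddef.h>

/* Fixed-size slots carved from one big block; returned slots go onto a
 * free list and are reused before the pool's cursor advances. */
#define SLOT_SIZE   32      /* must hold the object, and at least a pointer */
#define POOL_SLOTS  1024

typedef struct pool {
    unsigned char *block;   /* single allocation backing all slots */
    size_t         used;    /* slots handed out so far */
    void          *free_list; /* chain of returned slots */
} pool;

static int pool_init(pool *p)
{
    p->block = malloc((size_t)SLOT_SIZE * POOL_SLOTS);
    p->used = 0;
    p->free_list = NULL;
    return p->block != NULL;
}

static void *pool_alloc(pool *p)
{
    if (p->free_list) {                 /* reuse a returned slot first */
        void *slot = p->free_list;
        p->free_list = *(void **)slot;  /* next link lives inside the slot */
        return slot;
    }
    if (p->used == POOL_SLOTS) return NULL;
    return p->block + (size_t)SLOT_SIZE * p->used++;
}

static void pool_free(pool *p, void *slot)
{
    *(void **)slot = p->free_list;      /* push onto the free list */
    p->free_list = slot;
}

/* One free() releases everything, however many slots were handed out. */
static void pool_destroy(pool *p)
{
    free(p->block);
    p->block = NULL;
}
```

Teardown is a single free() regardless of how many slots were in use, which is exactly why this pattern sidesteps the slow per-item free() the thread starts with.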
26 February 2021, 02:01 | #5 | ||
Registered User
Join Date: Apr 2012
Location: Cyberspace
Posts: 57
|
Quote:
Alas, each item is of arbitrary size. I'm combining various text files, and each item represents a line from a file. Using a linked list felt like a reasonable approach, since I can easily move around and insert and remove lines at specific positions. However, the same end result can certainly be obtained without the list.
Last edited by carls; 26 February 2021 at 02:02. Reason: Post quoted was edited |
||
26 February 2021, 02:03 | #6 |
Registered User
Join Date: Dec 2019
Location: Newcastle
Posts: 67
|
What is in each item? If it points to a line, then next, prev and a char* to the line would be all you need?
|
26 February 2021, 02:10 | #7 | |
Registered User
Join Date: Apr 2012
Location: Cyberspace
Posts: 57
|
Quote:
Code:
typedef struct type_STRITEM STRITEM;

struct type_STRITEM
{
    char *conts;
    STRITEM *next;
};
The item is allocated using malloc(sizeof(STRITEM)) and conts is allocated using malloc(strlen(lnin) + 1). |
|
26 February 2021, 02:13 | #8 | |
Registered User
Join Date: Dec 2019
Location: Newcastle
Posts: 67
|
Quote:
You can then do everything in 2 mallocs. |
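Reading between the lines (the quoted text this replies to mentioned pointing conts at the start of each line), the two-malloc idea might look like this: malloc #1 is one buffer holding the whole file, malloc #2 is one array of STRITEMs whose conts pointers aim into that buffer. The build_list name and details are a sketch, not the poster's code:

```c
#include <stdlib.h>
#include <string.h>

typedef struct type_STRITEM STRITEM;
struct type_STRITEM {
    char *conts;
    STRITEM *next;
};

/* Build a linked list over text already held in one buffer (malloc #1,
 * made by the caller). The node array is malloc #2. Newlines are turned
 * into NULs in place, so no per-line copies are needed. */
static STRITEM *build_list(char *text, size_t *count)
{
    size_t len = strlen(text);
    size_t lines = 0;
    for (size_t j = 0; j < len; j++)
        if (text[j] == '\n') lines++;
    if (len && text[len - 1] != '\n') lines++;  /* unterminated last line */

    STRITEM *items = malloc((lines ? lines : 1) * sizeof(STRITEM));
    if (!items) return NULL;

    size_t i = 0;
    char *p = text;
    while (*p && i < lines) {
        items[i].conts = p;
        char *nl = strchr(p, '\n');
        if (nl) { *nl = '\0'; p = nl + 1; }
        else p += strlen(p);
        items[i].next = NULL;
        if (i) items[i - 1].next = &items[i];
        i++;
    }
    *count = i;
    return items;
}
```

Cleanup is then free(items) plus one free() of the text buffer - two frees total, however many lines the file had.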
|
26 February 2021, 02:49 | #9 | |
Registered User
Join Date: Apr 2012
Location: Cyberspace
Posts: 57
|
Quote:
First, thanks for taking the time to help me. I truly appreciate it! I don't think I completely follow you here, though - as I said, I'm a rookie. I'm not sure what you mean by "point conts to the start of each line". As in, using pointer arithmetic (or strtok) to determine the conts pointer for subsequent items? I'm not sure how I could reduce it to two mallocs, though - won't each list item still have to be allocated separately? There are other optimizations I could do that I haven't done yet. The program is a small preprocessor for include files, and I currently read all include files into lists as well, whereas a single block would suffice for each of them. I was mostly surprised to see that free() was the bottleneck. |
|
26 February 2021, 03:16 | #10 |
Registered User
Join Date: Dec 2019
Location: Newcastle
Posts: 67
|
When you malloc your linked-list item, the item itself is only 8 bytes:
char *conts; - 4 bytes
STRITEM *next; - 4 bytes
Malloc also adds a header of 8 bytes, which is a pointer and the size of the block. So each linked item uses twice the memory it needs. You can malloc(items * sizeof(STRITEM)) once, then just add sizeof(STRITEM) to your pointer for each new entry. |
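The single-malloc scheme described above might look like this, using the STRITEM struct posted earlier. The pool type and function names are illustrative:

```c
#include <stdlib.h>
#include <stddef.h>

typedef struct type_STRITEM STRITEM;
struct type_STRITEM {
    char *conts;
    STRITEM *next;
};

/* One block holds every STRITEM; a cursor hands them out in order,
 * so there is one malloc header for the whole list instead of one
 * per item. */
typedef struct {
    STRITEM *block;
    size_t   next_slot;
    size_t   capacity;
} stritem_pool;

static int stritem_pool_init(stritem_pool *p, size_t items)
{
    p->block = malloc(items * sizeof(STRITEM));
    p->next_slot = 0;
    p->capacity = items;
    return p->block != NULL;
}

/* The "add sizeof(STRITEM) to your pointer" step: each call returns
 * the next slot in the block. */
static STRITEM *stritem_pool_next(stritem_pool *p)
{
    if (p->next_slot == p->capacity) return NULL;
    return &p->block[p->next_slot++];
}

/* The whole list disappears with a single free(). */
static void stritem_pool_destroy(stritem_pool *p)
{
    free(p->block);
    p->block = NULL;
}
```

The trade-off is that the item count (or at least an upper bound) must be known up front, or the pool must be grown in further large chunks.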
26 February 2021, 03:18 | #11 |
Registered User
Join Date: Mar 2019
Location: Poland
Posts: 59
|
You can use AllocMem() from exec to allocate a block of memory and then use Allocate() to allocate items from this block. At exit you do not need to free each allocated item; freeing the whole memory block via FreeMem() is enough.
This should work on any KS version; for KS 3.0 or higher you can use exec's memory pools. |
26 February 2021, 03:27 | #12 | ||
Registered User
Join Date: Apr 2012
Location: Cyberspace
Posts: 57
|
|
||
27 February 2021, 00:52 | #13 | |
Registered User
Join Date: Nov 2015
Location: Perth, Australia
Posts: 73
|
Quote:
Once that works, you could look at creating a pool of STRITEMs with a single malloc (or AllocMem) instead of allocating each one separately. This will reduce memory fragmentation and make cleanup quicker. If you're making a lot of changes to the text, then a data structure like a piece table might be worth investigating. |
|
27 February 2021, 05:03 | #14 | |
Registered User
Join Date: Apr 2012
Location: Cyberspace
Posts: 57
|
Quote:
Thanks for the tip - I hadn't heard about piece tables before, though I think one would be overkill here. Honestly, I could do it without a list, just reading all the files into buffers and combining them when writing the result to disk. I'm basically inserting a chunk of text at a given line in another chunk of text: I first collect all the files to be included, do a bit of sanity checking, load them, insert them at the specific points and write the whole thing. The parsing, combining and writing part is fast enough even on a 68000. Memory usage is of course file-size dependent, but considering the intended use cases it's more than feasible on a 1 meg system. Being new to C, I didn't expect freeing memory to be a possible bottleneck, but the intended purpose of the project is to learn something, which I've most certainly done already. |
|
27 February 2021, 08:26 | #15 | |
bye
Join Date: Jun 2016
Location: Some / Where
Posts: 680
|
Quote:
A proper malloc/free implementation must free its allocated memory on program exit, and that implementation usually frees the underlying large memory chunks. Consider a 16 kB chunk size and single allocations of 16 bytes - then it's ~1000 times faster at freeing. All free() invocations which only occur at the end of the program can be omitted. == snip == There is still the spirit of 'I code it all by myself, to get smaller, faster, whatever programs', but in fact malloc/free or auto-open/close libraries or whatever is provided by the link libraries is usually the better choice, since it's more tested (maybe not on the Amiga) and more portable. |
|
27 February 2021, 11:02 | #16 | ||
Registered User
Join Date: Apr 2012
Location: Cyberspace
Posts: 57
|
Quote:
I'm using Dice C to compile natively, which seems proper based on the results mentioned in my first post.
In that case I might as well test the above assumption a bit more extensively, because my program ends with freeing my ReadArgs struct and the lists I've used. All in-between mallocs are freed as I go, so to speak. |
||
13 June 2021, 15:24 | #17 |
Registered User
Join Date: Aug 2018
Location: Untergrund/Germany
Posts: 408
|
|
07 September 2021, 18:44 | #18 |
Registered User
Join Date: Apr 2012
Location: Cyberspace
Posts: 57
|
I don't remember exactly, but it was significant. Like, 10 seconds or something for maybe 1000 items.
Anyway, I rewrote the whole thing to read the entire file into one large char buffer and then tokenize that; it's now almost instant. PS. Coda is a wonderful intro, one I keep returning to. |
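The rewrite described - one big buffer, tokenized in place - might look roughly like this. Function names are illustrative and error handling is trimmed:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Read a whole file into one malloc'd, NUL-terminated buffer.
 * Returns NULL on failure; *len receives the byte count. */
static char *read_whole_file(const char *path, size_t *len)
{
    FILE *f = fopen(path, "rb");
    if (!f) return NULL;
    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    rewind(f);
    char *buf = malloc((size_t)size + 1);
    if (buf && fread(buf, 1, (size_t)size, f) != (size_t)size) {
        free(buf);
        buf = NULL;
    }
    fclose(f);
    if (buf) {
        buf[size] = '\0';
        *len = (size_t)size;
    }
    return buf;
}

/* Split the buffer into lines in place: newlines become NULs and out[]
 * receives a pointer to the start of each line. Returns the line count.
 * One buffer, no per-line allocations, a single free() at the end. */
static size_t split_lines(char *buf, char **out, size_t max_lines)
{
    size_t n = 0;
    char *p = buf;
    while (*p && n < max_lines) {
        out[n++] = p;
        char *nl = strchr(p, '\n');
        if (!nl) break;
        *nl = '\0';
        p = nl + 1;
    }
    return n;
}
```

Teardown is one free() of the buffer (plus one for the pointer array if it is heap-allocated), which matches the "almost instant" result reported above.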
07 September 2021, 22:16 | #19 |
Registered User
Join Date: Sep 2019
Location: Essen/Germany
Age: 55
Posts: 463
|
1000 items doesn't sound like much, even for an Amiga.
However, if you have lots of malloced chunks, it might be faster to free them in reverse order. I once had a project with millions of items, and when they were freed in the same order they were malloced, it was significantly slower than freeing in reverse order. It was some minutes vs. seconds. |
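One way to apply the reverse-order idea to a singly-linked list is to reverse it in place first and then free it front to back. This is a sketch; whether it actually helps depends on the allocator's free-list behaviour, as described above:

```c
#include <stdlib.h>

typedef struct node {
    struct node *next;
} node;

/* Reverse a singly-linked list in place; returns the new head. */
static node *reverse(node *head)
{
    node *prev = NULL;
    while (head) {
        node *next = head->next;
        head->next = prev;
        prev = head;
        head = next;
    }
    return prev;
}

/* Free the list tail-first: reverse it, then walk and free. With some
 * allocators, freeing in the reverse of allocation order lets adjacent
 * blocks coalesce cheaply and avoids long free-list searches. */
static void free_reversed(node *head)
{
    head = reverse(head);
    while (head) {
        node *next = head->next;
        free(head);
        head = next;
    }
}
```

If the nodes were appended to the tail as the file was read (allocation order = list order), this frees the most recently allocated node first.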
07 September 2021, 22:49 | #20 |
Registered User
Join Date: Apr 2012
Location: Cyberspace
Posts: 57
|
|