English Amiga Board
Old 26 February 2021, 00:53   #1
carls
Registered User
 
Join Date: Apr 2012
Location: Cyberspace
Posts: 57
Freeing large linked lists

Greetings!
Complete newbie question, this one. I'm a rookie at C programming, and certainly at C programming on the Amiga.

* I've written a program that constructs an arbitrarily-sized linked list of char pointers.
* I use malloc() for each item.
* free():ing the items gets rather slow even for "smallish" lists (say, 500 items or so).

If I don't free the list, the used memory is freed anyway by the OS - and it's blazingly fast! Should I even bother with free()? Is there another way to free the memory?

Considering the nature of my program I could pick a different approach than a linked list, but I'm curious about this particular issue. Any advice would be highly appreciated!
carls is offline  
Old 26 February 2021, 01:28   #2
Minuous
Coder/webmaster/gamer
 
Minuous's Avatar
 
Join Date: Oct 2001
Location: Canberra/Australia
Posts: 2,630
Quote:
If I don't free the list, the used memory is freed anyway by the OS
This is not generally the case on AmigaOS, are you sure it is being freed?
Minuous is offline  
Old 26 February 2021, 01:34   #3
carls
Registered User
 
Join Date: Apr 2012
Location: Cyberspace
Posts: 57
Quote:
Originally Posted by Minuous View Post
This is not generally the case on AmigaOS, are you sure it is being freed?

I'm using a small program called MemLeak that counts the memory before and after execution. It reports no discrepancies, and neither does Avail.
carls is offline  
Old 26 February 2021, 01:45   #4
DMWCashy
Registered User
 
Join Date: Dec 2019
Location: Newcastle
Posts: 67
If all the objects are the same size, you could use an arena memory pool, and then you only have to free the pool at program close. If you are creating and freeing objects during runtime, you can keep a free list and pull a slot from there first before pulling more from the memory pool.

This also uses less memory, as each malloc'd block carries a header of 8 bytes, which soon adds up. Heavy malloc use is not recommended on any platform.
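Something like this rough, untested sketch (the node layout, names and fixed capacity are placeholders, since I don't know what your items actually look like - I'm assuming a small fixed-size struct):
Code:
#include <stdlib.h>

/* Placeholder node type - swap in whatever your real item looks like. */
typedef struct node {
    struct node *next;   /* doubles as the free-list link while the slot is unused */
    char *conts;
} NODE;

#define POOL_ITEMS 1000  /* arbitrary capacity for the sketch */

static NODE *pool;       /* one malloc holds every node */
static size_t pool_used; /* fresh slots handed out so far */
static NODE *free_list;  /* slots returned by node_free(), reused first */

int pool_init(void)
{
    pool = malloc(POOL_ITEMS * sizeof(NODE));
    pool_used = 0;
    free_list = NULL;
    return pool != NULL;
}

NODE *node_alloc(void)
{
    if (free_list) {                    /* reuse a freed slot if there is one */
        NODE *n = free_list;
        free_list = n->next;
        return n;
    }
    if (pool_used < POOL_ITEMS)         /* otherwise take a fresh slot from the pool */
        return &pool[pool_used++];
    return NULL;                        /* pool exhausted */
}

void node_free(NODE *n)
{
    n->next = free_list;                /* push the slot onto the free list */
    free_list = n;
}

void pool_shutdown(void)
{
    free(pool);                         /* a single free() at program close */
    pool = NULL;
}
The free list costs nothing extra, because the next pointer of an unused slot is reused as the free-list link.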

Last edited by DMWCashy; 26 February 2021 at 01:53.
DMWCashy is offline  
Old 26 February 2021, 02:01   #5
carls
Registered User
 
Join Date: Apr 2012
Location: Cyberspace
Posts: 57
Quote:
Originally Posted by DMWCashy View Post
If all the objects are the same size, you could use an arena memory pool

Alas, each item is of arbitrary size. I'm combining various text files together and each item represents a line from a file. Using a linked list felt like a reasonable approach, since I can easily move around and insert and remove lines at specific positions. However, the same end result can certainly be obtained without the list.

Quote:
This also uses less memory, as each malloc has a header block of 8 bytes
Thanks, I didn't know that!

Last edited by carls; 26 February 2021 at 02:02. Reason: Post quoted was edited
carls is offline  
Old 26 February 2021, 02:03   #6
DMWCashy
Registered User
 
Join Date: Dec 2019
Location: Newcastle
Posts: 67
What is in each item? If it represents a line, then next, prev and a char* to the line would be all you need.
DMWCashy is offline  
Old 26 February 2021, 02:10   #7
carls
Registered User
 
Join Date: Apr 2012
Location: Cyberspace
Posts: 57
Quote:
Originally Posted by DMWCashy View Post
What is in each item, if it is to a line then next, prev and char* to line would be all you need ?
It's singly linked:
Code:
typedef struct type_STRITEM STRITEM;
struct type_STRITEM {
  char *conts;
  STRITEM *next;
};
Each item is allocated with malloc(sizeof(STRITEM)) and conts with malloc(strlen(lnin) + 1).
carls is offline  
Old 26 February 2021, 02:13   #8
DMWCashy
Registered User
 
Join Date: Dec 2019
Location: Newcastle
Posts: 67
Quote:
Originally Posted by carls View Post
It's singly linked:
Code:
typedef struct type_STRITEM STRITEM;
struct type_STRITEM {
  char *conts;
  STRITEM *next;
};
Each item is allocated with malloc(sizeof(STRITEM)) and conts with malloc(strlen(lnin) + 1).
Why not load the whole file into one memory block, and then just point conts to the start of each line?

You can then do everything in two mallocs.
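To make that concrete, here is a rough, untested sketch of the two-malloc idea - one block for the file contents, one for all the list items. load_lines() and the error handling are just my own illustration, not anything from your program:
Code:
#include <stdio.h>
#include <stdlib.h>

typedef struct type_STRITEM STRITEM;   /* same struct as above */
struct type_STRITEM {
    char *conts;
    STRITEM *next;
};

/* Returns the head of the list, or NULL on failure.
 * *buf_out receives the file buffer so the caller can free() it later. */
STRITEM *load_lines(const char *path, char **buf_out)
{
    FILE *f;
    long size, nlines = 0, i;
    char *buf, *p;
    STRITEM *items;

    f = fopen(path, "rb");
    if (!f) return NULL;
    fseek(f, 0, SEEK_END);
    size = ftell(f);
    fseek(f, 0, SEEK_SET);

    buf = malloc(size + 1);                      /* malloc #1: the whole file */
    if (!buf) { fclose(f); return NULL; }
    fread(buf, 1, size, f);
    fclose(f);
    buf[size] = '\0';

    for (p = buf; *p; p++)                       /* count the lines */
        if (*p == '\n') nlines++;
    if (size > 0 && buf[size - 1] != '\n') nlines++;
    if (nlines == 0) { free(buf); return NULL; }

    items = malloc(nlines * sizeof(STRITEM));    /* malloc #2: all list items */
    if (!items) { free(buf); return NULL; }

    p = buf;
    for (i = 0; i < nlines; i++) {
        items[i].conts = p;                      /* point into the file buffer */
        items[i].next = (i + 1 < nlines) ? &items[i + 1] : NULL;
        while (*p && *p != '\n') p++;
        if (*p == '\n') *p++ = '\0';             /* terminate the line in place */
    }

    *buf_out = buf;
    return items;    /* clean up later with free(items); free(buf); */
}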
DMWCashy is offline  
Old 26 February 2021, 02:49   #9
carls
Registered User
 
Join Date: Apr 2012
Location: Cyberspace
Posts: 57
Quote:
Originally Posted by DMWCashy View Post
Why not load the whole file into one memory block, and then just point conts to the start of each line.

You can then do everything in 2 mallocs.

First, thanks for taking the time to help me. I truly appreciate it!
I don't think I completely follow you here, though - as I said, I'm a rookie. I'm not sure what you mean by "point conts to the start of each line". As in, using arithmetic (or strtok) to determine the conts pointer for subsequent items? I'm not sure how I could reduce it to two mallocs, though - won't each list item still have to be allocated separately?


There are other optimizations I could do that I haven't, yet. The program is a small preprocessor for include files, and I currently read all include files into lists as well, whereas a single block would suffice for each of them. I was mostly surprised to see that free() was the bottleneck.
carls is offline  
Old 26 February 2021, 03:16   #10
DMWCashy
Registered User
 
Join Date: Dec 2019
Location: Newcastle
Posts: 67
malloc keeps its own internal bookkeeping per allocation. When you malloc one of your linked list items, the item itself is 8 bytes:

char *conts; - 4 bytes
STRITEM *next; - 4 bytes

malloc also adds a header of 8 bytes to every allocation (a pointer plus the size of the block), so each list item uses twice the memory it actually needs.

You can malloc (items * sizeof(STRITEM)), then just add sizeof(STRITEM) to your pointer for each new entry.
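A quick, untested sketch of just that pointer-bumping step (the item count and names are only for illustration):
Code:
#include <stdlib.h>

typedef struct type_STRITEM STRITEM;   /* struct from post #7 */
struct type_STRITEM {
    char *conts;
    STRITEM *next;
};

int main(void)
{
    size_t count = 1000;               /* known or estimated number of lines */
    STRITEM *block, *next_free, *it;
    size_t used;

    block = malloc(count * sizeof(STRITEM));   /* one allocation for all items */
    if (!block) return EXIT_FAILURE;

    next_free = block;
    for (used = 0; used < count; used++) {
        it = next_free++;              /* "add sizeof(STRITEM)" per new entry */
        it->conts = NULL;
        it->next  = (used + 1 < count) ? next_free : NULL;
    }

    free(block);                       /* one free() releases the whole list */
    return EXIT_SUCCESS;
}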
DMWCashy is offline  
Old 26 February 2021, 03:18   #11
Docent
Registered User
 
Join Date: Mar 2019
Location: Poland
Posts: 59
You can use AllocMem from exec to allocate a block of memory and then use Allocate to carve items out of that block. At exit you do not need to free each allocated item; freeing the whole memory block via FreeMem is enough.
This should work on any KS version; on KS 3.0 or higher you can use exec's memory pools instead.
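For illustration, a hedged sketch along the lines of the documented Allocate() pattern - build a private MemHeader over your own AllocMem'd block and carve items out of it. POOLSIZE and the function names are just placeholders:
Code:
#include <exec/types.h>
#include <exec/memory.h>
#include <proto/exec.h>     /* adjust to your compiler's exec prototypes */

#define POOLSIZE 65536      /* arbitrary size for the sketch */

static struct MemHeader mh; /* private free list describing our block */
static APTR block;

BOOL pool_init(void)
{
    struct MemChunk *mc;

    block = AllocMem(POOLSIZE, MEMF_ANY);
    if (!block) return FALSE;

    mc = (struct MemChunk *)block;      /* the whole block starts as one free chunk */
    mc->mc_Next  = NULL;
    mc->mc_Bytes = POOLSIZE;

    mh.mh_First = mc;
    mh.mh_Lower = block;
    mh.mh_Upper = (APTR)((ULONG)block + POOLSIZE);
    mh.mh_Free  = POOLSIZE;
    return TRUE;
}

APTR item_alloc(ULONG size)
{
    return Allocate(&mh, size);         /* NULL when the block is used up */
}

void pool_free_all(void)
{
    FreeMem(block, POOLSIZE);           /* one call releases every item at once */
}
On KS 3.0+ the pool equivalent would be CreatePool()/AllocPooled(), with a single DeletePool() at exit freeing everything at once.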
Docent is offline  
Old 26 February 2021, 03:27   #12
carls
Registered User
 
Join Date: Apr 2012
Location: Cyberspace
Posts: 57
Quote:
You can malloc (items * sizeof(STRITEM)), then just add sizeof(STRITEM) to your pointer for each new entry.
Ahh, of course. Thanks!

Quote:
You can use AllocMem from exec to allocate a block of memory and then use Allocate to allocate items from this block. At exit you do not need to free each allocated items, freeing whole memory block via FreeMem is enough.
Thanks! I'll certainly look into this. Sounds like the Amiga(TM) way of implementing the above.
carls is offline  
Old 27 February 2021, 00:52   #13
bwldrbst
Registered User
 
bwldrbst's Avatar
 
Join Date: Nov 2015
Location: Perth, Australia
Posts: 73
Quote:
Originally Posted by carls View Post
First, thanks for taking time helping me. I truly appreciate it!
I don't think I completely follow you here though - as I said, I'm a rookie. I'm not sure what you mean by "point conts to the start of each line". As in, using arithmetic (or strtok) to determine the conts pointer for subsequent items? I'm not sure how I could reduce it to two mallocs, though - won't each list item still have to be allocated separately?
I think this means placing all the text in a single buffer and then having each STRITEM point to a location in that buffer. You might also want to include the length of the line in STRITEM.

Once that works you could look at creating a pool of STRITEMs with a single malloc (or AllocMem) instead of allocating each one separately. This will reduce memory fragmentation and make clean up quicker.

If you're making a lot of changes to the text then maybe a data structure like a Piece Table might be worth investigating.
bwldrbst is offline  
Old 27 February 2021, 05:03   #14
carls
Registered User
 
Join Date: Apr 2012
Location: Cyberspace
Posts: 57
Quote:
Originally Posted by bwldrbst View Post
If you're making a lot of changes to the text then maybe a data structure like a Piece Table might be worth investigating.

Thanks for the tip - I hadn't heard about piece tables before, though I think one would be overkill here. Honestly, I could do it without a list, just reading all the files into buffers and combining them when writing the result to disk.


I'm basically inserting a chunk of text at a given line in another chunk of text. I first collect all the files to be included, do a bit of sanity checking, load them, insert them at the specified points and write the whole thing out. The parsing, combining and writing is fast enough even on a 68000, and memory usage is of course file-size dependent, but for the intended use cases it's more than feasible on a 1 MB system.



Being new to C, I didn't expect freeing memory to be a possible bottleneck, but the intended purpose of the project is to learn something, which I've most certainly done already.
carls is offline  
Old 27 February 2021, 08:26   #15
bebbo
bye
 
Join Date: Jun 2016
Location: Some / Where
Posts: 680
Quote:
Originally Posted by carls View Post
Greetings!
Complete newbie question, this one. I'm a rookie at C programming and certainly C programming on Amiga.

* I've written a program that constructs an arbitrarily-sized linked list of char pointers.
* I use malloc() for each item.
* free():ing the items gets rather slow even for "smallish" lists (say, 500 items or so).

If I don't free the list, the used memory is freed anyway by the OS - and it's blazingly fast! Should I even bother with free()? Is there another way to free the memory?

Considering the nature of my program I could pick a different approach than a linked list, but I'm curious about this particular issue. Any advice would be highly appreciated!

a proper malloc/free implementation must free its allocated memory on program exit.


On exit, that implementation usually frees just the underlying large memory chunks. Consider a 16 kB chunk size and single allocations of 16 bytes: that is roughly 1000 allocations per chunk, so freeing per chunk is ~1000 times faster.


All free() invocations which only occur at the end of the program can be omitted.


== snip ==



There is still the spirit of 'I code everything myself, to get smaller, faster, whatever programs', but in fact malloc/free, auto-open/close libraries, or whatever else the link libraries provide is usually the better choice, since it's more tested (maybe not on the Amiga) and more portable.
bebbo is offline  
Old 27 February 2021, 11:02   #16
carls
Registered User
 
Join Date: Apr 2012
Location: Cyberspace
Posts: 57
Quote:
Originally Posted by bebbo View Post
a proper malloc/free implementation must free its allocated memory on program exit.

I'm using Dice C to compile natively, which seems proper based on the results mentioned in my first post.



Quote:
All free() invocations which only occur at the end of the program can be omitted.

In that case I might as well test the above assumption a bit more extensively, because my program ends by freeing my ReadArgs struct and the lists I've used. All intermediate mallocs are freed as I go, so to speak.
carls is offline  
Old 13 June 2021, 15:24   #17
pink^abyss
Registered User
 
Join Date: Aug 2018
Location: Untergrund/Germany
Posts: 408
Quote:
Originally Posted by carls View Post
* free():ing the items gets rather slow even for "smallish" lists (say, 500 items or so).

How much time does it take (on 68000)?
pink^abyss is offline  
Old 07 September 2021, 18:44   #18
carls
Registered User
 
Join Date: Apr 2012
Location: Cyberspace
Posts: 57
Quote:
Originally Posted by pink^abyss View Post
How much time does it take (on 68000)?
I don't remember exactly, but it was significant - something like 10 seconds for maybe 1000 items.

Anyway, I rewrote the whole thing to read the entire file into one large char buffer and then tokenize that; it's now almost instant instead.

PS.
Coda is a wonderful intro, one I keep returning to.
carls is offline  
Old 07 September 2021, 22:16   #19
sparhawk
Registered User
 
sparhawk's Avatar
 
Join Date: Sep 2019
Location: Essen/Germany
Age: 55
Posts: 463
1000 items doesn't sound like much, even for an Amiga.

However, if you have lots of malloc'd chunks, it might be faster to free them in reverse order. I once had a project with millions of items, and freeing them in the same order they were malloc'd was significantly slower than freeing them in reverse order - some minutes vs. seconds.
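If the list is built by appending (so list order matches allocation order), an untested sketch with the singly linked STRITEM list from earlier in the thread could look like this - reverse the list in place, then free while walking it, so no prev pointers are needed:
Code:
#include <stdlib.h>

typedef struct type_STRITEM STRITEM;   /* struct from post #7 */
struct type_STRITEM {
    char *conts;
    STRITEM *next;
};

void free_list_reversed(STRITEM *head)
{
    STRITEM *prev = NULL, *cur = head, *next;

    /* Reverse the list so the most recently allocated items come first. */
    while (cur) {
        next = cur->next;
        cur->next = prev;
        prev = cur;
        cur = next;
    }

    /* Now free in (roughly) reverse allocation order. */
    cur = prev;
    while (cur) {
        next = cur->next;
        free(cur->conts);
        free(cur);
        cur = next;
    }
}
Whether it actually helps depends on the malloc implementation in use.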
sparhawk is offline  
Old 07 September 2021, 22:49   #20
carls
Registered User
 
Join Date: Apr 2012
Location: Cyberspace
Posts: 57
Quote:
Originally Posted by sparhawk View Post
However, if you have lots of malloc'd chunks, it might be faster to free them in reverse order.

Interesting, I didn't think of trying that. That would require a doubly linked list I suppose?
carls is offline  
 

