I suspect that this may not make much of a difference. The memory pools upon which the malloc()/calloc()/realloc()/free() functions in clib2 are built were intended to avoid fragmenting main memory. This is accomplished by having every allocation smaller than the preset puddle size drawn from a puddle that still has enough room left for it to fit. Fragmentation then happens inside that puddle.
Hi Olaf, thanks for commenting. I increased the puddle size from the default 4K to 16K and it seems to have helped in my limited testing (no feedback from anybody else yet). I figure this reduces the number of puddles in the list that need to be searched through, as well as allowing larger allocations to be served from the pool.
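(For illustration, here is a minimal sketch of the exec.library pools that clib2's allocator sits on top of, using the 16K puddle size mentioned above. The threshold value and error handling are simplified assumptions for this sketch, not NetSurf code.)

```c
#include <exec/memory.h>
#include <proto/exec.h>

int main(void)
{
    /* Puddles of 16K; requests above the 16K threshold bypass the
     * puddles and get their own memory from the system. */
    APTR pool = CreatePool(MEMF_ANY, 16 * 1024, 16 * 1024);
    if (pool == NULL)
        return 20;

    /* Small allocations are carved out of whichever puddle still has
     * enough room left, so fragmentation stays inside that puddle. */
    APTR chunk = AllocPooled(pool, 256);
    if (chunk != NULL)
        FreePooled(pool, chunk, 256); /* the caller must pass the size back */

    DeletePool(pool); /* frees every remaining puddle in one go */
    return 0;
}
```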
From the clib2 side I'm afraid that the library can only leverage what the operating system provides, and that is not well suited to applications which have to juggle a large number of allocated memory fragments.
Which is exactly what you get in a web browser, with lots of memory being allocated for one page, and then for the next, and some of the old memory being deallocated...
The question is what size of memory chunk is common for NetSurf, how many chunks are in play, and how large they get. If you have not yet implemented one, you might want to add a memory allocation debugging layer and collect statistics with it over time.
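(As a rough sketch of such a debugging layer: a thin wrapper around malloc()/free() that buckets request sizes into a power-of-two histogram. The ns_stat_* names are made up for this example and are not part of NetSurf or clib2.)

```c
#include <stdio.h>
#include <stdlib.h>

#define SIZE_BUCKETS 16  /* bucket i counts requests of roughly 2^i bytes */

static unsigned long alloc_histogram[SIZE_BUCKETS];
static unsigned long live_allocations;

/* Hypothetical drop-in replacement for malloc() that records statistics. */
void *ns_stat_malloc(size_t size)
{
    size_t bucket = 0, s = size;

    while (s > 1 && bucket < SIZE_BUCKETS - 1) {
        s >>= 1;
        bucket++;
    }
    alloc_histogram[bucket]++;
    live_allocations++;

    return malloc(size);
}

/* Matching free() wrapper; only tracks how many allocations are still live. */
void ns_stat_free(void *ptr)
{
    if (ptr != NULL)
        live_allocations--;
    free(ptr);
}

/* Dump the size histogram, e.g. at exit or on a debug keypress. */
void ns_stat_report(void)
{
    for (size_t i = 0; i < SIZE_BUCKETS; i++)
        printf("< %6lu bytes: %lu allocations\n",
               1UL << (i + 1), alloc_histogram[i]);
    printf("still live: %lu\n", live_allocations);
}
```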
I haven't. I think there are cache statistics in the log which might offer some clues, though. Beyond the cache data, everything else is structures which should all be very small.