The question is what chunk sizes are common for NetSurf, how many chunks are in play, and how large they are. If you have not implemented it yet, you might want to add a memory allocation debugging layer and collect statistics from it over time.
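As a rough illustration of such a debugging layer (a minimal sketch only, not NetSurf code; the ns_malloc()/ns_free()/ns_alloc_stats() names are made up here), you could route allocations through a thin wrapper that buckets the requested sizes, so you can see which chunk sizes dominate:

/* Hypothetical allocation-statistics wrapper: counts allocations
 * per power-of-two size bucket and tracks live allocations. */
#include <stdio.h>
#include <stdlib.h>

#define NBUCKETS 16  /* buckets: <=2, <=4, <=8, ... bytes */

static unsigned long alloc_count[NBUCKETS];
static unsigned long live_count;

static int bucket_of(size_t size)
{
    int b = 0;
    size_t limit = 2;
    while (size > limit && b < NBUCKETS - 1) {
        limit <<= 1;
        b++;
    }
    return b;
}

void *ns_malloc(size_t size)
{
    void *p = malloc(size);
    if (p != NULL) {
        alloc_count[bucket_of(size)]++;
        live_count++;
    }
    return p;
}

void ns_free(void *p)
{
    if (p != NULL) {
        free(p);
        live_count--;
    }
}

void ns_alloc_stats(void)
{
    int b;
    for (b = 0; b < NBUCKETS; b++)
        printf("<= %lu bytes: %lu allocations\n",
               (unsigned long)2 << b, alloc_count[b]);
    printf("live allocations: %lu\n", live_count);
}

Dumping ns_alloc_stats() periodically (or at exit) would show whether NetSurf mostly churns through small chunks or occasionally asks for big ones, which is exactly the kind of data you would want before choosing an allocation strategy.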
It may be worth investigating whether NetSurf's memory allocations could be handled by an application-specific, custom memory allocator that sits on top of what malloc()/alloca()/realloc()/free() provide and offers better scalability.
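One common shape for such an allocator (just an illustration under my own assumptions, nothing NetSurf-specific; pool_alloc()/pool_free() are invented names) is a fixed-size pool: grab larger slabs from malloc() and hand out small, equally-sized chunks from a free list, which cuts per-allocation overhead and fragmentation for frequently recycled objects:

/* Sketch of a fixed-size pool allocator layered on top of malloc(). */
#include <stdlib.h>

#define POOL_CHUNK  64   /* size of each object handed out */
#define POOL_SLAB  128   /* objects fetched from malloc() per slab */

struct pool_node { struct pool_node *next; };

static struct pool_node *pool_free_list;

void *pool_alloc(void)
{
    struct pool_node *n;

    if (pool_free_list == NULL) {
        /* refill the free list with one big slab from malloc() */
        char *slab = malloc((size_t)POOL_CHUNK * POOL_SLAB);
        int i;
        if (slab == NULL)
            return NULL;
        for (i = 0; i < POOL_SLAB; i++) {
            n = (struct pool_node *)(slab + (size_t)i * POOL_CHUNK);
            n->next = pool_free_list;
            pool_free_list = n;
        }
    }

    n = pool_free_list;
    pool_free_list = n->next;
    return n;
}

void pool_free(void *p)
{
    struct pool_node *n = p;
    n->next = pool_free_list;
    pool_free_list = n;
}

Whether something like this actually helps depends entirely on the allocation statistics gathered above; it only pays off if many objects share a small set of sizes.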
I would just take a shortcut and install TLSFmem:
http://dump.platon42.de/files/

Other than that, there is no real solution. Designing a good memory allocator is an art of its own, where one has to consider memory fragmentation, allocation performance and deallocation performance.
Yeah, I read on the previous page that NetSurf is crashing with TLSFmem, but this is very likely due to internal memory trashing somewhere in NetSurf... with the good old memory lists and standard memory pools, buffer under/overflows often go unnoticed, but with TLSF you are likely to crash right away.
Of course, a Wipeout session could reveal this, although it is going to be a painfully slow experience with such a complex application.