Hi Olaf, thanks for commenting. I increased the puddle size to 16K from the default 4K and it seems to have helped in my limited testing (no feedback yet from anybody else). I figure this reduces the number of puddles in the list that have to be searched, and it also allows larger allocations to be served from the pool.
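Assuming these are exec.library pools on AmigaOS (which the puddle terminology suggests), the larger puddle size is just the second argument to CreatePool(). A minimal sketch; the memory flag and threshold value here are illustrative only, not taken from the application:

#include <exec/memory.h>
#include <proto/exec.h>

void example(void)
{
    /* 16K puddles instead of the 4K default; allocations larger than
     * the threshold (here 8K) are given a puddle of their own. */
    APTR pool = CreatePool(MEMF_ANY, 16384, 8192);

    if (pool != NULL)
    {
        APTR mem = AllocPooled(pool, 512);   /* drawn from a 16K puddle */

        if (mem != NULL)
            FreePooled(pool, mem, 512);      /* the size must be passed back */

        DeletePool(pool);                    /* releases all remaining puddles */
    }
}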
It might help, but you will have to put up with the limitations of the list-based memory management system if you stick with the pools.
As I mentioned before, you might want to cast a wider net for alternative memory management solutions. No matter how much you tweak the puddle size, there will be side-effects just as awkward as the ones you are hoping to bring under control (fragmentation, the number of items in each list that has to be checked, etc.). Hence, I would suggest looking for a memory management system which can be tuned to the needs of the application.
I haven't. I think there are cache statistics in the log which might offer some clues, though. Beyond the cache data, everything else consists of structures which should all be very small.
Well, you can't get a good handle on the memory problem until you have enough data to decide how to proceed.
For example, you could collect information on the sizes of the allocations made and then group them. Let's say you have one group of 128 bytes or less, one of 256 bytes or less, and so on up to 4 KBytes or less, and then everything else. You could set up different pools from which only these specific allocation sizes would be drawn. For best effect, match the puddle sizes to the most frequently used allocation sizes.
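To make the data-gathering step concrete, here is a small sketch in C. The bucket boundaries and the log_allocation()/dump_allocation_stats() names are invented for illustration; the logging call would have to be wired into whatever allocation wrapper the application already uses:

#include <stddef.h>
#include <stdio.h>

enum { BUCKET_128, BUCKET_256, BUCKET_512, BUCKET_1K, BUCKET_4K, BUCKET_BIG, NUM_BUCKETS };

static unsigned long bucket_count[NUM_BUCKETS];
static unsigned long bucket_bytes[NUM_BUCKETS];

/* Map a request size to its bucket; the boundaries are just a starting point. */
static int size_to_bucket(size_t size)
{
    if (size <= 128)  return BUCKET_128;
    if (size <= 256)  return BUCKET_256;
    if (size <= 512)  return BUCKET_512;
    if (size <= 1024) return BUCKET_1K;
    if (size <= 4096) return BUCKET_4K;
    return BUCKET_BIG;
}

/* Call this from the allocation wrapper for every request made. */
static void log_allocation(size_t size)
{
    int b = size_to_bucket(size);
    bucket_count[b]++;
    bucket_bytes[b] += size;
}

/* Dump the histogram, e.g. at shutdown, to see which sizes dominate. */
static void dump_allocation_stats(void)
{
    static const char *label[NUM_BUCKETS] =
        { "<=128", "<=256", "<=512", "<=1K", "<=4K", ">4K" };
    int b;

    for (b = 0; b < NUM_BUCKETS; b++)
        printf("%-6s : %lu allocations, %lu bytes in total\n",
               label[b], bucket_count[b], bucket_bytes[b]);
}

Once the histogram shows where the bulk of the allocations fall, you can size the per-class pools and their puddles accordingly.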
Another idea would be to recycle memory allocations once they are freed. Again, you would group allocations by size (128 bytes or less, 256 bytes or less, etc.), and once an allocation is freed, you'd stick it into a list of chunks of the same size available for reuse. If an allocation is made which matches the chunk size of an entry in the list, you'd pick up the first entry and reuse it. That saves effort because you don't have to merge allocations back into bigger chunks upon freeing them, and because you don't have to search for allocations inside the fragmented puddles: you just pick up the first entry in the respective list, remove it from the list and then use it. Mind you, you will have to watch how much memory is tied up in these lists and prune them to avoid running out of memory over time.
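A minimal sketch of that recycling scheme, again in plain C. The recycle_alloc()/recycle_free() names, the class sizes and the pruning limit are made up for illustration, and a real version would draw fresh memory from your pools rather than malloc():

#include <stddef.h>
#include <stdlib.h>

enum { CLASS_128, CLASS_256, CLASS_512, CLASS_1K, NUM_CLASSES };

static const size_t class_size[NUM_CLASSES] = { 128, 256, 512, 1024 };

/* A freed chunk is linked into its list through its own first bytes. */
struct free_chunk { struct free_chunk *next; };

static struct free_chunk *free_list[NUM_CLASSES];
static size_t free_count[NUM_CLASSES];

#define MAX_FREE_PER_CLASS 64  /* prune limit so the lists cannot hoard memory */

/* Find the smallest class that fits, or -1 if the request is too large. */
static int size_to_class(size_t size)
{
    int c;

    for (c = 0; c < NUM_CLASSES; c++)
        if (size <= class_size[c])
            return c;

    return -1;
}

static void *recycle_alloc(size_t size)
{
    int c = size_to_class(size);

    if (c >= 0 && free_list[c] != NULL)
    {
        /* Reuse the first chunk on the matching list: no searching
         * through fragmented puddles, no merging, just unlink it. */
        struct free_chunk *chunk = free_list[c];
        free_list[c] = chunk->next;
        free_count[c]--;
        return chunk;
    }

    /* Nothing to recycle: allocate fresh memory, rounded up to the
     * class size so that it can be put on the list later. */
    return malloc(c >= 0 ? class_size[c] : size);
}

static void recycle_free(void *ptr, size_t size)
{
    int c = size_to_class(size);

    if (c >= 0 && free_count[c] < MAX_FREE_PER_CLASS)
    {
        /* Push the chunk onto its size class list for reuse. */
        struct free_chunk *chunk = ptr;
        chunk->next = free_list[c];
        free_list[c] = chunk;
        free_count[c]++;
    }
    else
    {
        free(ptr);  /* oversized request or list already full */
    }
}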