Mono mempools are linked lists of memory chunks. When mono_mempool_alloc receives a request for memory, it allocates as follows:
1. If the request fits in the free space of the current (head) chunk, assign the memory from the beginning of that free space.
2. If the request is larger than the current chunk's free space, create a new chunk and add it to the head of the linked list.
A consequence of this strategy is that when case 2 is hit, the free space remaining in the previous head chunk becomes unusable: it is still unused, but because that chunk will never be the head of the list again, it will never be allocated from. The memory is reclaimed when the mempool itself is destroyed, but until then it is entirely wasted.
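As a rough illustration of the two cases, here is a minimal sketch of a chunked pool in C. The structure names and sizing rule are invented for illustration and do not match the actual Mono runtime code; in particular, the real allocator's separate path for large requests is not modeled, and error handling is omitted:

```c
#include <stdlib.h>

/* Illustrative sketch only -- not the Mono runtime structures. */
typedef struct Chunk {
    struct Chunk *next;
    char *pos;          /* next free byte in this chunk */
    char *end;          /* one past the last byte of this chunk */
    /* payload follows the header */
} Chunk;

typedef struct {
    Chunk *head;        /* allocations are only served from the head */
} MemPool;

static Chunk *chunk_new(size_t payload) {
    Chunk *c = malloc(sizeof(Chunk) + payload);
    c->next = NULL;
    c->pos = (char *)(c + 1);
    c->end = c->pos + payload;
    return c;
}

static void *pool_alloc(MemPool *p, size_t size) {
    Chunk *c = p->head;
    /* Case 1: the request fits in the head chunk's free space. */
    if (c && (size_t)(c->end - c->pos) >= size) {
        void *ret = c->pos;
        c->pos += size;
        return ret;
    }
    /* Case 2: a new chunk becomes the head.  Whatever free space
     * was left in the old head is now stranded: it stays unused
     * until the whole pool is destroyed. */
    Chunk *n = chunk_new(size > 2048 ? size : 2048);
    n->next = p->head;
    p->head = n;
    void *ret = n->pos;
    n->pos += size;
    return ret;
}
```

Note that pool_alloc never walks past the head chunk, which is what makes the strategy fast and is also why the stranded space can never be recovered.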
Consider the following innocuous set of operations:
- Create a mempool with initial pool size 2048
- Allocate an entity of size 2049
- Allocate an entity of size 1024
The first allocation does not fit in the 2048-byte initial chunk, so a new chunk of size 3072 is allocated, wasting all 2048 bytes of free space in the initial chunk. The second allocation does not fit either (only 3072 - 2049 = 1023 bytes remain), so yet another chunk is allocated, wasting the roughly 1k at the end of the previous chunk as well.
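Taking the chunk sizes from the text above (2048 for the initial chunk, 3072 for its replacement), the stranded bytes in this sequence can be tallied explicitly; the sizing rule that produces the 3072-byte chunk is not modeled here:

```c
#include <stddef.h>

/* Tally the bytes stranded by the example sequence.  The chunk
 * sizes come from the report text; this is arithmetic only, not a
 * model of the allocator itself. */
static size_t stranded_by_example(void) {
    size_t wasted = 0;
    size_t chunk1_free = 2048;          /* fresh initial chunk */

    /* Request 2049: too big for chunk 1, so a 3072-byte chunk
     * becomes the head and chunk 1's 2048 free bytes are stranded. */
    wasted += chunk1_free;
    size_t chunk2_free = 3072 - 2049;   /* 1023 bytes remain */

    /* Request 1024: 1023 < 1024, so another chunk is created and
     * chunk 2's 1023 remaining bytes are stranded as well. */
    wasted += chunk2_free;

    return wasted;                      /* 3071 bytes in total */
}
```

So three small allocations are enough to strand roughly 3 KB, about half of the total memory the pool has requested from the system at that point.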
This might not actually be a problem: we could view the memory wastage as an intentional tradeoff of memory efficiency for time efficiency. There is also a mitigating detail: allocations of >=4096 bytes take a separate path in which the "current chunk" does not change, so no single allocation can waste more than 4095 bytes this way. What is worrying, however, is that we are not tracking how much memory is lost like this. We should at least keep enough statistics to tell how much of a problem it is.
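One possible shape for such a statistic, sketched against a simplified chunk layout rather than the real Mono structures: since only the head chunk is ever allocated from, the free space summed over all non-head chunks is exactly the memory lost to head replacement, and it can be computed on demand without touching the allocation fast path.

```c
#include <stddef.h>

/* Illustrative layout only -- not the Mono runtime structures. */
typedef struct Chunk {
    struct Chunk *next;
    char *pos;          /* next free byte in this chunk */
    char *end;          /* one past the last byte of this chunk */
} Chunk;

/* Sum the free space in every non-head chunk.  That space can never
 * be allocated from again, so the sum is exactly the memory
 * stranded by the "new chunk becomes head" case. */
static size_t pool_wasted_bytes(const Chunk *head) {
    size_t wasted = 0;
    if (!head)
        return 0;
    for (const Chunk *c = head->next; c; c = c->next)
        wasted += (size_t)(c->end - c->pos);
    return wasted;
}
```

An alternative design would be a per-pool counter bumped each time the head is replaced; walking the list as above trades a little query-time cost for zero overhead on the allocation path.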